WorldWideScience

Sample records for content-based image analysis

  1. iScreen: Image-Based High-Content RNAi Screening Analysis Tools.

    Science.gov (United States)

    Zhong, Rui; Dong, Xiaonan; Levine, Beth; Xie, Yang; Xiao, Guanghua

    2015-09-01

High-throughput RNA interference (RNAi) screening has opened up a path to investigating functional genomics on a genome-wide scale. However, such studies are often restricted to assays that have a single readout format. Recently, advanced imaging technologies have been coupled with high-throughput RNAi screening to develop high-content screening, in which one or more cell images, instead of a single readout, are generated from each well. This image-based high-content screening technology has led to genome-wide functional annotation in a wider spectrum of biological research studies, as well as in drug and target discovery, so that complex cellular phenotypes can be measured in a multiparametric format. Despite these advances, data analysis and visualization tools are still largely lacking for these types of experiments. Therefore, we developed iScreen (image-Based High-content RNAi Screening Analysis Tool), an R package for the statistical modeling and visualization of image-based high-content RNAi screening. Two case studies were used to demonstrate the capability and efficiency of the iScreen package. iScreen is available for download on CRAN (http://cran.cnr.berkeley.edu/web/packages/iScreen/index.html). The user manual is also available as a supplementary document. © 2014 Society for Laboratory Automation and Screening.

  2. Content-based Image Hiding Method for Secure Network Biometric Verification

    Directory of Open Access Journals (Sweden)

    Xiangjiu Che

    2011-08-01

For secure biometric verification, most existing methods embed biometric information directly into the cover image, but content correlation analysis between the biometric image and the cover image is often ignored. In this paper, we propose a novel biometric image hiding approach based on content correlation analysis to protect the image transmitted over the network. Using principal component analysis (PCA), the content correlation between the biometric image and the cover image is first analyzed. Then, based on a particle swarm optimization (PSO) algorithm, some regions of the cover image are selected to represent the biometric image, so that the cover image can carry part of the biometric image's content. As a result of the correlation analysis, the unrepresented part of the biometric image is embedded into the cover image by using the discrete wavelet transform (DWT). Combined with a human visual system (HVS) model, this approach makes the hiding result perceptually invisible. Extensive experimental results demonstrate that the proposed hiding approach is robust against some common frequency and geometric attacks; it also provides effective protection for secure biometric verification.
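
    A minimal sketch of the embedding step only, using a one-level Haar DWT via the pywt package: the PCA correlation analysis, PSO region selection and HVS weighting described in the abstract are omitted, and the embedding strength alpha is an illustrative assumption.

```python
import numpy as np
import pywt

def embed_dwt(cover, secret, alpha=0.05):
    """Hide `secret` (same shape as the cover's detail band) in the
    diagonal detail coefficients of a one-level Haar DWT of `cover`."""
    cA, (cH, cV, cD) = pywt.dwt2(cover.astype(float), 'haar')
    cD_marked = cD + alpha * secret            # additive embedding
    return pywt.idwt2((cA, (cH, cV, cD_marked)), 'haar')

def extract_dwt(stego, cover, alpha=0.05):
    """Recover the embedded data, given the original cover (non-blind)."""
    _, (_, _, cD_s) = pywt.dwt2(stego.astype(float), 'haar')
    _, (_, _, cD_c) = pywt.dwt2(cover.astype(float), 'haar')
    return (cD_s - cD_c) / alpha

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (128, 128)).astype(float)
secret = rng.integers(0, 2, (64, 64)).astype(float)    # binary payload
stego = embed_dwt(cover, secret)
recovered = extract_dwt(stego, cover)
print(np.allclose(recovered, secret, atol=1e-6))       # True
```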

  3. Content Based Retrieval System for Magnetic Resonance Images

    International Nuclear Information System (INIS)

    Trojachanets, Katarina

    2010-01-01

The amount of medical images is continuously increasing as a consequence of the constant growth and development of techniques for digital image acquisition. Manual annotation and description of each image is an impractical, expensive and time-consuming approach. Moreover, it is an imprecise and insufficient way of describing all the information stored in medical images. This induces the necessity for developing efficient image storage, annotation and retrieval systems. Content-based image retrieval (CBIR) emerges as an efficient approach for digital image retrieval from large databases. It includes two phases. In the first phase, the visual content of the image is analyzed and the feature extraction process is performed. An appropriate descriptor, namely a feature vector, is then associated with each image. These descriptors are used in the second phase, i.e. the retrieval process. With the aim of improving the efficiency and precision of content-based image retrieval systems, feature extraction and automatic image annotation techniques are the subject of continuous research and development. Including classification techniques in the retrieval process enables automatic image annotation in an existing CBIR system and contributes to more efficient and easier image organization in the system. Applying content-based retrieval in the field of magnetic resonance is a big challenge. Magnetic resonance imaging is an image-based diagnostic technique which is widely used in medical environments, and accordingly the number of magnetic resonance images is growing enormously. Magnetic resonance images provide plentiful medical information, high resolution and a specific nature. Thus, the capability of CBIR systems to retrieve images from large databases is of great importance for efficient analysis of this kind of images. The aim of this thesis is to propose a content-based retrieval system architecture for magnetic resonance images. To provide system efficiency, feature …

4. Content-Based Image Retrieval Based on Hadoop

    Directory of Open Access Journals (Sweden)

    DongSheng Yin

    2013-01-01

Generally, the time complexity of algorithms for content-based image retrieval is extremely high. In order to retrieve images from large-scale databases efficiently, a new retrieval method based on the Hadoop distributed framework is proposed. First, a database of image features is built using the Speeded Up Robust Features (SURF) algorithm and Locality-Sensitive Hashing (LSH); the search is then performed on the Hadoop platform in a specially designed parallel way. Considerable experimental results show that the method is able to retrieve images by content effectively on large-scale clusters and image sets.
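
    A single-machine sketch of the indexing idea, assuming random-hyperplane LSH over 64-dimensional SURF-like descriptors; the actual SURF extraction and the Hadoop map/reduce parallelization are out of scope here, and all parameters are illustrative.

```python
from collections import defaultdict
import numpy as np

rng = np.random.default_rng(42)
DIM, N_BITS = 64, 16                        # SURF descriptors are 64-D
planes = rng.standard_normal((N_BITS, DIM))

def signature(vec):
    # One bit per hyperplane: which side of the plane the vector lies on.
    return tuple((planes @ vec > 0).astype(int))

# Build the hash table from a toy "database" of descriptors.
db = rng.standard_normal((10_000, DIM))
buckets = defaultdict(list)
for idx, vec in enumerate(db):
    buckets[signature(vec)].append(idx)

# Query: scan only the matching bucket instead of all 10,000 vectors.
query = db[123] + 0.01 * rng.standard_normal(DIM)   # near-duplicate
candidates = buckets[signature(query)]
best = min(candidates, key=lambda i: np.linalg.norm(db[i] - query))
print(best)    # 123, with high probability
```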

  5. High-content analysis of single cells directly assembled on CMOS sensor based on color imaging.

    Science.gov (United States)

    Tanaka, Tsuyoshi; Saeki, Tatsuya; Sunaga, Yoshihiko; Matsunaga, Tadashi

    2010-12-15

    A complementary metal oxide semiconductor (CMOS) image sensor was applied to high-content analysis of single cells which were assembled closely or directly onto the CMOS sensor surface. The direct assembling of cell groups on CMOS sensor surface allows large-field (6.66 mm×5.32 mm in entire active area of CMOS sensor) imaging within a second. Trypan blue-stained and non-stained cells in the same field area on the CMOS sensor were successfully distinguished as white- and blue-colored images under white LED light irradiation. Furthermore, the chemiluminescent signals of each cell were successfully visualized as blue-colored images on CMOS sensor only when HeLa cells were placed directly on the micro-lens array of the CMOS sensor. Our proposed approach will be a promising technique for real-time and high-content analysis of single cells in a large-field area based on color imaging. Copyright © 2010 Elsevier B.V. All rights reserved.

  6. Content-Based Image Retrieval Based on Electromagnetism-Like Mechanism

    Directory of Open Access Journals (Sweden)

    Hamid A. Jalab

    2013-01-01

Recently, many researchers in the field of automatic content-based image retrieval have devoted a remarkable amount of research to methods for retrieving the images most relevant to a query image. This paper presents a novel algorithm for increasing the precision of content-based image retrieval based on an electromagnetism-like optimization technique. Electromagnetism optimization is a nature-inspired technique that follows a collective attraction-repulsion mechanism, considering each image as an electrical charge. The algorithm is composed of two phases: fitness function measurement and the electromagnetism optimization technique. It is implemented on a database of 8,000 images spread across 80 classes, with 100 images in each class. Eight thousand queries are fired on the database, and the overall average precision is computed. Experimental results of the proposed approach show a significant improvement in retrieval performance with regard to precision.
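
    A toy sketch of an electromagnetism-like mechanism in the spirit the abstract describes: candidates get "charges" from their fitness, are attracted to better solutions and repelled from worse ones, and the incumbent best point stays fixed. The fitness function, constants and update rule are illustrative assumptions, not the paper's retrieval formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(p):
    # Toy objective: closer to (0.5, 0.5) is better.
    return -np.sum((p - 0.5) ** 2, axis=-1)

pts = rng.uniform(0, 1, (20, 2))
for _ in range(50):
    f = fitness(pts)
    best = int(np.argmax(f))
    # Charge grows with relative fitness; all charges lie in (0, 1].
    q = np.exp((f - f.max()) / (abs(f.max() - f.min()) + 1e-12))
    new_pts = pts.copy()
    for i in range(len(pts)):
        if i == best:                   # keep the best point fixed
            continue
        d = pts - pts[i]
        dist = np.linalg.norm(d, axis=1) + 1e-12
        sign = np.where(f > f[i], 1.0, -1.0)   # attract to better, repel worse
        w = sign * q * q[i] / dist ** 2
        w[i] = 0.0
        force = (d / dist[:, None] * w[:, None]).sum(axis=0)
        step = force / (np.linalg.norm(force) + 1e-12)
        new_pts[i] = np.clip(pts[i] + 0.05 * step, 0, 1)
    pts = new_pts

print(pts.mean(axis=0).round(2))   # the population contracts toward (0.5, 0.5)
```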

  7. Ontology of gaps in content-based image retrieval.

    Science.gov (United States)

    Deserno, Thomas M; Antani, Sameer; Long, Rodney

    2009-04-01

Content-based image retrieval (CBIR) is a promising technology to enrich the core functionality of picture archiving and communication systems (PACS). CBIR has a potential for making a strong impact in diagnostics, research, and education. Research as reported in the scientific literature, however, has not made significant inroads in the form of medical CBIR applications incorporated into routine clinical medicine or medical research. The cause is often attributed (without supporting analysis) to the inability of these applications to overcome the "semantic gap." The semantic gap divides the high-level scene understanding and interpretation available with human cognitive capabilities from the low-level pixel analysis of computers, based on mathematical processing and artificial intelligence methods. In this paper, we suggest a more systematic and comprehensive view of the concept of "gaps" in medical CBIR research. In particular, we define an ontology of 14 gaps that addresses the image content and features, as well as system performance and usability. In addition to these gaps, we identify seven system characteristics that impact CBIR applicability and performance. The framework we have created can be used a posteriori to compare medical CBIR systems and approaches for specific biomedical image domains and goals, and a priori during the design phase of a medical CBIR application, as the systematic analysis of gaps provides detailed insight into system comparison and helps to direct future research.

  8. Supervised learning of tools for content-based search of image databases

    Science.gov (United States)

    Delanoy, Richard L.

    1996-03-01

A computer environment, called the Toolkit for Image Mining (TIM), is being developed with the goal of enabling users with diverse interests and varied computer skills to create search tools for content-based image retrieval and other pattern matching tasks. Search tools are generated using a simple paradigm of supervised learning that is based on the user pointing at mistakes of classification made by the current search tool. As mistakes are identified, a learning algorithm uses the identified mistakes to build up a model of the user's intentions, construct a new search tool, apply the search tool to a test image, display the match results as feedback to the user, and accept new inputs from the user. Search tools are constructed in the form of functional templates, which are generalized matched filters capable of knowledge-based image processing. The ability of this system to learn the user's intentions from experience contrasts with other existing approaches to content-based image retrieval that base searches on the characteristics of a single input example or on a predefined and semantically-constrained textual query. Currently, TIM is capable of learning spectral and textural patterns, but should be adaptable to the learning of shapes, as well. Possible applications of TIM include not only content-based image retrieval, but also quantitative image analysis, the generation of metadata for annotating images, data prioritization or data reduction in bandwidth-limited situations, and the construction of components for larger, more complex computer vision algorithms.

  9. Content Based Medical Image Retrieval for Histopathological, CT and MRI Images

    Directory of Open Access Journals (Sweden)

    Swarnambiga AYYACHAMY

    2013-09-01

A content-based approach is followed for medical images. The purpose of this study is to assess the stability of these methods for medical image retrieval. The methods used in color-based retrieval for histopathological images are the color co-occurrence matrix (CCM) and a histogram with meta features. For texture-based retrieval, the gray level co-occurrence matrix (GLCM) and local binary patterns (LBP) were used. For shape-based retrieval, Canny edge detection and Otsu's method with a multivariable threshold were used. Texture- and shape-based retrieval were implemented using MRI (magnetic resonance) images. The most remarkable characteristic of the article is its content-based approach for each medical imaging modality. Our efforts were focused on the initial visual search. In our experiments, the histogram with meta features in color-based retrieval for histopathological images shows a precision of 60% and recall of 30%, whereas GLCM in texture-based retrieval for MRI images shows a precision of 70% and recall of 20%. Shape-based retrieval for MRI images shows a precision of 50% and recall of 25%. The retrieval results show that this simple approach is successful.
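
    A short sketch of the two texture descriptors named above, using scikit-image (version 0.19 or later for the graycomatrix spelling); the distances, angles and LBP parameters are illustrative choices rather than the study's settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (128, 128), dtype=np.uint8)  # stand-in for an MRI slice

# GLCM: co-occurrence of gray levels at distance 1, four orientations.
glcm = graycomatrix(img, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)
glcm_feats = np.hstack([graycoprops(glcm, p).ravel()
                        for p in ('contrast', 'homogeneity', 'energy', 'correlation')])

# LBP: uniform patterns, 8 neighbors at radius 1, summarized as a histogram.
lbp = local_binary_pattern(img, P=8, R=1, method='uniform')
lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

feature_vector = np.hstack([glcm_feats, lbp_hist])
print(feature_vector.shape)    # 16 GLCM properties + 10 LBP bins -> (26,)
```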

  10. Localization-based super-resolution imaging meets high-content screening.

    Science.gov (United States)

    Beghin, Anne; Kechkar, Adel; Butler, Corey; Levet, Florian; Cabillic, Marine; Rossier, Olivier; Giannone, Gregory; Galland, Rémi; Choquet, Daniel; Sibarita, Jean-Baptiste

    2017-12-01

    Single-molecule localization microscopy techniques have proven to be essential tools for quantitatively monitoring biological processes at unprecedented spatial resolution. However, these techniques are very low throughput and are not yet compatible with fully automated, multiparametric cellular assays. This shortcoming is primarily due to the huge amount of data generated during imaging and the lack of software for automation and dedicated data mining. We describe an automated quantitative single-molecule-based super-resolution methodology that operates in standard multiwell plates and uses analysis based on high-content screening and data-mining software. The workflow is compatible with fixed- and live-cell imaging and allows extraction of quantitative data like fluorophore photophysics, protein clustering or dynamic behavior of biomolecules. We demonstrate that the method is compatible with high-content screening using 3D dSTORM and DNA-PAINT based super-resolution microscopy as well as single-particle tracking.

  11. Detection of Isoflavones Content in Soybean Based on Hyperspectral Imaging Technology

    Directory of Open Access Journals (Sweden)

    Tan Kezhu

    2014-04-01

Soybean isoflavones have many important biological activities, great potential for exploitation, and significant practical applications. Conventional methods for the determination of soybean isoflavones have long detection periods, use too many reagents, and cannot be applied on-line; we therefore propose hyperspectral imaging technology to detect the isoflavone content of soybean. Based on 40 varieties of soybean produced in Heilongjiang province, we obtained the spectral reflectance data of soybean samples from hyperspectral images collected by the hyperspectral imaging system, and applied the high performance liquid chromatography (HPLC) method to determine the true isoflavone values of the selected samples. The feature wavelengths for isoflavone content prediction (1516, 1572, 1691, 1716 and 1760 nm) were selected based on correlation analysis. The prediction model was established using a BP neural network in order to realize the prediction of soybean isoflavone content. The experimental results show that the ANN model could predict the isoflavone content of soybean samples with a correlation coefficient of 0.9679; the average relative error is 0.8032% and the mean square error (MSE) is 0.110328, which indicates the effectiveness of the proposed method and provides a theoretical basis for the application of hyperspectral imaging in non-destructive detection of the interior quality of soybean.
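
    A minimal sketch of the prediction step, with scikit-learn's MLPRegressor standing in for the paper's back-propagation network; the reflectance data and the linear ground-truth rule below are synthetic, and the hidden-layer size is an arbitrary choice.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
wavelengths = [1516, 1572, 1691, 1716, 1760]           # nm, from the paper
X = rng.uniform(0.1, 0.9, (200, len(wavelengths)))     # fake reflectance values
true_coeffs = np.array([2.0, -1.0, 0.5, 1.5, -0.5])    # synthetic ground truth
y = X @ true_coeffs + 0.05 * rng.standard_normal(200)  # "isoflavone content"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)
print(f"R^2 on held-out samples: {model.score(X_te, y_te):.3f}")
```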

  12. Application of content-based image compression to telepathology

    Science.gov (United States)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.

  13. Ontology of Gaps in Content-Based Image Retrieval

    OpenAIRE

    Deserno, Thomas M.; Antani, Sameer; Long, Rodney

    2008-01-01

    Content-based image retrieval (CBIR) is a promising technology to enrich the core functionality of picture archiving and communication systems (PACS). CBIR has a potential for making a strong impact in diagnostics, research, and education. Research as reported in the scientific literature, however, has not made significant inroads as medical CBIR applications incorporated into routine clinical medicine or medical research. The cause is often attributed (without supporting analysis) to the ina...

  14. General Staining and Segmentation Procedures for High Content Imaging and Analysis.

    Science.gov (United States)

    Chambers, Kevin M; Mandavilli, Bhaskar S; Dolman, Nick J; Janes, Michael S

    2018-01-01

Automated quantitative fluorescence microscopy, also known as high content imaging (HCI), is a rapidly growing analytical approach in cell biology. Because automated image analysis relies heavily on robust demarcation of cells and subcellular regions, reliable methods for labeling cells are a critical component of the HCI workflow. Labeling of cells for image segmentation is typically performed with fluorescent probes that bind DNA for nuclear-based cell demarcation or with those which react with proteins for image analysis based on whole cell staining. These reagents, along with instrument and software settings, play an important role in the successful segmentation of cells in a population for automated and quantitative image analysis. In this chapter, we describe standard procedures for labeling and image segmentation in both live and fixed cell samples. The chapter will also provide troubleshooting guidelines for some of the common problems associated with these aspects of HCI.
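
    A compact sketch of a typical nuclear-stain segmentation chain of the kind this chapter standardizes: Otsu thresholding of a DNA-stain channel followed by a distance-transform watershed to split touching nuclei. The synthetic two-nuclei image and all parameters are illustrative, not the chapter's protocol.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Synthesize a blurry image of two slightly overlapping "nuclei".
img = np.zeros((100, 100))
yy, xx = np.mgrid[0:100, 0:100]
for cy, cx in [(40, 40), (60, 62)]:
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < 15 ** 2] = 1.0
img = gaussian(img, sigma=2)

binary = img > threshold_otsu(img)                  # 1) global threshold
distance = ndi.distance_transform_edt(binary)       # 2) distance transform
peaks = peak_local_max(distance, min_distance=10,   # 3) one marker per nucleus
                       labels=ndi.label(binary)[0])
markers = np.zeros(distance.shape, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
labels = watershed(-distance, markers, mask=binary) # 4) split touching nuclei
print(int(labels.max()))                            # expected: 2 nuclei
```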

  15. Content-based image retrieval with ontological ranking

    Science.gov (United States)

    Tsai, Shen-Fu; Tsai, Min-Hsuan; Huang, Thomas S.

    2010-02-01

Images are a much more powerful medium of expression than text, as the adage says: "One picture is worth a thousand words." Compared with text, which consists of an array of words, an image has more degrees of freedom and therefore a more complicated structure. However, the less constrained structure of images presents researchers in the computer vision community with the tough task of teaching machines to understand and organize images, especially when a limited number of learning examples and little background knowledge are given. The advance of internet and web technology in the past decade has changed the way humans gain knowledge. People can exchange knowledge with others by discussing and contributing information on the web. As a result, the web pages of the internet have become a living and growing source of information. One is therefore tempted to wonder whether machines can learn from this web knowledge base as well. Indeed, it is possible to make computers learn from the internet and provide humans with more meaningful knowledge. In this work, we explore this novel possibility for image understanding applied to semantic image search. We exploit web resources to obtain links from images to keywords and a semantic ontology constituting humans' general knowledge. The former maps visual content to related text, in contrast to the traditional way of associating images with surrounding text; the latter provides relations between concepts, so that machines can understand to what extent and in what sense an image is close to the image search query. With the aid of these two tools, the resulting image search system is content-based and, moreover, organized. The returned images are ranked and organized such that semantically similar images are grouped together and given a rank based on their semantic closeness to the input query. The novelty of the system is twofold: first, images are retrieved based not only on text cues but on their actual contents as well; second, the grouping …

  16. Retinal image quality assessment based on image clarity and content

    Science.gov (United States)

    Abdel-Hamid, Lamiaa; El-Rafei, Ahmed; El-Ramly, Salwa; Michelson, Georg; Hornegger, Joachim

    2016-09-01

    Retinal image quality assessment (RIQA) is an essential step in automated screening systems to avoid misdiagnosis caused by processing poor quality retinal images. A no-reference transform-based RIQA algorithm is introduced that assesses images based on five clarity and content quality issues: sharpness, illumination, homogeneity, field definition, and content. Transform-based RIQA algorithms have the advantage of considering retinal structures while being computationally inexpensive. Wavelet-based features are proposed to evaluate the sharpness and overall illumination of the images. A retinal saturation channel is designed and used along with wavelet-based features for homogeneity assessment. The presented sharpness and illumination features are utilized to assure adequate field definition, whereas color information is used to exclude nonretinal images. Several publicly available datasets of varying quality grades are utilized to evaluate the feature sets resulting in area under the receiver operating characteristic curve above 0.99 for each of the individual feature sets. The overall quality is assessed by a classifier that uses the collective features as an input vector. The classification results show superior performance of the algorithm in comparison to other methods from literature. Moreover, the algorithm addresses efficiently and comprehensively various quality issues and is suitable for automatic screening systems.
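
    One plausible wavelet-based sharpness feature, sketched with pywt: the fraction of image energy carried by the detail sub-bands, which drops when an image is blurred. This is an illustrative stand-in, not the paper's exact feature set.

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def wavelet_sharpness(img, wavelet='db4', level=2):
    """Share of total energy in the detail sub-bands of a 2-level DWT."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    detail_energy = sum(float((band ** 2).sum())
                        for bands in coeffs[1:] for band in bands)
    approx_energy = float((coeffs[0] ** 2).sum())
    return detail_energy / (detail_energy + approx_energy)

rng = np.random.default_rng(0)
sharp = rng.uniform(0, 1, (256, 256))          # stand-in for a sharp fundus image
blurred = gaussian_filter(sharp, sigma=3)      # simulated out-of-focus capture
print(wavelet_sharpness(sharp) > wavelet_sharpness(blurred))   # True
```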

  17. Content-based histopathology image retrieval using CometCloud.

    Science.gov (United States)

    Qi, Xin; Wang, Daihou; Rodero, Ivan; Diaz-Montes, Javier; Gensure, Rebekah H; Xing, Fuyong; Zhong, Hua; Goodell, Lauri; Parashar, Manish; Foran, David J; Yang, Lin

    2014-08-26

The development of digital imaging technology is creating extraordinary levels of accuracy that provide support for improved reliability in different aspects of image analysis, such as content-based image retrieval, image segmentation, and classification. This has dramatically increased the volume and rate at which data are generated. Together, these facts make querying and sharing non-trivial and render centralized solutions infeasible. Moreover, in many cases this data is distributed and must be shared across multiple institutions, requiring decentralized solutions. In this context, a new generation of data/information-driven applications must be developed to take advantage of the national advanced cyber-infrastructure (ACI), which enables investigators to seamlessly and securely interact with information and data distributed across geographically disparate resources. This paper presents the development and evaluation of a novel content-based image retrieval (CBIR) framework. The methods were tested extensively using both peripheral blood smears and renal glomeruli specimens. The datasets and performance were evaluated by two pathologists to determine the concordance. The CBIR algorithms that were developed can reliably retrieve the candidate image patches exhibiting intensity and morphological characteristics that are most similar to a given query image. The methods described in this paper are able to reliably discriminate among subtle staining differences and spatial pattern distributions. By integrating a newly developed dual-similarity relevance feedback module into the CBIR framework, the CBIR results were improved substantially. By aggregating the computational power of high performance computing (HPC) and cloud resources, we demonstrated that the method can be executed in minutes on the Cloud compared to weeks using standard computers. In this paper, we present a set of newly developed CBIR algorithms and validate them using two …

  18. Content-based image retrieval: Color-selection exploited

    NARCIS (Netherlands)

    Broek, E.L. van den; Vuurpijl, L.G.; Kisters, P. M. F.; Schmid, J.C.M. von; Moens, M.F.; Busser, R. de; Hiemstra, D.; Kraaij, W.

    2002-01-01

This research presents a new color selection interface that facilitates query-by-color in Content-Based Image Retrieval (CBIR). Existing CBIR color selection interfaces are judged to be non-intuitive and difficult to use. Our interface copes with these usability problems. It is based on 11 color categories …

  19. Content-Based Image Retrieval: Color-selection exploited

    NARCIS (Netherlands)

    Moens, Marie-Francine; van den Broek, Egon; Vuurpijl, L.G.; de Brusser, Rik; Kisters, P.M.F.; Hiemstra, Djoerd; Kraaij, Wessel; von Schmid, J.C.M.

    2002-01-01

This research presents a new color selection interface that facilitates query-by-color in Content-Based Image Retrieval (CBIR). Existing CBIR color selection interfaces are judged to be non-intuitive and difficult to use. Our interface copes with these usability problems. It is based on 11 color categories …

  20. Content-based analysis and indexing of sports video

    Science.gov (United States)

    Luo, Ming; Bai, Xuesheng; Xu, Guang-you

    2001-12-01

    An explosion of on-line image and video data in digital form is already well underway. With the exponential rise in interactive information exploration and dissemination through the World-Wide Web, the major inhibitors of rapid access to on-line video data are the management of capture and storage, and content-based intelligent search and indexing techniques. This paper proposes an approach for content-based analysis and event-based indexing of sports video. It includes a novel method to organize shots - classifying shots as close shots and far shots, an original idea of blur extent-based event detection, and an innovative local mutation-based algorithm for caption detection and retrieval. Results on extensive real TV programs demonstrate the applicability of our approach.
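
    The blur-extent cue can be illustrated with a standard sharpness proxy, the variance of the Laplacian, which falls as a frame becomes blurrier; this proxy is an assumption for illustration and is not the paper's own blur-extent measure.

```python
import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def blur_extent(frame):
    """Higher value = sharper frame; lower = more motion/defocus blur."""
    return float(laplace(frame.astype(float)).var())

rng = np.random.default_rng(0)
sharp_frame = rng.uniform(0, 255, (240, 320))           # stand-in video frame
blurred_frame = gaussian_filter(sharp_frame, sigma=2)   # simulated fast motion
print(blur_extent(sharp_frame) > blur_extent(blurred_frame))   # True
```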

  1. Content-addressable read/write memories for image analysis

    Science.gov (United States)

    Snyder, W. E.; Savage, C. D.

    1982-01-01

The commonly encountered image analysis problems of region labeling and clustering are found to be cases of the search-and-rename problem, which can be solved in parallel by a system architecture that is inherently suitable for VLSI implementation. This architecture is a novel form of content-addressable memory (CAM) which provides parallel search and update functions, reducing processing time to constant time per operation. It has been proposed in related investigations by Hall (1981) that, with VLSI, CAM-based structures with enhanced instruction sets for general purpose processing will be feasible.

  2. Bread Water Content Measurement Based on Hyperspectral Imaging

    DEFF Research Database (Denmark)

    Liu, Zhi; Møller, Flemming

    2011-01-01

Water content is one of the most important properties of bread for taste assessment or storage monitoring. Traditional bread water content measurement methods are mostly performed manually, which is destructive and time consuming. This paper proposes an automated water content measurement … for bread quality based on near-infrared hyperspectral imaging, evaluated against the conventional manual loss-in-weight method. For this purpose, hyperspectral component unmixing is used to measure the water content quantitatively, and a definition of a bread water content index is presented …
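
    A sketch of linear spectral unmixing, one common way to realize the "components unmixing" step: each pixel spectrum is decomposed into non-negative abundances of a few endmembers with NMF. The endmember count, the data, and the mapping of one component to "water" are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_pixels, n_bands = 500, 100
endmembers = rng.uniform(0, 1, (2, n_bands))            # e.g. "water", "dry crumb"
abund = rng.dirichlet([2, 2], size=n_pixels)            # true abundances per pixel
cube = abund @ endmembers + 0.01 * rng.standard_normal((n_pixels, n_bands))

nmf = NMF(n_components=2, init='nndsvda', max_iter=500, random_state=0)
abund_hat = nmf.fit_transform(np.clip(cube, 0, None))   # estimated abundances
# Component order is arbitrary; in practice each component would be
# matched to a measured endmember spectrum before interpretation.
water_index = abund_hat[:, 0] / abund_hat.sum(axis=1)   # relative fraction
print(round(float(water_index.mean()), 2))
```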

  3. Image content authentication based on channel coding

    Science.gov (United States)

    Zhang, Fan; Xu, Lei

    2008-03-01

Content authentication determines whether an image has been tampered with and, if necessary, locates malicious alterations made to the image. Authentication of a still image or a video is motivated by the recipient's interest, and its principle is that a receiver must be able to identify the source of the document reliably. Several techniques and concepts based on data hiding or steganography have been designed as a means of image authentication. This paper presents a color image authentication algorithm based on convolution coding. The high bits of the color digital image are coded by convolution codes for tamper detection and localization, and the authentication messages are hidden in the low bits of the image in order to keep the authentication invisible. All communications channels are subject to errors introduced by additive Gaussian noise in their environment. Data perturbations cannot be eliminated, but their effect can be minimized by the use of Forward Error Correction (FEC) techniques in the transmitted data stream and decoders in the receiving system that detect and correct bits in error. In the proposed algorithm, the message of each pixel is convolution encoded; after parity check and block interleaving, the redundant bits are embedded in the image offset. Tampering can be detected and restored without accessing the original image.
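
    A deliberately simplified sketch of the embed-in-low-bits idea: the high bits of each pixel are protected by a checksum hidden in the pixel's own least significant bit. Plain per-pixel parity is substituted here for the paper's convolutional coding and interleaving, to keep the example short.

```python
import numpy as np

def embed_auth(img):
    """Store the parity of each pixel's 7 high bits in its own LSB."""
    high = img & 0xFE                           # the 7 high bits
    parity = np.unpackbits(high[..., None], axis=-1).sum(axis=-1) & 1
    return high | parity.astype(np.uint8)       # LSB <- parity bit

def verify_auth(img):
    """Return a boolean map of pixels whose parity check fails."""
    high = img & 0xFE
    parity = np.unpackbits(high[..., None], axis=-1).sum(axis=-1) & 1
    return (img & 1) != parity

rng = np.random.default_rng(0)
original = rng.integers(0, 256, (64, 64), dtype=np.uint8)
protected = embed_auth(original)
tampered = protected.copy()
tampered[10:20, 10:20] ^= 32                    # flip a high bit in one region
bad = verify_auth(tampered)
print(int(bad.sum()), bool(bad[10:20, 10:20].all()))   # 100 True
```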

  4. The Use of QBIC Content-Based Image Retrieval System

    Directory of Open Access Journals (Sweden)

    Ching-Yi Wu

    2004-03-01

The rapid increase in digital images has drawn increasing attention to the development of image retrieval technologies. Content-based image retrieval (CBIR) has become an important approach to retrieving image data from large collections. This article reports our results on the use and user study of a CBIR system. Thirty-eight students majoring in art and design were invited to use IBM's QBIC (Query by Image Content) system through the Internet. Data on their information needs, behaviors, and retrieval strategies were collected through in-depth interviews, observation, and a self-described think-aloud process. The important conclusions are: (1) There are four types of information needs for image data: implicit, inspirational, ever-changing, and purposive; the type of need may change during the retrieval process. (2) CBIR is suitable for example-type queries, text retrieval is suitable for scenario-type queries, and image browsing is suitable for symbolic queries. (3) Unlike text retrieval, detailed description of the query condition may more easily lead to retrieval failure. (4) CBIR is suitable for domain-specific image collections, not for images on the World-Wide Web. [Article content in Chinese]

  5. A Novel Technique for Shape Feature Extraction Using Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Dhanoa Jaspreet Singh

    2016-01-01

With the advent of technology and multimedia information, digital images are increasing very quickly, and various techniques are being developed to retrieve or search the information contained in them. Traditional text-based image retrieval systems are inadequate: they are time consuming because they require manual image annotation, and annotations differ from person to person. An alternative is the Content-Based Image Retrieval (CBIR) system, which retrieves and searches images by their content rather than by text or keywords. A great deal of work has been carried out in the area of Content-Based Image Retrieval (CBIR) with various feature extraction techniques. Shape is a significant image feature as it reflects human perception. Moreover, shape is quite simple for the user to employ when defining objects in an image, compared to other features such as color and texture. However, no descriptor applied alone will give fruitful results; by combining a descriptor with an improved classifier, one can use the positive features of both. Therefore, an attempt is made to establish an algorithm for accurate shape feature extraction in Content-Based Image Retrieval (CBIR). The main objectives of this project are: (a) to propose an algorithm for shape feature extraction using CBIR, (b) to evaluate the performance of the proposed algorithm, and (c) to compare the proposed algorithm with state-of-the-art techniques.

  6. A picture tells a thousand words: A content analysis of concussion-related images online.

    Science.gov (United States)

    Ahmed, Osman H; Lee, Hopin; Struik, Laura L

    2016-09-01

Recently, image-sharing social media platforms have become a popular medium for sharing health-related images and associated information. However, within the field of sports medicine, and more specifically sports-related concussion, the content of images and metadata shared through these popular platforms has not been investigated. The aim of this study was to analyse the content of concussion-related images and their accompanying metadata on image-sharing social media platforms. We retrieved 300 images from Pinterest, Instagram and Flickr using a standardised search strategy. All images were screened and duplicate images were removed. We excluded images if they were non-static images, illustrations, animations, or screenshots. The content and characteristics of each image were evaluated using a customised coding scheme to determine major content themes, and images were referenced against the current international concussion management guidelines. From 300 potentially relevant images, 176 were included for analysis: 70 from Pinterest, 63 from Flickr, and 43 from Instagram. Most images were of another person or a scene (64%), with the primary content depicting injured individuals (39%). The primary purposes of the images were to share a concussion-related incident (33%) and to dispense education (19%). For those images where it could be evaluated, the majority (91%) were found to reflect the Sports Concussion Assessment Tool 3 (SCAT3) guidelines. The ability to rapidly disseminate rich information through photos, images, and infographics to a wide-reaching audience suggests that image-sharing social media platforms could be used as an effective communication tool for sports concussion. Public health strategies could direct educative content to targeted populations via image-sharing platforms. Further research is required to understand how image-sharing platforms can be used to effectively relay evidence-based information to patients and sports medicine …

  7. A Novel Optimization-Based Approach for Content-Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Manyu Xiao

    2013-01-01

Content-based image retrieval is nowadays one of the possible and promising solutions for managing image databases effectively. However, with the large number of images, there still exists a great discrepancy between users' expectations (accuracy and efficiency) and the real performance of image retrieval. In this work, new optimization strategies are proposed for vocabulary tree building, retrieval, and matching methods. More precisely, a new clustering strategy combining classification and the conventional K-Means method is first redefined. Then a new matching technique is built to eliminate the error caused by the large-scale scale-invariant feature transform (SIFT). Additionally, a new unit mechanism is proposed to reduce the cost of indexing time. Finally, the numerical results show that excellent performance is obtained in both accuracy and efficiency based on the proposed improvements for image retrieval.

  8. Plant leaf chlorophyll content retrieval based on a field imaging spectroscopy system.

    Science.gov (United States)

    Liu, Bo; Yue, Yue-Min; Li, Ru; Shen, Wen-Jing; Wang, Ke-Lin

    2014-10-23

    A field imaging spectrometer system (FISS; 380-870 nm and 344 bands) was designed for agriculture applications. In this study, FISS was used to gather spectral information from soybean leaves. The chlorophyll content was retrieved using a multiple linear regression (MLR), partial least squares (PLS) regression and support vector machine (SVM) regression. Our objective was to verify the performance of FISS in a quantitative spectral analysis through the estimation of chlorophyll content and to determine a proper quantitative spectral analysis method for processing FISS data. The results revealed that the derivative reflectance was a more sensitive indicator of chlorophyll content and could extract content information more efficiently than the spectral reflectance, which is more significant for FISS data compared to ASD (analytical spectral devices) data, reducing the corresponding RMSE (root mean squared error) by 3.3%-35.6%. Compared with the spectral features, the regression methods had smaller effects on the retrieval accuracy. A multivariate linear model could be the ideal model to retrieve chlorophyll information with a small number of significant wavelengths used. The smallest RMSE of the chlorophyll content retrieved using FISS data was 0.201 mg/g, a relative reduction of more than 30% compared with the RMSE based on a non-imaging ASD spectrometer, which represents a high estimation accuracy compared with the mean chlorophyll content of the sampled leaves (4.05 mg/g). Our study indicates that FISS could obtain both spectral and spatial detailed information of high quality. Its image-spectrum-in-one merit promotes the good performance of FISS in quantitative spectral analyses, and it can potentially be widely used in the agricultural sector.
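
    A minimal sketch of the regression step, assuming first-derivative spectra and scikit-learn's PLS implementation; the 344-band spectra and chlorophyll values below are synthetic, and the component count is an arbitrary choice.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_bands = 120, 344                  # FISS records 344 bands
spectra = np.cumsum(rng.standard_normal((n_samples, n_bands)), axis=1)
chlorophyll = spectra[:, 200] * 0.01 + rng.normal(0, 0.05, n_samples)

X = np.gradient(spectra, axis=1)               # derivative reflectance
X_tr, X_te, y_tr, y_te = train_test_split(X, chlorophyll, random_state=0)
pls = PLSRegression(n_components=5)
pls.fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((pls.predict(X_te).ravel() - y_te) ** 2))
print(f"RMSE: {rmse:.3f} (synthetic units)")
```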

  9. Content-based quality evaluation of color images: overview and proposals

    Science.gov (United States)

    Tremeau, Alain; Richard, Noel; Colantoni, Philippe; Fernandez-Maloigne, Christine

    2003-12-01

The automatic prediction of perceived quality from image data in general, and the assessment of particular image characteristics or attributes that may need improvement in particular, is becoming an increasingly important part of intelligent imaging systems. The purpose of this paper is to propose that the color imaging community develop a software package, available on the internet, to help users select the approach best suited to a given application. The ultimate goal of this project is to propose, and then implement, an open and unified color imaging system that sets up a favourable context for the evaluation and analysis of color imaging processes. Many different methods for measuring the performance of a process have been proposed by different researchers. In this paper, we discuss the advantages and shortcomings of most of the main analysis criteria and performance measures currently used. The aim is not to establish a harsh competition between algorithms or processes, but rather to test and compare the efficiency of methodologies, first to highlight the strengths and weaknesses of a given algorithm or methodology on a given image type, and second to make these results publicly available. This paper is focused on two important unsolved problems: Why is it so difficult to select a color space which gives better results than another? And why is it so difficult to select an image quality metric which gives better results than another, with respect to the judgment of the human visual system? Several methods used either in color imaging or in image quality are thus discussed. Proposals for content-based image measures and means of developing a standard test suite are then presented. We advocate an evaluation protocol based on an automated procedure; this is the ultimate goal of our proposal.

  10. Indexing, learning and content-based retrieval for special purpose image databases

    NARCIS (Netherlands)

    M.J. Huiskes (Mark); E.J. Pauwels (Eric)

    2005-01-01

This chapter deals with content-based image retrieval in special purpose image databases. As image data is amassed ever more effortlessly, building efficient systems for searching and browsing image databases becomes increasingly urgent. We provide an overview of the current …

  11. A content analysis of thinspiration images and text posts on Tumblr.

    Science.gov (United States)

    Wick, Madeline R; Harriger, Jennifer A

    2018-03-01

Thinspiration is content advocating extreme weight loss by means of images and/or text posts. While past content analyses have examined thinspiration content on social media and other websites, no research to date has examined thinspiration content on Tumblr. Over the course of a week, 222 images and text posts were collected after entering the keyword 'thinspiration' into the Tumblr search bar. These images were then rated on a variety of characteristics. The majority of thinspiration images included a thin woman adhering to culturally based beauty ideals, often posing in a manner that accentuated her thinness or sexuality. The most common themes for thinspiration text posts included dieting/restraint, weight loss, food guilt, and body guilt. The thinspiration content on Tumblr appears to be consistent with that on other mediums. Future research should utilize experimental methods to examine the potential effects of consuming thinspiration content on Tumblr. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Parallel content-based sub-image retrieval using hierarchical searching.

    Science.gov (United States)

    Yang, Lin; Qi, Xin; Xing, Fuyong; Kurc, Tahsin; Saltz, Joel; Foran, David J

    2014-04-01

The capacity to systematically search through large image collections and ensembles and detect regions exhibiting similar morphological characteristics is central to pathology diagnosis. Unfortunately, the primary methods used to search digitized, whole-slide histopathology specimens are slow and prone to inter- and intra-observer variability. The central objective of this research was to design, develop, and evaluate a content-based image retrieval system to assist doctors in quick and reliable content-based comparative searches of similar prostate image patches. Given a representative image patch (sub-image), the algorithm will return a ranked ensemble of image patches throughout the entire whole-slide histology section which exhibit the most similar morphologic characteristics. This is accomplished by first performing hierarchical searching based on a newly developed hierarchical annular histogram (HAH). The set of candidates is then further refined in the second stage of processing by computing a color histogram from eight equally divided segments within each square annular bin defined in the original HAH. A demand-driven master-worker parallelization approach is employed to speed up the searching procedure. Using this strategy, the query patch is broadcast to all worker processes, and each worker process is dynamically assigned an image by the master process to search for and return a ranked list of similar patches in the image. The algorithm was tested using digitized hematoxylin and eosin (H&E) stained prostate cancer specimens. We have achieved an excellent image retrieval performance: the recall rate within the first 40 rank retrieved image patches is ∼90%. Both the testing data and source code can be downloaded from http://pleiad.umdnj.edu/CBII/Bioinformatics/.
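
    A sketch of the first-stage descriptor as described above: intensity histograms computed over concentric square rings ("annular bins") around the patch centre. The ring and bin counts are illustrative, and the second-stage colour refinement and master-worker parallelization are omitted.

```python
import numpy as np

def annular_histogram(patch, n_rings=4, n_bins=16):
    """Concatenated intensity histograms over concentric square rings."""
    h, w = patch.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    yy, xx = np.mgrid[0:h, 0:w]
    # Chebyshev distance from the centre yields square rings.
    ring = np.maximum(np.abs(yy - cy), np.abs(xx - cx))
    edges = np.linspace(0, ring.max() + 1e-9, n_rings + 1)
    feats = []
    for r in range(n_rings):
        mask = (ring >= edges[r]) & (ring < edges[r + 1])
        hist, _ = np.histogram(patch[mask], bins=n_bins,
                               range=(0, 256), density=True)
        feats.append(hist)
    return np.concatenate(feats)

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, (64, 64)).astype(float)
print(annular_histogram(patch).shape)    # 4 rings x 16 bins -> (64,)
```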

  13. Advanced Cell Classifier: User-Friendly Machine-Learning-Based Software for Discovering Phenotypes in High-Content Imaging Data.

    Science.gov (United States)

    Piccinini, Filippo; Balassa, Tamas; Szkalisity, Abel; Molnar, Csaba; Paavolainen, Lassi; Kujala, Kaisa; Buzas, Krisztina; Sarazova, Marie; Pietiainen, Vilja; Kutay, Ulrike; Smith, Kevin; Horvath, Peter

    2017-06-28

    High-content, imaging-based screens now routinely generate data on a scale that precludes manual verification and interrogation. Software applying machine learning has become an essential tool to automate analysis, but these methods require annotated examples to learn from. Efficiently exploring large datasets to find relevant examples remains a challenging bottleneck. Here, we present Advanced Cell Classifier (ACC), a graphical software package for phenotypic analysis that addresses these difficulties. ACC applies machine-learning and image-analysis methods to high-content data generated by large-scale, cell-based experiments. It features methods to mine microscopic image data, discover new phenotypes, and improve recognition performance. We demonstrate that these features substantially expedite the training process, successfully uncover rare phenotypes, and improve the accuracy of the analysis. ACC is extensively documented, designed to be user-friendly for researchers without machine-learning expertise, and distributed as a free open-source tool at www.cellclassifier.org. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Content-based image retrieval applied to bone age assessment

    Science.gov (United States)

    Fischer, Benedikt; Brosig, André; Welter, Petra; Grouls, Christoph; Günther, Rolf W.; Deserno, Thomas M.

    2010-03-01

Radiological bone age assessment is based on local image regions of interest (ROI), such as the epiphysis or the area of carpal bones. These are compared to a standardized reference and scores determining the skeletal maturity are calculated. For computer-aided diagnosis, automatic ROI extraction and analysis is done so far mainly by heuristic approaches. Due to high variations in the imaged biological material and differences in age, gender and ethnic origin, automatic analysis is difficult and frequently requires manual interactions. On the contrary, epiphyseal regions (eROIs) can be compared to previous cases with known age by content-based image retrieval (CBIR). This requires a sufficient number of cases with reliable positioning of the eROI centers. In this first approach to bone age assessment by CBIR, we conduct leave-one-out experiments on 1,102 left hand radiographs and 15,428 metacarpal and phalangeal eROIs from the USC hand atlas. The similarity of the eROIs is assessed by cross-correlation of 16x16 scaled eROIs. The effects of the number of eROIs, two age computation methods as well as the number of considered CBIR references are analyzed. The best results yield an error rate of 1.16 years and a standard deviation of 0.85 years. As the appearance of the hand varies naturally by up to two years, these results clearly demonstrate the applicability of the CBIR approach for bone age estimation.
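
    A minimal sketch of the similarity measure: eROIs scaled to 16x16 are compared by zero-mean normalized cross-correlation, and reference cases are ranked by that score. The resizing step and the subsequent age computation are out of scope here, and the data are synthetic.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
query = rng.uniform(0, 255, (16, 16))                   # 16x16 scaled eROI
references = {i: rng.uniform(0, 255, (16, 16)) for i in range(100)}
references[42] = query + rng.normal(0, 5, (16, 16))     # a near match

ranked = sorted(references, key=lambda i: ncc(query, references[i]),
                reverse=True)
print(ranked[0])    # 42: the most similar reference eROI
```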

  15. Design Guidelines for a Content-Based Image Retrieval Color-Selection Interface

    NARCIS (Netherlands)

    Eggen, Berry; van den Broek, Egon; van der Veer, Gerrit C.; Kisters, Peter M.F.; Willems, Rob; Vuurpijl, Louis G.

    2004-01-01

In Content-Based Image Retrieval (CBIR) two query-methods exist: query-by-example and query-by-memory. The user either selects an example image or selects image features retrieved from memory (such as color, texture, spatial attributes, and shape) to define his query. Hitherto, research on CBIR …

  16. Development of automatic image analysis methods for high-throughput and high-content screening

    NARCIS (Netherlands)

    Di, Zi

    2013-01-01

This thesis focuses on the development of image analysis methods for ultra-high content analysis of high-throughput screens, where cellular phenotype responses to various genetic or chemical perturbations are under investigation. Our primary goal is to deliver efficient and robust image analysis.

  17. Image Retargeting by Content-Aware Synthesis

    OpenAIRE

    Dong, Weiming; Wu, Fuzhang; Kong, Yan; Mei, Xing; Lee, Tong-Yee; Zhang, Xiaopeng

    2014-01-01

Real-world images usually contain vivid content and rich textural details, which complicate manipulations of them. In this paper, we design a new framework based on content-aware synthesis to enhance content-aware image retargeting. By detecting the textural regions in an image, the textural image content can be synthesized rather than simply distorted or cropped. This method enables the manipulation of textural & non-textural regions with different strategies since they have different...

  18. Learning effective color features for content based image retrieval in dermatology

    NARCIS (Netherlands)

    Bunte, Kerstin; Biehl, Michael; Jonkman, Marcel F.; Petkov, Nicolai

We investigate the extraction of effective color features for a content-based image retrieval (CBIR) application in dermatology. Effectiveness is measured by the rate of correct retrieval of images from four color classes of skin lesions. We employ and compare two different methods to learn …

  19. Content Based Radiographic Images Indexing and Retrieval Using Pattern Orientation Histogram

    Directory of Open Access Journals (Sweden)

    Abolfazl Lakdashti

    2008-06-01

Introduction: Content-Based Image Retrieval (CBIR) is a method of image searching and retrieval in a database. In medical applications, CBIR is a tool used by physicians to compare previous and current medical images associated with patients' pathological conditions. As the volume of pictorial information stored in medical image databases grows, efficient image indexing and retrieval is increasingly becoming a necessity. Materials and Methods: This paper presents a new content-based radiographic image retrieval approach based on a histogram of pattern orientations, namely the pattern orientation histogram (POH). POH represents the spatial distribution of five different pattern orientations: vertical, horizontal, diagonal down/left, diagonal down/right and non-orientation. In this method, a given image is first divided into image blocks and the frequency of each type of pattern is determined in each block. Then, local pattern histograms for each of these image blocks are computed. Results: The method was compared to two well-known texture-based image retrieval methods: Tamura and the Edge Histogram Descriptor (EHD) in the MPEG-7 standard. Experimental results based on the 10,000-image IRMA radiography dataset demonstrate that POH provides better precision and recall rates than Tamura and EHD. For some images, the recall and precision rates obtained by POH are, respectively, 48% and 18% better than the best of the two above-mentioned methods. Discussion and Conclusion: Since we exploit the absolute location of the pattern in the image as well as its global composition, the proposed matching method can retrieve semantically similar medical images.
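
    One way to realize a five-class pattern orientation histogram, sketched here by classifying each pixel's gradient direction into four orientation classes, or "non-orientation" when the gradient is weak, and then histogramming the labels per image block. The thresholds, block size and class definitions are assumptions, not the authors' exact pattern definition.

```python
import numpy as np

def poh(img, block=32, weak=1e-3):
    """Per-block histograms over 4 orientation classes + non-orientation."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180       # orientation in [0, 180)
    # Bin the gradient direction into 4 classes at 45-degree spacing.
    label = (((ang + 22.5) // 45).astype(int)) % 4
    label[mag < weak * mag.max()] = 4                # class 4: non-orientation
    h, w = img.shape
    feats = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            blk = label[by:by + block, bx:bx + block]
            feats.append(np.bincount(blk.ravel(), minlength=5) / blk.size)
    return np.concatenate(feats)

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (128, 128))                # stand-in radiograph
print(poh(img).shape)    # 16 blocks x 5 orientation bins -> (80,)
```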

  20. Disability in physical education textbooks: an analysis of image content.

    Science.gov (United States)

    Táboas-Pais, María Inés; Rey-Cao, Ana

    2012-10-01

    The aim of this paper is to show how images of disability are portrayed in physical education textbooks for secondary schools in Spain. The sample was composed of 3,316 images published in 36 textbooks by 10 publishing houses. A content analysis was carried out using a coding scheme based on categories employed in other similar studies and adapted to the requirements of this study with additional categories. The variables were camera angle, gender, type of physical activity, field of practice, space, and level. Univariate and bivariate descriptive analyses were also carried out. The Pearson chi-square statistic was used to identify associations between the variables. Results showed a noticeable imbalance between people with disabilities and people without disabilities, and women with disabilities were less frequently represented than men with disabilities. People with disabilities were depicted as participating in a very limited variety of segregated, competitive, and elite sports activities.

  1. ImageGrouper: a group-oriented user interface for content-based image retrieval and digital image arrangement

    NARCIS (Netherlands)

    Nakazato, Munehiro; Manola, L.; Huang, Thomas S.

In content-based image retrieval (CBIR), experimental (trial-and-error) query with relevance feedback is essential for successful retrieval. Unfortunately, traditional user interfaces are not suitable for trying different combinations of query examples. This is because, first, these systems …

  2. Biased discriminant euclidean embedding for content-based image retrieval.

    Science.gov (United States)

    Bian, Wei; Tao, Dacheng

    2010-02-01

With many potential multimedia applications, content-based image retrieval (CBIR) has recently gained more attention for image management and web search. A wide variety of relevance feedback (RF) algorithms have been developed in recent years to improve the performance of CBIR systems. These RF algorithms capture the user's preferences and bridge the semantic gap. However, there is still considerable room to improve RF performance, because the popular RF algorithms ignore the manifold structure of image low-level visual features. In this paper, we propose biased discriminative Euclidean embedding (BDEE), which parameterises samples in the original high-dimensional ambient space to discover the intrinsic coordinates of image low-level visual features. BDEE precisely models both the intraclass geometry and interclass discrimination and never meets the undersampled problem. To consider unlabelled samples, a manifold regularization-based term is introduced and combined with BDEE to form semi-supervised BDEE, or semi-BDEE for short. To justify the effectiveness of the proposed BDEE and semi-BDEE, we compare them against conventional RF algorithms and show a significant improvement in terms of accuracy and stability on a subset of the Corel image gallery.

  3. Computer-aided diagnostics of screening mammography using content-based image retrieval

    Science.gov (United States)

    Deserno, Thomas M.; Soiron, Michael; de Oliveira, Júlia E. E.; de A. Araújo, Arnaldo

    2012-03-01

Breast cancer is one of the main causes of death among women in occidental countries. In recent years, screening mammography has been established worldwide for the early detection of breast cancer, and computer-aided diagnostics (CAD) is being developed to assist physicians reading mammograms. A promising method for CAD is content-based image retrieval (CBIR). Recently, we developed a classification scheme for suspicious tissue patterns based on the support vector machine (SVM). In this paper, we continue moving towards automatic CAD of screening mammography. The experiments are based on a total of 10,509 radiographs collected from different sources. Of these, 3,375 images are provided with one, and 430 radiographs with more than one, chain-code annotation of cancerous regions. In different experiments, this data is divided into 12 and 20 classes, distinguishing four categories of tissue density, three categories of pathology and, in the 20-class problem, two categories of lesion type. Balancing the number of images in each class leaves 233 and 45 images in each of the 12 and 20 classes, respectively. Using a two-dimensional principal component analysis, features are extracted from small patches of 128 x 128 pixels and classified by means of an SVM. Overall, the accuracy of the raw classification was 61.6% and 52.1% for the 12- and 20-class problems, respectively. The confusion matrices are assessed for detailed analysis. Furthermore, an implementation of an SVM-based CBIR system for CADx in screening mammography is presented. In conclusion, with smarter patch extraction, the CBIR approach might reach precision rates that are helpful for physicians. This, however, needs more comprehensive evaluation on clinical data.
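
    A compact sketch of the classification chain: patch features via PCA followed by an SVM. Plain PCA on flattened patches stands in for the paper's two-dimensional PCA, the patches are downscaled to 32x32 for brevity, and the data (with a weak injected class signal) are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_classes, patch = 45, 20, 32           # mirrors the 20-class setup
X = rng.standard_normal((n_per_class * n_classes, patch * patch))
y = np.repeat(np.arange(n_classes), n_per_class)
X += y[:, None] * 0.05                               # weak synthetic class signal

clf = make_pipeline(PCA(n_components=40), SVC(kernel='rbf'))
print(cross_val_score(clf, X, y, cv=3).mean())       # well above 5% chance level
```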

  4. An Ibm PC/AT-Based Image Acquisition And Processing System For Quantitative Image Analysis

    Science.gov (United States)

    Kim, Yongmin; Alexander, Thomas

    1986-06-01

    In recent years, a large number of applications have been developed for image processing systems in the area of biological imaging. We previously completed the development of a dedicated microcomputer-based image processing and analysis system for quantitative microscopy. The system's primary function has been to facilitate, and ultimately automate, quantitative image analysis tasks such as the measurement of cellular DNA content. From this development experience, and from interaction with system users, biologists and technicians, we recognized that the increasingly widespread use of image processing systems, and the development and application of new techniques for utilizing their capabilities, would generate a need for an inexpensive, general-purpose image acquisition and processing system tailored to the needs of the medical community. We are currently engaged in the development and testing of hardware and software for a fairly high-performance image processing computer system based on a popular personal computer. In this paper, we describe the design and development of this system. Biological image processing computer systems have now reached a level of hardware and software refinement where they can become convenient image analysis tools for biologists. The development of a general-purpose image processing system for quantitative image analysis that is inexpensive, flexible and easy to use represents a significant step towards making microscopic digital image processing techniques more widely applicable, not only in a research environment as a biologist's workstation, but also in clinical environments as a diagnostic tool.

  5. Phase-image-based content-addressable holographic data storage

    Science.gov (United States)

    John, Renu; Joseph, Joby; Singh, Kehar

    2004-03-01

    We propose and demonstrate the use of phase images for content-addressable holographic data storage. The use of binary phase-based data pages with 0 and π phase changes produces a uniform spectral distribution at the Fourier plane. The absence of a strong DC component at the Fourier plane and the greater intensity of higher-order spatial frequencies facilitate better recording of high spatial frequencies and improve the discrimination capability of the content-addressable memory. This improves the results of associative recall in a holographic memory system and can yield a low number of false hits even for small search arguments. The phase-modulated pixels also provide an opportunity for subtraction among data pixels, leading to better discrimination between similar data pages.

  6. Towards a framework for agent-based image analysis of remote-sensing data.

    Science.gov (United States)

    Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera

    2015-04-03

    Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images, or even large image archives, without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects' properties across these images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets have high transfer potential, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which has been investigated extensively in software engineering. The aims of such integration are (a) rule sets that adapt autonomously and (b) image objects that can adapt and adjust themselves to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA).

  7. Content Based Image Matching for Planetary Science

    Science.gov (United States)

    Deans, M. C.; Meyer, C.

    2006-12-01

    Planetary missions generate large volumes of data. With the MER rovers still functioning on Mars, PDS contains over 7200 released images from the Microscopic Imagers alone. These data products are only searchable by keys such as the Sol, spacecraft clock, or rover motion counter index, with little connection to the semantic content of the images. We have developed a method for matching images based on the visual textures they contain. For every image in a database, a series of filters computes the image response to localized frequencies and orientations. Filter responses are turned into a low-dimensional descriptor vector, generating a 37-dimensional fingerprint. For images such as the MER MI, this represents a compression ratio of 99.9965% (the fingerprint is approximately 0.0035% the size of the original image). At query time, fingerprints are quickly matched to find images with similar appearance. Image databases containing several thousand images are preprocessed offline in a matter of hours; image matches from the database are found in a matter of seconds. We have demonstrated this image matching technique using three sources of data. The first database consists of 7200 images from the MER Microscopic Imager. The second consists of 3500 images from the Narrow Angle Mars Orbital Camera (MOC-NA), which were cropped into 1024×1024 sub-images for consistency. The third consists of 7500 scanned archival photos from the Apollo Metric Camera. Example query results from all three data sources are shown. We have also carried out user tests to evaluate matching performance by hand-labeling results. User tests verify a false-positive rate of approximately 20% for the top 14 results for the MOC-NA and MER MI data, meaning that typically 10 to 12 of the 14 results match the query image sufficiently. This represents a powerful search tool for databases of thousands of images where the a priori match probability for an image might be less than 1%. Qualitatively, correct
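
    A hedged sketch of the fingerprint idea described above, assuming a generic Gabor filter bank (via scikit-image) in place of the authors' unspecified 37-filter design; the toy 64×64 arrays stand in for MI frames, and a nearest-neighbour search over descriptors plays the role of query-time matching.

    ```python
    # Texture fingerprints: mean Gabor-magnitude response per
    # (frequency, orientation), matched by Euclidean distance.
    import numpy as np
    from skimage.filters import gabor

    FREQS = (0.05, 0.1, 0.2, 0.4)
    THETAS = tuple(np.pi * k / 6 for k in range(6))

    def fingerprint(image):
        feats = []
        for f in FREQS:
            for t in THETAS:
                real, imag = gabor(image, frequency=f, theta=t)
                feats.append(np.hypot(real, imag).mean())
        return np.asarray(feats)

    rng = np.random.default_rng(1)
    database = [rng.random((64, 64)) for _ in range(50)]    # stand-in images
    index = np.stack([fingerprint(im) for im in database])  # offline step

    query = database[7] + 0.01 * rng.random((64, 64))       # noisy copy
    dists = np.linalg.norm(index - fingerprint(query), axis=1)
    print("best matches:", np.argsort(dists)[:5])           # 7 ranks first
    ```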

  8. Precision Statistical Analysis of Images Based on Brightness Distribution

    Directory of Open Access Journals (Sweden)

    Muzhir Shaban Al-Ani

    2017-07-01

    Full Text Available Studying the content of images is an important topic, as it enables reasonable and accurate image analysis. Image analysis has recently become a vital field because of the huge number of images transferred via transmission media in our daily life, and these image-crowded media have pushed image analysis to the forefront of research. In this paper, the implemented system proceeds through several steps to compute the statistical measures of standard deviation and mean for both color and grey images, and in the final step the results obtained in the different test cases are compared. The statistical parameters are used to characterize the content of an image and its texture: standard deviation, mean and correlation values are used to study the intensity distribution of the tested images. Reasonable results are obtained for both standard deviation and mean via the implemented system. The major issue addressed in this work is brightness distribution, studied via statistical measures under different types of lighting.
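
    A minimal numpy sketch of the measures this paper relies on (mean, standard deviation, and brightness correlation), assuming synthetic arrays in place of the tested images.

    ```python
    # Per-channel and grey-scale brightness statistics, plus the
    # correlation between the brightness of two images.
    import numpy as np

    rng = np.random.default_rng(0)
    rgb = rng.random((120, 160, 3))                 # stand-in colour image

    gray = rgb.mean(axis=2)                         # simple grey conversion
    print("per-channel mean:", rgb.mean(axis=(0, 1)))
    print("per-channel std :", rgb.std(axis=(0, 1)))
    print("grey mean/std   :", gray.mean(), gray.std())

    other = np.clip(gray + 0.1 * rng.random(gray.shape), 0, 1)
    corr = np.corrcoef(gray.ravel(), other.ravel())[0, 1]
    print("brightness correlation:", corr)
    ```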

  9. INTEGRATION OF SPATIAL INFORMATION WITH COLOR FOR CONTENT RETRIEVAL OF REMOTE SENSING IMAGES

    Directory of Open Access Journals (Sweden)

    Bikesh Kumar Singh

    2010-08-01

    Full Text Available Image databases of remote sensing images have grown rapidly in the last few years due to high-resolution imaging satellites, commercial applications of remote sensing, and high available bandwidth. The problem of content-based image retrieval (CBIR) of remotely sensed images presents a major challenge, not only because of the steadily increasing volume of images acquired from a wide range of sensors but also because of the complexity of the images themselves. In this paper, a software system for content-based retrieval of remote sensing images using the RGB and HSV color spaces is presented. Further, we compare our results with spatiogram-based content retrieval, which integrates spatial information with the color histogram. Experimental results show that integrating spatial information with color improves the image analysis of remote sensing data. In general, retrieval in HSV color space performed better than in RGB color space.
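
    The retrieval step can be illustrated with a short sketch, assuming one HSV colour histogram per image (via scikit-image's rgb2hsv) compared by histogram intersection; the bin counts, toy data and similarity measure are illustrative choices, not necessarily those of the paper.

    ```python
    # Histogram-based retrieval in HSV space: summarise each image by a
    # 3-D colour histogram and rank by histogram intersection.
    import numpy as np
    from skimage.color import rgb2hsv

    def hsv_histogram(rgb, bins=(8, 4, 4)):
        hsv = rgb2hsv(rgb).reshape(-1, 3)
        hist, _ = np.histogramdd(hsv, bins=bins, range=((0, 1),) * 3)
        return hist.ravel() / hist.sum()

    def intersection(h1, h2):
        return np.minimum(h1, h2).sum()        # 1.0 = identical histograms

    rng = np.random.default_rng(0)
    database = [rng.random((64, 64, 3)) for _ in range(20)]
    hists = [hsv_histogram(im) for im in database]

    query = hsv_histogram(database[3])
    scores = [intersection(query, h) for h in hists]
    print("ranked images:", np.argsort(scores)[::-1][:5])  # 3 ranks first
    ```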

  10. Multiscale Distance Coherence Vector Algorithm for Content-Based Image Retrieval

    Science.gov (United States)

    Jiexian, Zeng; Xiupeng, Liu

    2014-01-01

    A multiscale distance coherence vector algorithm for content-based image retrieval (CBIR) is proposed to address two shortcomings of the distance coherence vector algorithm: different shapes can yield the same descriptor, and its noise robustness is poor. In this algorithm, the image contour curve is first evolved with a Gaussian function, and distance coherence vectors are then extracted from the contours of the original image and of the evolved images. The multiscale distance coherence vector is obtained by a reasonable weighting of the distance coherence vectors of the evolved contours. The algorithm is not only invariant to translation, rotation and scaling but also robust to noise. Experimental results show that the algorithm achieves higher recall and precision when retrieving images corrupted by noise. PMID:24883416
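
    The abstract does not spell out the distance coherence vector itself, so the sketch below only illustrates the multiscale contour-evolution idea: a contour smoothed by Gaussians of increasing scale, a simple centroid-distance signature per scale, and a fixed weighting; every concrete choice here is an assumption.

    ```python
    # Multiscale contour evolution with a placeholder per-scale signature.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
    contour = np.stack([np.cos(t) * (1 + 0.2 * np.sin(5 * t)),
                        np.sin(t) * (1 + 0.2 * np.sin(5 * t))], axis=1)

    def signature(c):
        d = np.linalg.norm(c - c.mean(axis=0), axis=1)
        return d / d.max()                      # scale-invariant distances

    scales, weights = (0.0, 2.0, 4.0), (0.5, 0.3, 0.2)
    multiscale = sum(
        w * signature(gaussian_filter1d(contour, s, axis=0, mode="wrap")
                      if s > 0 else contour)
        for w, s in zip(weights, scales))
    print("multiscale descriptor length:", multiscale.shape[0])
    ```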

  11. Retrieval Architecture with Classified Query for Content Based Image Recognition

    Directory of Open Access Journals (Sweden)

    Rik Das

    2016-01-01

    Full Text Available Consumer behavior has been observed to be largely influenced by image data, given the increasing familiarity of smartphones and the World Wide Web. The traditional technique of browsing product varieties on the Internet with text keywords is gradually being replaced by easily accessible image data. Image data has grown steadily in importance for the business domain with the advent of different image-capturing devices and social media. This paper describes a methodology of feature extraction by image binarization for enhancing the identification and retrieval of information using content-based image recognition. The proposed algorithm was tested on two public datasets, namely the Wang dataset and the Oliva and Torralba (OT-Scene) dataset, with 3688 images in total. It outperformed state-of-the-art techniques on the performance measures, with statistically significant differences.

  12. A review of content-based image retrieval systems in medical applications-clinical benefits and future directions.

    Science.gov (United States)

    Müller, Henning; Michoux, Nicolas; Bandon, David; Geissbuhler, Antoine

    2004-02-01

    Content-based visual information retrieval (CBVIR) or content-based image retrieval (CBIR) has been one of the most active research areas in the field of computer vision over the last 10 years. The availability of large and steadily growing amounts of visual and multimedia data, and the development of the Internet, underline the need to create thematic access methods that offer more than simple text-based queries or requests based on matching exact database fields. Many programs and tools have been developed to formulate and execute queries based on visual or audio content and to help browse large multimedia repositories. Still, no general breakthrough has been achieved with respect to large, varied databases with documents of differing sorts and with varying characteristics. Many questions with respect to speed, semantic descriptors or objective image interpretation remain unanswered. In the medical field, images, and especially digital images, are produced in ever-increasing quantities and used for diagnostics and therapy. The Radiology Department of the University Hospital of Geneva alone produced more than 12,000 images a day in 2002. Cardiology is currently the second largest producer of digital images, especially with videos of cardiac catheterization (approximately 1,800 exams per year containing almost 2,000 images each). The total amount of cardiologic image data produced in the Geneva University Hospital was around 1 TB in 2002. Endoscopic videos can equally produce enormous amounts of data. With digital imaging and communications in medicine (DICOM), a standard for image communication has been set, and patient information can be stored with the actual image(s), although a few problems still prevail with respect to standardization. In several articles, content-based access to medical images for supporting clinical decision-making has been proposed that would ease the management of clinical data and scenarios for the integration of

  13. Relationship between color and tannin content in sorghum grain: application of image analysis and artificial neural network

    Directory of Open Access Journals (Sweden)

    M Sedghi

    2012-03-01

    Full Text Available The relationship between sorghum grain color and tannin content has been reported in several references. In this study, 33 phenotypes of sorghum grain differing in seed characteristics were collected and analyzed by the Folin-Ciocalteu method. A computer image analysis method was used to determine the color characteristics of all 33 sorghum phenotypes. Multiple linear regression and artificial neural network (ANN) models were developed to describe the tannin content of sorghum grain from three input color parameters. The goodness of fit of the models was tested using R², MS error, and bias. Computer image analysis proved a suitable method for estimating tannin through sorghum grain color strength. The color quality of the samples was described by three color parameters: L* (lightness), a* (redness, from green to red) and b* (blueness, from blue to yellow). The developed regression and ANN models showed a strong relationship between the color and tannin content of the samples. The goodness of fit (in terms of R²) of the trained ANN model showed higher prediction accuracy for the ANN than for the equation established by the regression method (0.96 vs. 0.88). In terms of MS error, the ANN model showed a narrower residual distribution than the regression model (0.002 vs. 0.006). A platform combining the computer image analysis technique and an ANN-based model may be used to estimate the tannin content of sorghum.
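
    The two models compared in the paper can be sketched as follows with scikit-learn, assuming synthetic L*, a*, b* values in place of the 33 measured phenotypes; the toy target formula and hyperparameters are illustrative only.

    ```python
    # Linear regression vs. a small ANN, both predicting tannin content
    # from the three colour parameters L*, a*, b*.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    lab = rng.uniform([20, -10, 5], [80, 40, 50], size=(33, 3))  # L*, a*, b*
    tannin = (3.0 - 0.03 * lab[:, 0] + 0.02 * lab[:, 1]
              + 0.1 * rng.standard_normal(33))                   # toy target

    lin = LinearRegression().fit(lab, tannin)
    ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                       random_state=0).fit(lab, tannin)

    print("regression R^2:", lin.score(lab, tannin))
    print("ANN R^2       :", ann.score(lab, tannin))
    ```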

  14. The utilization of human color categorization for content-based image retrieval

    NARCIS (Netherlands)

    van den Broek, Egon; Rogowitz, Bernice E.; Kisters, Peter M.F.; Pappas, Thrasyvoulos N.; Vuurpijl, Louis G.

    2004-01-01

    We present the concept of intelligent Content-Based Image Retrieval (iCBIR), which incorporates knowledge concerning human cognition in system development. The present research focuses on the utilization of color categories (or focal colors) for CBIR purposes, in particular considered to be useful

  15. Mutual Perception of USA and China based on Content-Analysis of Media

    Directory of Open Access Journals (Sweden)

    Farida Halmuratovna Autova

    2015-12-01

    Full Text Available The article evaluates the mutual perception of the United States and China in the XXI century, based on a content analysis of American and Chinese media. The research methodology includes both content and event analysis. For the content analysis we used leading weekly news magazines of the US and China, "Newsweek" and "Beijing Review". The events limiting the time frame of the analysis are Barack Obama's re-election to a second term in 2012 and Xi Jinping's accession as Chairman of China in 2013. Accordingly, we analyzed the issues of each magazine for one year before and after the respective events. The thematic areas covered by the articles (politics, economy, culture), as well as the stylistic coloring of article titles, are examined. According to the results of the analysis, China perceives itself confidently in the international arena. The US, in turn, while acknowledging the speed and power of China's rise, emphasizes the negative consequences of such a jump ("growing pains") and the challenges facing China in domestic and foreign policy, in order to create a negative image of China in the minds of American citizens.

  16. Changes in content and synthesis of collagen types and proteoglycans in osteoarthritis of the knee joint and comparison of quantitative analysis with Photoshop-based image analysis.

    Science.gov (United States)

    Lahm, Andreas; Mrosek, Eike; Spank, Heiko; Erggelet, Christoph; Kasch, Richard; Esser, Jan; Merk, Harry

    2010-04-01

    The different cartilage layers vary in their synthesis of proteoglycan and of the distinct collagen types; normal chondrocytes predominantly produce collagen type II together with its associated collagens, e.g. types IX and XI. It has been demonstrated that proteoglycan decreases in degenerative tissue and that a switch from collagen type II to type I occurs. The aim of this study was to evaluate the correlation of real-time (RT)-PCR and Photoshop-based image analysis in detecting such lesions and to find new aspects of their distribution. We performed immunohistochemistry and histology on cartilage tissue samples from 20 patients suffering from osteoarthritis, compared with 20 healthy biopsies. Furthermore, we quantified gene expression of collagen types I and II and aggrecan with the help of real-time (RT)-PCR. Proteoglycan content was measured colorimetrically. Using Adobe Photoshop, the digitized images of histology and immunohistochemistry stains of collagen types I and II were stored on an external data storage device. The area occupied by any specific colour range can be specified and compared in a relative manner directly from the histogram using the "magic wand" tool in the select-similar menu; the image-grow menu then reports the gray levels or luminosity (colour) of all pixels within the selected area, including mean, median, standard deviation, etc. Statistical analysis was performed using the t test. With the help of immunohistochemistry, RT-PCR and quantitative RT-PCR we found that not only collagen type II but also collagen type I is synthesized by the cells of the diseased cartilage tissue, shown by increasing amounts of collagen type I mRNA especially in the later stages of osteoarthritis. A decrease of collagen type II is visible especially in the upper fibrillated area of the advanced osteoarthritic samples, which leads to an overall decrease. Analysis of proteoglycan showed a loss of the overall content and a quite uniform staining in
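
    A rough numpy analogue of the Photoshop colour-range measurement described above: select all pixels within a tolerance of a reference stain colour and report the relative area and luminosity statistics of the selection (the reference colour, tolerance and image are illustrative assumptions, not the authors' settings).

    ```python
    # Colour-range selection and relative-area measurement, mimicking the
    # "magic wand" workflow in plain numpy.
    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.random((200, 200, 3))          # stand-in for a stained section

    reference = np.array([0.8, 0.4, 0.4])      # e.g. a collagen-stain tone
    tolerance = 0.2
    mask = np.all(np.abs(image - reference) < tolerance, axis=2)

    selected = image[mask].mean(axis=1)        # luminosity of selected pixels
    print("relative area:", mask.mean())       # fraction of section selected
    print("mean/median/std:", selected.mean(),
          np.median(selected), selected.std())
    ```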

  17. Content-Based Image Retrieval Benchmarking: Utilizing color categories and color distributions

    NARCIS (Netherlands)

    van den Broek, Egon; Kisters, Peter M.F.; Vuurpijl, Louis G.

    From a human-centered perspective, three ingredients for Content-Based Image Retrieval (CBIR) were developed. First, with their existence confirmed by experimental data, 11 color categories were utilized for CBIR and used as input for a new color space segmentation technique. The complete HSI color

  18. Mosaicing of single plane illumination microscopy images using groupwise registration and fast content-based image fusion

    Science.gov (United States)

    Preibisch, Stephan; Rohlfing, Torsten; Hasak, Michael P.; Tomancak, Pavel

    2008-03-01

    Single Plane Illumination Microscopy (SPIM; Huisken et al., Science 305(5686):1007-1009, 2004) is an emerging microscopic technique that enables live imaging of large biological specimens in their entirety. By imaging the living biological sample from multiple angles, SPIM has the potential to achieve isotropic resolution throughout even relatively large biological specimens. For every angle, however, only a relatively shallow section of the specimen is imaged with high resolution, whereas deeper regions appear increasingly blurred. In order to produce a single, uniformly high-resolution image, we propose here an image mosaicing algorithm that combines state-of-the-art groupwise image registration for alignment with content-based image fusion to prevent degradation of the fused image due to regional blurring of the input images. For the registration stage, we introduce an application-specific groupwise transformation model that incorporates per-image as well as groupwise transformation parameters. We also propose a new fusion algorithm based on Gaussian filters, which is substantially faster than fusion based on local image entropy. We demonstrate the performance of our mosaicing method on data acquired from living embryos of the fruit fly, Drosophila, using four- and eight-angle acquisitions.
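
    The Gaussian-filter-based fusion idea can be hedged into a few lines: weight each view by a smoothed local-contrast estimate so that blurred regions contribute little to the fused image. The sigma values and data below are illustrative, and the authors' registration stage is omitted entirely.

    ```python
    # Content-weighted fusion of multiple views using Gaussian filters.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def content_weight(view, s_detail=2.0, s_smooth=8.0):
        # Local contrast: deviation from a low-pass version, then smoothed.
        contrast = np.abs(view - gaussian_filter(view, s_detail))
        return gaussian_filter(contrast, s_smooth) + 1e-8

    rng = np.random.default_rng(0)
    sharp = rng.random((128, 128))
    views = [sharp, gaussian_filter(sharp, 3.0)]  # one sharp, one blurred view

    weights = np.stack([content_weight(v) for v in views])
    fused = (weights * np.stack(views)).sum(axis=0) / weights.sum(axis=0)
    print("fused image shape:", fused.shape)
    ```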

  19. Morphological images analysis and chromosomic aberrations classification based on fuzzy logic

    International Nuclear Information System (INIS)

    Souza, Leonardo Peres

    2011-01-01

    This work implemented a methodology for automating the image analysis of chromosomes of human cells irradiated at the IEA-R1 nuclear reactor (located at IPEN, Sao Paulo, Brazil) and therefore subject to morphological aberrations. The methodology is intended as a tool to help cytogeneticists identify, characterize and classify chromosomes in metaphase analysis. Its development included the creation of a software application based on artificial intelligence techniques, using fuzzy logic combined with image processing techniques. The developed application was named CHRIMAN and is composed of modules covering the methodological steps required for an automated analysis. The first step is the standardization of the two-dimensional digital image acquisition procedure, achieved by coupling a simple digital camera to the ocular of the conventional metaphase analysis microscope. The second step covers image treatment through the application of digital filters, and the storage and organization of information obtained both from the image content itself and from selected extracted features, for further use in pattern recognition algorithms. The third step consists of characterizing, counting and classifying the stored digital images and extracted feature information. The accuracy in the recognition of chromosome images is 93.9%. The classification is based on the classical standards of Buckton [1973] and supports geneticists in the chromosome analysis procedure, decreasing analysis time and creating the conditions to include this method in a broader system for evaluating human cell damage due to ionizing radiation exposure. (author)

  20. Content Progressive Coding of Limited Bits/pixel Images

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Forchhammer, Søren

    1999-01-01

    A new lossless context-based method for content progressive coding of limited bits/pixel images is proposed. Progressive coding is achieved by separating the image into content layers. Digital maps are compressed up to 3 times better than GIF.

  1. System for accessing a collection of histology images using content-based strategies

    International Nuclear Information System (INIS)

    Gonzalez F; Caicedo J C; Cruz Roa A; Camargo, J; Spinel, C

    2010-01-01

    Histology images are an important resource for research, education and medical practice. The availability of image collections for reference purposes is limited to printed formats such as books and specialized journals. When histology image sets are published in digital formats, they comprise a few tens of images that do not represent the wide diversity of biological structures found in fundamental tissues; making a complete histology image collection available to the general public would therefore have a great impact on research and education in areas such as medicine, biology and the natural sciences. This work presents the acquisition process of a histology image collection with 20,000 samples in digital format, from tissue processing to digital image capture. The main purpose of collecting these images is to make them available as reference material to the academic community. In addition, this paper presents the design and architecture of a system to query and explore the image collection, using content-based image retrieval tools and text-based search on the annotations provided by experts. The system also offers novel image visualization methods to allow easy identification of interesting images among hundreds of candidates. The system has been developed using a service-oriented architecture and allows web-based access at http://www.informed.unal.edu.co

  2. [Vegetation index estimation by chlorophyll content of grassland based on spectral analysis].

    Science.gov (United States)

    Xiao, Han; Chen, Xiu-Wan; Yang, Zhen-Yu; Li, Huai-Yu; Zhu, Han

    2014-11-01

    Comparing the methods of existing remote sensing research on the estimation of chlorophyll content, the present paper confirms that the vegetation index is one of the most practical and popular approaches. In recent years, grassland degradation has become an increasingly serious problem. This paper first analyzes the measured reflectance spectral curves and their first-derivative curves from the grasslands of Songpan, Sichuan and Gongger, Inner Mongolia, conducts a correlation analysis between these two spectral curves and chlorophyll content, and identifies the relationship between the red-edge position (REP) and grassland chlorophyll content: the higher the chlorophyll content, the higher the red-edge inflection point (REIP) value. The paper then constructs the GCI (grassland chlorophyll index) and selects the most suitable bands for retrieval. Finally, the GCI is calculated from a satellite hyperspectral image, and the results are verified against chlorophyll content data collected in two field experiments, with an accompanying accuracy analysis. The results show that, for grassland chlorophyll content, the GCI is more sensitive than other chlorophyll indices and achieves higher estimation accuracy. The GCI is proposed here for the first time to estimate grassland chlorophyll content and has wide application potential for the remote sensing retrieval of grassland chlorophyll content. In addition, the remote-sensing-based estimation method for grassland chlorophyll content presented in this paper provides new research ideas for estimating other vegetation biochemical parameters, evaluating vegetation growth status and monitoring changes in the grassland ecological environment.
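
    A minimal sketch of locating the red-edge position from a reflectance curve, assuming the common convention that the REP/REIP is the wavelength of the first-derivative maximum in the red-edge window; the sigmoid spectrum below stands in for field measurements.

    ```python
    # Red-edge position as the first-derivative maximum in 680-760 nm.
    import numpy as np

    wavelength = np.arange(400, 901, 1.0)                    # nm
    reflectance = 0.05 + 0.45 / (1 + np.exp(-(wavelength - 715) / 10))

    window = (wavelength >= 680) & (wavelength <= 760)
    deriv = np.gradient(reflectance, wavelength)
    rep = wavelength[window][np.argmax(deriv[window])]
    print("red-edge position:", rep, "nm")  # higher chlorophyll -> larger REP
    ```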

  3. 'Strong is the new skinny': A content analysis of #fitspiration images on Instagram.

    Science.gov (United States)

    Tiggemann, Marika; Zaccardo, Mia

    2018-07-01

    'Fitspiration' is an online trend designed to inspire viewers towards a healthier lifestyle by promoting exercise and healthy food. This study provides a content analysis of fitspiration imagery on the social networking site Instagram. A set of 600 images was coded for body type, activity, objectification and textual elements. Results showed that the majority of images of women contained only one body type: thin and toned. In addition, most images contained objectifying elements. Accordingly, while fitspiration images may be inspirational for viewers, they also contain a number of elements likely to have negative effects on the viewer's body image.

  4. Investigating the link between radiologists’ gaze, diagnostic decision, and image content

    Science.gov (United States)

    Tourassi, Georgia; Voisin, Sophie; Paquit, Vincent; Krupinski, Elizabeth

    2013-01-01

    Objective To investigate machine learning for linking image content, human perception, cognition, and error in the diagnostic interpretation of mammograms. Methods Gaze data and diagnostic decisions were collected from three breast imaging radiologists and three radiology residents who reviewed 20 screening mammograms while wearing a head-mounted eye-tracker. Image analysis was performed in mammographic regions that attracted the radiologists' attention and in all abnormal regions. Machine learning algorithms were investigated to develop predictive models that link: (i) image content with gaze, (ii) image content and gaze with cognition, and (iii) image content, gaze, and cognition with diagnostic error. Both group-based and individualized models were explored. Results By pooling the data from all readers, machine learning produced highly accurate predictive models linking image content, gaze, and cognition. Linking these, in turn, with diagnostic error was also supported to some extent. Merging the readers' gaze metrics and cognitive opinions with computer-extracted image features identified 59% of the readers' diagnostic errors while confirming 97.3% of their correct diagnoses. The readers' individual perceptual and cognitive behaviors could be adequately predicted by modeling the behavior of others, although personalized tuning was in many cases beneficial for capturing individual behavior more accurately. Conclusions There is clearly an interaction between radiologists' gaze, diagnostic decision, and image content which can be modeled with machine learning algorithms. PMID:23788627

  5. Fast Depiction Invariant Visual Similarity for Content Based Image Retrieval Based on Data-driven Visual Similarity using Linear Discriminant Analysis

    Science.gov (United States)

    Wihardi, Y.; Setiawan, W.; Nugraha, E.

    2018-01-01

    In this research we build a CBIR system based on a learned distance/similarity function, using linear discriminant analysis (LDA) and the histogram of oriented gradients (HoG) feature. Our method is invariant to the depiction style of an image, supporting image-to-image, sketch-to-image, and painting-to-image similarity. LDA can decrease execution time compared to the state-of-the-art method, but it still needs improvement in terms of accuracy. The inaccuracy in our experiments arises because we did not perform a sliding-window search and because of the low number of negative samples drawn from natural-world images.
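
    A hedged sketch of the described pipeline with scikit-image and scikit-learn: HoG descriptors projected by LDA into a discriminative space in which nearest neighbours are retrieved. The synthetic images and labels stand in for the photo/sketch/painting data, which the abstract does not provide; HoG parameters are illustrative.

    ```python
    # HoG features + LDA projection as a learned similarity space.
    import numpy as np
    from skimage.feature import hog
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    images = rng.random((60, 64, 64))                 # stand-in image set
    labels = rng.integers(0, 3, 60)                   # 3 object categories

    feats = np.stack([hog(im, orientations=9, pixels_per_cell=(16, 16),
                          cells_per_block=(2, 2)) for im in images])

    lda = LinearDiscriminantAnalysis(n_components=2).fit(feats, labels)
    embedded = lda.transform(feats)                   # discriminative space

    query = embedded[0]
    dists = np.linalg.norm(embedded - query, axis=1)
    print("nearest neighbours:", np.argsort(dists)[:5])
    ```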

  6. Visual analytics for semantic queries of TerraSAR-X image content

    Science.gov (United States)

    Espinoza-Molina, Daniela; Alonso, Kevin; Datcu, Mihai

    2015-10-01

    With the continuous image product acquisition of satellite missions, the size of image archives is increasing considerably every day, as are the variety and complexity of their content, surpassing the end-user's capacity to analyse and exploit them. Advances in the image retrieval field have contributed to the development of tools for interactive exploration and extraction of images from huge archives using different parameters such as metadata, keywords, and basic image descriptors. Even though we now have more powerful tools for automated image retrieval and data analysis, we still face the problem of understanding and analyzing the results. A systematic computational analysis of these results is therefore required in order to provide the end-user with a summary of the archive content in comprehensible terms. In this context, visual analytics combines automated analysis with interactive visualization techniques for effective understanding, reasoning and decision-making on the basis of very large and complex datasets. Moreover, several current research efforts focus on associating the content of images with semantic definitions that describe the data in a format easily understood by the end-user. In this paper, we present our approach for computing visual analytics and semantically querying the TerraSAR-X archive. Our approach is composed of four main steps: 1) the generation of a data model that explains the information contained in a TerraSAR-X product, formed by primitive descriptors and metadata entries; 2) the storage of this model in a database system; 3) the semantic definition of the image content based on machine learning algorithms and relevance feedback; and 4) querying the image archive using semantic descriptors as query parameters and computing the statistical analysis of the query results. The experimental results show that with the help of visual analytics and semantic definitions we are able to explain

  7. Relevance Feedback in Content Based Image Retrieval: A Review

    Directory of Open Access Journals (Sweden)

    Manesh B. Kokare

    2011-01-01

    Full Text Available This paper provides an overview of the technical achievements in the research area of relevance feedback (RF) in content-based image retrieval (CBIR). Relevance feedback is a powerful technique for effectively improving the performance of CBIR systems. Reducing the semantic gap between low-level features and high-level concepts remains an open research problem. The paper covers the current state of the art of research on relevance feedback in CBIR; various relevance feedback techniques and open issues are discussed in detail.

  8. Investigating the Link Between Radiologists Gaze, Diagnostic Decision, and Image Content

    Energy Technology Data Exchange (ETDEWEB)

    Tourassi, Georgia [ORNL; Voisin, Sophie [ORNL; Paquit, Vincent C [ORNL; Krupinski, Elizabeth [University of Arizona

    2013-01-01

    Objective: To investigate machine learning for linking image content, human perception, cognition, and error in the diagnostic interpretation of mammograms. Methods: Gaze data and diagnostic decisions were collected from six radiologists who reviewed 20 screening mammograms while wearing a head-mounted eye-tracker. Texture analysis was performed in mammographic regions that attracted the radiologists' attention and in all abnormal regions. Machine learning algorithms were investigated to develop predictive models that link: (i) image content with gaze, (ii) image content and gaze with cognition, and (iii) image content, gaze, and cognition with diagnostic error. Both group-based and individualized models were explored. Results: By pooling the data from all radiologists, machine learning produced highly accurate predictive models linking image content, gaze, cognition, and error. Merging the radiologists' gaze metrics and cognitive opinions with computer-extracted image features identified 59% of the radiologists' diagnostic errors while confirming 96.2% of their correct diagnoses. The radiologists' individual errors could be adequately predicted by modeling the behavior of their peers, although personalized tuning appears beneficial in many cases to capture individual behavior more accurately. Conclusions: Machine learning algorithms combining image features with radiologists' gaze data and diagnostic decisions can be effectively developed to recognize cognitive and perceptual errors associated with the diagnostic interpretation of mammograms.

  9. Quantification of sterol-specific response in human macrophages using automated image-based analysis.

    Science.gov (United States)

    Gater, Deborah L; Widatalla, Namareq; Islam, Kinza; AlRaeesi, Maryam; Teo, Jeremy C M; Pearson, Yanthe E

    2017-12-13

    The transformation of normal macrophage cells into lipid-laden foam cells is an important step in the progression of atherosclerosis. One major contributor to foam cell formation in vivo is the intracellular accumulation of cholesterol. Here, we report the effects of various combinations of low-density lipoprotein, sterols, lipids and other factors on human macrophages, using an automated image analysis program to quantitatively compare single cell properties, such as cell size and lipid content, in different conditions. We observed that the addition of cholesterol caused an increase in average cell lipid content across a range of conditions. All of the sterol-lipid mixtures examined were capable of inducing increases in average cell lipid content, with variations in the distribution of the response, in cytotoxicity and in how the sterol-lipid combination interacted with other activating factors. For example, cholesterol and lipopolysaccharide acted synergistically to increase cell lipid content while also increasing cell survival compared with the addition of lipopolysaccharide alone. Additionally, ergosterol and cholesteryl hemisuccinate caused similar increases in lipid content but also exhibited considerably greater cytotoxicity than cholesterol. The use of automated image analysis enables us to assess not only changes in average cell size and content, but also to rapidly and automatically compare population distributions based on simple fluorescence images. Our observations add to increasing understanding of the complex and multifactorial nature of foam-cell formation and provide a novel approach to assessing the heterogeneity of macrophage response to a variety of factors.
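
    A hedged miniature of automated per-cell quantification in the spirit of this study: threshold a fluorescence image, label connected components as cells, and report each cell's area and integrated stain intensity (the synthetic image and threshold are illustrative, not the authors' pipeline).

    ```python
    # Per-cell area and integrated intensity from a thresholded image.
    import numpy as np
    from skimage.measure import label, regionprops

    rng = np.random.default_rng(0)
    image = rng.random((256, 256)) * 0.2               # background noise
    image[40:80, 50:90] += 0.7                         # fake cell 1
    image[150:200, 160:220] += 0.5                     # fake cell 2

    cells = label(image > 0.4)                         # binary mask -> labels
    for region in regionprops(cells):
        mask = cells == region.label
        print(f"cell {region.label}: area={region.area} px, "
              f"total intensity={image[mask].sum():.1f}")
    ```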

  10. Content-adaptive Image Enhancement, Based on Sky and Grass Segmentation

    NARCIS (Netherlands)

    Zafarifar, B.; With, de P.H.N.

    2009-01-01

    Current TV image enhancement functions employ globally controlled settings. A more flexible system can be achieved if the global control is extended to incorporate semantic-level image content information. In this paper, we present a system that extends existing TV image enhancement functions with

  11. Fundus Image Features Extraction for Exudate Mining in Coordination with Content Based Image Retrieval: A Study

    Science.gov (United States)

    Gururaj, C.; Jayadevappa, D.; Tunga, Satish

    2018-06-01

    The medical field has seen phenomenal improvement over the past years. The invention of computers, with corresponding increases in processing and Internet speed, has changed the face of medical technology. However, there is still scope for improvement in the technologies in use today. One of the many such technologies of medical aid is the detection of afflictions of the eye. Although a large body of research has been produced in this field, most of it fails to address how to take detection forward to a stage where it will benefit society at large. An automated system that can predict the current medical condition of a patient from a fundus image of the eye is yet to see the light of day. Such a system is explored in this paper by summarizing a number of techniques for fundus image feature extraction, predominantly hard exudate mining, coupled with Content Based Image Retrieval to develop an automation tool. This knowledge would bring about worthwhile advances in the domain of exudate extraction from the eye, which is essential in cases where patients may not have access to the best technologies. The paper attempts a comprehensive summary of techniques for Content Based Image Retrieval (CBIR) and fundus image feature extraction, presents a few choice methods of both, and explores ways to combine these two attractive approaches so that the result is beneficial to all.

  13. "Fitspiration" on Social Media: A Content Analysis of Gendered Images.

    Science.gov (United States)

    Carrotte, Elise Rose; Prichard, Ivanka; Lim, Megan Su Cheng

    2017-03-29

    "Fitspiration" (also known as "fitspo") aims to inspire individuals to exercise and be healthy, but emerging research indicates exposure can negatively impact female body image. Fitspiration is frequently accessed on social media; however, it is currently unclear the degree to which messages about body image and exercise differ by gender of the subject. The aim of our study was to conduct a content analysis to identify the characteristics of fitspiration content posted across social media and whether this differs according to subject gender. Content tagged with #fitspo across Instagram, Facebook, Twitter, and Tumblr was extracted over a composite 30-minute period. All posts were analyzed by 2 independent coders according to a codebook. Of the 415/476 (87.2%) relevant posts extracted, most posts were on Instagram (360/415, 86.8%). Most posts (308/415, 74.2%) related thematically to exercise, and 81/415 (19.6%) related thematically to food. In total, 151 (36.4%) posts depicted only female subjects and 114/415 (27.5%) depicted only male subjects. Female subjects were typically thin but toned; male subjects were often muscular or hypermuscular. Within the images, female subjects were significantly more likely to be aged under 25 years (P<.001) than the male subjects, to have their full body visible (P=.001), and to have their buttocks emphasized (P<.001). Male subjects were more likely to have their face visible in the post (P=.005) than the female subjects. Female subjects were more likely to be sexualized than the male subjects (P=.002). Female #fitspo subjects typically adhered to the thin or athletic ideal, and male subjects typically adhered to the muscular ideal. Future research and interventional efforts should consider the potential objectifying messages in fitspiration, as it relates to both female and male body image. ©Elise Rose Carrotte, Ivanka Prichard, Megan Su Cheng Lim. Originally published in the Journal of Medical Internet Research (http

  14. “Fitspiration” on Social Media: A Content Analysis of Gendered Images

    Science.gov (United States)

    Prichard, Ivanka; Lim, Megan Su Cheng

    2017-01-01

    Background “Fitspiration” (also known as “fitspo”) aims to inspire individuals to exercise and be healthy, but emerging research indicates exposure can negatively impact female body image. Fitspiration is frequently accessed on social media; however, it is currently unclear the degree to which messages about body image and exercise differ by gender of the subject. Objective The aim of our study was to conduct a content analysis to identify the characteristics of fitspiration content posted across social media and whether this differs according to subject gender. Methods Content tagged with #fitspo across Instagram, Facebook, Twitter, and Tumblr was extracted over a composite 30-minute period. All posts were analyzed by 2 independent coders according to a codebook. Results Of the 415/476 (87.2%) relevant posts extracted, most posts were on Instagram (360/415, 86.8%). Most posts (308/415, 74.2%) related thematically to exercise, and 81/415 (19.6%) related thematically to food. In total, 151 (36.4%) posts depicted only female subjects and 114/415 (27.5%) depicted only male subjects. Female subjects were typically thin but toned; male subjects were often muscular or hypermuscular. Within the images, female subjects were significantly more likely to be aged under 25 years (P<.001) than the male subjects, to have their full body visible (P=.001), and to have their buttocks emphasized (P<.001). Male subjects were more likely to have their face visible in the post (P=.005) than the female subjects. Female subjects were more likely to be sexualized than the male subjects (P=.002). Conclusions Female #fitspo subjects typically adhered to the thin or athletic ideal, and male subjects typically adhered to the muscular ideal. Future research and interventional efforts should consider the potential objectifying messages in fitspiration, as it relates to both female and male body image. PMID:28356239

  15. A rapid automatic analyzer and its methodology for effective bentonite content based on image recognition technology

    Directory of Open Access Journals (Sweden)

    Wei Long

    2016-09-01

    Full Text Available Fast and accurate determination of the effective bentonite content in used clay-bonded sand is very important for selecting the correct mixing ratio and mixing process to obtain high-performance molding sand. Currently, the effective bentonite content is determined by testing the methylene blue absorbed by the used clay-bonded sand, which is usually a manual operation with several disadvantages, including a complicated process, long testing time and low accuracy. A rapid automatic analyzer of the effective bentonite content in used clay-bonded sand was developed based on image recognition technology. The instrument consists of auto-stirring, auto liquid removal, auto-titration, step-rotation and image acquisition components, and a processor. The image recognition method first decomposes the color image into three single-channel gray images, exploiting the different sensitivities of the red, green and blue channels to light blue and dark blue; it then performs gray-value subtraction and gray-level transformation on the gray images, and finally extracts the light-blue halo of the outer circle and the blue spot of the inner circle and calculates their area ratio. The titration process is judged to have reached its end-point when the area ratio exceeds the set value.
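
    The end-point test can be sketched as follows, assuming illustrative colours and thresholds: split the RGB image into channels, subtract to accentuate blueness, threshold the light-blue halo and the dark-blue spot separately, and compare their areas.

    ```python
    # Channel subtraction and halo/spot area-ratio test on a toy image.
    import numpy as np

    rng = np.random.default_rng(0)
    img = np.full((200, 200, 3), 0.9)                  # white filter paper
    yy, xx = np.mgrid[:200, :200]
    r2 = (yy - 100) ** 2 + (xx - 100) ** 2
    img[r2 < 80 ** 2] = (0.6, 0.75, 0.95)              # light-blue halo
    img[r2 < 40 ** 2] = (0.1, 0.2, 0.6)                # dark-blue spot

    red, blue = img[..., 0], img[..., 2]
    blueness = blue - red                              # gray-value subtraction
    halo = (blueness > 0.2) & (blueness < 0.45)
    spot = blueness >= 0.45

    ratio = halo.sum() / max(spot.sum(), 1)
    print("halo/spot area ratio:", round(ratio, 2))    # end-point if > set value
    ```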

  16. Wavelet optimization for content-based image retrieval in medical databases.

    Science.gov (United States)

    Quellec, G; Lamard, M; Cazuguel, G; Cochener, B; Roux, C

    2010-04-01

    We propose in this article a content-based image retrieval (CBIR) method for diagnosis aid in medical fields. In the proposed system, images are indexed in a generic fashion, without extracting domain-specific features: a signature is built for each image from its wavelet transform. These image signatures characterize the distribution of wavelet coefficients in each subband of the decomposition. A distance measure is then defined to compare two image signatures and thus retrieve the most similar images in a database when a query image is submitted by a physician. To retrieve relevant images from a medical database, the signatures and the distance measure must be related to the medical interpretation of images. As a consequence, we introduce several degrees of freedom in the system so that it can be tuned to any pathology and image modality. In particular, we propose to adapt the wavelet basis, within the lifting scheme framework, and to use a custom decomposition scheme. Weights are also introduced between subbands. All these parameters are tuned by an optimization procedure, using the medical grading of each image in the database to define a performance measure. The system is assessed on two medical image databases: one for diabetic retinopathy follow up and one for screening mammography, as well as a general purpose database. Results are promising: a mean precision of 56.50%, 70.91% and 96.10% is achieved for these three databases, when five images are returned by the system. Copyright 2009 Elsevier B.V. All rights reserved.
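
    The signature idea can be hedged into a short sketch with PyWavelets: summarise each subband of a 2-D wavelet decomposition by the mean absolute coefficient and compare signatures with a weighted L1 distance. The wavelet, decomposition depth and weights are illustrative; the paper additionally optimises the wavelet itself within the lifting scheme.

    ```python
    # Wavelet subband signatures with a weighted distance for retrieval.
    import numpy as np
    import pywt

    def signature(image, wavelet="db2", level=3):
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        feats = [np.abs(coeffs[0]).mean()]              # approximation band
        for detail in coeffs[1:]:                       # (cH, cV, cD) per level
            feats.extend(np.abs(band).mean() for band in detail)
        return np.asarray(feats)

    def distance(s1, s2, weights=None):
        w = np.ones_like(s1) if weights is None else weights
        return np.sum(w * np.abs(s1 - s2))

    rng = np.random.default_rng(0)
    database = [rng.random((128, 128)) for _ in range(20)]
    sigs = [signature(im) for im in database]

    query = signature(database[5])
    print("best match:", int(np.argmin([distance(query, s) for s in sigs])))
    ```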

  17. A thematic content analysis of #cheatmeal images on social media: Characterizing an emerging dietary trend.

    Science.gov (United States)

    Pila, Eva; Mond, Jonathan M; Griffiths, Scott; Mitchison, Deborah; Murray, Stuart B

    2017-06-01

    Despite the pervasive social endorsement of "cheat meals" within pro-muscularity online communities, there is an absence of empirical work examining this dietary phenomenon. The present study aimed to characterize cheat meals and explore the meaning ascribed to engagement in this practice. Thematic content analysis was employed to code the photographic and textual elements of a sample (n = 600) extracted from over 1.6 million images marked with the #cheatmeal tag on the social networking site Instagram. Analysis of the volume and type of food revealed very large quantities (54.5%) of calorie-dense foods (71.3%), rated as qualifying as an objective binge episode. Photographic content of people commonly portrayed highly muscular bodies (60.7%) in the act of intentional body exposure (40.0%). Meanwhile, textual content exemplified the idealization of overconsumption, a strict commitment to fitness, and a reward-based framework around diet and fitness. Collectively, these findings position cheat meals as goal-oriented dietary practices in the pursuit of physique ideals, underscoring the potential clinical repercussions of this socially endorsed dietary phenomenon. © 2017 Wiley Periodicals, Inc.

  18. IMAGE DESCRIPTIONS FOR SKETCH BASED IMAGE RETRIEVAL

    OpenAIRE

    SAAVEDRA RONDO, JOSE MANUEL

    2008-01-01

    Due to the massive use of the Internet together with the proliferation of media devices, content-based image retrieval has become an active discipline in computer science. A common content-based image retrieval approach requires that the user give a regular image (e.g., a photo) as a query. However, having a regular image as the query may be a serious problem: people commonly use an image retrieval system precisely because they do not have the desired image at hand. An easy alternative way t...

  19. Content dependent selection of image enhancement parameters for mobile displays

    Science.gov (United States)

    Lee, Yoon-Gyoo; Kang, Yoo-Jin; Kim, Han-Eol; Kim, Ka-Hee; Kim, Choon-Woo

    2011-01-01

    Mobile devices such as cellular phones and portable multimedia players capable of playing terrestrial digital multimedia broadcasting (T-DMB) content have been introduced into the consumer market. In this paper, a content-dependent image quality enhancement method covering sharpness, colorfulness and noise reduction is presented to improve perceived image quality on mobile displays. Human visual experiments were performed to analyze viewers' preferences. The relationship between the objective measures and the optimal values of the image control parameters is modeled by simple lookup tables based on the results of the human visual experiments. Content-dependent values of the image control parameters are determined from the calculated measures and the predetermined lookup tables. Experimental results indicate that dynamic selection of image control parameters yields better image quality.

  20. Combining semantic technologies with a content-based image retrieval system - Preliminary considerations

    Science.gov (United States)

    Chmiel, P.; Ganzha, M.; Jaworska, T.; Paprzycki, M.

    2017-10-01

    Nowadays, as part of the systematic growth in the volume and variety of information that can be found on the Internet, we also observe a dramatic increase in the size of available image collections. There are many ways to help users browse and select images of interest. One popular approach is Content-Based Image Retrieval (CBIR) systems, which allow users to search for images that match their interests, expressed in the form of images (query by example). However, we believe that image search and retrieval could take advantage of semantic technologies, and we have decided to test this hypothesis. Specifically, on the basis of knowledge captured in the CBIR, we have developed a domain ontology of residential real estate (detached houses, in particular). This allows us to semantically represent each image (and its constituent architectural elements) represented within the CBIR. The proposed ontology was extended to capture not only the elements resulting from image segmentation but also the "spatial relations" between them. As a result, a new approach to querying the image database (semantic querying) has materialized, extending the capabilities of the developed system.

  1. Implementation and evaluation of a medical image management system with content-based retrieval support

    International Nuclear Information System (INIS)

    Carita, Edilson Carlos; Seraphim, Enzo; Honda, Marcelo Ossamu; Azevedo-Marques, Paulo Mazzoncini de

    2008-01-01

    Objective: the present paper describes the implementation and evaluation of a medical image management system with content-based retrieval support (PACS-CBIR), integrating modules for image acquisition, storage and distribution, text retrieval by keyword, and image retrieval by similarity. Materials and methods: Internet-compatible technologies were utilized for the system implementation, with freeware and the C++, PHP and Java languages on a Linux platform. There is a DICOM-compatible image management module and two query modules, one based on text and the other on the similarity of image texture attributes. Results: the results demonstrate appropriate image management and storage; the image retrieval time, always < 15 s, was considered good by the users. The evaluation of retrieval by similarity demonstrated that the selected image-feature extractor allowed the sorting of images according to anatomical areas. Conclusion: based on these results, one can conclude that the PACS-CBIR implementation is feasible. The system proved to be DICOM-compatible and can be integrated with the local information system. The similar-image retrieval functionality can be enhanced by the introduction of further descriptors. (author)

  2. An efficient similarity measure for content based image retrieval using memetic algorithm

    Directory of Open Access Journals (Sweden)

    Mutasem K. Alsmadi

    2017-06-01

    Full Text Available Content-based image retrieval (CBIR) systems work by retrieving images related to a query image (QI) from huge databases. Available CBIR systems extract only limited feature sets, which constrains retrieval efficacy. In this work, extensive, robust and discriminative features were extracted from the image database and stored in a feature repository. This feature set is composed of a color signature together with shape and color-texture features; features are extracted from the given QI in the same fashion. A novel similarity evaluation between the features of the QI and those of the database images is then performed using a meta-heuristic memetic algorithm (a genetic algorithm with great deluge). The proposed CBIR system is assessed by querying a number of images from the test dataset, and its efficiency is evaluated by calculating precision-recall values for the results. The results were superior to other state-of-the-art CBIR systems with regard to precision.

  3. A Sensitive Measurement for Estimating Impressions of Image-Contents

    Science.gov (United States)

    Sato, Mie; Matouge, Shingo; Mori, Toshifumi; Suzuki, Noboru; Kasuga, Masao

    We have investigated Kansei content that conveys the maker's intention to the viewer's kansei (sensibility). The semantic differential (SD) method is a very good way to evaluate the subjective impression of image content. However, because the SD method is performed after subjects view the image content, it is difficult to examine impressions of detailed scenes in real time. To measure viewers' impressions of image content in real time, we have developed a Taikan sensor. With the Taikan sensor, we investigate the relations among the image content, grip strength and body temperature. We also explore the interface of the Taikan sensor to make it easy to use. In our experiment, a horror movie that strongly affects the subjects' emotions was used. Our results show that grip strength may increase when subjects view a tense scene and that the Taikan sensor is easy to use without the circular base that is originally installed.

  4. 99Tcm-MIBI imaging in diagnosing benign/malignant pulmonary disease and analysis of lung cancer DNA content

    International Nuclear Information System (INIS)

    Feng Yanlin; Tan Jiaju; Yang Jie; Zhu Zheng; Yu Fengwen; He Xiaohong; Huang Kemin; Yuan Baihong; Su Shaodi

    2002-01-01

    Objective: To evaluate the value of 99Tcm-methoxyisobutylisonitrile (MIBI) lung imaging in diagnosing benign/malignant pulmonary disease, and the relation of the 99Tcm-MIBI uptake ratio (UR) to lung cancer DNA content. Methods: Early and delayed imaging were performed on 27 cases of benign lung disease and 46 cases of malignant lung disease. Visual analysis of the images and T/N uptake ratio measurement were performed for every case. Cancer cell DNA content and DNA index (DI) were measured in 24 cases of malignant pulmonary disease. Results: The delayed-phase UR was 1.13 ± 0.19 in the benign disease group and 1.45 ± 0.21 in the malignant disease group (t = 6.51). Conclusion: 99Tcm-MIBI is not an ideal imaging agent for differentiating benign from malignant pulmonary disease. Lung cancer DNA content may be reflected by the delayed-phase UR.

  5. Chlorophyll content of Plešné Lake phytoplankton cells studied with image analysis

    Czech Academy of Sciences Publication Activity Database

    Nedoma, Jiří; Nedbalová, Linda

    2006-01-01

    Vol. 61, Suppl. 20 (2006), S491-S498 ISSN 0006-3088 Grant - others: MSMT(CZ) 0021620828 Institutional research plan: CEZ:AV0Z60170517; CEZ:AV0Z60050516 Keywords: cellular chlorophyll content * image analysis * vertical profile Subject RIV: EH - Ecology, Behaviour Impact factor: 0.213, year: 2006

  6. Qualitative Content Analysis

    Directory of Open Access Journals (Sweden)

    Satu Elo

    2014-02-01

    Full Text Available Qualitative content analysis is commonly used for analyzing qualitative data. However, few articles have examined the trustworthiness of its use in nursing science studies. The trustworthiness of qualitative content analysis is often presented using terms such as credibility, dependability, confirmability, transferability, and authenticity. This article focuses on trustworthiness based on a review of previous studies, our own experiences, and methodological textbooks. Trustworthiness is described for the main qualitative content analysis phases, from data collection to reporting of the results. We concluded that it is important to scrutinize the trustworthiness of every phase of the analysis process, including the preparation, organization, and reporting of results. Together, these phases should give a reader a clear indication of the overall trustworthiness of the study. Based on our findings, we compiled a checklist for researchers attempting to improve the trustworthiness of a content analysis study. The discussion in this article helps to clarify how content analysis should be reported in a valid and understandable manner, which would be of particular benefit to reviewers of scientific articles. Furthermore, we note that it is often difficult to evaluate the trustworthiness of qualitative content analysis studies because of defective descriptions of data collection methods and/or analysis.

  7. Semi-supervised learning based probabilistic latent semantic analysis for automatic image annotation

    Institute of Scientific and Technical Information of China (English)

    Tian Dongping

    2017-01-01

    In recent years, the multimedia annotation problem has attracted significant research attention in the multimedia and computer vision areas, especially automatic image annotation, whose purpose is to provide an efficient and effective searching environment for users to query their images more easily. In this paper, a semi-supervised learning based probabilistic latent semantic analysis (PLSA) model for automatic image annotation is presented. Since it is often hard to obtain or create labeled images in large quantities while unlabeled ones are easier to collect, a transductive support vector machine (TSVM) is exploited to enhance the quality of the training image data. Moreover, image features with different magnitudes result in different performance for automatic image annotation. To this end, a Gaussian normalization method is utilized to normalize the different features extracted from effective image regions segmented by the normalized cuts algorithm, so as to preserve the intrinsic content of images as completely as possible. Finally, a PLSA model with asymmetric modalities is constructed based on the expectation maximization (EM) algorithm to predict a candidate set of annotations with confidence scores. Extensive experiments on the general-purpose Corel5k dataset demonstrate that the proposed model can significantly improve the performance of traditional PLSA for the task of automatic image annotation.
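
    The Gaussian normalization step mentioned above is easy to make concrete. The sketch below z-scores each feature column so that, for example, 0-255 color features and small-magnitude texture energies contribute comparably; the ±3σ clipping is a common variant, not necessarily the paper's exact choice.

```python
# A small sketch of Gaussian (z-score) normalization: per-region features of
# very different magnitudes are rescaled so that no single feature family
# dominates the PLSA input. The ±3σ clipping is a common variant.
import numpy as np

def gaussian_normalize(F):
    """F: (n_regions, n_features) raw feature matrix -> zero mean, unit std."""
    mu = F.mean(axis=0)
    sigma = F.std(axis=0) + 1e-9        # guard against constant columns
    Z = (F - mu) / sigma
    return np.clip(Z, -3.0, 3.0)        # keep values within ±3 standard deviations

features = np.hstack([np.random.rand(50, 6) * 255,   # e.g. color moments
                      np.random.rand(50, 12)])       # e.g. texture energies
print(gaussian_normalize(features).std(axis=0).round(2))
```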

  8. Content-based image retrieval using spatial layout information in brain tumor T1-weighted contrast-enhanced MR images.

    Science.gov (United States)

    Huang, Meiyan; Yang, Wei; Wu, Yao; Jiang, Jun; Gao, Yang; Chen, Yang; Feng, Qianjin; Chen, Wufan; Lu, Zhentai

    2014-01-01

    This study aims to develop a content-based image retrieval (CBIR) system for the retrieval of T1-weighted contrast-enhanced MR (CE-MR) images of brain tumors. When a tumor region is fed to the CBIR system as a query, the system attempts to retrieve tumors of the same pathological category. The bag-of-visual-words (BoVW) model with partition learning is incorporated into the system to extract informative features for representing the image contents. Furthermore, a distance metric learning algorithm called Rank Error-based Metric Learning (REML) is proposed to reduce the semantic gap between low-level visual features and high-level semantic concepts. The effectiveness of the proposed method is evaluated on a brain T1-weighted CE-MR dataset with three types of brain tumors (i.e., meningioma, glioma, and pituitary tumor). Using the BoVW model with partition learning, the mean average precision (mAP) of retrieval increases by more than 4.6% with the learned distance metrics compared with the spatial pyramid BoVW method. The distance metric learned by REML significantly outperforms three other existing distance metric learning methods in terms of mAP. The mAP of the CBIR system is as high as 91.8% using the proposed method, and the precision can reach 93.1% when the top 10 images are returned by the system. These preliminary results demonstrate that the proposed method is effective and feasible for the retrieval of brain tumors in T1-weighted CE-MR images.
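
    For readers unfamiliar with the BoVW step, the following sketch shows the basic codebook-and-histogram construction with scikit-learn. It assumes local descriptors are already extracted from each tumor region; the partition learning and REML metric of the paper are not reproduced here.

```python
# A minimal bag-of-visual-words sketch with scikit-learn, assuming local
# descriptors (e.g. patch or SIFT-like vectors) are already extracted from
# each tumor region. Partition learning and the REML distance metric from
# the paper are not reproduced here.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
train_desc = rng.random((5000, 64))             # pooled local descriptors

codebook = KMeans(n_clusters=200, n_init=4, random_state=0).fit(train_desc)

def bovw_histogram(descriptors, codebook):
    words = codebook.predict(descriptors)        # nearest visual word per descriptor
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)           # L1-normalized word histogram

query_hist = bovw_histogram(rng.random((300, 64)), codebook)
print(query_hist.shape)                          # one 200-bin vector per image
```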

  9. Content-based image retrieval using spatial layout information in brain tumor T1-weighted contrast-enhanced MR images.

    Directory of Open Access Journals (Sweden)

    Meiyan Huang

    Full Text Available This study aims to develop a content-based image retrieval (CBIR) system for the retrieval of T1-weighted contrast-enhanced MR (CE-MR) images of brain tumors. When a tumor region is fed to the CBIR system as a query, the system attempts to retrieve tumors of the same pathological category. The bag-of-visual-words (BoVW) model with partition learning is incorporated into the system to extract informative features for representing the image contents. Furthermore, a distance metric learning algorithm called Rank Error-based Metric Learning (REML) is proposed to reduce the semantic gap between low-level visual features and high-level semantic concepts. The effectiveness of the proposed method is evaluated on a brain T1-weighted CE-MR dataset with three types of brain tumors (i.e., meningioma, glioma, and pituitary tumor). Using the BoVW model with partition learning, the mean average precision (mAP) of retrieval increases by more than 4.6% with the learned distance metrics compared with the spatial pyramid BoVW method. The distance metric learned by REML significantly outperforms three other existing distance metric learning methods in terms of mAP. The mAP of the CBIR system is as high as 91.8% using the proposed method, and the precision can reach 93.1% when the top 10 images are returned by the system. These preliminary results demonstrate that the proposed method is effective and feasible for the retrieval of brain tumors in T1-weighted CE-MR images.

  10. High content image based analysis identifies cell cycle inhibitors as regulators of Ebola virus infection.

    Science.gov (United States)

    Kota, Krishna P; Benko, Jacqueline G; Mudhasani, Rajini; Retterer, Cary; Tran, Julie P; Bavari, Sina; Panchal, Rekha G

    2012-09-25

    Viruses modulate a number of host biological responses, including the cell cycle, to favor their replication. In this study, we developed a high-content imaging (HCI) assay to measure DNA content and identify different phases of the cell cycle. We then investigated the potential effects of cell cycle arrest on Ebola virus (EBOV) infection. Cells arrested in G1 phase by serum starvation, in G1/S phase using aphidicolin, or in G2/M phase using nocodazole showed markedly reduced EBOV infection compared to the untreated control. Release of cells from serum starvation or aphidicolin block resulted in a time-dependent increase in the percentage of EBOV-infected cells. The effect of EBOV infection on cell cycle progression was found to be cell-type dependent. Infection of asynchronous MCF-10A cells with EBOV resulted in a reduced number of cells in G2/M phase with a concomitant increase of cells in G1 phase. However, these effects were not observed in HeLa or A549 cells. Together, our studies suggest that EBOV requires actively proliferating cells for efficient replication. Furthermore, multiplexing HCI-based assays to detect viral infection, cell cycle status and other phenotypic changes in a single cell population will provide useful information during screening campaigns using siRNA and small-molecule therapeutics.
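
    The DNA-content gating that underlies such an HCI assay can be sketched as follows. The normalized 2N and 4N anchor intensities and the tolerance are assumptions; in practice they would be calibrated from control wells.

```python
# A simplified version of the gating idea behind the HCI assay: integrated
# nuclear DNA intensity separates G1 (2N), S (in between), and G2/M (4N)
# cells. The 2N/4N anchors and tolerance below are assumed values.
import numpy as np

def classify_phases(dna_intensity, g1_peak=1.0, g2m_peak=2.0, tol=0.15):
    """Label each cell G1 / S / G2M from its normalized DNA content."""
    phases = np.full(dna_intensity.shape, "S", dtype=object)
    phases[np.abs(dna_intensity - g1_peak) <= g1_peak * tol] = "G1"
    phases[np.abs(dna_intensity - g2m_peak) <= g2m_peak * tol] = "G2M"
    return phases

cells = np.concatenate([np.random.normal(1.0, 0.08, 600),   # G1 population
                        np.random.normal(1.5, 0.20, 150),   # S population
                        np.random.normal(2.0, 0.10, 250)])  # G2/M population
labels = classify_phases(cells)
print({p: int((labels == p).sum()) for p in ("G1", "S", "G2M")})
```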

  11. High Content Image Based Analysis Identifies Cell Cycle Inhibitors as Regulators of Ebola Virus Infection

    Directory of Open Access Journals (Sweden)

    Sina Bavari

    2012-09-01

    Full Text Available Viruses modulate a number of host biological responses, including the cell cycle, to favor their replication. In this study, we developed a high-content imaging (HCI) assay to measure DNA content and identify different phases of the cell cycle. We then investigated the potential effects of cell cycle arrest on Ebola virus (EBOV) infection. Cells arrested in G1 phase by serum starvation, in G1/S phase using aphidicolin, or in G2/M phase using nocodazole showed markedly reduced EBOV infection compared to the untreated control. Release of cells from serum starvation or aphidicolin block resulted in a time-dependent increase in the percentage of EBOV-infected cells. The effect of EBOV infection on cell cycle progression was found to be cell-type dependent. Infection of asynchronous MCF-10A cells with EBOV resulted in a reduced number of cells in G2/M phase with a concomitant increase of cells in G1 phase. However, these effects were not observed in HeLa or A549 cells. Together, our studies suggest that EBOV requires actively proliferating cells for efficient replication. Furthermore, multiplexing HCI-based assays to detect viral infection, cell cycle status and other phenotypic changes in a single cell population will provide useful information during screening campaigns using siRNA and small-molecule therapeutics.

  12. Dynamic Chest Image Analysis: Model-Based Perfusion Analysis in Dynamic Pulmonary Imaging

    Directory of Open Access Journals (Sweden)

    Kiuru Aaro

    2003-01-01

    Full Text Available The "Dynamic Chest Image Analysis" project aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the dynamic pulmonary imaging technique. We have proposed and evaluated a multiresolutional method with an explicit ventilation model for ventilation analysis. This paper presents a new model-based method for pulmonary perfusion analysis. According to perfusion properties, we first devise a novel mathematical function to form a perfusion model. A simple yet accurate approach is further introduced to extract cardiac systolic and diastolic phases from the heart, so that this cardiac information may be utilized to accelerate the perfusion analysis and improve its sensitivity in detecting pulmonary perfusion abnormalities. This makes perfusion analysis not only fast but also robust in computation; consequently, perfusion analysis becomes computationally feasible without using contrast media. Our clinical case studies with 52 patients show that this technique is effective for pulmonary embolism even without using contrast media, demonstrating consistent correlations with computed tomography (CT and nuclear medicine (NM studies. This fluoroscopical examination takes only about 2 seconds for perfusion study with only low radiation dose to patient, involving no preparation, no radioactive isotopes, and no contrast media.

  13. Content Analysis of a Computer-Based Faculty Activity Repository

    Science.gov (United States)

    Baker-Eveleth, Lori; Stone, Robert W.

    2013-01-01

    The research presents an analysis of faculty opinions regarding the introduction of a new computer-based faculty activity repository (FAR) in a university setting. The qualitative study employs content analysis to better understand the phenomenon underlying these faculty opinions and to augment the findings from a quantitative study. A web-based…

  14. Content-based TV sports video retrieval using multimodal analysis

    Science.gov (United States)

    Yu, Yiqing; Liu, Huayong; Wang, Hongbin; Zhou, Dongru

    2003-09-01

    In this paper, we propose content-based video retrieval, that is, retrieval based on semantic content. Because video data is composed of multimodal information streams, such as visual, auditory and textual streams, we describe a strategy that uses multimodal analysis for automatically parsing sports video. The paper first defines the basic structure of the sports video database system, and then introduces a new approach that integrates visual stream analysis, speech recognition, speech signal processing and text extraction to realize video retrieval. The experimental results for TV sports video of football games indicate that multimodal analysis is effective for video retrieval by quickly browsing tree-like video clips or inputting keywords within a predefined domain.

  15. Multimedia human brain database system for surgical candidacy determination in temporal lobe epilepsy with content-based image retrieval

    Science.gov (United States)

    Siadat, Mohammad-Reza; Soltanian-Zadeh, Hamid; Fotouhi, Farshad A.; Elisevich, Kost

    2003-01-01

    This paper presents the development of a human brain multimedia database for surgical candidacy determination in temporal lobe epilepsy. The focus of the paper is on content-based image management, navigation and retrieval. Several medical image-processing methods, including our newly developed segmentation method, are utilized for information extraction/correlation and indexing. The input data includes T1- and T2-weighted MRI, FLAIR MRI, and ictal and interictal SPECT modalities, with associated clinical data and EEG data analysis. The database can answer queries regarding issues such as the correlation between an attribute X of an entity Y and the outcome of temporal lobe epilepsy surgery. The entity Y can be a brain anatomical structure such as the hippocampus. The attribute X can be either a functional feature of the anatomical structure Y, calculated from SPECT modalities, such as signal average, or a volumetric/morphological feature of the entity Y, such as volume or average curvature. The outcome of the surgery can be any surgical assessment such as memory quotient. A determination is made regarding surgical candidacy by analysis of both textual and image data. The current database system suggests a surgical determination for cases with a relatively small hippocampus and a high average signal intensity on FLAIR images within the hippocampus. This indication is largely consistent with the surgeons' expectations/observations. Moreover, as the database becomes more populated with patient profiles and individual surgical outcomes, data mining methods may reveal partially invisible correlations between the contents of the different data modalities and the outcome of the surgery.

  16. Content and user-based music visual analysis

    Science.gov (United States)

    Guo, Xiaochun; Tang, Lei

    2015-12-01

    In recent years, people's ability to collect music has been greatly enhanced. Many people who prefer listening to music offline store thousands of songs on their local storage or portable devices. However, their ability to deal with music information has not improved accordingly, which results in two problems: how to find favourite songs in a large music collection in a way that satisfies different individuals, and how to compose a playlist quickly. To solve these problems, the authors propose a content- and user-based music visual analysis approach. We first developed a new recommendation algorithm, based on the content of the music and the user's behaviour, that satisfies individual preferences. Then, we make use of visualization and interaction tools to illustrate the relationships between songs and help people compose a suitable playlist. At the end of this paper, a survey is presented to show that our system is usable and effective.

  17. An automatic fuzzy-based multi-temporal brain digital subtraction angiography image fusion algorithm using curvelet transform and content selection strategy.

    Science.gov (United States)

    Momeni, Saba; Pourghassem, Hossein

    2014-08-01

    Recently, image fusion has taken on a prominent role in medical image processing and is useful in diagnosing and treating many diseases. Digital subtraction angiography is one of the most widely used imaging modalities for diagnosing cerebrovascular disease and for brain radiosurgery. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of the vessel dispersion generated by injected contrast material. Our proposed fusion scheme contains different fusion methods for high and low frequency contents, based on the coefficient characteristics of the wrapping second-generation curvelet transform and a novel content selection strategy. Our proposed content selection strategy is defined based on the sample correlation of the curvelet transform coefficients. In our proposed fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules to the high frequency coefficients. For low frequency coefficients, the maximum selection rule based on a local energy criterion is applied for better visual perception. Our proposed fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of our proposed fusion algorithm in comparison with common and basic fusion algorithms.
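
    The two fusion rules named above can be illustrated on a separable wavelet transform, used here (via PyWavelets) as a stand-in for the paper's wrapping second-generation curvelet transform; the fuzzy weighting of the high-frequency rule is omitted.

```python
# A toy illustration of the two fusion rules described in the abstract, on
# wavelet rather than curvelet coefficients.
import numpy as np
import pywt

def fuse(img_a, img_b, wavelet="db2"):
    cA1, hi1 = pywt.dwt2(img_a, wavelet)
    cA2, hi2 = pywt.dwt2(img_b, wavelet)
    # Low frequency: maximum selection by energy (a per-coefficient squared
    # magnitude stands in for the paper's windowed local energy criterion).
    fused_low = np.where(cA1 ** 2 >= cA2 ** 2, cA1, cA2)
    # High frequency: maximum-absolute selection; the paper additionally
    # blends this with weighted averaging under fuzzy control, omitted here.
    fused_hi = tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                     for x, y in zip(hi1, hi2))
    return pywt.idwt2((fused_low, fused_hi), wavelet)

a = np.random.rand(128, 128)
b = np.random.rand(128, 128)
print(fuse(a, b).shape)          # fused image, same size as the inputs
```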

  18. TBIdoc: 3D content-based CT image retrieval system for traumatic brain injury

    Science.gov (United States)

    Li, Shimiao; Gong, Tianxia; Wang, Jie; Liu, Ruizhe; Tan, Chew Lim; Leong, Tze Yun; Pang, Boon Chuan; Lim, C. C. Tchoyoson; Lee, Cheng Kiang; Tian, Qi; Zhang, Zhuo

    2010-03-01

    Traumatic brain injury (TBI) is a major cause of death and disability. Computed tomography (CT) scans are widely used in the diagnosis of TBI. Nowadays, a large amount of TBI CT data has accumulated in hospital radiology departments. Such data and the associated patient information contain valuable information for clinical diagnosis and outcome prediction. However, current hospital database systems do not provide an efficient and intuitive tool for doctors to find cases relevant to the one under study. In this paper, we present the TBIdoc system: a content-based image retrieval (CBIR) system that works on TBI CT images. In this web-based system, users query by uploading CT image slices from one study; the retrieval result is a list of TBI cases ranked according to their 3D visual similarity to the query case. Specifically, TBI CT images often present diffuse or focal lesions. In the TBIdoc system, these pathological image features are represented as bin-based binary feature vectors. We use the Jaccard-Needham measure as the similarity measurement. Based on these, we propose a 3D similarity measure for computing the similarity score between two series of CT slices. nDCG is used to evaluate the system performance, and shows that the system produces satisfactory retrieval results. The system is expected to improve current hospital data management in TBI and to give better support to the clinical decision-making process. It may also contribute to computer-aided education in TBI.
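
    The similarity core of such a system is compact enough to sketch directly: binary bin vectors compared with the Jaccard measure, aggregated over slices. The mean-of-best-matches aggregation below is a simplification of the paper's 3D similarity measure.

```python
# A compact sketch of the similarity core described above: lesions encoded
# as bin-based binary feature vectors and compared with the Jaccard measure.
# Aggregating slice scores by a mean of best matches is a simplification.
import numpy as np

def jaccard_sim(u, v):
    """Jaccard similarity of two binary vectors (1 - Jaccard-Needham distance)."""
    inter = np.logical_and(u, v).sum()
    union = np.logical_or(u, v).sum()
    return inter / union if union else 1.0

def series_similarity(query_slices, case_slices):
    """Mean best-match similarity between two series of binary slice vectors."""
    return float(np.mean([max(jaccard_sim(q, c) for c in case_slices)
                          for q in query_slices]))

q = np.random.rand(20, 256) > 0.7     # query study: 20 slices, 256 bins
c = np.random.rand(24, 256) > 0.7     # candidate study from the archive
print(round(series_similarity(q, c), 3))
```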

  19. Morphometric Characterization of Rat and Human Alveolar Macrophage Cell Models and their Response to Amiodarone using High Content Image Analysis.

    Science.gov (United States)

    Hoffman, Ewelina; Patel, Aateka; Ball, Doug; Klapwijk, Jan; Millar, Val; Kumar, Abhinav; Martin, Abigail; Mahendran, Rhamiya; Dailey, Lea Ann; Forbes, Ben; Hutter, Victoria

    2017-12-01

    Progress to the clinic may be delayed or prevented when vacuolated or "foamy" alveolar macrophages are observed during non-clinical inhalation toxicology assessment. The first step in developing methods to study this response in vitro is to characterize macrophage cell lines and their response to drug exposures. Human (U937) and rat (NR8383) cell lines and primary rat alveolar macrophages obtained by bronchoalveolar lavage were characterized using high content fluorescence imaging analysis to quantify cell viability, morphometry, and phospholipid and neutral lipid accumulation. Cell health, morphology and lipid content were comparable across the cell models. Responses to amiodarone, a known inducer of phospholipidosis, required analysis of shifts in cell population profiles (the proportion of cells with elevated vacuolation or lipid content) rather than average population data, which was insensitive to the changes observed. A high content image analysis assay was developed and used to provide detailed morphological characterization of rat and human alveolar-like macrophages and their response to a phospholipidosis-inducing agent. This provides a basis for the development of assays to predict or understand macrophage vacuolation following inhaled drug exposure.

  20. Automatic classification for mammogram backgrounds based on bi-rads complexity definition and on a multi content analysis framework

    Science.gov (United States)

    Wu, Jie; Besnehard, Quentin; Marchessoux, Cédric

    2011-03-01

    Clinical studies for the validation of new medical imaging devices require hundreds of images. An important step in creating and tuning the study protocol is the classification of images into "difficult" and "easy" cases. This consists of classifying each image based on features such as the complexity of the background and the visibility of the disease (lesions). An automatic background classification tool for mammograms would therefore help in such clinical studies. The classification tool presented here is based on a multi-content analysis (MCA) framework, which was first developed to recognize the image content of computer screenshots. With the implementation of new texture features and a defined breast density scale, the MCA framework is able to automatically classify digital mammograms with satisfactory accuracy. The BI-RADS (Breast Imaging Reporting and Data System) density scale, which standardizes mammography reporting terminology and assessment and recommendation categories, is used for grouping the mammograms. Selected features are input into a decision-tree classification scheme in the MCA framework, the so-called "weak classifier" (any classifier with a global error rate below 50%). With the AdaBoost iteration algorithm, these "weak classifiers" are combined into a "strong classifier" (a classifier with a low global error rate) for classifying one category. The classification results for one "strong classifier" show good accuracy with high true-positive rates. For the four categories, the results are: TP = 90.38%, TN = 67.88%, FP = 32.12% and FN = 9.62%.
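
    The weak-to-strong combination described above follows the standard AdaBoost recipe, sketched here with scikit-learn on synthetic one-vs-rest data; the texture features and MCA decision trees themselves are not reproduced, and the `estimator` keyword assumes scikit-learn >= 1.2.

```python
# A brief sketch of the boosting step: shallow decision-tree "weak
# classifiers" combined by AdaBoost into one "strong classifier" per
# BI-RADS category (one-vs-rest). Features and labels here are synthetic.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((400, 16))                          # texture features per mammogram
y = (X[:, 0] + 0.3 * X[:, 1] > 0.8).astype(int)    # "category k" vs. rest

strong = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=2),  # weak: error rate < 50%
    n_estimators=50)                                # `estimator` needs sklearn >= 1.2
strong.fit(X, y)
print("training accuracy:", strong.score(X, y))
```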

  1. Analysis, Retrieval and Delivery of Multimedia Content

    CERN Document Server

    Cavallaro, Andrea; Leonardi, Riccardo; Migliorati, Pierangelo

    2013-01-01

    Covering some of the most cutting-edge research on the delivery and retrieval of interactive multimedia content, this volume of specially chosen contributions provides the most updated perspective on one of the hottest contemporary topics. The material represents extended versions of papers presented at the 11th International Workshop on Image Analysis for Multimedia Interactive Services, a vital international forum on this fast-moving field. Logically organized in discrete sections that approach the subject from its various angles, the content deals in turn with content analysis, motion and activity analysis, high-level descriptors and video retrieval, 3-D and multi-view, and multimedia delivery. The chapters cover the finest detail of emerging techniques such as the use of high-level audio information in improving scene segmentation and the use of subjective logic for forensic visual surveillance. On content delivery, the book examines both images and video, focusing on key subjects including an efficient p...

  2. Utilizing a Photo-Analysis Software for Content Identifying Method (CIM

    Directory of Open Access Journals (Sweden)

    Nejad Nasim Sahraei

    2015-01-01

    Full Text Available Content Identifying Methodology (CIM) was developed to measure public preferences in order to reveal the common characteristics of landscapes and aspects of underlying perceptions, including individuals' reactions to content and spatial configuration; it can therefore assist with the identification of factors that influence preference. For the analysis of landscape photographs through CIM, several studies have utilized image analysis software, such as Adobe Photoshop, to identify the physical contents of the scenes. This study evaluates the public's preferences for the aesthetic qualities of pedestrian bridges in urban areas through a photo-questionnaire survey, in which respondents evaluated images of pedestrian bridges in urban areas. Two groups of images were evaluated: the most and least preferred scenes, i.e., those with the highest and lowest mean scores respectively. These two groups were analyzed by CIM and also evaluated based on the respondents' descriptions of each group, to reveal the pattern of preferences and the factors that may affect them. Digimizer software was employed to triangulate the two approaches and to determine the role of these factors in people's preferences. This study introduces useful software for image analysis that can measure the physical contents of scenes as well as their spatial organization. According to the findings, Digimizer could be a useful tool in CIM approaches to preference studies that utilize photographs in place of the actual landscape in order to determine the most important factors in public preferences for pedestrian bridges in urban areas.

  3. Qualitative Content Analysis

    OpenAIRE

    Satu Elo; Maria Kääriäinen; Outi Kanste; Tarja Pölkki; Kati Utriainen; Helvi Kyngäs

    2014-01-01

    Qualitative content analysis is commonly used for analyzing qualitative data. However, few articles have examined the trustworthiness of its use in nursing science studies. The trustworthiness of qualitative content analysis is often presented by using terms such as credibility, dependability, confirmability, transferability, and authenticity. This article focuses on trustworthiness based on a review of previous studies...

  4. Orthogonally Based Digital Content Management Applicable to Projects-bases

    Directory of Open Access Journals (Sweden)

    Daniel MILODIN

    2009-01-01

    Full Text Available The concept of digital content is defined, and the requirements for efficient management of digital content are established. The quality characteristics of digital content are listed, and orthogonality indicators of digital content are built up; they are meant to measure image, sound and text orthogonality. The projects-base concept is introduced. A model for structuring content so as to maximize orthogonality via a convergent iterative process is presented, and the model is instantiated for the digital content of a projects-base. The application used to test the model is introduced. The paper ends with conclusions.

  5. Information management for high content live cell imaging

    Directory of Open Access Journals (Sweden)

    White Michael RH

    2009-07-01

    Full Text Available Abstract Background: High content live cell imaging experiments are able to track the cellular localisation of labelled proteins in multiple live cells over a time course. Experiments using high content live cell imaging generate multiple large datasets that are often stored in an ad hoc manner. This hinders the identification of previously gathered data that may be relevant to current analyses. Whilst solutions exist for managing image data, they are primarily concerned with the storage and retrieval of the images themselves and not the data derived from the images. There is therefore a requirement for an information management solution that facilitates the indexing of experimental metadata and results of high content live cell imaging experiments. Results: We have designed and implemented a data model and information management solution for the data gathered through high content live cell imaging experiments. Many of the experiments to be stored measure the translocation of fluorescently labelled proteins from cytoplasm to nucleus in individual cells. The functionality of this database has been enhanced by the addition of an algorithm that automatically annotates results of these experiments with the timings of translocations and the periods of any oscillatory translocations as they are uploaded to the repository. Testing has shown the algorithm to perform well with a variety of previously unseen data. Conclusion: Our repository is a fully functional example of how high throughput imaging data may be effectively indexed and managed to address the requirements of end users. By implementing the automated analysis of experimental results, we have provided a clear impetus for individuals to ensure that their data forms part of that which is stored in the repository. Although focused on imaging, the solution provided is sufficiently generic to be applied to other functional proteomics and genomics experiments. The software is available from: http://code.google.com/p/livecellim/

  6. Developing students’ ideas about lens imaging: teaching experiments with an image-based approach

    Science.gov (United States)

    Grusche, Sascha

    2017-07-01

    Lens imaging is a classic topic in physics education. To guide students from their holistic viewpoint to the scientists’ analytic viewpoint, an image-based approach to lens imaging has recently been proposed. To study the effect of the image-based approach on undergraduate students’ ideas, teaching experiments are performed and evaluated using qualitative content analysis. Some of the students’ ideas have not been reported before, namely those related to blurry lens images, and those developed by the proposed teaching approach. To describe learning pathways systematically, a conception-versus-time coordinate system is introduced, specifying how teaching actions help students advance toward a scientific understanding.

  7. Hyperspectral imaging detection of decayed honey peaches based on their chlorophyll content.

    Science.gov (United States)

    Sun, Ye; Wang, Yihang; Xiao, Hui; Gu, Xinzhe; Pan, Leiqing; Tu, Kang

    2017-11-15

    Honey peach is a very common but highly perishable market fruit. When pathogens infect the fruit, chlorophyll, one of the important components related to fruit quality, decreases significantly. Here, the feasibility of using hyperspectral imaging to determine chlorophyll content and thus distinguish diseased peaches was investigated. Three optimal wavelengths (617 nm, 675 nm, and 818 nm) were selected according to chlorophyll content via the successive projections algorithm. Partial least squares regression models were established to determine chlorophyll content. Three band ratios were obtained using these optimal wavelengths, which improved spatial detail while also integrating chemical-composition information from the spectral characteristics. The band ratio values were suitable for classifying the diseased peaches with 98.75% accuracy and clearly showed the spatial distribution of the diseased parts. This study provides a new perspective on selecting optimal hyperspectral imaging wavelengths via chlorophyll content, thus enabling the detection of fungal diseases in peaches. Copyright © 2017 Elsevier Ltd. All rights reserved.
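
    The band-ratio step lends itself to a short sketch. The wavelength grid, the reflectance cube and the decision threshold below are illustrative assumptions; only the three selected wavelengths come from the abstract.

```python
# A sketch of the band-ratio step: with the three SPA-selected wavelengths
# (617, 675 and 818 nm), a ratio image is formed and thresholded to flag
# decayed tissue. Grid, cube and cut-off are illustrative assumptions.
import numpy as np

wavelengths = np.linspace(400, 1000, 121)          # nm, assumed sensor grid

def band(cube, wl_target):
    """Return the image plane whose wavelength is closest to wl_target."""
    return cube[..., np.argmin(np.abs(wavelengths - wl_target))]

cube = np.random.rand(64, 64, 121)                 # stand-in reflectance cube
r675_818 = band(cube, 675) / (band(cube, 818) + 1e-9)  # chlorophyll-sensitive ratio
diseased_mask = r675_818 > 1.2                     # hypothetical cut-off
print("flagged pixels:", int(diseased_mask.sum()))
```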

  8. Optimizing top precision performance measure of content-based image retrieval by learning similarity function

    KAUST Repository

    Liang, Ru-Ze

    2017-04-24

    In this paper we study the problem of content-based image retrieval. In this problem, the most popular performance measure is the top precision measure, and the most important component of a retrieval system is the similarity function used to compare a query image against a database image. However, no existing similarity learning method has been proposed to optimize the top precision measure. To fill this gap, we propose a novel similarity learning method to maximize the top precision measure. We model this problem as a minimization problem with an objective function combining the losses of the relevant images ranked behind the top-ranked irrelevant image with the squared Frobenius norm of the similarity function parameter. This minimization problem is solved as a quadratic programming problem. Experiments over two benchmark data sets show the advantages of the proposed method over other similarity learning methods when the top precision is used as the performance measure.
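
    The objective described above can be approximated with a simple subgradient loop, sketched below; the paper solves the problem exactly as a quadratic program, so this is an illustration of the loss structure rather than the authors' solver, and all data are synthetic.

```python
# A rough subgradient sketch of the stated objective: penalize relevant
# images whose bilinear similarity x^T M q falls below that of the
# top-ranked irrelevant image, plus a squared Frobenius regularizer on M.
import numpy as np

rng = np.random.default_rng(0)
d = 16
q = rng.random(d)                       # query image features
R = rng.random((30, d)) + 0.2 * q       # relevant database images
I = rng.random((40, d))                 # irrelevant database images
M = np.eye(d)                           # similarity function parameter

lam, lr = 0.01, 0.05
for _ in range(100):
    s_rel = R @ M @ q
    s_irr = I @ M @ q
    top_irr = I[np.argmax(s_irr)]               # top-ranked irrelevant image
    behind = R[s_rel < s_irr.max()]             # relevant images ranked behind it
    grad = lam * 2 * M                          # Frobenius-norm term
    for x in behind:                            # hinge-style subgradient
        grad += np.outer(top_irr - x, q)
    M -= lr * grad

print("relevant ranked behind top irrelevant:",
      int((R @ M @ q < (I @ M @ q).max()).sum()))
```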

  9. Optimizing top precision performance measure of content-based image retrieval by learning similarity function

    KAUST Repository

    Liang, Ru-Ze; Shi, Lihui; Wang, Haoxiang; Meng, Jiandong; Wang, Jim Jing-Yan; Sun, Qingquan; Gu, Yi

    2017-01-01

    In this paper we study the problem of content-based image retrieval. In this problem, the most popular performance measure is the top precision measure, and the most important component of a retrieval system is the similarity function used to compare a query image against a database image. However, no existing similarity learning method has been proposed to optimize the top precision measure. To fill this gap, we propose a novel similarity learning method to maximize the top precision measure. We model this problem as a minimization problem with an objective function combining the losses of the relevant images ranked behind the top-ranked irrelevant image with the squared Frobenius norm of the similarity function parameter. This minimization problem is solved as a quadratic programming problem. Experiments over two benchmark data sets show the advantages of the proposed method over other similarity learning methods when the top precision is used as the performance measure.

  10. Geographic Object-Based Image Analysis: Towards a new paradigm

    NARCIS (Netherlands)

    Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.A.|info:eu-repo/dai/nl/224281216; Queiroz Feitosa, R.; van der Meer, F.D.|info:eu-repo/dai/nl/138940908; van der Werff, H.M.A.; van Coillie, F.; Tiede, A.

    2014-01-01

    The amount of scientific literature on (Geographic) Object-based Image Analysis – GEOBIA has been and still is sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature

  11. FT-IR imaging for quantitative determination of liver fat content in non-alcoholic fatty liver.

    Science.gov (United States)

    Kochan, K; Maslak, E; Chlopicki, S; Baranska, M

    2015-08-07

    In this work we apply FT-IR imaging of large areas of liver tissue cross-section samples (∼5 cm × 5 cm) to the quantitative assessment of steatosis in a murine model of Non-Alcoholic Fatty Liver Disease (NAFLD). We quantified the area of liver tissue occupied by lipid droplets (LDs) by FT-IR imaging, with Oil Red O (ORO) staining for comparison. Two alternative FT-IR based approaches are presented. The first, straightforward method was based on average spectra from tissues and provided values of the fat content by using a PLS regression model and the reference method. The second, chemometric-based method enabled us to determine the fat content independently of the reference method, by means of k-means cluster (KMC) analysis. In summary, FT-IR images of large liver sections may prove useful for quantifying liver steatosis without the need for tissue staining.

  12. Magnetic resonance imaging and quantitative analysis of contents of epidermoid and dermoid cysts

    Energy Technology Data Exchange (ETDEWEB)

    Takeshita, Mikihiko; Kubo, Osami; Hiyama, Hirofumi; Tajika, Yasuhiko; Izawa, Masahiro; Kagawa, Mizuo; Takakura, Kintomo; Kobayashi, Naotoshi; Toyoda, Masako [Tokyo Women's Medical Coll. (Japan)

    1994-07-01

    The intracapsular cholesterol, protein, and calcium contents of epidermoid and dermoid cysts from seven patients were compared with the signal intensities on T1-weighted spin-echo magnetic resonance (MR) images. All specimens had a paste-like consistency when resected. Epidermoid and dermoid cysts demonstrated a wide range of cholesterol and calcium contents, and epidermoid cysts were not always rich in cholesterol. Five patients had cysts with lower signal intensity than white matter, which contained more than 18.3 mg/g wet weight of protein. One of these patients had the highest cholesterol content of all seven patients (22.25 mg/g wet weight) and another had the highest calcium content (0.75 mg/g wet weight). Two patients had cysts with higher signal intensity than white matter, with protein contents lower than 4.3 mg/g wet weight. High protein content (>18.3 mg/g wet weight) may decrease signal intensity on T1-weighted MR images, while low protein content (<4.3 mg/g wet weight) may increase signal intensity in epidermoid and dermoid cysts with high-viscosity (paste-like consistency) contents. (author)

  13. Content-Based Information Retrieval from Forensic Databases

    NARCIS (Netherlands)

    Geradts, Z.J.M.H.

    2002-01-01

    In forensic science, the number of image databases is growing rapidly. For this reason, it is necessary to have a proper procedure for searching these image databases based on content. The use of image databases results in more solved crimes; furthermore, statistical information can be obtained

  14. Web Based Distributed Coastal Image Analysis System, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This project develops a Web-based distributed image analysis system that processes Moderate Resolution Imaging Spectroradiometer (MODIS) data to provide decision...

  15. High content analysis of phagocytic activity and cell morphology with PuntoMorph

    DEFF Research Database (Denmark)

    Al-Ali, Hassan; Gao, Han; Dalby-Hansen, Camilla

    2017-01-01

    … with image-based quantification of phagocytic activity. New method: We present a robust algorithm and cell-based assay system for high content analysis of phagocytic activity. The method utilizes fluorescently labeled beads as a phagocytic substrate with defined physical properties. The algorithm employs … methods for quantifying phagocytic activity in multiple dimensions including speed, accuracy, and resolution … content screening. Results: We tested our assay system using microglial cultures. Our results recapitulated previous findings on the effects of microglial stimulation on cell morphology and phagocytic activity. Moreover, our cell-level analysis revealed that the two phenotypes associated with microglial … Conclusions: We provide a framework to facilitate the development of high content assays suitable for drug screening. For convenience, we implemented our algorithm in a standalone software package …

  16. Research of second harmonic generation images based on texture analysis

    Science.gov (United States)

    Liu, Yao; Li, Yan; Gong, Haiming; Zhu, Xiaoqin; Huang, Zufang; Chen, Guannan

    2014-09-01

    Texture analysis plays a crucial role in identifying objects or regions of interest in an image. It has been applied to a variety of medical image processing tasks, ranging from the detection of disease and the segmentation of specific anatomical structures to differentiation between healthy and pathological tissues. Second harmonic generation (SHG) microscopy, a potential noninvasive tool for imaging biological tissues with reduced phototoxicity and photobleaching, has been widely used in medicine. In this paper, we clarify the principles of texture analysis, including statistical, transform, structural and model-based methods, and give examples of its applications, reviewing studies of the technique. Moreover, we apply texture analysis to SHG images for the differentiation of human skin scar tissues. A texture analysis method based on local binary patterns (LBP) and the wavelet transform was used to extract texture features from SHG images of collagen in normal and abnormal scars, and the scar SHG images were then classified as normal or abnormal. Compared with other texture analysis methods, with respect to receiver operating characteristic analysis, LBP combined with the wavelet transform was demonstrated to achieve higher accuracy. It can provide a new way for the clinical diagnosis of scar types. Finally, future developments of texture analysis of SHG images are discussed.
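
    A minimal version of the LBP-plus-wavelet feature extraction reads as follows, using scikit-image and PyWavelets; the bin counts, wavelet choice and the downstream scar classifier are assumptions for illustration.

```python
# A minimal feature-extraction sketch along the lines described above:
# uniform LBP histograms combined with wavelet sub-band energies as a
# texture vector for an SHG image; the scar classifier is not reproduced.
import numpy as np
import pywt
from skimage.feature import local_binary_pattern

def texture_features(img, P=8, R=1.0):
    lbp = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    cA, (cH, cV, cD) = pywt.dwt2(img, "db4")
    energies = [float((b ** 2).mean()) for b in (cA, cH, cV, cD)]
    return np.concatenate([hist, energies])

img = np.random.rand(256, 256)          # stand-in SHG image
print(texture_features(img).shape)      # 10 LBP bins + 4 sub-band energies
```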

  17. Estimate of the melanin content in human hairs by the inverse Monte-Carlo method using a system for digital image analysis

    International Nuclear Information System (INIS)

    Bashkatov, A N; Genina, Elina A; Kochubei, V I; Tuchin, Valerii V

    2006-01-01

    Based on digital image analysis and the inverse Monte-Carlo method, a proximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte-Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominant type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates. (special issue devoted to multiple radiation scattering in random media)

  18. A content-based digital image watermarking scheme resistant to local geometric distortions

    International Nuclear Information System (INIS)

    Yang, Hong-ying; Chen, Li-li; Wang, Xiang-yang

    2011-01-01

    Geometric distortion is known as one of the most difficult attacks to resist, as it can desynchronize the location of the watermark and hence cause incorrect watermark detection. Geometric distortion can be decomposed into two classes: global affine transforms and local geometric distortions. Most countermeasures proposed in the literature only address the problem of global affine transforms; designing a robust image watermarking scheme against local geometric distortions remains a challenging problem. In this paper, we propose a new content-based digital image watermarking scheme with good visual quality and reasonable resistance against local geometric distortions. Firstly, robust feature points, which can survive various common image processing operations and global affine transforms, are extracted using a multi-scale SIFT (scale-invariant feature transform) detector. Then, affine covariant local feature regions (LFRs) are constructed adaptively according to the feature scale and the local invariant centroid. Finally, the digital watermark is embedded into the affine covariant LFRs by modulating the magnitudes of discrete Fourier transform (DFT) coefficients. By binding the watermark to the affine covariant LFRs, watermark detection can be performed without synchronization error. Experimental results show that the proposed image watermarking is not only invisible and robust against common image processing operations such as sharpening, noise addition, and JPEG compression, but also robust against global affine transforms and local geometric distortions.

  19. Independent component analysis for understanding multimedia content

    DEFF Research Database (Denmark)

    Kolenda, Thomas; Hansen, Lars Kai; Larsen, Jan

    2002-01-01

    Independent component analysis of combined text and image data from Web pages has potential for search and retrieval applications by providing more meaningful and context dependent content. It is demonstrated that ICA of combined text and image features has a synergistic effect, i.e., the retrieval...

  20. Pancreatic size and fat content in diabetes: A systematic review and meta-analysis of imaging studies.

    Directory of Open Access Journals (Sweden)

    Tiago Severo Garcia

    Full Text Available Imaging studies are expected to produce reliable information regarding the size and fat content of the pancreas. However, the available studies have produced inconclusive results. The aim of this study was to perform a systematic review and meta-analysis of imaging studies assessing pancreas size and fat content in patients with type 1 diabetes (T1DM) and type 2 diabetes (T2DM). Searches of the Medline and Embase databases were performed. Studies evaluating pancreatic size (diameter, area or volume) and/or fat content by ultrasound, computed tomography, or magnetic resonance imaging in patients with T1DM and/or T2DM as compared to healthy controls were selected. Seventeen studies including 3,403 subjects (284 T1DM patients, 1,139 T2DM patients, and 1,980 control subjects) were selected for meta-analyses. Pancreas diameter, area, volume, density, and fat percentage were evaluated. Pancreatic volume was reduced in T1DM and T2DM vs. controls (T1DM vs. controls: -38.72 cm3, 95% CI: -52.25 to -25.19, I2 = 70.2%, p for heterogeneity = 0.018; T2DM vs. controls: -12.18 cm3, 95% CI: -19.1 to -5.25, I2 = 79.3%, p for heterogeneity = 0.001). Fat content was higher in T2DM vs. controls (+2.73%, 95% CI: 0.55 to 4.91, I2 = 82.0%, p for heterogeneity < 0.001). Individuals with T1DM and T2DM have reduced pancreas size in comparison with control subjects. Patients with T2DM have increased pancreatic fat content.

  1. Phaedra, a protocol-driven system for analysis and validation of high-content imaging and flow cytometry.

    Science.gov (United States)

    Cornelissen, Frans; Cik, Miroslav; Gustin, Emmanuel

    2012-04-01

    High-content screening has brought new dimensions to cellular assays by generating rich data sets that characterize cell populations in great detail and detect subtle phenotypes. To derive relevant, reliable conclusions from these complex data, it is crucial to have informatics tools supporting quality control, data reduction, and data mining. These tools must reconcile the complexity of advanced analysis methods with the user-friendliness demanded by the user community. After review of existing applications, we realized the possibility of adding innovative new analysis options. Phaedra was developed to support workflows for drug screening and target discovery, interact with several laboratory information management systems, and process data generated by a range of techniques including high-content imaging, multicolor flow cytometry, and traditional high-throughput screening assays. The application is modular and flexible, with an interface that can be tuned to specific user roles. It offers user-friendly data visualization and reduction tools for HCS but also integrates Matlab for custom image analysis and the Konstanz Information Miner (KNIME) framework for data mining. Phaedra features efficient JPEG2000 compression and full drill-down functionality from dose-response curves down to individual cells, with exclusion and annotation options, cell classification, statistical quality controls, and reporting.

  2. An Integrative Object-Based Image Analysis Workflow for Uav Images

    Science.gov (United States)

    Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong

    2016-06-01

    In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of geometric and radiometric corrections, subsequent panoramic mosaicking, and hierarchical image segmentation for later Object-Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm, applied after the geometric calibration and radiometric correction, which performs fast feature extraction and matching by combining the local difference binary descriptor with locality-sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts with the definition of an initial partition obtained by an over-segmentation algorithm, i.e., simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the superpixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of our proposed method.
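
    The over-segmentation stage that seeds the BPT can be sketched with the SLIC implementation in scikit-image; the stitching, tree-construction and filtering stages of the workflow are omitted, and the image below is a stand-in for the mosaic.

```python
# A short sketch of the SLIC over-segmentation that defines the initial
# partition for the Binary Partition Tree in the workflow above.
import numpy as np
from skimage.segmentation import slic

mosaic = np.random.rand(512, 512, 3)            # stand-in panoramic mosaic
superpixels = slic(mosaic, n_segments=1500, compactness=10.0, start_label=1)
print("initial partition size:", superpixels.max())   # regions feeding the BPT
```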

  3. AN INTEGRATIVE OBJECT-BASED IMAGE ANALYSIS WORKFLOW FOR UAV IMAGES

    Directory of Open Access Journals (Sweden)

    H. Yu

    2016-06-01

    Full Text Available In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of geometric and radiometric corrections, subsequent panoramic mosaicking, and hierarchical image segmentation for later Object-Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm, applied after the geometric calibration and radiometric correction, which performs fast feature extraction and matching by combining the local difference binary descriptor with locality-sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts with the definition of an initial partition obtained by an over-segmentation algorithm, i.e., simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the superpixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of our proposed method.

  4. A SVD Based Image Complexity Measure

    DEFF Research Database (Denmark)

    Gustafsson, David Karl John; Pedersen, Kim Steenstrup; Nielsen, Mads

    2009-01-01

    Images are composed of geometric structures and texture, and different image processing tools - such as denoising, segmentation and registration - are suitable for different types of image contents. Characterization of the image content in terms of geometric structure and texture is an important problem that one is often faced with. We propose a patch-based complexity measure, based on how well the patch can be approximated using singular value decomposition. As such, the image complexity is determined by the complexity of the patches. The concept is demonstrated on sequences from the newly collected DIKU Multi-Scale image database.
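
    One plausible reading of this measure is sketched below: complexity as the number of singular values needed to capture most of a patch's energy. The 95% threshold is an assumption; the paper's exact normalization may differ.

```python
# A minimal reading of an SVD-based patch complexity measure: a patch is
# "simple" when few singular values carry most of its energy.
import numpy as np

def patch_complexity(patch, energy=0.95):
    s = np.linalg.svd(patch, compute_uv=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(frac, energy) + 1)   # values needed for 95% energy

flat = np.outer(np.linspace(0, 1, 16), np.ones(16))   # smooth gradient patch
noise = np.random.rand(16, 16)                        # texture-like patch
print(patch_complexity(flat), patch_complexity(noise))
```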

  5. Can state-of-the-art HVS-based objective image quality criteria be used for image reconstruction techniques based on ROI analysis?

    Science.gov (United States)

    Dostal, P.; Krasula, L.; Klima, M.

    2012-06-01

    Various image processing techniques in multimedia technology are optimized using the visual attention feature of the human visual system. Owing to spatial non-uniformity, different locations in an image have different importance in terms of perception of the image. In other words, the perceived image quality depends mainly on the quality of the important locations, known as regions of interest (ROI). The performance of such techniques is measured by subjective evaluation or objective image quality criteria. Many state-of-the-art objective metrics are based on HVS properties: SSIM and MS-SSIM are based on image structural information, VIF on the information that the human brain can ideally gain from the reference image, and FSIM utilizes low-level features to assign different importance to each location in the image. Still, none of these objective metrics utilizes the analysis of regions of interest. We address the question of whether these objective metrics can be used to effectively evaluate images reconstructed by processing techniques based on ROI analysis utilizing high-level features. In this paper the authors show that the state-of-the-art objective metrics do not correlate well with subjective evaluation when demosaicing based on ROI analysis is used for reconstruction. The ROI were computed from "ground truth" visual attention data. An algorithm combining two known demosaicing techniques on the basis of ROI location is proposed to reconstruct the ROI at fine quality while the rest of the image is reconstructed at low quality. The color image reconstructed by this ROI approach was compared with selected demosaicing techniques by objective criteria and subjective testing. The qualitative comparison of the objective and subjective results indicates that the state-of-the-art objective metrics are still not suitable for evaluating image processing techniques based on ROI analysis, and new criteria are needed.

  6. Knowledge-based analysis and understanding of 3D medical images

    International Nuclear Information System (INIS)

    Dhawan, A.P.; Juvvadi, S.

    1988-01-01

    The anatomical three-dimensional (3D) medical imaging modalities, such as X-ray CT and MRI, have been well recognized in diagnostic radiology for several years, while nuclear medicine modalities, such as PET, have just started making a strong impact through functional imaging. Though PET images provide functional information about the human organs, they are hard to interpret because of the lack of anatomical information. The authors' objective is to develop a knowledge-based biomedical image analysis system which can interpret anatomical images (such as CT). The anatomical information thus obtained can then be used in analyzing PET images of the same patient. This will not only help in interpreting PET images but will also provide a means of studying the correlation between anatomical and functional imaging. This paper presents the preliminary results of the knowledge-based biomedical image analysis system for interpreting CT images of the chest.

  7. A Probabilistic Framework for Content-Based Diagnosis of Retinal Disease

    Energy Technology Data Exchange (ETDEWEB)

    Tobin Jr, Kenneth William [ORNL; Abdelrahman, Mohamed A [ORNL; Chaum, Edward [ORNL; Muthusamy Govindasamy, Vijaya Priya [ORNL; Karnowski, Thomas Paul [ORNL

    2007-01-01

    Diabetic retinopathy is the leading cause of blindness in the working-age population around the world. Computer-assisted analysis has the potential to assist in the early detection of diabetes through regular screening of large populations. The widespread availability of digital fundus cameras today is resulting in the accumulation of large image archives of diagnosed patient data that capture historical knowledge of retinal pathology. Through this research we are developing a content-based image retrieval method to verify our hypothesis that retinal pathology can be identified and quantified from visually similar retinal images in an image archive. We present diagnostic results for specificity and sensitivity on a population of 395 fundus images representing the normal fundus and 14 stratified disease states.

  8. Detecting content adaptive scaling of images for forensic applications

    Science.gov (United States)

    Fillion, Claude; Sharma, Gaurav

    2010-01-01

    Content-aware resizing methods have recently been developed, among which, seam-carving has achieved the most widespread use. Seam-carving's versatility enables deliberate object removal and benign image resizing, in which perceptually important content is preserved. Both types of modifications compromise the utility and validity of the modified images as evidence in legal and journalistic applications. It is therefore desirable that image forensic techniques detect the presence of seam-carving. In this paper we address detection of seam-carving for forensic purposes. As in other forensic applications, we pose the problem of seam-carving detection as the problem of classifying a test image in either of two classes: a) seam-carved or b) non-seam-carved. We adopt a pattern recognition approach in which a set of features is extracted from the test image and then a Support Vector Machine based classifier, trained over a set of images, is utilized to estimate which of the two classes the test image lies in. Based on our study of the seam-carving algorithm, we propose a set of intuitively motivated features for the detection of seam-carving. Our methodology for detection of seam-carving is then evaluated over a test database of images. We demonstrate that the proposed method provides the capability for detecting seam-carving with high accuracy. For images which have been reduced 30% by benign seam-carving, our method provides a classification accuracy of 91%.
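
    A sketch of the described classification stage is shown below. The paper's actual seam-carving features are not reproduced in this abstract, so random vectors stand in for them; the SVM hyperparameters are illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 24))        # placeholder: per-image feature vectors
y = rng.integers(0, 2, size=400)      # 1 = seam-carved, 0 = untouched

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))  # ~chance on random placeholders
```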

  9. Video retrieval by still-image analysis with ImageMiner

    Science.gov (United States)

    Kreyss, Jutta; Roeper, M.; Alshuth, Peter; Hermes, Thorsten; Herzog, Otthein

    1997-01-01

    The large amount of available multimedia information (e.g. videos, audio, images) requires efficient and effective annotation and retrieval methods. As videos come to play a more important role within multimedia, we want to make them available for content-based retrieval. The ImageMiner system, which was developed at the University of Bremen in the AI group, is designed for content-based retrieval of single images by a new combination of techniques and methods from computer vision and artificial intelligence. In our approach to making videos available for retrieval in a large database of videos and images, two steps are necessary: first, the detection and extraction of shots from a video, which is done by a histogram-based method, and second, the composition of the separate frames of a shot into one single still image. This is performed by a mosaicing technique. The resulting mosaiced image gives a one-image visualization of the shot and can be analyzed by the ImageMiner system. ImageMiner has been tested on several domains (e.g. landscape images, technical drawings), which cover a wide range of applications.
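
    A minimal sketch of histogram-based cut detection of the kind described is given below; the bin count and threshold are illustrative choices, not the values used by ImageMiner.

```python
import numpy as np

def shot_boundaries(frames, bins=64, threshold=0.4):
    """frames: iterable of 2-D grayscale uint8 arrays.
    Flags frame i as a cut when the L1 distance between the normalized
    histograms of frames i-1 and i exceeds the threshold."""
    cuts, prev = [], None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        hist = hist / max(hist.sum(), 1)
        if prev is not None and 0.5 * np.abs(hist - prev).sum() > threshold:
            cuts.append(i)
        prev = hist
    return cuts
```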

  10. A REGION-BASED MULTI-SCALE APPROACH FOR OBJECT-BASED IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    T. Kavzoglu

    2016-06-01

    Full Text Available Within the last two decades, object-based image analysis (OBIA), which considers objects (i.e. groups of pixels) instead of pixels, has gained popularity and attracted increasing interest. The most important stage of OBIA is image segmentation, which groups spectrally similar adjacent pixels considering not only spectral features but also spatial and textural features. Although there are several parameters (scale, shape, compactness and band weights) to be set by the analyst, the scale parameter stands out as the most important parameter in the segmentation process. Estimating the optimal scale parameter is crucially important for increasing classification accuracy, and it depends on image resolution, image object size and the characteristics of the study area. In this study, two scale-selection strategies were implemented in the image segmentation process using a pan-sharpened QuickBird-2 image. The first strategy estimates optimal scale parameters for eight sub-regions. For this purpose, the local variance/rate of change (LV-RoC) graphs produced by the ESP-2 tool were analysed to determine fine, moderate and coarse scales for each region. In the second strategy, the image was segmented using the three candidate scale values (fine, moderate, coarse) determined from the LV-RoC graph calculated for the whole image. The nearest-neighbour classifier was applied in all segmentation experiments, and an equal number of pixels was randomly selected to calculate accuracy metrics (overall accuracy and kappa coefficient). Comparison of region-based and image-based segmentation was carried out on the classified images; it was found that region-based multi-scale OBIA produced significantly more accurate results than image-based single-scale OBIA. The difference in classification accuracy reached 10% in terms of overall accuracy.

  11. Design and development of a content-based medical image retrieval system for spine vertebrae irregularity.

    Science.gov (United States)

    Mustapha, Aouache; Hussain, Aini; Samad, Salina Abdul; Zulkifley, Mohd Asyraf; Diyana Wan Zaki, Wan Mimi; Hamid, Hamzaini Abdul

    2015-01-16

    A content-based medical image retrieval (CBMIR) system enables medical practitioners to perform fast diagnosis through quantitative assessment of the visual information of various modalities. In this paper, a more robust CBMIR system that deals with both cervical and lumbar vertebrae irregularity is presented. It comprises three main phases, namely modelling, indexing and retrieval of the vertebrae image. The main tasks in the modelling phase are to improve and enhance the visibility of the x-ray image for better segmentation results using the active shape model (ASM). The segmented vertebral fractures are then characterized in the indexing phase using region-based fracture characterization (RB-FC) and contour-based fracture characterization (CB-FC). Upon a query, the characterized features are compared to the query image. Effectiveness of the retrieval phase is determined by its retrieval performance; thus, we propose an integration of the predictor model based cross validation neural network (PMCVNN) and similarity matching (SM) in this stage. The PMCVNN task is to identify the correct vertebral irregularity class through classification, allowing the SM process to be more efficient. Retrieval performance of the proposed and the standard retrieval architectures is then compared using retrieval precision (Pr@M) and average group score (AGS) measures. Experimental results show that the new integrated retrieval architecture performs better than the standard CBMIR architecture, with retrieval results on cervical (AGS > 87%) and lumbar (AGS > 82%) datasets. The proposed CBMIR architecture shows encouraging results with high Pr@M accuracy. As a result, images from the same visualization class are returned for further use by medical personnel.

  12. Content-based analysis improves audiovisual archive retrieval

    NARCIS (Netherlands)

    Huurnink, B.; Snoek, C.G.M.; de Rijke, M.; Smeulders, A.W.M.

    2012-01-01

    Content-based video retrieval is maturing to the point where it can be used in real-world retrieval practices. One such practice is the audiovisual archive, whose users increasingly require fine-grained access to broadcast television content. In this paper, we take into account the information needs

  13. Evaluation of Yogurt Microstructure Using Confocal Laser Scanning Microscopy and Image Analysis.

    Science.gov (United States)

    Skytte, Jacob L; Ghita, Ovidiu; Whelan, Paul F; Andersen, Ulf; Møller, Flemming; Dahl, Anders B; Larsen, Rasmus

    2015-06-01

    The microstructure of protein networks in yogurts defines important physical properties of the yogurt and thereby partly its quality. Imaging this protein network using confocal scanning laser microscopy (CSLM) has shown good results, and CSLM has become a standard measuring technique for fermented dairy products. When studying such networks, hundreds of images can be obtained, and here image analysis methods are essential for using the images in statistical analysis. Previously, methods including gray-level co-occurrence matrix analysis and fractal analysis have been used with success. However, a range of other image texture characterization methods exists. These methods describe an image by a frequency distribution of predefined image features (denoted textons). Our contribution is an investigation of the choice of image analysis methods through a comparative study of 7 major approaches to image texture description. Here, CSLM images from a yogurt fermentation study are investigated, where production factors including fat content, protein content, heat treatment, and incubation temperature are varied. The descriptors are evaluated through nearest neighbor classification, variance analysis, and cluster analysis. Our investigation suggests that the texton-based descriptors provide a fuller description of the images compared to gray-level co-occurrence matrix descriptors and fractal analysis, while still being as applicable and in some cases as easy to tune. © 2015 Institute of Food Technologists®
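
    As a pointer to the gray-level co-occurrence approach mentioned above, here is a short sketch using scikit-image (function names as in scikit-image >= 0.19); the distance, angles and chosen statistics are common defaults, not the study's settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_descriptors(gray):
    """gray: 2-D uint8 micrograph. Returns four classic co-occurrence
    statistics averaged over four directions at distance 1."""
    glcm = graycomatrix(gray, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```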

  14. Keyframes Global Map Establishing Method for Robot Localization through Content-Based Image Matching

    Directory of Open Access Journals (Sweden)

    Tianyang Cao

    2017-01-01

    Full Text Available Self-localization and mapping are important for indoor mobile robots. We report a robust algorithm for map building and subsequent localization especially suited for indoor floor-cleaning robots. Common methods such as SLAM can easily be defeated by kidnapping through collisions, or be disturbed by similar-looking objects. Therefore, a keyframes-based global map establishing method is needed for robot localization across multiple rooms and corridors. Content-based image matching is the core of this method. It is designed for this situation by establishing keyframes containing both floor and distorted wall images. Image distortion, caused by the robot's view angle and movement, is analyzed and modelled, and an image matching solution is presented, consisting of the extraction of overlap regions of keyframes and the rebuilding of overlap regions through sub-block matching. To improve accuracy, ceiling-point detection and mismatched sub-block checking methods are incorporated. This matching method can process environment video effectively. In experiments, fewer than 5% of frames are extracted as keyframes to build the global map; these keyframes have large spatial separation and overlap each other. Through this method, the robot can localize itself by matching its real-time vision frames against the keyframe map. Even with many similar objects in the environment, or when the robot is kidnapped, localization is achieved with a position RMSE < 0.5 m.

  15. Digital image sequence processing, compression, and analysis

    CERN Document Server

    Reed, Todd R

    2004-01-01

    Introduction (Todd R. Reed); Content-Based Image Sequence Representation (Pedro M. Q. Aguiar, Radu S. Jasinschi, José M. F. Moura, and Charnchai Pluempitiwiriyawej); The Computation of Motion (Christoph Stiller, Sören Kammel, Jan Horn, and Thao Dang); Motion Analysis and Displacement Estimation in the Frequency Domain (Luca Lucchese and Guido Maria Cortelazzo); Quality of Service Assessment in New Generation Wireless Video Communications (Gaetano Giunta); Error Concealment in Digital Video (Francesco G.B. De Natale); Image Sequence Restoration: A Wider Perspective (Anil Kokaram); Video Summarization (Cuneyt M. Taskiran and Edward

  16. Independent component analysis based filtering for penumbral imaging

    International Nuclear Information System (INIS)

    Chen Yenwei; Han Xianhua; Nozaki, Shinya

    2004-01-01

    We propose a filter based on independent component analysis (ICA) for Poisson noise reduction. In the proposed filtering, the image is first transformed to the ICA domain and then the noise components are removed by soft thresholding (shrinkage). The proposed filter, which is used as a preprocessing step for the reconstruction, has been successfully applied to penumbral imaging. Both simulation results and experimental results show that the reconstructed image is dramatically improved in comparison to that obtained without the noise-removing filter.
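
    A compact sketch of the transform-shrink-invert idea on vectorized patches follows, using scikit-learn's FastICA in place of whatever ICA implementation the authors used; the threshold value is illustrative.

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_shrinkage_denoise(patches, threshold=0.1):
    """patches: (n_patches, n_pixels) matrix of vectorized image patches.
    Project to the ICA domain, soft-threshold the coefficients, project back."""
    ica = FastICA(n_components=min(patches.shape), random_state=0)
    coeffs = ica.fit_transform(patches)
    shrunk = np.sign(coeffs) * np.maximum(np.abs(coeffs) - threshold, 0.0)
    return ica.inverse_transform(shrunk)
```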

  17. Content analysis of Australian direct-to-consumer websites for emerging breast cancer imaging devices.

    Science.gov (United States)

    Vreugdenburg, Thomas D; Laurence, Caroline O; Willis, Cameron D; Mundy, Linda; Hiller, Janet E

    2014-09-01

    To describe the nature and frequency of information presented on direct-to-consumer websites for emerging breast cancer imaging devices. Content analysis of Australian website advertisements from 2 March 2011 to 30 March 2012, for three emerging breast cancer imaging devices: digital infrared thermal imaging, electrical impedance scanning and electronic palpation imaging. Type of imaging offered, device safety, device performance, application of device, target population, supporting evidence and comparator tests. Thirty-nine unique Australian websites promoting a direct-to-consumer breast imaging device were identified. Despite a lack of supporting evidence, 22 websites advertised devices for diagnosis, 20 advertised devices for screening, 13 advertised devices for prevention and 13 advertised devices for identifying breast cancer risk factors. Similarly, advertised ranges of diagnostic sensitivity (78%-99%) and specificity (44%-91%) were relatively high compared with published literature. Direct comparisons with conventional screening tools that favoured the new device were highly prominent (31 websites), and one-third of websites (12) explicitly promoted their device as a suitable alternative. Australian websites for emerging breast imaging devices, which are also available internationally, promote the use of such devices as safe and effective solutions for breast cancer screening and diagnosis in a range of target populations. Many of these claims are not supported by peer-reviewed evidence, raising questions about the manner in which these devices and their advertising material are regulated, particularly when they are promoted as direct alternatives to established screening interventions.

  18. CNN-Based Retinal Image Upscaling Using Zero Component Analysis

    Science.gov (United States)

    Nasonov, A.; Chesnakov, K.; Krylov, A.

    2017-05-01

    The aim of the paper is to obtain high-quality image upscaling for noisy images that are typical in medical image processing. A new training scenario for a convolutional neural network based image upscaling method is proposed. Its main idea is a novel dataset preparation method for deep learning. The dataset contains pairs of noisy low-resolution images and corresponding noiseless high-resolution images. To achieve better results at edges and textured areas, Zero Component Analysis is applied to these images. The upscaling results are compared with other state-of-the-art methods such as DCCI, SI-3 and SRCNN on noisy medical ophthalmological images. Objective evaluation of the results confirms the high quality of the proposed method. Visual analysis shows that fine details and structures such as blood vessels are preserved, the noise level is reduced, and no artifacts or non-existing details are added. These properties are essential for establishing a retinal diagnosis, so the proposed algorithm is recommended for use in real medical applications.
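
    "Zero Component Analysis" here refers to ZCA whitening of training patches; a minimal version is sketched below as a standard formulation, not necessarily the authors' exact preprocessing.

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """X: (n_samples, n_features) matrix of vectorized patches.
    Decorrelates the features while staying as close as possible to the
    original pixel space, which helps preserve edges and fine structures."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    U, S, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T   # ZCA transform
    return Xc @ W
```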

  19. Interactive classification and content-based retrieval of tissue images

    Science.gov (United States)

    Aksoy, Selim; Marchisio, Giovanni B.; Tusk, Carsten; Koperski, Krzysztof

    2002-11-01

    We describe a system for interactive classification and retrieval of microscopic tissue images. Our system models tissues in pixel, region and image levels. Pixel level features are generated using unsupervised clustering of color and texture values. Region level features include shape information and statistics of pixel level feature values. Image level features include statistics and spatial relationships of regions. To reduce the gap between low-level features and high-level expert knowledge, we define the concept of prototype regions. The system learns the prototype regions in an image collection using model-based clustering and density estimation. Different tissue types are modeled using spatial relationships of these regions. Spatial relationships are represented by fuzzy membership functions. The system automatically selects significant relationships from training data and builds models which can also be updated using user relevance feedback. A Bayesian framework is used to classify tissues based on these models. Preliminary experiments show that the spatial relationship models we developed provide a flexible and powerful framework for classification and retrieval of tissue images.

  20. Advances in Reasoning-Based Image Processing Intelligent Systems Conventional and Intelligent Paradigms

    CERN Document Server

    Nakamatsu, Kazumi

    2012-01-01

    The book puts special stress on contemporary techniques for reasoning-based image processing and analysis: learning-based image representation and advanced video coding; intelligent image processing and analysis in medical vision systems; similarity learning models for image reconstruction; visual perception for mobile robot motion control; simulation of human brain activity in the analysis of video sequences; shape-based invariant feature extraction; essentials of paraconsistent neural networks; and creativity and intelligent representation in computational systems. The book comprises 14 chapters. Each chapter is a small monograph, representing recent investigations of its authors in the area. The topics of the chapters cover wide scientific and application areas and complement each other very well. The chapters' content is based on fundamental theoretical presentations, followed by experimental results and comparisons with similar techniques. The size of the chapters is well balanced, which permits a thorough ...

  1. A simple method for detecting tumor in T2-weighted MRI brain images. An image-based analysis

    International Nuclear Information System (INIS)

    Lau, Phooi-Yee; Ozawa, Shinji

    2006-01-01

    The objective of this paper is to present a decision support system which uses a computer-based procedure to detect tumor blocks or lesions in digitized medical images. The authors developed a simple method with low computational effort to detect tumors in T2-weighted Magnetic Resonance Imaging (MRI) brain images, focusing on the connection between spatial pixel values and tumor properties from four different perspectives: cases having minuscule differences between two images, using a fixed block-based method; tumor shape and size, using the edge and binary images; tumor properties based on texture values, using the spatial pixel intensity distribution controlled by a global discriminant value; and the occurrence of content-specific tumor pixels in threshold images. Measurements were performed on the following medical datasets: images taken at different time intervals, and images of different brain diseases on single and multiple slices. Experimental results reveal that the proposed technique incurred an overall error smaller than those of other proposed methods. In particular, the proposed method reduced both false-alarm and missed-alarm errors, which demonstrates its effectiveness. In this paper, we also present a prototype system, known as PCB, to evaluate the performance of the proposed methods by actual experiments, comparing detection accuracy and system performance. (author)

  2. Feed particle size evaluation: conventional approach versus digital holography based image analysis

    Directory of Open Access Journals (Sweden)

    Vittorio Dell’Orto

    2010-01-01

    Full Text Available The aim of this study was to evaluate the application of an image analysis approach based on digital holography for defining particle size, in comparison with the sieve shaker (sieving) method as the reference method. For this purpose, ground corn meal was analyzed by a Retsch VS 1000 sieve shaker and by the image analysis approach based on digital holography. Particle sizes from digital holography were compared with results obtained by screen (sieving) analysis for each size class using a cumulative distribution plot. Comparison between particle size values obtained by the sieving method and by image analysis indicated that the values were comparable in terms of particle size information, suggesting a potential application for digital holography and image analysis in the feed industry.

  3. Developing a NIR multispectral imaging for prediction and visualization of peanut protein content using variable selection algorithms

    Science.gov (United States)

    Cheng, Jun-Hu; Jin, Huali; Liu, Zhiwei

    2018-01-01

    The feasibility of developing a multispectral imaging method, using important wavelengths selected from hyperspectral images by the genetic algorithm (GA), successive projection algorithm (SPA) and regression coefficient (RC) methods, for modeling and predicting protein content in peanut kernels was investigated for the first time. A partial least squares regression (PLSR) calibration model was established between the spectral data from the selected optimal wavelengths and the reference measured protein content, which ranged from 23.46% to 28.43%. The RC-PLSR model established using eight key wavelengths (1153, 1567, 1972, 2143, 2288, 2339, 2389 and 2446 nm) showed the best predictive results, with a coefficient of determination of prediction (R2P) of 0.901, a root mean square error of prediction (RMSEP) of 0.108 and a residual predictive deviation (RPD) of 2.32. Based on the best model obtained and image processing algorithms, distribution maps of protein content were generated. The overall results of this study indicated that developing a rapid and online multispectral imaging system using the feature wavelengths and PLSR analysis is feasible for determining the protein content in peanut kernels.
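
    A sketch of the PLSR calibration step is shown below on synthetic stand-ins for the eight key wavelengths; all numbers are placeholders, not the paper's data, and the component count is an illustrative choice.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((120, 8))                        # reflectance at 8 key wavelengths
y = 23.46 + (28.43 - 23.46) * rng.random(120)   # protein content, % (paper's range)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
pred = pls.predict(X_te).ravel()
rmsep = float(np.sqrt(np.mean((pred - y_te) ** 2)))
print("RMSEP:", rmsep)
```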

  4. Image-Analysis Based on Seed Phenomics in Sesame

    Directory of Open Access Journals (Sweden)

    Prasad R.

    2014-10-01

    Full Text Available The seed coat (testa) structure of twenty-three cultivated (Sesamum indicum L.) and six wild sesame (S. occidentale Regel & Heer., S. mulayanum Nair, S. prostratum Retz., S. radiatum Schumach. & Thonn., S. angustifolium (Oliv.) Engl. and S. schinzianum Asch.) germplasm accessions was analyzed from digital and Scanning Electron Microscopy (SEM) images with dedicated software, using the descriptors for computer-based seed image analysis, to understand the diversity of seed morphometric traits, which can later be extended to screen and evaluate improved genotypes of sesame. Seeds of wild sesame species could conveniently be distinguished from cultivated varieties based on shape and architectural analysis. Results indicated discrete 'cut-off' values to identify a definite shape and contour of seed for a desirable sesame genotype, alongside the conventional practice of selecting lighter-colored testa.

  5. Quantifying the margin sharpness of lesions on radiological images for content-based image retrieval

    International Nuclear Information System (INIS)

    Xu Jiajing; Napel, Sandy; Greenspan, Hayit; Beaulieu, Christopher F.; Agrawal, Neeraj; Rubin, Daniel

    2012-01-01

    … Equivalence across deformations was assessed using Schuirmann's paired two one-sided tests. Results: In simulated images, the concordance correlation between measured gradient and actual gradient was 0.994. The mean (standard deviation) NDCG scores for the retrieval of K images, K = 5, 10, and 15, were 84% (8%), 85% (7%), and 85% (7%) for CT images containing liver lesions, and 82% (7%), 84% (6%), and 85% (4%) for CT images containing lung nodules, respectively. The authors' proposed method outperformed the two existing margin characterization methods in average NDCG scores over all K, by 1.5% and 3% in datasets containing liver lesions, and by 4.5% and 5% in datasets containing lung nodules. Equivalence testing showed that the authors' feature is more robust across all margin deformations (p < 0.05) than the two existing methods for margin sharpness characterization in both simulated and clinical datasets. Conclusions: The authors have described a new image feature to quantify the margin sharpness of lesions. It has strong correlation with known margin sharpness in simulated images and in clinical CT images containing liver lesions and lung nodules. This image feature has excellent performance for retrieving images with similar margin characteristics, suggesting potential utility, in conjunction with other lesion features, for content-based image retrieval applications.
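
    For readers unfamiliar with the retrieval score used above, here is a small NDCG@K helper following the standard definition; the graded relevances would come from the known margin-sharpness similarity of the retrieved images.

```python
import numpy as np

def ndcg_at_k(relevances, k):
    """relevances: graded relevance of retrieved images, in ranked order."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
    dcg = float((rel * discounts).sum())
    ideal = np.sort(np.asarray(relevances, dtype=float))[::-1][:k]
    idcg = float((ideal * discounts[:ideal.size]).sum())
    return dcg / idcg if idcg > 0 else 0.0

print(ndcg_at_k([3, 2, 3, 0, 1, 2], k=5))
```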

  6. Knowledge-based low-level image analysis for computer vision systems

    Science.gov (United States)

    Dhawan, Atam P.; Baxi, Himanshu; Ranganath, M. V.

    1988-01-01

    Two algorithms for entry-level image analysis and preliminary segmentation are proposed which are flexible enough to incorporate local properties of the image. The first algorithm involves pyramid-based multiresolution processing and a strategy to define and use interlevel and intralevel link strengths. The second algorithm, which is designed for selected window processing, extracts regions adaptively using local histograms. The preliminary segmentation and a set of features are employed as the input to an efficient rule-based low-level analysis system, resulting in suboptimal meaningful segmentation.

  7. An Analysis of Images of Contention and Violence in Dagara and Akan Proverbial Expressions

    Directory of Open Access Journals (Sweden)

    Martin Kyiileyang

    2017-04-01

    Full Text Available Proverbial expressions have typical linguistic and figurative features. These are normally captivating to the listener. The expressive culture of the Dagara and Akan societies is embellished by these proverbial expressions. Most African proverbs express various images depicting both pleasant and unpleasant situations in life. Unpleasant language normally depicts several terrifying images, particularly when threats, insults and other forms of abuse are traded vehemently. Dagara and Akan proverbs are no exceptions to this phenomenon. This paper seeks to examine images of contention and violence depicted in Akan and Dagara proverbial expressions. To achieve this, a variety of proverbs from Akan and Dagara were analysed for their meanings using Yankah's and Honeck's theories. The results revealed that structurally, as with many proverbs, the Akan and Dagara proverbial expressions are pithy and terse. The most dominant images of contention and violence in these expressions expose negative values and perceptions about the people who speak these languages.

  8. Physics-based deformable organisms for medical image analysis

    Science.gov (United States)

    Hamarneh, Ghassan; McIntosh, Chris

    2005-04-01

    Previously, "Deformable organisms" were introduced as a novel paradigm for medical image analysis that uses artificial life modelling concepts. Deformable organisms were designed to complement the classical bottom-up deformable models methodologies (geometrical and physical layers), with top-down intelligent deformation control mechanisms (behavioral and cognitive layers). However, a true physical layer was absent and in order to complete medical image segmentation tasks, deformable organisms relied on pure geometry-based shape deformations guided by sensory data, prior structural knowledge, and expert-generated schedules of behaviors. In this paper we introduce the use of physics-based shape deformations within the deformable organisms framework yielding additional robustness by allowing intuitive real-time user guidance and interaction when necessary. We present the results of applying our physics-based deformable organisms, with an underlying dynamic spring-mass mesh model, to segmenting and labelling the corpus callosum in 2D midsagittal magnetic resonance images.

  9. Big data in multiple sclerosis: development of a web-based longitudinal study viewer in an imaging informatics-based eFolder system for complex data analysis and management

    Science.gov (United States)

    Ma, Kevin; Wang, Ximing; Lerner, Alex; Shiroishi, Mark; Amezcua, Lilyana; Liu, Brent

    2015-03-01

    In the past, we have developed and demonstrated a multiple sclerosis eFolder system for patient data storage, image viewing, and automatic lesion quantification with results stored in DICOM-SR format. The web-based system aims to be integrated into DICOM-compliant clinical and research environments to aid clinicians in patient treatment and disease tracking. This year, we have further developed the eFolder system to handle big data analysis and data mining in today's medical imaging field. The database has been updated to allow data mining and data look-up from DICOM-SR lesion analysis contents. Longitudinal studies are tracked, and any changes in lesion volumes and brain parenchyma volumes are calculated and shown on the web-based user interface as graphical representations. Longitudinal changes in lesion characteristics are compared with the patient's disease history, including treatments, symptom progression, and any other changes in the disease profile. The image viewer is updated such that imaging studies can be viewed side by side to allow visual comparisons. We aim to use the web-based medical imaging informatics eFolder system to demonstrate big data analysis in medical imaging, and to use the analysis results to predict MS disease trends and patterns in Hispanic and Caucasian populations in our pilot study. The discovery of disease patterns among the two ethnicities is a big data analysis result that will help lead to personalized patient care and treatment planning.

  10. An investigation of content and media images in gay men's magazines.

    Science.gov (United States)

    Saucier, Jason A; Caron, Sandra L

    2008-01-01

    This study provides an analysis of gay men's magazines, examining both the content and the advertisements. Four magazine titles were selected, The Advocate, Genre, Instinct, and Out, each with gay men as its target audience. These magazines were coded for both article content and advertisement content. In the advertisement analysis, both the type of advertisement and, when present, the characteristics of the men depicted within the advertisement were coded. The results mirror previous research findings relating to the portrayal of women, including the objectification of specific body parts and the high community standards set by the images depicted. These findings were reinforced by both the advertisements and the content analyzed, with a high degree of importance placed on having the right body type. Implications for further research are discussed.

  11. Uses of software in digital image analysis: a forensic report

    Science.gov (United States)

    Sharma, Mukesh; Jha, Shailendra

    2010-02-01

    Forensic image analysis requires expertise to interpret the content of an image, or the image itself, in legal matters. Major sub-disciplines of forensic image analysis with law enforcement applications include photogrammetry, photographic comparison, content analysis and image authentication. It has wide applications in forensic science, ranging from documenting crime scenes to enhancing faint or indistinct patterns such as partial fingerprints. The process of forensic image analysis can involve several different tasks, regardless of the type of image analysis performed. In this paper the authors explain these tasks, which are described in three categories: Image Compression, Image Enhancement & Restoration, and Measurement Extraction. These are illustrated with examples such as signature comparison, counterfeit currency comparison and footwear sole impressions, using the software Canvas and CorelDraw.

  12. Automated analysis of high-content microscopy data with deep learning.

    Science.gov (United States)

    Kraus, Oren Z; Grys, Ben T; Ba, Jimmy; Chong, Yolanda; Frey, Brendan J; Boone, Charles; Andrews, Brenda J

    2017-04-18

    Existing computational pipelines for quantitative analysis of high-content microscopy data rely on traditional machine learning approaches that fail to accurately classify more than a single dataset without substantial tuning and training, requiring extensive analysis. Here, we demonstrate that the application of deep learning to biological image data can overcome the pitfalls associated with conventional machine learning classifiers. Using a deep convolutional neural network (DeepLoc) to analyze yeast cell images, we show improved performance over traditional approaches in the automated classification of protein subcellular localization. We also demonstrate the ability of DeepLoc to classify highly divergent image sets, including images of pheromone-arrested cells with abnormal cellular morphology, as well as images generated in different genetic backgrounds and in different laboratories. We offer an open-source implementation that enables updating DeepLoc on new microscopy datasets. This study highlights deep learning as an important tool for the expedited analysis of high-content microscopy data. © 2017 The Authors. Published under the terms of the CC BY 4.0 license.
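
    To make the approach concrete, here is a minimal convolutional classifier in PyTorch. It is only a toy stand-in for DeepLoc, whose actual architecture is not described in this abstract; the two-channel input format and the class count are assumptions.

```python
import torch
import torch.nn as nn

class TinyLocNet(nn.Module):
    """Toy CNN mapping two-channel single-cell crops (e.g. GFP + marker,
    an assumed input format) to subcellular-localization classes."""
    def __init__(self, n_classes=15):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                       # x: (batch, 2, 64, 64)
        return self.classifier(self.features(x).flatten(1))

logits = TinyLocNet()(torch.randn(4, 2, 64, 64))  # -> (4, 15) class scores
```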

  13. Geographic Object-Based Image Analysis - Towards a new paradigm.

    Science.gov (United States)

    Blaschke, Thomas; Hay, Geoffrey J; Kelly, Maggi; Lang, Stefan; Hofmann, Peter; Addink, Elisabeth; Queiroz Feitosa, Raul; van der Meer, Freek; van der Werff, Harald; van Coillie, Frieke; Tiede, Dirk

    2014-01-01

    The amount of scientific literature on (Geographic) Object-based Image Analysis - GEOBIA has been and still is sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature extraction approaches. This article investigates these developments and their implications, and asks whether or not this is a new paradigm in remote sensing and Geographic Information Science (GIScience). We first discuss several limitations of prevailing per-pixel methods when applied to high resolution images. Then we explore the paradigm concept developed by Kuhn (1962) and discuss whether GEOBIA can be regarded as a paradigm according to this definition. We crystallize core concepts of GEOBIA, including the role of objects, of ontologies and the multiplicity of scales, and we discuss how these conceptual developments support important methods in remote sensing such as change detection and accuracy assessment. The ramifications of the different theoretical foundations between the 'per-pixel paradigm' and GEOBIA are analysed, as are some of the challenges along this path from pixels, to objects, to geo-intelligence. Based on several paradigm indications as defined by Kuhn and based on an analysis of peer-reviewed scientific literature, we conclude that GEOBIA is a new and evolving paradigm.

  14. Image-based RSA: Roentgen stereophotogrammetric analysis based on 2D-3D image registration.

    Science.gov (United States)

    de Bruin, P W; Kaptein, B L; Stoel, B C; Reiber, J H C; Rozing, P M; Valstar, E R

    2008-01-01

    Image-based Roentgen stereophotogrammetric analysis (IBRSA) integrates 2D-3D image registration and conventional RSA. Instead of radiopaque RSA bone markers, IBRSA uses 3D CT data, from which digitally reconstructed radiographs (DRRs) are generated. Using 2D-3D image registration, the 3D pose of the CT is iteratively adjusted such that the generated DRRs resemble the 2D RSA images as closely as possible, according to an image matching metric. Effectively, by registering all 2D follow-up moments to the same 3D CT, the CT volume functions as common ground. In two experiments, using RSA and using a micromanipulator as gold standard, IBRSA has been validated on cadaveric and sawbone scapula radiographs, and good matching results have been achieved. The measured accuracy was lower than that of marker-based RSA, but higher than that of in vivo standard RSA. Because IBRSA does not require radiopaque markers, it adds functionality to the RSA method by opening new directions and possibilities for research, such as dynamic analyses using fluoroscopy on subjects without markers and computer navigation applications.

  15. Computer-aided detection of basal cell carcinoma through blood content analysis in dermoscopy images

    Science.gov (United States)

    Kharazmi, Pegah; Kalia, Sunil; Lui, Harvey; Wang, Z. Jane; Lee, Tim K.

    2018-02-01

    Basal cell carcinoma (BCC) is the most common type of skin cancer, which is highly damaging to the skin at its advanced stages and imposes huge costs on the healthcare system. However, most types of BCC are easily curable if detected at an early stage. Given limited access to dermatologists and expert physicians, non-invasive computer-aided diagnosis is a viable option for skin cancer screening. A clinical biomarker of cancerous tumors is increased vascularization and excess blood flow. In this paper, we present a computer-aided technique to differentiate cancerous skin tumors from benign lesions based on the vascular characteristics of the lesions. The dermoscopy image of the lesion is first decomposed using independent component analysis of the RGB channels to derive melanin and hemoglobin maps. A novel set of clinically inspired features and ratiometric measurements is then extracted from each map to characterize the vascular properties and blood content of the lesion. The feature set is then fed into a random forest classifier. Over a dataset of 664 skin lesions, the proposed method achieved an area under the ROC curve of 0.832 in 10-fold cross-validation for differentiating basal cell carcinomas from benign lesions.

  16. Optimization of reference library used in content-based medical image retrieval scheme

    International Nuclear Information System (INIS)

    Park, Sang Cheol; Sukthankar, Rahul; Mummert, Lily; Satyanarayanan, Mahadev; Zheng Bin

    2007-01-01

    Building an optimal image reference library is a critical step in developing interactive computer-aided detection and diagnosis (I-CAD) systems for medical images using content-based image retrieval (CBIR) schemes. In this study, the authors conducted two experiments to investigate (1) the relationship between I-CAD performance and the size of the reference library and (2) a new reference selection strategy to optimize the library and improve I-CAD performance. The authors assembled a reference library that includes 3153 regions of interest (ROI) depicting either malignant masses (1592) or CAD-cued false-positive regions (1561) and an independent testing data set including 200 masses and 200 false-positive regions. A CBIR scheme using a distance-weighted K-nearest neighbor algorithm is applied to retrieve references that are considered similar to the testing sample from the library. The area under the receiver operating characteristic curve (Az) is used as an index to evaluate the I-CAD performance. In the first experiment, the authors systematically increased the reference library size and tested I-CAD performance. The result indicates that scheme performance improves initially from Az = 0.715 to 0.874 and then plateaus when the library size reaches approximately half of its maximum capacity. In the second experiment, based on the hypothesis that a ROI should be removed if it performs poorly compared to a group of similar ROIs in a large and diverse reference library, the authors applied a new strategy to identify 'poorly effective' references. By removing 174 identified ROIs from the reference library, I-CAD performance significantly increases to Az = 0.914 (p < 0.01). The study demonstrates that increasing the reference library size and removing poorly effective references can significantly improve I-CAD performance.
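
    The distance-weighted K-nearest-neighbor step can be sketched as follows with scikit-learn; the ROI feature vectors are random placeholders, and the neighbor count is illustrative.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))     # placeholder ROI feature vectors
y = rng.integers(0, 2, size=300)   # 1 = malignant mass, 0 = CAD-cued false positive

# weights="distance" makes closer (more similar) references count more,
# mirroring the distance-weighted KNN retrieval described above.
knn = KNeighborsClassifier(n_neighbors=15, weights="distance").fit(X, y)
likelihood = knn.predict_proba(X[:5])[:, 1]   # malignancy score per query ROI
```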

  17. Smart Images Search based on Visual Features Fusion

    International Nuclear Information System (INIS)

    Saad, M.H.

    2013-01-01

    Image search engines attempt to give fast and accurate access to the wide range of images available on the Internet. There have been a number of efforts to build search engines based on image content to enhance search results. Content-Based Image Retrieval (CBIR) systems have attracted great interest since multimedia files, such as images and videos, have dramatically entered our lives throughout the last decade. CBIR allows automatically extracting target images according to objective visual contents of the image itself, for example its shapes, colors and textures, to provide more accurate ranking of the results. Recent approaches to CBIR differ in terms of which image features are extracted to be used as image descriptors for the matching process. This thesis proposes improvements to the efficiency and accuracy of CBIR systems by integrating different types of image features. The framework addresses efficient retrieval of images in large image collections. A comparative study between recent CBIR techniques is provided. According to this study, image features need to be integrated to provide a more accurate description of image content and better image retrieval accuracy. In this context, the thesis presents new image retrieval approaches that provide higher retrieval accuracy than previous approaches. The first proposed image retrieval system uses color, texture and shape descriptors to form a global features vector. This approach integrates the YCbCr color histogram as a color descriptor, the modified Fourier descriptor as a shape descriptor and the modified edge histogram as a texture descriptor in order to enhance the retrieval results. The second proposed approach integrates the global features vector, which is used in the first approach, with the SURF salient point technique as a local feature. The nearest neighbor matching algorithm with a proposed similarity measure is applied to determine the final image rank. The second approach

  18. Identification and annotation of erotic film based on content analysis

    Science.gov (United States)

    Wang, Donghui; Zhu, Miaoliang; Yuan, Xin; Qian, Hui

    2005-02-01

    The paper puts forward a new method for identifying and annotating erotic films based on content analysis. First, the film is decomposed into a video and an audio stream. Then, the video stream is segmented into shots, and key frames are extracted from each shot. We filter the shots that include potentially erotic content by finding the nude human body in key frames. A Gaussian model in YCbCr color space for detecting skin regions is presented. An external polygon that covers the skin regions is used as an approximation of the human body. Last, we give the degree of nudity by calculating the ratio of skin area to whole body area with weighted parameters. The results of the experiment show the effectiveness of our method.
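
    A sketch of a Gaussian skin model in the CbCr plane of the kind described follows; the mean and covariance below are generic placeholder values, not the parameters fitted in the paper, and the RGB-to-YCbCr conversion uses the common ITU-R BT.601 coefficients.

```python
import numpy as np

def skin_likelihood(rgb, mean=(102.0, 153.0), cov=((64.0, 0.0), (0.0, 64.0))):
    """rgb: (H, W, 3) uint8 image. Returns the per-pixel Gaussian likelihood
    of skin in the (Cb, Cr) plane; thresholding it yields a skin mask."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b       # BT.601 chroma
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    x = np.stack([cb, cr], axis=-1) - np.asarray(mean)
    inv = np.linalg.inv(np.asarray(cov))
    d2 = np.einsum("...i,ij,...j->...", x, inv, x)         # squared Mahalanobis
    return np.exp(-0.5 * d2)
```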

  19. Content-based multimedia retrieval: indexing and diversification

    NARCIS (Netherlands)

    van Leuken, R.H.

    2009-01-01

    The demand for efficient systems that facilitate searching in multimedia databases and collections is vastly increasing. Application domains include criminology, musicology, trademark registration, medicine and image or video retrieval on the web. This thesis discusses content-based retrieval

  20. Morphometric image analysis of giant vesicles

    DEFF Research Database (Denmark)

    Husen, Peter Rasmussen; Arriaga, Laura; Monroy, Francisco

    2012-01-01

    We have developed a strategy to determine lengths and orientations of tie lines in the coexistence region of liquid-ordered and liquid-disordered phases of cholesterol containing ternary lipid mixtures. The method combines confocal-fluorescence-microscopy image stacks of giant unilamellar vesicles...... (GUVs), a dedicated 3D-image analysis, and a quantitative analysis based in equilibrium thermodynamic considerations. This approach was tested in GUVs composed of 1,2-dioleoyl-sn-glycero-3-phosphocholine/1,2-palmitoyl-sn-glycero-3-phosphocholine/cholesterol. In general, our results show a reasonable...... agreement with previously reported data obtained by other methods. For example, our computed tie lines were found to be nonhorizontal, indicating a difference in cholesterol content in the coexisting phases. This new, to our knowledge, analytical strategy offers a way to further exploit fluorescence...

  1. Analysis and improvement of a chaos-based image encryption algorithm

    International Nuclear Information System (INIS)

    Xiao Di; Liao Xiaofeng; Wei Pengcheng

    2009-01-01

    The security of digital images has attracted much attention recently. In Guan et al. [Guan Z, Huang F, Guan W. Chaos-based image encryption algorithm. Phys Lett A 2005;346:153-7], a chaos-based image encryption algorithm was proposed. In this paper, the cause of potential flaws in the original algorithm is analyzed in detail, and the corresponding enhancement measures are proposed. Both theoretical analysis and computer simulation indicate that the improved algorithm can overcome these flaws and maintain all the merits of the original one.
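
    For orientation, here is a generic logistic-map stream cipher of the family analyzed in such papers. This is an illustrative sketch only: it is neither Guan et al.'s algorithm nor the improved version proposed here, and it assumes a uint8 image.

```python
import numpy as np

def logistic_keystream(x0, r, n, burn_in=1000):
    """Iterate x <- r*x*(1-x) and quantize each chaotic state to a byte."""
    x = x0
    for _ in range(burn_in):                   # discard the transient
        x = r * x * (1.0 - x)
    stream = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        stream[i] = int(x * 256) % 256
    return stream

def xor_cipher(img, key=(0.3456, 3.99)):
    """XOR-diffuse uint8 image bytes with the chaotic keystream (self-inverse:
    applying the same function with the same key decrypts)."""
    flat = img.ravel()
    ks = logistic_keystream(key[0], key[1], flat.size)
    return (flat ^ ks).reshape(img.shape)
```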

  2. Detailed analysis of latencies in image-based dynamic MLC tracking

    International Nuclear Information System (INIS)

    Poulsen, Per Rugaard; Cho, Byungchul; Sawant, Amit; Ruan, Dan; Keall, Paul J.

    2010-01-01

    Purpose: Previous measurements of the accuracy of image-based real-time dynamic multileaf collimator (DMLC) tracking show that the major contributor to errors is latency, i.e., the delay between target motion and MLC response. Therefore the purpose of this work was to develop a method for detailed analysis of latency contributions during image-based DMLC tracking. Methods: A prototype DMLC tracking system integrated with a linear accelerator was used for tracking a phantom with an embedded fiducial marker during treatment delivery. The phantom performed a sinusoidal motion. Real-time target localization was based on x-ray images acquired either with a portal imager or a kV imager mounted orthogonal to the treatment beam. Each image was stored in a file on the imaging workstation. A marker segmentation program opened the image file, determined the marker position in the image, and transferred it to the DMLC tracking program. This program estimated the three-dimensional target position by a single-imager method and adjusted the MLC aperture to the target position. Imaging intervals ΔT_image from 150 to 1000 ms were investigated for both kV and MV imaging. After the experiments, the recorded images were synchronized with MLC log files generated by the MLC controller and tracking log files generated by the tracking program. This synchronization allowed temporal analysis of the information flow for each individual image from acquisition to completed MLC adjustment. The synchronization also allowed investigation of the MLC adjustment dynamics on a considerably finer time scale than the 50 ms time resolution of the MLC log files. Results: For ΔT_image = 150 ms, the total time from image acquisition to completed MLC adjustment was 380 ± 9 ms for MV and 420 ± 12 ms for kV images. The main part of this time was from image acquisition to completed image file writing (272 ms for MV and 309 ms for kV). Image file opening (38 ms), marker segmentation (4 ms), MLC position

  3. Detailed analysis of latencies in image-based dynamic MLC tracking

    Energy Technology Data Exchange (ETDEWEB)

    Poulsen, Per Rugaard; Cho, Byungchul; Sawant, Amit; Ruan, Dan; Keall, Paul J. [Department of Radiation Oncology, Stanford University, Stanford, California 94305 and Department of Oncology and Department of Medical Physics, Aarhus University Hospital, 8000 Aarhus (Denmark); Department of Radiation Oncology, Stanford University, Stanford, California 94305 and Department of Radiation Oncology, Asan Medical Center, Seoul 138-736 (Korea, Republic of); Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States)

    2010-09-15

    Purpose: Previous measurements of the accuracy of image-based real-time dynamic multileaf collimator (DMLC) tracking show that the major contributor to errors is latency, i.e., the delay between target motion and MLC response. Therefore the purpose of this work was to develop a method for detailed analysis of latency contributions during image-based DMLC tracking. Methods: A prototype DMLC tracking system integrated with a linear accelerator was used for tracking a phantom with an embedded fiducial marker during treatment delivery. The phantom performed a sinusoidal motion. Real-time target localization was based on x-ray images acquired either with a portal imager or a kV imager mounted orthogonal to the treatment beam. Each image was stored in a file on the imaging workstation. A marker segmentation program opened the image file, determined the marker position in the image, and transferred it to the DMLC tracking program. This program estimated the three-dimensional target position by a single-imager method and adjusted the MLC aperture to the target position. Imaging intervals ΔT_image from 150 to 1000 ms were investigated for both kV and MV imaging. After the experiments, the recorded images were synchronized with MLC log files generated by the MLC controller and tracking log files generated by the tracking program. This synchronization allowed temporal analysis of the information flow for each individual image from acquisition to completed MLC adjustment. The synchronization also allowed investigation of the MLC adjustment dynamics on a considerably finer time scale than the 50 ms time resolution of the MLC log files. Results: For ΔT_image = 150 ms, the total time from image acquisition to completed MLC adjustment was 380 ± 9 ms for MV and 420 ± 12 ms for kV images. The main part of this time was from image acquisition to completed image file writing (272 ms for MV and 309 ms for kV). Image file opening (38 ms), marker segmentation (4 ms

  4. Mapping Fire Severity Using Imaging Spectroscopy and Kernel Based Image Analysis

    Science.gov (United States)

    Prasad, S.; Cui, M.; Zhang, Y.; Veraverbeke, S.

    2014-12-01

    Improved spatial representation of within-burn heterogeneity after wildfires is paramount to effective land management decisions and more accurate fire emissions estimates. In this work, we demonstrate the feasibility and efficacy of airborne imaging spectroscopy (hyperspectral imagery) for quantifying wildfire burn severity, using kernel-based image analysis techniques. Two airborne hyperspectral datasets, acquired over the 2011 Canyon and 2013 Rim fires in California using the Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) sensor, were used in this study. The Rim Fire, covering parts of Yosemite National Park, started on August 17, 2013, and was the third largest fire in California's history. The Canyon Fire occurred in the Tehachapi Mountains and started on September 4, 2011. In addition to post-fire data for both fires, half of the Rim Fire was also covered by pre-fire images. Fire severity was measured in the field using the Geo Composite Burn Index (GeoCBI). The field data were utilized to train and validate our models, wherein the trained models, in conjunction with imaging spectroscopy data, were used for GeoCBI estimation over wide geographical regions. This work presents an approach for using remotely sensed imagery combined with GeoCBI field data to map fire scars based on a non-linear (kernel-based) epsilon-Support Vector Regression (e-SVR), which was used to learn the relationship between spectra and GeoCBI in a kernel-induced feature space. Classification of healthy vegetation versus fire-affected areas based on morphological multi-attribute profiles was also studied. The availability of pre- and post-fire imaging spectroscopy data over the Rim Fire provided a unique opportunity to evaluate the performance of bi-temporal imaging spectroscopy for assessing post-fire effects. This type of data is currently constrained because of limited airborne acquisitions before a fire, but will become widespread with future spaceborne sensors such as those on
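
    A sketch of the kernel-regression step follows, fitting an RBF epsilon-SVR from spectra to GeoCBI on synthetic placeholder data; the hyperparameters are illustrative, not the study's tuned values.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
spectra = rng.random((150, 50))      # placeholder stand-in for AVIRIS spectra
geocbi = 3.0 * spectra[:, 10] - spectra[:, 30] + 0.1 * rng.normal(size=150)

# epsilon-SVR with an RBF kernel learns the spectra -> GeoCBI mapping in a
# kernel-induced feature space, as described above.
model = SVR(kernel="rbf", C=10.0, epsilon=0.1, gamma="scale").fit(spectra, geocbi)
severity_estimates = model.predict(spectra)
```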

  5. [Estimation and Visualization of Nitrogen Content in Citrus Canopy Based on Two Band Vegetation Index (TBVI)].

    Science.gov (United States)

    Wang, Qiao-nan; Ye, Xu-jun; Li, Jin-meng; Xiao, Yu-zhao; He, Yong

    2015-03-01

    Nitrogen is a necessary and important element for the growth and development of fruit orchards. Timely, accurate and nondestructive monitoring of nitrogen status in fruit orchards would help maintain fruit quality and efficient production, and mitigate the pollution of water resources caused by excessive nitrogen fertilization. This study investigated the capability of hyperspectral imagery for estimating and visualizing the nitrogen content in the citrus canopy. Hyperspectral images were obtained for leaf samples in the laboratory as well as for the whole canopy in the field with an ImSpector V10E (Spectral Imaging Ltd., Oulu, Finland). The spectral data for each leaf sample were represented by the average spectrum extracted from a selected region of interest (ROI) in the hyperspectral images with the aid of the ENVI software. The nitrogen content of each leaf sample was measured by the Dumas combustion method with a rapid N cube (Elementar Analytical, Germany). Simple correlation analysis and the two-band vegetation index (TBVI) were then used to develop spectral nitrogen content prediction models. The results indicated that the model with the two-band vegetation index (TBVI) based on the wavelengths 811 and 856 nm achieved the optimal estimation of nitrogen content in citrus leaves (R2 = 0.6071). Furthermore, the canopy image for the identified TBVI was calculated, and the nitrogen content of the canopy was visualized by incorporating the model into the TBVI image. The tender leaves, middle-aged leaves and elder leaves showed distinct nitrogen status from high to low levels in the canopy image. The results suggest the potential of hyperspectral imagery for nondestructive detection and diagnosis of nitrogen status in the citrus canopy in real time. Different from previous studies focused on nitrogen content prediction at the leaf level, this study succeeded in predicting and visualizing the nutrient
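
    Assuming the usual normalized-difference form of a two-band vegetation index (the abstract does not spell out the formula, so this is an assumption), the identified 811/856 nm index can be computed per pixel as sketched below.

```python
import numpy as np

def tbvi(cube, wavelengths, wl1=811.0, wl2=856.0):
    """cube: (H, W, B) hyperspectral reflectance cube; wavelengths: length-B
    array of band centers in nm. Returns the normalized-difference index
    (R1 - R2) / (R1 + R2) at the bands nearest the two target wavelengths."""
    wl = np.asarray(wavelengths, dtype=float)
    b1 = int(np.argmin(np.abs(wl - wl1)))
    b2 = int(np.argmin(np.abs(wl - wl2)))
    r1 = cube[..., b1].astype(float)
    r2 = cube[..., b2].astype(float)
    return (r1 - r2) / (r1 + r2 + 1e-12)   # small epsilon avoids division by zero
```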

  6. CONTEXT BASED FOOD IMAGE ANALYSIS

    OpenAIRE

    He, Ye; Xu, Chang; Khanna, Nitin; Boushey, Carol J.; Delp, Edward J.

    2013-01-01

    We are developing a dietary assessment system that records daily food intake through the use of food images. Recognizing food in an image is difficult due to large visual variance with respect to eating or preparation conditions. This task becomes even more challenging when different foods have similar visual appearance. In this paper we propose to incorporate two types of contextual dietary information, food co-occurrence patterns and personalized learning models, in food image analysis to r...

  7. HCS-Neurons: identifying phenotypic changes in multi-neuron images upon drug treatments of high-content screening.

    Science.gov (United States)

    Charoenkwan, Phasit; Hwang, Eric; Cutler, Robert W; Lee, Hua-Chin; Ko, Li-Wei; Huang, Hui-Ling; Ho, Shinn-Ying

    2013-01-01

    High-content screening (HCS) has become a powerful tool for drug discovery. However, the discovery of drugs targeting neurons is still hampered by the inability to accurately identify and quantify the phenotypic changes of multiple neurons in a single image (named a multi-neuron image) of a high-content screen. Therefore, it is desirable to develop an automated image analysis method for analyzing multi-neuron images. We propose an automated analysis method with novel descriptors of neuromorphology features for analyzing HCS-based multi-neuron images, called HCS-neurons. To observe multiple phenotypic changes of neurons, we propose two kinds of descriptors: a neuron feature descriptor (NFD) of 13 neuromorphology features, e.g., neurite length, and generic feature descriptors (GFDs), e.g., Haralick texture. HCS-neurons can 1) automatically extract all quantitative phenotype features in both NFD and GFDs, 2) identify statistically significant phenotypic changes upon drug treatments using ANOVA and regression analysis, and 3) generate an accurate classifier to group neurons treated with different drug concentrations using a support vector machine and an intelligent feature selection method. To evaluate HCS-neurons, we treated P19 neurons with nocodazole (a microtubule depolymerizing drug which has been shown to impair neurite development) at six concentrations ranging from 0 to 1000 ng/mL. The experimental results show that all 13 NFD features differ statistically significantly across the various levels of nocodazole drug concentration (NDC), and the phenotypic changes of neurites were consistent with the known effect of nocodazole in promoting neurite retraction. Three identified features, total neurite length, average neurite length, and average neurite area, were able to achieve an independent test accuracy of 90.28% for the six-dosage classification problem. This NFD module and neuron image datasets are provided as a freely downloadable
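
    A sketch of the classification step under stated assumptions: univariate selection approximates the paper's intelligent feature selection step, and the feature matrix is random stand-in data for the 13 NFD features, not the published dataset.

```python
# Sketch: grouping neurons by nocodazole dose from neuromorphology features
# with feature selection plus an SVM, in the spirit of HCS-neurons.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.random((600, 13))            # 13 NFD features per neuron (stand-in)
y = rng.integers(0, 6, 600)          # six dose classes, 0 to 1000 ng/mL

clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=3),   # e.g. total/average neurite length, area
                    SVC(kernel="rbf", C=1.0))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```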

  8. Hessian-based quantitative image analysis of host-pathogen confrontation assays.

    Science.gov (United States)

    Cseresnyes, Zoltan; Kraibooj, Kaswara; Figge, Marc Thilo

    2018-03-01

    Host-fungus interactions have gained a lot of interest in the past few decades, mainly due to an increasing number of fungal infections that are often associated with a high mortality rate in the absence of effective therapies. These interactions can be studied at the genetic level or at the functional level via imaging. Here, we introduce a new image processing method that quantifies the interaction between host cells and fungal invaders, for example, alveolar macrophages and the conidia of Aspergillus fumigatus. The new technique relies on the information content of transmitted-light bright-field microscopy images, utilizing the Hessian matrix eigenvalues to distinguish between unstained macrophages and the background, as well as between macrophages and fungal conidia. The performance of the new algorithm was measured by comparing the results of our method with those of an alternative approach that was based on fluorescence images from the same dataset. The comparison shows that the new algorithm performs very similarly to the fluorescence-based version. Consequently, the new algorithm is able to segment and characterize unlabeled cells, thus reducing the time and expense that would be spent on fluorescent labeling in preparation for phagocytosis assays. By extending the proposed method to the label-free segmentation of fungal conidia, we will be able to reduce the need for fluorescence-based imaging even further. Our approach should thus help to minimize the possible side effects of fluorescence labeling on biological functions. © 2017 International Society for Advancement of Cytometry.
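
    A rough illustration of the core idea, Hessian eigenvalues separating curved (object) from flat (background) regions, using scikit-image on a stock image; the sigma, the eigenvalue criterion and the Otsu threshold are assumptions for the sketch, not the paper's algorithm.

```python
# Sketch: Hessian-eigenvalue response for label-free object/background
# separation in a bright-field-like image.
import numpy as np
from skimage import data, filters
from skimage.feature import hessian_matrix, hessian_matrix_eigvals

image = data.coins().astype(float)               # stand-in for bright-field
H = hessian_matrix(image, sigma=3.0, order="rc")
l1, l2 = hessian_matrix_eigvals(H)               # l1 >= l2 at every pixel

# Background is locally flat (both eigenvalues near zero); cells produce a
# strong curvature response. Repeating at different sigmas would separate
# large structures (macrophages) from small ones (conidia).
response = np.abs(l2)
mask = response > filters.threshold_otsu(response)
```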

  9. Shape analysis in medical image analysis

    CERN Document Server

    Tavares, João

    2014-01-01

    This book contains thirteen contributions from invited experts of international recognition addressing important issues in shape analysis in medical image analysis, including techniques for image segmentation, registration, modelling and classification, and applications in biology, as well as in cardiac, brain, spine, chest, lung and clinical practice. This volume treats topics such as anatomic and functional shape representation and matching; shape-based medical image segmentation; shape registration; statistical shape analysis; shape deformation; shape-based abnormity detection; shape tracking and longitudinal shape analysis; machine learning for shape modeling and analysis; shape-based computer-aided-diagnosis; shape-based medical navigation; benchmark and validation of shape representation, analysis and modeling algorithms. This work will be of interest to researchers, students, and manufacturers in the fields of artificial intelligence, bioengineering, biomechanics, computational mechanics, computationa...

  10. Improved image retrieval based on fuzzy colour feature vector

    Science.gov (United States)

    Ben-Ahmeida, Ahlam M.; Ben Sasi, Ahmed Y.

    2013-03-01

    One of the image indexing techniques is content-based image retrieval (CBIR), an efficient way of retrieving images from an image database automatically, based on their visual contents such as colour, texture, and shape. This paper discusses a CBIR method based on colour feature extraction and similarity checking: the query image and all images in the database are divided into pieces, the features of each part are extracted separately, and the corresponding portions are compared in order to increase retrieval accuracy. The proposed approach is based on the use of fuzzy sets, to overcome the problem of the curse of dimensionality. The contribution of the colour of each pixel is associated with all the bins in the histogram using fuzzy-set membership functions. As a result, the Fuzzy Colour Histogram (FCH) outperformed the Conventional Colour Histogram (CCH) in image retrieval, due to its speedy results: images are represented as signatures that take less memory, depending on the number of divisions. The results also showed that FCH is less sensitive and more robust to brightness changes than the CCH, with better retrieval recall values.
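
    A small sketch of a fuzzy colour histogram in which each pixel contributes to all bins through membership functions; the triangular membership shape and the bin count are assumptions, since the record does not fix them.

```python
# Sketch: fuzzy colour histogram (FCH) vs. a conventional one-bin-per-pixel
# histogram: every pixel spreads its contribution over neighbouring bins.
import numpy as np

def fuzzy_histogram(channel, n_bins=16):
    """channel: 1-D array of values in [0, 255]; returns an FCH signature."""
    centers = np.linspace(0, 255, n_bins)
    width = centers[1] - centers[0]
    # Triangular membership: 1 at the bin center, 0 one bin-width away.
    mu = np.clip(1.0 - np.abs(channel[:, None] - centers[None, :]) / width, 0, 1)
    hist = mu.sum(axis=0)
    return hist / hist.sum()                     # normalized compact signature

pixels = np.random.randint(0, 256, 5000).astype(float)
signature = fuzzy_histogram(pixels)
# Signatures like this can be compared piecewise between the query image and
# database images, e.g. with an L1 distance over corresponding portions.
```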

  11. An automated wide-field time-gated optically sectioning fluorescence lifetime imaging multiwell plate reader for high-content analysis of protein-protein interactions

    Science.gov (United States)

    Alibhai, Dominic; Kumar, Sunil; Kelly, Douglas; Warren, Sean; Alexandrov, Yuriy; Munro, Ian; McGinty, James; Talbot, Clifford; Murray, Edward J.; Stuhmeier, Frank; Neil, Mark A. A.; Dunsby, Chris; French, Paul M. W.

    2011-03-01

    We describe an optically-sectioned FLIM multiwell plate reader that combines Nipkow microscopy with wide-field time-gated FLIM, and its application to high content analysis of FRET. The system acquires optically sectioned FLIM images of cells expressing fluorescent protein. It has been applied to study the formation of immature HIV virus-like particles (VLPs) in live cells by monitoring Gag-Gag protein interactions using FLIM FRET of HIV-1 Gag transfected with CFP or YFP. VLP formation results in FRET between closely packed Gag proteins, as confirmed by our FLIM analysis that includes automatic image segmentation.

  12. High content analysis of phagocytic activity and cell morphology with PuntoMorph.

    Science.gov (United States)

    Al-Ali, Hassan; Gao, Han; Dalby-Hansen, Camilla; Peters, Vanessa Ann; Shi, Yan; Brambilla, Roberta

    2017-11-01

    Phagocytosis is essential for maintenance of normal homeostasis and healthy tissue. As such, it is a therapeutic target for a wide range of clinical applications. The development of phenotypic screens targeting phagocytosis has lagged behind, however, due to the difficulties associated with image-based quantification of phagocytic activity. We present a robust algorithm and cell-based assay system for high content analysis of phagocytic activity. The method utilizes fluorescently labeled beads as a phagocytic substrate with defined physical properties. The algorithm employs statistical modeling to determine the mean fluorescence of individual beads within each image, and uses the information to conduct an accurate count of phagocytosed beads. In addition, the algorithm conducts detailed and sophisticated analysis of cellular morphology, making it a standalone tool for high content screening. We tested our assay system using microglial cultures. Our results recapitulated previous findings on the effects of microglial stimulation on cell morphology and phagocytic activity. Moreover, our cell-level analysis revealed that the two phenotypes associated with microglial activation, specifically cell body hypertrophy and increased phagocytic activity, are not highly correlated. This novel finding suggests the two phenotypes may be under the control of distinct signaling pathways. We demonstrate that our assay system outperforms preexisting methods for quantifying phagocytic activity in multiple dimensions including speed, accuracy, and resolution. We provide a framework to facilitate the development of high content assays suitable for drug screening. For convenience, we implemented our algorithm in a standalone software package, PuntoMorph. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. A Content Analysis of College and University Viewbooks (Brochures).

    Science.gov (United States)

    Hite, Robert E.; Yearwood, Alisa

    2001-01-01

    Systematically examined the content and components of college viewbooks/brochures. Compiled findings on: (1) physical components (e.g., photographs and slogans); (2) message content based on school characteristics such as size, type of school, enrollment, location, etc.; and (3) the type of image schools with different characteristics are seeking…

  14. Cell Painting, a high-content image-based assay for morphological profiling using multiplexed fluorescent dyes

    Science.gov (United States)

    Bray, Mark-Anthony; Singh, Shantanu; Han, Han; Davis, Chadwick T.; Borgeson, Blake; Hartland, Cathy; Kost-Alimova, Maria; Gustafsdottir, Sigrun M.; Gibson, Christopher C.; Carpenter, Anne E.

    2016-01-01

    In morphological profiling, quantitative data are extracted from microscopy images of cells to identify biologically relevant similarities and differences among samples based on these profiles. This protocol describes the design and execution of experiments using Cell Painting, a morphological profiling assay multiplexing six fluorescent dyes imaged in five channels, to reveal eight broadly relevant cellular components or organelles. Cells are plated in multi-well plates, perturbed with the treatments to be tested, stained, fixed, and imaged on a high-throughput microscope. Then, automated image analysis software identifies individual cells and measures ~1,500 morphological features (various measures of size, shape, texture, intensity, etc.) to produce a rich profile suitable for detecting subtle phenotypes. Profiles of cell populations treated with different experimental perturbations can be compared to suit many goals, such as identifying the phenotypic impact of chemical or genetic perturbations, grouping compounds and/or genes into functional pathways, and identifying signatures of disease. Cell culture and image acquisition takes two weeks; feature extraction and data analysis take an additional 1-2 weeks. PMID:27560178

  15. Development of a quantitative morphological assessment of toxicant-treated zebrafish larvae using brightfield imaging and high-content analysis.

    Science.gov (United States)

    Deal, Samantha; Wambaugh, John; Judson, Richard; Mosher, Shad; Radio, Nick; Houck, Keith; Padilla, Stephanie

    2016-09-01

    One of the rate-limiting procedures in a developmental zebrafish screen is the morphological assessment of each larva. Most researchers opt for a time-consuming, structured visual assessment by trained human observer(s). The present studies were designed to develop a more objective, accurate and rapid method for screening zebrafish for dysmorphology. Instead of the very detailed human assessment, we have developed the computational malformation index, which combines the use of high-content imaging with a very brief human visual assessment. Each larva was quickly assessed by a human observer (basic visual assessment), killed, fixed and assessed for dysmorphology with the Zebratox V4 BioApplication using the Cellomics® ArrayScan® V(TI) high-content image analysis platform. The basic visual assessment adds in-life parameters, and the high-content analysis assesses each individual larva for various features (total area, width, spine length, head-tail length, length-width ratio, perimeter-area ratio). In developing the computational malformation index, a training set of hundreds of embryos treated with hundreds of chemicals were visually assessed using the basic or detailed method. In the second phase, we assessed both the stability of these high-content measurements and its performance using a test set of zebrafish treated with a dose range of two reference chemicals (trans-retinoic acid or cadmium). We found the measures were stable for at least 1 week and comparison of these automated measures to detailed visual inspection of the larvae showed excellent congruence. Our computational malformation index provides an objective manner for rapid phenotypic brightfield assessment of individual larva in a developmental zebrafish assay. Copyright © 2016 John Wiley & Sons, Ltd.

  16. Retinal Imaging and Image Analysis

    Science.gov (United States)

    Abràmoff, Michael D.; Garvin, Mona K.; Sonka, Milan

    2011-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world that includes age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications. Methods for 2-D fundus imaging and techniques for 3-D optical coherence tomography (OCT) imaging are reviewed. Special attention is given to quantitative techniques for analysis of fundus photographs with a focus on clinically relevant assessment of retinal vasculature, identification of retinal lesions, assessment of optic nerve head (ONH) shape, building retinal atlases, and to automated methods for population screening for retinal diseases. A separate section is devoted to 3-D analysis of OCT images, describing methods for segmentation and analysis of retinal layers, retinal vasculature, and 2-D/3-D detection of symptomatic exudate-associated derangements, as well as to OCT-based analysis of ONH morphology and shape. Throughout the paper, aspects of image acquisition, image analysis, and clinical relevance are treated together considering their mutually interlinked relationships. PMID:22275207

  17. A multi-scale convolutional neural network for phenotyping high-content cellular images.

    Science.gov (United States)

    Godinez, William J; Hossain, Imtiaz; Lazic, Stanley E; Davies, John W; Zhang, Xian

    2017-07-01

    Identifying phenotypes based on high-content cellular images is challenging. Conventional image analysis pipelines for phenotype identification comprise multiple independent steps, with each step requiring method customization and adjustment of multiple parameters. Here, we present an approach based on a multi-scale convolutional neural network (M-CNN) that classifies, in a single cohesive step, cellular images into phenotypes by using directly and solely the images' pixel intensity values. The only parameters in the approach are the weights of the neural network, which are automatically optimized based on training images. The approach requires no a priori knowledge or manual customization, and is applicable to single- or multi-channel images displaying single or multiple cells. We evaluated the classification performance of the approach on eight diverse benchmark datasets. The approach yielded overall a higher classification accuracy compared with state-of-the-art results, including those of other deep CNN architectures. In addition to using the network to simply obtain a yes-or-no prediction for a given phenotype, we use the probability outputs calculated by the network to quantitatively describe the phenotypes. This study shows that these probability values correlate with chemical treatment concentrations. This finding further validates our approach and enables chemical treatment potency estimation via CNNs. The network specifications and solver definitions are provided in Supplementary Software 1. william_jose.godinez_navarro@novartis.com or xian-1.zhang@novartis.com. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
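
    A compact PyTorch sketch in the spirit of an M-CNN: parallel branches process the raw pixels at several downsampling factors, and their pooled features are fused for phenotype classification. Layer widths, scales and the class count are illustrative, not the published architecture.

```python
# Sketch: multi-scale CNN taking only pixel intensities as input.
import torch
import torch.nn as nn

class MSCNN(nn.Module):
    def __init__(self, n_classes=8, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1))
            for _ in scales])
        self.head = nn.Linear(32 * len(scales), n_classes)

    def forward(self, x):
        feats = []
        for s, branch in zip(self.scales, self.branches):
            xs = nn.functional.avg_pool2d(x, s) if s > 1 else x  # coarser view
            feats.append(branch(xs).flatten(1))
        return self.head(torch.cat(feats, dim=1))               # fused scales

logits = MSCNN()(torch.randn(4, 1, 128, 128))    # raw pixels, no hand-crafted features
probs = logits.softmax(dim=1)                    # probabilities usable as phenotype scores
```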

  18. Object-Based Image Analysis Beyond Remote Sensing - the Human Perspective

    Science.gov (United States)

    Blaschke, T.; Lang, S.; Tiede, D.; Papadakis, M.; Györi, A.

    2016-06-01

    We introduce a prototypical methodological framework for a place-based GIS-RS system for the spatial delineation of place while incorporating spatial analysis and mapping techniques using methods from different fields such as environmental psychology, geography, and computer science. The methodological lynchpin for this to happen - when aiming to delineate place in terms of objects - is object-based image analysis (OBIA).

  19. Complete chromogen separation and analysis in double immunohistochemical stains using Photoshop-based image analysis.

    Science.gov (United States)

    Lehr, H A; van der Loos, C M; Teeling, P; Gown, A M

    1999-01-01

    Simultaneous detection of two different antigens on paraffin-embedded and frozen tissues can be accomplished by double immunohistochemistry. However, many double chromogen systems suffer from signal overlap, precluding definite signal quantification. To separate and quantitatively analyze the different chromogens, we imported images into a Macintosh computer using a CCD camera attached to a diagnostic microscope and used Photoshop software for the recognition, selection, and separation of colors. We show here that Photoshop-based image analysis allows complete separation of chromogens not only on the basis of their RGB spectral characteristics, but also on the basis of information concerning saturation, hue, and luminosity intrinsic to the digitized images. We demonstrate that Photoshop-based image analysis provides superior results compared to color separation using bandpass filters. Quantification of the individual chromogens is then provided by Photoshop using the Histogram command, which supplies information on the luminosity (corresponding to gray levels of black-and-white images) and on the number of pixels as a measure of spatial distribution. (J Histochem Cytochem 47:119-125, 1999)

  20. Information content of poisson images

    International Nuclear Information System (INIS)

    Cederlund, J.

    1979-04-01

    One major problem when producing images with the aid of Poisson distributed quanta is how best to compromise between spatial and contrast resolution. Increasing the number of image elements improves spatial resolution, but at the cost of fewer quanta per image element, which reduces contrast resolution. Information theory arguments are used to analyse this problem. It is argued that information capacity is a useful concept to describe an important property of the imaging device, but that in order to compute the information content of an image produced by this device some statistical properties (such as the a priori probability of the densities) of the object to be depicted must be taken into account. If these statistical properties are not known one cannot make a correct choice between spatial and contrast resolution. (author)
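
    The trade-off can be made concrete with a back-of-the-envelope calculation (an illustration, not the paper's analysis): with a fixed budget of N quanta spread over M image elements, each element is approximately Poisson with mean N/M, whose entropy for large means is about 0.5·log2(2πeλ) bits. The per-element contrast information falls as M grows, which is why, as the author argues, the a priori statistics of the object are needed to choose the compromise.

```python
# Sketch: per-element vs. raw total information as image elements multiply,
# at a fixed total quanta budget. Uses the large-mean Poisson entropy
# approximation H ~ 0.5*log2(2*pi*e*lambda) bits per element.
import numpy as np

N = 1e6                                          # total detected quanta
for M in [1e2, 1e3, 1e4, 1e5]:
    lam = N / M                                  # mean quanta per element
    h = 0.5 * np.log2(2 * np.pi * np.e * lam)    # bits per element (approx.)
    # h (contrast resolution) falls as M (spatial resolution) grows; the raw
    # sum M*h still rises, so capacity alone cannot pick the best compromise.
    print(f"M={int(M):>7}  per-element H = {h:5.2f} bits  raw total = {M*h/1e3:7.1f} kbits")
```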

  1. Qualitative Content Analysis

    OpenAIRE

    Philipp Mayring

    2000-01-01

    The article describes an approach to systematic, rule-guided qualitative text analysis, which tries to preserve some methodological strengths of quantitative content analysis and widen them into a concept of qualitative procedure. First the development of content analysis is delineated and the basic principles are explained (units of analysis, step models, working with categories, validity and reliability). Then the central procedures of qualitative content analysis, inductive development of ca...

  2. Pathfinder: multiresolution region-based searching of pathology images using IRM.

    OpenAIRE

    Wang, J. Z.

    2000-01-01

    The fast growth of digitized pathology slides has created great challenges in research on image database retrieval. The prevalent retrieval technique involves human-supplied text annotations to describe slide contents. These pathology images typically have very high resolution, making it difficult to search based on image content. In this paper, we present Pathfinder, an efficient multiresolution region-based searching system for high-resolution pathology image libraries. The system uses wave...

  3. High content image-based screening of a protease inhibitor library reveals compounds broadly active against Rift Valley fever virus and other highly pathogenic RNA viruses.

    Directory of Open Access Journals (Sweden)

    Rajini Mudhasani

    2014-08-01

    Full Text Available High content image-based screening was developed as an approach to test a protease inhibitor small molecule library for antiviral activity against Rift Valley fever virus (RVFV) and to determine their mechanism of action. RVFV is the causative agent of severe disease of humans and animals throughout Africa and the Arabian Peninsula. Of the 849 compounds screened, 34 compounds exhibited ≥ 50% inhibition against RVFV. All of the hit compounds could be classified into 4 distinct groups based on their unique chemical backbone. Some of the compounds also showed broad antiviral activity against several highly pathogenic RNA viruses including Ebola, Marburg, Venezuelan equine encephalitis, and Lassa viruses. Four hit compounds (C795-0925, D011-2120, F694-1532 and G202-0362), which were most active against RVFV and showed broad-spectrum antiviral activity, were selected for further evaluation for their cytotoxicity, dose response profile, and mode of action using classical virological methods and high-content imaging analysis. Time-of-addition assays in RVFV infections suggested that D011-2120 and G202-0362 targeted virus egress, while C795-0925 and F694-1532 inhibited virus replication. We showed that D011-2120 exhibited its antiviral effects by blocking microtubule polymerization, thereby disrupting the Golgi complex and inhibiting viral trafficking to the plasma membrane during virus egress. While G202-0362 also affected virus egress, it appears to do so by a different mechanism, namely by blocking virus budding from the trans Golgi. F694-1532 inhibited viral replication, but also appeared to inhibit overall cellular gene expression. However, G202-0362 and C795-0925 did not alter any of the morphological features that we examined and thus may prove to be good candidates for antiviral drug development. Overall this work demonstrates that high-content image analysis can be used to screen chemical libraries for new antivirals and to determine their

  4. Fat is fashionable and fit: A comparative content analysis of Fatspiration and Health at Every Size® Instagram images.

    Science.gov (United States)

    Webb, Jennifer B; Vinoski, Erin R; Bonar, Adrienne S; Davies, Alexandria E; Etzel, Lena

    2017-09-01

    In step with the proliferation of Thinspiration and Fitspiration content disseminated in popular web-based media, the fat acceptance movement has garnered heightened visibility within mainstream culture via the burgeoning Fatosphere weblog community. The present study extended previous Fatosphere research by comparing the shared and distinct strategies used to represent and motivate a fat-accepting lifestyle among 400 images sourced from Fatspiration- and Health at Every Size®-themed hashtags on Instagram. Images were systematically analyzed for the socio-demographic and body size attributes of the individuals portrayed alongside content reflecting dimensions of general fat acceptance, physical appearance pride, physical activity and health, fat shaming, and eating and weight loss-related themes. #fatspiration/#fatspo-tagged images more frequently promoted fat acceptance through fashion and beauty-related activism; #healthateverysize/#haes posts more often featured physically-active portrayals, holistic well-being, and weight stigma. Findings provide insight into the common and unique motivational factors and contradictory messages encountered in these fat-accepting social media communities. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Content-Based High-Resolution Remote Sensing Image Retrieval via Unsupervised Feature Learning and Collaborative Affinity Metric Fusion

    Directory of Open Access Journals (Sweden)

    Yansheng Li

    2016-08-01

    Full Text Available With the urgent demand for automatic management of large numbers of high-resolution remote sensing images, content-based high-resolution remote sensing image retrieval (CB-HRRS-IR has attracted much research interest. Accordingly, this paper proposes a novel high-resolution remote sensing image retrieval approach via multiple feature representation and collaborative affinity metric fusion (IRMFRCAMF. In IRMFRCAMF, we design four unsupervised convolutional neural networks with different layers to generate four types of unsupervised features from the fine level to the coarse level. In addition to these four types of unsupervised features, we also implement four traditional feature descriptors, including local binary pattern (LBP, gray level co-occurrence (GLCM, maximal response 8 (MR8, and scale-invariant feature transform (SIFT. In order to fully incorporate the complementary information among multiple features of one image and the mutual information across auxiliary images in the image dataset, this paper advocates collaborative affinity metric fusion to measure the similarity between images. The performance evaluation of high-resolution remote sensing image retrieval is implemented on two public datasets, the UC Merced (UCM dataset and the Wuhan University (WH dataset. Large numbers of experiments show that our proposed IRMFRCAMF can significantly outperform the state-of-the-art approaches.

  6. A kernel-based multi-feature image representation for histopathology image classification

    International Nuclear Information System (INIS)

    Moreno J; Caicedo J; Gonzalez F

    2010-01-01

    This paper presents a novel strategy for building a high-dimensional feature space to represent histopathology image contents. Histogram features, related to colors, textures and edges, are combined together in a unique image representation space using kernel functions. This feature space is further enhanced by the application of latent semantic analysis, to model hidden relationships among visual patterns. All that information is included in the new image representation space. Then, support vector machine classifiers are used to assign semantic labels to images. Processing and classification algorithms operate on top of kernel functions, so that the structure of the feature space is completely controlled using similarity measures and a dual representation. The proposed approach has shown a successful performance in a classification task using a dataset with 1,502 real histopathology images in 18 different classes. The results show that our approach for histological image classification obtains an improved average performance of 20.6% when compared to a conventional classification approach based on SVM directly applied to the original kernel.
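
    A sketch of the kernel-combination idea: separate histogram kernels for colour and texture (here histogram intersection, an assumed choice) are summed into one Gram matrix that the SVM consumes directly, so the feature space is controlled entirely through similarity measures. All data are random stand-ins.

```python
# Sketch: combining per-feature histogram kernels into a single precomputed
# kernel for SVM classification of histopathology images.
import numpy as np
from sklearn.svm import SVC

def intersection_kernel(A, B):
    """Histogram intersection between rows of A and rows of B."""
    return np.array([[np.minimum(a, b).sum() for b in B] for a in A])

rng = np.random.default_rng(2)
color = rng.dirichlet(np.ones(32), 200)          # per-image colour histograms
texture = rng.dirichlet(np.ones(32), 200)        # per-image texture histograms
y = rng.integers(0, 18, 200)                     # 18 histology classes

# A sum of kernels corresponds to a kernel on the concatenated feature space.
K = intersection_kernel(color, color) + intersection_kernel(texture, texture)
clf = SVC(kernel="precomputed").fit(K, y)
print("train accuracy:", clf.score(K, y))
```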

  7. A KERNEL-BASED MULTI-FEATURE IMAGE REPRESENTATION FOR HISTOPATHOLOGY IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    J Carlos Moreno

    2010-09-01

    Full Text Available This paper presents a novel strategy for building a high-dimensional feature space to represent histopathology image contents. Histogram features, related to colors, textures and edges, are combined together in a unique image representation space using kernel functions. This feature space is further enhanced by the application of Latent Semantic Analysis, to model hidden relationships among visual patterns. All that information is included in the new image representation space. Then, Support Vector Machine classifiers are used to assign semantic labels to images. Processing and classification algorithms operate on top of kernel functions, so that the structure of the feature space is completely controlled using similarity measures and a dual representation. The proposed approach has shown a successful performance in a classification task using a dataset with 1,502 real histopathology images in 18 different classes. The results show that our approach for histological image classification obtains an improved average performance of 20.6% when compared to a conventional classification approach based on SVM directly applied to the original kernel.

  8. Geographic Object-Based Image Analysis – Towards a new paradigm

    Science.gov (United States)

    Blaschke, Thomas; Hay, Geoffrey J.; Kelly, Maggi; Lang, Stefan; Hofmann, Peter; Addink, Elisabeth; Queiroz Feitosa, Raul; van der Meer, Freek; van der Werff, Harald; van Coillie, Frieke; Tiede, Dirk

    2014-01-01

    The amount of scientific literature on (Geographic) Object-based Image Analysis – GEOBIA has been and still is sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature extraction approaches. This article investigates these developments and their implications and asks whether or not this is a new paradigm in remote sensing and Geographic Information Science (GIScience). We first discuss several limitations of prevailing per-pixel methods when applied to high resolution images. Then we explore the paradigm concept developed by Kuhn (1962) and discuss whether GEOBIA can be regarded as a paradigm according to this definition. We crystallize core concepts of GEOBIA, including the role of objects, of ontologies and the multiplicity of scales and we discuss how these conceptual developments support important methods in remote sensing such as change detection and accuracy assessment. The ramifications of the different theoretical foundations between the ‘per-pixel paradigm’ and GEOBIA are analysed, as are some of the challenges along this path from pixels, to objects, to geo-intelligence. Based on several paradigm indications as defined by Kuhn and based on an analysis of peer-reviewed scientific literature we conclude that GEOBIA is a new and evolving paradigm. PMID:24623958

  9. Web Pages Content Analysis Using Browser-Based Volunteer Computing

    Directory of Open Access Journals (Sweden)

    Wojciech Turek

    2013-01-01

    Full Text Available Existing solutions to the problem of finding valuable information on the Web suffer from several limitations like simplified query languages, out-of-date information or arbitrary results sorting. In this paper a different approach to this problem is described. It is based on the idea of distributed processing of Web pages content. To provide sufficient performance, the idea of browser-based volunteer computing is utilized, which requires the implementation of text processing algorithms in JavaScript. In this paper the architecture of the Web pages content analysis system is presented, details concerning the implementation of the system and the text processing algorithms are described and test results are provided.

  10. Brain Based Learning in Science Education in Turkey: Descriptive Content and Meta Analysis of Dissertations

    Science.gov (United States)

    Yasar, M. Diyaddin

    2017-01-01

    This study aimed at performing content analysis and meta-analysis on dissertations related to brain-based learning in science education to find out the general trend and tendency of brain-based learning in science education and find out the effect of such studies on achievement and attitude of learners with the ultimate aim of raising awareness…

  11. Content Based Searching for INIS

    International Nuclear Information System (INIS)

    Jain, V.; Jain, R.K.

    2016-01-01

    Full text: Whatever a user wants is available on the internet, but to retrieve the information efficiently, a multilingual and most-relevant document search engine is a must. Most current search engines are word based or pattern based. They do not consider the meaning of the query posed to them and rely purely on its keywords, with no support for multilingual queries and no dismissal of non-relevant results. Current information-retrieval techniques either rely on an encoding process, using a certain perspective or classification scheme, to describe a given item, or perform a full-text analysis, searching for user-specified words. Neither case guarantees content matching, because an encoded description might reflect only part of the content and the mere occurrence of a word does not necessarily reflect the document's content. For general documents, there doesn't yet seem to be a much better option than lazy full-text analysis, by manually going through those endless results pages. In contrast to this, a new search engine should extract the meaning of the query and then perform the search based on this extracted meaning. The new search engine should also employ Interlingua-based machine translation technology to present information in the language of choice of the user. (author)

  12. Identification of Fusarium damaged wheat kernels using image analysis

    Directory of Open Access Journals (Sweden)

    Ondřej Jirsa

    2011-01-01

    Full Text Available Visual evaluation of kernels damaged by Fusarium spp. pathogens is labour intensive and, due to its subjective nature, can lead to inconsistencies. Digital imaging technology combined with appropriate statistical methods can provide much faster and more accurate evaluation of the proportion of visually scabby kernels. The aim of the present study was to develop a discrimination model to identify wheat kernels infected by Fusarium spp. using digital image analysis and statistical methods. Winter wheat kernels from field experiments were evaluated visually as healthy or damaged. Deoxynivalenol (DON) content was determined in individual kernels using an ELISA method. Images of individual kernels were produced using a digital camera on a dark background. Colour and shape descriptors were obtained by image analysis from the area representing the kernel. Healthy and damaged kernels differed significantly in DON content and kernel weight. Various combinations of individual shape and colour descriptors were examined during the development of the model using linear discriminant analysis. In addition to the basic descriptors of the RGB colour model (red, green, blue), very good classification was also obtained using hue from the HSL colour model (hue, saturation, luminance). The accuracy of classification using the developed discrimination model based on RGBH descriptors was 85%. The shape descriptors themselves were not specific enough to distinguish individual kernels.
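
    A minimal sketch of the discrimination step: linear discriminant analysis on mean R, G, B plus hue (the RGBH descriptors), with a deliberately simplified per-kernel feature extraction and random stand-in data.

```python
# Sketch: LDA on RGBH colour descriptors to separate healthy from
# Fusarium-damaged kernels.
import colorsys
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def rgbh_descriptor(kernel_pixels):
    """kernel_pixels: (n, 3) RGB values in [0, 1] from the kernel region."""
    r, g, b = kernel_pixels.mean(axis=0)
    h, _, _ = colorsys.rgb_to_hls(r, g, b)       # hue from the HSL model
    return np.array([r, g, b, h])

rng = np.random.default_rng(3)
X = np.vstack([rgbh_descriptor(rng.random((50, 3))) for _ in range(300)])
y = rng.integers(0, 2, 300)                      # 0 = healthy, 1 = damaged

lda = LinearDiscriminantAnalysis().fit(X, y)
print("resubstitution accuracy:", lda.score(X, y))
```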

  13. An LG-graph-based early evaluation of segmented images

    International Nuclear Information System (INIS)

    Tsitsoulis, Athanasios; Bourbakis, Nikolaos

    2012-01-01

    Image segmentation is one of the first important parts of image analysis and understanding. Evaluation of image segmentation, however, is a very difficult task, mainly because it requires human intervention and interpretation. In this work, we propose a blind reference evaluation scheme based on regional local–global (RLG) graphs, which aims at measuring the amount and distribution of detail in images produced by segmentation algorithms. The main idea derives from the field of image understanding, where image segmentation is often used as a tool for scene interpretation and object recognition. Evaluation here derives from summarization of the structural information content and not from the assessment of performance after comparisons with a golden standard. Results show measurements for segmented images acquired from three segmentation algorithms, applied on different types of images (human faces/bodies, natural environments and structures (buildings)). (paper)

  14. Two-dimensional DFA scaling analysis applied to encrypted images

    Science.gov (United States)

    Vargas-Olmos, C.; Murguía, J. S.; Ramírez-Torres, M. T.; Mejía Carlos, M.; Rosu, H. C.; González-Aguilar, H.

    2015-01-01

    The technique of detrended fluctuation analysis (DFA) has been widely used to unveil scaling properties of many different signals. In this paper, we determine the scaling properties of encrypted images by means of a two-dimensional DFA approach. To carry out the image encryption, we use an enhanced cryptosystem based on a rule-90 cellular automaton, and we compare the results obtained with its unmodified version and with the encryption system AES. The numerical results show that the encrypted images present a persistent behavior which is close to that of 1/f noise. These results point to the possibility that the DFA scaling exponent can be used to measure the quality of the encrypted image content.
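
    A condensed sketch of two-dimensional DFA as commonly formulated (cumulative 2-D profile, least-squares plane detrending in s-by-s blocks, log-log regression of the fluctuation function); the window sizes and the test image are illustrative, not the paper's setup.

```python
# Sketch: 2-D detrended fluctuation analysis of an image.
import numpy as np

def dfa2d(img, sizes=(8, 16, 32, 64)):
    # 2-D profile via double cumulative summation of the mean-removed image.
    prof = np.cumsum(np.cumsum(img - img.mean(), axis=0), axis=1)
    logs, logF = [], []
    for s in sizes:
        rows, cols = prof.shape[0] // s, prof.shape[1] // s
        yy, xx = np.mgrid[0:s, 0:s]
        A = np.column_stack([yy.ravel(), xx.ravel(), np.ones(s * s)])
        resid = []
        for i in range(rows):
            for j in range(cols):
                block = prof[i*s:(i+1)*s, j*s:(j+1)*s].ravel()
                coef, *_ = np.linalg.lstsq(A, block, rcond=None)  # plane fit
                resid.append(np.mean((block - A @ coef) ** 2))
        logs.append(np.log(s))
        logF.append(0.5 * np.log(np.mean(resid)))   # log of RMS fluctuation
    return np.polyfit(logs, logF, 1)[0]             # scaling exponent

cipher = np.random.rand(256, 256)                   # stand-in encrypted image
print("alpha =", dfa2d(cipher))                     # ~0.5 for uncorrelated noise
```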

  15. Image Recommendation Algorithm Using Feature-Based Collaborative Filtering

    Science.gov (United States)

    Kim, Deok-Hwan

    As the multimedia contents market continues its rapid expansion, the amount of image contents used in mobile phone services, digital libraries, and catalog service is increasing remarkably. In spite of this rapid growth, users experience high levels of frustration when searching for the desired image. Even though new images are profitable to the service providers, traditional collaborative filtering methods cannot recommend them. To solve this problem, in this paper, we propose feature-based collaborative filtering (FBCF) method to reflect the user's most recent preference by representing his purchase sequence in the visual feature space. The proposed approach represents the images that have been purchased in the past as the feature clusters in the multi-dimensional feature space and then selects neighbors by using an inter-cluster distance function between their feature clusters. Various experiments using real image data demonstrate that the proposed approach provides a higher quality recommendation and better performance than do typical collaborative filtering and content-based filtering techniques.

  16. Content relatedness in the social web based on social explicit semantic analysis

    Science.gov (United States)

    Ntalianis, Klimis; Otterbacher, Jahna; Mastorakis, Nikolaos

    2017-06-01

    In this paper a novel content relatedness algorithm for social media content is proposed, based on the Explicit Semantic Analysis (ESA) technique. The proposed scheme takes social interactions into consideration. In particular, starting from the vector space representation model, similarity is expressed by a summation of term weight products. In this paper, term weights are estimated by a social computing method, where the strength of each term is calculated from the attention the term receives. For this reason each post is split into two parts, the title and the comments area, while attention is defined by the number of social interactions such as likes and shares. The overall approach is named Social Explicit Semantic Analysis. Experimental results on real data show the advantages and limitations of the proposed approach, while an initial comparison between ESA and S-ESA is very promising.

  17. Measuring populism: comparing two methods of content analysis

    NARCIS (Netherlands)

    Rooduijn, M.; Pauwels, T.

    2011-01-01

    The measurement of populism - particularly over time and space - has received only scarce attention. In this research note two different ways to measure populism are compared: a classical content analysis and a computer-based content analysis. An analysis of political parties in the United Kingdom,

  18. Residual stress distribution analysis of heat treated APS TBC using image based modelling.

    Science.gov (United States)

    Li, Chun; Zhang, Xun; Chen, Ying; Carr, James; Jacques, Simon; Behnsen, Julia; di Michiel, Marco; Xiao, Ping; Cernik, Robert

    2017-08-01

    We carried out a residual stress distribution analysis in an APS TBC throughout the depth of the coatings. The samples were heat treated at 1150 °C for 190 h, and the data analysis used image-based modelling built on real 3D images measured by Computed Tomography (CT). The stress distribution in several 2D slices from the 3D model is included in this paper, as well as the stress distribution along several paths shown on the slices. Our analysis can explain the occurrence of the "jump" features near the interface between the top coat and the bond coat. These features in the residual stress distribution trend were measured (as a function of depth) by high-energy synchrotron XRD (as shown in our related research article entitled 'Understanding the Residual Stress Distribution through the Thickness of Atmosphere Plasma Sprayed (APS) Thermal Barrier Coatings (TBCs) by high energy Synchrotron XRD; Digital Image Correlation (DIC) and Image Based Modelling') (Li et al., 2017) [1].

  19. Content Analysis of Conceptually Based Physical Education in Southeastern United States Universities and Colleges

    Science.gov (United States)

    Williams, Suzanne Ellen; Greene, Leon; Satinsky, Sonya; Neuberger, John

    2016-01-01

    Purpose: The purposes of this study were to explore PE in higher education through the offering of traditional activity- and skills-based physical education (ASPE) and conceptually-based physical education (CPE) courses, and to conduct an exploratory content analysis on the CPE available to students in randomized colleges and universities in the…

  20. Anti-cancer agents in Saudi Arabian herbals revealed by automated high-content imaging

    KAUST Repository

    Hajjar, Dina A.; Kremb, Stephan Georg; Sioud, Salim; Emwas, Abdul-Hamid M.; Voolstra, Christian R.; Ravasi, Timothy

    2017-01-01

    in cancer therapy. Here, we used cell-based phenotypic profiling and image-based high-content screening to study the mode of action and potential cellular targets of plants historically used in Saudi Arabia's traditional medicine. We compared the cytological

  1. A Hybrid Probabilistic Model for Unified Collaborative and Content-Based Image Tagging.

    Science.gov (United States)

    Zhou, Ning; Cheung, William K; Qiu, Guoping; Xue, Xiangyang

    2011-07-01

    The increasing availability of large quantities of user contributed images with labels has provided opportunities to develop automatic tools to tag images to facilitate image search and retrieval. In this paper, we present a novel hybrid probabilistic model (HPM) which integrates low-level image features and high-level user provided tags to automatically tag images. For images without any tags, HPM predicts new tags based solely on the low-level image features. For images with user provided tags, HPM jointly exploits both the image features and the tags in a unified probabilistic framework to recommend additional tags to label the images. The HPM framework makes use of the tag-image association matrix (TIAM). However, since the number of images is usually very large and user-provided tags are diverse, TIAM is very sparse, thus making it difficult to reliably estimate tag-to-tag co-occurrence probabilities. We developed a collaborative filtering method based on nonnegative matrix factorization (NMF) for tackling this data sparsity issue. Also, an L1 norm kernel method is used to estimate the correlations between image features and semantic concepts. The effectiveness of the proposed approach has been evaluated using three databases containing 5,000 images with 371 tags, 31,695 images with 5,587 tags, and 269,648 images with 5,018 tags, respectively.
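
    A sketch of the NMF-based smoothing of the sparse tag-image association matrix described above: the low-rank reconstruction fills in plausible associations, from which tag-to-tag co-occurrence probabilities can be estimated more reliably. Matrix sizes and the rank are illustrative.

```python
# Sketch: NMF collaborative filtering over a sparse TIAM (images x tags).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(4)
tiam = (rng.random((500, 80)) > 0.97).astype(float)   # very sparse binary TIAM

nmf = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(tiam)                      # image factors
H = nmf.components_                              # tag factors
dense = W @ H                                    # smoothed associations

# Tag-to-tag co-occurrence probabilities estimated from the smoothed matrix,
# usable when recommending additional tags for a partially tagged image.
co = dense.T @ dense
co_prob = co / co.sum(axis=1, keepdims=True)
```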

  2. Image Analysis for X-ray Imaging of Food

    DEFF Research Database (Denmark)

    Einarsdottir, Hildur

    X-ray imaging systems are increasingly used for quality and safety evaluation both within food science and production. They offer non-invasive and nondestructive penetration capabilities to image the inside of food. This thesis presents applications of a novel grating-based X-ray imaging technique for quality and safety evaluation of food products. In this effort the fields of statistics, image analysis and statistical learning are combined to provide analytical tools for determining the aforementioned food traits. The work demonstrated includes a quantitative analysis of heat induced changes and defect detection in food. Compared to the complex three dimensional analysis of microstructure, here two dimensional images are considered, making the method applicable for an industrial setting. The advantages obtained by grating-based imaging are compared to conventional X-ray imaging, for both foreign

  3. The Content Analysis of Doctoral Theses in Geomatic Engineering in Turkey

    Directory of Open Access Journals (Sweden)

    Tahsin BOZTOPRAK

    2016-10-01

    Full Text Available The purpose of this study is to reach a conclusion based on the synthesis of doctoral dissertations published about surveying (geomatic engineering) by using content analysis. For this purpose, 325 doctoral dissertations published in Turkey were analyzed using the content analysis technique. The analysis determined that 70.46% of the doctoral dissertations were published in the last fifteen years. Most doctoral dissertations were published by Istanbul Technical University, Yıldız Technical University, Selçuk University and Karadeniz Technical University. Most of the doctorate studies were carried out in the discipline of public measurements/land management, at a rate of 32.62%. Turkish is the language of 89.54% of the theses, and the average number of pages is 156. Of the thesis advisors, 68.45% hold the title of professor. The most covered subjects are geographic information systems (15.46%), GPS/GNSS (12.36%) and satellite image analysis (11.24%).

  4. A World Wide Web Region-Based Image Search Engine

    DEFF Research Database (Denmark)

    Kompatsiaris, Ioannis; Triantafyllou, Evangelia; Strintzis, Michael G.

    2001-01-01

    In this paper the development of an intelligent image content-based search engine for the World Wide Web is presented. This system will offer a new form of media representation and access of content available in WWW. Information Web Crawlers continuously traverse the Internet and collect images...

  5. Artistic image analysis using graph-based learning approaches.

    Science.gov (United States)

    Carneiro, Gustavo

    2013-08-01

    We introduce a new methodology for the problem of artistic image analysis, which among other tasks, involves the automatic identification of visual classes present in an art work. In this paper, we advocate the idea that artistic image analysis must explore a graph that captures the network of artistic influences by computing the similarities in terms of appearance and manual annotation. One of the novelties of our methodology is the proposed formulation that is a principled way of combining these two similarities in a single graph. Using this graph, we show that an efficient random walk algorithm based on an inverted label propagation formulation produces more accurate annotation and retrieval results compared with the following baseline algorithms: bag of visual words, label propagation, matrix completion, and structural learning. We also show that the proposed approach leads to a more efficient inference and training procedures. This experiment is run on a database containing 988 artistic images (with 49 visual classification problems divided into a multiclass problem with 27 classes and 48 binary problems), where we show the inference and training running times, and quantitative comparisons with respect to several retrieval and annotation performance measures.

  6. Validity of Criteria-Based Content Analysis (CBCA) at Trial in Free-Narrative Interviews

    Science.gov (United States)

    Roma, Paolo; San Martini, Pietro; Sabatello, Ugo; Tatarelli, Roberto; Ferracuti, Stefano

    2011-01-01

    Objective: The reliability of child witness testimony in sexual abuse cases is often controversial, and few tools are available. Criteria-Based Content Analysis (CBCA) is a widely used instrument for evaluating psychological credibility in cases of suspected child sexual abuse. Only few studies have evaluated CBCA scores in children suspected of…

  7. Change detection for synthetic aperture radar images based on pattern and intensity distinctiveness analysis

    Science.gov (United States)

    Wang, Xiao; Gao, Feng; Dong, Junyu; Qi, Qiang

    2018-04-01

    Synthetic aperture radar (SAR) imagery is independent of atmospheric conditions, which makes it an ideal image source for change detection. Existing methods directly analyse all the regions in the speckle-noise-contaminated difference image, so their performance is easily affected by small noisy regions. In this paper, we propose a novel saliency-guided change detection framework based on pattern and intensity distinctiveness analysis. The saliency analysis step removes small noisy regions and therefore makes the proposed method more robust to speckle noise. In the proposed method, the log-ratio operator is first utilized to obtain a difference image (DI). Then, the saliency detection method based on pattern and intensity distinctiveness analysis is utilized to obtain the changed-region candidates. Finally, principal component analysis and k-means clustering are employed to analyse pixels in the changed-region candidates. The final change map is obtained by classifying these pixels into the changed or unchanged class. Experimental results on two real SAR image datasets demonstrate the effectiveness of the proposed method.
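
    A sketch of the classical back end of such a pipeline (log-ratio DI, PCA features from pixel neighbourhoods, k-means into changed/unchanged classes), omitting the paper's saliency pre-filtering; images and parameters are stand-ins.

```python
# Sketch: log-ratio difference image + PCA + k-means change detection.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
im1 = rng.random((64, 64)) + 0.1                 # pre-event SAR intensity
im2 = rng.random((64, 64)) + 0.1                 # post-event SAR intensity
di = np.abs(np.log(im2 / im1))                   # log-ratio difference image

# 3x3 neighbourhood of every interior pixel as a feature vector.
patches = np.stack([di[i:i+3, j:j+3].ravel()
                    for i in range(62) for j in range(62)])
feats = PCA(n_components=4).fit_transform(patches)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
change_map = labels.reshape(62, 62)              # changed vs. unchanged class
```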

  8. Gaussian process regression based optimal design of combustion systems using flame images

    International Nuclear Information System (INIS)

    Chen, Junghui; Chan, Lester Lik Teck; Cheng, Yi-Cheng

    2013-01-01

    Highlights: • Digital color images of flames are applied to combustion design. • A combustion model capturing the stochastic nature of the process is developed using a Gaussian process (GP). • GP-based design under uncertainty is carried out and evaluated on a real combustion system. - Abstract: With advanced methods of digital image processing and optical sensing, it is possible to carry out continuous imaging on-line in combustion processes. In this paper, a method that extracts characteristics from flame images is presented to immediately predict the outlet content of the flue gas. First, from the large number of flame image data, principal component analysis is used to discover the principal components, or combinational variables, which describe the important trends and variations in the operation data. Then stochastic modeling of the combustion process is done with a Gaussian process, with the aim of capturing the stochastic nature of the flame associated with the oxygen content. The design of the oxygen content accounts for the uncertainty present in the combustion. A reference image can be designed for the actual combustion process to provide easy and straightforward maintenance of the combustion process

  9. Determination of fat and total protein content in milk using conventional digital imaging.

    Science.gov (United States)

    Kucheryavskiy, Sergey; Melenteva, Anastasiia; Bogomolov, Andrey

    2014-04-01

    The applicability of conventional digital imaging to the quantitative determination of fat and total protein in cow's milk, based on the phenomenon of light scatter, has been proved. A new algorithm for extracting features from digital images of milk samples has been developed. The algorithm takes into account the spatial distribution of light diffusely transmitted through a sample. The proposed method has been tested on two sample sets prepared from industrial raw milk standards with variable fat and protein content. Partial Least-Squares (PLS) regression on the features calculated from images of monochromatically illuminated milk samples resulted in models with high prediction performance when the sets were analysed separately (best models with cross-validated R2 = 0.974 for protein and R2 = 0.973 for fat content). However, when the sets were analysed jointly, the obtained results were significantly worse (best models with cross-validated R2 = 0.890 for fat content and R2 = 0.720 for protein content). The results have been compared with a previously published Vis/SW-NIR spectroscopic study of similar samples. Copyright © 2013 Elsevier B.V. All rights reserved.
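
    A minimal sketch of the modelling step: PLS regression from image-derived scatter features to a reference fat content, with cross-validated R² as the figure of merit. The feature matrix is a stand-in for the spatial light-distribution features the paper extracts from the milk images.

```python
# Sketch: PLS regression from image scatter features to milk fat content.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
features = rng.random((60, 40))      # e.g. radial intensity profiles per sample
fat = rng.uniform(0.5, 6.0, 60)      # reference fat content, %

pls = PLSRegression(n_components=5)
print("CV R^2:", cross_val_score(pls, features, fat, cv=5, scoring="r2").mean())
```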

  10. Assessing cellular toxicities in fibroblasts upon exposure to lipid-based nanoparticles: a high content analysis approach

    International Nuclear Information System (INIS)

    Solmesky, Leonardo J; Weil, Miguel; Shuman, Michal; Goldsmith, Meir; Peer, Dan

    2011-01-01

    Lipid-based nanoparticles (LNPs) are widely used for the delivery of drugs and nucleic acids. Although most of them are considered safe, there is confusing evidence in the literature regarding their potential cellular toxicities. Moreover, little is known about the recovery process cells undergo after a cytotoxic insult. We have previously studied the systemic effects of common LNPs with different surface charge (cationic, anionic, neutral) and revealed that positively charged LNPs ((+)LNPs) activate pro-inflammatory cytokines and induce interferon response by acting as an agonist of Toll-like receptor 4 on immune cells. In this study, we focused on the response of human fibroblasts exposed to LNPs and their cellular recovery process. To this end, we used image-based high content analysis (HCA). Using this strategy, we were able to show simultaneously, in several intracellular parameters, that fibroblasts can recover from the cytotoxic effects of (+)LNPs. The use of HCA opens new avenues in understanding cellular response and nanotoxicity and may become a valuable tool for screening safe materials for drug delivery and tissue engineering.

  11. Assessing cellular toxicities in fibroblasts upon exposure to lipid-based nanoparticles: a high content analysis approach

    Science.gov (United States)

    Solmesky, Leonardo J.; Shuman, Michal; Goldsmith, Meir; Weil, Miguel; Peer, Dan

    2011-12-01

    Lipid-based nanoparticles (LNPs) are widely used for the delivery of drugs and nucleic acids. Although most of them are considered safe, there is confusing evidence in the literature regarding their potential cellular toxicities. Moreover, little is known about the recovery process cells undergo after a cytotoxic insult. We have previously studied the systemic effects of common LNPs with different surface charge (cationic, anionic, neutral) and revealed that positively charged LNPs ((+)LNPs) activate pro-inflammatory cytokines and induce interferon response by acting as an agonist of Toll-like receptor 4 on immune cells. In this study, we focused on the response of human fibroblasts exposed to LNPs and their cellular recovery process. To this end, we used image-based high content analysis (HCA). Using this strategy, we were able to show simultaneously, in several intracellular parameters, that fibroblasts can recover from the cytotoxic effects of (+)LNPs. The use of HCA opens new avenues in understanding cellular response and nanotoxicity and may become a valuable tool for screening safe materials for drug delivery and tissue engineering.

  12. Image analysis and modeling in medical image computing. Recent developments and advances.

    Science.gov (United States)

    Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T

    2012-01-01

    Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice, e.g. to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the degree of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements in clinical routine. In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models into the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for the prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present the latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications, and medical images like radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body.

  13. Multimedia content analysis

    CERN Document Server

    Ohm, Jens

    2016-01-01

    This textbook covers the theoretical backgrounds and practical aspects of image, video and audio feature expression, e.g., color, texture, edge, shape, salient point and area, motion, 3D structure, audio/sound in time, frequency and cepstral domains, structure and melody. Up-to-date algorithms for estimation, search, classification and compact expression of feature data are described in detail. Concepts of signal decomposition (such as segmentation, source tracking and separation), as well as composition, mixing, effects, and rendering, are discussed. Numerous figures and examples help to illustrate the aspects covered. The book was developed on the basis of a graduate-level university course, and most chapters are supplemented by problem-solving exercises. The book is also a self-contained introduction both for researchers and developers of multimedia content analysis systems in industry.

  14. Improving performance of content-based image retrieval schemes in searching for similar breast mass regions: an assessment

    International Nuclear Information System (INIS)

    Wang Xiaohui; Park, Sang Cheol; Zheng Bin

    2009-01-01

    This study aims to assess three methods commonly used in content-based image retrieval (CBIR) schemes and to investigate approaches to improve scheme performance. A reference database involving 3000 regions of interest (ROIs) was established. Among them, 400 ROIs were randomly selected to form a testing dataset. Three methods, namely mutual information, Pearson's correlation and a multi-feature-based k-nearest neighbor (KNN) algorithm, were applied to search for the 15 most similar reference ROIs to each testing ROI. The clinical relevance and visual similarity of the search results were evaluated using the areas under receiver operating characteristic (ROC) curves (A_Z) and the average mean square difference (MSD) of the mass boundary spiculation level ratings between testing and selected ROIs, respectively. The results showed that the A_Z values were 0.893 ± 0.009, 0.606 ± 0.021 and 0.699 ± 0.026 for the use of KNN, mutual information and Pearson's correlation, respectively. The A_Z values increased to 0.724 ± 0.017 and 0.787 ± 0.016 for mutual information and Pearson's correlation when using ROIs with the size adaptively adjusted based on actual mass size. The corresponding MSD values were 2.107 ± 0.718, 2.301 ± 0.733 and 2.298 ± 0.743. The study demonstrates that, due to the diversity of medical images, CBIR schemes using multiple image features and mass size-based ROIs can achieve significantly improved performance.
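
    A hedged sketch of the retrieval step: the multi-feature KNN search that performed best above can be approximated with a generic nearest-neighbour index (feature count, metric and data are illustrative assumptions, not the authors' implementation).

        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        rng = np.random.default_rng(1)
        reference = rng.random((3000, 14))   # stand-in: 3000 reference ROIs x 14 features
        query = rng.random((1, 14))          # stand-in for one testing ROI

        knn = NearestNeighbors(n_neighbors=15, metric="euclidean").fit(reference)
        distances, indices = knn.kneighbors(query)
        print("15 most similar reference ROIs:", indices[0])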

  15. Quantitative analysis of γ-oryzanol content in cold pressed rice bran oil by TLC-image analysis method.

    Science.gov (United States)

    Sakunpak, Apirak; Suksaeree, Jirapornchai; Monton, Chaowalit; Pathompak, Pathamaporn; Kraisintu, Krisana

    2014-02-01

    The objective of this study was to develop and validate an image analysis method for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. TLC-densitometric and TLC-image analysis methods were developed, validated, and used for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. The results obtained by these two different quantification methods were compared by paired t-test. Both assays provided good linearity, accuracy, reproducibility and selectivity for the determination of γ-oryzanol. The TLC-densitometric and TLC-image analysis methods provided similar reproducibility, accuracy and selectivity for the quantitative determination of γ-oryzanol in cold pressed rice bran oil. A statistical comparison of the quantitative determinations of γ-oryzanol in samples did not show any statistically significant difference between the TLC-densitometric and TLC-image analysis methods. As both methods were found to be equivalent, either can be used for the determination of γ-oryzanol in cold pressed rice bran oil.
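
    The statistical comparison described above reduces to a paired t-test on per-sample results from the two methods; a minimal sketch with made-up values follows (scipy's ttest_rel, not the authors' software).

        from scipy import stats

        # hypothetical gamma-oryzanol contents (% w/w) for the same five samples
        densitometric  = [1.52, 1.48, 1.61, 1.55, 1.50]
        image_analysis = [1.50, 1.49, 1.59, 1.57, 1.51]

        t_stat, p_value = stats.ttest_rel(densitometric, image_analysis)
        print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # p > 0.05: no significant difference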

  16. Content-based retrieval in videos from laparoscopic surgery

    Science.gov (United States)

    Schoeffmann, Klaus; Beecks, Christian; Lux, Mathias; Uysal, Merih Seran; Seidl, Thomas

    2016-03-01

    In the field of medical endoscopy more and more surgeons are changing over to record and store videos of their endoscopic procedures for long-term archival. These endoscopic videos are a good source of information for explanations to patients and follow-up operations. As the endoscope is the "eye of the surgeon", the video shows the same information the surgeon has seen during the operation, and can describe the situation inside the patient much more precisely than an operation report would do. Recorded endoscopic videos can also be used for training young surgeons, and in some countries the long-term archival of video recordings from endoscopic procedures is even enforced by law. A major challenge, however, is to efficiently access these very large video archives for later purposes. One problem, for example, is to locate specific images in the videos that show important situations, which are additionally captured as static images during the procedure. This work addresses this problem and focuses on content-based video retrieval in data from laparoscopic surgery. We propose to use feature signatures, which can appropriately and concisely describe the content of laparoscopic images, and show that by using this content descriptor with an appropriate metric, we are able to efficiently perform content-based retrieval in laparoscopic videos. In a dataset with 600 captured static images from 33 hours of recordings, we are able to find the correct video segment for more than 88% of these images.

  17. A Novel Secure Image Hashing Based on Reversible Watermarking for Forensic Analysis

    OpenAIRE

    Doyoddorj, Munkhbaatar; Rhee, Kyung-Hyune

    2011-01-01

    Part 2: Workshop; International audience; Nowadays, digital images and videos have become increasingly popular over the Internet and bring great social impact to a wide audience. In the meanwhile, technology advancement allows people to easily alter the content of digital multimedia and brings serious concern on the trustworthiness of online multimedia information. In this paper, we propose a new framework for multimedia forensics by using compact side information based on reversible watermar...

  18. Experiments with a novel content-based image retrieval software: can we eliminate classification systems in adolescent idiopathic scoliosis?

    Science.gov (United States)

    Menon, K Venugopal; Kumar, Dinesh; Thomas, Tessamma

    2014-02-01

    Study Design: Preliminary evaluation of a new tool. Objective: To ascertain whether the newly developed content-based image retrieval (CBIR) software can be used successfully to retrieve images of similar cases of adolescent idiopathic scoliosis (AIS) from a database to help plan treatment without adhering to a classification scheme. Methods: Sixty-two operated cases of AIS were entered into the newly developed CBIR database. Five new cases of different curve patterns were used as query images. The images were fed into the CBIR database, which retrieved similar images from the existing cases. These were analyzed by a senior surgeon for conformity to the query image. Results: Within the limits of variability set for the query system, all the resultant images conformed to the query image. One case had no similar match in the series. The other four retrieved several images that matched the query. No matching case was left out in the series. The postoperative images were then analyzed to check for surgical strategies. Broad guidelines for treatment could be derived from the results. More precise query settings, inclusion of bending films, and a larger database will enhance accurate retrieval and better decision making. Conclusion: The CBIR system is an effective tool for accurate documentation and retrieval of scoliosis images. Broad guidelines for surgical strategies can be derived from the postoperative images of the existing cases without adhering to any classification scheme.

  19. APPLICATION OF PRINCIPAL COMPONENT ANALYSIS TO RELAXOGRAPHIC IMAGES

    International Nuclear Information System (INIS)

    STOYANOVA, R.S.; OCHS, M.F.; BROWN, T.R.; ROONEY, W.D.; LI, X.; LEE, J.H.; SPRINGER, C.S.

    1999-01-01

    Standard analysis methods for processing inversion recovery MR images traditionally have used single pixel techniques. In these techniques each pixel is independently fit to an exponential recovery, and spatial correlations in the data set are ignored. By analyzing the image as a complete dataset, improved error analysis and automatic segmentation can be achieved. Here, the authors apply principal component analysis (PCA) to a series of relaxographic images. This procedure decomposes the 3-dimensional data set into three separate images and corresponding recovery times. They attribute the three images to spatial representations of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) content.
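
    The decomposition described above can be sketched as PCA over the image stack reshaped to pixels x recovery times; the stack below is synthetic, and the choice of three components follows the paper's three tissue classes as an assumption.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(3)
        n_times, h, w = 12, 32, 32              # 12 inversion times, 32x32 images
        stack = rng.random((n_times, h, w))     # stand-in for relaxographic images

        X = stack.reshape(n_times, -1).T        # rows: pixels, columns: recovery times
        pca = PCA(n_components=3)
        scores = pca.fit_transform(X)           # per-pixel weights on 3 components
        component_images = scores.T.reshape(3, h, w)  # GM-, WM- and CSF-like maps
        print(pca.explained_variance_ratio_)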

  20. Tag-Based Social Image Search: Toward Relevant and Diverse Results

    Science.gov (United States)

    Yang, Kuiyuan; Wang, Meng; Hua, Xian-Sheng; Zhang, Hong-Jiang

    Recent years have witnessed a great success of social media websites. Tag-based image search is an important approach to access the image content of interest on these websites. However, the existing ranking methods for tag-based image search frequently return results that are irrelevant or lack of diversity. This chapter presents a diverse relevance ranking scheme which simultaneously takes relevance and diversity into account by exploring the content of images and their associated tags. First, it estimates the relevance scores of images with respect to the query term based on both visual information of images and semantic information of associated tags. Then semantic similarities of social images are estimated based on their tags. Based on the relevance scores and the similarities, the ranking list is generated by a greedy ordering algorithm which optimizes Average Diverse Precision (ADP), a novel measure that is extended from the conventional Average Precision (AP). Comprehensive experiments and user studies demonstrate the effectiveness of the approach.
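
    The greedy ordering can be sketched as a marginal-relevance loop: each step picks the image with the best trade-off between relevance and similarity to what is already ranked. This is an MMR-style stand-in, not the exact ADP-optimising algorithm; the trade-off weight alpha is an assumption.

        import numpy as np

        def greedy_diverse_rank(relevance, similarity, alpha=0.7):
            remaining = set(range(len(relevance)))
            ranking = []
            while remaining:
                def gain(i):
                    # penalise similarity to the images already ranked
                    max_sim = max((similarity[i, j] for j in ranking), default=0.0)
                    return alpha * relevance[i] - (1 - alpha) * max_sim
                best = max(remaining, key=gain)
                ranking.append(best)
                remaining.remove(best)
            return ranking

        rng = np.random.default_rng(4)
        rel = rng.random(10)                      # estimated relevance scores
        sim = rng.random((10, 10))
        sim = (sim + sim.T) / 2                   # symmetric tag-based similarities
        print(greedy_diverse_rank(rel, sim))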

  1. Fractal-Based Image Analysis In Radiological Applications

    Science.gov (United States)

    Dellepiane, S.; Serpico, S. B.; Vernazza, G.; Viviani, R.

    1987-10-01

    We present some preliminary results of a study aimed to assess the actual effectiveness of fractal theory and to define its limitations in the area of medical image analysis for texture description, in particular, in radiological applications. A general analysis to select appropriate parameters (mask size, tolerance on fractal dimension estimation, etc.) has been performed on synthetically generated images of known fractal dimensions. Moreover, we analyzed some radiological images of human organs in which pathological areas can be observed. Input images were subdivided into blocks of 6x6 pixels; then, for each block, the fractal dimension was computed in order to create fractal images whose intensity was related to the D value, i.e., texture behaviour. Results revealed that the fractal images could point out the differences between normal and pathological tissues. By applying histogram-splitting segmentation to the fractal images, pathological areas were isolated. Two different techniques (i.e., the method developed by Pentland and the "blanket" method) were employed to obtain fractal dimension values, and the results were compared; in both cases, the appropriateness of the fractal description of the original images was verified.
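
    For illustration, a block-wise fractal dimension can be estimated with box counting, a common alternative to the Pentland and "blanket" estimators used in the paper; the block content and threshold below are synthetic.

        import numpy as np

        def box_counting_dimension(binary_block):
            n = binary_block.shape[0]
            sizes, counts = [], []
            size = n
            while size >= 1:
                m = n - n % size
                boxes = binary_block[:m, :m].reshape(m // size, size, m // size, size)
                occupied = boxes.any(axis=(1, 3)).sum()   # boxes holding foreground
                sizes.append(size)
                counts.append(max(occupied, 1))
                size //= 2
            slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
            return slope

        rng = np.random.default_rng(5)
        block = rng.random((64, 64)) > 0.5        # stand-in for a thresholded block
        print(box_counting_dimension(block))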

  2. Mobile object retrieval in server-based image databases

    Science.gov (United States)

    Manger, D.; Pagel, F.; Widak, H.

    2013-05-01

    The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information of the images on site, image retrieval systems are becoming more and more popular as a way to search for similar objects in one's own image database. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, the scalability to large image databases is addressed with the popular bag-of-words model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface that presents the most similar images of the database, highlighting the visual information which is common with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.

  3. Estimation of water content in the leaves of fruit trees using infra-red images

    International Nuclear Information System (INIS)

    Muramatsu, N.; Hiraoka, K.

    2006-01-01

    A method was developed to evaluate the water content of fruit trees using infra-red photography. The irrigation of potted satsuma mandarin trees and grapevines was suppressed to induce water stress. During the drought treatment the leaf edges of the basal parts of the shoots of grapevines became necrotic, and the area of necrosis extended as the duration of stress increased. Necrosis was clearly distinguished from the viable areas on infra-red images. In satsuma mandarin, an abscission layer formed at the basal part of the petiole, then the leaves fell. Thus, detailed analysis was indispensable for detecting the leaf water content. After obtaining infra-red images of satsuma mandarin leaves with or without water stress, a background treatment (subtraction of the background image) was performed on the images, then the average brightness of the leaf was determined using image analysis software (Image-Pro Plus). The correlation coefficient between the water-status index from the infra-red camera and the water content determined from the dry and fresh weights of leaves was significant (r = 0.917 for adaxial surface data and r = 0.880 for abaxial surface data). These data indicate that infra-red photography is useful for detecting the degree of plant water stress.

  4. Investigating Online Destination Images Using a Topic-Based Sentiment Analysis Approach

    Directory of Open Access Journals (Sweden)

    Gang Ren

    2017-09-01

    With the development of Web 2.0, many studies have tried to analyze tourist behavior utilizing user-generated contents. The primary purpose of this study is to propose a topic-based sentiment analysis approach, including a polarity classification and an emotion classification. We use the Latent Dirichlet Allocation model to extract topics from online travel review data and analyze the sentiments and emotions for each topic with our proposed approach. The top frequent words are extracted for each topic from online reviews on Ctrip.com. By comparing the relative importance of each topic, we conclude that many tourists prefer to provide "suggestion" reviews. In particular, we propose a new approach to classify the emotions of online reviews at the topic level utilizing an emotion lexicon, focusing on specific emotions to analyze customer complaints. The results reveal that attraction "management" obtains the most complaints. These findings may provide useful insights for the development of attractions and the measurement of online destination image. Our proposed method can be used to analyze reviews from many online platforms and domains.
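
    The topic-extraction step can be sketched with a standard LDA pipeline (an assumed implementation; the paper's corpus, language processing and emotion lexicon are not reproduced here).

        from sklearn.decomposition import LatentDirichletAllocation
        from sklearn.feature_extraction.text import CountVectorizer

        reviews = [                                  # toy stand-ins for travel reviews
            "great scenery but the ticket office queue was slow",
            "staff were rude, poor management of the attraction",
            "beautiful views, would suggest visiting early in the morning",
            "management should add more signs, we got lost twice",
        ]
        vec = CountVectorizer(stop_words="english")
        X = vec.fit_transform(reviews)
        lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
        terms = vec.get_feature_names_out()
        for k, topic in enumerate(lda.components_):
            top = [terms[i] for i in topic.argsort()[-5:][::-1]]
            print(f"topic {k}: {top}")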

  5. A combined use of multispectral and SAR images for ship detection and characterization through object based image analysis

    Science.gov (United States)

    Aiello, Martina; Gianinetto, Marco

    2017-10-01

    Marine routes represent a huge portion of commercial and human trades; therefore surveillance, security and environmental protection themes are gaining increasing importance. Being able to overcome the limits imposed by terrestrial means of monitoring, ship detection from satellite has recently prompted a renewed interest for the continuous monitoring of illegal activities. This paper describes an automatic Object Based Image Analysis (OBIA) approach to detect vessels made of different materials in various sea environments. The combined use of multispectral and SAR images allows for regular observation unrestricted by lighting and atmospheric conditions and for complementarity in terms of geographic coverage and geometric detail. The method developed adopts a region growing algorithm to segment the image into homogeneous objects, which are then classified through a decision tree algorithm based on spectral and geometrical properties. Then, a spatial analysis retrieves the vessels' position, length and heading parameters and associates a speed range. Optimization of the image processing chain is performed by selecting image tiles through a statistical index. Vessel candidates are detected over amplitude SAR images using an adaptive threshold Constant False Alarm Rate (CFAR) algorithm prior to the object-based analysis. Validation is carried out by comparing the retrieved parameters with the information provided by the Automatic Identification System (AIS), when available, or with manual measurements when AIS data are not available. The estimation of length shows R²=0.85 and the estimation of heading R²=0.92, computed as the average of the R² values obtained for both optical and radar images.
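
    The CFAR pre-screening step can be sketched as cell-averaging CFAR: a pixel becomes a vessel candidate when its amplitude exceeds a multiple of the locally estimated clutter. Window sizes and the scale factor below are illustrative assumptions, not values from the paper.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def cfar_candidates(amplitude, background=21, guard=5, scale=3.0):
            big = uniform_filter(amplitude, background)    # mean over large window
            small = uniform_filter(amplitude, guard)       # mean over guard window
            n_big, n_small = background ** 2, guard ** 2
            # background clutter estimate excluding the guard region
            clutter = (big * n_big - small * n_small) / (n_big - n_small)
            return amplitude > scale * clutter

        rng = np.random.default_rng(6)
        sea = rng.rayleigh(1.0, (256, 256))    # Rayleigh-like sea clutter stand-in
        sea[100:104, 120:140] += 15.0          # a bright, vessel-like target
        print(cfar_candidates(sea).sum(), "candidate pixels")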

  6. Shedding Light on Filovirus Infection with High-Content Imaging

    Directory of Open Access Journals (Sweden)

    Rekha G. Panchal

    2012-08-01

    Microscopy has been instrumental in the discovery and characterization of microorganisms. Major advances in high-throughput fluorescence microscopy and automated, high-content image analysis tools are paving the way to the systematic and quantitative study of the molecular properties of cellular systems, both at the population and at the single-cell level. High-Content Imaging (HCI) has been used to characterize host-virus interactions in genome-wide reverse genetic screens and to identify novel cellular factors implicated in the binding, entry, replication and egress of several pathogenic viruses. Here we present an overview of the most significant applications of HCI in the context of the cell biology of filovirus infection. HCI assays have been recently implemented to quantitatively study filoviruses in cell culture, employing either infectious viruses in a BSL-4 environment or surrogate genetic systems in a BSL-2 environment. These assays are becoming instrumental for small molecule and siRNA screens aimed at the discovery of both cellular therapeutic targets and of compounds with anti-viral properties. We discuss the current practical constraints limiting the implementation of high-throughput biology in a BSL-4 environment, and propose possible solutions to safely perform high-content, high-throughput filovirus infection assays. Finally, we discuss possible novel applications of HCI in the context of filovirus research with particular emphasis on the identification of possible cellular biomarkers of virus infection.

  7. MO-FG-202-06: Improving the Performance of Gamma Analysis QA with Radiomics- Based Image Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wootton, L; Nyflot, M; Ford, E [University of Washington Department of Radiation Oncology, Seattle, WA (United States); Chaovalitwongse, A [University of Washington Department of Industrial and Systems Engineering, Seattle, Washington (United States); University of Washington Department of Radiology, Seattle, WA (United States); Li, N [University of Washington Department of Industrial and Systems Engineering, Seattle, Washington (United States)

    2016-06-15

    Purpose: The use of gamma analysis for IMRT quality assurance has well-known limitations. Traditionally, a simple thresholding technique is used to evaluate passing criteria. However, like any image, the gamma distribution is rich in information, most of which thresholding discards. We therefore propose a novel method of analyzing gamma images that uses quantitative image features borrowed from radiomics, with the goal of improving error detection. Methods: 368 gamma images were generated from 184 clinical IMRT beams. For each beam the dose to a phantom was measured with EPID dosimetry and compared to the TPS dose calculated with and without normally distributed (2 mm sigma) errors in MLC positions. The magnitude of 17 intensity-histogram and size-zone radiomic features was derived from each image. The features that differed most significantly between image sets were determined with ROC analysis. A linear machine-learning model was trained on 180 gamma images to classify them as with or without errors. The model was then applied to an independent validation set of 188 additional gamma distributions, half with and half without errors. Results: The most significant features for detecting errors were histogram kurtosis (p=0.007) and three size-zone metrics (p<1e-6 for each). The size-zone metrics detected clusters of high gamma-value pixels under mispositioned MLCs. The model applied to the validation set had an AUC of 0.8, compared to 0.56 for traditional gamma analysis with the decision threshold restricted to 98% or less. Conclusion: A radiomics-based image analysis method was developed that is more effective in detecting errors than traditional gamma analysis. Though the pilot study here considers only MLC position errors, radiomics-based methods for other error types are being developed, which may provide better error detection and useful information on the source of detected errors. This work was partially supported by a grant from the Agency for
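
    A toy sketch of the idea (synthetic images, assumed features): one histogram feature, kurtosis, feeds a linear classifier whose ROC AUC is then inspected.

        import numpy as np
        from scipy.stats import kurtosis
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(7)
        no_error = rng.gamma(2.0, 0.2, (90, 64, 64))       # error-free stand-ins
        with_error = rng.gamma(2.0, 0.2, (90, 64, 64))
        with_error[:, 20:30, 20:30] += rng.gamma(2.0, 0.5, (90, 10, 10))  # hot cluster

        images = np.concatenate([no_error, with_error])
        labels = np.array([0] * 90 + [1] * 90)
        features = kurtosis(images.reshape(len(images), -1), axis=1).reshape(-1, 1)

        clf = LogisticRegression().fit(features, labels)   # training AUC only, toy data
        print("AUC:", roc_auc_score(labels, clf.predict_proba(features)[:, 1]))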

  8. PROTOTYPE CONTENT-BASED IMAGE RETRIEVAL FOR SKIN DISEASE DETECTION USING AN EDGE DETECTION METHOD

    Directory of Open Access Journals (Sweden)

    Erick Fernando

    2016-05-01

    Dermatologists examine the object visually by eye, capture the object with a digital camera and ask about the history of the patient's disease, without making any comparison with previously recorded symptoms and signs; the examination and the estimation of the type of skin disease therefore depend on the examiner alone. Processing image data in digital form, especially medical images, is much needed, together with pre-processing. Many patients served in hospitals still rely on analog image data, which requires a dedicated storage room to avoid mechanical damage. To overcome this problem, medical images are produced in digital form and stored in a database system, so that the similarity of new skin images can be assessed. Images can be displayed after pre-processing, with similarity identified through Content-Based Image Retrieval (CBIR), which works by measuring the similarity of the query image to all images in the database, so that the query cost is directly proportional to the number of images in the database.

  9. High content live cell imaging for the discovery of new antimalarial marine natural products

    Directory of Open Access Journals (Sweden)

    Cervantes Serena

    2012-01-01

    Background: The human malaria parasite remains a burden in developing nations. It is responsible for up to one million deaths a year, a number that could rise due to increasing multi-drug resistance to all antimalarial drugs currently available. Therefore, there is an urgent need for the discovery of new drug therapies. Recently, our laboratory developed a simple one-step fluorescence-based live cell-imaging assay to integrate the complex biology of the human malaria parasite into drug discovery. Here we used our newly developed live cell-imaging platform to discover novel marine natural products and their cellular phenotypic effects against the most lethal malaria parasite, Plasmodium falciparum. Methods: A high content live cell imaging platform was used to screen the effects of marine extracts on malaria. Parasites were grown in vitro in the presence of extracts, stained with RNA sensitive dye, and imaged at timed intervals with the BD Pathway HT automated confocal microscope. Results: Image analysis validated our new methodology at a larger scale level and revealed potential antimalarial activity of selected extracts with a minimal cytotoxic effect on host red blood cells. To further validate our assay, we investigated parasite phenotypes when incubated with the purified bioactive natural product bromophycolide A. We show that bromophycolide A has a strong and specific morphological effect on parasites, similar to the ones observed with the initial extracts. Conclusion: Collectively, our results show that high-content live cell-imaging (HCLCI) can be used to screen chemical libraries and identify parasite-specific inhibitors with limited host cytotoxic effects. All together we provide new leads for the discovery of novel antimalarials.

  10. High content live cell imaging for the discovery of new antimalarial marine natural products.

    Science.gov (United States)

    Cervantes, Serena; Stout, Paige E; Prudhomme, Jacques; Engel, Sebastian; Bruton, Matthew; Cervantes, Michael; Carter, David; Tae-Chang, Young; Hay, Mark E; Aalbersberg, William; Kubanek, Julia; Le Roch, Karine G

    2012-01-03

    The human malaria parasite remains a burden in developing nations. It is responsible for up to one million deaths a year, a number that could rise due to increasing multi-drug resistance to all antimalarial drugs currently available. Therefore, there is an urgent need for the discovery of new drug therapies. Recently, our laboratory developed a simple one-step fluorescence-based live cell-imaging assay to integrate the complex biology of the human malaria parasite into drug discovery. Here we used our newly developed live cell-imaging platform to discover novel marine natural products and their cellular phenotypic effects against the most lethal malaria parasite, Plasmodium falciparum. A high content live cell imaging platform was used to screen marine extracts effects on malaria. Parasites were grown in vitro in the presence of extracts, stained with RNA sensitive dye, and imaged at timed intervals with the BD Pathway HT automated confocal microscope. Image analysis validated our new methodology at a larger scale level and revealed potential antimalarial activity of selected extracts with a minimal cytotoxic effect on host red blood cells. To further validate our assay, we investigated parasite's phenotypes when incubated with the purified bioactive natural product bromophycolide A. We show that bromophycolide A has a strong and specific morphological effect on parasites, similar to the ones observed from the initial extracts. Collectively, our results show that high-content live cell-imaging (HCLCI) can be used to screen chemical libraries and identify parasite specific inhibitors with limited host cytotoxic effects. All together we provide new leads for the discovery of novel antimalarials. © 2011 Cervantes et al; licensee BioMed Central Ltd.

  11. Identification of Application Areas of the Steganalytic Approach Based on the Analysis of Spatial Domain of Digital Contents

    Directory of Open Access Journals (Sweden)

    Akhmamet’eva A.V.

    2016-08-01

    In this paper the identification of application areas of a previously developed steganalytic approach, based on counting the number of consecutive triads in the matrix of unique colors of digital images, is carried out from the point of view of the formats of digital contents. The influence of the size of the digital contents used as containers for the embedding of additional information on the detection efficiency during steganalysis is analyzed. As a result of the computational experiments performed, it has been established that the developed approach is effective for digital content in formats with losses, and a restriction on the size of the containers has been identified. Results of the computational experiments are demonstrated.

  12. Collagen Content Limits Optical Coherence Tomography Image Depth in Porcine Vocal Fold Tissue.

    Science.gov (United States)

    Garcia, Jordan A; Benboujja, Fouzi; Beaudette, Kathy; Rogers, Derek; Maurer, Rie; Boudoux, Caroline; Hartnick, Christopher J

    2016-11-01

    Vocal fold scarring, a condition defined by increased collagen content, is challenging to treat without a method of noninvasively assessing vocal fold structure in vivo. The goal of this study was to observe the effects of vocal fold collagen content on optical coherence tomography imaging to develop a quantifiable marker of disease. Study design: Excised specimen study. Setting: Massachusetts Eye and Ear Infirmary. Methods: Porcine vocal folds were injected with collagenase to remove collagen from the lamina propria. Optical coherence tomography imaging was performed preinjection and at 0, 45, 90, and 180 minutes postinjection. Mean pixel intensity (or image brightness) was extracted from images of collagenase- and control-treated hemilarynges. Texture analysis of the lamina propria at each injection site was performed to extract image contrast. Two-factor repeated-measures analysis of variance and t tests were used to determine statistical significance. Picrosirius red staining was performed to confirm collagenase activity. Results: Mean pixel intensity was higher at injection sites of collagenase-treated vocal folds than of control vocal folds. The fold change in image contrast was significantly greater in collagenase-treated vocal folds than in control vocal folds (P = .002). Picrosirius red staining in control specimens revealed collagen fibrils most prominent in the subepithelium and above the thyroarytenoid muscle. Specimens treated with collagenase exhibited a loss of these structures. Conclusion: Collagen removal from vocal fold tissue increases the image brightness of underlying structures. This inverse relationship may be useful in treating vocal fold scarring in patients. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2016.

  13. Content-based image retrieval using a signature graph and a self-organizing map

    Directory of Open Access Journals (Sweden)

    Van Thanh The

    2016-06-01

    In order to effectively retrieve from a large database of images, a method of creating a content-based image retrieval (CBIR) system is applied based on a binary index which aims to describe the features of an image object of interest. This index is called the binary signature and builds the input data for the problem of matching similar images. To extract the object of interest, we propose an image segmentation method on the basis of low-level visual features including the color and texture of the image. These features are extracted at each block of the image by the discrete wavelet frame transform and the appropriate color space. On the basis of a segmented image, we create a binary signature to describe the location, color and shape of the objects of interest. In order to match similar images, we provide a similarity measure between the images based on binary signatures. Then, we present a CBIR model which combines a signature graph and a self-organizing map to cluster and store similar images. To illustrate the proposed method, experiments on image databases are reported, including COREL, Wang and MSRDI.
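
    For the matching step, binary signatures can be ranked by Hamming distance, as sketched below; the paper's signature graph and self-organising map clustering are omitted, and the signature length is an assumption.

        import numpy as np

        rng = np.random.default_rng(8)
        database = rng.integers(0, 2, (500, 256), dtype=np.uint8)  # 500 binary signatures
        query = rng.integers(0, 2, 256, dtype=np.uint8)

        hamming = (database != query).sum(axis=1)   # bit disagreements per image
        print("top-5 matches:", np.argsort(hamming)[:5])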

  14. Demographic-Based Content Analysis of Web-Based Health-Related Social Media.

    Science.gov (United States)

    Sadah, Shouq A; Shahbazi, Moloud; Wiley, Matthew T; Hristidis, Vagelis

    2016-06-13

    An increasing number of patients from diverse demographic groups share and search for health-related information on Web-based social media. However, little is known about the content of the posted information with respect to the users' demographics. The aims of this study were to analyze the content of Web-based health-related social media based on users' demographics to identify which health topics are discussed in which social media by which demographic groups and to help guide educational and research activities. We analyze 3 different types of health-related social media: (1) general Web-based social networks Twitter and Google+; (2) drug review websites; and (3) health Web forums, with a total of about 6 million users and 20 million posts. We analyzed the content of these posts based on the demographic group of their authors, in terms of sentiment and emotion, top distinctive terms, and top medical concepts. The results of this study are as follows: (1) Pregnancy is the dominant topic for female users in drug review websites and health Web forums, whereas for male users, it is cardiac problems, HIV, and back pain, but this is not the case for Twitter; (2) younger users (0-17 years) mainly talk about attention-deficit hyperactivity disorder (ADHD) and depression-related drugs, users aged 35-44 years discuss multiple sclerosis (MS) drugs, and middle-aged users (45-64 years) talk about alcohol and smoking; (3) users from the Northeast United States talk about physical disorders, whereas users from the West United States talk about mental disorders and addictive behaviors; (4) users with a higher writing level express less anger in their posts. We studied the popular topics and the sentiment based on users' demographics in Web-based health-related social media. Our results provide valuable information, which can help create targeted and effective educational campaigns and guide experts to reach the right users on Web-based social chatter.

  15. Quantitative analysis of γ-oryzanol content in cold pressed rice bran oil by TLC-image analysis method

    OpenAIRE

    Sakunpak, Apirak; Suksaeree, Jirapornchai; Monton, Chaowalit; Pathompak, Pathamaporn; Kraisintu, Krisana

    2014-01-01

    Objective: To develop and validate an image analysis method for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. Methods: TLC-densitometric and TLC-image analysis methods were developed, validated, and used for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. The results obtained by these two different quantification methods were compared by paired t-test. Results: Both assays provided good linearity, accuracy, reproducibility and selectivity for dete...

  16. Determining ice water content from 2D crystal images in convective cloud systems

    Science.gov (United States)

    Leroy, Delphine; Coutris, Pierre; Fontaine, Emmanuel; Schwarzenboeck, Alfons; Strapp, J. Walter

    2016-04-01

    Cloud microphysical in-situ instrumentation measures bulk parameters like total water content (TWC) and/or derives particle size distributions (PSD) (utilizing optical spectrometers and optical array probes (OAP)). The goal of this work is to introduce a comprehensive methodology to compute TWC from OAP measurements, based on the dataset collected during the recent HAIC (High Altitude Ice Crystals)/HIWC (High Ice Water Content) field campaigns. Indeed, the HAIC/HIWC field campaigns in Darwin (2014) and Cayenne (2015) provide a unique opportunity to explore the complex relationship between cloud particle mass and size in ice crystal environments. Numerous mesoscale convective systems (MCSs) were sampled with the French Falcon 20 research aircraft at different temperature levels from -10°C up to -50°C. The aircraft instrumentation included an IKP-2 (isokinetic probe) to get reliable measurements of TWC and the optical array probes 2D-S and PIP recording images over the entire ice crystal size range. Based on the known principle relating crystal mass and size with a power law (m = α·D^β), Fontaine et al. (2014) performed extended 3D crystal simulations and thereby demonstrated that it is possible to estimate the value of the exponent β from OAP data, by analyzing the surface-size relationship for the 2D images as a function of time. Leroy et al. (2015) proposed an extended version of this method that produces estimates of β from the analysis of both the surface-size and perimeter-size relationships. Knowing the value of β, α is then deduced from the simultaneous IKP-2 TWC measurements for the entire HAIC/HIWC dataset. The statistical analysis of α and β values for the HAIC/HIWC dataset firstly shows that α is closely linked to β and that this link changes with temperature. From these trends, a generalized parameterization for α is proposed. Finally, the comparison with the initial IKP-2 measurements demonstrates that the method is able to predict TWC values.
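
    Given the power law m = α·D^β, TWC follows by summing particle mass over the size distribution; the sketch below uses hypothetical α, β and PSD values purely for illustration.

        import numpy as np

        def twc_from_psd(diameters_cm, concentrations, alpha, beta):
            # mass per particle (g) times number concentration, summed over size bins
            return np.sum(alpha * diameters_cm ** beta * concentrations)

        d = np.linspace(0.01, 1.0, 50)       # particle diameters (cm)
        n = 1e4 * np.exp(-5 * d)             # hypothetical PSD (counts per m^3 per bin)
        print(twc_from_psd(d, n, alpha=0.005, beta=2.1), "g/m^3")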

  17. Color image analysis technique for measuring of fat in meat: an application for the meat industry

    Science.gov (United States)

    Ballerini, Lucia; Hogberg, Anders; Lundstrom, Kerstin; Borgefors, Gunilla

    2001-04-01

    Intramuscular fat content in meat influences some important meat quality characteristics. The aim of the present study was to develop and apply image processing techniques to quantify the intramuscular fat content of beef together with the visual appearance of fat in meat (marbling). Color images of M. longissimus dorsi meat samples with variable intramuscular fat content and marbling were captured. Image analysis software was specially developed for the interpretation of these images. In particular, a segmentation algorithm (i.e. classification of the different substances: fat, muscle and connective tissue) was optimized in order to obtain a proper classification and perform the subsequent analysis. Segmentation of muscle from fat was achieved based on their characteristics in the 3D color space, and on the intrinsic fuzzy nature of these structures. The method is fully automatic and combines a fuzzy clustering algorithm, the Fuzzy c-Means algorithm, with a Genetic Algorithm. The percentages of the various colors (i.e. substances) within the sample are then determined; the number, size distribution, and spatial distribution of the extracted fat flecks are measured. The measurements are correlated with chemical and sensory properties. Results so far show that advanced image analysis is useful for quantifying the visual appearance of meat.
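
    The clustering core of the segmentation can be sketched as plain fuzzy c-means on colour pixels (the paper couples FCM with a genetic algorithm for initialisation, which is omitted here; data and parameters are synthetic).

        import numpy as np

        def fuzzy_c_means(X, c=3, m=2.0, n_iter=50, seed=0):
            rng = np.random.default_rng(seed)
            u = rng.random((len(X), c))
            u /= u.sum(axis=1, keepdims=True)        # fuzzy memberships per pixel
            for _ in range(n_iter):
                um = u ** m
                centers = (um.T @ X) / um.sum(axis=0)[:, None]
                dist = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-9
                u = dist ** (-2.0 / (m - 1.0))       # standard FCM membership update
                u /= u.sum(axis=1, keepdims=True)
            return centers, u

        rng = np.random.default_rng(9)
        pixels = rng.random((2000, 3))               # stand-in RGB pixels of a sample
        centers, memberships = fuzzy_c_means(pixels) # e.g. fat / muscle / connective
        print(centers)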

  18. Image edge detection based tool condition monitoring with morphological component analysis.

    Science.gov (United States)

    Yu, Xiaolong; Lin, Xin; Dai, Yiquan; Zhu, Kunpeng

    2017-07-01

    The measurement and monitoring of tool condition are key to product precision in automated manufacturing. To meet this need, this study proposes a novel tool wear monitoring approach based on edge detection in the monitored images. Image edge detection has been a fundamental tool for obtaining image features. The approach extracts the tool edge with morphological component analysis. Through the decomposition of the original tool wear image, the approach reduces the influence of texture and noise on edge measurement. Based on sparse representation of the target image and edge detection, the approach can accurately extract the tool wear edge with a continuous and complete contour, and is convenient for characterizing tool conditions. Compared to well-established algorithms in the literature, this approach improves the integrity and connectivity of edges, and the results have shown that it achieves better geometric accuracy and a lower error rate in the estimation of tool conditions. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Attentional Mechanisms for Interactive Image Exploration

    Directory of Open Access Journals (Sweden)

    Philippe Tarroux

    2005-08-01

    A lot of work has been devoted to content-based image retrieval from large image databases. The traditional approaches are based on the analysis of the whole image content both in terms of low-level and semantic characteristics. We investigate in this paper an approach based on attentional mechanisms and active vision. We describe a visual architecture that combines bottom-up and top-down approaches for identifying regions of interest according to a given goal. We show that a coarse description of the searched target combined with a bottom-up saliency map provides an efficient way to find specified targets on images. The proposed system is a first step towards the development of software agents able to search for image content in image databases.

  20. [Analysis of spectral features based on water content of desert vegetation].

    Science.gov (United States)

    Zhao, Zhao; Li, Xia; Yin, Ye-biao; Tang, Jin; Zhou, Sheng-bin

    2010-09-01

    By using an HR-768 field-portable spectroradiometer made by the Spectra Vista Corporation (SVC) of America, the hyperspectral data of nine types of desert plants were measured, and the water content of the corresponding vegetation was determined by drying in the laboratory. The continuum of the measured hyperspectral data was removed using ENVI, and the relationship between the water content of vegetation and the reflectance spectrum was analyzed using the correlation coefficient method. The results show that the correlation between the bands from 978 to 1030 nm and the water content of vegetation is weak, while it is better for the bands from 1133 to 1266 nm. The bands from 1374 to 1534 nm are the characteristic bands, because their correlation with water content is the strongest. Using cluster analysis and according to the water content, the vegetation could be divided into three grades: high (>70%), medium (50%-70%) and low (<50%). The research reveals the relationship between the water content of desert vegetation and hyperspectral data, and provides a basis for the analysis of desert areas and the monitoring of desert vegetation using remote sensing data.

  1. Content-based retrieval of brain tumor in contrast-enhanced MRI images using tumor margin information and learned distance metric.

    Science.gov (United States)

    Yang, Wei; Feng, Qianjin; Yu, Mei; Lu, Zhentai; Gao, Yang; Xu, Yikai; Chen, Wufan

    2012-11-01

    A content-based image retrieval (CBIR) method for T1-weighted contrast-enhanced MRI (CE-MRI) images of brain tumors is presented as a diagnostic aid. The method is thoroughly evaluated on a large image dataset. Using the tumor region as a query, the authors' CBIR system attempts to retrieve tumors of the same pathological category. Aside from commonly used features such as intensity, texture, and shape features, the authors use a margin information descriptor (MID), which is capable of describing the characteristics of tissue surrounding a tumor, for representing image contents. In addition, the authors designed a distance metric learning algorithm called Maximum mean average Precision Projection (MPP) to maximize the smooth approximated mean average precision (mAP) to optimize retrieval performance. The effectiveness of the MID and MPP algorithms was evaluated using a brain CE-MRI dataset consisting of 3108 2D scans acquired from 235 patients with three categories of brain tumors (meningioma, glioma, and pituitary tumor). By combining MID and other features, the mAP of retrieval increased by more than 6% with the learned distance metrics. The distance metric learned by MPP significantly outperformed the other two existing distance metric learning methods in terms of mAP. The CBIR system using the proposed strategies achieved a mAP of 87.3% and a precision of 89.3% when the top 10 images were returned by the system. Compared with the scale-invariant feature transform, the MID, which uses the intensity profile as descriptor, achieves better retrieval performance. Incorporating tumor margin information represented by MID with the distance metric learned by the MPP algorithm can substantially improve the retrieval performance for brain tumors in CE-MRI.

  2. Perceptual security of encrypted images based on wavelet scaling analysis

    Science.gov (United States)

    Vargas-Olmos, C.; Murguía, J. S.; Ramírez-Torres, M. T.; Mejía Carlos, M.; Rosu, H. C.; González-Aguilar, H.

    2016-08-01

    The scaling behavior of the pixel fluctuations of encrypted images is evaluated by using detrended fluctuation analysis (DFA) based on wavelets, a modern technique that has recently been used successfully for a wide range of natural phenomena and technological processes. As encryption algorithms, we use the Advanced Encryption Standard (AES) in RBT mode and two versions of a cryptosystem based on cellular automata, with the encryption process applied both fully and partially by selecting different bitplanes. In all cases, the results show that the encrypted images, in which no understandable information can be visually appreciated and whose pixels look totally random, present a persistent scaling behavior with the scaling exponent α close to 0.5, implying no correlation between pixels when the DFA with wavelets is applied. This suggests that the scaling exponents of the encrypted images can be used as a perceptual security criterion in the sense that when their values are close to 0.5 (the white noise value) the encrypted images are more secure also from the perceptual point of view.
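
    For intuition, the scaling exponent can be sketched with classic polynomial-detrending DFA (the paper uses a wavelet-based DFA variant, which differs in the detrending step); the pixel sequence below is synthetic.

        import numpy as np

        def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
            profile = np.cumsum(x - np.mean(x))
            flucts = []
            for s in scales:
                n_win = len(profile) // s
                segments = profile[:n_win * s].reshape(n_win, s)
                t = np.arange(s)
                f2 = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
                      for seg in segments]            # variance after linear detrend
                flucts.append(np.sqrt(np.mean(f2)))
            alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
            return alpha

        rng = np.random.default_rng(10)
        pixels = rng.random(4096)        # row-scanned stand-in for an encrypted image
        print(dfa_exponent(pixels))      # ~0.5 for uncorrelated pixel fluctuations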

  3. Video content analysis of surgical procedures.

    Science.gov (United States)

    Loukas, Constantinos

    2018-02-01

    In addition to its therapeutic benefits, minimally invasive surgery offers the potential for video recording of the operation. The videos may be archived and used later for reasons such as cognitive training, skills assessment, and workflow analysis. Methods from the major field of video content analysis and representation are increasingly applied in the surgical domain. In this paper, we review recent developments and analyze future directions in the field of content-based video analysis of surgical operations. The review was based on PubMed and Google Scholar searches on combinations of the following keywords: 'surgery', 'video', 'phase', 'task', 'skills', 'event', 'shot', 'analysis', 'retrieval', 'detection', 'classification', and 'recognition'. The collected articles were categorized and reviewed based on the technical goal sought, type of surgery performed, and structure of the operation. A total of 81 articles were included. The publication activity is constantly increasing; more than 50% of these articles were published in the last 3 years. Significant research has been performed for video task detection and retrieval in eye surgery. In endoscopic surgery, the research activity is more diverse: gesture/task classification, skills assessment, tool type recognition, shot/event detection and retrieval. Recent works employ deep neural networks for phase and tool recognition as well as shot detection. Content-based video analysis of surgical operations is a rapidly expanding field. Several future prospects for research exist including, inter alia, shot boundary detection, keyframe extraction, video summarization, pattern discovery, and video annotation. The development of publicly available benchmark datasets to evaluate and compare task-specific algorithms is essential.

  4. Image Registration Algorithm Based on Parallax Constraint and Clustering Analysis

    Science.gov (United States)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-01-01

    To resolve the problem of slow computation and low matching accuracy in image registration, a new image registration algorithm based on a parallax constraint and clustering analysis is proposed. Firstly, the Harris corner detection algorithm is used to extract the feature points of the two images. Secondly, the Normalized Cross Correlation (NCC) function is used to perform approximate matching of the feature points, yielding the initial feature pairs. Then, according to the parallax constraint condition, the initial feature pairs are preprocessed by the K-means clustering algorithm, which removes the feature point pairs with obvious errors from the approximate matching step. Finally, the Random Sample Consensus (RANSAC) algorithm is adopted to optimize the feature points and obtain the final feature point matching result, realizing fast and accurate image registration. The experimental results show that the image registration algorithm proposed in this paper can improve the accuracy of image matching while ensuring the real-time performance of the algorithm.
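
    A hedged sketch of the pipeline (scikit-image stands in for the original implementation, and the K-means parallax-constraint filtering is left out): Harris corners, NCC patch matching, then RANSAC over an affine model.

        import numpy as np
        from skimage.feature import corner_harris, corner_peaks
        from skimage.measure import ransac
        from skimage.transform import AffineTransform

        def ncc(a, b):
            a = (a - a.mean()) / (a.std() + 1e-9)
            b = (b - b.mean()) / (b.std() + 1e-9)
            return float((a * b).mean())

        def match_corners(img1, img2, patch=7):
            r = patch // 2
            c1 = corner_peaks(corner_harris(img1), min_distance=5, exclude_border=r)
            c2 = corner_peaks(corner_harris(img2), min_distance=5, exclude_border=r)
            src, dst = [], []
            for y1, x1 in c1:
                p1 = img1[y1 - r:y1 + r + 1, x1 - r:x1 + r + 1]
                scores = [ncc(p1, img2[y2 - r:y2 + r + 1, x2 - r:x2 + r + 1])
                          for y2, x2 in c2]
                j = int(np.argmax(scores))
                if scores[j] > 0.8:                 # keep confident matches only
                    src.append((x1, y1))
                    dst.append((c2[j][1], c2[j][0]))
            return np.asarray(src, float), np.asarray(dst, float)

        rng = np.random.default_rng(11)
        img = rng.random((120, 120))
        shifted = np.roll(img, (4, 6), axis=(0, 1))  # known translation as ground truth
        src, dst = match_corners(img, shifted)
        model, inliers = ransac((src, dst), AffineTransform,
                                min_samples=3, residual_threshold=2, max_trials=500)
        print("estimated translation:", model.translation)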

  5. Benchmarking the Applicability of Ontology in Geographic Object-Based Image Analysis

    Directory of Open Access Journals (Sweden)

    Sachit Rajbhandari

    2017-11-01

    In Geographic Object-based Image Analysis (GEOBIA), identification of image objects is normally achieved using rule-based classification techniques supported by appropriate domain knowledge. However, GEOBIA currently lacks a systematic method to formalise the domain knowledge required for image object identification. Ontology provides a representation vocabulary for characterising domain-specific classes. This study proposes an ontological framework that conceptualises domain knowledge in order to support the application of rule-based classifications. The proposed ontological framework is tested with a landslide case study. The Web Ontology Language (OWL) is used to construct an ontology in the landslide domain. The segmented image objects with extracted features are incorporated into the ontology as instances. The classification rules are written in the Semantic Web Rule Language (SWRL) and executed using a semantic reasoner to assign instances to appropriate landslide classes. Machine learning techniques are used to predict new threshold values for feature attributes in the rules. Our framework is compared with published work on landslide detection where ontology was not used for the image classification. Our results demonstrate that a classification derived from the ontological framework accords with non-ontological methods. This study benchmarks the ontological method, providing an alternative approach for image classification in the case study of landslides.

  6. Enhancing Image Retrieval System Using Content Based Search ...

    African Journals Online (AJOL)

    The output shows more efficiency in retrieval because, instead of performing the search on the entire image database, the image category option directs the retrieval engine to the specified category. Also, there is provision to update or modify the different image categories in the image database as the need arises. Keywords: ...

  7. NSCT BASED LOCAL ENHANCEMENT FOR ACTIVE CONTOUR BASED IMAGE SEGMENTATION APPLICATION

    Directory of Open Access Journals (Sweden)

    Hiren Mewada

    2010-08-01

    Because of their cross-disciplinary nature, active contour modeling techniques have been utilized extensively for image segmentation. In traditional active contour segmentation techniques based on level set methods, the energy functions are defined based on the intensity gradient. This makes them highly sensitive to situations where the underlying image content is characterized by nonhomogeneities due to illumination and contrast conditions, which is the main obstacle to making them fully automatic image segmentation techniques. This paper introduces an approach to this problem based on image enhancement. The enhanced image is obtained using the Nonsubsampled Contourlet Transform (NSCT), which improves the edge strength in directions where the illumination is poor, and an active contour model based on the level set technique is then utilized to segment the object. Experimental results demonstrate that the proposed method can be utilized along with existing active contour model based segmentation methods in situations characterized by intensity non-homogeneity to make them fully automatic.

  8. DIMOND II: Measures for optimising radiological information content and dose in digital imaging

    International Nuclear Information System (INIS)

    Dowling, A.; Malone, J.; Marsh, D.

    2001-01-01

    The European Commission concerted action on 'Digital Imaging: Measures for Optimising Radiological Information Content and Dose', DIMOND II, was conducted by 12 European partners over the period January 1997 to June 1999. The objective of the concerted action was to initiate a project in the area of digital medical imaging where practice was evolving without structured research in radiation protection, optimisation or justification. The main issues addressed were patient and staff dosimetry, image quality, quality criteria and technical issues. The scope included computed radiography (CR), image intensifier radiography and fluoroscopy, cardiology and interventional procedures. The concerted action was based on the consolidation of work conducted in the partners' institutions together with elective new work. Protocols and approaches to dosimetry, radiological information content/image quality measurement and quality criteria were established and presented at an international workshop held in Dublin in June 1999. Details of the work conducted during the DIMOND II concerted action and a summary of the main findings and conclusions are presented in this contribution. (author)

  9. Determination of fat and total protein content in milk using conventional digital imaging

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey; Melenteva, Anastasiia; Bogomolov, Andrey

    2014-01-01

    The applicability of conventional digital imaging to quantitative determination of fat and total protein in cow's milk, based on the phenomenon of light scatter, has been proved. A new algorithm for extracting features from digital images of milk samples has been developed. The algorithm takes into account the spatial distribution of light diffusely transmitted through a sample. The proposed method has been tested on two sample sets prepared from industrial raw milk standards, with variable fat and protein content. Partial Least-Squares (PLS) regression on the features calculated from images of monochromatically illuminated milk samples resulted in models with high prediction performance when the sets were analysed separately (best models with cross-validated R²=0.974 for protein and R²=0.973 for fat content). However, when the sets were analysed jointly, the obtained results were significantly worse (best models with cross-validated R²=0.890 for fat content and R²=0.720 for protein content).

  10. Assessment of the variations in fat content in normal liver using a fast MR imaging method in comparison with results obtained by spectroscopic imaging

    International Nuclear Information System (INIS)

    Irwan, Roy; Edens, Mireille A.; Sijens, Paul E.

    2008-01-01

    A recently published Dixon-based MRI method for quantifying liver fat content using dual-echo breath-hold gradient echo imaging was validated by phantom experiments and compared with results of biopsy in two patients (Radiology 2005;237:1048-1055). We applied this method in ten healthy volunteers and compared the outcomes with the results of MR spectroscopy (MRS), the gold standard in quantifying liver fat content. Novel was the use of spectroscopic imaging, which yields the variation in fat content across the liver rather than the single value obtained by single-voxel MRS. Compared with the results of MRS, liver fat content according to MRI was too high in nine subjects (range 3.3-10.7% vs. 0.9-7.7%) and correct in one (21.1 vs. 21.3%). Furthermore, in one of the ten subjects the fat content according to the Dixon-based MRI method was incorrect due to a (100-x) versus x percent lipid content mix-up. This second problem was fixed by a minor adjustment of the MRI algorithm. Despite the systematic overestimation of liver fat content by MRI, Spearman's correlation between the adjusted MRI liver fat contents and MRS was high (r = 0.927, P < 0.001). Even after correction of the algorithm, the problem remaining with the Dixon-based MRI method for the assessment of liver fat content is that, at the lower end of the range, liver fat content is systematically overestimated by 4%. (orig.)
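
    The (100-x) versus x mix-up mentioned above comes from the sign ambiguity of magnitude Dixon data; a hedged sketch of the idea, with hypothetical echo magnitudes rather than real MR data:

```python
# Two-point Dixon fat-fraction sketch with the x vs (100 - x) ambiguity.
import numpy as np

ip = np.array([100.0, 100.0])   # in-phase magnitude  |water + fat|
op = np.array([60.0, 60.0])     # opposed-phase magnitude |water - fat| (sign lost)

minority = (ip - op) / 2        # min(water, fat) per voxel
ff = 100 * minority / ip        # naive fat fraction, always <= 50 %

# In fat-dominant voxels this value is really (100 - x) %. A hypothetical
# fat-dominance mask (in practice derived from phase information) folds it back:
fat_dominant = np.array([False, True])
ff_resolved = np.where(fat_dominant, 100 - ff, ff)
print(ff, "->", ff_resolved)    # [20. 20.] -> [20. 80.]
```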

  11. The Study on the Attenuation of X-ray and Imaging Quality by Contents in Stomach

    International Nuclear Information System (INIS)

    Dong, Kyung Rae; Ji, Youn Sang; Kim, Chang Bok; Choi, Seong Kwan; Moon, Sang In; Dieter, Kevin

    2009-01-01

    This study examined the change in X-ray attenuation, measured with ROIs (regions of interest) in DR (digital radiography), according to stomach contents, using a tissue-equivalent phantom manufactured to simulate real stomach tissue, based on the assumption that stomach contents attenuate X-rays and thereby affect imaging quality. The transmitted dose decreased with increasing protein thickness, which altered the average ROI values in the film and DR images. A comparison of the change in average ROI values showed that the film image exhibited larger density changes with varying protein thickness than the DR image. The results indicate that NPO (nothing by mouth) is more important in a film system than in a DR system.
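
    The attenuation model behind the study can be sketched with the Beer-Lambert law: transmitted intensity falls exponentially with absorber thickness, which shifts the mean ROI value of the detector image. The attenuation coefficient below is an assumed value, not the phantom's measured one.

```python
# Beer-Lambert attenuation of X-rays by stomach contents (illustrative values).
import numpy as np

I0 = 1.0                                     # incident intensity (arbitrary units)
mu = 0.2                                     # assumed coefficient, 1/cm
thickness = np.array([0.0, 1.0, 2.0, 4.0])   # cm of protein in the phantom

I = I0 * np.exp(-mu * thickness)             # transmitted intensity
roi_mean = 4095 * I / I.max()                # mean ROI value on a 12-bit detector
for t, v in zip(thickness, roi_mean):
    print(f"{t:4.1f} cm -> mean ROI value {v:7.1f}")
```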

  12. The Study on the Attenuation of X-ray and Imaging Quality by Contents in Stomach

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Kyung Rae; Ji, Youn Sang; Kim, Chang Bok; Choi, Seong Kwan; Moon, Sang In [Dept. of Radiological Technology, Gwangju Health College University, Gwangju (Korea, Republic of); Dieter, Kevin [Dept. of Physical Therapy, Gwangju Health College University, Gwangju (Korea, Republic of)

    2009-03-15

    This study examined the change in X-ray attenuation, measured with ROIs (regions of interest) in DR (digital radiography), according to stomach contents, using a tissue-equivalent phantom manufactured to simulate real stomach tissue, based on the assumption that stomach contents attenuate X-rays and thereby affect imaging quality. The transmitted dose decreased with increasing protein thickness, which altered the average ROI values in the film and DR images. A comparison of the change in average ROI values showed that the film image exhibited larger density changes with varying protein thickness than the DR image. The results indicate that NPO (nothing by mouth) is more important in a film system than in a DR system.

  13. Learning representative features for facial images based on a modified principal component analysis

    Science.gov (United States)

    Averkin, Anton; Potapov, Alexey

    2013-05-01

    The paper is devoted to facial image analysis and particularly deals with the problem of automatically evaluating the attractiveness of human faces. We propose a new approach for the automatic construction of a feature space based on a modified principal component analysis. Input data sets for the algorithm are learning sets of facial images rated by one person. The proposed approach allows one to extract features of an individual's subjective perception of facial beauty and to predict attractiveness values for new facial images that were not included in the learning data set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness estimates equals 0.89. This means that the proposed approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.
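
    The shape of the pipeline can be sketched as below: project faces into a component space, fit a regressor to one rater's scores, and report the Pearson correlation on unseen faces. Plain PCA and ridge regression stand in for the paper's modified PCA, and the data are random placeholders for real facial images.

```python
# PCA feature space + regression to predict subjective attractiveness (sketch).
import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
faces = rng.normal(size=(200, 32 * 32))       # 200 flattened face images (fake)
scores = faces[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(faces, scores, random_state=0)
pca = PCA(n_components=20).fit(X_tr)
model = Ridge().fit(pca.transform(X_tr), y_tr)
r, _ = pearsonr(model.predict(pca.transform(X_te)), y_te)
print(f"Pearson r on faces outside the learning set: {r:.2f}")
```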

  14. Image based SAR product simulation for analysis

    Science.gov (United States)

    Domik, G.; Leberl, F.

    1987-01-01

    SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new product simulation method is described that also employs a real SAR image as input; this can be denoted 'image-based simulation'. Different methods of performing this SAR prediction are presented and their advantages and disadvantages discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used for verification of the concept: input images from ascending orbits were converted into images from a descending orbit, and the results were compared to the available real imagery to verify that the prediction technique produces meaningful image data.

  15. Impact of image segmentation on high-content screening data quality for SK-BR-3 cells

    Directory of Open Access Journals (Sweden)

    Li Yizheng

    2007-09-01

    Full Text Available Abstract Background High content screening (HCS is a powerful method for the exploration of cellular signalling and morphology that is rapidly being adopted in cancer research. HCS uses automated microscopy to collect images of cultured cells. The images are subjected to segmentation algorithms to identify cellular structures and quantitate their morphology, for hundreds to millions of individual cells. However, image analysis may be imperfect, especially for "HCS-unfriendly" cell lines whose morphology is not well handled by current image segmentation algorithms. We asked if segmentation errors were common for a clinically relevant cell line, if such errors had measurable effects on the data, and if HCS data could be improved by automated identification of well-segmented cells. Results Cases of poor cell body segmentation occurred frequently for the SK-BR-3 cell line. We trained classifiers to identify SK-BR-3 cells that were well segmented. On an independent test set created by human review of cell images, our optimal support-vector machine classifier identified well-segmented cells with 81% accuracy. The dose responses of morphological features were measurably different in well- and poorly-segmented populations. Elimination of the poorly-segmented cell population increased the purity of DNA content distributions, while appropriately retaining biological heterogeneity, and simultaneously increasing our ability to resolve specific morphological changes in perturbed cells. Conclusion Image segmentation has a measurable impact on HCS data. The application of a multivariate shape-based filter to identify well-segmented cells improved HCS data quality for an HCS-unfriendly cell line, and could be a valuable post-processing step for some HCS datasets.
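
    The post-processing filter the authors describe (a classifier that admits only well-segmented cells) can be sketched with a support-vector machine over shape features; the features and labels below are synthetic stand-ins for the human-reviewed SK-BR-3 training set.

```python
# SVM filter for well- vs poorly-segmented cells (synthetic stand-in data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 6))      # e.g. area, perimeter, eccentricity, ...
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.8, size=500)) > 0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_tr, y_tr)
print(f"well-segmented classification accuracy: {clf.score(X_te, y_te):.2f}")
# Downstream, only cells predicted well-segmented would enter the HCS analysis.
```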

  16. A content analysis of thinspiration, fitspiration, and bonespiration imagery on social media.

    Science.gov (United States)

    Talbot, Catherine Victoria; Gavin, Jeffrey; van Steen, Tommy; Morey, Yvette

    2017-01-01

    On social media, images such as thinspiration, fitspiration, and bonespiration are shared to inspire certain body ideals. Previous research has demonstrated that exposure to these groups of content is associated with increased body dissatisfaction and decreased self-esteem. It is therefore important that the bodies featured within these groups of content are more fully understood, so that effective interventions and preventative measures can be informed, developed, and implemented. A content analysis was conducted on a sample of body-focussed images with the hashtags thinspiration, fitspiration, and bonespiration from three social media platforms. The analyses showed that thinspiration and bonespiration content contained more thin and objectified bodies, compared to fitspiration, which featured a greater prevalence of muscles and muscular bodies. In addition, bonespiration content contained more bone protrusions and fewer muscles than thinspiration content. The findings suggest fitspiration may be a less unhealthy type of content; however, a subgroup of imagery was identified which idealised the extremely thin body type, and as such this content should also be approached with caution. Future research should utilise qualitative methods to further develop understandings of the body ideals that are constructed within these groups of content and the motivations behind posting this content.

  17. Performance Analysis of Segmentation of Hyperspectral Images Based on Color Image Segmentation

    Directory of Open Access Journals (Sweden)

    Praveen Agarwal

    2017-06-01

    Full Text Available Image segmentation is a fundamental task in image processing, and its use depends on the user's application. This paper proposes an original and simple segmentation strategy based on the EM approach that resolves many informatics problems concerning hyperspectral images observed by airborne sensors. In a first step, the input color textured image is simplified into a color image without texture. The final segmentation is then achieved by spatial color segmentation, using a feature vector built from the set of color values contained around the pixel to be classified, with some mathematical equations. The spatial constraint takes into account the inherent spatial relationships of any image and its colors. This approach provides an effective PSNR for the segmented image. The results show better performance when the segmented images are compared with those of the watershed and region-growing algorithms, and the approach provides effective segmentation for both spectral images and medical images.
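
    A minimal sketch of EM-based pixel classification, using a Gaussian mixture fitted to per-pixel color vectors; an RGB test image stands in for airborne hyperspectral data, and the component count is an assumption.

```python
# EM (Gaussian mixture) segmentation of pixel colors (sketch).
import numpy as np
from skimage import data
from sklearn.mixture import GaussianMixture

image = data.astronaut().astype(float) / 255.0   # H x W x 3
pixels = image.reshape(-1, 3)

gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
labels = gmm.fit_predict(pixels).reshape(image.shape[:2])
print("segment sizes (pixels):", np.bincount(labels.ravel()))
```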

  18. Automatic comic page image understanding based on edge segment analysis

    Science.gov (United States)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai

    2013-12-01

    Comic page image understanding aims to analyse the layout of comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique for producing digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and to further identify the reading order of these storyboards. The proposed method was evaluated on a data set consisting of 2,000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms the existing methods.
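
    The first two stages (Canny edge extraction, then line segment detection ahead of border selection) can be sketched as below on a synthetic page with one storyboard border; thresholds are arbitrary assumptions.

```python
# Canny edges + probabilistic Hough line segments on a synthetic comic page.
import numpy as np
from skimage.draw import rectangle_perimeter
from skimage.feature import canny
from skimage.transform import probabilistic_hough_line

page = np.zeros((200, 150))
rr, cc = rectangle_perimeter((20, 20), (90, 70), shape=page.shape)
page[rr, cc] = 1.0                                   # one storyboard border

edges = canny(page, sigma=1.0)
lines = probabilistic_hough_line(edges, threshold=10,
                                 line_length=30, line_gap=3)
print(f"{len(lines)} candidate border line segments")
```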

  19. Quantitative image analysis of vertebral body architecture - improved diagnosis in osteoporosis based on high-resolution computed tomography

    International Nuclear Information System (INIS)

    Mundinger, A.; Wiesmeier, B.; Dinkel, E.; Helwig, A.; Beck, A.; Schulte Moenting, J.

    1993-01-01

    71 women, 64 of them post-menopausal, were examined by single-energy quantitative computed tomography (SEQCT) and by high-resolution computed tomography (HRCT) scans through the middle of lumbar vertebral bodies. Computer-assisted image analysis of the high-resolution images assessed the trabecular morphometry of the vertebral spongiosa texture. Texture parameters differed between women with and without age-reduced bone density and, in the former group, also between patients with and without vertebral fractures. Discriminating parameters were the total number, diameter and variance of trabecular and intertrabecular spaces, as well as the trabecular surface (p < 0.05). A texture index based on these statistically selected morphometric parameters identified a subgroup of patients suffering from fractures due to abnormal spongiosal architecture but with a bone mineral content not indicative of increased fracture risk. The combination of osteodensitometry and trabecular morphometry improves the diagnosis of osteoporosis and may contribute to the prediction of individual fracture risk. (author)

  20. Complex event processing for content-based text, image, and video retrieval

    NARCIS (Netherlands)

    Bowman, E.K.; Broome, B.D.; Holland, V.M.; Summers-Stay, D.; Rao, R.M.; Duselis, J.; Howe, J.; Madahar, B.K.; Boury-Brisset, A.C.; Forrester, B.; Kwantes, P.; Burghouts, G.; Huis, J. van; Mulayim, A.Y.

    2016-01-01

    This report summarizes the findings of an exploratory team of the North Atlantic Treaty Organization (NATO) Information Systems Technology panel into Content-Based Analytics (CBA). The team carried out a technical review into the current status of theoretical and practical developments of methods,

  1. Evaluation of Yogurt Microstructure Using Confocal Laser Scanning Microscopy and Image Analysis

    DEFF Research Database (Denmark)

    Skytte, Jacob Lercke; Ghita, Ovidiu; Whelan, Paul F.

    2015-01-01

    The microstructure of protein networks in yogurts defines important physical properties of the yogurt and hereby partly its quality. Imaging this protein network using confocal scanning laser microscopy (CSLM) has shown good results, and CSLM has become a standard measuring technique for fermented ... scanning microscopy images can be used to provide information on the protein microstructure in yogurt products. For large numbers of microscopy images, subjective evaluation becomes a difficult or even impossible approach, if the images should be incorporated in any form of statistical analysis alongside ... to image texture description. Here, CSLM images from a yogurt fermentation study are investigated, where production factors including fat content, protein content, heat treatment, and incubation temperature are varied. The descriptors are evaluated through nearest neighbor classification, variance analysis ...

  2. Edge enhancement and noise suppression for infrared image based on feature analysis

    Science.gov (United States)

    Jiang, Meng

    2018-06-01

    Infrared images often suffer from background noise, blurred edges, few details and low signal-to-noise ratios. To improve infrared image quality, it is essential to suppress noise and enhance edges simultaneously. To achieve this, we propose in this paper a novel algorithm based on feature analysis in the shearlet domain. Firstly, we introduce the theory and advantages of the shearlet transform, a form of multi-scale geometric analysis (MGA). Secondly, after analyzing the defects of the traditional thresholding technique for noise suppression, we propose a novel feature extraction that distinguishes image structures from noise well, and use it to improve traditional thresholding. Thirdly, by computing the correlations between neighboring shearlet coefficients, feature attribute maps identifying weak details and strong edges are produced to improve generalized unsharp masking (GUM). Finally, experimental results with infrared images captured in different scenes demonstrate that the proposed algorithm suppresses noise efficiently and enhances image edges adaptively.

  3. Content-based intermedia synchronization

    Science.gov (United States)

    Oh, Dong-Young; Sampath-Kumar, Srihari; Rangan, P. Venkat

    1995-03-01

    Inter-media synchronization methods developed until now have been based on syntactic timestamping of video frames and audio samples. These methods are not fully appropriate for the synchronization of multimedia objects which may have to be accessed individually by their contents, e.g. content-based data retrieval. We propose a content-based multimedia synchronization scheme in which a media stream is viewed as a hierarchical composition of smaller objects which are logically structured based on their contents, and synchronization is achieved by deriving temporal relations among the logical units of a media object. Content-based synchronization offers several advantages, such as elimination of the need for timestamping, freedom from the limitations of jitter, synchronization of independently captured media objects in video editing, and compensation for inherent asynchronies in the capture times of video and audio.

  4. Cloud solution for histopathological image analysis using region of interest based compression.

    Science.gov (United States)

    Kanakatte, Aparna; Subramanya, Rakshith; Delampady, Ashik; Nayak, Rajarama; Purushothaman, Balamuralidhar; Gubbi, Jayavardhana

    2017-07-01

    Recent technological gains have led to the adoption of innovative cloud-based solutions in the medical imaging field. Once a medical image is acquired, it can be viewed, modified, annotated and shared on many devices. This advancement is mainly due to the introduction of cloud computing in the medical domain. Tissue pathology images are complex and are normally collected at different focal lengths using a microscope. A single whole-slide image contains many multi-resolution images stored in a pyramidal structure, with the highest-resolution image at the base and the smallest thumbnail image at the top of the pyramid. The highest-resolution image is used for tissue pathology diagnosis and analysis. Transferring and storing such huge images is a big challenge, and compression is a very useful and effective technique to reduce their size. As pathology images are used for diagnosis, no information can be lost during compression (lossless compression). A novel method of extracting the tissue region, applying lossless compression to this region and lossy compression to the empty regions, is proposed in this paper. The resulting compression ratio, with lossless compression of the tissue region, is in an acceptable range, allowing efficient storage and transmission to and from the Cloud.

  5. The analysis of image feature robustness using cometcloud

    Directory of Open Access Journals (Sweden)

    Xin Qi

    2012-01-01

    Full Text Available The robustness of image features is a very important consideration in quantitative image analysis. The objective of this paper is to investigate the robustness of a range of image texture features using hematoxylin-stained breast tissue microarray slides, assessed while simulating different imaging challenges including defocus, changes in magnification, and variations in illumination, noise, compression, distortion, and rotation. We employed five texture analysis methods and tested them while introducing all of the challenges listed above. The texture features evaluated include the co-occurrence matrix, center-symmetric auto-correlation, the texture feature coding method, local binary patterns, and textons. Due to the independence of each transformation and texture descriptor, a network-structured combination was proposed and deployed on the Rutgers private cloud. The experiments utilized 20 randomly selected tissue microarray cores. All combinations of the image transformations and deformations were calculated, and the whole feature extraction procedure was completed in 70 minutes using a cloud equipped with 20 nodes. Center-symmetric auto-correlation outperforms all the other four texture descriptors but also requires the longest computational time; it is roughly 10 times slower than local binary patterns and textons. From a speed perspective, both the local binary pattern and texton features provided excellent performance for classification and content-based image retrieval.
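
    Two of the five descriptors compared (local binary patterns and co-occurrence features) can be sketched under one simulated challenge, rotation. graycomatrix/graycoprops are the current scikit-image names (older releases spell them greycomatrix/greycoprops), and the simple drift measure below is a simplification of the paper's classification-based evaluation.

```python
# Feature drift of LBP and GLCM descriptors under a rotation challenge.
import numpy as np
from skimage import data
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from skimage.transform import rotate

def texture_features(img):
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256)
    contrast = graycoprops(glcm, "contrast")[0, 0]
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, density=True)
    return np.concatenate([[contrast], hist])

img = data.camera()
rotated = rotate(img, 15, preserve_range=True).astype(np.uint8)
drift = np.abs(texture_features(img) - texture_features(rotated))
print("per-feature drift under rotation:", np.round(drift, 3))
```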

  6. Cytometric analysis of mammalian sperm for induced morphologic and DNA content errors

    International Nuclear Information System (INIS)

    Pinkel, D.

    1983-01-01

    Some flow-cytometric and image analysis procedures under development for the quantitative analysis of sperm morphology are reviewed. The results of flow-cytometric DNA-content measurements on sperm from radiation-exposed mice are also summarized, the results are related to the available cytological information, and their potential dosimetric sensitivity is discussed

  7. H-Metric: Characterizing Image Datasets via Homogenization Based on KNN-Queries

    Directory of Open Access Journals (Sweden)

    Welington M da Silva

    2012-01-01

    Full Text Available Precision-recall is one of the main metrics for evaluating content-based image retrieval techniques. However, it does not provide a full picture of the properties of an image dataset immersed in a metric space. In this work, we describe an alternative metric named H-Metric, which is determined along a sequence of controlled modifications of the image dataset. The process is named homogenization and works by altering the homogeneity characteristics of the classes of images. The result is a metric that measures how hard it is to deal with a set of images with respect to content-based retrieval, offering support in the task of analyzing configurations of distance functions and feature extractors.
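
    For reference, the baseline metric that H-Metric complements can be computed directly: precision and recall of a k-nearest-neighbour query over image descriptors. The vectors and class labels below are random placeholders.

```python
# Precision/recall of a k-NN retrieval query (illustrative data).
import numpy as np

rng = np.random.default_rng(7)
features = rng.normal(size=(100, 16))      # 100 images, 16-D descriptors
labels = rng.integers(0, 5, size=100)      # 5 hypothetical classes

query = 0
dist = np.linalg.norm(features - features[query], axis=1)
k = 10
retrieved = np.argsort(dist)[1:k + 1]      # k nearest neighbours, excluding self

relevant = labels == labels[query]
hits = relevant[retrieved].sum()
print(f"precision@{k} = {hits / k:.2f}, "
      f"recall@{k} = {hits / (relevant.sum() - 1):.2f}")
```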

  8. Region-Based Color Image Indexing and Retrieval

    DEFF Research Database (Denmark)

    Kompatsiaris, Ioannis; Triantafyllou, Evangelia; Strintzis, Michael G.

    2001-01-01

    In this paper a region-based color image indexing and retrieval algorithm is presented. As a basis for the indexing, a novel K-Means segmentation algorithm is used, modified so as to take into account the coherence of the regions. A new color distance is also defined for this algorithm. Based on ... Experimental results demonstrate the performance of the algorithm. The development of an intelligent image content-based search engine for the World Wide Web is also presented, as a direct application of the presented algorithm.
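
    A minimal sketch of the indexing front end: K-Means clustering of pixel colors, with scaled pixel coordinates appended as a crude stand-in for the paper's coherence-aware modification, followed by per-region color descriptors that could feed an index.

```python
# K-Means region segmentation with a simple spatial-coherence term (sketch).
import numpy as np
from skimage import data
from sklearn.cluster import KMeans

image = data.astronaut().astype(float) / 255.0
h, w, _ = image.shape
yy, xx = np.mgrid[0:h, 0:w]
X = np.column_stack([image.reshape(-1, 3),
                     0.3 * yy.ravel() / h,       # weighted coordinates encourage
                     0.3 * xx.ravel() / w])      # spatially coherent regions

labels = KMeans(n_clusters=6, n_init=4, random_state=0).fit_predict(X)
for k in range(6):
    mean_rgb = image.reshape(-1, 3)[labels == k].mean(axis=0)
    print(f"region {k}: mean color {np.round(mean_rgb, 2)}")  # index entry
```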

  9. An efficient and robust method for shape-based image retrieval

    International Nuclear Information System (INIS)

    Salih, N.D.; Besar, R.; Abas, F.S.

    2007-01-01

    Shapes can be thought of as the words of the visual language. Shape boundaries need to be simplified and estimated in a wide variety of image analysis applications, and the representation and description of shapes is one of the major problems in content-based image retrieval (CBIR). This paper presents a novel method for shape representation and description named block-based shape representation (BSR), which is capable of extracting reliable information about the object outline in a concise manner. Our technique is translation, scale, and rotation invariant. It works well on different types of shapes and is fast enough for use in real time. The technique has been implemented and evaluated in order to analyze its accuracy and efficiency. Based on the experimental results, we argue that the proposed BSR is a compact and reliable shape representation method. (author)

  10. Standards-based Content Resources: A Prerequisite for Content Integration and Content Interoperability

    Directory of Open Access Journals (Sweden)

    Christian Galinski

    2010-05-01

    Full Text Available Objective: to show how standards-based approaches for content standardization, content management, content-related services and tools, as well as the respective certification systems, not only guarantee reliable content integration and content interoperability, but are also of particular benefit to people with special needs in eAccessibility/eInclusion. Method: document MoU/MG/05 N0221, 'Semantic Interoperability and the need for a coherent policy for a framework of distributed, possibly federated repositories for all kinds of content items on a world-wide scale', which was adopted in 2005, was a first step towards the formulation of global interoperability requirements for structured content. These requirements, based on advanced terminological principles, were taken up in EU projects such as IN-SAFETY (INfrastructure and SAFETY) and OASIS (Open architecture for Accessible Services Integration and Standardization). Results: content integration and content interoperability are key concepts in connection with the emergence of state-of-the-art distributed and federated databases/repositories of structured content. Given the fact that linguistic content items are increasingly combined with or embedded in non-linguistic content items (and vice versa), a systemic and generic approach to data modelling and content management has become the order of the day. Fulfilling the requirements of multilinguality and multimodality based on open standards makes software and database design fit for eAccessibility/eInclusion from the outset. It also makes structured content capable of global content integration and content interoperability, because it enhances its potential for being re-used and re-purposed in totally different eApplications. Such content, as well as the methods, tools and services applied, can be subject to new kinds of certification schemes, which should also be based on standards. Conclusions: content must be totally reliable in some ...

  11. MELDOQ - astrophysical image and pattern analysis in medicine: early recognition of malignant melanomas of the skin by digital image analysis. Final report

    International Nuclear Information System (INIS)

    Bunk, W.; Pompl, R.; Morfill, G.; Stolz, W.; Abmayr, W.

    1999-01-01

    Dermatoscopy is at present the most powerful clinical method for the early detection of malignant melanomas. However, its application requires considerable expertise and experience. Therefore, a quantitative image analysis system has been developed to assist dermatologists in on-site diagnosis and to improve the detection efficiency. Based on a very extensive dataset of dermatoscopic images recorded in a standardized manner, a number of features for the quantitative characterization of complex patterns in melanocytic skin lesions have been developed. The derived classifier improved the detection rate of malignant and benign melanocytic lesions to over 90% (sensitivity = 91.5% and specificity = 93.4% on the test set), using only six measures. A distinguishing feature of the system is the visualization of the quantified characteristics, which are based on the dermatoscopic ABCD rule. The developed prototype of a dermatoscopic workplace consists of defined procedures for standardized image acquisition and documentation, components for the necessary data pre-processing (e.g. shading and colour correction, removal of artefacts), quantification algorithms (evaluating asymmetry properties, border characteristics, and the content of colours and structural components) and classification routines. In 2000 an industrial partner will begin marketing the digital imaging system, including the specialized software for the early detection of skin cancer, which is suitable for clinicians and practitioners. The nonlinear analysis techniques primarily used (e.g. the scaling index method and others) can identify and characterize complex patterns in images and have diagnostic potential in many other applications. (orig.)

  12. A Variable Order Fractional Differential-Based Texture Enhancement Algorithm with Application in Medical Imaging.

    Directory of Open Access Journals (Sweden)

    Qiang Yu

    Full Text Available Texture enhancement is one of the most important techniques in digital image processing and plays an essential role in medical imaging, since textures discriminate information. Most image texture enhancement techniques use classical integral-order differential mask operators or fractional differential mask operators with a fixed fractional order. These masks can produce excessive enhancement of low spatial frequency content, insufficient enhancement of high spatial frequency content, and retention of high spatial frequency noise. To improve upon existing approaches to texture enhancement, we derive an improved Variable Order Fractional Centered Difference (VOFCD) scheme which dynamically adjusts the fractional differential order instead of fixing it. The new VOFCD technique is based on the second-order Riesz fractional differential operator with a Lagrange 3-point interpolation formula, for both grey-scale and colour image enhancement. We then use this method to enhance photographs and a set of medical images related to patients with stroke and Parkinson's disease. The experiments show that our improved fractional differential mask has a higher signal-to-noise ratio than the other fractional differential mask operators. Based on the corresponding quantitative analysis we conclude that the new method offers superior texture enhancement over existing methods.
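
    The fixed-order building block that the variable-order scheme adapts per pixel can be sketched as a fractional centered-difference mask; the coefficients follow the standard Riesz fractional centered difference formula, and the order and gain below are assumed values rather than the paper's.

```python
# Fixed-order Riesz fractional centered-difference mask (building-block sketch).
import numpy as np
from scipy.ndimage import convolve1d
from scipy.special import gamma
from skimage import data

def frac_cd_kernel(alpha, half_width):
    # c_k = (-1)^k * Gamma(a+1) / (Gamma(a/2 - k + 1) * Gamma(a/2 + k + 1))
    k = np.arange(-half_width, half_width + 1)
    return ((-1.0) ** k * gamma(alpha + 1)
            / (gamma(alpha / 2 - k + 1) * gamma(alpha / 2 + k + 1)))

img = data.camera().astype(float)
kern = frac_cd_kernel(alpha=1.4, half_width=3)           # assumed order 1.4
resp = convolve1d(convolve1d(img, kern, axis=0), kern, axis=1)
enhanced = img + 0.5 * resp                              # unsharp-style boost
print(f"output range: {enhanced.min():.1f} .. {enhanced.max():.1f}")
```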

  13. Profiling stem cell states in three-dimensional biomaterial niches using high content image informatics.

    Science.gov (United States)

    Dhaliwal, Anandika; Brenner, Matthew; Wolujewicz, Paul; Zhang, Zheng; Mao, Yong; Batish, Mona; Kohn, Joachim; Moghe, Prabhas V

    2016-11-01

    A predictive framework for the evolution of stem cell biology in 3-D is currently lacking. In this study we propose deep image informatics of the nuclear biology of stem cells to elucidate how 3-D biomaterials steer stem cell lineage phenotypes. The approach is based on high content imaging informatics to capture minute variations in the 3-D spatial organization of the splicing factor SC-35 in the nucleoplasm as a marker to classify emergent cell phenotypes of human mesenchymal stem cells (hMSCs). The cells were cultured in varied 3-D culture systems including hydrogels, electrospun mats and salt-leached scaffolds. The approach encompasses high-resolution 3-D imaging of SC-35 domains and high content image analysis (HCIA) to compute quantitative 3-D nuclear metrics for SC-35 organization in single cells, in concert with machine learning approaches to construct a predictive cell-state classification model. Our findings indicate that hMSCs cultured in collagen hydrogels and induced to differentiate into osteogenic or adipogenic lineages could be classified into the three lineages (stem, adipogenic, osteogenic) with ⩾80% precision and sensitivity within 72 h. Using this framework, the augmentation of osteogenesis exerted by porogen-leached scaffolds was also profiled within 72 h with ∼80% sensitivity. Furthermore, by employing 3-D SC-35 organizational metrics, differential osteogenesis induced by novel electrospun fibrous polymer mats incorporating decellularized matrix could also be elucidated and predictably modeled at just 3 days with high precision. We demonstrate that 3-D SC-35 organizational metrics can be applied to model the stem cell state in 3-D scaffolds. We propose that this methodology can robustly discern minute changes in stem cell states within complex 3-D architectures and map single-cell biological readouts that are critical to assessing population-level cell heterogeneity. The sustained development and validation of bioactive ...

  14. Knowledge-based image analysis: some aspects on the analysis of images using other types of information

    Energy Technology Data Exchange (ETDEWEB)

    Eklundh, J O

    1982-01-01

    The computer vision approach to image analysis is discussed from two aspects. First, this approach is contrasted with the pattern recognition approach. Second, it is discussed how external knowledge, information and models from other fields of science and engineering can be used for image and scene analysis. In particular, the connections between computer vision and computer graphics are pointed out.

  15. COLOR IMAGE RETRIEVAL BASED ON FEATURE FUSION THROUGH MULTIPLE LINEAR REGRESSION ANALYSIS

    Directory of Open Access Journals (Sweden)

    K. Seetharaman

    2015-08-01

    Full Text Available This paper proposes a novel technique based on feature fusion using multiple linear regression analysis, with the least-squares estimation method employed to estimate the parameters. The given input query image is segmented into various regions according to the structure of the image. Color and texture features are extracted for each region of the query image, and the features are fused together using the multiple linear regression model. The estimated parameters of the model, which is fitted to the features, form a vector called the feature vector. The Canberra distance measure is adopted to compare the feature vectors of the query and target images. The F-measure is applied to evaluate the performance of the proposed technique. The results show that the proposed technique is comparable to other existing techniques.
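
    A minimal sketch of the fusion-and-matching idea as the abstract describes it: fit a linear regression over concatenated color and texture features by least squares, use the estimated coefficients as the region's feature vector, and compare vectors with the Canberra distance. The per-region response variable is a hypothetical construction, since the abstract does not spell it out.

```python
# Feature fusion via least-squares regression + Canberra matching (sketch).
import numpy as np
from scipy.spatial.distance import canberra

def region_signature(color_feats, texture_feats, response):
    X = np.column_stack([np.ones(len(response)), color_feats, texture_feats])
    beta, *_ = np.linalg.lstsq(X, response, rcond=None)
    return beta                        # estimated parameters = feature vector

rng = np.random.default_rng(3)
sig_query = region_signature(rng.normal(size=(50, 3)),
                             rng.normal(size=(50, 4)), rng.normal(size=50))
sig_target = region_signature(rng.normal(size=(50, 3)),
                              rng.normal(size=(50, 4)), rng.normal(size=50))
print(f"Canberra distance between regions: {canberra(sig_query, sig_target):.3f}")
```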

  16. Monitoring Urban Tree Cover Using Object-Based Image Analysis and Public Domain Remotely Sensed Data

    Directory of Open Access Journals (Sweden)

    Meghan Halabisky

    2011-10-01

    Full Text Available Urban forest ecosystems provide a range of social and ecological services, but due to the heterogeneity of these canopies their spatial extent is difficult to quantify and monitor. Traditional per-pixel classification methods have been used to map urban canopies; however, such techniques are not generally appropriate for assessing these highly variable landscapes. Landsat imagery has historically been used for per-pixel-driven land use/land cover (LULC) classifications, but the spatial resolution limits our ability to map small urban features. In such cases, hyperspatial resolution imagery, such as aerial or satellite imagery with a resolution of 1 meter or below, is preferred. Object-based image analysis (OBIA) allows for the use of additional variables such as texture, shape, context, and other cognitive information provided by the image analyst to segment and classify image features, and thus improve classifications. As part of this research we created LULC classifications for a pilot study area in Seattle, WA, USA, using OBIA techniques and freely available public aerial photography. We analyzed the differences in accuracy which can be achieved with OBIA using multispectral and true-color imagery. We also compared our results to a satellite-based OBIA LULC and discussed the implications of per-pixel-driven vs. OBIA-driven field sampling campaigns. We demonstrated that the OBIA approach can generate good and repeatable LULC classifications suitable for tree cover assessment in urban areas. Another important finding is that spectral content appeared to be more important than the spatial detail of hyperspatial data when it comes to OBIA-driven LULC classification.

  17. Invariant moments based convolutional neural networks for image analysis

    Directory of Open Access Journals (Sweden)

    Vijayalakshmi G.V. Mahesh

    2017-01-01

    Full Text Available The paper proposes a method using a convolutional neural network to effectively evaluate the discrimination between face and non-face patterns, gender classification using facial images, and facial expression recognition. The novelty of the method lies in the use of initial trainable convolution kernel coefficients derived from Zernike moments by varying the moment order. The performance of the proposed method was compared with a convolutional neural network architecture that used random kernels as initial training parameters. The multilevel configuration of Zernike moments was significant in extracting the shape information suitable for hierarchical feature learning to carry out image analysis and classification. Furthermore, the results showed an outstanding performance of Zernike-moment-based kernels in terms of computation time and classification accuracy.

  18. Rapid Analysis and Exploration of Fluorescence Microscopy Images

    OpenAIRE

    Pavie, Benjamin; Rajaram, Satwik; Ouyang, Austin; Altschuler, Jason; Steininger, Robert J; Wu, Lani; Altschuler, Steven

    2014-01-01

    Despite rapid advances in high-throughput microscopy, quantitative image-based assays still pose significant challenges. While a variety of specialized image analysis tools are available, most traditional image-analysis-based workflows have steep learning curves (for fine tuning of analysis parameters) and result in long turnaround times between imaging and analysis. In particular, cell segmentation, the process of identifying individual cells in an image, is a major bottleneck in this regard.

  19. Representations of Codeine Misuse on Instagram: Content Analysis

    Science.gov (United States)

    Cherian, Roy; Westbrook, Marisa; Ramo, Danielle

    2018-01-01

    Background Prescription opioid misuse has doubled over the past 10 years and is now a public health epidemic. Analysis of social media data may provide additional insights into opioid misuse to supplement the traditional approaches of data collection (eg, self-report on surveys). Objective The aim of this study was to characterize representations of codeine misuse through analysis of public posts on Instagram to understand text phrases related to misuse. Methods We identified hashtags and searchable text phrases associated with codeine misuse by analyzing 1156 sequential Instagram posts over the course of 2 weeks from May 2016 to July 2016. Content analysis of posts associated with these hashtags identified the most common themes arising in images, as well as culture around misuse, including how misuse is happening and being perpetuated through social media. Results A majority of images (50/100; 50.0%) depicted codeine in its commonly misused form, combined with soda (lean). Codeine misuse was commonly represented with the ingestion of alcohol, cannabis, and benzodiazepines. Some images highlighted the previously noted affinity between codeine misuse and hip-hop culture or mainstream popular culture images. Conclusions The prevalence of codeine misuse images, glamorizing of ingestion with soda and alcohol, and their integration with mainstream, popular culture imagery holds the potential to normalize and increase codeine misuse and overdose. To reduce harm and prevent misuse, immediate public health efforts are needed to better understand the relationship between the potential normalization, ritualization, and commercialization of codeine misuse. PMID:29559422

  20. An ion beam analysis software based on ImageJ

    International Nuclear Information System (INIS)

    Udalagama, C.; Chen, X.; Bettiol, A.A.; Watt, F.

    2013-01-01

    The suite of techniques (RBS, STIM, ERDS, PIXE, IL, IF, …) available in ion beam analysis yields a variety of rich information. Typically, after the initial challenge of acquiring data we are then faced with the task of having to extract relevant information or to present the data in a format with the greatest impact. This process sometimes requires developing new software tools. When faced with such situations, the usual practice at the Centre for Ion Beam Applications (CIBA) in Singapore has been to use our computational expertise to develop ad hoc software tools as and when we need them. It then became apparent that the whole ion beam community could benefit from such tools; specifically from a common software toolset that can be developed and maintained by everyone, with freedom to use and allowance to modify. In addition to the benefits of ready-made tools and sharing the onus of development, this also opens up the possibility for collaborators to access and analyse ion beam data without having to depend on an ion beam lab. This has the virtue of making ion beam techniques more accessible to a broader scientific community. We have identified ImageJ as an appropriate software base on which to develop such a common toolset. In addition to being in the public domain and set up for collaborative tool development, ImageJ is accompanied by hundreds of modules (plugins) that allow great breadth in analysis. The present work is the first step towards integrating ion beam analysis into ImageJ. Some of the features of the current version of the ImageJ 'ion beam' plugin are: (1) reading list mode or event-by-event files, (2) energy gates/sorts, (3) sort stacks, (4) colour function, (5) real-time map updating, (6) real-time colour updating and (7) median and average map creation

  1. An ion beam analysis software based on ImageJ

    Energy Technology Data Exchange (ETDEWEB)

    Udalagama, C., E-mail: chammika@nus.edu.sg [Centre for Ion Beam Applications (CIBA), Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117 542 (Singapore); Chen, X.; Bettiol, A.A.; Watt, F. [Centre for Ion Beam Applications (CIBA), Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117 542 (Singapore)

    2013-07-01

    The suite of techniques (RBS, STIM, ERDS, PIXE, IL, IF, …) available in ion beam analysis yields a variety of rich information. Typically, after the initial challenge of acquiring data we are then faced with the task of having to extract relevant information or to present the data in a format with the greatest impact. This process sometimes requires developing new software tools. When faced with such situations, the usual practice at the Centre for Ion Beam Applications (CIBA) in Singapore has been to use our computational expertise to develop ad hoc software tools as and when we need them. It then became apparent that the whole ion beam community could benefit from such tools; specifically from a common software toolset that can be developed and maintained by everyone, with freedom to use and allowance to modify. In addition to the benefits of ready-made tools and sharing the onus of development, this also opens up the possibility for collaborators to access and analyse ion beam data without having to depend on an ion beam lab. This has the virtue of making ion beam techniques more accessible to a broader scientific community. We have identified ImageJ as an appropriate software base on which to develop such a common toolset. In addition to being in the public domain and set up for collaborative tool development, ImageJ is accompanied by hundreds of modules (plugins) that allow great breadth in analysis. The present work is the first step towards integrating ion beam analysis into ImageJ. Some of the features of the current version of the ImageJ 'ion beam' plugin are: (1) reading list mode or event-by-event files, (2) energy gates/sorts, (3) sort stacks, (4) colour function, (5) real-time map updating, (6) real-time colour updating and (7) median and average map creation.

  2. The rice growth image files - The Rice Growth Monitoring for The Phenotypic Functional Analysis | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available The Rice Growth Monitoring for The Phenotypic Functional Analysis. Data name: The rice growth image files. DOI: 10.18908/lsdba.nbdc00945-004. Description of data contents: the rice growth image files, categorized based on file size. Data file name: image files (directory). File URL: ftp://ftp.biosciencedbc.jp/archive/agritogo-rice-phenome/LATEST/image ...

  3. Determination of renewable energy yield from mixed waste material from the use of novel image analysis methods.

    Science.gov (United States)

    Wagland, S T; Dudley, R; Naftaly, M; Longhurst, P J

    2013-11-01

    Two novel techniques are presented in this study which together aim to provide a system able to determine the renewable energy potential of mixed waste materials. An image analysis tool was applied to two waste samples prepared using known quantities of source-segregated recyclable materials. The technique was used to determine the composition of the wastes, from which, using waste component properties, the biogenic content of the samples was calculated. The percentage renewable energy determined by image analysis for each sample was accurate to within 5% of the actual calculated values. Microwave-based multiple-point imaging (AutoHarvest) was used to demonstrate the ability of such a technique to determine the moisture content of mixed samples. This proof-of-concept experiment was shown to produce moisture measurements accurate to within 10%. Overall, the image analysis tool was able to determine the renewable energy potential of the mixed samples, and the AutoHarvest should enable the net calorific value calculations through the provision of moisture content measurements. The proposed system is suitable for combustion facilities, and enables the operator to understand the renewable energy potential of the waste prior to combustion. Copyright © 2013 Elsevier Ltd. All rights reserved.
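
    Once image analysis has estimated the component mass fractions, the renewable share of the energy content follows from per-component properties, roughly as below; the composition, heating values, and biogenic flags are illustrative assumptions, not the paper's data.

```python
# Renewable (biogenic) share of energy from an image-derived waste composition.
composition = {"paper": 0.30, "wood": 0.15, "food": 0.20, "plastic": 0.35}
lhv = {"paper": 13.0, "wood": 15.0, "food": 4.0, "plastic": 32.0}   # MJ/kg, assumed
biogenic = {"paper": 1.0, "wood": 1.0, "food": 1.0, "plastic": 0.0} # fraction

total = sum(composition[c] * lhv[c] for c in composition)
renewable = sum(composition[c] * lhv[c] * biogenic[c] for c in composition)
print(f"renewable share of energy: {100 * renewable / total:.1f}%")
```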

  4. Fan fault diagnosis based on symmetrized dot pattern analysis and image matching

    Science.gov (United States)

    Xu, Xiaogang; Liu, Haixiao; Zhu, Hao; Wang, Songling

    2016-07-01

    To detect the mechanical failure of fans, a new diagnostic method based on the symmetrized dot pattern (SDP) analysis and image matching is proposed. Vibration signals of 13 kinds of running states are acquired on a centrifugal fan test bed and reconstructed by the SDP technique. The SDP pattern templates of each running state are established. An image matching method is performed to diagnose the fault. In order to improve the diagnostic accuracy, the single template, multiple templates and clustering fault templates are used to perform the image matching.
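
    A hedged sketch of the SDP transform itself, which turns a vibration signal into a polar point cloud that can be rasterized and matched against templates: each sample maps to a radius, with the angle offset by a lagged sample's amplitude and mirrored over several symmetry axes. Parameter names follow common SDP conventions, not necessarily the authors'.

```python
# Symmetrized dot pattern (SDP) of a vibration signal (sketch).
import numpy as np

def sdp(signal, lag=1, zeta=30.0, m=6):
    x = (signal - signal.min()) / (signal.max() - signal.min())
    r = x[:-lag]                               # radius from each sample
    dtheta = zeta * x[lag:]                    # angle offset from lagged sample
    pts = []
    for k in range(m):                         # mirror over m symmetry axes
        base = 360.0 * k / m
        pts.append(np.column_stack([r, base + dtheta]))
        pts.append(np.column_stack([r, base - dtheta]))
    return np.vstack(pts)                      # (radius, angle in degrees) pairs

t = np.linspace(0.0, 1.0, 1000)
vib = np.sin(2 * np.pi * 50 * t) + 0.2 * np.random.default_rng(0).normal(size=t.size)
print("SDP point cloud:", sdp(vib).shape)      # rasterize, then template-match
```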

  5. Accuracy of lung nodule density on HRCT: analysis by PSF-based image simulation.

    Science.gov (United States)

    Ohno, Ken; Ohkubo, Masaki; Marasinghe, Janaka C; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi

    2012-11-08

    A computed tomography (CT) image simulation technique based on the point spread function (PSF) was applied to analyze the accuracy of CT-based clinical evaluations of lung nodule density. The PSF of the CT system was measured and used to perform the lung nodule image simulation. The simulated image was then resampled at intervals equal to the pixel size and the slice interval found in clinical high-resolution CT (HRCT) images. On those images, the nodule density was measured by placing a region of interest (ROI) of the kind commonly used in routine clinical practice, and comparing the measured value with the true value (the known density of the object function used in the image simulation). It was quantitatively determined that the measured nodule density depended on the nodule diameter and the image reconstruction parameters (kernel and slice thickness). In addition, the measured density fluctuated depending on the offset between the nodule center and the image voxel center. This fluctuation was reduced by decreasing the slice interval (i.e., with the use of overlapping reconstruction), leading to a stable density evaluation. Our proposed method of PSF-based image simulation accompanied by resampling enables a quantitative analysis of the accuracy of CT-based evaluations of lung nodule density. These results could potentially reveal clinical misreadings in diagnosis, and lead to more accurate and precise density evaluations. They would also be of value for determining the optimum scan and reconstruction parameters, such as image reconstruction kernels and slice thicknesses/intervals.
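
    The validation idea can be sketched in a few lines: blur a known object function with the system PSF, resample at the clinical voxel grid, and compare an ROI mean against the true density. A Gaussian kernel stands in for the measured CT PSF, and all geometry below is assumed.

```python
# PSF-based nodule image simulation and ROI density measurement (sketch).
import numpy as np
from scipy.ndimage import gaussian_filter

true_density = 100.0                          # known object-function value (HU)
nodule = np.zeros((128, 128))
yy, xx = np.mgrid[0:128, 0:128]
nodule[(yy - 64) ** 2 + (xx - 64) ** 2 < 8 ** 2] = true_density

simulated = gaussian_filter(nodule, sigma=2.5)   # PSF blur (assumed width)
sampled = simulated[::2, ::2]                    # resample at clinical pixel size

roi = sampled[28:36, 28:36]                      # ROI centred on the nodule
print(f"measured {roi.mean():.1f} HU vs true {true_density:.1f} HU")
```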

  6. Image quality assessment based on multiscale geometric analysis.

    Science.gov (United States)

    Gao, Xinbo; Lu, Wen; Tao, Dacheng; Li, Xuelong

    2009-07-01

    Reduced-reference (RR) image quality assessment (IQA) has been recognized as an effective and efficient way to predict the visual quality of distorted images. The current standard is the wavelet-domain natural image statistics model (WNISM), which applies the Kullback-Leibler divergence between the marginal distributions of wavelet coefficients of the reference and distorted images to measure the image distortion. However, WNISM fails to consider the statistical correlations of wavelet coefficients in different subbands and the visual response characteristics of the mammalian cortical simple cells. In addition, wavelet transforms are optimal greedy approximations to extract singularity structures, so they fail to explicitly extract the image geometric information, e.g., lines and curves. Finally, wavelet coefficients are dense for smooth image edge contours. In this paper, to target the aforementioned problems in IQA, we develop a novel framework for IQA to mimic the human visual system (HVS) by incorporating the merits from multiscale geometric analysis (MGA), the contrast sensitivity function (CSF), and Weber's law of just noticeable difference (JND). In the proposed framework, MGA is utilized to decompose images and then extract features to mimic the multichannel structure of HVS. Additionally, MGA offers a series of transforms including wavelet, curvelet, bandelet, contourlet, wavelet-based contourlet transform (WBCT), and hybrid wavelets and directional filter banks (HWD), and different transforms capture different types of image geometric information. CSF is applied to weight coefficients obtained by MGA to simulate the appearance of images to observers by taking into account many of the nonlinearities inherent in HVS. JND is finally introduced to produce a noticeable variation in sensory experience. Thorough empirical studies are carried out upon the LIVE database against subjective mean opinion score (MOS) and demonstrate that 1) the proposed framework has ...
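
    The WNISM baseline that the framework generalizes can be sketched directly: compare marginal wavelet-coefficient histograms of reference and distorted images with a Kullback-Leibler divergence. PyWavelets supplies the transform; the MGA/CSF/JND stages of the proposed framework are deliberately omitted, and the histogram range is an assumption.

```python
# Reduced-reference IQA baseline: KL divergence of wavelet marginals (sketch).
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter
from skimage import data

def subband_hist(img, bins=64, rng=(-50.0, 50.0)):
    _, (ch, cv, cd) = pywt.dwt2(img.astype(float), "db2")
    coeffs = np.concatenate([ch.ravel(), cv.ravel(), cd.ravel()])
    hist, _ = np.histogram(coeffs, bins=bins, range=rng, density=True)
    return hist + 1e-8                       # avoid log(0)

ref = data.camera()
dist = gaussian_filter(ref.astype(float), sigma=2)   # blur distortion

p, q = subband_hist(ref), subband_hist(dist)
kl = np.sum(p * np.log(p / q)) * (100.0 / 64)        # times bin width
print(f"wavelet-domain KL divergence: {kl:.4f}")
```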

  7. Horror Image Recognition Based on Context-Aware Multi-Instance Learning.

    Science.gov (United States)

    Li, Bing; Xiong, Weihua; Wu, Ou; Hu, Weiming; Maybank, Stephen; Yan, Shuicheng

    2015-12-01

    Horror content sharing on the Web is a growing phenomenon that can interfere with our daily life and affect the mental health of those involved. As an important form of expression, horror images have their own characteristics that can evoke extreme emotions. In this paper, we present a novel context-aware multi-instance learning (CMIL) algorithm for horror image recognition. The CMIL algorithm identifies horror images and picks out the regions that cause the sensation of horror in these horror images. It obtains contextual cues among adjacent regions in an image using a random walk on a contextual graph. Borrowing the strength of the fuzzy support vector machine (FSVM), we define a heuristic optimization procedure based on the FSVM to search for the optimal classifier for the CMIL. To improve the initialization of the CMIL, we propose a novel visual saliency model based on the tensor analysis. The average saliency value of each segmented region is set as its initial fuzzy membership in the CMIL. The advantage of the tensor-based visual saliency model is that it not only adaptively selects features, but also dynamically determines fusion weights for saliency value combination from different feature subspaces. The effectiveness of the proposed CMIL model is demonstrated by its use in horror image recognition on two large-scale image sets collected from the Internet.

  8. Bundle Adjustment-Based Stability Analysis Method with a Case Study of a Dual Fluoroscopy Imaging System

    Science.gov (United States)

    Al-Durgham, K.; Lichti, D. D.; Detchev, I.; Kuntze, G.; Ronsky, J. L.

    2018-05-01

    A fundamental task in photogrammetry is the temporal stability analysis of a camera/imaging-system's calibration parameters. This is essential to validate the repeatability of the parameters' estimation, to detect any behavioural changes in the camera/imaging system, and to ensure precise photogrammetric products. Many stability analysis methods exist in the photogrammetric literature; each has different methodological bases, advantages and disadvantages. This paper presents a simple and rigorous stability analysis method that can be straightforwardly implemented for a single camera or an imaging system with multiple cameras. The basic collinearity model is used to capture differences between two calibration datasets and to establish the stability analysis methodology. Geometric simulation is used as a tool to derive image and object space scenarios. Experiments were performed on real calibration datasets from a dual fluoroscopy (DF; X-ray-based) imaging system. The calibration data consisted of hundreds of images and thousands of image observations from six temporal points over a two-day period for a precise evaluation of the DF system stability. The stability of the DF system was found to be within a range of 0.01 to 0.66 mm in terms of 3D coordinate root-mean-square error (RMSE) for the single-camera analysis, and 0.07 to 0.19 mm for the dual-camera analysis. To the best of the authors' knowledge, this work is the first to address the topic of DF stability analysis.

  9. Computational content analysis of European Central Bank statements

    NARCIS (Netherlands)

    Milea, D.V.; Almeida, R.J.; Sharef, N.M.; Kaymak, U.; Frasincar, F.

    2012-01-01

    In this paper we present a framework for the computational content analysis of European Central Bank (ECB) statements. Based on this framework, we provide two approaches that can be used in a practical context. Both approaches use the content of ECB statements to predict upward and downward movement

  10. Operational Automatic Remote Sensing Image Understanding Systems: Beyond Geographic Object-Based and Object-Oriented Image Analysis (GEOBIA/GEOOIA. Part 1: Introduction

    Directory of Open Access Journals (Sweden)

    Andrea Baraldi

    2012-09-01

    Full Text Available According to existing literature and despite their commercial success, state-of-the-art two-stage non-iterative geographic object-based image analysis (GEOBIA) systems and three-stage iterative geographic object-oriented image analysis (GEOOIA) systems, where GEOOIA ⊃ GEOBIA, remain affected by a lack of productivity, general consensus and research. To outperform the degree of automation, accuracy, efficiency, robustness, scalability and timeliness of existing GEOBIA/GEOOIA systems in compliance with the Quality Assurance Framework for Earth Observation (QA4EO) guidelines, this methodological work is split into two parts. The present first paper provides a multi-disciplinary Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis of the GEOBIA/GEOOIA approaches that augments similar analyses proposed in recent years. In line with constraints stemming from human vision, this SWOT analysis promotes a shift of learning paradigm in the pre-attentive vision first stage of a remote sensing (RS) image understanding system (RS-IUS), from sub-symbolic statistical model-based (inductive) image segmentation to symbolic physical model-based (deductive) image preliminary classification. Hence, a symbolic deductive pre-attentive vision first stage accomplishes image sub-symbolic segmentation and image symbolic pre-classification simultaneously. In the second part of this work a novel hybrid (combined deductive and inductive) RS-IUS architecture featuring a symbolic deductive pre-attentive vision first stage is proposed and discussed in terms of: (a) computational theory (system design); (b) information/knowledge representation; (c) algorithm design; and (d) implementation. As proof-of-concept of a symbolic physical model-based pre-attentive vision first stage, the spectral knowledge-based, operational, near real-time Satellite Image Automatic Mapper™ (SIAM™) is selected from existing literature. To the best of these authors' knowledge, this is the first time a ...

  11. #fitspo on Instagram: A mixed-methods approach using Netlytic and photo analysis, uncovering the online discussion and author/image characteristics.

    Science.gov (United States)

    Santarossa, Sara; Coyne, Paige; Lisinski, Carly; Woodruff, Sarah J

    2016-11-01

    The #fitspo 'tag' is a recent trend on Instagram, used on posts to motivate others towards a healthy lifestyle through exercise/eating habits. This study used a mixed-methods approach consisting of text and network analysis via the Netlytic program (N = 10,000 #fitspo posts) and a content analysis of #fitspo images (N = 122) to examine author and image characteristics. Results suggest that #fitspo posts may motivate through appearance-mediated themes, as the largest content categories (based on the associated text) were 'feeling good' and 'appearance'. Furthermore, #fitspo posts may create peer influence/support, as personal (as opposed to non-personal) accounts were associated with higher popularity of images (i.e. number of likes/followers). Finally, most images contained posed individuals with some degree of objectification.

  12. Methods in quantitative image analysis.

    Science.gov (United States)

    Oberholzer, M; Ostreicher, M; Christen, H; Brühlmann, M

    1996-05-01

    The main steps of image analysis are image capturing, image storage (compression), correcting imaging defects (e.g. non-uniform illumination, electronic noise, glare effect), image enhancement, segmentation of objects in the image and image measurements. Digitisation is performed by a camera. The most modern types include a frame-grabber, which converts the analogue signal into digital (numerical) information. The numerical information consists of the grey values describing the brightness of every point within the image, named a pixel. The information is stored in bits. Eight bits are summarised in one byte. Therefore, a grey value can take one of 256 (2^8) values, between 0 and 255. The human eye seems to be quite content with a display of 6-bit images (corresponding to 64 different grey values). In a digitised image, the pixel grey values can vary within regions that are uniform in the original scene: the image is noisy. The noise is mainly manifested in the background of the image. For an optimal discrimination between different objects or features in an image, uniformity of illumination in the whole image is required. These defects can be minimised by shading correction [subtraction of a background (white) image from the original image, pixel per pixel, or division of the original image by the background image]. The brightness of an image represented by its grey values can be analysed for every single pixel or for a group of pixels. The most frequently used pixel-based image descriptors are optical density, integrated optical density, the histogram of the grey values, mean grey value and entropy. The distribution of the grey values existing within an image is one of the most important characteristics of the image. However, the histogram gives no information about the texture of the image. The simplest way to improve the contrast of an image is to expand the brightness scale by spreading the histogram out to the full available range. Rules for transforming the grey value
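
    The shading correction described above translates directly into code; a minimal NumPy sketch (our illustration, with a hypothetical helper name), subtracting or dividing by the background (white) image pixel per pixel and stretching the result back onto the 8-bit grey-value range:

        import numpy as np

        def shading_correct(image, background, mode="divide"):
            img = image.astype(np.float64)
            bg = background.astype(np.float64)
            if mode == "subtract":
                corrected = img - bg                    # pixel-per-pixel subtraction
            else:
                corrected = img / np.maximum(bg, 1e-6)  # pixel-per-pixel division
            # Stretch the result back onto the full 8-bit grey-value range (0..255).
            corrected -= corrected.min()
            corrected *= 255.0 / max(corrected.max(), 1e-12)
            return corrected.astype(np.uint8)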

  13. An image adaptive, wavelet-based watermarking of digital images

    Science.gov (United States)

    Agreste, Santa; Andaloro, Guido; Prestipino, Daniela; Puccio, Luigia

    2007-12-01

    In digital management, multimedia content and data can easily be used in an illegal way--being copied, modified and distributed again. Copyright protection, intellectual and material rights protection for authors, owners, buyers and distributors, and the authenticity of content are crucial factors in solving an urgent and real problem. In such a scenario, digital watermark techniques are emerging as a valid solution. In this paper, we describe an algorithm--called WM2.0--for an invisible watermark: private, strong, wavelet-based and developed for digital image protection and authenticity. The use of the discrete wavelet transform (DWT) is motivated by its good time-frequency features and its good match with human visual system directives. These two combined elements are important in building an invisible and robust watermark. WM2.0 works on a dual scheme: watermark embedding and watermark detection. The watermark is embedded into the high-frequency DWT components of a specific sub-image and is calculated in correlation with the image features and statistical properties. Watermark detection applies a re-synchronization between the original and the watermarked image. The correlation between the watermarked DWT coefficients and the watermark signal is calculated according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the algorithm to be resistant against geometric, filtering and StirMark attacks with a low false-alarm rate.
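
    As a minimal sketch of the dual embed/detect scheme (our illustration under assumed details, not the authors' WM2.0 code), a key-seeded pseudo-random watermark can be added to the high-frequency DWT band with PyWavelets and later detected by correlation; in the Neyman-Pearson setting the detection threshold would be fixed from a target false-alarm rate, here it is left as a parameter:

        import numpy as np
        import pywt  # PyWavelets

        def embed_watermark(image, key, strength=2.0):
            cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), "haar")
            w = np.random.default_rng(key).standard_normal(cD.shape)  # private watermark
            marked = pywt.idwt2((cA, (cH, cV, cD + strength * w)), "haar")
            return marked, w

        def detect_watermark(image, w, threshold):
            _, (_, _, cD) = pywt.dwt2(image.astype(float), "haar")
            corr = float(np.mean(cD * w))   # correlation detection statistic
            return corr > threshold         # True if the watermark is present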

  14. Research on Wavelet-Based Algorithm for Image Contrast Enhancement

    Institute of Scientific and Technical Information of China (English)

    Wu Ying-qian; Du Pei-jun; Shi Peng-fei

    2004-01-01

    A novel wavelet-based algorithm for image enhancement is proposed in this paper. On the basis of multiscale analysis, the proposed algorithm efficiently solves the problem of noise over-enhancement, which commonly occurs in traditional methods for contrast enhancement. The decomposed coefficients at the same scale are processed by a nonlinear method, and the coefficients at different scales are enhanced to different degrees. Throughout the procedure, the method takes full advantage of the properties of the human visual system so as to achieve better performance. The simulations demonstrate that these characteristics enable the proposed approach to fully enhance image content, to efficiently limit the amplification of noise, and to achieve a much better enhancement effect than traditional approaches.
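
    A hedged sketch of this kind of scale-dependent nonlinear processing (the abstract does not give the exact nonlinearity, so the soft threshold and the linear gain schedule below are our assumptions): small detail coefficients, which are likely noise, are suppressed, larger ones are amplified, and the gain decreases toward the finest, noisiest scale:

        import numpy as np
        import pywt  # PyWavelets

        def enhance(image, levels=3, gain=1.8, noise_t=5.0):
            coeffs = pywt.wavedec2(image.astype(float), "db2", level=levels)
            out = [coeffs[0]]                   # approximation band left untouched
            n = len(coeffs) - 1                 # detail levels, coarsest to finest
            for k, bands in enumerate(coeffs[1:]):
                g = gain - (gain - 1.0) * k / max(n - 1, 1)  # weaker gain at fine scales
                out.append(tuple(np.sign(b) * np.maximum(np.abs(b) - noise_t, 0.0) * g
                                 for b in bands))
            return pywt.waverec2(out, "db2")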

  15. IMAGE ANALYSIS BASED ON EDGE DETECTION TECHNIQUES

    Institute of Scientific and Technical Information of China (English)

    纳瑟; 刘重庆

    2002-01-01

    A method that incorporates an edge detection technique, Markov Random Fields (MRF), watershed segmentation and merging techniques was presented for performing image segmentation and edge detection tasks. It first applies the edge detection technique to obtain a Difference In Strength (DIS) map. An initial segmented result is obtained based on the K-means clustering technique and the minimum distance. Then the region process is modeled by an MRF to obtain an image that contains different intensity regions. The gradient values are calculated and the watershed technique is then used. The DIS calculation is performed for each pixel to define all the edges (weak or strong) in the image, and the DIS map is obtained. This map serves as prior knowledge about the likelihood of region boundaries for the next step (MRF), which gives an image that carries all the edge and region information. In the MRF model, the gray level l at pixel location i in an image X depends on the gray levels of the neighboring pixels. The segmentation results are improved by using the watershed algorithm. After all pixels of the segmented regions are processed, a map of primitive regions with edges is generated. The edge map is obtained using a merge process based on averaged intensity mean values. Common edge detectors that work on the MRF-segmented image are used and the results are compared. The segmentation and edge detection result is one closed boundary per actual region in the image.
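
    The gradient-plus-watershed portion of such a pipeline can be sketched with scikit-image (a stand-in illustration, not the paper's implementation; the low-gradient marker rule below replaces the K-means/MRF initialization described above):

        import numpy as np
        from scipy import ndimage as ndi
        from skimage import filters, segmentation

        def watershed_regions(gray):
            gradient = filters.sobel(gray)      # gradient-magnitude (DIS-like) map
            # Low-gradient interior points serve as markers; the paper instead
            # uses a K-means/MRF initial segmentation.
            markers, _ = ndi.label(gradient < 0.05 * gradient.max())
            labels = segmentation.watershed(gradient, markers)
            return labels                       # one integer label per region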

  16. Steganalysis based on reducing the differences of image statistical characteristics

    Science.gov (United States)

    Wang, Ran; Niu, Shaozhang; Ping, Xijian; Zhang, Tao

    2018-04-01

    Compared with the embedding process, the image content makes a more significant impact on the differences in image statistical characteristics. This makes image steganalysis a classification problem with larger within-class scatter distances and smaller between-class scatter distances. As a result, the steganalysis features become inseparable because of the differences in image statistical characteristics. In this paper, a new steganalysis framework is proposed that can reduce the differences in image statistical characteristics caused by various contents and processing methods. The given images are segmented into several sub-images according to texture complexity. Steganalysis features are extracted separately from each subset with the same or close texture complexity to build a classifier. The final steganalysis result is obtained through a weighted fusion process. The theoretical analysis and experimental results demonstrate the validity of the framework.
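
    The segmentation-by-texture-complexity step can be illustrated as below (our sketch; the block size, the variance measure and the size-proportional fusion weights are assumptions, since the abstract does not fix them):

        import numpy as np

        def split_by_texture(image, block=64, n_bins=3):
            # Bin non-overlapping blocks by local variance (a texture-complexity
            # proxy); features would then be extracted per bin, one classifier each.
            h, w = image.shape
            blocks, complexity = [], []
            for y in range(0, h - block + 1, block):
                for x in range(0, w - block + 1, block):
                    b = image[y:y + block, x:x + block].astype(float)
                    blocks.append(b)
                    complexity.append(b.var())
            edges = np.quantile(complexity, np.linspace(0, 1, n_bins + 1))
            bins = np.digitize(complexity, edges[1:-1])
            return [[b for b, k in zip(blocks, bins) if k == i] for i in range(n_bins)]

        def fuse(scores, weights):
            # Weighted fusion of per-bin steganalysis scores into the final result.
            return float(np.dot(scores, weights) / np.sum(weights))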

  17. Integrating fuzzy object based image analysis and ant colony optimization for road extraction from remotely sensed images

    Science.gov (United States)

    Maboudi, Mehdi; Amini, Jalal; Malihi, Shirin; Hahn, Michael

    2018-04-01

    An updated road network, as a crucial part of the transportation database, plays an important role in various applications. Thus, increasing the automation of road extraction approaches from remote sensing images has been the subject of extensive research. In this paper, we propose an object-based road extraction approach from very high resolution (VHR) satellite images. Based on object-based image analysis, our approach incorporates various spatial, spectral and textural object descriptors, the capability of a fuzzy logic system for handling the uncertainties in road modelling, and the effectiveness and suitability of the ant colony algorithm for optimization of network-related problems. Four VHR optical satellite images acquired by the WorldView-2 and IKONOS satellites are used to evaluate the proposed approach. Evaluation of the extracted road networks shows that the average completeness, correctness and quality of the results can reach 89%, 93% and 83% respectively, indicating that the proposed approach is applicable for urban road extraction. We also analyzed the sensitivity of our algorithm to different ant colony optimization parameter values. Comparison of the achieved results with those of four state-of-the-art algorithms and quantification of the robustness of the fuzzy rule set demonstrate that the proposed approach is both efficient and transferable to other comparable images.
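
    For reference, the completeness, correctness and quality figures quoted above are the standard road-extraction measures; in minimal form (assuming TP/FP/FN counts from a buffer-based matching of extracted against reference roads, which is not shown here):

        def road_metrics(tp, fp, fn):
            completeness = tp / (tp + fn)   # share of the reference road that was found
            correctness = tp / (tp + fp)    # share of the extracted road that is correct
            quality = tp / (tp + fp + fn)   # combined measure
            return completeness, correctness, quality

        # e.g. road_metrics(890, 67, 110) gives roughly 0.89, 0.93 and 0.83,
        # matching the averages reported in the abstract.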

  18. A NEW FRAMEWORK FOR OBJECT-BASED IMAGE ANALYSIS BASED ON SEGMENTATION SCALE SPACE AND RANDOM FOREST CLASSIFIER

    Directory of Open Access Journals (Sweden)

    A. Hadavand

    2015-12-01

    Full Text Available In this paper a new object-based framework is developed for automated scale selection in image segmentation. The quality of image objects has an important impact on further analyses. Due to the strong dependency of segmentation results on the scale parameter, choosing the best value of this parameter for each class becomes a main challenge in object-based image analysis. We propose a new framework which employs a pixel-based land cover map to estimate the initial scale dedicated to each class. These scales are used to build a segmentation scale space (SSS), a hierarchy of image objects. Optimization of the SSS with respect to NDVI and DSM values in each super-object is used to get the best scale in local regions of the image scene. The optimized SSS segmentations are finally classified to produce the final land cover map. A very high resolution aerial image and a digital surface model provided by the ISPRS 2D semantic labelling dataset are used in our experiments. The result of our proposed method is comparable to that of the ESP tool, a well-known method for estimating the segmentation scale, and marginally improves the overall accuracy of classification from 79% to 80%.

  19. Automating the construction of scene classifiers for content-based video retrieval

    NARCIS (Netherlands)

    Khan, L.; Israël, Menno; Petrushin, V.A.; van den Broek, Egon; van der Putten, Peter

    2004-01-01

    This paper introduces a real time automatic scene classifier within content-based video retrieval. In our envisioned approach end users like documentalists, not image processing experts, build classifiers interactively, by simply indicating positive examples of a scene. Classification consists of a

  20. Competency-Based Teaching in Radiology - Implementation and Evaluation of Interactive Workstation-Based Learning to Apply NKLM-Based Content.

    Science.gov (United States)

    Koestner, Wolfgang; Otten, Wiebke; Kaireit, Till; Wacker, Frank K; Dettmer, Sabine

    2017-11-01

    Purpose  New teaching formats are required to implement competency-based teaching in radiology teaching. Therefore, we have established and evaluated two practical competency-based radiological courses. Materials and Methods  The courses were held in a multimedia room with 25 computers and a professional DICOM viewer. Students were taught basic image analysis and presented clinical cases with a DICOM viewer under supervision of an instructor using desktop monitoring software. Two courses (elective course and obligatory course) were evaluated by the students (n = 160 and n = 100) and instructors (n = 9) using an anonymized online survey. Results  Courses were evaluated positively by the students and instructors. From the perspective of the students, the courses increased understanding of cross-sectional anatomy (elective/obligatory course: 97 %/95 %) and radiologic findings (97 %/99 %). Furthermore, the course increased the students' interest in radiology (61 %/65 %). The students considered this way of teaching to be relevant to their future occupation (92 % of students in the obligatory course). The higher incidence of teacher-student interaction and the possibility of independent image analysis were rated positively. The majority of instructors did not observe increased distractibility due to the computers (67 %) or notice worse preparation for MC tests (56 %). However, 56 % of instructors reported greater preparation effort. Conclusion  Practical competency-based radiological teaching using a DICOM viewer is a feasible innovative approach with high acceptance among students and instructors. It fosters competency-based learning as proposed by the model curriculum of the German Radiological Society (DRG) and the National Competency-based Catalogue of Learning Objectives for Undergraduate Medical Education (NKLM). Key Points   · Practical competency-based radiological teaching is highly accepted by students and instructors

  1. Texture analysis of B-mode ultrasound images to stage hepatic lipidosis in the dairy cow: A methodological study.

    Science.gov (United States)

    Banzato, Tommaso; Fiore, Enrico; Morgante, Massimo; Manuali, Elisabetta; Zotti, Alessandro

    2016-10-01

    Hepatic lipidosis is the most widespread hepatic disease in the lactating cow. A new methodology to estimate the degree of fatty infiltration of the liver in lactating cows by means of texture analysis of B-mode ultrasound images is proposed. B-mode ultrasonography of the liver was performed in 48 Holstein Friesian cows using standardized ultrasound parameters. Liver biopsies to determine the triacylglycerol content of the liver (TAGqa) were obtained from each animal. A large number of texture parameters were calculated on the ultrasound images by means of free software. Based on the TAGqa content of the liver, 29 samples were classified as mild (TAGqa < 100 mg/g) and 13 as severe (TAGqa > 100 mg/g) steatosis. Stepwise linear regression analysis was performed to predict the TAGqa content of the liver (TAGpred) from the texture parameters calculated on the ultrasound images. A five-variable model was used to predict the TAG content from the ultrasound images. The regression model explained 83.4% of the variance. An area under the curve (AUC) of 0.949 was calculated for 50 mg/g of TAGqa; using an optimal cut-off value of 72 mg/g, TAGpred had a sensitivity of 86.2% and a specificity of 84.2%. An AUC of 0.978 for 100 mg/g of TAGqa was calculated; using an optimal cut-off value of 89 mg/g, TAGpred sensitivity was 92.3% and specificity was 88.6%. Texture analysis of B-mode ultrasound images may therefore be used to accurately predict the TAG content of the liver in lactating cows. Copyright © 2016 Elsevier Ltd. All rights reserved.
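
    The cut-off/AUC evaluation reported above can be sketched as follows (an illustration with scikit-learn, not the authors' code; features and tag_measured are placeholder arrays, and plain least squares stands in for the stepwise variable selection):

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import roc_auc_score, roc_curve

        def evaluate(features, tag_measured, tag_threshold=100.0):
            model = LinearRegression().fit(features, tag_measured)
            tag_pred = model.predict(features)                   # TAGpred
            y_true = (tag_measured > tag_threshold).astype(int)  # severe steatosis?
            auc = roc_auc_score(y_true, tag_pred)
            fpr, tpr, cuts = roc_curve(y_true, tag_pred)
            best = np.argmax(tpr - fpr)                          # Youden's J cut-off
            return auc, cuts[best]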

  2. Wavefront analysis for plenoptic camera imaging

    International Nuclear Information System (INIS)

    Luan Yin-Sen; Xu Bing; Yang Ping; Tang Guo-Mao

    2017-01-01

    The plenoptic camera is a single-lens stereo camera which can retrieve the direction of light rays while detecting their intensity distribution. In this paper, to reveal more about the nature of plenoptic camera imaging, we present a wavefront analysis of plenoptic camera imaging from the viewpoint of physical optics rather than the ray-tracing model of geometric optics. Specifically, the wavefront imaging model of a plenoptic camera is analyzed and simulated by scalar diffraction theory, and the depth estimation is re-described on the basis of physical optics. We simulate a set of raw plenoptic images of an object scene, thereby validating the analysis and derivations; the differences between imaging analysis methods based on geometric optics and physical optics are also shown in the simulations. (paper)

  3. iPixel: a visual content-based and semantic search engine for retrieving digitized mammograms by using collective intelligence.

    Science.gov (United States)

    Alor-Hernández, Giner; Pérez-Gallardo, Yuliana; Posada-Gómez, Rubén; Cortes-Robles, Guillermo; Rodríguez-González, Alejandro; Aguilar-Laserre, Alberto A

    2012-09-01

    Nowadays, traditional search engines such as Google, Yahoo and Bing facilitate the retrieval of information in the format of images, but the results are not always useful for the users. This is mainly due to two problems: (1) the semantic keywords are not taken into consideration and (2) it is not always possible to establish a query using the image features. This issue has been covered in different domains in order to develop content-based image retrieval (CBIR) systems. The expert community has focussed their attention on the healthcare domain, where a lot of visual information for medical analysis is available. This paper provides a solution called iPixel Visual Search Engine, which involves semantics and content issues in order to search for digitized mammograms. iPixel offers the possibility of retrieving mammogram features using collective intelligence and implementing a CBIR algorithm. Our proposal compares not only features with similar semantic meaning, but also visual features. In this sense, the comparisons are made in different ways: by the number of regions per image, by maximum and minimum size of regions per image and by average intensity level of each region. iPixel Visual Search Engine supports the medical community in differential diagnoses related to the diseases of the breast. The iPixel Visual Search Engine has been validated by experts in the healthcare domain, such as radiologists, in addition to experts in digital image analysis.

  4. Evaluation of moisture content distribution in wood by soft X-ray imaging

    International Nuclear Information System (INIS)

    Tanaka, T.; Avramidis, S.; Shida, S.

    2009-01-01

    A technique for nondestructive evaluation of moisture content distribution of Japanese cedar (sugi) during drying using a newly developed soft X-ray digital microscope was investigated. Radial, tangential, and cross-sectional samples measuring 100 x 100 x 10 mm were cut from green sugi wood. Each sample was dried in several steps in an oven and upon completion of each step, the mass was recorded and a soft X-ray image was taken. The relationship between moisture content and the average grayscale value of the soft X-ray image at each step was linear. In addition, the linear regressions overlapped each other regardless of the sample sections. These results showed that soft X-ray images could accurately estimate the moisture content. Applying this relationship to a small section of each sample, the moisture content distribution was estimated from the image differential between the soft X-ray pictures obtained from the sample in question and the same sample in the oven-dried condition. Moisture content profiles for 10-mm-wide parts at the centers of the samples were also obtained. The shapes of the profiles supported the evaluation method used in this study
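
    The linear grey-value/moisture-content relationship and the oven-dry image differential described above lend themselves to a short sketch (our illustration, not the authors' code; gray_means, measured_mc and the image arrays are assumed inputs, and the per-pixel estimate is a deliberate simplification):

        import numpy as np

        def fit_mc_model(gray_means, measured_mc):
            # Linear fit between average grayscale values and measured moisture content.
            slope, intercept = np.polyfit(gray_means, measured_mc, deg=1)
            return slope, intercept

        def mc_map(image, dry_image, slope):
            # Per-pixel moisture estimate from the differential between the current
            # soft X-ray image and the same sample imaged oven-dry (simplified:
            # assumes moisture is proportional to the grey-value difference).
            diff = image.astype(float) - dry_image.astype(float)
            return slope * diff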

  5. Single Session Web-Based Counselling: A Thematic Analysis of Content from the Perspective of the Client

    Science.gov (United States)

    Rodda, S. N.; Lubman, D. I.; Cheetham, A.; Dowling, N. A.; Jackson, A. C.

    2015-01-01

    Despite the exponential growth of non-appointment-based web counselling, there is limited information on what happens in a single session intervention. This exploratory study, involving a thematic analysis of 85 counselling transcripts of people seeking help for problem gambling, aimed to describe the presentation and content of online…

  6. Spatial compression algorithm for the analysis of very large multivariate images

    Science.gov (United States)

    Keenan, Michael R [Albuquerque, NM

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
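
    The core idea lends itself to a short sketch (a hedged illustration of wavelet-based spatial compression in general, not the patented algorithm): wavelet-transform each channel along the spatial axis and keep only the most significant coefficient rows, so that multivariate analysis operates on a much smaller matrix:

        import numpy as np
        import pywt  # PyWavelets

        def spatial_compress(data, keep=0.05):
            # data: (n_pixels, n_channels) multivariate image matrix.
            cols = []
            for j in range(data.shape[1]):
                arr, _ = pywt.coeffs_to_array(
                    pywt.wavedec(data[:, j].astype(float), "haar"))
                cols.append(arr)                    # flat coefficient vector
            coeff = np.stack(cols, axis=1)          # (n_coeffs, n_channels)
            energy = np.sum(coeff ** 2, axis=1)     # joint significance per row
            k = max(1, int(keep * coeff.shape[0]))
            idx = np.sort(np.argsort(energy)[-k:])  # top-k significant rows
            return coeff[idx]                       # compressed data matrix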

  7. Image Retrieval based on Integration between Color and Geometric Moment Features

    International Nuclear Information System (INIS)

    Saad, M.H.; Saleh, H.I.; Konbor, H.; Ashour, M.

    2012-01-01

    Content-based image retrieval is the retrieval of images based on visual features such as colour, texture and shape. Current approaches to CBIR differ in terms of which image features are extracted; recent work deals with combinations of distances or scores from different and usually independent representations in an attempt to induce high-level semantics from the low-level descriptors of the images. Content-based image retrieval has many application areas, such as education, commerce, the military, searching, biomedicine and Web image classification. This paper proposes a new image retrieval system which uses color and geometric moment features to form the feature vectors. The Bhattacharyya distance and histogram intersection are used to perform feature matching. This framework integrates the color histogram, which represents the global feature, with geometric moments as a local descriptor to enhance the retrieval results. The proposed technique is suitable for precisely retrieving images even in deformation cases such as geometric deformations and noise. It is tested on a standard dataset; the results show that a combination of our approach as a local image descriptor with other global descriptors outperforms other approaches.
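
    For reference, the two matching measures named above have compact definitions; a minimal sketch for normalized histograms h1 and h2 of equal length:

        import numpy as np

        def histogram_intersection(h1, h2):
            return float(np.sum(np.minimum(h1, h2)))   # 1.0 means identical histograms

        def bhattacharyya_distance(h1, h2):
            bc = np.sum(np.sqrt(h1 * h2))              # Bhattacharyya coefficient
            return float(-np.log(max(bc, 1e-12)))      # 0.0 means identical histograms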

  8. Context-based coding of bilevel images enhanced by digital straight line analysis

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2006-01-01

    , or segmentation maps are also encoded efficiently. The algorithm is not targeted at document images with text, which can be coded efficiently with dictionary-based techniques as in JBIG2. The scheme is based on a local analysis of the digital straightness of the causal part of the object boundary, which is used...... in the context definition for arithmetic encoding. Tested on individual images of standard TV resolution binary shapes and the binary layers of a digital map, the proposed algorithm outperforms PWC, JBIG, JBIG2, and MPEG-4 CAE. On the binary shapes, the code lengths are reduced by 21%, 27 %, 28 %, and 41...

  9. The impact of image-size manipulation and sugar content on children's cereal consumption.

    Science.gov (United States)

    Neyens, E; Aerts, G; Smits, T

    2015-12-01

    Previous studies have demonstrated that portion sizes and food energy-density influence children's eating behavior. However, the potential effects of front-of-pack image-sizes of serving suggestions and sugar content have not been tested. Using a mixed experimental design among young children, this study examines the effects of image-size manipulation and sugar content on cereal and milk consumption. Children poured and consumed significantly more cereal and drank significantly more milk when exposed to a larger sized image of serving suggestion as compared to a smaller image-size. Sugar content showed no main effects. Nevertheless, cereal consumption only differed significantly between small and large image-sizes when sugar content was low. An advantage of this study was the mundane setting in which the data were collected: a school's dining room instead of an artificial lab. Future studies should include a control condition, with children eating by themselves to reflect an even more natural context. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Iris image recognition wavelet filter-banks based iris feature extraction schemes

    CERN Document Server

    Rahulkar, Amol D

    2014-01-01

    This book provides new results on wavelet filter-bank-based feature extraction, and on the classifier, in the field of iris image recognition. It provides a broad treatment of the design of separable and non-separable wavelet filter banks, and of the classifier. The design techniques presented in the book are applied to iris image analysis for person authentication. The book also brings together three strands of research (wavelets, iris image analysis, and classifiers) and compares the performance of the presented techniques with state-of-the-art available schemes. It contains a compilation of basic material on the design of wavelets that avoids the need to read many different books, and therefore provides an easier path for newcomers and researchers to master the contents. In addition, the designed filter banks and classifier can be used more effectively than existing filter banks in many signal processing applications like pattern classification, data compression, watermarking, denoising, etc.  that will...

  11. Framing Service, Benefit, and Credibility Through Images and Texts: A Content Analysis of Online Promotional Messages of Korean Medical Tourism Industry.

    Science.gov (United States)

    Jun, Jungmi

    2016-07-01

    This study examines how the Korean medical tourism industry frames its service, benefit, and credibility issues through texts and images of online brochures. The results of content analysis suggest that the Korean medical tourism industry attempts to frame their medical/health services as "excellence in surgeries and cancer care" and "advanced health technology and facilities." However, the use of cost-saving appeals was limited, which can be seen as a strategy to avoid consumers' association of lower cost with lower quality services, and to stress safety and credibility.

  12. Three-dimensional spatiotemporal features for fast content-based retrieval of focal liver lesions.

    Science.gov (United States)

    Roy, Sharmili; Chi, Yanling; Liu, Jimin; Venkatesh, Sudhakar K; Brown, Michael S

    2014-11-01

    Content-based image retrieval systems for 3-D medical datasets still largely rely on 2-D image-based features extracted from a few representative slices of the image stack. Most 2-D features that are currently used in the literature not only model a 3-D tumor incompletely but are also highly expensive in terms of computation time, especially for high-resolution datasets. Radiologist-specified semantic labels are sometimes used along with image-based 2-D features to improve the retrieval performance. Since radiological labels show large inter-user variability, are often unstructured, and require user interaction, their use as lesion-characterizing features is highly subjective, tedious, and slow. In this paper, we propose a 3-D image-based spatiotemporal feature extraction framework for fast content-based retrieval of focal liver lesions. All the features are computer generated and are extracted from four-phase abdominal CT images. Retrieval performance and query processing times for the proposed framework are evaluated on a database of 44 hepatic lesions comprising five pathological types. A bull's eye percentage score above 85% is achieved for three out of the five lesion pathologies, and for 98% of query lesions at least one lesion of the same type is ranked among the top two retrieved results. Experiments show that the proposed system's query processing is more than 20 times faster than that of other already published systems that use 2-D features. With fast computation time and high retrieval accuracy, the proposed system has the potential to be used as an assistant to radiologists for routine hepatic tumor diagnosis.

  13. High content analysis of differentiation and cell death in human adipocytes.

    Science.gov (United States)

    Doan-Xuan, Quang Minh; Sarvari, Anitta K; Fischer-Posovszky, Pamela; Wabitsch, Martin; Balajthy, Zoltan; Fesus, Laszlo; Bacso, Zsolt

    2013-10-01

    Understanding adipocyte biology and its homeostasis is a focus of current obesity research. We aimed to introduce a high-content analysis procedure for directly visualizing and quantifying adipogenesis and adipoapoptosis by laser scanning cytometry (LSC) in a large population of cells. Slide-based image cytometry and image processing algorithms were used and optimized for high-throughput analysis of differentiating cells and apoptotic processes in cell culture at high confluence. Both preadipocytes and adipocytes were simultaneously scrutinized for lipid accumulation, texture properties, nuclear condensation, and DNA fragmentation. Adipocyte commitment was found after incubation in adipogenic medium for 3 days, identified by lipid droplet formation and increased light absorption, while terminal differentiation of adipocytes occurred throughout days 9-14 with characteristic nuclear shrinkage, eccentric nuclear localization, chromatin condensation, and massive lipid deposition. Preadipocytes were shown to be more prone to tumor necrosis factor alpha (TNFα)-induced apoptosis than mature adipocytes. Importantly, spontaneous DNA fragmentation was observed at the early stage when adipocyte commitment occurs. This DNA damage was independent of either spontaneous or induced apoptosis and probably was part of the differentiation program. © 2013 International Society for Advancement of Cytometry.

  14. An automated, high-throughput plant phenotyping system using machine learning-based plant segmentation and image analysis.

    Science.gov (United States)

    Lee, Unseok; Chang, Sungyul; Putra, Gian Anantrio; Kim, Hyoungseok; Kim, Dong Hwan

    2018-01-01

    A high-throughput plant phenotyping system automatically observes and grows many plant samples. Many plant sample images are acquired by the system to determine the characteristics of the plants (populations). Stable image acquisition and processing is very important to accurately determine the characteristics. However, hardware for acquiring plant images rapidly and stably, while minimizing plant stress, is lacking. Moreover, most software cannot adequately handle large-scale plant imaging. To address these problems, we developed a new, automated, high-throughput plant phenotyping system using simple and robust hardware, and an automated plant-imaging-analysis pipeline consisting of machine-learning-based plant segmentation. Our hardware acquires images reliably and quickly and minimizes plant stress. Furthermore, the images are processed automatically. In particular, large-scale plant-image datasets can be segmented precisely using a classifier developed using a superpixel-based machine-learning algorithm (Random Forest), and variations in plant parameters (such as area) over time can be assessed using the segmented images. We performed comparative evaluations to identify an appropriate learning algorithm for our proposed system, and tested three robust learning algorithms. We developed not only an automatic analysis pipeline but also a convenient means of plant-growth analysis that provides a learning data interface and visualization of plant growth trends. Thus, our system allows end-users such as plant biologists to analyze plant growth via large-scale plant image data easily.
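
    A minimal sketch of such a superpixel-plus-Random-Forest segmentation stage (the library choices, the mean-RGB feature and the 1-equals-plant label convention are our assumptions; the paper names the algorithm, not these APIs, and clf is presumed trained on labelled superpixels beforehand):

        import numpy as np
        from skimage.segmentation import slic
        from sklearn.ensemble import RandomForestClassifier

        def segment_plant(rgb, clf):
            # clf: a trained RandomForestClassifier over per-superpixel features.
            # Over-segment the image into superpixels, then classify each one.
            segments = slic(rgb, n_segments=800, compactness=10)
            ids = np.unique(segments)
            feats = [rgb[segments == sid].mean(axis=0) for sid in ids]  # mean RGB
            pred = clf.predict(np.array(feats))       # 1 = plant, 0 = background
            return np.isin(segments, ids[pred == 1])  # boolean plant mask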

  15. Mesh Processing in Medical Image Analysis

    DEFF Research Database (Denmark)

    The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation....

  16. Stochastic geometry for image analysis

    CERN Document Server

    Descombes, Xavier

    2013-01-01

    This book develops the stochastic geometry framework for image analysis purpose. Two main frameworks are  described: marked point process and random closed sets models. We derive the main issues for defining an appropriate model. The algorithms for sampling and optimizing the models as well as for estimating parameters are reviewed.  Numerous applications, covering remote sensing images, biological and medical imaging, are detailed.  This book provides all the necessary tools for developing an image analysis application based on modern stochastic modeling.

  17. A survey of MRI-based medical image analysis for brain tumor studies

    Science.gov (United States)

    Bauer, Stefan; Wiest, Roland; Nolte, Lutz-P.; Reyes, Mauricio

    2013-07-01

    MRI-based medical image analysis for brain tumor studies is gaining attention in recent times due to an increased need for efficient and objective evaluation of large amounts of data. While the pioneering approaches applying automated methods for the analysis of brain tumor images date back almost two decades, the current methods are becoming more mature and coming closer to routine clinical application. This review aims to provide a comprehensive overview by giving a brief introduction to brain tumors and imaging of brain tumors first. Then, we review the state of the art in segmentation, registration and modeling related to tumor-bearing brain images with a focus on gliomas. The objective in the segmentation is outlining the tumor including its sub-compartments and surrounding tissues, while the main challenge in registration and modeling is the handling of morphological changes caused by the tumor. The qualities of different approaches are discussed with a focus on methods that can be applied on standard clinical imaging protocols. Finally, a critical assessment of the current state is performed and future developments and trends are addressed, giving special attention to recent developments in radiological tumor assessment guidelines.

  18. A survey of MRI-based medical image analysis for brain tumor studies

    International Nuclear Information System (INIS)

    Bauer, Stefan; Nolte, Lutz-P; Reyes, Mauricio; Wiest, Roland

    2013-01-01

    MRI-based medical image analysis for brain tumor studies is gaining attention in recent times due to an increased need for efficient and objective evaluation of large amounts of data. While the pioneering approaches applying automated methods for the analysis of brain tumor images date back almost two decades, the current methods are becoming more mature and coming closer to routine clinical application. This review aims to provide a comprehensive overview by giving a brief introduction to brain tumors and imaging of brain tumors first. Then, we review the state of the art in segmentation, registration and modeling related to tumor-bearing brain images with a focus on gliomas. The objective in the segmentation is outlining the tumor including its sub-compartments and surrounding tissues, while the main challenge in registration and modeling is the handling of morphological changes caused by the tumor. The qualities of different approaches are discussed with a focus on methods that can be applied on standard clinical imaging protocols. Finally, a critical assessment of the current state is performed and future developments and trends are addressed, giving special attention to recent developments in radiological tumor assessment guidelines. (topical review)

  19. Nuclear physical express analysis of solid fuel sulphur content

    International Nuclear Information System (INIS)

    Pak, Yu.; Ponomaryova, M

    2005-01-01

    Full text: Sulphur content is an important qualitative coal parameter. The problem of determining coal sulphur content remains one of the most important, both in Kazakhstan and in other coal-mining countries. The traditional method of sampling, the final stage of which is chemical analysis of the coal for sulphur, is characterized by high labour intensity and low productivity; it is therefore ineffective for mass express analytical quality control and for the control of coal processing technological schemes. It is thus urgent to develop a method for determining coal sulphur content on the basis of series-produced nuclear-geophysical equipment with an isotope source of primary radiation, which allows the representativity of the analysis to be increased while the variability of the real coal composition is taken into account as far as possible. To solve this problem it is necessary to study the main laws of the X-ray-radiometric method as applied to coal quality analysis, in order to work out instrumental methods for rapidly determining coal sulphur content with an accuracy satisfactory for technological tasks; to determine the laws relating the flows of characteristic X-rays and scattered radiation to the sulphur content of coal of various real compositions; and to optimize the methodical and hardware parameters providing a minimal error of sulphur content control. On the basis of studying the laws governing the components of the real coal composition and their interconnections with sulphur content, the expediency of using the hardware functions of calcium and iron to control coal sulphur content has been substantiated; a model has been suggested for estimating the methodical error of coal sulphur content determination on the basis of data about sensitivity to sulphur and affecting factors, using limiting methods of coal component substitution, which allows the sulphur control parameters to be optimized; and an algorithm of X-ray-radiometric control of sulphur content has been worked out, based on sequentially irradiating the analyzed coal with gamma-radiation of

  20. Nephrus: expert system model in intelligent multilayers for evaluation of urinary system based on scintigraphic image analysis

    International Nuclear Information System (INIS)

    Silva, Jorge Wagner Esteves da; Schirru, Roberto; Boasquevisque, Edson Mendes

    1999-01-01

    Renal function can be measured noninvasively with radionuclides in an extremely safe way compared to other diagnostic techniques. Nevertheless, because radioactive materials are used in this procedure, it is necessary to maximize its benefits, so all efforts are justified in developing data analysis support tools for this diagnostic modality. The objective of this work is to develop a prototype for a system model, based on Artificial Intelligence devices, able to perform functions related to scintigraphic image analysis of the urinary system. Rules used by medical experts in the analysis of images obtained with 99mTc+DTPA and/or 99mTc+DMSA were modeled, and a Neural Network diagnosis technique was implemented. Special attention was given to the design of the program's user interface: Human Factors Engineering techniques were taken into account, allowing friendliness and robustness. The image segmentation adopts a model based on Ideal ROIs, which represent the normal anatomic concept for urinary system organs. Results obtained using Artificial Neural Networks for qualitative image analysis, together with the knowledge model constructed, show the feasibility of an Artificial Intelligence implementation that uses the inherent abilities of each technique in medical diagnostic image analysis. (author)

  1. Information granules in image histogram analysis.

    Science.gov (United States)

    Wieclawek, Wojciech

    2018-04-01

    A concept of granular computing employed in intensity-based image enhancement is discussed. First, a weighted granular computing idea is introduced. Then, the implementation of this concept in the image processing area is presented. Finally, multidimensional granular histogram analysis is introduced. The proposed approach is dedicated to digital images, especially to medical images acquired by Computed Tomography (CT). Like the histogram equalization approach, this method is based on image histogram analysis. Yet, unlike the histogram equalization technique, it works on a selected range of pixel intensities and is controlled by two parameters. Performance is tested on anonymized clinical CT series. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Classification in Medical Imaging

    DEFF Research Database (Denmark)

    Chen, Chen

    Classification is extensively used in the context of medical image analysis for the purpose of diagnosis or prognosis. In order to classify image content correctly, one needs to extract efficient features with discriminative properties and build classifiers based on these features. In addition...... on characterizing human faces and emphysema disease in lung CT images....

  3. Anti-cancer agents in Saudi Arabian herbals revealed by automated high-content imaging

    KAUST Repository

    Hajjar, Dina

    2017-06-13

    Natural products have been used for medical applications since ancient times. Commonly, natural products are structurally complex chemical compounds that efficiently interact with their biological targets, making them useful drug candidates in cancer therapy. Here, we used cell-based phenotypic profiling and image-based high-content screening to study the mode of action and potential cellular targets of plants historically used in Saudi Arabia's traditional medicine. We compared the cytological profiles of fractions taken from Juniperus phoenicea (Arar), Anastatica hierochuntica (Kaff Maryam), and Citrullus colocynthis (Hanzal) with a set of reference compounds with established modes of action. Cluster analyses of the cytological profiles of the tested compounds suggested that these plants contain possible topoisomerase inhibitors that could be effective in cancer treatment. Using histone H2AX phosphorylation as a marker for DNA damage, we discovered that some of the compounds induced double-strand DNA breaks. Furthermore, chemical analysis of the active fraction isolated from Juniperus phoenicea revealed possible anti-cancer compounds. Our results demonstrate the usefulness of cell-based phenotypic screening of natural products to reveal their biological activities.

  4. Quantitative Assessment of Pap Smear Cells by PC-Based Cytopathologic Image Analysis System and Support Vector Machine

    Science.gov (United States)

    Huang, Po-Chi; Chan, Yung-Kuan; Chan, Po-Chou; Chen, Yung-Fu; Chen, Rung-Ching; Huang, Yu-Ruei

    Cytologic screening has been widely used for controlling the prevalence of cervical cancer. Errors from sampling, screening and interpretation still conceal some unpleasant results. This study aims at designing a cellular image analysis system based on feasible and available software and hardware for a routine cytologic laboratory. A total of 1814 cellular images from liquid-based cervical smears with Papanicolaou stain at 100x, 200x, and 400x magnification were captured by a digital camera. The cell images were reviewed by pathology experts with peer agreement, and only 503 images were selected for further study. The images were divided into 4 diagnostic categories. A PC-based cellular image analysis system (PCCIA) was developed for computing morphometric parameters. A support vector machine (SVM) was then used to classify signature patterns. The results show that the selected 13 morphometric parameters can be used to correctly differentiate the dysplastic cells from the normal cells (p < 0.05) in gynecologic cytologic specimens.
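
    As an illustration of the classification stage (scikit-learn is our assumption; the paper does not name an implementation), the 13 morphometric parameters per cell can feed an RBF-kernel SVM over the four diagnostic categories:

        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def train_cell_classifier(X, y):
            # X: (n_cells, 13) morphometric features; y: diagnostic category labels.
            X_tr, X_te, y_tr, y_te = train_test_split(
                X, y, test_size=0.3, stratify=y, random_state=0)
            model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
            model.fit(X_tr, y_tr)
            return model, model.score(X_te, y_te)   # held-out accuracy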

  5. Algorithm for image retrieval based on edge gradient orientation statistical code.

    Science.gov (United States)

    Zeng, Jiexian; Zhao, Yonggang; Li, Weiye; Fu, Xiang

    2014-01-01

    The image edge gradient direction not only contains important shape information but is also simple and of low computational complexity. Considering that edge gradient direction histograms and the edge direction autocorrelogram lack rotation invariance, we put forward an image retrieval algorithm based on an edge gradient orientation statistical code (hereinafter referred to as EGOSC), by carrying the statistical treatment applied to eight-neighborhood chain-code edge directions over to the statistics of the edge gradient direction. First, we construct the n-direction vector and impose a maximal-summation restriction on the EGOSC to ensure that the algorithm is effectively rotation invariant. Then, we use the Euclidean distance of the edge gradient direction entropy to measure shape similarity, so that the method is not sensitive to scaling, color, and illumination changes. The experimental results and the algorithm analysis demonstrate that the algorithm can be used for content-based image retrieval and achieves good retrieval results.
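
    One hedged reading of this rotation-invariance device (the abstract does not fully specify the coding, so the edge threshold and the cyclic-shift normalization below are our interpretation rather than the authors' exact scheme): quantize the gradient directions of edge pixels into n bins, then rotate the histogram so that the dominant bin leads:

        import numpy as np
        from scipy import ndimage as ndi

        def egosc_like(gray, n_bins=8):
            gx = ndi.sobel(gray.astype(float), axis=1)
            gy = ndi.sobel(gray.astype(float), axis=0)
            mag = np.hypot(gx, gy)
            edges = mag > mag.mean() + mag.std()       # crude edge selection
            theta = np.arctan2(gy[edges], gx[edges]) % (2 * np.pi)
            hist, _ = np.histogram(theta, bins=n_bins, range=(0, 2 * np.pi))
            shift = int(np.argmax(hist))               # dominant direction bin
            code = np.roll(hist, -shift)               # rotation-normalized code
            return code / max(code.sum(), 1)           # normalized histogram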

  6. [Present status and trend of heart fluid mechanics research based on medical image analysis].

    Science.gov (United States)

    Gan, Jianhong; Yin, Lixue; Xie, Shenghua; Li, Wenhua; Lu, Jing; Luo, Anguo

    2014-06-01

    After introducing the current main methods for heart fluid mechanics research, we studied the characteristics and weaknesses of the three primary analysis methods, based on magnetic resonance imaging, color Doppler ultrasound and grayscale ultrasound images, respectively. It is pointed out that particle image velocimetry (PIV), speckle tracking and block matching are of the same nature: all three algorithms adopt block correlation. Further analysis shows that, with the development of information technology and sensors, future research on cardiac function and heart fluid mechanics will focus on the energy transfer process of the heart fluid, the characteristics of the chamber wall in relation to the blood flow, and fluid-structure interaction.

  7. How Content Analysis may Complement and Extend the Insights of Discourse Analysis

    Directory of Open Access Journals (Sweden)

    Tracey Feltham-King

    2016-02-01

    Full Text Available Although discourse analysis is a well-established qualitative research methodology, little attention has been paid to how discourse analysis may be enhanced through careful supplementation with the quantification allowed in content analysis. In this article, we report on a research study that involved the use of both Foucauldian discourse analysis (FDA) and directed content analysis based on social constructionist theory and our qualitative research findings. The research focused on the discourses deployed, and the ways in which women were discursively positioned, in relation to abortion in 300 newspaper articles, published in 25 national and regional South African newspapers over 28 years, from 1978 to 2005. While the FDA was able to illuminate the constitutive network of power relations constructing women as subjects of a particular kind, questions emerged that were beyond the scope of the FDA. These questions concerned understanding the relative weightings of various discourses and tracing historical changes in the deployment of these discourses. In this article, we show how the decision to combine FDA and content analysis affected our sampling methodology. Using specific examples, we illustrate the contribution of the FDA to the study. Then, we indicate how subject positioning formed the link between the FDA and the content analysis. Drawing on the same examples, we demonstrate how the content analysis supplemented the FDA through tracking changes over time and providing empirical evidence of the extent to which subject positionings were deployed.

  8. Enriching Discovery Layers: A Product Comparison of Content Enrichment Services Syndetic Solutions and Content Café 2

    Directory of Open Access Journals (Sweden)

    Allison DaSilva

    2014-10-01

    Full Text Available A comparative analysis of content enrichment services, Syndetic Solutions and Content Café 2, was undertaken to explore which service would provide public library users with a superior online search and discovery experience through the enriched data elements offered, specifically looking at the cover image data element and exploring what factors impact the display of said element. A data-set of 250 items in five different formats, including books, CDs, DVDs, e-books, and video games, was searched in four North American public libraries' discovery layers to compare the integration, extent, and quality of the cover image data element supplied by Syndetic Solutions and Content Café 2. Based on an analysis of the URLs, ISBNs, and UPCs for each of the 250 items, it was determined that the integration, and therefore the display, of the cover image data element was impacted by: (1) whether or not an ISBN or UPC was listed in the MARC bibliographic record; (2) which ISBN or UPC was listed, as items could potentially have more than one; (3) the inclusion of both ISBNs and UPCs in the record and the settings of the discovery tool; (4) the order in which the ISBNs or UPCs were listed in the record; and (5) whether or not Syndetic Solutions or Content Café 2 had the image data in its database at the time of the search. The quality of the cover image displayed was found to be impacted by the size requested by the library and the size of the image provided by the publisher. These findings may also have implications for the integration of other enriched data elements.

  9. Histogram analysis of T2*-based pharmacokinetic imaging in cerebral glioma grading.

    Science.gov (United States)

    Liu, Hua-Shan; Chiang, Shih-Wei; Chung, Hsiao-Wen; Tsai, Ping-Huei; Hsu, Fei-Ting; Cho, Nai-Yu; Wang, Chao-Ying; Chou, Ming-Chung; Chen, Cheng-Yu

    2018-03-01

    To investigate the feasibility of histogram analysis of the T2*-based permeability parameter volume transfer constant (Ktrans) for glioma grading and to explore the diagnostic performance of the histogram analysis of Ktrans and blood plasma volume (vp). We recruited 31 and 11 patients with high- and low-grade gliomas, respectively. The histogram parameters of Ktrans and vp, derived from the first-pass pharmacokinetic modeling based on the T2* dynamic susceptibility-weighted contrast-enhanced perfusion-weighted magnetic resonance imaging (T2* DSC-PW-MRI) from the entire tumor volume, were evaluated for differentiating glioma grades. Histogram parameters of Ktrans and vp showed significant differences between high- and low-grade gliomas and exhibited significant correlations with tumor grades. The mean Ktrans derived from the T2* DSC-PW-MRI had the highest sensitivity and specificity for differentiating high-grade gliomas from low-grade gliomas compared with other histogram parameters of Ktrans and vp. Histogram analysis of T2*-based pharmacokinetic imaging is useful for cerebral glioma grading. The histogram parameters of the entire-tumor Ktrans measurement can provide increased accuracy with additional information regarding microvascular permeability changes for identifying high-grade brain tumors. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Segmental Quantitative MR Imaging analysis of diurnal variation of water content in the lumbar intervertebral discs

    International Nuclear Information System (INIS)

    Zhu, Ting Ting; Ai, Tao; Zhang, Wei; Li, Tao; Li, Xiao Ming

    2015-01-01

    To investigate the changes in water content in the lumbar intervertebral discs by quantitative T2 MR imaging in the morning after bed rest and in the evening after a diurnal load. Twenty healthy volunteers were separately examined in the morning after bed rest and in the evening after finishing daily work. T2-mapping images were obtained and analyzed. An equally-sized rectangular region of interest (ROI) was manually placed in both the anterior and the posterior annulus fibrosus (AF), in the outermost 20% of the disc. Three ROIs were placed in the space defined as the nucleus pulposus (NP). Repeated-measures analysis of variance and paired 2-tailed t tests were used for statistical analysis, with p < 0.05 as significantly different. T2 values significantly decreased from morning to evening in the NP (anterior NP = -13.9 ms; central NP = -17.0 ms; posterior NP = -13.3 ms; all p < 0.001). Meanwhile, T2 values significantly increased in the anterior AF (+2.9 ms; p = 0.025) and the posterior AF (+5.9 ms; p < 0.001). T2 values in the posterior AF showed the largest degree of variation among the 5 ROIs, but without statistical significance (p = 0.414). Discs with initially low T2 values in the central NP showed a smaller degree of variation in the anterior NP and in the central NP than discs with initially high T2 values in the central NP (10.0% vs. 16.1%, p = 0.037; 6.4% vs. 16.1%, p = 0.006, respectively). Segmental quantitative T2 MRI provides valuable insights into physiological aspects of normal discs.

  11. Digital image analysis

    DEFF Research Database (Denmark)

    Riber-Hansen, Rikke; Vainer, Ben; Steiniche, Torben

    2012-01-01

    Digital image analysis (DIA) is increasingly implemented in histopathological research to facilitate truly quantitative measurements, decrease inter-observer variation and reduce hands-on time. Originally, efforts were made to enable DIA to reproduce manually obtained results on histological slides...... reproducibility, application of stereology-based quantitative measurements, time consumption, optimization of histological slides, regions of interest selection and recent developments in staining and imaging techniques....

  12. Representations of Codeine Misuse on Instagram: Content Analysis.

    Science.gov (United States)

    Cherian, Roy; Westbrook, Marisa; Ramo, Danielle; Sarkar, Urmimala

    2018-03-20

    Prescription opioid misuse has doubled over the past 10 years and is now a public health epidemic. Analysis of social media data may provide additional insights into opioid misuse to supplement the traditional approaches of data collection (eg, self-report on surveys). The aim of this study was to characterize representations of codeine misuse through analysis of public posts on Instagram to understand text phrases related to misuse. We identified hashtags and searchable text phrases associated with codeine misuse by analyzing 1156 sequential Instagram posts over the course of 2 weeks from May 2016 to July 2016. Content analysis of posts associated with these hashtags identified the most common themes arising in images, as well as culture around misuse, including how misuse is happening and being perpetuated through social media. A majority of images (50/100; 50.0%) depicted codeine in its commonly misused form, combined with soda (lean). Codeine misuse was commonly represented with the ingestion of alcohol, cannabis, and benzodiazepines. Some images highlighted the previously noted affinity between codeine misuse and hip-hop culture or mainstream popular culture images. The prevalence of codeine misuse images, glamorizing of ingestion with soda and alcohol, and their integration with mainstream, popular culture imagery holds the potential to normalize and increase codeine misuse and overdose. To reduce harm and prevent misuse, immediate public health efforts are needed to better understand the relationship between the potential normalization, ritualization, and commercialization of codeine misuse. ©Roy Cherian, Marisa Westbrook, Danielle Ramo, Urmimala Sarkar. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 20.03.2018.

  13. Web content adaptation for mobile device: A fuzzy-based approach

    Directory of Open Access Journals (Sweden)

    Frank C.C. Wu

    2012-03-01

    Full Text Available While HTML will continue to be used to develop Web content, how to effectively and efficiently transform HTML-based content automatically into formats suitable for mobile devices remains a challenge. In this paper, we introduce a concept of coherence set and propose an algorithm to automatically identify and detect coherence sets based on quantified similarity between adjacent presentation groups. Experimental results demonstrate that our method enhances Web content analysis and adaptation on the mobile Internet.
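
    The abstract specifies the algorithm only at the level of merging adjacent presentation groups whose quantified similarity passes a threshold. A hypothetical sketch of that greedy merge (the style-attribute similarity and the 0.66 threshold are invented for illustration; the paper's actual features and weighting are not given in this record):

        from typing import Dict, List

        def similarity(a: Dict, b: Dict) -> float:
            # Hypothetical similarity: fraction of matching presentation
            # attributes between two adjacent groups.
            keys = ("tag", "font", "width")
            return sum(a[k] == b[k] for k in keys) / len(keys)

        def coherence_sets(groups: List[Dict], threshold: float = 0.66) -> List[List[Dict]]:
            # Greedily extend the current coherence set while adjacent
            # groups stay similar enough; start a new set otherwise.
            sets, current = [], [groups[0]]
            for prev, cur in zip(groups, groups[1:]):
                if similarity(prev, cur) >= threshold:
                    current.append(cur)
                else:
                    sets.append(current)
                    current = [cur]
            sets.append(current)
            return sets

        blocks = [{"tag": "p", "font": 12, "width": 320},
                  {"tag": "p", "font": 12, "width": 320},
                  {"tag": "h2", "font": 18, "width": 320}]
        print([len(s) for s in coherence_sets(blocks)])  # -> [2, 1]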

  14. Primena satelitskih snimaka za dopunu sadržaja topografskih karata / An application of satellite images for improving the content of topographic maps

    Directory of Open Access Journals (Sweden)

    Miodrag D. Regodić

    2010-10-01

    necessary to improve its quality and emphasize the data to be collected. Image enhancement can be radiometric (contrast enhancement, histogram stretching and equalization, brightness optimization), spatial (sharpening, averaging, emphasizing line elements and edges) or spectral (principal components analysis). Image transformation: by various image transformations we obtain a 'new' image from the raw one, emphasizing certain phenomena and attributes; transformation is an important step in digital processing, but for geodesy purposes, i.e. mapping, this phase is not of essential importance. Geometric image correction: preliminary processing of remote sensing imagery is based on the analysis of radiometric and geometric characteristics; by understanding these image attributes it is possible to correct image deformations and improve quality and readability. Within this experiment, geometric deformations and a way of eliminating them were processed. Digital image analysis and interpretation: a very important phase of satellite image processing is image analysis and interpretation, which can be visual or computer-aided; for both, the image resolution and the possibility of recognizing image content are important. The analysis is a procedure of determining differences in characteristics and extracting objects or terrain areas based on their individual attributes. Visual image analysis and interpretation include the registration and recognition of various useful data by eyesight; computer-aided analysis and photo-interpretation (data conversion, geometric and radiometric image correction) were done on a digital image. A visual or logical analysis was done by observing images, noting the differences and

  15. Using Sentinel-1 and Landsat 8 satellite images to estimate surface soil moisture content.

    Science.gov (United States)

    Mexis, Philippos-Dimitrios; Alexakis, Dimitrios D.; Daliakopoulos, Ioannis N.; Tsanis, Ioannis K.

    2016-04-01

    The potential for more accurate assessment of Soil Moisture (SM) content with Earth Observation (EO) technology has emerged through synergistic approaches among a variety of EO instruments. This study is the first to investigate the potential of Synthetic Aperture Radar (SAR) (Sentinel-1) and optical (Landsat 8) images in combination with ground measurements to estimate volumetric SM content in support of water management and agricultural practices. SAR and optical data are downloaded and corrected for atmospheric, geometric and radiometric effects. SAR images are also corrected for roughness and vegetation through the synergistic use of the Oh and Topp models, using a dataset consisting of backscattering coefficients and corresponding direct measurements of ground parameters (moisture, roughness). Subsequently, various vegetation indices (NDVI, SAVI, MSAVI, EVI, etc.) are estimated to record the vegetation regime within the study area over time and to serve as auxiliary data in the final modeling. Furthermore, thermal images from the optical data are corrected and incorporated into the overall approach. The basic principle of the Thermal InfraRed (TIR) method is that Land Surface Temperature (LST) is sensitive to surface SM content due to its impact on the surface heating process (heat capacity and thermal conductivity) under bare soil or sparse vegetation cover conditions. Ground truth data are collected from a Time-Domain Reflectometry (TDR) gauge network established in western Crete, Greece, during 2015. Algorithms based on Artificial Neural Networks (ANNs) and Multiple Linear Regression (MLR) are used to explore the statistical relationship between backscattering measurements and SM content. Results highlight the potential of SAR and optical satellite images to contribute to effective SM content detection in support of water resources management and precision agriculture. Keywords: Sentinel-1, Landsat 8, Soil
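
    A minimal sketch of the MLR branch of the modeling described above (the feature choice, value ranges and coefficients below are synthetic assumptions, not the study's data):

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        n = 60
        # Hypothetical per-site features: Sentinel-1 backscatter (dB),
        # Landsat 8 NDVI, and land surface temperature (K).
        X = np.column_stack([
            rng.uniform(-18, -6, n),   # sigma0_vv
            rng.uniform(0.1, 0.7, n),  # NDVI
            rng.uniform(285, 315, n),  # LST
        ])
        # Synthetic TDR-measured volumetric soil moisture (m^3/m^3).
        sm = (0.02 * (X[:, 0] + 18) + 0.1 * X[:, 1]
              - 0.001 * (X[:, 2] - 285) + rng.normal(0, 0.02, n))

        model = LinearRegression().fit(X, sm)
        print("R^2 on training data:", round(model.score(X, sm), 2))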

  16. ESTIMATION OF MELANIN CONTENT IN IRIS OF HUMAN EYE

    Directory of Open Access Journals (Sweden)

    E. A. Genina

    2008-12-01

    Full Text Available Based on the experimental data obtained in vivo from digital analysis of color images of human irises, the mean melanin content in human eye irises has been estimated. For registration of color images the digital camera Olympus C-5060 has been used. The images have been obtained from irises of healthy volunteers as well as from irises of patients with open-angle glaucoma. The computer program has been developed for digital analysis of the images. The result has been useful for development of novel methods and optimization of already existing ones for non-invasive glaucoma diagnostics.

  17. Quantitative analysis of γ–oryzanol content in cold pressed rice bran oil by TLC–image analysis method

    Directory of Open Access Journals (Sweden)

    Apirak Sakunpak

    2014-02-01

    Conclusions: The TLC-densitometric and TLC-image analysis methods provided a similar reproducibility, accuracy and selectivity for the quantitative determination of γ-oryzanol in cold pressed rice bran oil. A statistical comparison of the quantitative determinations of γ-oryzanol in samples did not show any statistically significant difference between TLC-densitometric and TLC-image analysis methods. As both methods were found to be equal, they therefore can be used for the determination of γ-oryzanol in cold pressed rice bran oil.

  18. A Subdivision-Based Representation for Vector Image Editing.

    Science.gov (United States)

    Liao, Zicheng; Hoppe, Hugues; Forsyth, David; Yu, Yizhou

    2012-11-01

    Vector graphics has been employed in a wide variety of applications due to its scalability and editability. Editability is a high priority for artists and designers who wish to produce vector-based graphical content with user interaction. In this paper, we introduce a new vector image representation based on piecewise smooth subdivision surfaces, which is a simple, unified and flexible framework that supports a variety of operations, including shape editing, color editing, image stylization, and vector image processing. These operations effectively create novel vector graphics by reusing and altering existing image vectorization results. Because image vectorization yields an abstraction of the original raster image, controlling the level of detail of this abstraction is highly desirable. To this end, we design a feature-oriented vector image pyramid that offers multiple levels of abstraction simultaneously. Our new vector image representation can be rasterized efficiently using GPU-accelerated subdivision. Experiments indicate that our vector image representation achieves high visual quality and better supports editing operations than existing representations.

  19. HC StratoMineR: A Web-Based Tool for the Rapid Analysis of High-Content Datasets.

    Science.gov (United States)

    Omta, Wienand A; van Heesbeen, Roy G; Pagliero, Romina J; van der Velden, Lieke M; Lelieveld, Daphne; Nellen, Mehdi; Kramer, Maik; Yeong, Marley; Saeidi, Amir M; Medema, Rene H; Spruit, Marco; Brinkkemper, Sjaak; Klumperman, Judith; Egan, David A

    2016-10-01

    High-content screening (HCS) can generate large multidimensional datasets and when aligned with the appropriate data mining tools, it can yield valuable insights into the mechanism of action of bioactive molecules. However, easy-to-use data mining tools are not widely available, with the result that these datasets are frequently underutilized. Here, we present HC StratoMineR, a web-based tool for high-content data analysis. It is a decision-supportive platform that guides even non-expert users through a high-content data analysis workflow. HC StratoMineR is built by using My Structured Query Language for storage and querying, PHP: Hypertext Preprocessor as the main programming language, and jQuery for additional user interface functionality. R is used for statistical calculations, logic and data visualizations. Furthermore, C++ and graphical processor unit power is diffusely embedded in R by using the rcpp and rpud libraries for operations that are computationally highly intensive. We show that we can use HC StratoMineR for the analysis of multivariate data from a high-content siRNA knock-down screen and a small-molecule screen. It can be used to rapidly filter out undesirable data; to select relevant data; and to perform quality control, data reduction, data exploration, morphological hit picking, and data clustering. Our results demonstrate that HC StratoMineR can be used to functionally categorize HCS hits and, thus, provide valuable information for hit prioritization.

  20. Computer-based quantitative computed tomography image analysis in idiopathic pulmonary fibrosis: A mini review.

    Science.gov (United States)

    Ohkubo, Hirotsugu; Nakagawa, Hiroaki; Niimi, Akio

    2018-01-01

    Idiopathic pulmonary fibrosis (IPF) is the most common type of progressive idiopathic interstitial pneumonia in adults. Many computer-based image analysis methods of chest computed tomography (CT) used in patients with IPF include the mean CT value of the whole lungs, density histogram analysis, density mask technique, and texture classification methods. Most of these methods offer good assessment of pulmonary functions, disease progression, and mortality. Each method has merits that can be used in clinical practice. One of the texture classification methods is reported to be superior to visual CT scoring by radiologist for correlation with pulmonary function and prediction of mortality. In this mini review, we summarize the current literature on computer-based CT image analysis of IPF and discuss its limitations and several future directions. Copyright © 2017 The Japanese Respiratory Society. Published by Elsevier B.V. All rights reserved.
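
    Several of the surveyed methods (mean CT value of the lungs, density histogram analysis, density mask) reduce to simple operations on the Hounsfield-unit values inside a lung mask. A minimal sketch with synthetic values (the -700 to -250 HU window is an illustrative choice, not a threshold endorsed by the review):

        import numpy as np

        rng = np.random.default_rng(2)
        # Synthetic lung attenuation values in Hounsfield units (HU); a real
        # pipeline would take the voxels inside a segmented lung mask.
        lung_hu = rng.normal(-800, 120, size=100_000)

        mean_lung_value = lung_hu.mean()              # mean CT value of the lungs
        hist, edges = np.histogram(lung_hu, bins=64)  # density histogram analysis
        peak_hu = edges[hist.argmax()]                # histogram mode location
        # Density mask: fraction of voxels inside a HU window associated
        # with fibrotic change (window chosen here for illustration only).
        high_attenuation_pct = 100 * np.mean((lung_hu > -700) & (lung_hu < -250))
        print(f"mean {mean_lung_value:.0f} HU, mode near {peak_hu:.0f} HU, "
              f"high-attenuation {high_attenuation_pct:.1f}%")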

  1. Rapid analysis and exploration of fluorescence microscopy images.

    Science.gov (United States)

    Pavie, Benjamin; Rajaram, Satwik; Ouyang, Austin; Altschuler, Jason M; Steininger, Robert J; Wu, Lani F; Altschuler, Steven J

    2014-03-19

    Despite rapid advances in high-throughput microscopy, quantitative image-based assays still pose significant challenges. While a variety of specialized image analysis tools are available, most traditional image-analysis-based workflows have steep learning curves (for fine tuning of analysis parameters) and result in long turnaround times between imaging and analysis. In particular, cell segmentation, the process of identifying individual cells in an image, is a major bottleneck in this regard. Here we present an alternate, cell-segmentation-free workflow based on PhenoRipper, an open-source software platform designed for the rapid analysis and exploration of microscopy images. The pipeline presented here is optimized for immunofluorescence microscopy images of cell cultures and requires minimal user intervention. Within half an hour, PhenoRipper can analyze data from a typical 96-well experiment and generate image profiles. Users can then visually explore their data, perform quality control on their experiment, ensure response to perturbations and check reproducibility of replicates. This facilitates a rapid feedback cycle between analysis and experiment, which is crucial during assay optimization. This protocol is useful not just as a first pass analysis for quality control, but also may be used as an end-to-end solution, especially for screening. The workflow described here scales to large data sets such as those generated by high-throughput screens, and has been shown to group experimental conditions by phenotype accurately over a wide range of biological systems. The PhenoBrowser interface provides an intuitive framework to explore the phenotypic space and relate image properties to biological annotations. Taken together, the protocol described here will lower the barriers to adopting quantitative analysis of image based screens.

  2. Information-theoretical analysis of private content identification

    NARCIS (Netherlands)

    Voloshynovskiy, S.; Koval, O.; Beekhof, F.; Farhadzadeh, F.; Holotyak, T.

    2010-01-01

    In recent years, content identification based on digital fingerprinting attracts a lot of attention in different emerging applications. At the same time, the theoretical analysis of digital fingerprinting systems for finite length case remains an open issue. Additionally, privacy leaks caused by

  3. Adolescent judgment of sexual content on television: implications for future content analysis research.

    Science.gov (United States)

    Manganello, Jennifer A; Henderson, Vani R; Jordan, Amy; Trentacoste, Nicole; Martin, Suzanne; Hennessy, Michael; Fishbein, Martin

    2010-07-01

    Many studies of sexual messages in media utilize content analysis methods. At times, this research assumes that researchers and trained coders using content analysis methods and the intended audience view and interpret media content similarly. This article compares adolescents' perceptions of the presence or absence of sexual content on television to those of researchers using three different coding schemes. Results from this formative research study suggest that participants and researchers are most likely to agree with content categories assessing manifest content, and that differences exist among adolescents who view sexual messages on television. Researchers using content analysis methods to examine sexual content in media and media effects on sexual behavior should consider identifying how audience characteristics may affect interpretation of content and account for audience perspectives in content analysis study protocols when appropriate for study goals.

  4. Computer-Based Image Analysis for Plus Disease Diagnosis in Retinopathy of Prematurity: Performance of the "i-ROP" System and Image Features Associated With Expert Diagnosis.

    Science.gov (United States)

    Ataer-Cansizoglu, Esra; Bolon-Canedo, Veronica; Campbell, J Peter; Bozkurt, Alican; Erdogmus, Deniz; Kalpathy-Cramer, Jayashree; Patel, Samir; Jonas, Karyn; Chan, R V Paul; Ostmo, Susan; Chiang, Michael F

    2015-11-01

    We developed and evaluated the performance of a novel computer-based image analysis system for grading plus disease in retinopathy of prematurity (ROP), and identified the image features, shapes, and sizes that best correlate with expert diagnosis. A dataset of 77 wide-angle retinal images from infants screened for ROP was collected. A reference standard diagnosis was determined for each image by combining image grading from 3 experts with the clinical diagnosis from ophthalmoscopic examination. Manually segmented images were cropped into a range of shapes and sizes, and a computer algorithm was developed to extract tortuosity and dilation features from arteries and veins. Each feature was fed into our system to identify the set of characteristics that yielded the highest-performing system compared to the reference standard, which we refer to as the "i-ROP" system. Among the tested crop shapes, sizes, and measured features, point-based measurements of arterial and venous tortuosity (combined), and a large circular cropped image (with radius 6 times the disc diameter), provided the highest diagnostic accuracy. The i-ROP system achieved 95% accuracy for classifying preplus and plus disease compared to the reference standard. This was comparable to the performance of the 3 individual experts (96%, 94%, 92%), and significantly higher than the mean performance of 31 nonexperts (81%). This comprehensive analysis of computer-based plus disease suggests that it may be feasible to develop a fully-automated system based on wide-angle retinal images that performs comparably to expert graders at three-level plus disease discrimination. Computer-based image analysis, using objective and quantitative retinal vascular features, has potential to complement clinical ROP diagnosis by ophthalmologists.
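
    The tortuosity features at the heart of the system can be illustrated with the standard arc-to-chord ratio on a sampled vessel centerline (a common point-based definition; whether i-ROP uses exactly this variant is not stated in the record):

        import numpy as np

        def point_based_tortuosity(points: np.ndarray) -> float:
            # Arc length over chord length for a sampled centerline (N x 2).
            arc = np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))
            chord = np.linalg.norm(points[-1] - points[0])
            return arc / chord

        t = np.linspace(0, np.pi, 50)
        wavy_vessel = np.column_stack([t, 0.2 * np.sin(4 * t)])
        print(round(point_based_tortuosity(wavy_vessel), 3))  # > 1 for a tortuous path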

  5. UAV-based urban structural damage assessment using object-based image analysis and semantic reasoning

    Science.gov (United States)

    Fernandez Galarreta, J.; Kerle, N.; Gerke, M.

    2015-06-01

    Structural damage assessment is critical after disasters but remains a challenge. Many studies have explored the potential of remote sensing data, but limitations of vertical data persist. Oblique imagery has been identified as more useful, though the multi-angle imagery also adds a new dimension of complexity. This paper addresses damage assessment based on multi-perspective, overlapping, very high resolution oblique images obtained with unmanned aerial vehicles (UAVs). 3-D point-cloud assessment for the entire building is combined with detailed object-based image analysis (OBIA) of façades and roofs. This research focuses not on automatic damage assessment, but on creating a methodology that supports the often ambiguous classification of intermediate damage levels, aiming at producing comprehensive per-building damage scores. We identify completely damaged structures in the 3-D point cloud, and for all other cases provide the OBIA-based damage indicators to be used as auxiliary information by damage analysts. The results demonstrate the usability of the 3-D point-cloud data to identify major damage features. Also the UAV-derived and OBIA-processed oblique images are shown to be a suitable basis for the identification of detailed damage features on façades and roofs. Finally, we also demonstrate the possibility of aggregating the multi-perspective damage information at building level.

  6. Principal component analysis-based imaging angle determination for 3D motion monitoring using single-slice on-board imaging.

    Science.gov (United States)

    Chen, Ting; Zhang, Miao; Jabbour, Salma; Wang, Hesheng; Barbee, David; Das, Indra J; Yue, Ning

    2018-04-10

    Through-plane motion introduces uncertainty in three-dimensional (3D) motion monitoring when using single-slice on-board imaging (OBI) modalities such as cine MRI. We propose a principal component analysis (PCA)-based framework to determine the optimal imaging plane to minimize the through-plane motion for single-slice imaging-based motion monitoring. Four-dimensional computed tomography (4DCT) images of eight thoracic cancer patients were retrospectively analyzed. The target volumes were manually delineated at different respiratory phases of 4DCT. We performed automated image registration to establish the 4D respiratory target motion trajectories for all patients. PCA was conducted using the motion information to define the three principal components of the respiratory motion trajectories. Two imaging planes were determined perpendicular to the second and third principal component, respectively, to avoid imaging with the primary principal component of the through-plane motion. Single-slice images were reconstructed from 4DCT in the PCA-derived orthogonal imaging planes and were compared against the traditional AP/Lateral image pairs on through-plane motion, residual error in motion monitoring, absolute motion amplitude error and the similarity between target segmentations at different phases. We evaluated the significance of the proposed motion monitoring improvement using paired t test analysis. The PCA-determined imaging planes had overall less through-plane motion compared against the AP/Lateral image pairs. For all patients, the average through-plane motion was 3.6 mm (range: 1.6-5.6 mm) for the AP view and 1.7 mm (range: 0.6-2.7 mm) for the Lateral view. With PCA optimization, the average through-plane motion was 2.5 mm (range: 1.3-3.9 mm) and 0.6 mm (range: 0.2-1.5 mm) for the two imaging planes, respectively. The absolute residual error of the reconstructed max-exhale-to-inhale motion averaged 0.7 mm (range: 0.4-1.3 mm, 95% CI: 0.4-1.1 mm) using
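
    A minimal sketch of the PCA step on a 3D motion trajectory, with the two imaging-plane normals taken along the second and third principal components (the synthetic breathing trajectory below is an assumption for illustration):

        import numpy as np

        rng = np.random.default_rng(3)
        # Synthetic 3D target positions over a breathing cycle (N x 3, mm),
        # dominated by superior-inferior motion as is typical in the thorax.
        phases = np.linspace(0, 2 * np.pi, 10)
        traj = np.column_stack([1.0 * np.sin(phases),    # LR
                                2.0 * np.sin(phases),    # AP
                                6.0 * np.sin(phases)])   # SI
        traj += rng.normal(0, 0.1, traj.shape)

        centered = traj - traj.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        pc1, pc2, pc3 = vt  # principal components, largest variance first
        # Image in the two planes whose normals are PC2 and PC3, so that the
        # dominant motion component (PC1) stays in-plane for both views.
        print("plane normals:", np.round(pc2, 2), np.round(pc3, 2))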

  7. Estimation of physiological parameters using knowledge-based factor analysis of dynamic nuclear medicine image sequences

    International Nuclear Information System (INIS)

    Yap, J.T.; Chen, C.T.; Cooper, M.

    1995-01-01

    The authors have previously developed a knowledge-based method of factor analysis to analyze dynamic nuclear medicine image sequences. In this paper, the authors analyze dynamic PET cerebral glucose metabolism and neuroreceptor binding studies. These methods have shown the ability to reduce the dimensionality of the data, enhance the image quality of the sequence, and generate meaningful functional images and their corresponding physiological time functions. The new information produced by the factor analysis has now been used to improve the estimation of various physiological parameters. A principal component analysis (PCA) is first performed to identify statistically significant temporal variations and remove the uncorrelated variations (noise) due to Poisson counting statistics. The statistically significant principal components are then used to reconstruct a noise-reduced image sequence as well as provide an initial solution for the factor analysis. Prior knowledge such as the compartmental models or the requirement of positivity and simple structure can be used to constrain the analysis. These constraints are used to rotate the factors to the most physically and physiologically realistic solution. The final result is a small number of time functions (factors) representing the underlying physiological processes and their associated weighting images representing the spatial localization of these functions. Estimation of physiological parameters can then be performed using the noise-reduced image sequence generated from the statistically significant PCs and/or the final factor images and time functions. These results are compared to the parameter estimation using standard methods and the original raw image sequences. Graphical analysis was performed at the pixel level to generate comparable parametric images of the slope and intercept (influx constant and distribution volume)
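
    The noise-reduction step, reconstructing the sequence from only the statistically significant principal components, can be sketched with a truncated SVD (the frame-by-pixel layout, the synthetic tracer kinetics and the k = 2 cutoff are assumptions for illustration):

        import numpy as np

        rng = np.random.default_rng(4)
        # Synthetic dynamic sequence: T frames of P pixels (T x P), a smooth
        # time-activity pattern plus Poisson counting noise.
        T, P = 40, 400
        time_course = np.exp(-np.linspace(0, 2, T))[:, None]
        clean = 100 * time_course * rng.uniform(0.5, 1.5, (1, P))
        noisy = rng.poisson(clean).astype(float)

        # Keep only the leading principal components and reconstruct a
        # noise-reduced sequence (k = 2 stands in for a significance test).
        mean = noisy.mean(axis=0)
        u, s, vt = np.linalg.svd(noisy - mean, full_matrices=False)
        k = 2
        denoised = mean + (u[:, :k] * s[:k]) @ vt[:k]
        print("residual RMS:", round(np.sqrt(np.mean((denoised - clean) ** 2)), 2))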

  8. Density-based similarity measures for content based search

    Energy Technology Data Exchange (ETDEWEB)

    Hush, Don R [Los Alamos National Laboratory; Porter, Reid B [Los Alamos National Laboratory; Ruggiero, Christy E [Los Alamos National Laboratory

    2009-01-01

    We consider the query-by-multiple-example problem, where the goal is to identify database samples whose content is similar to a collection of query samples. To assess the similarity we use a relative content density, which quantifies the relative concentration of the query distribution with respect to the database distribution. If the database distribution is a mixture of the query distribution and a background distribution, then it can be shown that database samples whose relative content density is greater than a particular threshold ρ are more likely to have been generated by the query distribution than by the background distribution. We describe an algorithm for predicting samples with relative content density greater than ρ that is computationally efficient and possesses strong performance guarantees. We also show empirical results for applications in computer network monitoring and image segmentation.
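
    The density-ratio idea (though not the paper's efficient algorithm, which is not reproduced in this record) can be illustrated directly with kernel density estimates:

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(5)
        # Database: mixture of a background and a query-like component.
        background = rng.normal(0.0, 1.0, 900)
        query_like = rng.normal(3.0, 0.5, 100)
        database = np.concatenate([background, query_like])
        query = rng.normal(3.0, 0.5, 50)  # the collection of query samples

        # Relative content density: ratio of query density to database density.
        q_density = gaussian_kde(query)
        db_density = gaussian_kde(database)
        rho = q_density(database) / db_density(database)

        threshold = 2.0  # samples above this are attributed to the query distribution
        hits = database[rho > threshold]
        print(f"{hits.size} database samples flagged as query-like")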

  9. Vanillin inhibits translation and induces messenger ribonucleoprotein (mRNP) granule formation in Saccharomyces cerevisiae: application and validation of high-content, image-based profiling.

    Science.gov (United States)

    Iwaki, Aya; Ohnuki, Shinsuke; Suga, Yohei; Izawa, Shingo; Ohya, Yoshikazu

    2013-01-01

    Vanillin, generated by acid hydrolysis of lignocellulose, acts as a potent inhibitor of the growth of the yeast Saccharomyces cerevisiae. Here, we investigated the cellular processes affected by vanillin using high-content, image-based profiling. Among 4,718 non-essential yeast deletion mutants, the morphology of those defective in the large ribosomal subunit showed significant similarity to that of vanillin-treated cells. The defects in these mutants were clustered in three domains of the ribosome: the mRNA tunnel entrance, exit and backbone required for small subunit attachment. To confirm that vanillin inhibited ribosomal function, we assessed polysome and messenger ribonucleoprotein granule formation after treatment with vanillin. Analysis of polysome profiles showed disassembly of the polysomes in the presence of vanillin. Processing bodies and stress granules, which are composed of non-translating mRNAs and various proteins, were formed after treatment with vanillin. These results suggest that vanillin represses translation in yeast cells.

  10. Photoshop-based image analysis of canine articular cartilage after subchondral damage.

    Science.gov (United States)

    Lahm, A; Uhl, M; Lehr, H A; Ihling, C; Kreuz, P C; Haberstroh, J

    2004-09-01

    The validity of histopathological grading is a major problem in the assessment of articular cartilage. Calculating the cumulative strength of signal intensity of different stains gives information regarding the amount of proteoglycan, glycoproteins, etc. Using this system, we examined the medium-term effect of subchondral lesions on initially healthy articular cartilage. After cadaver studies, an animal model was created to produce pure subchondral damage without affecting the articular cartilage in 12 beagle dogs under MRI control. Quantification of the different stains was provided using a Photoshop-based image analysis (pixel analysis) with the histogram command 6 months after subchondral trauma. FLASH 3D sequences revealed intact cartilage after impact in all cases. The best detection of subchondral fractures was achieved with fat-suppressed TIRM sequences. Semiquantitative image analysis showed changes in proteoglycan and glycoprotein quantities in 9 of 12 samples that had not shown any evidence of damage during the initial examination. Correlation analysis showed a loss of the physiological distribution of proteoglycans and glycoproteins in the different zones of articular cartilage. Currently available software programs can be applied for comparative analysis of histologic stains of hyaline cartilage. After subchondral fractures, significant changes in the cartilage itself occur after 6 months.
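
    The pixel-level quantification via a histogram command generalizes beyond Photoshop; a minimal numpy sketch of the cumulative signal-strength measure (the synthetic image stands in for a color-deconvolved stain channel):

        import numpy as np

        rng = np.random.default_rng(6)
        # Synthetic 8-bit single-channel image of a stained section; a real
        # analysis would load the isolated stain channel of interest.
        stain = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)

        hist = np.bincount(stain.ravel(), minlength=256)
        # Cumulative strength of signal intensity, as with a histogram
        # command: sum of (intensity x pixel count) over all levels.
        cumulative_strength = int(np.sum(np.arange(256) * hist))
        mean_intensity = cumulative_strength / stain.size
        print(cumulative_strength, round(mean_intensity, 1))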

  11. Quantitative analysis of receptor imaging

    International Nuclear Information System (INIS)

    Fu Zhanli; Wang Rongfu

    2004-01-01

    Model-based methods for quantitative analysis of receptor imaging, including kinetic, graphical and equilibrium methods, are introduced in detail. Some technical problems facing quantitative analysis of receptor imaging, such as the correction for in vivo metabolism of the tracer, the radioactivity contribution from blood volume within the ROI, and the estimation of the nondisplaceable ligand concentration, are also reviewed briefly.

  12. "Appearance potent"? A content analysis of UK gay and straight men's magazines.

    Science.gov (United States)

    Jankowski, Glen S; Fawkner, Helen; Slater, Amy; Tiggemann, Marika

    2014-09-01

    With little actual appraisal, a more 'appearance potent' (i.e., a reverence for appearance ideals) subculture has been used to explain gay men's greater body dissatisfaction in comparison to straight men's. This study sought to assess the respective appearance potency of each subculture by a content analysis of 32 issues of the most read gay (Attitude, Gay Times) and straight men's magazines (Men's Health, FHM) in the UK. Images of men and women were coded for their physical characteristics, objectification and nudity, as were the number of appearance adverts and articles. The gay men's magazines featured more images of men that were appearance ideal, nude and sexualized than the straight men's magazines. The converse was true for the images of women and appearance adverts. Although more research is needed to understand the effect of this content on the viewer, the findings are consistent with a more appearance potent gay male subculture. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.

  13. Exploration of mineral resource deposits based on analysis of aerial and satellite image data employing artificial intelligence methods

    Science.gov (United States)

    Osipov, Gennady

    2013-04-01

    We propose a solution to the problem of exploration of various mineral resource deposits and determination of their forms / classification of types (oil, gas, minerals, gold, etc.) with the help of satellite photography of the region of interest. Images received from the satellite are processed and analyzed to reveal the presence of specific signs of deposits of various minerals. The course of data processing and forecasting can be divided into several stages: Pre-processing of images: normalization of color and luminosity characteristics, determination of the necessary contrast level and integration of a great number of separate photos into a single map of the region. Construction of the semantic map image: recognition of the bitmapped image and allocation of objects and primitives known to the system. Intelligent analysis: at this stage the acquired information is analyzed with the help of a knowledge base, which contains so-called "attention landscapes" of experts. Methods used for recognition and identification of images: a) combined method of image recognition, b) semantic analysis of posterized images, c) reconstruction of three-dimensional objects from bitmapped images, d) cognitive technology of processing and interpretation of images. This stage is fundamentally new and distinguishes the suggested technology from all others. Automatic registration of the allocation of experts' attention - registration of the so-called "attention landscape" of experts - is the basis of the technology. Landscapes of attention are, essentially, highly effective filters that cut off unnecessary information and emphasize exactly the factors used by an expert for making a decision. The technology based on the denoted principles involves the following stages, implemented in corresponding program agents: Training mode -> Creation of a base of ophthalmologic images (OI) -> Processing and making generalized OI (GOI) -> Mode of recognition and interpretation of unknown images. Training mode

  14. The Role of Content in Inquiry-Based Elementary Science Lessons: An Analysis of Teacher Beliefs and Enactment

    Science.gov (United States)

    Furtak, Erin Marie; Alonzo, Alicia C.

    2010-05-01

    The Trends in International Mathematics and Science Study (TIMSS) Video Study explored instructional practices in the United States (US) in comparison with other countries that ranked higher on the 1999 TIMSS assessment, and revealed that 8th grade science teachers in the US emphasize activities over content during lessons (Roth et al. 2006). This study applies the content framework from the TIMSS Video Study to a sample of 28 3rd grade teachers enacting an inquiry-based unit on floating and sinking, and seeks a deeper understanding of teachers’ practices through analysis of interviews with those teachers. Transcripts of observed lessons were coded according to the TIMSS framework for types of content, and transcripts of teacher interviews were coded to capture the ways in which teachers described their role in and purposes for teaching science, particularly with respect to the floating and sinking unit. Results indicate that teachers focused more on canonical, procedural and experimental knowledge during lessons than on real-world connections and the nature of science; however, none of the types of content received major emphasis in a majority of the classrooms in the sample. During interviews, teachers described their practice in ways that prioritized helping students to like science over specific content outcomes. The study suggests that elementary school teachers’ emphasis on doing and feeling during inquiry-based lessons may interfere with teaching of content.

  15. Fruit-related terms and images on food packages and advertisements affect children's perceptions of foods' fruit content.

    Science.gov (United States)

    Heller, Rebecca; Martin-Biggers, Jennifer; Berhaupt-Glickstein, Amanda; Quick, Virginia; Byrd-Bredbenner, Carol

    2015-10-01

    To determine whether food label information and advertisements for foods containing no fruit cause children to have a false impression of the foods' fruit content. In the food label condition, a trained researcher showed each child sixteen different food label photographs depicting front-of-food label packages that varied with regard to fruit content (i.e. real fruit v. sham fruit) and label elements. In the food advertisement condition, children viewed sixteen 30 s television food advertisements with similar fruit content and label elements as in the food label condition. After viewing each food label and advertisement, children responded to the question 'Did they use fruit to make this?' with responses of yes, no or don't know. Setting: schools, day-care centres, after-school programmes and other community groups. Subjects: children aged 4-7 years. In the food label condition, χ² analysis of within-fruit-content variation differences indicated children (n 58; mean age 4·2 years) were significantly more accurate in identifying real fruit foods as the label's informational load increased and were least accurate when neither a fruit name nor an image was on the label. Children (n 49; mean age 5·4 years) in the food advertisement condition were more likely to identify real fruit foods when advertisements had fruit images compared with when no image was included, while fruit images in advertisements for sham fruit foods significantly reduced the accuracy of responses. Findings suggest that labels and advertisements for sham fruit foods mislead children with regard to the food's real fruit content.
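
    A hypothetical version of the χ² step on a 2x2 response table (the counts are invented; the study's actual tables are not in this record):

        import numpy as np
        from scipy.stats import chi2_contingency

        # Hypothetical 2x2 table: children's "made with fruit?" answers for
        # labels with a fruit name and image vs. labels with neither.
        #                 yes   no / don't know
        table = np.array([[45, 13],    # name + image on label
                          [22, 36]])   # neither on label
        chi2, p, dof, expected = chi2_contingency(table)
        print(f"chi2 = {chi2:.2f}, p = {p:.4f}")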

  16. An Object-Based Image Analysis Approach for Detecting Penguin Guano in very High Spatial Resolution Satellite Images

    Directory of Open Access Journals (Sweden)

    Chandi Witharana

    2016-04-01

    Full Text Available The logistical challenges of Antarctic field work and the increasing availability of very high resolution commercial imagery have driven an interest in more efficient search and classification of remotely sensed imagery. This exploratory study employed geographic object-based analysis (GEOBIA methods to classify guano stains, indicative of chinstrap and Adélie penguin breeding areas, from very high spatial resolution (VHSR satellite imagery and closely examined the transferability of knowledge-based GEOBIA rules across different study sites focusing on the same semantic class. We systematically gauged the segmentation quality, classification accuracy, and the reproducibility of fuzzy rules. A master ruleset was developed based on one study site and it was re-tasked “without adaptation” and “with adaptation” on candidate image scenes comprising guano stains. Our results suggest that object-based methods incorporating the spectral, textural, spatial, and contextual characteristics of guano are capable of successfully detecting guano stains. Reapplication of the master ruleset on candidate scenes without modifications produced inferior classification results, while adapted rules produced comparable or superior results compared to the reference image. This work provides a road map to an operational “image-to-assessment pipeline” that will enable Antarctic wildlife researchers to seamlessly integrate VHSR imagery into on-demand penguin population census.

  17. Image-Based 3d Reconstruction and Analysis for Orthodontia

    Science.gov (United States)

    Knyaz, V. A.

    2012-08-01

    Among the main tasks of orthodontia are the analysis of teeth arches and treatment planning to provide the correct position for every tooth. The treatment plan is based on measurement of teeth parameters and on designing the ideal dental arch curve that the teeth are to form after treatment. The most common technique for moving teeth uses standard brackets, which are put on the teeth, and a wire of given shape, which is clamped by these brackets to produce the forces needed to move each tooth in a given direction. The disadvantages of the standard bracket technique are the low accuracy of tooth dimension measurements and problems with applying the standard approach to a wide variety of complex orthodontic cases. An image-based technique for orthodontic planning, treatment and documentation aimed at overcoming these disadvantages is proposed. The proposed approach provides accurate measurements of the teeth parameters needed for adequate planning, designing the correct teeth positions and monitoring the treatment process. The developed technique applies photogrammetric means for teeth arch 3D model generation, bracket position determination and teeth shifting analysis.

  18. Social media use by community-based organizations conducting health promotion: a content analysis.

    Science.gov (United States)

    Ramanadhan, Shoba; Mendez, Samuel R; Rao, Megan; Viswanath, Kasisomayajula

    2013-12-05

    Community-based organizations (CBOs) are critical channels for the delivery of health promotion programs. Much of their influence comes from the relationships they have with community members and other key stakeholders and they may be able to harness the power of social media tools to develop and maintain these relationships. There are limited data describing if and how CBOs are using social media. This study assesses the extent to which CBOs engaged in health promotion use popular social media channels, the types of content typically shared, and the extent to which the interactive aspects of social media tools are utilized. We assessed the social media presence and patterns of usage of CBOs engaged in health promotion in Boston, Lawrence, and Worcester, Massachusetts. We coded content on three popular channels: Facebook, Twitter, and YouTube. We used content analysis techniques to quantitatively summarize posts, tweets, and videos on these channels, respectively. For each organization, we coded all content put forth by the CBO on the three channels in a 30-day window. Two coders were trained and conducted the coding. Data were collected between November 2011 and January 2012. A total of 166 organizations were included in our census. We found that 42% of organizations used at least one of the channels of interest. Across the three channels, organization promotion was the most common theme for content (66% of posts, 63% of tweets, and 93% of videos included this content). Most organizations updated Facebook and Twitter content at rates close to recommended frequencies. We found limited interaction/engagement with audience members. Much of the use of social media tools appeared to be uni-directional, a flow of information from the organization to the audience. By better leveraging opportunities for interaction and user engagement, these organizations can reap greater benefits from the non-trivial investment required to use social media well. Future research should

  19. Attention-based image similarity measure with application to content-based information retrieval

    Science.gov (United States)

    Stentiford, Fred W. M.

    2003-01-01

    Whilst storage and capture technologies are able to cope with huge numbers of images, image retrieval is in danger of rendering many repositories valueless because of the difficulty of access. This paper proposes a similarity measure that imposes only very weak assumptions on the nature of the features used in the recognition process. This approach does not make use of a pre-defined set of feature measurements which are extracted from a query image and used to match those from database images, but instead generates features on a trial and error basis during the calculation of the similarity measure. This has the significant advantage that features that determine similarity can match whatever image property is important in a particular region whether it be a shape, a texture, a colour or a combination of all three. It means that effort is expended searching for the best feature for the region rather than expecting that a fixed feature set will perform optimally over the whole area of an image and over every image in a database. The similarity measure is evaluated on a problem of distinguishing similar shapes in sets of black and white symbols.

  20. Texture analysis of speckle in optical coherence tomography images of tissue phantoms

    International Nuclear Information System (INIS)

    Gossage, Kirk W; Smith, Cynthia M; Kanter, Elizabeth M; Hariri, Lida P; Stone, Alice L; Rodriguez, Jeffrey J; Williams, Stuart K; Barton, Jennifer K

    2006-01-01

    Optical coherence tomography (OCT) is an imaging modality capable of acquiring cross-sectional images of tissue using back-reflected light. Conventional OCT images have a resolution of 10-15 μm, and are thus best suited for visualizing tissue layers and structures. OCT images of collagen (with and without endothelial cells) have no resolvable features and may appear to simply show an exponential decrease in intensity with depth. However, examination of these images reveals that they display a characteristic repetitive structure due to speckle. The purpose of this study is to evaluate the application of statistical and spectral texture analysis techniques for differentiating living and non-living tissue phantoms containing various sizes and distributions of scatterers, based on the speckle content of OCT images. Statistically significant differences between texture parameters and excellent classification rates were obtained when comparing various endothelial cell concentrations ranging from 0 cells/ml to 25 million cells/ml. Statistically significant results and excellent classification rates were also obtained using various sizes of microspheres with concentrations ranging from 0 microspheres/ml to 500 million microspheres/ml. This study has shown that texture analysis of OCT images may be capable of differentiating tissue phantoms containing various sizes and distributions of scatterers.
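
    A minimal sketch of one statistical and one spectral texture measure on a speckle patch (synthetic data; the study's exact parameter set is not listed in this record):

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        rng = np.random.default_rng(7)
        # Synthetic quantized OCT B-scan patch; real input would be a speckle
        # region cropped from the phantom image.
        patch = rng.integers(0, 64, size=(64, 64), dtype=np.uint8)

        # Statistical texture: gray-level co-occurrence features.
        glcm = graycomatrix(patch, distances=[1], angles=[0], levels=64, normed=True)
        contrast = graycoprops(glcm, "contrast")[0, 0]
        correlation = graycoprops(glcm, "correlation")[0, 0]

        # Spectral texture: 2D power spectrum of the same patch.
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch))) ** 2
        print(round(contrast, 2), round(correlation, 3), spectrum.sum() > 0)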

  1. Fiber array based hyperspectral Raman imaging for chemical selective analysis of malaria-infected red blood cells

    Energy Technology Data Exchange (ETDEWEB)

    Brückner, Michael [Leibniz Institute of Photonic Technology, 07745 Jena (Germany); Becker, Katja [Justus Liebig University Giessen, Biochemistry and Molecular Biology, 35392 Giessen (Germany); Popp, Jürgen [Leibniz Institute of Photonic Technology, 07745 Jena (Germany); Friedrich Schiller University Jena, Institute for Physical Chemistry, 07745 Jena (Germany); Friedrich Schiller University Jena, Abbe Centre of Photonics, 07745 Jena (Germany); Frosch, Torsten, E-mail: torsten.frosch@uni-jena.de [Leibniz Institute of Photonic Technology, 07745 Jena (Germany); Friedrich Schiller University Jena, Institute for Physical Chemistry, 07745 Jena (Germany); Friedrich Schiller University Jena, Abbe Centre of Photonics, 07745 Jena (Germany)

    2015-09-24

    A new setup for Raman spectroscopic wide-field imaging is presented. It combines the advantages of a fiber array based spectral translator with a tailor-made laser illumination system for high-quality Raman chemical imaging of sensitive biological samples. The Gaussian-like intensity distribution of the illuminating laser beam is shaped by a square-core optical multimode fiber to a top-hat profile with very homogeneous intensity distribution to fulfill the conditions of Koehler. The 30 m long optical fiber and an additional vibrator efficiently destroy the polarization and coherence of the illuminating light. This homogeneous, incoherent illumination is an essential prerequisite for stable quantitative imaging of complex biological samples. The fiber array translates the two-dimensional lateral information of the Raman stray light into separated spectral channels with very high contrast. The Raman image can be correlated with a corresponding white light microscopic image of the sample. The new setup enables simultaneous quantification of all Raman spectra across the whole spatial area with very good spectral resolution and thus outperforms other Raman imaging approaches based on scanning and tunable filters. The unique capabilities of the setup for fast, gentle, sensitive, and selective chemical imaging of biological samples were applied for automated hemozoin analysis. A special algorithm was developed to generate Raman images based on the hemozoin distribution in red blood cells without any influence from other Raman scattering. The new imaging setup in combination with the robust algorithm provides a novel, elegant way for chemical selective analysis of the malaria pigment hemozoin in early ring stages of Plasmodium falciparum infected erythrocytes. - Highlights: • Raman hyperspectral imaging allows for chemical selective analysis of biological samples with spatial heterogeneity. • A homogeneous, incoherent illumination is essential for reliable
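
    The hemozoin-specific image generation can be sketched as baseline-corrected integration of a marker band across the hyperspectral cube (the 1550-1590 cm⁻¹ window around the hemozoin band reported in the literature is an assumption; the paper's dedicated algorithm is more involved):

        import numpy as np

        rng = np.random.default_rng(8)
        # Synthetic hyperspectral cube: y, x, wavenumber (cm^-1) axis.
        wavenumbers = np.linspace(400, 1800, 700)
        cube = rng.normal(10, 1, size=(32, 32, wavenumbers.size))

        # Integrate the marker band after subtracting a simple baseline
        # estimated from the band edges.
        lo, hi = np.searchsorted(wavenumbers, [1550, 1590])
        band = cube[:, :, lo:hi]
        baseline = (band[:, :, :1] + band[:, :, -1:]) / 2
        raman_image = (band - baseline).sum(axis=2)
        print(raman_image.shape)  # (32, 32) chemical map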

  2. Fiber array based hyperspectral Raman imaging for chemical selective analysis of malaria-infected red blood cells

    International Nuclear Information System (INIS)

    Brückner, Michael; Becker, Katja; Popp, Jürgen; Frosch, Torsten

    2015-01-01

    A new setup for Raman spectroscopic wide-field imaging is presented. It combines the advantages of a fiber array based spectral translator with a tailor-made laser illumination system for high-quality Raman chemical imaging of sensitive biological samples. The Gaussian-like intensity distribution of the illuminating laser beam is shaped by a square-core optical multimode fiber to a top-hat profile with very homogeneous intensity distribution to fulfill the conditions of Koehler. The 30 m long optical fiber and an additional vibrator efficiently destroy the polarization and coherence of the illuminating light. This homogeneous, incoherent illumination is an essential prerequisite for stable quantitative imaging of complex biological samples. The fiber array translates the two-dimensional lateral information of the Raman stray light into separated spectral channels with very high contrast. The Raman image can be correlated with a corresponding white light microscopic image of the sample. The new setup enables simultaneous quantification of all Raman spectra across the whole spatial area with very good spectral resolution and thus outperforms other Raman imaging approaches based on scanning and tunable filters. The unique capabilities of the setup for fast, gentle, sensitive, and selective chemical imaging of biological samples were applied for automated hemozoin analysis. A special algorithm was developed to generate Raman images based on the hemozoin distribution in red blood cells without any influence from other Raman scattering. The new imaging setup in combination with the robust algorithm provides a novel, elegant way for chemical selective analysis of the malaria pigment hemozoin in early ring stages of Plasmodium falciparum infected erythrocytes. - Highlights: • Raman hyperspectral imaging allows for chemical selective analysis of biological samples with spatial heterogeneity. • A homogeneous, incoherent illumination is essential for reliable

  3. A fully automatic end-to-end method for content-based image retrieval of CT scans with similar liver lesion annotations.

    Science.gov (United States)

    Spanier, A B; Caplan, N; Sosna, J; Acar, B; Joskowicz, L

    2018-01-01

    The goal of medical content-based image retrieval (M-CBIR) is to assist radiologists in the decision-making process by retrieving medical cases similar to a given image. One of the key interests of radiologists is lesions and their annotations, since the patient treatment depends on the lesion diagnosis. Therefore, a key feature of M-CBIR systems is the retrieval of scans with the most similar lesion annotations. To be of value, M-CBIR systems should be fully automatic to handle large case databases. We present a fully automatic end-to-end method for the retrieval of CT scans with similar liver lesion annotations. The input is a database of abdominal CT scans labeled with liver lesions, a query CT scan, and optionally one radiologist-specified lesion annotation of interest. The output is an ordered list of the database CT scans with the most similar liver lesion annotations. The method starts by automatically segmenting the liver in the scan. It then extracts a histogram-based features vector from the segmented region, learns the features' relative importance, and ranks the database scans according to the relative importance measure. The main advantages of our method are that it fully automates the end-to-end querying process, that it uses simple and efficient techniques that are scalable to large datasets, and that it produces quality retrieval results using an unannotated CT scan. Our experimental results on 9 CT queries on a dataset of 41 volumetric CT scans from the 2014 Image CLEF Liver Annotation Task yield an average retrieval accuracy (Normalized Discounted Cumulative Gain index) of 0.77 and 0.84 without/with annotation, respectively. Fully automatic end-to-end retrieval of similar cases based on image information alone, rather than on disease diagnosis, may help radiologists to better diagnose liver lesions.
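
    The ranking step, histogram features weighted by learned importance, can be sketched as follows (the feature dimension, weights and the L1 form of the distance are assumptions; the record does not specify the exact measure):

        import numpy as np

        rng = np.random.default_rng(9)
        # Hypothetical histogram feature vectors extracted from the segmented
        # liver of each database scan, plus learned per-bin importance weights.
        db_features = rng.random((41, 32))
        weights = rng.random(32)          # learned relative importance
        query_features = rng.random(32)

        # Rank database scans by weighted L1 distance to the query features.
        dists = np.sum(weights * np.abs(db_features - query_features), axis=1)
        ranking = np.argsort(dists)
        print("top-5 most similar scans:", ranking[:5])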

  4. Near-IR laser-based spectrophotometer for comparative analysis of the isotope content of CO₂ in exhaled air samples

    International Nuclear Information System (INIS)

    Stepanov, E V; Glushko, A N; Kasoev, S G; Koval', A V; Lapshin, D A

    2011-01-01

    We present a laser spectrophotometer aimed at high-accuracy comparative analysis of the content of the ¹²CO₂ and ¹³CO₂ isotope modifications in exhaled air samples, based on a tunable near-IR diode laser (2.05 μm). The two-channel optical scheme of the spectrophotometer and the special digital system for its control are described. An algorithm of spectral data processing aimed at determining the difference in the isotope composition of gas mixtures is proposed. A few spectral regions (near 4880 cm⁻¹) are determined to be optimal for analysis of the relative content of ¹²CO₂ and ¹³CO₂ in exhaled air. The use of the proposed spectrophotometer scheme and the developed algorithm makes the results of the analysis less susceptible to the influence of interference in optical elements, absorption in the open atmosphere, slow drift of the laser pulse envelope, and offset of the optical channels. The sensitivity of the comparative analysis of the isotope content of CO₂ in exhaled air samples, achieved using the proposed scheme, is estimated to be nearly 0.1‰.
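
    Comparative ¹³C/¹²C results of this kind are commonly reported in per-mil delta notation; a worked example (the VPDB reference ratio is standard, but expressing this instrument's output as δ¹³C is an assumption for illustration):

        # 13C/12C ratio of the VPDB reference standard.
        R_VPDB = 0.0111802

        def delta13C_permil(r_sample: float) -> float:
            # Per-mil deviation of a sample's 13C/12C ratio from VPDB.
            return (r_sample / R_VPDB - 1.0) * 1000.0

        # A sample ratio slightly below the standard, typical of breath CO2.
        print(round(delta13C_permil(0.011120), 2))  # about -5.38 per mil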

  5. Image registration based on virtual frame sequence analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chen, H.; Ng, W.S. [Nanyang Technological University, Computer Integrated Medical Intervention Laboratory, School of Mechanical and Aerospace Engineering, Singapore (Singapore); Shi, D. (Nanyang Technological University, School of Computer Engineering, Singapore, Singpore); Wee, S.B. [Tan Tock Seng Hospital, Department of General Surgery, Singapore (Singapore)

    2007-08-15

    This paper proposes a new framework for medical image registration with large nonrigid deformations, which remain one of the biggest challenges for image fusion and further analysis in many medical applications. The registration problem is formulated as recovering a deformation process with known initial and final states. To deal with large nonlinear deformations, virtual frames are inserted to model the deformation process. A time parameter is introduced, and the deformation between consecutive frames is described with a linear affine transformation. Experiments are conducted with simple geometric deformations as well as complex deformations presented in MRI and ultrasound images. All the deformations are characterized by nonlinearity. The positive results demonstrate the effectiveness of this algorithm. The framework proposed in this paper is feasible for registering medical images with large nonlinear deformations and is especially useful for sequential images. (orig.)
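
    A minimal sketch of the time-parameterized deformation idea, assuming (for illustration only) that the affine parameters are interpolated linearly from the identity at t = 0 to the final state at t = 1:

        import numpy as np

        # Final affine state of the modeled deformation (invented values).
        A_final = np.array([[1.1, 0.05], [-0.04, 0.95]])
        t_final = np.array([3.0, -2.0])

        def virtual_frame_transform(t: float):
            # Affine map at time t in [0, 1] along the deformation process.
            A_t = (1 - t) * np.eye(2) + t * A_final
            b_t = t * t_final
            return A_t, b_t

        points = np.array([[10.0, 20.0], [15.0, 5.0]])
        for t in (0.0, 0.5, 1.0):  # virtual frames between the two states
            A, b = virtual_frame_transform(t)
            print(t, points @ A.T + b)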

  6. Imaging of the skull base anatomy; Schnittbildanatomie der Schaedelbasis

    Energy Technology Data Exchange (ETDEWEB)

    Wuest, Wolfgang; Uder, Michael; Lell, Michael [Erlangen-Nuernberg Univ., Universitaetsklinikum (Germany). Radiologisches Institut

    2016-09-15

    The skull base divides the extracranial from the intracranial compartment and contains a multiplicity of bony and soft tissue structures. Profound knowledge of the complex anatomy is mandatory for evaluating the skull base. To limit the number of differential diagnoses, it is important to be familiar with the contents of the different compartments. Due to technical progress and the difficulty of assessing the skull base clinically, imaging plays a significant role in diagnosis. Both MRI and CT are used for imaging; they represent not competing but complementary methods.

  7. Componential distribution analysis of food using near infrared ray image

    Science.gov (United States)

    Yamauchi, Hiroki; Kato, Kunihito; Yamamoto, Kazuhiko; Ogawa, Noriko; Ohba, Kimie

    2008-11-01

    The components of food related to "deliciousness" are usually evaluated by componential analysis, which determines the content and type of components in the food. However, componential analysis cannot analyze measurements in spatial detail, and it is time consuming. We propose a method to measure the two-dimensional distribution of a component in food using near infrared (IR) images. The advantage of our method is the ability to visualize otherwise invisible components. Many components in food have characteristics such as absorption and reflection of light in the IR range. The component content is measured using subtraction between images taken at two wavelengths of near IR light. In this paper, we describe a method to measure food components using near IR image processing, and we show an application visualizing the saccharose content of a pumpkin.
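
    A minimal sketch of the two-wavelength subtraction (the frames are synthetic; real input would be registered near-IR images at an absorption band of the target component and at a nearby reference band):

        import numpy as np

        rng = np.random.default_rng(10)
        # Synthetic near-IR frames: one at an absorption wavelength of the
        # target component, one at a nearby reference wavelength (in [0, 1]).
        img_absorbing = rng.random((128, 128)) * 0.8
        img_reference = img_absorbing + rng.normal(0.1, 0.02, (128, 128))

        # Component distribution map: difference between the two wavelengths,
        # larger where the component absorbs more strongly.
        component_map = np.clip(img_reference - img_absorbing, 0, None)
        print(component_map.mean().round(3))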

  8. Cytological study of DNA content and nuclear morphometric analysis for aid in the diagnosis of high-grade dysplasia within oral leukoplakia.

    Science.gov (United States)

    Yang, Xi; Xiao, Xuan; Wu, Wenyan; Shen, Xuemin; Zhou, Zengtong; Liu, Wei; Shi, Linjun

    2017-09-01

    To quantitatively examine the DNA content and nuclear morphometric status of oral leukoplakia (OL) and investigate its association with the degree of dysplasia in a cytologic study. Oral cytobrush biopsy was carried out to obtain exfoliative epithelial cells from lesions before scalpel biopsy at the same location in a blinded series of 70 patients with OL. Analysis of nuclear morphometry and DNA content status using image cytometry was performed on oral smears stained with the Feulgen-thionin method. Nuclear morphometric analysis revealed significant differences in DNA content amount, DNA index, nuclear area, nuclear radius, nuclear intensity, sphericity, entropy, and fractal dimension (all P < .05). DNA content analysis identified 34 patients with OL (48.6%) with DNA content abnormality. Nonhomogeneous lesion (P = .018) and high-grade dysplasia (P = .008) were significantly associated with abnormal DNA content. Importantly, the positive correlation between the degree of oral dysplasia and DNA content status was significant (P = .004, correlation coefficient = 0.342). Cytologic analysis of DNA content and nuclear morphometric status using image cytometry may thus serve as a screening and monitoring tool for OL progression. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Lattice and strain analysis of atomic resolution Z-contrast images based on template matching

    Energy Technology Data Exchange (ETDEWEB)

    Zuo, Jian-Min, E-mail: jianzuo@uiuc.edu [Department of Materials Science and Engineering, University of Illinois, Urbana, IL 61801 (United States); Seitz Materials Research Laboratory, University of Illinois, Urbana, IL 61801 (United States); Shah, Amish B. [Center for Microanalysis of Materials, Materials Research Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL 61801 (United States); Kim, Honggyu; Meng, Yifei; Gao, Wenpei [Department of Materials Science and Engineering, University of Illinois, Urbana, IL 61801 (United States); Seitz Materials Research Laboratory, University of Illinois, Urbana, IL 61801 (United States); Rouviére, Jean-Luc [CEA-INAC/UJF-Grenoble UMR-E, SP2M, LEMMA, Minatec, Grenoble 38054 (France)

    2014-01-15

    A real space approach based on template matching is developed for quantitative lattice analysis using atomic resolution Z-contrast images. The method, called TeMA, uses the template of an atomic column, or a group of atomic columns, to transform the image into a lattice of correlation peaks. This is helped by using a locally intensity-adjusted correlation and by the design of the templates. Lattice analysis is performed on the correlation peaks. A reference lattice is used to correct for scan noise and scan distortions in the recorded images. Using these methods, we demonstrate that a precision of a few picometers is achievable in lattice measurement using aberration corrected Z-contrast images. For application, we apply the methods to strain analysis of a molecular beam epitaxy (MBE) grown LaMnO{sub 3} and SrMnO{sub 3} superlattice. The results show alternating epitaxial strain inside the superlattice and its variations across interfaces at the spatial resolution of a single perovskite unit cell. Our methods are general, model free and provide high spatial resolution for lattice analysis. - Highlights: • A real space approach is developed for strain analysis using atomic resolution Z-contrast images and template matching. • A precision of a few picometers is achievable in the measurement of lattice displacements. • The spatial resolution of a single perovskite unit cell is demonstrated for a LaMnO{sub 3} and SrMnO{sub 3} superlattice grown by MBE.
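
    A rough Python analogue of the correlation step, using scikit-image rather than the authors' TeMA code; the template, correlation threshold and peak spacing are placeholder assumptions. Fitting a reference lattice to the returned peaks and measuring residual displacements is the part of the analysis that delivers picometer-level precision.

        import numpy as np
        from skimage.feature import match_template, peak_local_max

        def lattice_peaks(image, template, min_spacing=8):
            # Normalized cross-correlation of an atomic-column template
            # with the image transforms it into a map of correlation peaks
            corr = match_template(image.astype(float), template.astype(float),
                                  pad_input=True)
            # Correlation maxima approximate the positions of the atomic columns
            return peak_local_max(corr, min_distance=min_spacing, threshold_abs=0.3)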

  10. A System for the Semantic Multimodal Analysis of News Audio-Visual Content

    Directory of Open Access Journals (Sweden)

    Michael G. Strintzis

    2010-01-01

    Full Text Available News-related content is nowadays among the most popular types of content for users in everyday applications. Although the generation and distribution of news content has become commonplace, due to the availability of inexpensive media capturing devices and the development of media sharing services targeting both professional and user-generated news content, the automatic analysis and annotation that is required for supporting intelligent search and delivery of this content remains an open issue. In this paper, a complete architecture for knowledge-assisted multimodal analysis of news-related multimedia content is presented, along with its constituent components. The proposed analysis architecture employs state-of-the-art methods for the analysis of each individual modality (visual, audio, text separately and proposes a novel fusion technique based on the particular characteristics of news-related content for the combination of the individual modality analysis results. Experimental results on news broadcast video illustrate the usefulness of the proposed techniques in the automatic generation of semantic annotations.

  11. A quality quantitative method of silicon direct bonding based on wavelet image analysis

    Science.gov (United States)

    Tan, Xiao; Tao, Zhi; Li, Haiwang; Xu, Tiantong; Yu, Mingxing

    2018-04-01

    The rapid development of MEMS (micro-electro-mechanical systems) has attracted significant attention from researchers in various fields. The MEMS fabrication process, in particular, is elaborate and has been the focus of extensive research. However, in MEMS fabrication, component bonding is difficult to achieve and requires a complex approach, so improvements in bonding quality are important objectives. A higher quality bond can only be achieved with improved measurement and testing capabilities. Traditional testing methods mainly include infrared testing, tensile testing, and strength testing, but measuring bond quality with these methods is often inefficient or destructive. Therefore, this paper focuses on the development of a precise, nondestructive visual testing method based on wavelet image analysis that is shown to be highly effective in practice. The wavelet image analysis consists of wavelet image denoising, wavelet image enhancement, and contrast enhancement, and as an end result can display an image with low background noise. In addition, because the wavelet analysis software was developed with MATLAB, it can reveal the bonding boundaries and bonding rates to precisely indicate the bond quality at all locations on the wafer. This work also presents a set of orthogonal experiments on three prebonding factors, the prebonding temperature, the positive pressure value and the prebonding time, which are used to analyze the prebonding quality. This method was used to quantify the quality of silicon-to-silicon wafer bonding, yielding standard treatment quantities that could be practical for large-scale use.
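
    As a rough illustration of the denoising stage (the paper's software is written in MATLAB), here is a Python sketch using PyWavelets; the wavelet family, decomposition level and threshold are arbitrary assumptions.

        import numpy as np
        import pywt

        def wavelet_denoise(img, wavelet="db4", level=2, thresh=10.0):
            # Decompose, soft-threshold the detail coefficients, reconstruct
            coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
            out = [coeffs[0]]                       # keep the approximation band
            for details in coeffs[1:]:
                out.append(tuple(pywt.threshold(d, thresh, mode="soft")
                                 for d in details))
            rec = pywt.waverec2(out, wavelet)
            return rec[:img.shape[0], :img.shape[1]]   # crop padding, if any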

  12. An expert image analysis system for chromosome analysis application

    International Nuclear Information System (INIS)

    Wu, Q.; Suetens, P.; Oosterlinck, A.; Van den Berghe, H.

    1987-01-01

    This paper reports a recent study on applying a knowledge-based system approach as a new attempt to solve the problem of chromosome classification. Based on this study, a theoretical framework of an expert image analysis system is proposed. In this scheme, chromosome classification is carried out under a hypothesize-and-verify paradigm by integrating a rule-based component, in which the expertise of chromosome karyotyping is formulated, with an existing image analysis system that uses conventional pattern recognition techniques. Results from the existing system can be used to generate hypotheses, and with the rule-based verification and modification procedures, improvement of the classification performance can be expected

  13. Analysis of rocket flight stability based on optical image measurement

    Science.gov (United States)

    Cui, Shuhua; Liu, Junhu; Shen, Si; Wang, Min; Liu, Jun

    2018-02-01

    Based on the abundant optical image measurement data available from optical measurement information, this paper puts forward a method for evaluating rocket flight stability using imaging measurements of the characteristics of the carrier rocket. The attitude parameters of the rocket body in the coordinate system are calculated from the measurement data of multiple high-speed television cameras; these parameters are then converted into the rocket body angle of attack, and it is assessed whether the rocket has good flight stability, i.e. flies with a small angle of attack. The measurement method and the mathematical algorithm were verified in a data processing test, in which the flight stability state of the rocket can be observed intuitively and guidance system failures can be identified visually for failure analysis.

  14. Leaf Chlorophyll Content Estimation of Winter Wheat Based on Visible and Near-Infrared Sensors.

    Science.gov (United States)

    Zhang, Jianfeng; Han, Wenting; Huang, Lvwen; Zhang, Zhiyong; Ma, Yimian; Hu, Yamin

    2016-03-25

    The leaf chlorophyll content is one of the most important factors for the growth of winter wheat. Visible and near-infrared sensors provide a quick and non-destructive technique for estimating crop leaf chlorophyll content. In this paper, a new approach is developed for the leaf chlorophyll content estimation of winter wheat based on visible and near-infrared sensors. First, sliding window smoothing (SWS) was integrated with multiplicative scatter correction (MSC) or the standard normal variate transformation (SNV) to preprocess the reflectance spectra images of wheat leaves. Then, a model of the relationship between the leaf relative chlorophyll content and the reflectance spectra was developed using partial least squares (PLS) and a back propagation neural network. A total of 300 samples from areas surrounding Yangling, China, were used for the experimental studies. The visible and near-infrared spectra in the 450-900 nm range were preprocessed using SWS, MSC and SNV. The experimental results indicate that preprocessing using SWS and SNV followed by modeling using PLS achieves the most accurate estimation, with a correlation coefficient of 0.8492 and a root mean square error of 1.7216. Thus, the proposed approach can be widely used for winter wheat chlorophyll content analysis.
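
    The two preprocessing steps named above are easy to state precisely. A minimal Python sketch of sliding window smoothing and the standard normal variate transformation, with an arbitrary window width:

        import numpy as np

        def sliding_window_smooth(spectrum, window=7):
            # Moving-average (SWS) smoothing of a 1-D reflectance spectrum
            kernel = np.ones(window) / window
            return np.convolve(spectrum, kernel, mode="same")

        def snv(spectrum):
            # Standard normal variate: standardize each spectrum individually
            return (spectrum - spectrum.mean()) / spectrum.std()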

  15. Fourier transform infrared spectroscopic imaging and multivariate regression for prediction of proteoglycan content of articular cartilage.

    Directory of Open Access Journals (Sweden)

    Lassi Rieppo

    Full Text Available Fourier Transform Infrared (FT-IR) spectroscopic imaging has previously been applied for the spatial estimation of the collagen and proteoglycan (PG) contents of articular cartilage (AC). However, earlier studies have been limited to univariate analysis techniques, which lack the needed specificity for collagen and PGs. The aim of the present study was to evaluate the suitability of partial least squares regression (PLSR) and principal component regression (PCR) methods for the analysis of the PG content of AC. Multivariate regression models were compared with the earlier univariate methods and tested with a sample material consisting of healthy and enzymatically degraded steer AC. Chondroitinase ABC enzyme was used to increase the variation in PG content levels as compared to intact AC. Digital densitometric measurements of Safranin O-stained sections provided the reference for PG content. The results showed that multivariate regression models predict the PG content of AC significantly better than the earlier univariate parameters (the area of the carbohydrate region of the absorbance spectrum, with or without amide I normalization, or the second derivative spectrum). Increased molecular specificity favours the use of multivariate regression models, but they require more knowledge of chemometric analysis and extended laboratory resources for gathering reference data to establish the models. When true molecular specificity is required, the multivariate models should be used.
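
    For readers unfamiliar with the two multivariate models, this is how PLSR and PCR are typically set up with scikit-learn; the arrays and the number of components below are placeholders, not the study's data.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline

        # X: (n_samples, n_wavenumbers) FT-IR spectra; y: reference PG content
        X = np.random.rand(40, 400)   # placeholder data for illustration only
        y = np.random.rand(40)

        plsr = PLSRegression(n_components=5).fit(X, y)
        pcr = make_pipeline(PCA(n_components=5), LinearRegression()).fit(X, y)
        print(plsr.score(X, y), pcr.score(X, y))   # R^2 of each model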

  16. Research Trends in Technology-Based Learning from 2000 to 2009: A Content Analysis of Publications in Selected Journals

    Science.gov (United States)

    Hsu, Yu-Chen; Ho, Hsin Ning Jessie; Tsai, Chin-Chung; Hwang, Gwo-Jen; Chu, Hui-Chun; Wang, Chin-Yeh; Chen, Nian-Shing

    2012-01-01

    This paper provides a content analysis of studies in technology-based learning (TBL) that were published in five Social Sciences Citation Index (SSCI) journals (i.e. "the British Journal of Educational Technology, Computers & Education, Educational Technology Research & Development, Educational Technology & Society, the Journal of Computer…

  17. Explicit area-based accuracy assessment for mangrove tree crown delineation using Geographic Object-Based Image Analysis (GEOBIA)

    Science.gov (United States)

    Kamal, Muhammad; Johansen, Kasper

    2017-10-01

    Effective mangrove management requires spatially explicit information on mangrove tree crowns as a basis for ecosystem diversity studies and health assessment. Accuracy assessment is an integral part of any mapping activity to measure the effectiveness of the classification approach. In geographic object-based image analysis (GEOBIA), assessment of the geometric accuracy (shape, symmetry and location) of the image objects created by image segmentation is required. In this study we used an explicit area-based accuracy assessment to measure the degree of similarity between the classification results and reference data from different aspects, including overall quality (OQ), user's accuracy (UA), producer's accuracy (PA) and overall accuracy (OA). We developed a rule set to delineate mangrove tree crowns using a WorldView-2 pan-sharpened image. The reference map was obtained by visual delineation of mangrove tree crown boundaries from a very high spatial resolution aerial photograph (7.5 cm pixel size). Ten random points, each with a 10 m radius circular buffer, were created to calculate the area-based accuracy assessment. The resulting circular polygons were used to clip both the classified image objects and the reference map for area comparison. In this case, the area-based accuracy assessment yielded 64% and 68% for OQ and OA, respectively. The overall quality reflects the class-related area accuracy: the area correctly classified as tree crowns was 64% of the total tree crown area. The overall accuracy of 68% is the percentage of all correctly classified classes (tree crowns and canopy gaps) relative to the total class area (the entire image). Overall, the area-based accuracy assessment was simple to implement and easy to interpret. It also shows explicitly the omission and commission error variations of object boundary delineation with colour-coded polygons.
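
    Once the classified and reference maps are clipped to a sample plot, the area-based measures reduce to area counting. A Python sketch over boolean masks, using one common definition of overall quality; whether this matches the authors' exact formula is an assumption.

        import numpy as np

        def area_based_accuracy(classified, reference):
            # classified, reference: boolean masks (True = tree crown) over one plot
            tp = np.sum(classified & reference)    # correctly mapped crown area
            fp = np.sum(classified & ~reference)   # commission error area
            fn = np.sum(~classified & reference)   # omission error area
            tn = np.sum(~classified & ~reference)  # correctly mapped gap area
            oq = tp / (tp + fp + fn)               # overall quality (crown class)
            oa = (tp + tn) / (tp + fp + fn + tn)   # overall accuracy (both classes)
            return oq, oa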

  18. Image Analysis Based on Soft Computing and Applied on Space Shuttle During the Liftoff Process

    Science.gov (United States)

    Dominquez, Jesus A.; Klinko, Steve J.

    2007-01-01

    Imaging techniques based on Soft Computing (SC) and developed at Kennedy Space Center (KSC) have been implemented on a variety of prototype applications related to the safe operation of the Space Shuttle during the liftoff process. These SC-based prototype applications include detection and tracking of moving Foreign Object Debris (FOD) during the Space Shuttle liftoff, visual anomaly detection on slidewires used in the emergency egress system for the Space Shuttle at the launch pad, and visual detection of distant birds approaching the Space Shuttle launch pad. This SC-based image analysis capability developed at KSC was also used to analyze images acquired during the accident of the Space Shuttle Columbia and to estimate the trajectory and velocity of the foam that caused the accident.

  19. Image Post-Processing and Analysis. Chapter 17

    Energy Technology Data Exchange (ETDEWEB)

    Yushkevich, P. A. [University of Pennsylvania, Philadelphia (United States)

    2014-09-15

    For decades, scientists have used computers to enhance and analyse medical images. At first, they developed simple computer algorithms to enhance the appearance of interesting features in images, helping humans read and interpret them better. Later, they created more advanced algorithms, where the computer would not only enhance images but also participate in facilitating understanding of their content. Segmentation algorithms were developed to detect and extract specific anatomical objects in images, such as malignant lesions in mammograms. Registration algorithms were developed to align images of different modalities and to find corresponding anatomical locations in images from different subjects. These algorithms have made computer aided detection and diagnosis, computer guided surgery and other highly complex medical technologies possible. Nowadays, the field of image processing and analysis is a complex branch of science that lies at the intersection of applied mathematics, computer science, physics, statistics and biomedical sciences. This chapter will give a general overview of the most common problems in this field and the algorithms that address them.

  20. Container-Based Clinical Solutions for Portable and Reproducible Image Analysis.

    Science.gov (United States)

    Matelsky, Jordan; Kiar, Gregory; Johnson, Erik; Rivera, Corban; Toma, Michael; Gray-Roncal, William

    2018-05-08

    Medical imaging analysis depends on the reproducibility of complex computation. Linux containers enable the abstraction, installation, and configuration of environments so that software can be both distributed in self-contained images and used repeatably by tool consumers. While several initiatives in neuroimaging have adopted approaches for creating and sharing more reliable scientific methods and findings, Linux containers are not yet mainstream in clinical settings. We explore related technologies and their efficacy in this setting, highlight important shortcomings, demonstrate a simple use-case, and endorse the use of Linux containers for medical image analysis.

  1. Segmentation-based retrospective shading correction in fluorescence microscopy E. coli images for quantitative analysis

    Science.gov (United States)

    Mai, Fei; Chang, Chunqi; Liu, Wenqing; Xu, Weichao; Hung, Yeung S.

    2009-10-01

    Due to the inherent imperfections of the imaging process, fluorescence microscopy images often suffer from spurious intensity variations, usually referred to as intensity inhomogeneity, intensity non-uniformity, shading or bias field. In this paper, a retrospective shading correction method for fluorescence microscopy Escherichia coli (E. coli) images is proposed based on the segmentation result. Segmentation and shading correction are coupled together: we iteratively correct the shading effects based on the segmentation result and refine the segmentation by segmenting the image after shading correction. A fluorescence microscopy E. coli image can be segmented (based on its intensity values) into two classes, the background and the cells, where the intensity variation within each class is close to zero if there is no shading. We make use of this characteristic to correct the shading in each iteration. Shading is mathematically modeled as a multiplicative component and an additive noise component. The additive component is removed by a denoising process, and the multiplicative component is estimated using a fast algorithm that minimizes the intra-class intensity variation. We tested our method on synthetic images and real fluorescence E. coli images. It works well not only for visual inspection but also in numerical evaluation. The proposed method should be useful for further quantitative analysis, especially for comparisons of protein expression values.
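
    A compressed Python sketch of the coupled loop (segmentation, then multiplicative correction); the Otsu threshold, Gaussian smoothing scale and iteration count are assumptions, and the paper's additive denoising step is omitted.

        import numpy as np
        from scipy.ndimage import gaussian_filter
        from skimage.filters import threshold_otsu

        def correct_shading(img, n_iter=5, sigma=50):
            corrected = img.astype(float)
            for _ in range(n_iter):
                cells = corrected > threshold_otsu(corrected)  # current segmentation
                # Normalize each class by its mean; a shading-free image is then ~1
                flat = np.where(cells,
                                corrected / corrected[cells].mean(),
                                corrected / corrected[~cells].mean())
                shading = gaussian_filter(flat, sigma)  # smooth multiplicative field
                corrected = img / np.maximum(shading, 1e-6)
            return corrected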

  2. Heterogeneity, histological features and DNA ploidy in oral carcinoma by image-based analysis.

    Science.gov (United States)

    Diwakar, N; Sperandio, M; Sherriff, M; Brown, A; Odell, E W

    2005-04-01

    Oral squamous carcinomas appear heterogeneous on DNA ploidy analysis. However, this may be partly a result of sample dilution or the detection limit of techniques. The aim of this study was to determine whether oral squamous carcinomas are heterogeneous for ploidy status using image-based ploidy analysis and to determine whether ploidy status correlates with histological parameters. Multiple samples from 42 oral squamous carcinomas were analysed for DNA ploidy using an image-based system and scored for histological parameters. 22 were uniformly aneuploid, 1 uniformly tetraploid and 3 uniformly diploid. 16 appeared heterogeneous but only 8 appeared to be genuinely heterogeneous when minor ploidy histogram peaks were taken into account. Ploidy was closely related to nuclear pleomorphism but not differentiation. Sample variation, detection limits and diagnostic criteria account for much of the ploidy heterogeneity observed. Confident diagnosis of diploid status in an oral squamous cell carcinoma requires a minimum of 5 samples.

  3. Automated processing of zebrafish imaging data: a survey.

    Science.gov (United States)

    Mikut, Ralf; Dickmeis, Thomas; Driever, Wolfgang; Geurts, Pierre; Hamprecht, Fred A; Kausler, Bernhard X; Ledesma-Carbayo, María J; Marée, Raphaël; Mikula, Karol; Pantazis, Periklis; Ronneberger, Olaf; Santos, Andres; Stotzka, Rainer; Strähle, Uwe; Peyriéras, Nadine

    2013-09-01

    Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines.

  5. Target Identification Using Harmonic Wavelet Based ISAR Imaging

    Science.gov (United States)

    Shreyamsha Kumar, B. K.; Prabhakar, B.; Suryanarayana, K.; Thilagavathi, V.; Rajagopal, R.

    2006-12-01

    A new approach has been proposed to reduce the computations involved in ISAR imaging, which uses a harmonic wavelet (HW) based time-frequency representation (TFR). Since the HW-based TFR falls into the category of nonparametric time-frequency (T-F) analysis tools, it is computationally efficient compared to parametric T-F analysis tools such as the adaptive joint time-frequency transform (AJTFT), the adaptive wavelet transform (AWT), and the evolutionary AWT (EAWT). Further, the performance of the proposed method of ISAR imaging is compared with ISAR imaging by other nonparametric T-F analysis tools such as the short-time Fourier transform (STFT) and the Choi-Williams distribution (CWD). In ISAR imaging, the use of the HW-based TFR provides similar or better results with a significant (92%) computational advantage compared to CWD. The ISAR images thus obtained are identified using a neural network based classification scheme with a feature set invariant to translation, rotation, and scaling.
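
    The computational appeal of an HW-based TFR is that it can be built from a single FFT. A crude Newland-style sketch in Python; the dyadic band layout is the standard construction and not necessarily the authors' exact variant.

        import numpy as np

        def harmonic_wavelet_tfr(signal):
            n = len(signal)
            spec = np.fft.fft(signal)
            bands = []
            for j in range(int(np.log2(n)) - 1):
                lo, hi = 2 ** j, 2 ** (j + 1)
                band = np.zeros(n, dtype=complex)
                band[lo:hi] = spec[lo:hi]                # keep one octave band
                bands.append(np.abs(np.fft.ifft(band)))  # its envelope over time
            return np.array(bands)                       # rows = octave bands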

  6. Space-based infrared sensors of space target imaging effect analysis

    Science.gov (United States)

    Dai, Huayu; Zhang, Yasheng; Zhou, Haijun; Zhao, Shuang

    2018-02-01

    The target identification problem is one of the core problems of a ballistic missile defense system, and infrared imaging simulation is an important means of target detection and recognition. This paper first establishes a point-source imaging model for ballistic targets above the atmosphere as seen by space-based infrared sensors; it then simulates the infrared imaging of such targets from two aspects, the camera parameters of the space-based sensor and the target characteristics, and analyzes the effects of camera line-of-sight jitter, camera system noise and different wavebands on the target image.

  7. Music video shot segmentation using independent component analysis and keyframe extraction based on image complexity

    Science.gov (United States)

    Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun

    2012-04-01

    In recent years, music video data has been increasing at an astonishing speed. Shot segmentation and keyframe extraction constitute a fundamental unit in organizing, indexing and retrieving video content. In this paper a unified framework is proposed to detect shot boundaries and extract the keyframe of a shot. The music video is first segmented into shots using illumination-invariant chromaticity histograms in an independent component (IC) analysis feature space. We then present a new metric, image complexity, computed from the ICs, to extract the keyframe of a shot. Experimental results show the framework is effective and has good performance.
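
    A simplified Python sketch of the cut-detection idea: compare brightness-normalized chromaticity histograms of consecutive frames (the paper does this in an ICA feature space, which is omitted here); the bin count and threshold are assumptions.

        import numpy as np

        def chromaticity_hist(frame, bins=16):
            # (r, g) chromaticities are brightness-normalized, hence less
            # sensitive to illumination changes than raw RGB values
            rgb = frame.astype(float) + 1e-6
            s = rgb.sum(axis=2)
            h, _, _ = np.histogram2d((rgb[..., 0] / s).ravel(),
                                     (rgb[..., 1] / s).ravel(),
                                     bins=bins, range=[[0, 1], [0, 1]])
            return h / h.sum()

        def shot_boundaries(frames, thresh=0.25):
            hists = [chromaticity_hist(f) for f in frames]
            # A large histogram distance between consecutive frames marks a cut
            return [i for i in range(1, len(hists))
                    if 0.5 * np.abs(hists[i] - hists[i - 1]).sum() > thresh]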

  8. Evaluation of fat grains in Gothaj sausage using image analysis

    Directory of Open Access Journals (Sweden)

    Ludmila Luňáková

    2016-12-01

    Full Text Available Fat is an irreplaceable ingredient in the production of sausages, and it determines the appearance of the resulting cut to a significant extent. When shopping, consumers choose a traditional product mostly according to its appearance, based on what they are used to. Chemical analysis can determine the total fat content of the product, but it cannot accurately describe the shape and size of the fat grains that the consumer observes when looking at the product. The size of fat grains considered acceptable by consumers can be determined using sensory analysis or image analysis. In recent years, image analysis has become widely used for examining meat and meat products. Compared to the human eye, image analysis using a computer system is highly effective, since a correctly configured computer program is able to evaluate results with a lower error rate. The most commonly monitored parameter in meat products is the aforementioned fat. The fat is located in the cut surface of the product in the form of dispersed particles which can be fairly reliably identified based on colour differences in the individual parts of the product matrix. The size of the fat grains depends on the input raw material used as well as on the production technology. The present article describes the application of image analysis to evaluating fat grains in the cut appearance of the Gothaj sausage, whose sensory requirements are set by Czech legislation, namely by Decree No. 326/2001 Coll., as amended. The paper evaluates the size of fat mosaic grains in Gothaj sausages from different manufacturers. Fat grains were divided into ten size classes according to size limits of 0.25, 0.5, 0.75, 1.0, 1.5, 2.0, 2.5, 5.0, 8.0 and over 8 mm. The upper limit of 8 mm in diameter was chosen based on the limit for the size of individual fat grains set by the legislation. This upper limit was not exceeded by any of the products. On the other side the mosaic had the
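
    Binning grains into the ten size classes is straightforward once the cut surface has been segmented. A Python sketch under the assumption that grain size is taken as the equivalent circular diameter; the original analysis may define size differently.

        import numpy as np
        from skimage.measure import label, regionprops

        SIZE_LIMITS_MM = [0.25, 0.5, 0.75, 1.0, 1.5, 2.0, 2.5, 5.0, 8.0]

        def grade_fat_grains(fat_mask, mm_per_pixel):
            # fat_mask: boolean image of the cut surface (True = fat grain)
            counts = np.zeros(len(SIZE_LIMITS_MM) + 1, dtype=int)
            for region in regionprops(label(fat_mask)):
                diameter = region.equivalent_diameter * mm_per_pixel
                counts[np.searchsorted(SIZE_LIMITS_MM, diameter)] += 1
            return counts   # last bin: grains larger than 8 mm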

  9. Semantics-Based Intelligent Indexing and Retrieval of Digital Images - A Case Study

    Science.gov (United States)

    Osman, Taha; Thakker, Dhavalkumar; Schaefer, Gerald

    The proliferation of digital media has led to a huge interest in classifying and indexing media objects for generic search and usage. In particular, we are witnessing colossal growth in digital image repositories that are difficult to navigate using free-text search mechanisms, which often return inaccurate matches as they typically rely on statistical analysis of query keyword recurrence in the image annotation or surrounding text. In this chapter we present a semantically enabled image annotation and retrieval engine that is designed to satisfy the requirements of the commercial image collections market in terms of both accuracy and efficiency of the retrieval process. Our search engine relies on methodically structured ontologies for image annotation, thus allowing for more intelligent reasoning about the image content and subsequently obtaining a more accurate set of results and a richer set of alternatives matching the original query. We also show how our well-analysed and designed domain ontology contributes to the implicit expansion of user queries, and we present our initial thoughts on exploiting lexical databases for explicit semantics-based query expansion.

  10. Image analysis

    International Nuclear Information System (INIS)

    Berman, M.; Bischof, L.M.; Breen, E.J.; Peden, G.M.

    1994-01-01

    This paper provides an overview of modern image analysis techniques pertinent to materials science. The usual approach in image analysis contains two basic steps: first, the image is segmented into its constituent components (e.g. individual grains), and second, measurement and quantitative analysis are performed. Usually, the segmentation part of the process is the harder of the two. Consequently, much of the paper concentrates on this aspect, reviewing both fundamental segmentation tools (commonly found in commercial image analysis packages) and more advanced segmentation tools. There is also a review of the most widely used quantitative analysis methods for measuring the size, shape and spatial arrangements of objects. Many of the segmentation and analysis methods are demonstrated using complex real-world examples. Finally, there is a discussion of hardware and software issues. 42 refs., 17 figs

  11. Deep Learning MR Imaging-based Attenuation Correction for PET/MR Imaging.

    Science.gov (United States)

    Liu, Fang; Jang, Hyungseok; Kijowski, Richard; Bradshaw, Tyler; McMillan, Alan B

    2018-02-01

    Purpose To develop and evaluate the feasibility of deep learning approaches for magnetic resonance (MR) imaging-based attenuation correction (AC) (termed deep MRAC) in brain positron emission tomography (PET)/MR imaging. Materials and Methods A PET/MR imaging AC pipeline was built by using a deep learning approach to generate pseudo computed tomographic (CT) scans from MR images. A deep convolutional auto-encoder network was trained to identify air, bone, and soft tissue in volumetric head MR images coregistered to CT data for training. A set of 30 retrospective three-dimensional T1-weighted head images was used to train the model, which was then evaluated in 10 patients by comparing the generated pseudo CT scan to an acquired CT scan. A prospective study was carried out for utilizing simultaneous PET/MR imaging for five subjects by using the proposed approach. Analysis of covariance and paired-sample t tests were used for statistical analysis to compare PET reconstruction error with deep MRAC and two existing MR imaging-based AC approaches with CT-based AC. Results Deep MRAC provides an accurate pseudo CT scan with a mean Dice coefficient of 0.971 ± 0.005 for air, 0.936 ± 0.011 for soft tissue, and 0.803 ± 0.021 for bone. Furthermore, deep MRAC provides good PET results, with average errors of less than 1% in most brain regions. Significantly lower PET reconstruction errors were realized with deep MRAC (-0.7% ± 1.1) compared with Dixon-based soft-tissue and air segmentation (-5.8% ± 3.1) and anatomic CT-based template registration (-4.8% ± 2.2). Conclusion The authors developed an automated approach that allows generation of discrete-valued pseudo CT scans (soft tissue, bone, and air) from a single high-spatial-resolution diagnostic-quality three-dimensional MR image and evaluated it in brain PET/MR imaging. This deep learning approach for MR imaging-based AC provided reduced PET reconstruction error relative to a CT-based standard within the brain compared
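
    To make the architecture concrete, here is a toy PyTorch encoder-decoder in the spirit of the described network; the layer sizes, patch shape and training step are invented for illustration and are far smaller than a clinically useful model.

        import torch
        import torch.nn as nn

        class PseudoCTNet(nn.Module):
            # Maps a T1-weighted MR patch to per-voxel logits for three
            # tissue classes: air, soft tissue and bone
            def __init__(self):
                super().__init__()
                self.encode = nn.Sequential(
                    nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU())
                self.decode = nn.Sequential(
                    nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose3d(16, 3, 4, stride=2, padding=1))

            def forward(self, mr):
                return self.decode(self.encode(mr))

        net = PseudoCTNet()
        mr = torch.randn(1, 1, 32, 32, 32)             # placeholder MR patch
        labels = torch.randint(0, 3, (1, 32, 32, 32))  # placeholder CT-derived labels
        loss = nn.CrossEntropyLoss()(net(mr), labels)  # supervised by coregistered CT
        loss.backward()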

  12. Introduction to Medical Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Moeslund, Thomas B.

    This book is a result of a collaboration between DTU Informatics at the Technical University of Denmark and the Laboratory of Computer Vision and Media Technology at Aalborg University. It is partly based on the book "Image and Video Processing", second edition by Thomas Moeslund. The aim of the book is to present the fascinating world of medical image analysis in an easy and interesting way. Compared to many standard books on image analysis, the approach we have chosen is less mathematical and more casual. Some of the key algorithms are exemplified in C-code. Please note that the code ...

  13. Image sequence analysis workstation for multipoint motion analysis

    Science.gov (United States)

    Mostafavi, Hassan

    1990-08-01

    This paper describes an application-specific engineering workstation designed and developed to analyze the motion of objects from video sequences. The system combines the software and hardware environment of a modern graphic-oriented workstation with digital image acquisition, processing and display techniques. In addition to automation and increased throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey level image processing and spatial/temporal adaptation of the processing parameters are used for location and tracking of more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, aircraft in flight, etc. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie loop playback, freeze frame display, and digital image enhancement; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from either live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.

  14. Design and Realization of Music Retrieval System Based on Feature Content

    Directory of Open Access Journals (Sweden)

    Li Lei

    2015-01-01

    Full Text Available As computer technology develops rapidly, retrieval systems have also undergone great changes. People are no longer content with a single means of retrieval and are exploring other ways to retrieve content by its features. When it comes to music, however, the complexity of sound still prevents retrieval from moving forward. To solve this problem, a music retrieval system based on feature content is systematically analyzed and studied. A music retrieval system model based on feature content, consisting of technical approaches for processing and retrieving extracted symbols of music feature content, is built and realized. An SML model is proposed and tested on two different types of song sets. The results show good performance of the system. The shortcomings of the model are also noted, and future prospects of content-based music retrieval are outlined.

  15. Corporate Social Responsibility programs of Big Food in Australia: a content analysis of industry documents.

    Science.gov (United States)

    Richards, Zoe; Thomas, Samantha L; Randle, Melanie; Pettigrew, Simone

    2015-12-01

    To examine Corporate Social Responsibility (CSR) tactics by identifying the key characteristics of CSR strategies as described in the corporate documents of selected 'Big Food' companies. A mixed methods content analysis was used to analyse the information contained on Australian Big Food company websites. Data sources included company CSR reports and web-based content that related to CSR initiatives employed in Australia. A total of 256 CSR activities were identified across six organisations. Of these, the majority related to the categories of environment (30.5%), responsibility to consumers (25.0%) or community (19.5%). Big Food companies appear to be using CSR activities to: 1) build brand image through initiatives associated with the environment and responsibility to consumers; 2) target parents and children through community activities; and 3) align themselves with respected organisations and events in an effort to transfer their positive image attributes to their own brands. Results highlight the type of CSR strategies Big Food companies are employing. These findings serve as a guide to mapping and monitoring CSR as a specific form of marketing. © 2015 Public Health Association of Australia.

  16. SU-F-J-94: Development of a Plug-in Based Image Analysis Tool for Integration Into Treatment Planning

    Energy Technology Data Exchange (ETDEWEB)

    Owen, D; Anderson, C; Mayo, C; El Naqa, I; Ten Haken, R; Cao, Y; Balter, J; Matuszak, M [University of Michigan, Ann Arbor, MI (United States)

    2016-06-15

    Purpose: To extend the functionality of a commercial treatment planning system (TPS) to support (i) direct use of quantitative image-based metrics within treatment plan optimization and (ii) evaluation of dose-functional volume relationships to assist in functional image adaptive radiotherapy. Methods: A script was written that interfaces with a commercial TPS via an Application Programming Interface (API). The script executes a program that performs dose-functional volume analyses. Written in C#, the script reads the dose grid and correlates it with image data on a voxel-by-voxel basis through API extensions that can access registration transforms. A user interface was designed through WinForms to input parameters and display results. To test the performance of this program, image- and dose-based metrics computed from perfusion SPECT images aligned to the treatment planning CT were generated, validated, and compared. Results: The integration of image analysis information was successfully implemented as a plug-in to a commercial TPS. Perfusion SPECT images were used to validate the calculation and display of image-based metrics as well as dose-intensity metrics and histograms for defined structures on the treatment planning CT. Various biological dose correction models, custom image-based metrics, dose-intensity computations, and dose-intensity histograms were applied to analyze the image-dose profile. Conclusion: It is possible to add image analysis features to commercial TPSs through custom scripting applications. A tool was developed to enable the evaluation of image-intensity-based metrics in the context of functional targeting and avoidance. In addition to providing dose-intensity metrics and histograms that can be easily extracted from a plan database and correlated with outcomes, the system can also be extended to a plug-in optimization system, which can directly use the computed metrics for optimization of post-treatment tumor or normal tissue response
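
    The dose-function analysis at the core of the tool can be illustrated in a few lines. The plug-in itself is written in C# against the TPS API; this standalone Python sketch only shows the kind of voxel-by-voxel dose-intensity histogram it computes, with invented bin edges.

        import numpy as np

        def dose_intensity_histogram(dose, intensity, dose_edges):
            # dose, intensity: co-registered 3-D grids over the same voxels
            idx = np.digitize(dose.ravel(), dose_edges)
            flat = intensity.ravel()
            # Mean functional-image intensity within each dose bin
            return [flat[idx == b].mean() if np.any(idx == b) else np.nan
                    for b in range(1, len(dose_edges))]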

  17. Seismic zonation of Port-Au-Prince using pixel- and object-based imaging analysis methods on ASTER GDEM

    Science.gov (United States)

    Yong, Alan; Hough, Susan E.; Cox, Brady R.; Rathje, Ellen M.; Bachhuber, Jeff; Dulberg, Ranon; Hulslander, David; Christiansen, Lisa; Abrams, Michael J.

    2011-01-01

    We report on a preliminary study evaluating the use of semi-automated imaging analysis of a remotely sensed DEM and field geophysical measurements to develop a seismic zonation map of Port-au-Prince, Haiti. For in situ data, VS30 values are derived from the MASW technique deployed in and around the city. For satellite imagery, we use an ASTER GDEM of Hispaniola. We apply both pixel- and object-based imaging methods to the ASTER GDEM to explore local topography (absolute elevation values) and classify terrain types such as mountains, alluvial fans and basins/near-shore regions. We assign NEHRP seismic site class ranges based on available VS30 values. A comparison of results from the imagery-based methods with results from traditional geology-based approaches reveals good overall correspondence. We conclude that image analysis of remotely sensed data provides reliable first-order site characterization results in the absence of local data and can be useful to refine detailed site maps with sparse local data.

  18. [Estimation of forest canopy chlorophyll content based on PROSPECT and SAIL models].

    Science.gov (United States)

    Yang, Xi-guang; Fan, Wen-yi; Yu, Ying

    2010-11-01

    The forest canopy chlorophyll content directly reflects the health and stress of the forest. Accurate estimation of the forest canopy chlorophyll content is a significant foundation for researching forest ecosystem cycle models. In the present paper, the inversion of the forest canopy chlorophyll content was based on the PROSPECT and SAIL models, i.e. on physical mechanisms. First, leaf and canopy spectra were simulated by the PROSPECT and SAIL models, respectively, and a leaf chlorophyll content look-up table was established for leaf chlorophyll content retrieval. Leaf chlorophyll content was then converted into canopy chlorophyll content using the Leaf Area Index (LAI). Finally, canopy chlorophyll content was estimated from a Hyperion image. The results indicated that the main effective bands for chlorophyll content were 400-900 nm; the leaf and canopy spectra simulated by the PROSPECT and SAIL models fitted the measured spectra with 7.06% and 16.49% relative error, respectively; the RMSE of the LAI inversion was 0.5426; and the forest canopy chlorophyll content was estimated well by the PROSPECT and SAIL models, with a precision of 77.02%.
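
    The look-up-table inversion reduces to a nearest-neighbour search over simulated spectra. A minimal Python sketch; the RMSE matching criterion and the simple LAI scaling are assumptions about details the abstract does not spell out.

        import numpy as np

        def invert_canopy_chlorophyll(measured, lut_spectra, lut_cab, lai):
            # lut_spectra: (n_entries, n_bands) reflectances simulated by
            # PROSPECT+SAIL; lut_cab: leaf chlorophyll content of each entry
            rmse = np.sqrt(((lut_spectra - measured) ** 2).mean(axis=1))
            cab_leaf = lut_cab[np.argmin(rmse)]   # best-matching LUT entry
            return cab_leaf * lai                 # scale leaf value to the canopy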

  19. [Preliminarily application of content analysis to qualitative nursing data].

    Science.gov (United States)

    Liang, Shu-Yuan; Chuang, Yeu-Hui; Wu, Shu-Fang

    2012-10-01

    Content analysis is a methodology for objectively and systematically studying the content of communication in various formats. Content analysis in nursing research and nursing education is called qualitative content analysis. Qualitative content analysis is frequently applied to nursing research, as it allows researchers to determine categories inductively and deductively. This article examines qualitative content analysis in nursing research from theoretical and practical perspectives. We first describe how content analysis concepts such as unit of analysis, meaning unit, code, category, and theme are used. Next, we describe the basic steps involved in using content analysis, including data preparation, data familiarization, analysis unit identification, creating tentative coding categories, category refinement, and establishing category integrity. Finally, this paper introduces the concept of content analysis rigor, including dependability, confirmability, credibility, and transferability. This article elucidates the content analysis method in order to help professionals conduct systematic research that generates data that are informative and useful in practical application.

  20. High Content Screening: Understanding Cellular Pathway

    International Nuclear Information System (INIS)

    Mohamed Zaffar Ali Mohamed Amiroudine; Daryl Jesus Arapoc; Zainah Adam; Shafii Khamis

    2015-01-01

    High content screening (HCS) is the convergence of cell-based assays with high-resolution fluorescence and phase-contrast imaging of fixed- or live-cell assays, tissues and small organisms. It has been widely adopted in the pharmaceutical and biotech industries for target identification and validation and in secondary screens to reveal potential toxicities or to elucidate a drug's mechanism of action. By using the ImageXpress® Micro XLS System for HCS, the complex network of key players controlling proliferation and apoptosis can be reduced to several sentinel markers for analysis. Cell proliferation and apoptosis are two key areas in cell biology and drug discovery research. Understanding the signaling pathways in cell proliferation and apoptosis is important for new therapeutic discovery because the imbalance between these two events is predominant in the progression of many human diseases, including cancer. The DNA binding dye DAPI is used to determine nuclear size and nuclear morphology as well as cell cycle phases by DNA content. Images together with MetaXpress® analysis results provide a convenient and easy to use solution to high volume image management. In particular, the HCS platform is beginning to have an important impact on early drug discovery and basic research in systems cell biology, and is expected to play a role in personalized medicine and in revealing off-target drug effects. (author)

  1. Content Analysis of Master Theses and Dissertations Based on Action Research

    Science.gov (United States)

    Durak, Gürhan; Yünkül, Eyup; Cankaya, Serkan; Akpinar, Sükran; Erten, Emine; Inam, Nazmiye; Taylan, Ufuk; Tastekin, Eray

    2016-01-01

    Action Research (AR) is becoming popular in the field of education, and according to literature, it could be stated that AR studies have positive influence on practice in education. The present study aims at conducting content analysis of action research (AR) master theses and doctoral dissertations submitted at the level of Turkish higher…

  2. NEPHRUS: model of intelligent multilayers expert system for evaluation of the renal system based on scintigraphic images analysis

    International Nuclear Information System (INIS)

    Silva, Jose W.E. da; Schirru, Roberto; Boasquevisque, Edson M.

    1997-01-01

    This work develops a prototype of a system model based on Artificial Intelligence techniques able to perform functions related to scintigraphic image analysis of the urinary system. Criteria used by medical experts for analyzing images obtained with 99m Tc+DTPA and/or 99m Tc+DMSA were modeled, and a multiresolution diagnosis technique was implemented. Special attention was given to the design of the program's user interface: Human Factors Engineering techniques were considered so as to combine friendliness and robustness. Results obtained using Artificial Neural Networks for the qualitative image analysis, together with the knowledge model constructed, show the feasibility of an Artificial Intelligence implementation that uses the 'inherent' abilities of each technique to solve diagnostic image analysis problems. (author). 12 refs., 2 figs., 2 tabs

  3. Theoretical analysis and experimental evaluation of a CsI(Tl) based electronic portal imaging system

    International Nuclear Information System (INIS)

    Sawant, Amit; Zeman, Herbert; Samant, Sanjiv; Lovhoiden, Gunnar; Weinberg, Brent; DiBianca, Frank

    2002-01-01

    This article discusses the design and analysis of a portal imaging system based on a thick transparent scintillator. A theoretical analysis using Monte Carlo simulation was performed to calculate the x-ray quantum detection efficiency (QDE), signal to noise ratio (SNR) and the zero frequency detective quantum efficiency [DQE(0)] of the system. A prototype electronic portal imaging device (EPID) was built, using a 12.7 mm thick, 20.32 cm diameter, CsI(Tl) scintillator, coupled to a liquid nitrogen cooled CCD TV camera. The system geometry of the prototype EPID was optimized to achieve high spatial resolution. The experimental evaluation of the prototype EPID involved the determination of contrast resolution, depth of focus, light scatter and mirror glare. Images of humanoid and contrast detail phantoms were acquired using the prototype EPID and were compared with those obtained using conventional and high contrast portal film and a commercial EPID. A theoretical analysis was also carried out for a proposed full field of view system using a large area, thinned CCD camera and a 12.7 mm thick CsI(Tl) crystal. Results indicate that this proposed design could achieve DQE(0) levels up to 11%, due to its order of magnitude higher QDE compared to phosphor screen-metal plate based EPID designs, as well as significantly higher light collection compared to conventional TV camera based systems

  4. An Imaging And Graphics Workstation For Image Sequence Analysis

    Science.gov (United States)

    Mostafavi, Hassan

    1990-01-01

    This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of the modern graphic-oriented workstations with the digital image acquisition, processing and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missile, stores and other flying objects in various flight regimes including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) Acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion and freeze frame display combined with digital image sharpening, noise reduction, contrast enhancement and interactive image magnification; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from both live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence data base generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.

  5. Digital Correlation based on Wavelet Transform for Image Detection

    International Nuclear Information System (INIS)

    Barba, L; Vargas, L; Torres, C; Mattos, L

    2011-01-01

    This work presents a method for optimizing digital correlators to improve feature detection in images, using the wavelet transform and subband filtering. An approach to wavelet-based image contrast enhancement is proposed in order to increase the performance of digital correlators. The multiresolution representation is employed to improve the high frequency content of images, taking into account the input contrast measured in the original image. The energy of the correlation peaks and the discrimination level for several objects are improved with this technique. To demonstrate the potential of the wavelet transform for extracting features, small objects inside reference images are detected successfully.

  6. From Pixels to Geographic Objects in Remote Sensing Image Analysis

    NARCIS (Netherlands)

    Addink, E.A.; Van Coillie, Frieke M.B.; Jong, Steven M. de

    Traditional image analysis methods are mostly pixel-based and use the spectral differences of landscape elements at the Earth surface to classify these elements or to extract element properties from the Earth Observation image. Geographic object-based image analysis (GEOBIA) has received

  7. Analysis of intra-genomic GC content homogeneity within prokaryotes

    DEFF Research Database (Denmark)

    Bohlin, J; Snipen, L; Hardy, S.P.

    2010-01-01

    Bacterial genomes possess varying GC content (total guanines (Gs) and cytosines (Cs) per total of the four bases within the genome), but within a given genome, GC content can vary locally along the chromosome, with some regions significantly more or less GC rich than on average. We have examined how the GC content varies within microbial genomes to assess whether this property can be associated with certain biological functions related to the organism's environment and phylogeny. We utilize a new quantity, GCVAR, the intra-genomic GC content variability with respect to the average GC content ... both aerobic and facultative microbes. Although an association has previously been found between mean genomic GC content and oxygen requirement, our analysis suggests that no such association exists when phylogenetic bias is accounted for. A significant association between GCVAR and mean GC content ...
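
    The local GC profile underlying such an analysis is simple to compute. A Python sketch; the record does not give the exact GCVAR formula, so only the windowed profile and its spread around the mean are shown.

        def gc_profile(seq, window=10000, step=5000):
            # Fraction of G+C in sliding windows along the chromosome
            vals = []
            for i in range(0, max(len(seq) - window, 1), step):
                chunk = seq[i:i + window].upper()
                vals.append((chunk.count("G") + chunk.count("C")) / len(chunk))
            return vals

        profile = gc_profile("ATGC" * 100000)   # toy sequence
        mean_gc = sum(profile) / len(profile)
        spread = sum((v - mean_gc) ** 2 for v in profile) / len(profile)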

  8. Language Learning of Gifted Individuals: A Content Analysis Study

    Directory of Open Access Journals (Sweden)

    Beria Gokaydin

    2017-11-01

    Full Text Available This study aims to carry out a content analysis of the studies on language learning of gifted individuals and determine the trends in this field. Articles on language learning of gifted individuals published in the Scopus database were examined based on certain criteria including type of publication, year of publication, language, research discipline, countries of research, institutions of authors, key words, and resources. Data were analyzed with the content analysis method. Results showed that the number of studies on language learning of gifted individuals has increased throughout the years. Recommendations for further research and practices are provided.

  9. Bones, body parts, and sex appeal: An analysis of #thinspiration images on popular social media.

    Science.gov (United States)

    Ghaznavi, Jannath; Taylor, Laramie D

    2015-06-01

    The present study extends research on thinspiration images, visual and/or textual images intended to inspire weight loss, from pro-eating disorder websites to popular photo-sharing social media websites. The article reports on a systematic content analysis of thinspiration images (N=300) on Twitter and Pinterest. Images tended to be sexually suggestive and objectifying with a focus on ultra-thin, bony, scantily-clad women. Results indicated that particular social media channels and labels (i.e., tags) were characterized by more segmented, bony content and greater social endorsement compared to others. In light of theories of media influence, results offer insight into the potentially harmful effects of exposure to sexually suggestive and objectifying content in large online communities on body image, quality of life, and mental health. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. METHODS OF DISTANCE MEASUREMENT’S ACCURACY INCREASING BASED ON THE CORRELATION ANALYSIS OF STEREO IMAGES

    Directory of Open Access Journals (Sweden)

    V. L. Kozlov

    2018-01-01

Full Text Available To solve the problem of increasing the accuracy of restoring a three-dimensional picture of space from two-dimensional digital images, it is necessary to use new, effective techniques and algorithms for the processing and correlation analysis of digital images. Tools that reduce the time cost of processing stereo images, improve the quality of depth-map construction, and automate that construction are being actively developed. The aim of this work is to investigate the possibilities of using various digital image processing techniques to improve the measurement accuracy of a rangefinder based on correlation analysis of a stereo image. Results are presented from studies of the influence of colour-channel mixing techniques on distance measurement accuracy for various functions realizing the correlation processing of images, from studies of the possibility of using an integral representation of images to reduce the time cost of constructing a depth map, and from studies of the possibility of prefiltering images before correlation processing when measuring distance by stereo imaging. It was found that uniform mixing of channels minimizes the total number of measurement errors, whereas brightness extraction according to the sRGB standard increases the number of errors for all of the correlation processing techniques considered. An integral representation of the image accelerates the correlation processing, but this approach is useful for depth-map calculation only for images of no more than 0.5 megapixels. Filtering images before correlation processing can provide, depending on the filter parameters, either an increase in the correlation function value, which is useful for analyzing noisy images, or compression of the correlation function.
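    As a small illustration of two of the ingredients discussed (and only those; the full correlation rangefinder is not reproduced here), the sketch below shows uniform channel mixing and an integral image (summed-area table), which turns the patch sums underlying correlation scores into O(1) lookups. numpy is assumed.

```python
import numpy as np

def mix_uniform(rgb: np.ndarray) -> np.ndarray:
    """Uniform channel mixing (R+G+B)/3, the variant the study found
    to minimize distance-measurement errors."""
    return rgb.astype(np.float64).mean(axis=2)

def integral_image(img: np.ndarray) -> np.ndarray:
    """Summed-area table with a zero row/column prepended."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def patch_sum(ii: np.ndarray, y: int, x: int, h: int, w: int) -> float:
    """Sum of img[y:y+h, x:x+w] in O(1) using four integral-image lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]
```

    With a baseline B and focal length f, the distance to a matched point with disparity d follows the usual Z = f·B/d relation; patch sums like the one above are the building blocks of the correlation functions the abstract compares.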

  11. THE SUBJECTIVAL CONTENT OF IMAGES “SUCCESSFUL MAN” AND “SUCCESSFUL WOMAN” AS A FACTOR OF PSYCHIC ADJUSTMENT OF WOMEN IN INVOLUNTARY UNEMPLOYMENT SITUATION

    Directory of Open Access Journals (Sweden)

    Ольга Геннадьевна Лопухова

    2013-09-01

Full Text Available The purpose of the study is to investigate the correlation between the content of the gendered image of a "successful person" and parameters of the psychic and social adjustment of women belonging to different generations. Methodology. The subjective image of a "successful person" is a stable and possibly gender-differentiated element of the "Ideal Me" image system. It includes a cognitive component (conscious, verbalized representations of the typical description of a successful person) and an affective component (positive or negative attitudes toward the image of a "successful person"). We compared survey and assessment results of 265 women belonging to different generations and in different social situations: involuntary unemployment, employment, and obtaining professional education. Subjective and projective methods were used to survey the cognitive and affective components of the "successful person" image. Assessment of psychic adjustment was based on parameters of internal conflict compared with manifestations of anxiety, frustration, aggression, rigidity, and neuroticism. Data analysis used parametric and nonparametric methods, including ANOVA/MANOVA. Results. It was found that the level of psychic adjustment of women depends much more on the proper integration of the subjective "successful person" image content with individual features of the self-concept than on objective "social status" (being in an involuntary unemployment situation). Practical implications include psychological consulting and correction of social or psychic maladjustment. DOI: http://dx.doi.org/10.12731/2218-7405-2013-6-41

  12. Multimedia content analysis, management and retrieval: trends and challenges

    Science.gov (United States)

    Hanjalic, Alan; Sebe, Nicu; Chang, Edward

    2006-01-01

    Recent advances in computing, communications and storage technology have made multimedia data become prevalent. Multimedia has gained enormous potential in improving the processes in a wide range of fields, such as advertising and marketing, education and training, entertainment, medicine, surveillance, wearable computing, biometrics, and remote sensing. Rich content of multimedia data, built through the synergies of the information contained in different modalities, calls for new and innovative methods for modeling, processing, mining, organizing, and indexing of this data for effective and efficient searching, retrieval, delivery, management and sharing of multimedia content, as required by the applications in the abovementioned fields. The objective of this paper is to present our views on the trends that should be followed when developing such methods, to elaborate on the related research challenges, and to introduce the new conference, Multimedia Content Analysis, Management and Retrieval, as a premium venue for presenting and discussing these methods with the scientific community. Starting from 2006, the conference will be held annually as a part of the IS&T/SPIE Electronic Imaging event.

  13. DOM-based Content Extraction of HTML Documents

    National Research Council Canada - National Science Library

    Gupta, Suhit; Kaiser, Gail; Neistadt, David; Grimm, Peter

    2005-01-01

    .... Most approaches to removing clutter or making content more readable involve changing font size or removing HTML and data components such as images, which takes away from a webpage's inherent look and feel...

  14. Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis

    Science.gov (United States)

    Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.

    2016-01-01

    Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
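    A minimal sketch of wavelet-domain fusion in the spirit of this record, using PyWavelets: low-frequency bands are averaged and high-frequency coefficients are chosen by maximum absolute value. These simple rules are stand-ins for the paper's robust PCA and regional variance estimation, and the standard DWT stands in for the lifting wavelet transform.

```python
import numpy as np
import pywt  # PyWavelets

def fuse_wavelet(img_a: np.ndarray, img_b: np.ndarray, wavelet="haar"):
    """Fuse two registered grayscale images in the wavelet domain.
    Low-frequency bands are averaged; high-frequency coefficients are
    selected by maximum absolute value. Simple stand-ins for the paper's
    RPCA and regional-variance fusion rules."""
    ca_a, highs_a = pywt.dwt2(img_a.astype(float), wavelet)
    ca_b, highs_b = pywt.dwt2(img_b.astype(float), wavelet)
    ca_f = 0.5 * (ca_a + ca_b)                       # low-frequency rule
    highs_f = tuple(np.where(np.abs(ha) >= np.abs(hb), ha, hb)
                    for ha, hb in zip(highs_a, highs_b))
    return pywt.idwt2((ca_f, highs_f), wavelet)
```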

  15. The Religious Facebook Experience: Uses and Gratifications of Faith-Based Content

    Directory of Open Access Journals (Sweden)

    Pamela Jo Brubaker

    2017-04-01

Full Text Available This study explores why Christians (N = 335) use Facebook for religious purposes and the needs that engaging with religious content on Facebook gratifies. Individuals who access faith-based content on Facebook were recruited to participate in an online survey through a series of Facebook advertisements. An exploratory factor analysis revealed four primary motivations for accessing religious Facebook content: ministering, spiritual enlightenment, religious information, and entertainment. Along with identifying the uses and gratifications received from engaging with faith-based Facebook content, this research reveals how the frequency of Facebook use, the intensity of Facebook use for religious purposes, and religiosity predict motivations for accessing this social networking site for faith-based purposes. The data revealed that those who frequently use Facebook for posting, liking, commenting, and sharing faith-based content and who are more religious are more likely to minister to others. Frequent use also predicted seeking religious information. The affiliation with like-minded individuals afforded by this medium provides faith-based users with supportive content and communities that motivate the use of Facebook for obtaining spiritual guidance, for accessing religious resources, and for relaxing and being entertained.

  16. A multiplexed method for kinetic measurements of apoptosis and proliferation using live-content imaging.

    Science.gov (United States)

    Artymovich, Katherine; Appledorn, Daniel M

    2015-01-01

    In vitro cell proliferation and apoptosis assays are widely used to study cancer cell biology. Commonly used methodologies are however performed at a single, user-defined endpoint. We describe a kinetic multiplex assay incorporating the CellPlayer(TM) NucLight Red reagent to measure proliferation and the CellPlayer(TM) Caspase-3/7 reagent to measure apoptosis using the two-color, live-content imaging platform, IncuCyte(TM) ZOOM. High-definition phase-contrast images provide an additional qualitative validation of cell death based on morphological characteristics. The kinetic data generated using this strategy can be used to derive informed pharmacology measurements to screen potential cancer therapeutics.

  17. A novel genome-information content-based statistic for genome-wide association analysis designed for next-generation sequencing data.

    Science.gov (United States)

    Luo, Li; Zhu, Yun; Xiong, Momiao

    2012-06-01

The genome-wide association studies (GWAS) designed for next-generation sequencing data involve testing association of genomic variants, including common, low frequency, and rare variants. The current strategies for association studies are well developed for identifying association of common variants with common diseases, but may be ill-suited when large amounts of allelic heterogeneity are present in sequence data. Recently, group tests that analyze collective frequency differences between cases and controls have shifted the variant-by-variant analysis paradigm used in GWAS of common variants to the collective testing of multiple variants in the association analysis of rare variants. However, group tests ignore differences in genetic effects among SNPs at different genomic locations. As an alternative to group tests, we developed a novel genome-information content-based statistic for testing association of the entire allele frequency spectrum of genomic variation with disease. To evaluate the performance of the proposed statistic, we use large-scale simulations based on whole genome low coverage pilot data in the 1000 Genomes Project to calculate the type I error rates and power of seven alternative statistics: the genome-information content-based statistic, the generalized T(2), the collapsing method, the combined multivariate and collapsing (CMC) method, the individual χ(2) test, the weighted-sum statistic, and the variable threshold statistic. Finally, we apply the seven statistics to a published resequencing dataset from the ANGPTL3, ANGPTL4, ANGPTL5, and ANGPTL6 genes in the Dallas Heart Study. We report that the genome-information content-based statistic has significantly improved type I error rates and higher power than the other six statistics in both simulated and empirical datasets.
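    For orientation, the sketch below implements the basic collapsing (burden-style) group test mentioned above, not the authors' genome-information content-based statistic: an individual counts as a "carrier" if they hold at least one rare variant in the region, and carrier frequencies in cases and controls are compared with a chi-square test. scipy is assumed, as is 0/1/2 genotype coding.

```python
import numpy as np
from scipy.stats import chi2_contingency

def collapsing_test(case_genotypes: np.ndarray,
                    control_genotypes: np.ndarray) -> float:
    """Basic collapsing test for rare variants: compare carrier
    frequencies between cases and controls. Genotype matrices are
    individuals x variants, coded 0/1/2 minor-allele counts."""
    case_carrier = case_genotypes.sum(axis=1) > 0
    ctrl_carrier = control_genotypes.sum(axis=1) > 0
    table = [[case_carrier.sum(), (~case_carrier).sum()],
             [ctrl_carrier.sum(), (~ctrl_carrier).sum()]]
    _, p_value, _, _ = chi2_contingency(table)
    return p_value
```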

  18. Poka Yoke system based on image analysis and object recognition

    Science.gov (United States)

    Belu, N.; Ionescu, L. M.; Misztal, A.; Mazăre, A.

    2015-11-01

Poka Yoke is a quality management method aimed at preventing faults from arising during production processes; it deals with "fail-safing" or "mistake-proofing". The Poka Yoke concept was generated and developed by Shigeo Shingo for the Toyota Production System, and Poka Yoke is used in many fields, especially in monitoring production processes. In many cases, identifying faults in a production process costs more than disposing of the faulty part. Poka Yoke solutions are usually based on multiple sensors that identify nonconformities, which means placing additional mechanical and electronic equipment on the production line. This, coupled with the fact that the method itself is invasive and affects the production process, increases the cost of diagnostics as the machines on which a Poka Yoke system is implemented become more sophisticated. In this paper we propose a Poka Yoke solution based on image analysis and fault identification. The solution consists of a module for image acquisition, mid-level processing, and an object recognition module using associative memory (a Hopfield network). All are integrated into an embedded system with an AD (analog-to-digital) converter and a Zynq 7000 (22 nm technology).
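    A minimal sketch of the associative-memory component named in the abstract (a Hopfield network), assuming bipolar (+1/-1) pattern vectors; the image acquisition and mid-level processing stages, and the embedded implementation, are not reproduced.

```python
import numpy as np

class Hopfield:
    """Minimal Hopfield associative memory over bipolar (+1/-1) patterns,
    e.g. binarized templates of conforming parts."""
    def __init__(self, patterns: np.ndarray):
        n = patterns.shape[1]
        # Hebbian learning; zero the diagonal (no self-connections).
        self.w = patterns.T.astype(float) @ patterns / n
        np.fill_diagonal(self.w, 0.0)

    def recall(self, probe: np.ndarray, steps: int = 20) -> np.ndarray:
        """Iterate synchronous updates from a noisy probe toward the
        nearest stored pattern."""
        s = probe.astype(float).copy()
        for _ in range(steps):
            s = np.sign(self.w @ s)
            s[s == 0] = 1.0
        return s
```

    In the fault-detection setting, a probe that converges to no stored template (low overlap with every pattern) would be flagged as a nonconformity.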

  19. A roughly mapped terra incognita: Image of the child in adult-oriented media contents

    Directory of Open Access Journals (Sweden)

    Korać Nada M.

    2003-01-01

Full Text Available The study analyzes the image of the child in the media contents intended for adult audiences in Serbia, considering the importance of the role media play in shaping public opinion on children, as well as the influence of such public opinion on adults' attitudes, decisions and actions concerning children. The study focuses on the visibility and portrayal of children in the media, in order to determine to what extent children are present in them, and in what way. Relevant data were collected for three media - press, radio and television - mostly covering the entire territory of Serbia, over two consecutive months (April-May 2001). Content analysis revealed that children are not only underrepresented, but also misrepresented, in Serbian media.

  20. Automated discrimination of lower and higher grade gliomas based on histopathological image analysis

    Directory of Open Access Journals (Sweden)

    Hojjat Seyed Mousavi

    2015-01-01

Full Text Available Introduction: Histopathological images have rich structural information, are multi-channel in nature and contain meaningful pathological information at various scales. Sophisticated image analysis tools that can automatically extract discriminative information from histopathology slides for diagnosis remain an area of significant research activity. In this work, we focus on automated brain cancer grading, specifically glioma grading. Grading of a glioma is a highly important problem in pathology and is largely done manually by medical experts based on an examination of pathology slides (images). To complement the efforts of clinicians engaged in brain cancer diagnosis, we develop novel image processing algorithms and systems to automatically grade glioma tumors into two categories: low-grade glioma (LGG) and high-grade glioma (HGG), the latter representing a more advanced stage of the disease. Results: We propose novel image processing algorithms based on spatial domain analysis for glioma tumor grading that complement the clinical interpretation of the tissue. The image processing techniques are developed in close collaboration with medical experts to mimic the visual cues that a clinician looks for in judging the grade of the disease. Specifically, two algorithmic techniques are developed: (1) a cell segmentation and cell-count profile creation for identification of pseudopalisading necrosis, and (2) a customized operation of spatial and morphological filters to accurately identify microvascular proliferation (MVP). In both techniques, a hierarchical decision is made via a decision tree mechanism. If either pseudopalisading necrosis or MVP is found present in any part of the histopathology slide, the whole slide is identified as HGG, which is consistent with World Health Organization guidelines. Experimental results on the Cancer Genome Atlas database are presented in the form of: (1) successful detection rates of pseudopalisading necrosis
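    As a toy stand-in for the first technique (cell segmentation plus a cell-count profile), the sketch below thresholds a grayscale tile, labels connected components as cells with scipy.ndimage, and counts centroids in vertical strips; the paper's actual segmentation and decision-tree logic are not reproduced. In such a profile, pseudopalisading necrosis would appear as dense ridges flanking a sparse band.

```python
import numpy as np
from scipy import ndimage

def cell_count_profile(gray: np.ndarray, threshold: float,
                       n_strips: int = 32) -> np.ndarray:
    """Toy cell-count profile: threshold a grayscale histopathology tile
    (nuclei assumed darker than background), label connected components
    as 'cells', and count cell centroids in vertical strips."""
    mask = gray < threshold
    labels, n_cells = ndimage.label(mask)
    centroids = ndimage.center_of_mass(mask, labels, range(1, n_cells + 1))
    profile = np.zeros(n_strips, dtype=int)
    for _, x in centroids:  # centroid is (row, col); bin by column
        profile[min(int(x / gray.shape[1] * n_strips), n_strips - 1)] += 1
    return profile
```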

  1. TV content analysis techniques and applications

    CERN Document Server

    Kompatsiaris, Yiannis

    2012-01-01

    The rapid advancement of digital multimedia technologies has not only revolutionized the production and distribution of audiovisual content, but also created the need to efficiently analyze TV programs to enable applications for content managers and consumers. Leaving no stone unturned, TV Content Analysis: Techniques and Applications provides a detailed exploration of TV program analysis techniques. Leading researchers and academics from around the world supply scientifically sound treatment of recent developments across the related subject areas--including systems, architectures, algorithms,

  2. Cultural Parallax and Content Analysis: Images of Black Women in High School History Textbooks

    Science.gov (United States)

    Woyshner, Christine; Schocker, Jessica B.

    2015-01-01

    This study investigates the representation of Black women in high school history textbooks. To examine the extent to which Black women are represented visually and to explore how they are portrayed, the authors use a mixed-methods approach that draws on analytical techniques in content analysis and from visual culture studies. Their findings…

  3. A Python-Based Open Source System for Geographic Object-Based Image Analysis (GEOBIA Utilizing Raster Attribute Tables

    Directory of Open Access Journals (Sweden)

    Daniel Clewley

    2014-06-01

Full Text Available A modular system for performing Geographic Object-Based Image Analysis (GEOBIA), using entirely open source (General Public License compatible) software, is presented, based around representing objects as raster clumps and storing attributes as a raster attribute table (RAT). The system utilizes a number of libraries developed by the authors: the Remote Sensing and GIS Library (RSGISLib), the Raster I/O Simplification (RIOS) Python Library, the KEA image format and the TuiView image viewer. All libraries are accessed through Python, providing a common interface on which to build processing chains. Three examples are presented to demonstrate the capabilities of the system: (1) classification of mangrove extent and change in French Guiana; (2) a generic scheme for the classification of the UN-FAO land cover classification system (LCCS) and their subsequent translation to habitat categories; and (3) a national-scale segmentation for Australia. The system presented provides similar functionality to existing GEOBIA packages, but is more flexible, due to its modular environment, capable of handling complex classification processes and applying them to larger datasets.
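    To make the clump-plus-RAT representation concrete, here is a hedged numpy-only sketch (deliberately not the RSGISLib/RIOS API): objects are a raster of integer clump IDs, and the RAT is a set of parallel per-clump attribute columns on which row-wise classification rules can operate.

```python
import numpy as np

def build_rat(clumps: np.ndarray, image: np.ndarray) -> dict:
    """Represent objects as a raster of clump IDs plus a raster attribute
    table (RAT): one row per clump, with columns as parallel arrays.
    Here the single attribute is the per-clump mean of one image band."""
    ids = np.arange(clumps.max() + 1)
    sums = np.bincount(clumps.ravel(), weights=image.ravel(),
                       minlength=ids.size)
    counts = np.bincount(clumps.ravel(), minlength=ids.size)
    return {"id": ids, "mean_band": sums / np.maximum(counts, 1)}

# Classification then becomes row-wise rules on the RAT, e.g. (toy rule):
# rat["class"] = np.where(rat["mean_band"] > 0.4, "mangrove", "other")
```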

  4. Image processing and analysis software development

    International Nuclear Information System (INIS)

    Shahnaz, R.

    1999-01-01

The work presented in this project is aimed at developing a software package, 'IMAGE GALLERY', to investigate various image processing and analysis techniques. The work was divided into two parts, namely image processing techniques and pattern recognition, the latter comprising character and face recognition. Various image enhancement techniques, including negative imaging, contrast stretching, compression of dynamic range, neon, diffuse, emboss, etc., have been studied. Segmentation techniques, including point detection, line detection and edge detection, have been studied, and some of the smoothing and sharpening filters have been investigated. All of these imaging techniques have been implemented in a window-based computer program written in Visual Basic. Neural network techniques based on the Perceptron model have been applied for face and character recognition. (author)

  5. A fractal-based image encryption system

    KAUST Repository

    Abd-El-Hafiz, S. K.

    2014-12-01

    This study introduces a novel image encryption system based on diffusion and confusion processes in which the image information is hidden inside the complex details of fractal images. A simplified encryption technique is, first, presented using a single-fractal image and statistical analysis is performed. A general encryption system utilising multiple fractal images is, then, introduced to improve the performance and increase the encryption key up to hundreds of bits. This improvement is achieved through several parameters: feedback delay, multiplexing and independent horizontal or vertical shifts. The effect of each parameter is studied separately and, then, they are combined to illustrate their influence on the encryption quality. The encryption quality is evaluated using different analysis techniques such as correlation coefficients, differential attack measures, histogram distributions, key sensitivity analysis and the National Institute of Standards and Technology (NIST) statistical test suite. The obtained results show great potential compared to other techniques.
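    A hedged, minimal sketch of the single-fractal variant: a key image is generated from Mandelbrot escape times and XORed with the plain image. The fractal choice and parameters are assumptions, and the paper's feedback delay, multiplexing, and shift parameters are omitted.

```python
import numpy as np

def mandelbrot_key(shape, max_iter=255, re=(-2.0, 1.0), im=(-1.5, 1.5)):
    """Key image from Mandelbrot escape times, one byte per pixel."""
    h, w = shape
    x = np.linspace(re[0], re[1], w)
    y = np.linspace(im[0], im[1], h)
    c = x[None, :] + 1j * y[:, None]
    z = np.zeros_like(c)
    key = np.zeros(shape, dtype=np.uint8)
    for i in range(max_iter):
        mask = np.abs(z) <= 2.0          # points that have not escaped yet
        z[mask] = z[mask] ** 2 + c[mask]
        key[mask] = i % 256              # record the latest bounded iteration
    return key

def xor_with_key(image: np.ndarray, key: np.ndarray) -> np.ndarray:
    """Confusion by XOR with the fractal key; applying it twice decrypts."""
    return np.bitwise_xor(image.astype(np.uint8), key)
```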

  6. Teaching Content Analysis through "Harry Potter"

    Science.gov (United States)

    Messinger, Adam M.

    2012-01-01

    Content analysis is a valuable research tool for social scientists that unfortunately can prove challenging to teach to undergraduate students. Published classroom exercises designed to teach content analysis have thus far been predominantly envisioned as lengthy projects for upper-level courses. A brief and engaging exercise may be more…

  7. A content analysis of visual cancer information: prevalence and use of photographs and illustrations in printed health materials.

    Science.gov (United States)

    King, Andy J

    2015-01-01

    Researchers and practitioners have an increasing interest in visual components of health information and health communication messages. This study contributes to this evolving body of research by providing an account of the visual images and information featured in printed cancer communication materials. Using content analysis, 147 pamphlets and 858 images were examined to determine how frequently images are used in printed materials, what types of images are used, what information is conveyed visually, and whether or not current recommendations for the inclusion of visual content were being followed. Although visual messages were found to be common in printed health materials, existing recommendations about the inclusion of visual content were only partially followed. Results are discussed in terms of how relevant theoretical frameworks in the areas of behavior change and visual persuasion seem to be used in these materials, as well as how more theory-oriented research is necessary in visual messaging efforts.

  8. Seismic-zonation of Port-au-Prince using pixel- and object-based imaging analysis methods on ASTER GDEM

    Science.gov (United States)

    Yong, A.; Hough, S.E.; Cox, B.R.; Rathje, E.M.; Bachhuber, J.; Dulberg, R.; Hulslander, D.; Christiansen, L.; Abrams, M.J.

    2011-01-01

We report on a preliminary study to evaluate the use of semi-automated imaging analysis of remotely-sensed DEM and field geophysical measurements to develop a seismic-zonation map of Port-au-Prince, Haiti. For in situ data, Vs30 values are derived from the MASW technique deployed in and around the city. For satellite imagery, we use an ASTER GDEM of Hispaniola. We apply both pixel- and object-based imaging methods on the ASTER GDEM to explore local topography (absolute elevation values) and classify terrain types such as mountains, alluvial fans and basins/near-shore regions. We assign NEHRP seismic site class ranges based on available Vs30 values. A comparison of results from imagery-based methods to results from traditional geologic-based approaches reveals good overall correspondence. We conclude that image analysis of RS data provides reliable first-order site characterization results in the absence of local data and can be useful to refine detailed site maps with sparse local data. © 2011 American Society for Photogrammetry and Remote Sensing.

  9. Image analysis for material characterisation

    Science.gov (United States)

    Livens, Stefan

In this thesis, a number of image analysis methods are presented as solutions to two applications concerning the characterisation of materials. Firstly, we deal with the characterisation of corrosion images, which is handled using a multiscale texture analysis method based on wavelets. We propose a feature transformation that deals with the problem of rotation invariance. Classification is performed with a Learning Vector Quantisation neural network and with combination of outputs. In an experiment, 86.2% of the images showing either pit formation or cracking are correctly classified. Secondly, we develop an automatic system for the characterisation of silver halide microcrystals. These are flat crystals with a triangular or hexagonal base and a thickness in the 100 to 200 nm range. A light microscope is used to image them. A novel segmentation method is proposed, which allows separation of agglomerated crystals. For the measurement of shape, the ratio between the largest and the smallest radius yields the best results. The thickness measurement is based on the interference colours that appear for light reflected at the crystals. The mean colour of different thickness populations is determined, from which a calibration curve is derived. With this, the thickness of new populations can be determined accurately.
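    The shape measure mentioned for the microcrystals is compact to state in code; a small numpy sketch, assuming an (n, 2) array of boundary coordinates:

```python
import numpy as np

def radius_ratio(contour: np.ndarray) -> float:
    """Shape measure used for the microcrystals: ratio of the largest to
    the smallest distance from the shape's centroid to its contour.
    contour is an (n, 2) array of boundary coordinates."""
    centroid = contour.mean(axis=0)
    radii = np.linalg.norm(contour - centroid, axis=1)
    return float(radii.max() / radii.min())
```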

  10. Fuzzy Logic-Based Histogram Equalization for Image Contrast Enhancement

    Directory of Open Access Journals (Sweden)

    V. Magudeeswaran

    2013-01-01

Full Text Available Fuzzy logic-based histogram equalization (FHE) is proposed for image contrast enhancement. The FHE consists of two stages. First, a fuzzy histogram is computed based on fuzzy set theory to handle the inexactness of gray level values in a better way than classical crisp histograms. In the second stage, the fuzzy histogram is divided into two sub-histograms based on the median value of the original image, and these are then equalized independently to preserve image brightness. Qualitative and quantitative analyses of the proposed FHE algorithm are carried out using two well-known parameters, average information content (AIC) and the natural image quality evaluator (NIQE) index, for various images. From the qualitative and quantitative measures, it is interesting to see that this proposed method provides optimum results by giving better contrast enhancement and preserving the local information of the original image. Experimental results show that the proposed method can effectively and significantly eliminate the washed-out appearance and adverse artifacts induced by several existing methods. The proposed method has been tested using several images and gives better visual quality compared to the conventional methods.
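    The second stage described above (median split plus independent equalization) can be sketched as follows with crisp histograms; the fuzzy-histogram first stage is omitted, and an 8-bit grayscale image with median below 255 is assumed.

```python
import numpy as np

def equalize_range(img, mask, lo, hi):
    """Histogram-equalize the pixels img[mask] into the gray range [lo, hi]."""
    vals = img[mask].astype(int)
    hist, _ = np.histogram(vals, bins=hi - lo + 1, range=(lo, hi + 1))
    cdf = hist.cumsum() / max(vals.size, 1)
    out = img.copy()
    out[mask] = (lo + cdf[vals - lo] * (hi - lo)).astype(img.dtype)
    return out

def median_split_equalization(img):
    """Split the histogram at the median and equalize the two halves
    independently, preserving mean brightness (second stage of FHE;
    assumes uint8 input with median < 255)."""
    med = int(np.median(img))
    lower = equalize_range(img, img <= med, 0, med)
    upper = equalize_range(img, img > med, med + 1, 255)
    return np.where(img <= med, lower, upper)
```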

  11. Image-Based Reconstruction and Analysis of Dynamic Scenes in a Landslide Simulation Facility

    Science.gov (United States)

    Scaioni, M.; Crippa, J.; Longoni, L.; Papini, M.; Zanzi, L.

    2017-12-01

The application of image processing and photogrammetric techniques to the dynamic reconstruction of landslide simulations in a scaled-down facility is described. Simulations are also used here for active-learning purposes: students are helped to understand how physical processes happen and which kinds of observations may be obtained from a sensor network. In particular, the use of digital images to obtain multi-temporal information is presented. On one side, using a multi-view sensor set-up based on four synchronized GoPro 4 Black® cameras, a 4D (3D spatial position and time) reconstruction of the dynamic scene is obtained through the composition of several 3D models obtained from dense image matching. The final textured 4D model allows one to revisit a completed experiment in dynamic and interactive mode at any time. On the other side, a digital image correlation (DIC) technique has been used to track surface point displacements from the image sequence obtained from the camera in front of the simulation facility. While the 4D model may provide a qualitative description and documentation of the running experiment, the DIC analysis outputs quantitative information such as local point displacements and velocities, to be related to physical processes and to other observations. All the hardware and software equipment adopted for the photogrammetric reconstruction has been based on low-cost and open-source solutions.

  12. IMAGE-BASED RECONSTRUCTION AND ANALYSIS OF DYNAMIC SCENES IN A LANDSLIDE SIMULATION FACILITY

    Directory of Open Access Journals (Sweden)

    M. Scaioni

    2017-12-01

Full Text Available The application of image processing and photogrammetric techniques to the dynamic reconstruction of landslide simulations in a scaled-down facility is described. Simulations are also used here for active-learning purposes: students are helped to understand how physical processes happen and which kinds of observations may be obtained from a sensor network. In particular, the use of digital images to obtain multi-temporal information is presented. On one side, using a multi-view sensor set-up based on four synchronized GoPro 4 Black® cameras, a 4D (3D spatial position and time) reconstruction of the dynamic scene is obtained through the composition of several 3D models obtained from dense image matching. The final textured 4D model allows one to revisit a completed experiment in dynamic and interactive mode at any time. On the other side, a digital image correlation (DIC) technique has been used to track surface point displacements from the image sequence obtained from the camera in front of the simulation facility. While the 4D model may provide a qualitative description and documentation of the running experiment, the DIC analysis outputs quantitative information such as local point displacements and velocities, to be related to physical processes and to other observations. All the hardware and software equipment adopted for the photogrammetric reconstruction has been based on low-cost and open-source solutions.
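    The core DIC operation referred to in both records, tracking a surface patch between frames by maximizing zero-normalized cross-correlation over a search window, can be sketched as follows (numpy only; patch and search sizes are arbitrary assumptions, and subpixel refinement is omitted).

```python
import numpy as np

def track_point(frame0: np.ndarray, frame1: np.ndarray, y: int, x: int,
                half: int = 10, search: int = 15):
    """Track the patch centred at (y, x) from frame0 into frame1 by
    maximizing zero-normalized cross-correlation over a search window.
    Returns the integer (dy, dx) displacement -- the basic DIC step."""
    ref = frame0[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    ref = ref - ref.mean()
    best, best_dydx = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = frame1[y + dy - half:y + dy + half + 1,
                          x + dx - half:x + dx + half + 1].astype(float)
            if cand.shape != ref.shape:
                continue  # candidate window fell off the image border
            cand = cand - cand.mean()
            denom = np.sqrt((ref ** 2).sum() * (cand ** 2).sum())
            score = (ref * cand).sum() / denom if denom > 0 else -np.inf
            if score > best:
                best, best_dydx = score, (dy, dx)
    return best_dydx
```

    Dividing the displacement by the inter-frame time gives the local surface velocity the abstract mentions.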

  13. Vaccine Images on Twitter: Analysis of What Images are Shared

    Science.gov (United States)

    Dredze, Mark

    2018-01-01

Background: Visual imagery plays a key role in health communication; however, there is little understanding of what aspects of vaccine-related images make them effective communication aids. Twitter, a popular venue for discussions related to vaccination, provides numerous images that are shared with tweets. Objective: The objectives of this study were to understand how images are used in vaccine-related tweets and provide guidance with respect to the characteristics of vaccine-related images that correlate with the higher likelihood of being retweeted. Methods: We collected more than one million vaccine image messages from Twitter and characterized various properties of these images using automated image analytics. We fit a logistic regression model to predict whether or not a vaccine image tweet was retweeted, thus identifying characteristics that correlate with a higher likelihood of being shared. For comparison, we built similar models for the sharing of vaccine news on Facebook and for general image tweets. Results: Most vaccine-related images are duplicates (125,916/237,478; 53.02%) or taken from other sources, not necessarily created by the author of the tweet. Almost half of the images contain embedded text, and many include images of people and syringes. The visual content is highly correlated with a tweet’s textual topics. Vaccine image tweets are twice as likely to be shared as nonimage tweets. The sentiment of an image and the objects shown in the image were the predictive factors in determining whether an image was retweeted. Conclusions: We are the first to study vaccine images on Twitter. Our findings suggest future directions for the study and use of vaccine imagery and may inform communication strategies around vaccination. Furthermore, our study demonstrates an effective study methodology for image analysis. PMID:29615386

  14. Vaccine Images on Twitter: Analysis of What Images are Shared.

    Science.gov (United States)

    Chen, Tao; Dredze, Mark

    2018-04-03

    Visual imagery plays a key role in health communication; however, there is little understanding of what aspects of vaccine-related images make them effective communication aids. Twitter, a popular venue for discussions related to vaccination, provides numerous images that are shared with tweets. The objectives of this study were to understand how images are used in vaccine-related tweets and provide guidance with respect to the characteristics of vaccine-related images that correlate with the higher likelihood of being retweeted. We collected more than one million vaccine image messages from Twitter and characterized various properties of these images using automated image analytics. We fit a logistic regression model to predict whether or not a vaccine image tweet was retweeted, thus identifying characteristics that correlate with a higher likelihood of being shared. For comparison, we built similar models for the sharing of vaccine news on Facebook and for general image tweets. Most vaccine-related images are duplicates (125,916/237,478; 53.02%) or taken from other sources, not necessarily created by the author of the tweet. Almost half of the images contain embedded text, and many include images of people and syringes. The visual content is highly correlated with a tweet's textual topics. Vaccine image tweets are twice as likely to be shared as nonimage tweets. The sentiment of an image and the objects shown in the image were the predictive factors in determining whether an image was retweeted. We are the first to study vaccine images on Twitter. Our findings suggest future directions for the study and use of vaccine imagery and may inform communication strategies around vaccination. Furthermore, our study demonstrates an effective study methodology for image analysis. ©Tao Chen, Mark Dredze. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 03.04.2018.
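    A toy sketch of the modeling step described: logistic regression from image features to a retweet indicator, using scikit-learn. The feature names and data below are invented placeholders, not the study's features or corpus.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-image features in the spirit of the study:
# [sentiment_score, has_embedded_text, n_faces, has_syringe]
X = np.array([[0.8, 1, 2, 0],
              [-0.3, 0, 0, 1],
              [0.1, 1, 1, 1],
              [0.5, 0, 3, 0],
              [-0.6, 1, 0, 0],
              [0.9, 0, 1, 1]], dtype=float)
y = np.array([1, 0, 1, 1, 0, 0])  # 1 = retweeted (toy labels)

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[0.4, 1, 1, 0]])[:, 1])  # estimated P(retweet)
```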

  15. Medical image security using modified chaos-based cryptography approach

    Science.gov (United States)

    Talib Gatta, Methaq; Al-latief, Shahad Thamear Abd

    2018-05-01

The progressive development of telecommunication and networking technologies has led to the increased popularity of telemedicine, which involves the storage and transfer of medical images and related information, so security concerns have emerged. This paper presents a method to provide security for medical images, since they play a major role in healthcare organizations. The main idea in this work is based on a chaotic sequence, in order to provide an efficient encryption method that allows reconstructing the original image from the encrypted image with high quality and minimum distortion of its content, so that it does not affect human treatment and diagnosis. Experimental results prove the efficiency of the proposed method using several statistical measures and show robust correlation between the original image and the decrypted image.
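    The paper's specific construction is not reproduced here, but a minimal sketch of the general idea, XORing image bytes with a keystream generated by a chaotic logistic map, looks like this (the parameters x0 and r are illustrative assumptions):

```python
import numpy as np

def logistic_keystream(n: int, x0: float = 0.7, r: float = 3.99) -> np.ndarray:
    """Byte keystream from the logistic map x <- r*x*(1-x), a common
    chaotic generator; x0 acts as the secret key."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def chaos_xor(image: np.ndarray, x0: float = 0.7) -> np.ndarray:
    """XOR the image bytes with the chaotic keystream. The operation is
    its own inverse, so the same call decrypts losslessly -- consistent
    with preserving diagnosis-relevant content."""
    key = logistic_keystream(image.size, x0).reshape(image.shape)
    return np.bitwise_xor(image.astype(np.uint8), key)
```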

  16. Image analysis for composition monitoring. Commercial blends of olive and soybean oil - doi: 10.4025/actascitechnol.v35i2.15216

    Directory of Open Access Journals (Sweden)

    Jessica Kehrig Fernandes

    2013-04-01

Full Text Available Olive oil represents an important component of a healthy and balanced diet. Due to commercial features, characterization of pure olive oil and of commercial mixtures represents an important challenge. Reported techniques can successfully quantify components in concentrations lower than 1%, but may involve long delays, many purification steps, or expensive equipment. Image analysis represents an important characterization technique for food science and technology. By coupling image and UV-VIS spectroscopy analysis, models with linear dependence on the parameters were developed and could successfully describe the mixture concentration in the range of 0-100% in mass of olive oil content. A validation sample containing 25% in mass of olive oil, not used for parameter estimation, was also used to test the proposed procedure, leading to a prediction of 24.8 ± 0.6. Based on the image analysis results, 3-parameter models considering only the R and G components were developed for predicting olive oil content in mixtures with up to 70% in mass of olive oil; the same test sample was used and its concentration was predicted as 24.5 ± 1.2. These results show that image analysis represents a promising technique for on-line/in-line monitoring of the blending process of olive and soybean oil for commercial mixtures.
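    A hedged sketch of the 3-parameter R/G model described: fit olive% = a·R + b·G + c by least squares to calibration samples. The calibration numbers below are invented placeholders, not the paper's data.

```python
import numpy as np

# Hypothetical calibration data: mean R and G channel values of blend
# photographs versus olive-oil mass fraction (%).
R = np.array([120.0, 131.5, 142.8, 155.1, 166.0])
G = np.array([98.0, 106.2, 115.9, 124.7, 133.5])
olive_pct = np.array([0.0, 17.5, 35.0, 52.5, 70.0])

# 3-parameter linear model: olive% = a*R + b*G + c
A = np.column_stack([R, G, np.ones_like(R)])
coef, *_ = np.linalg.lstsq(A, olive_pct, rcond=None)

r_new, g_new = 138.0, 112.0          # channels of a validation sample
print(coef @ [r_new, g_new, 1.0])    # predicted olive-oil content (%)
```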

  17. Image Analysis of Eccentric Photorefraction

    Directory of Open Access Journals (Sweden)

    J. Dušek

    2004-01-01

Full Text Available This article deals with image and data analysis of recorded video-sequences of strabismic infants. It describes a unique noninvasive measuring system based on two measuring methods (position of the first Purkinje image relative to the centre of the lens, and eccentric photorefraction) for infants. The whole process is divided into three steps. The aim of the first step is to obtain video sequences on our special system (Eye Movement Analyser). Image analysis of the recorded sequences is then performed in order to obtain curves of basic eye reactions (accommodation and convergence). The last step is to calibrate these curves to the corresponding units (dioptres and degrees of movement).

  18. Dictionary-based image reconstruction for superresolution in integrated circuit imaging.

    Science.gov (United States)

    Cilingiroglu, T Berkin; Uyar, Aydan; Tuysuzoglu, Ahmet; Karl, W Clem; Konrad, Janusz; Goldberg, Bennett B; Ünlü, M Selim

    2015-06-01

    Resolution improvement through signal processing techniques for integrated circuit imaging is becoming more crucial as the rapid decrease in integrated circuit dimensions continues. Although there is a significant effort to push the limits of optical resolution for backside fault analysis through the use of solid immersion lenses, higher order laser beams, and beam apodization, signal processing techniques are required for additional improvement. In this work, we propose a sparse image reconstruction framework which couples overcomplete dictionary-based representation with a physics-based forward model to improve resolution and localization accuracy in high numerical aperture confocal microscopy systems for backside optical integrated circuit analysis. The effectiveness of the framework is demonstrated on experimental data.
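    As a rough stand-in for the sparse reconstruction step (not the authors' trained dictionary or their physics-based forward model), the sketch below recovers a sparse coefficient vector with orthogonal matching pursuit from scikit-learn and rebuilds the patch from the dictionary; all sizes and the random dictionary are assumptions.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_pix, n_atoms, k = 64, 256, 5
D = rng.standard_normal((n_pix, n_atoms))       # overcomplete dictionary
x_true = np.zeros(n_atoms)
x_true[rng.choice(n_atoms, k, replace=False)] = rng.standard_normal(k)
y = D @ x_true                                  # observed (vectorized) patch
# The paper couples such a representation with a physics-based forward
# model of the confocal microscope; the identity is used here instead.

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(D, y)
patch_hat = D @ omp.coef_                       # sparse reconstruction
print(np.linalg.norm(patch_hat - y))            # near zero if support found
```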

  19. Filtering adult image content with topic models

    OpenAIRE

    Lienhart, Rainer (Prof. Dr.); Hauke, Rudolf

    2009-01-01

    Protecting children from exposure to adult content has become a serious problem in the real world. Current statistics show that, for instance, the average age of first Internet exposure to pornography is 11 years, that the largest consumer group of Internet pornography is the age group of 12-to-17-year-olds and that 90% of the 8-to-16-year-olds have viewed porn online. To protect our children, effective algorithms for detecting adult images are needed. In this research we evaluate the use of ...

  20. Bank Image Structure: The Relationship to Consumer Behaviour

    Directory of Open Access Journals (Sweden)

    Lukasova Ruzena

    2014-03-01

Full Text Available This paper presents the results of a study of the relationship between a bank's image, its structure as a reflection in the minds of individuals, and behavioural tendencies in relation to banks. Attitudinal scales were used to identify the content of particular banks' images. The structure of the image was identified by means of factor analysis. The study found that the respondents' behavioural tendencies, i.e. their willingness to be a client of, or to recommend, a particular bank, are related to different content components of particular banks' images and mainly to respondents' needs. Based on the results, the study identifies the danger that the results of a bank image analysis can be misinterpreted if the respondents' relationship to the bank is underestimated.

  1. Dimensionality Reduction of Hyperspectral Image with Graph-Based Discriminant Analysis Considering Spectral Similarity

    Directory of Open Access Journals (Sweden)

    Fubiao Feng

    2017-03-01

Full Text Available Recently, graph embedding has drawn great attention for dimensionality reduction in hyperspectral imagery. For example, locality preserving projection (LPP) utilizes the typical Euclidean distance in a heat kernel to create an affinity matrix and projects the high-dimensional data into a lower-dimensional space. However, the Euclidean distance is not sufficiently correlated with the intrinsic spectral variation of a material, which may result in an inappropriate graph representation. In this work, a graph-based discriminant analysis with spectral similarity measurement (denoted GDA-SS) is proposed, which fully considers the changing shape of spectral curves across bands. Experimental results based on real hyperspectral images demonstrate that the proposed method is superior to traditional methods, such as supervised LPP, and the state-of-the-art sparse graph-based discriminant analysis (SGDA).
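    One standard shape-sensitive similarity that considers how spectral curves change across bands is the spectral angle; the sketch below builds a heat-kernel affinity matrix from it as an illustration of the GDA-SS idea (sigma is an assumed tuning parameter, and this is not necessarily the paper's exact measure).

```python
import numpy as np

def spectral_angle_affinity(X: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    """Affinity matrix for graph-based embedding using the spectral angle
    between pixel spectra instead of Euclidean distance. X is n_pixels x
    n_bands with nonzero rows; sigma is the heat-kernel width."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    cos = np.clip(Xn @ Xn.T, -1.0, 1.0)
    angle = np.arccos(cos)                 # spectral angle in radians
    return np.exp(-(angle ** 2) / (2 * sigma ** 2))
```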

  2. MR-based water content estimation in cartilage: design and validation of a method

    DEFF Research Database (Denmark)

    Shiguetomi Medina, Juan Manuel; Kristiansen, Maja Sophie; Ringgaard, Steffen

Purpose: Design and validation of an MR-based method that allows the calculation of the water content in cartilage tissue. Methods and Materials: Cartilage tissue T1-map-based water content MR sequences were used on a temperature-stable system at 37 °C. The T1-map intensity signal was analyzed on 6... cartilage samples from living animals (pig) and on 8 gelatin samples whose water content was already known. For the data analysis a T1 intensity signal map software analyzer was used. Finally, the method was validated by measuring and comparing 3 more cartilage samples in a living animal (pig). The obtained... map-based water content sequences can provide information that, after being analyzed using T1-map analysis software, can be interpreted as the water contained inside a cartilage tissue. The amount of water estimated using this method was similar to that obtained with the freeze-dry procedure...

  3. Information Theoretical Analysis of Identification based on Active Content Fingerprinting

    OpenAIRE

    Farhadzadeh, Farzad; Willems, Frans M. J.; Voloshinovskiy, Sviatoslav

    2014-01-01

Content fingerprinting and digital watermarking are techniques that are used for content protection and distribution monitoring. Over the past few years, both techniques have been well studied and their shortcomings understood. Recently, a new content fingerprinting scheme called active content fingerprinting was introduced to overcome these shortcomings. Active content fingerprinting aims to modify the content to extract more robust fingerprints than conventional content fingerprinting....

  4. Frequency domain analysis of knock images

    Science.gov (United States)

    Qi, Yunliang; He, Xin; Wang, Zhi; Wang, Jianxin

    2014-12-01

    High speed imaging-based knock analysis has mainly focused on time domain information, e.g. the spark triggered flame speed, the time when end gas auto-ignition occurs and the end gas flame speed after auto-ignition. This study presents a frequency domain analysis on the knock images recorded using a high speed camera with direct photography in a rapid compression machine (RCM). To clearly visualize the pressure wave oscillation in the combustion chamber, the images were high-pass-filtered to extract the luminosity oscillation. The luminosity spectrum was then obtained by applying fast Fourier transform (FFT) to three basic colour components (red, green and blue) of the high-pass-filtered images. Compared to the pressure spectrum, the luminosity spectra better identify the resonant modes of pressure wave oscillation. More importantly, the resonant mode shapes can be clearly visualized by reconstructing the images based on the amplitudes of luminosity spectra at the corresponding resonant frequencies, which agree well with the analytical solutions for mode shapes of gas vibration in a cylindrical cavity.
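    A minimal temporal analogue of the analysis described (the paper high-pass-filters the images spatially and transforms the luminosity of the three colour channels) computes the FFT of the mean frame luminosity to locate resonant frequencies; numpy is assumed.

```python
import numpy as np

def luminosity_spectrum(frames: np.ndarray, fps: float):
    """Spectrum of the mean luminosity of a high-speed image sequence.
    frames: (n_frames, h, w) array from the camera. The DC component is
    removed (a crude high-pass) before the Fourier transform, so peaks
    indicate pressure-wave oscillation frequencies."""
    lum = frames.reshape(frames.shape[0], -1).mean(axis=1)
    lum = lum - lum.mean()
    spec = np.abs(np.fft.rfft(lum))
    freqs = np.fft.rfftfreq(lum.size, d=1.0 / fps)
    return freqs, spec
```

    Plotting spec against freqs would expose the resonant modes; reconstructing frames band-passed at those frequencies, as the paper does, then visualizes the corresponding mode shapes.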

  5. Mapping of crop calendar events by object-based analysis of MODIS and ASTER images

    Directory of Open Access Journals (Sweden)

    A.I. De Castro

    2014-06-01

Full Text Available A method to generate crop calendar and phenology-related maps at a parcel level for four major irrigated crops (rice, maize, sunflower and tomato) is shown. The method combines images from the ASTER and MODIS sensors in an object-based image analysis framework, and tests three different fitting curves using the TIMESAT software. The averaged estimation accuracy of calendar dates was 85%, ranging from 92% for the emergence and harvest dates in rice to 69% for the harvest date in tomato.
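    For orientation, a hedged sketch of the curve-fitting step: one of the functions TIMESAT offers is a double logistic, whose inflection points mark green-up and senescence. scipy is assumed, and the vegetation-index series below is synthetic, not MODIS/ASTER data.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, base, amp, s1, r1, s2, r2):
    """Double-logistic greenness curve: rises at green-up (around day s1)
    and falls at senescence (around day s2)."""
    return base + amp * (1 / (1 + np.exp(-r1 * (t - s1)))
                         - 1 / (1 + np.exp(-r2 * (t - s2))))

# Synthetic parcel-level vegetation-index series (8-day sampling).
t = np.arange(0, 365, 8.0)
true = (0.2, 0.5, 120.0, 0.08, 280.0, 0.06)
vi = double_logistic(t, *true) \
     + 0.01 * np.random.default_rng(1).standard_normal(t.size)

p0 = (0.2, 0.5, 100.0, 0.1, 260.0, 0.1)   # rough initial guess
params, _ = curve_fit(double_logistic, t, vi, p0=p0)
print(f"green-up DOY ~ {params[2]:.0f}, senescence DOY ~ {params[4]:.0f}")
```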

  6. An Image Encryption Method Based on Bit Plane Hiding Technology

    Institute of Scientific and Technical Information of China (English)

    LIU Bin; LI Zhitang; TU Hao

    2006-01-01

A novel image hiding method based on the correlation analysis of bit planes is described in this paper. Firstly, based on the correlation analysis, different bit planes of a secret image are hidden in different bit planes of several different open images. Then a new hiding image is acquired by a nested "Exclusive-OR" operation on the images obtained from the first step. At last, by employing image fusion techniques, the final hiding result is achieved. The experimental results show that the method proposed in this paper is effective.
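    Bit-plane embedding itself is compact to sketch in numpy (8-bit images assumed); the correlation analysis that chooses which plane goes to which cover image, and the nested XOR stage, are left out.

```python
import numpy as np

def get_plane(img: np.ndarray, k: int) -> np.ndarray:
    """Extract bit plane k (0 = least significant) as a 0/1 array."""
    return (img >> k) & 1

def set_plane(img: np.ndarray, k: int, plane: np.ndarray) -> np.ndarray:
    """Return img with bit plane k replaced by the given 0/1 plane."""
    return (img & ~np.uint8(1 << k)) | (plane.astype(np.uint8) << k)

def hide_plane(secret: np.ndarray, cover: np.ndarray,
               src_k: int, dst_k: int) -> np.ndarray:
    """Embed one bit plane of the secret image into one bit plane of a
    cover image (both uint8, same shape); the method distributes the
    planes over several covers before the XOR-nesting stage."""
    return set_plane(cover, dst_k, get_plane(secret, src_k))
```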

  7. Assessment of Abdominal Adipose Tissue and Organ Fat Content by Magnetic Resonance Imaging

    Science.gov (United States)

    Hu, Houchun H.; Nayak, Krishna S.; Goran, Michael I.

    2010-01-01

    As the prevalence of obesity continues to rise, rapid and accurate tools for assessing abdominal body and organ fat quantity and distribution are critically needed to assist researchers investigating therapeutic and preventive measures against obesity and its comorbidities. Magnetic resonance imaging (MRI) is the most promising modality to address such need. It is non-invasive, utilizes no ionizing radiation, provides unmatched 3D visualization, is repeatable, and is applicable to subject cohorts of all ages. This article is aimed to provide the reader with an overview of current and state-of-the-art techniques in MRI and associated image analysis methods for fat quantification. The principles underlying traditional approaches such as T1-weighted imaging and magnetic resonance spectroscopy as well as more modern chemical-shift imaging techniques are discussed and compared. The benefits of contiguous 3D acquisitions over 2D multi-slice approaches are highlighted. Typical post-processing procedures for extracting adipose tissue depot volumes and percent organ fat content from abdominal MRI data sets are explained. Furthermore, the advantages and disadvantages of each MRI approach with respect to imaging parameters, spatial resolution, subject motion, scan time, and appropriate fat quantitative endpoints are also provided. Practical considerations in implementing these methods are also presented. PMID:21348916

  8. AN ENSEMBLE TEMPLATE MATCHING AND CONTENT-BASED IMAGE RETRIEVAL SCHEME TOWARDS EARLY STAGE DETECTION OF MELANOMA

    Directory of Open Access Journals (Sweden)

    Spiros Kostopoulos

    2016-12-01

Full Text Available Malignant melanoma represents the most dangerous type of skin cancer. In this study we present an ensemble classification scheme, employing mutual information, cross-correlation, and clustering based on proximity of image features, for early stage assessment of melanomas on plain photography images. The proposed scheme performs two main operations. First, it retrieves the image samples most similar to the unknown case from an available image database with verified benign moles and malignant melanoma cases. Second, it provides an automated estimation regarding the nature of the unknown image sample based on the majority of the most similar images retrieved from the available database. Clinical material comprised 75 melanoma and 75 benign plain photography images collected from publicly available dermatological atlases. Results showed that the ensemble scheme outperformed all other methods tested in terms of accuracy with 94.9±1.5%, following an external cross-validation evaluation methodology. The proposed scheme may benefit patients, by providing a second opinion during the self-skin examination process, and physicians, by providing a second estimation regarding the nature of suspicious moles that may assist decision making, especially for ambiguous cases, safeguarding in this way against potential diagnostic misinterpretations.
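    A sketch of one branch of the ensemble: retrieval by normalized cross-correlation of feature vectors followed by a majority vote over the top-k matches. The mutual-information and feature-proximity-clustering branches are omitted, and feature extraction is assumed to have happened upstream.

```python
import numpy as np

def retrieve_and_vote(query_feat: np.ndarray, db_feats: np.ndarray,
                      db_labels: np.ndarray, k: int = 7) -> int:
    """Retrieve the k database images whose feature vectors correlate
    best with the query and label the query by majority vote.
    db_labels holds 1 for melanoma and 0 for benign."""
    q = (query_feat - query_feat.mean()) / (query_feat.std() + 1e-12)
    D = (db_feats - db_feats.mean(axis=1, keepdims=True)) / \
        (db_feats.std(axis=1, keepdims=True) + 1e-12)
    scores = D @ q / q.size            # normalized cross-correlation
    top = np.argsort(scores)[::-1][:k]
    return int(db_labels[top].sum() > k / 2)
```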

  9. Computer-based image analysis in radiological diagnostics and image-guided therapy: 3D-Reconstruction, contrast medium dynamics, surface analysis, radiation therapy and multi-modal image fusion

    International Nuclear Information System (INIS)

    Beier, J.

    2001-01-01

    This book deals with substantial subjects of postprocessing and analysis of radiological image data, a particular emphasis was put on pulmonary themes. For a multitude of purposes the developed methods and procedures can directly be transferred to other non-pulmonary applications. The work presented here is structured in 14 chapters, each describing a selected complex of research. The chapter order reflects the sequence of the processing steps starting from artefact reduction, segmentation, visualization, analysis, therapy planning and image fusion up to multimedia archiving. In particular, this includes virtual endoscopy with three different scene viewers (Chap. 6), visualizations of the lung disease bronchiectasis (Chap. 7), surface structure analysis of pulmonary tumors (Chap. 8), quantification of contrast medium dynamics from temporal 2D and 3D image sequences (Chap. 9) as well as multimodality image fusion of arbitrary tomographical data using several visualization techniques (Chap. 12). Thus, the software systems presented cover the majority of image processing applications necessary in radiology and were entirely developed, implemented and validated in the clinical routine of a university medical school. (orig.) [de

  10. Multimedia Based E-learning : Design and Integration of Multimedia Content in E-learning

    Directory of Open Access Journals (Sweden)

    Abdulaziz Omar Alsadhan

    2014-05-01

Full Text Available The advancement in multimedia and information technologies has also impacted the way of imparting education. This advancement has led to the rapid adoption of e-learning systems and has enabled greater integration of multimedia content into e-learning systems. This paper presents a model for the development of e-learning systems based on multimedia content. The model is called "Multimedia based e-learning" and is loosely based on the waterfall software development model. The model consists of three distinct phases: multimedia content modelling, multimedia content development, and multimedia content integration. These three phases are further subdivided into seven different activities: analysis, design, technical requirements, content development, content production and integration, implementation, and evaluation. This model defines a general framework that can be applied to the development of e-learning systems across all disciplines and subjects.

  11. Partial differential equation-based approach for empirical mode decomposition: application on image analysis.

    Science.gov (United States)

    Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques

    2012-09-01

The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, which makes the approach difficult to characterize and evaluate. In this paper, we propose, in the 2-D case, the use of an alternative implementation to the algorithmic definition of the so-called "sifting process" used in the original Huang EMD method. This approach, based on partial differential equations (PDEs), was presented by Niang in previous works, in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original EMD algorithmic version, was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed. Despite some effort, 2-D versions of EMD appear to perform poorly and are very time consuming. So in this paper, an extension of the PDE-based approach to the 2-D space is extensively described. This approach has been applied to both signal and image decomposition. The obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data. Some results have been provided in the case of image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.
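    For contrast with the PDE formulation, the classic algorithmic sifting step it replaces can be sketched in 1-D: estimate upper and lower envelopes by cubic splines through the extrema and subtract their mean. This assumes a signal with several interior extrema; stopping criteria and boundary handling are simplified.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def mean_envelope(t: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Mean of the upper and lower cubic-spline envelopes of x(t) -- the
    quantity the sifting process subtracts at each iteration. The PDE-based
    method replaces this interpolation with a nonlinear diffusion filter."""
    def env(idx):
        idx = np.concatenate(([0], idx, [len(x) - 1]))  # pin the endpoints
        return CubicSpline(t[idx], x[idx])(t)
    maxima = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    minima = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    return 0.5 * (env(maxima) + env(minima))

def sift(t, x, n_iter=10):
    """A few sifting iterations extracting one intrinsic mode function."""
    h = np.asarray(x, dtype=float)
    for _ in range(n_iter):
        h = h - mean_envelope(t, h)
    return h
```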

  12. High-content image informatics of the structural nuclear protein NuMA parses trajectories for stem/progenitor cell lineages and oncogenic transformation

    International Nuclear Information System (INIS)

    Vega, Sebastián L.; Liu, Er; Arvind, Varun; Bushman, Jared; Sung, Hak-Joon; Becker, Matthew L.; Lelièvre, Sophie; Kohn, Joachim; Vidi, Pierre-Alexandre; Moghe, Prabhas V.

    2017-01-01

Stem and progenitor cells that exhibit significant regenerative potential and critical roles in cancer initiation and progression remain difficult to characterize. Cell fates are determined by reciprocal signaling between the cell microenvironment and the nucleus; hence parameters derived from nuclear remodeling are ideal candidates for stem/progenitor cell characterization. Here we applied high-content, single cell analysis of nuclear shape and organization to examine stem and progenitor cells destined to distinct differentiation endpoints, yet indistinguishable by conventional methods. Nuclear descriptors defined through image informatics classified mesenchymal stem cells poised to either adipogenic or osteogenic differentiation, and oligodendrocyte precursors isolated from different regions of the brain and destined to distinct astrocyte subtypes. Nuclear descriptors also revealed early changes in stem cells after chemical oncogenesis, allowing the identification of a class of cancer-mitigating biomaterials. To capture the metrology of nuclear changes, we developed a simple and quantitative “imaging-derived” parsing index, which reflects the dynamic evolution of the high-dimensional space of nuclear organizational features. A comparative analysis of parsing outcomes via either nuclear shape or textural metrics of the nuclear structural protein NuMA indicates that nuclear shape alone is a weak phenotypic predictor. In contrast, variations in the NuMA organization parsed emergent cell phenotypes and discerned emergent stages of stem cell transformation, supporting a prognosticating role for this protein in the outcomes of nuclear functions. - Highlights: • High-content analysis of nuclear shape and organization classifies stem and progenitor cells poised for distinct lineages. • Early oncogenic changes in mesenchymal stem cells (MSCs) are also detected with nuclear descriptors. • A new class of cancer-mitigating biomaterials was identified based on image

  13. High-content image informatics of the structural nuclear protein NuMA parses trajectories for stem/progenitor cell lineages and oncogenic transformation

    Energy Technology Data Exchange (ETDEWEB)

    Vega, Sebastián L. [Department of Chemical and Biochemical Engineering, Rutgers University, Piscataway, NJ (United States); Liu, Er; Arvind, Varun [Department of Biomedical Engineering, Rutgers University, Piscataway, NJ (United States); Bushman, Jared [Department of Chemistry and Chemical Biology, New Jersey Center for Biomaterials, Piscataway, NJ (United States); School of Pharmacy, University of Wyoming, Laramie, WY (United States); Sung, Hak-Joon [Department of Chemistry and Chemical Biology, New Jersey Center for Biomaterials, Piscataway, NJ (United States); Department of Biomedical Engineering, Vanderbilt University, Nashville, TN (United States); Becker, Matthew L. [Department of Polymer Science and Engineering, University of Akron, Akron, OH (United States); Lelièvre, Sophie [Department of Basic Medical Sciences, Purdue University, West Lafayette, IN (United States); Kohn, Joachim [Department of Chemistry and Chemical Biology, New Jersey Center for Biomaterials, Piscataway, NJ (United States); Vidi, Pierre-Alexandre, E-mail: pvidi@wakehealth.edu [Department of Cancer Biology, Wake Forest School of Medicine, Winston-Salem, NC (United States); Moghe, Prabhas V., E-mail: moghe@rutgers.edu [Department of Chemical and Biochemical Engineering, Rutgers University, Piscataway, NJ (United States); Department of Biomedical Engineering, Rutgers University, Piscataway, NJ (United States)

    2017-02-01

    Stem and progenitor cells that exhibit significant regenerative potential and play critical roles in cancer initiation and progression remain difficult to characterize. Cell fates are determined by reciprocal signaling between the cell microenvironment and the nucleus; hence, parameters derived from nuclear remodeling are ideal candidates for stem/progenitor cell characterization. Here we applied high-content, single-cell analysis of nuclear shape and organization to examine stem and progenitor cells destined for distinct differentiation endpoints yet indistinguishable by conventional methods. Nuclear descriptors defined through image informatics classified mesenchymal stem cells poised for either adipogenic or osteogenic differentiation, and oligodendrocyte precursors isolated from different regions of the brain and destined for distinct astrocyte subtypes. Nuclear descriptors also revealed early changes in stem cells after chemical oncogenesis, allowing the identification of a class of cancer-mitigating biomaterials. To capture the metrology of nuclear changes, we developed a simple and quantitative “imaging-derived” parsing index, which reflects the dynamic evolution of the high-dimensional space of nuclear organizational features. A comparative analysis of parsing outcomes via either nuclear shape or textural metrics of the nuclear structural protein NuMA indicates that nuclear shape alone is a weak phenotypic predictor. In contrast, variations in NuMA organization parsed emergent cell phenotypes and discerned emergent stages of stem cell transformation, supporting a prognosticating role for this protein in the outcomes of nuclear functions. - Highlights: • High-content analysis of nuclear shape and organization classifies stem and progenitor cells poised for distinct lineages. • Early oncogenic changes in mesenchymal stem cells (MSCs) are also detected with nuclear descriptors. • A new class of cancer-mitigating biomaterials was identified based on image
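
    As a loose illustration of the record's parsing-index idea (our sketch, not the authors' implementation; the feature inputs and the separability formula are assumptions), per-cell nuclear descriptors can be reduced with PCA and scored by how far apart two populations sit relative to their spread:

        # Toy "parsing" score: centroid separation in PCA space divided by
        # the pooled within-group spread (assumed formula, for illustration).
        import numpy as np
        from sklearn.decomposition import PCA

        def parsing_score(features_a, features_b, n_components=3):
            # features_*: (n_cells, n_descriptors) arrays of nuclear metrics
            X = np.vstack([features_a, features_b])
            Z = PCA(n_components=n_components).fit_transform(X)
            za, zb = Z[:len(features_a)], Z[len(features_a):]
            between = np.linalg.norm(za.mean(axis=0) - zb.mean(axis=0))
            within = 0.5 * (za.std(axis=0).mean() + zb.std(axis=0).mean())
            return between / within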

  14. Photon-based medical imaging - Radiology, X-ray tomography, gamma and positron tomography, optical imaging; Imagerie medicale a base de photons - Radiologie, tomographie X, tomographie gamma et positons, imagerie optique

    Energy Technology Data Exchange (ETDEWEB)

    Fanet, H.; Dinten, J.M.; Moy, J.P.; Rinkel, J. [CEA Leti, Grenoble (France); Buvat, I. [IMNC - CNRS, Orsay (France); Da Silva, A. [Institut Fresnel, Marseille (France); Douek, P.; Peyrin, F. [INSA Lyon, Lyon Univ. (France); Frija, G. [Hopital Europeen George Pompidou, Paris (France); Trebossen, R. [CEA-Service hospitalier Frederic Joliot, Orsay (France)

    2010-07-01

    This book describes the different principles used in medical imaging. The detection aspects, the processing electronics and the algorithms are detailed for the different techniques. This first volume analyses the photon-based techniques (X-rays, gamma rays and visible light). Content: 1 - physical background: radiation-matter interaction, consequences on detection and medical imaging; 2 - detectors for medical imaging; 3 - processing of digital radiography images for quantification; 4 - X-ray tomography; 5 - positron emission tomography: principles and applications; 6 - single-photon imaging; 7 - optical imaging; Index. (J.S.)

  15. Multispectral UV imaging for surface analysis of MUPS tablets with special focus on the pellet distribution

    DEFF Research Database (Denmark)

    Novikova, Anna; Carstensen, Jens Michael; Rades, Thomas

    2016-01-01

    In the present study the applicability of multispectral UV imaging in combination with multivariate image analysis for surface evaluation of MUPS tablets was investigated with respect to the differentiation of the API pellets from the excipients matrix, estimation of the drug content as well as p...... image analysis is a promising approach for the automatic quality control of MUPS tablets during the manufacturing process....

  16. Optimization of shearography image quality analysis

    International Nuclear Information System (INIS)

    Rafhayudi Jamro

    2005-01-01

    Shearography is an optical, speckle-pattern-based technique for measuring the deformation of an object surface, in which the fringe pattern is obtained through correlation analysis of the speckle patterns. For engineering applications, analysis of the fringe pattern alone is limited to qualitative measurement; further analysis leading to quantitative data therefore involves a series of image processing steps. In this paper, the fringe pattern for quantitative analysis is discussed. The principal field of application is qualitative non-destructive testing, such as detecting discontinuities and defects in the material structure or locating fatigue zones, all of which require image processing. To perform image optimisation successfully, the noise in the fringe pattern must be minimised and the fringe pattern itself maximised. This can be achieved by applying a filtering method with a kernel size ranging from 2 × 2 to 7 × 7 pixels and by applying an equaliser in the image processing. (Author)
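
    The record names only the kernel sizes, not the filter; a minimal sketch of the denoising step, assuming a median filter (a common choice for speckle noise) and a simple contrast criterion for picking the kernel:

        # Scan square kernels from 2x2 to 7x7 and keep the result with the
        # highest fringe contrast (std is used as a stand-in contrast metric).
        from scipy.ndimage import median_filter

        def best_filtered(fringe_image):
            candidates = [median_filter(fringe_image, size=k) for k in range(2, 8)]
            return max(candidates, key=lambda im: im.std())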

  17. Evolution-based Virtual Content Insertion with Visually Virtual Interactions in Videos

    Science.gov (United States)

    Chang, Chia-Hu; Wu, Ja-Ling

    With the development of content-based multimedia analysis, virtual content insertion has been widely used and studied for video enrichment and multimedia advertising. However, how to automatically insert a user-selected virtual content into personal videos in a less-intrusive manner, with an attractive representation, is a challenging problem. In this chapter, we present an evolution-based virtual content insertion system which can insert virtual contents into videos with evolved animations according to predefined behaviors emulating the characteristics of evolutionary biology. The videos are considered not only as carriers of message conveyed by the virtual content but also as the environment in which the lifelike virtual contents live. Thus, the inserted virtual content will be affected by the videos to trigger a series of artificial evolutions and evolve its appearances and behaviors while interacting with video contents. By inserting virtual contents into videos through the system, users can easily create entertaining storylines and turn their personal videos into visually appealing ones. In addition, it would bring a new opportunity to increase the advertising revenue for video assets of the media industry and online video-sharing websites.

  18. Operational Automatic Remote Sensing Image Understanding Systems: Beyond Geographic Object-Based and Object-Oriented Image Analysis (GEOBIA/GEOOIA. Part 2: Novel system Architecture, Information/Knowledge Representation, Algorithm Design and Implementation

    Directory of Open Access Journals (Sweden)

    Luigi Boschetti

    2012-09-01

    Full Text Available According to the literature, and despite their commercial success, state-of-the-art two-stage non-iterative geographic object-based image analysis (GEOBIA) systems and three-stage iterative geographic object-oriented image analysis (GEOOIA) systems, where GEOOIA ⊃ GEOBIA, remain affected by a lack of productivity, general consensus and research. To outperform the Quality Indexes of Operativeness (OQIs) of existing GEOBIA/GEOOIA systems in compliance with the Quality Assurance Framework for Earth Observation (QA4EO) guidelines, this methodological work is split into two parts. Based on an original multi-disciplinary Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis of the GEOBIA/GEOOIA approaches, the first part of this work promotes a shift of learning paradigm in the pre-attentive vision first stage of a remote sensing (RS) image understanding system (RS-IUS), from sub-symbolic statistical model-based (inductive) image segmentation to symbolic physical model-based (deductive) image preliminary classification, capable of accomplishing image sub-symbolic segmentation and image symbolic pre-classification simultaneously. In the present second part of this work, a novel hybrid (combined deductive and inductive) RS-IUS architecture featuring a symbolic deductive pre-attentive vision first stage is proposed and discussed in terms of: (a) computational theory (system design), (b) information/knowledge representation, (c) algorithm design and (d) implementation. As a proof-of-concept of a symbolic physical model-based pre-attentive vision first stage, the spectral knowledge-based, operational, near real-time, multi-sensor, multi-resolution, application-independent Satellite Image Automatic Mapper™ (SIAM™) is selected from the existing literature. To the best of these authors' knowledge, this is the first time a symbolic syntactic inference system, like SIAM™, is made available to the RS community for operational use in a RS-IUS pre-attentive vision first stage

  19. Patch-based image segmentation of satellite imagery using minimum spanning tree construction

    Energy Technology Data Exchange (ETDEWEB)

    Skurikhin, Alexei N [Los Alamos National Laboratory

    2010-01-01

    We present a method for hierarchical image segmentation and feature extraction. This method builds upon the combination of the detection of image spectral discontinuities using Canny edge detection and the image Laplacian, followed by the construction of a hierarchy of segmented images of successively reduced levels of detail. These images are represented as sets of polygonized pixel patches (polygons) attributed with spectral and structural characteristics. This hierarchy forms the basis for object-oriented image analysis. To build a fine level-of-detail representation of the original image, seed partitions (polygons) are built upon a triangular mesh composed of irregularly sized triangles, whose spatial arrangement is adapted to the image content. This is achieved by building the triangular mesh on top of the detected spectral discontinuities, which form a network of constraints for the Delaunay triangulation. A polygonized image is represented as a spatial network in the form of a graph with vertices which correspond to the polygonal partitions and graph edges reflecting pairwise partition relations. Image graph partitioning is based on iterative graph contraction using Boruvka's Minimum Spanning Tree algorithm. An important characteristic of the approach is that the agglomeration of partitions is constrained by the detected spectral discontinuities; thus the shapes of agglomerated partitions are more likely to correspond to the outlines of real-world objects.
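
    A minimal sketch of the Boruvka-style contraction named above (our own illustration; the paper applies it to a polygon adjacency graph with spectral-difference edge weights):

        # Boruvka's algorithm: each round, every component picks its cheapest
        # outgoing edge, and the touched components are contracted (merged).
        def boruvka_mst(n_vertices, edges):
            # edges: list of (weight, u, v) tuples
            parent = list(range(n_vertices))

            def find(x):  # union-find with path halving
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x

            mst, n_components = [], n_vertices
            while n_components > 1:
                cheapest = {}
                for w, u, v in edges:
                    ru, rv = find(u), find(v)
                    if ru == rv:
                        continue
                    for r in (ru, rv):
                        if r not in cheapest or w < cheapest[r][0]:
                            cheapest[r] = (w, u, v)
                if not cheapest:
                    break  # disconnected graph
                for w, u, v in set(cheapest.values()):
                    ru, rv = find(u), find(v)
                    if ru != rv:
                        parent[ru] = rv
                        mst.append((u, v, w))
                        n_components -= 1
            return mst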

  20. Pluri-IQ: Quantification of Embryonic Stem Cell Pluripotency through an Image-Based Analysis Software

    Directory of Open Access Journals (Sweden)

    Tânia Perestrelo

    2017-08-01

    Full Text Available Image-based assays, such as alkaline phosphatase staining or immunocytochemistry for pluripotent markers, are common methods used in the stem cell field to assess pluripotency. Although an increased number of image-analysis approaches have been described, there is still a lack of software availability to automatically quantify pluripotency in large images after pluripotency staining. To address this need, we developed a robust and rapid image processing software, Pluri-IQ, which allows the automatic evaluation of pluripotency in large low-magnification images. Using mouse embryonic stem cells (mESC) as a model, we combined an automated segmentation algorithm with a supervised machine-learning platform to classify colonies as pluripotent, mixed, or differentiated. In addition, Pluri-IQ allows the automatic comparison between different culture conditions. This efficient user-friendly open-source software can be easily implemented in images derived from pluripotent cells or cells that express pluripotent markers (e.g., OCT4-GFP) and can be routinely used, decreasing image assessment bias.

  1. Automated quantification and sizing of unbranched filamentous cyanobacteria by model based object oriented image analysis

    OpenAIRE

    Zeder, M; Van den Wyngaert, S; Köster, O; Felder, K M; Pernthaler, J

    2010-01-01

    Quantification and sizing of filamentous cyanobacteria in environmental samples or cultures are time-consuming and are often performed by using manual or semiautomated microscopic analysis. Automation of conventional image analysis is difficult because filaments may exhibit great variations in length and patchy autofluorescence. Moreover, individual filaments frequently cross each other in microscopic preparations, as deduced by modeling. This paper describes a novel approach based on object-...

  2. Nanoparticulate NaA zeolite composites for MRI: Effect of iron oxide content on image contrast

    Science.gov (United States)

    Gharehaghaji, Nahideh; Divband, Baharak; Zareei, Loghman

    2018-06-01

    In the current study, Fe3O4/NaA nanocomposites with various amounts of Fe3O4 (3.4, 6.8 and 10.2 wt%) were synthesized and characterized to study the effect of nano iron oxide content on magnetic resonance (MR) image contrast. The cell viability of the nanocomposites was investigated by the MTT assay method. T2 values as well as r2 relaxivities were determined with a 1.5 T MRI scanner. The results of the MTT assay confirmed the nanocomposites' cytocompatibility up to 6.8% iron oxide content. Although the magnetization saturations and susceptibility values of the nanocomposites increased as a function of the iron oxide content, their relaxivity decreased from 921.78 mM-1 s-1 for the nanocomposite with the lowest iron oxide content to 380.16 mM-1 s-1 for the highest one. Therefore, the Fe3O4/NaA nanocomposite with 3.4% iron oxide content led to the best MR image contrast. Nano iron oxide content and dispersion in the nanocomposite structure play an important role in the nanocomposite r2 relaxivity and the MR image contrast. Aggregation of the iron oxide nanoparticles is a limiting factor in the use of nanocomposites with high iron oxide content.
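
    For context, an r2 relaxivity such as the values quoted above is conventionally obtained as the slope of the relaxation rate R2 = 1/T2 against contrast-agent concentration; a sketch with hypothetical numbers (the record reports only the fitted values):

        import numpy as np

        conc_mM = np.array([0.05, 0.1, 0.2, 0.4])   # hypothetical Fe concentrations
        T2_ms = np.array([55.0, 29.0, 15.0, 7.8])   # hypothetical measured T2 values
        R2_per_s = 1000.0 / T2_ms                   # relaxation rate R2 = 1/T2 in s^-1
        r2, intercept = np.polyfit(conc_mM, R2_per_s, 1)
        print(f"r2 relaxivity = {r2:.1f} mM^-1 s^-1")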

  3. An explorative childhood pneumonia analysis based on ultrasonic imaging texture features

    Science.gov (United States)

    Zenteno, Omar; Diaz, Kristians; Lavarello, Roberto; Zimic, Mirko; Correa, Malena; Mayta, Holger; Anticona, Cynthia; Pajuelo, Monica; Oberhelman, Richard; Checkley, William; Gilman, Robert H.; Figueroa, Dante; Castañeda, Benjamín.

    2015-12-01

    According to the World Health Organization, pneumonia is the respiratory disease with the highest pediatric mortality rate, accounting for 15% of all deaths of children under 5 years old worldwide. The diagnosis of pneumonia is commonly made by clinical criteria with support from ancillary studies and laboratory findings. Chest imaging is commonly done with chest X-rays and occasionally with a chest CT scan. Lung ultrasound is a promising alternative for chest imaging; however, interpretation is subjective and requires adequate training. In the present work, a two-class classification algorithm based on four gray-level co-occurrence matrix texture features (i.e., contrast, correlation, energy and homogeneity) extracted from lung ultrasound images of children aged between six months and five years is presented. Ultrasound data were collected using an L14-5/38 linear transducer. The data consisted of 22 positive- and 68 negative-diagnosed B-mode cine loops selected by a medical expert and captured in the facilities of the Instituto Nacional de Salud del Niño (Lima, Peru), for a total of 90 videos obtained from twelve children diagnosed with pneumonia. The classification capacity of each feature was explored independently, and the optimal threshold was selected by receiver operating characteristic (ROC) curve analysis. In addition, a principal component analysis was performed to evaluate the combined performance of all the features. Contrast and correlation proved to be the two most significant features. The classification performance of these two features combined by principal components was then evaluated. The results revealed 82% sensitivity, 76% specificity, 78% accuracy and 0.85 area under the ROC curve.
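
    A sketch of the four GLCM features named in this record, using scikit-image (recent releases spell the functions graycomatrix/graycoprops; older ones use the grey- spelling):

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_features(us_image_8bit):
            # us_image_8bit: 2-D uint8 ultrasound image (values 0-255)
            glcm = graycomatrix(us_image_8bit, distances=[1],
                                angles=[0, np.pi / 2], levels=256,
                                symmetric=True, normed=True)
            return {prop: graycoprops(glcm, prop).mean()
                    for prop in ("contrast", "correlation", "energy", "homogeneity")}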

  4. Evaluating the spatio-temporal performance of sky-imager-based solar irradiance analysis and forecasts

    Science.gov (United States)

    Schmidt, Thomas; Kalisch, John; Lorenz, Elke; Heinemann, Detlev

    2016-03-01

    Clouds are the dominant source of small-scale variability in surface solar radiation and uncertainty in its prediction. However, the increasing share of solar energy in the worldwide electric power supply increases the need for accurate solar radiation forecasts. In this work, we present results of a very short term global horizontal irradiance (GHI) forecast experiment based on hemispheric sky images. A 2-month data set with images from one sky imager and high-resolution GHI measurements from 99 pyranometers distributed over 10 km by 12 km is used for validation. We developed a multi-step model and processed GHI forecasts up to 25 min with an update interval of 15 s. A cloud type classification is used to separate the time series into different cloud scenarios. Overall, the sky-imager-based forecasts do not outperform the reference persistence forecasts. Nevertheless, we find that analysis and forecast performance depends strongly on the predominant cloud conditions. Especially convective type clouds lead to high temporal and spatial GHI variability. For cumulus cloud conditions, the analysis error is found to be lower than that introduced by a single pyranometer if it is used representatively for the whole area in distances from the camera larger than 1-2 km. Moreover, forecast skill is much higher for these conditions compared to overcast or clear sky situations causing low GHI variability, which is easier to predict by persistence. In order to generalize the cloud-induced forecast error, we identify a variability threshold indicating conditions with positive forecast skill.
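
    Forecast skill against persistence, as referred to above, is commonly computed as follows (the exact definition used in the study is our assumption):

        import numpy as np

        def forecast_skill(ghi_obs, ghi_forecast, ghi_persistence):
            # skill > 0 means the forecast beats the persistence reference
            rmse = lambda a, b: np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2))
            return 1.0 - rmse(ghi_obs, ghi_forecast) / rmse(ghi_obs, ghi_persistence)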

  5. Evaluating the spatio-temporal performance of sky-imager-based solar irradiance analysis and forecasts

    Directory of Open Access Journals (Sweden)

    T. Schmidt

    2016-03-01

    Full Text Available Clouds are the dominant source of small-scale variability in surface solar radiation and uncertainty in its prediction. However, the increasing share of solar energy in the worldwide electric power supply increases the need for accurate solar radiation forecasts. In this work, we present results of a very short term global horizontal irradiance (GHI) forecast experiment based on hemispheric sky images. A 2-month data set with images from one sky imager and high-resolution GHI measurements from 99 pyranometers distributed over 10 km by 12 km is used for validation. We developed a multi-step model and processed GHI forecasts up to 25 min with an update interval of 15 s. A cloud type classification is used to separate the time series into different cloud scenarios. Overall, the sky-imager-based forecasts do not outperform the reference persistence forecasts. Nevertheless, we find that analysis and forecast performance depends strongly on the predominant cloud conditions. Especially convective type clouds lead to high temporal and spatial GHI variability. For cumulus cloud conditions, the analysis error is found to be lower than that introduced by a single pyranometer if it is used representatively for the whole area in distances from the camera larger than 1–2 km. Moreover, forecast skill is much higher for these conditions compared to overcast or clear sky situations causing low GHI variability, which is easier to predict by persistence. In order to generalize the cloud-induced forecast error, we identify a variability threshold indicating conditions with positive forecast skill.

  6. Evaluating the spatio-temporal performance of sky imager based solar irradiance analysis and forecasts

    Science.gov (United States)

    Schmidt, T.; Kalisch, J.; Lorenz, E.; Heinemann, D.

    2015-10-01

    Clouds are the dominant source of variability in surface solar radiation and uncertainty in its prediction. However, the increasing share of solar energy in the worldwide electric power supply increases the need for accurate solar radiation forecasts. In this work, we present results of a very short term global horizontal irradiance (GHI) forecast experiment based on hemispheric sky images. A two-month dataset with images from one sky imager and high-resolution GHI measurements from 99 pyranometers distributed over 10 km by 12 km is used for validation. We developed a multi-step model and processed GHI forecasts up to 25 min with an update interval of 15 s. A cloud type classification is used to separate the time series into different cloud scenarios. Overall, the sky imager based forecasts do not outperform the reference persistence forecasts. Nevertheless, we find that analysis and forecast performance depend strongly on the predominant cloud conditions. Especially convective type clouds lead to high temporal and spatial GHI variability. For cumulus cloud conditions, the analysis error is found to be lower than that introduced by a single pyranometer if it is used representatively for the whole area in distances from the camera larger than 1-2 km. Moreover, forecast skill is much higher for these conditions compared to overcast or clear sky situations causing low GHI variability, which is easier to predict by persistence. In order to generalize the cloud-induced forecast error, we identify a variability threshold indicating conditions with positive forecast skill.

  7. Data Extraction Based on Page Structure Analysis

    Directory of Open Access Journals (Sweden)

    Ren Yichao

    2017-01-01

    Full Text Available The information we need is often dispersed and organized in differing structures. In addition, because of unstructured data such as natural language and images, extracting local content from pages is extremely difficult. In light of the problems above, this article applies a method that combines a page structure analysis algorithm with a page data extraction algorithm to accomplish the gathering of network data. In this way, the poor performance of traditional, complex extraction models on large-scale data is addressed, and page data extraction efficiency is substantially improved. The article also compares pages and content of different types between methods based on the page DOM structure and methods based on regularities in the distribution of HTML. From this comparison, a more efficient extraction method can be identified.
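
    A minimal sketch of structure-guided extraction with Python's standard library (the article does not specify an implementation; the tag choices here are illustrative):

        from html.parser import HTMLParser

        class ContentExtractor(HTMLParser):
            CONTENT_TAGS = {"p", "h1", "h2", "article"}  # assumed content-bearing tags

            def __init__(self):
                super().__init__()
                self.depth, self.chunks = 0, []

            def handle_starttag(self, tag, attrs):
                if tag in self.CONTENT_TAGS:
                    self.depth += 1

            def handle_endtag(self, tag):
                if tag in self.CONTENT_TAGS and self.depth:
                    self.depth -= 1

            def handle_data(self, data):
                if self.depth and data.strip():
                    self.chunks.append(data.strip())

        extractor = ContentExtractor()
        extractor.feed("<article><p>Example body text.</p></article>")
        print(extractor.chunks)  # ['Example body text.']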

  8. Ultrasonic image analysis and image-guided interventions.

    Science.gov (United States)

    Noble, J Alison; Navab, Nassir; Becher, H

    2011-08-06

    The fields of medical image analysis and computer-aided interventions deal with reducing the large volume of digital images (X-ray, computed tomography, magnetic resonance imaging (MRI), positron emission tomography and ultrasound (US)) to more meaningful clinical information using software algorithms. US is a core imaging modality employed in these areas, both in its own right and used in conjunction with the other imaging modalities. It is receiving increased interest owing to the recent introduction of three-dimensional US, significant improvements in US image quality, and better understanding of how to design algorithms which exploit the unique strengths and properties of this real-time imaging modality. This article reviews the current state of art in US image analysis and its application in image-guided interventions. The article concludes by giving a perspective from clinical cardiology which is one of the most advanced areas of clinical application of US image analysis and describing some probable future trends in this important area of ultrasonic imaging research.

  9. Hand-Based Biometric Analysis

    Science.gov (United States)

    Bebis, George (Inventor); Amayeh, Gholamreza (Inventor)

    2015-01-01

    Hand-based biometric analysis systems and techniques are described which provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, or translation of an input image. Additionally, the analysis utilizes re-use of commonly-seen terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.

  10. A Survey on Portuguese Lexical Knowledge Bases: Contents, Comparison and Combination

    Directory of Open Access Journals (Sweden)

    Hugo Gonçalo Oliveira

    2018-02-01

    Full Text Available In the last decade, several lexical-semantic knowledge bases (LKBs) were developed for Portuguese, by different teams and following different approaches. Most of them are open and freely available for the community. Those LKBs are briefly analysed here, with a focus on size, structure, and overlapping contents. However, we go further and exploit all of the analysed LKBs in the creation of new LKBs, based on the redundant contents. Both original and redundancy-based LKBs are then compared, indirectly, based on the performance of automatic procedures that exploit them for solving four different semantic analysis tasks. In addition to conclusions on the performance of the original LKBs, results show that, instead of selecting a single LKB to use, it is generally worth combining the contents of all the open Portuguese LKBs, towards better results.

  11. A method for volume determination of the orbit and its contents by high resolution axial tomography and quantitative digital image analysis.

    Science.gov (United States)

    Cooper, W C

    1985-01-01

    The various congenital and acquired conditions which alter orbital volume are reviewed. Previous investigative work to determine orbital capacity is summarized. Since these studies were confined to postmortem evaluations, the need for a technique to measure orbital volume in the living state is presented. A method for volume determination of the orbit and its contents by high-resolution axial tomography and quantitative digital image analysis is reported. This procedure has proven to be accurate (the discrepancy between direct and computed measurements ranged from 0.2% to 4%) and reproducible (greater than 98%). The application of this method to representative clinical problems is presented and discussed. The establishment of a diagnostic system versatile enough to expand the usefulness of computerized axial tomography and polytomography should add a new dimension to ophthalmic investigation and treatment.
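
    The core computation is simple once the orbit has been segmented on each axial slice: count voxels and scale by voxel volume. A sketch under assumed inputs (the 1985 study used dedicated digital image analysis hardware rather than code of this kind):

        import numpy as np

        def orbit_volume_cm3(masks, pixel_mm, slice_mm):
            # masks: (n_slices, H, W) boolean array of the segmented orbit
            voxel_mm3 = pixel_mm * pixel_mm * slice_mm
            return masks.sum() * voxel_mm3 / 1000.0  # mm^3 -> cm^3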

  12. A Key Gene, PLIN1, Can Affect Porcine Intramuscular Fat Content Based on Transcriptome Analysis.

    Science.gov (United States)

    Li, Bojiang; Weng, Qiannan; Dong, Chao; Zhang, Zengkai; Li, Rongyang; Liu, Jingge; Jiang, Aiwen; Li, Qifa; Jia, Chao; Wu, Wangjun; Liu, Honglin

    2018-04-04

    Intramuscular fat (IMF) content is an important indicator for meat quality evaluation. However, the key genes and molecular regulatory mechanisms affecting IMF deposition remain unclear. In the present study, we identified 75 differentially expressed genes (DEGs) between the higher (H) and lower (L) IMF content of pigs using transcriptome analysis, of which 27 were upregulated and 48 were downregulated. Notably, Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis indicated that the DEG perilipin-1 ( PLIN1 ) was significantly enriched in the fat metabolism-related peroxisome proliferator-activated receptor (PPAR) signaling pathway. Furthermore, we determined the expression patterns and functional role of porcine PLIN1. Our results indicate that PLIN1 was highly expressed in porcine adipose tissue, and its expression level was significantly higher in the H IMF content group when compared with the L IMF content group, and expression was increased during adipocyte differentiation. Additionally, our results confirm that PLIN1 knockdown decreases the triglyceride (TG) level and lipid droplet (LD) size in porcine adipocytes. Overall, our data identify novel candidate genes affecting IMF content and provide new insight into PLIN1 in porcine IMF deposition and adipocyte differentiation.

  13. A Key Gene, PLIN1, Can Affect Porcine Intramuscular Fat Content Based on Transcriptome Analysis

    Directory of Open Access Journals (Sweden)

    Bojiang Li

    2018-04-01

    Full Text Available Intramuscular fat (IMF) content is an important indicator for meat quality evaluation. However, the key genes and molecular regulatory mechanisms affecting IMF deposition remain unclear. In the present study, we identified 75 differentially expressed genes (DEGs) between the higher (H) and lower (L) IMF content of pigs using transcriptome analysis, of which 27 were upregulated and 48 were downregulated. Notably, Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis indicated that the DEG perilipin-1 (PLIN1) was significantly enriched in the fat metabolism-related peroxisome proliferator-activated receptor (PPAR) signaling pathway. Furthermore, we determined the expression patterns and functional role of porcine PLIN1. Our results indicate that PLIN1 was highly expressed in porcine adipose tissue, and its expression level was significantly higher in the H IMF content group when compared with the L IMF content group, and expression was increased during adipocyte differentiation. Additionally, our results confirm that PLIN1 knockdown decreases the triglyceride (TG) level and lipid droplet (LD) size in porcine adipocytes. Overall, our data identify novel candidate genes affecting IMF content and provide new insight into PLIN1 in porcine IMF deposition and adipocyte differentiation.

  14. Quantitative image of bone mineral content

    International Nuclear Information System (INIS)

    Katoh, Tsuguhisa

    1990-01-01

    A dual energy subtraction system was constructed on an experimental basis for the quantitative image of bone mineral content. The system consists of a radiographing system and an image processor. Two radiograms were taken with dual x-ray energy in a single exposure using an x-ray beam dichromized by a tin filter. In this system, a film cassette was used where a low speed film-screen system, a copper filter and a high speed film-screen system were layered on top of each other. The images were read by a microdensitometer and processed by a personal computer. The image processing included the corrections of the film characteristics and heterogeneity in the x-ray field, and the dual energy subtraction in which the effect of the high energy component of the dichromized beam on the tube side image was corrected. In order to determine the accuracy of the system, experiments using wedge phantoms made of mixtures of epoxy resin and bone mineral-equivalent materials in various fractions were performed for various tube potentials and film processing conditions. The results indicated that the relative precision of the system was within ±4% and that the propagation of the film noise was within ±11 mg/cm² for the 0.2 mm pixels. The results also indicated that the system response was independent of the tube potential and the film processing condition. The bone mineral weight in each phalanx of the freshly dissected hand of a rhesus monkey was measured by this system and compared with the ash weight. The results showed an error of ±10%, slightly larger than that of phantom experiments, which is probably due to the effect of fat and the variation of focus-object distance. The air kerma in free air at the object was approximately 0.5 mGy for one exposure. The results indicate that this system is applicable to clinical use and provides useful information for evaluating a time-course of localized bone disease. (author)
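
    For reference, the subtraction step can be written with a standard two-material attenuation model (our formulation; the paper's exact correction terms are not reproduced here). With log-signals $l_E = \ln(I_E / I_{0,E})$ at the low ($L$) and high ($H$) energies,

        l_E = -\mu_{b,E}\, t_b - \mu_{s,E}\, t_s, \qquad E \in \{L, H\},

    the bone-equivalent thickness follows by solving the two equations:

        t_b = \frac{\mu_{s,H}\, l_L - \mu_{s,L}\, l_H}{\mu_{s,L}\,\mu_{b,H} - \mu_{s,H}\,\mu_{b,L}},

    where $\mu_{b,E}$ and $\mu_{s,E}$ are the bone and soft-tissue attenuation coefficients at each energy.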

  15. Quantifying biodiversity using digital cameras and automated image analysis.

    Science.gov (United States)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time consuming aspect - the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and information that is being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600m. Rainfall is high, and in most areas the soil consists of deep peat (1m to 3m), populated by a mix of heather, mosses and sedges. The cameras have been continuously in operation over a 6-month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial intelligence based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets in the collected data. By converting digital image data into statistical composite data it can be handled in a similar way to other biodiversity statistics thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions to simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions, and

  16. Multi-Connection Pattern Analysis: Decoding the representational content of neural communication.

    Science.gov (United States)

    Li, Yuanning; Richardson, Robert Mark; Ghuman, Avniel Singh

    2017-11-15

    The lack of multivariate methods for decoding the representational content of interregional neural communication has made it difficult to know what information is represented in distributed brain circuit interactions. Here we present Multi-Connection Pattern Analysis (MCPA), which works by learning mappings between the activity patterns of the populations as a function of the information being processed. These maps are used to predict the activity of one neural population based on the activity of the other population. Successful MCPA-based decoding indicates the involvement of distributed computational processing and provides a framework for probing the representational structure of the interaction. Simulations demonstrate the efficacy of MCPA in realistic circumstances. In addition, we demonstrate that MCPA can be applied to different signal modalities to evaluate a variety of hypotheses associated with information coding in neural communication. We apply MCPA to fMRI and human intracranial electrophysiological data to provide a proof-of-concept of the utility of this method for decoding individual natural images and faces in functional connectivity data. We further use an MCPA-based representational similarity analysis to illustrate how MCPA may be used to test computational models of information transfer among regions of the visual processing stream. Thus, MCPA can be used to assess the information represented in the coupled activity of interacting neural circuits and to probe the underlying principles of information transformation between regions. Copyright © 2017 Elsevier Inc. All rights reserved.
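
    A conceptual sketch of the mapping-and-decoding idea described above (our reading of the record, not the authors' code): fit one linear map per stimulus condition from region-A patterns to region-B patterns, then decode a test trial by the condition whose map predicts region B best.

        import numpy as np
        from sklearn.linear_model import Ridge

        def fit_condition_maps(trials):
            # trials: {condition: (X_regionA, Y_regionB)} with 2-D trial arrays
            return {c: Ridge(alpha=1.0).fit(X, Y) for c, (X, Y) in trials.items()}

        def decode(maps, x_test, y_test):
            # pick the condition whose learned map best predicts region B
            errors = {c: np.mean((m.predict(x_test[None, :]) - y_test) ** 2)
                      for c, m in maps.items()}
            return min(errors, key=errors.get)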

  17. Automated image-based assay for evaluation of HIV neutralization and cell-to-cell fusion inhibition.

    Science.gov (United States)

    Sheik-Khalil, Enas; Bray, Mark-Anthony; Özkaya Şahin, Gülsen; Scarlatti, Gabriella; Jansson, Marianne; Carpenter, Anne E; Fenyö, Eva Maria

    2014-08-30

    Standardized techniques to detect HIV-neutralizing antibody responses are of great importance in the search for an HIV vaccine. Here, we present a high-throughput, high-content automated plaque reduction (APR) assay based on automated microscopy and image analysis that allows evaluation of neutralization and inhibition of cell-cell fusion within the same assay. Neutralization of virus particles is measured as a reduction in the number of fluorescent plaques, and inhibition of cell-cell fusion as a reduction in plaque area. We found neutralization strength to be a significant factor in the ability of virus to form syncytia. Further, we introduce the inhibitory concentration of plaque area reduction (ICpar) as an additional measure of antiviral activity, i.e. fusion inhibition. We present an automated image-based high-throughput, high-content HIV plaque reduction assay. This allows, for the first time, simultaneous evaluation of neutralization and inhibition of cell-cell fusion within the same assay, by quantifying the reduction in number of plaques and mean plaque area, respectively. Inhibition of cell-to-cell fusion requires higher quantities of inhibitory reagent than inhibition of virus neutralization.

  18. Automated microscopy for high-content RNAi screening

    Science.gov (United States)

    2010-01-01

    Fluorescence microscopy is one of the most powerful tools to investigate complex cellular processes such as cell division, cell motility, or intracellular trafficking. The availability of RNA interference (RNAi) technology and automated microscopy has opened the possibility to perform cellular imaging in functional genomics and other large-scale applications. Although imaging often dramatically increases the content of a screening assay, it poses new challenges to achieve accurate quantitative annotation and therefore needs to be carefully adjusted to the specific needs of individual screening applications. In this review, we discuss principles of assay design, large-scale RNAi, microscope automation, and computational data analysis. We highlight strategies for imaging-based RNAi screening adapted to different library and assay designs. PMID:20176920

  19. Kernel based subspace projection of hyperspectral images

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten

    In hyperspectral image analysis an exploratory approach to analyse the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel based subspace projections of PCA and Maximum Autocorrelation Factors (MAF...
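
    A short sketch of the kernel PCA half of the approach using scikit-learn (kernel MAF has no standard library counterpart, so it is omitted; the RBF kernel and its width are assumptions):

        from sklearn.decomposition import KernelPCA

        def kernel_pca_projection(X, n_components=5):
            # X: (n_pixels, n_bands) matrix unfolded from a hyperspectral cube
            kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=1e-3)
            return kpca.fit_transform(X)  # (n_pixels, n_components) scores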

  20. Radiosurgical treatment planning for intracranial AVM based on images generated by principal component analysis. A simulation study

    International Nuclear Information System (INIS)

    Kawaguchi, Osamu; Kunieda, Etsuo; Nyui, Yoshiyuki

    2009-01-01

    One of the most important factors in stereotactic radiosurgery (SRS) for intracranial arteriovenous malformation (AVM) is to determine accurate target delineation of the nidus. However, since intracranial AVMs are complicated in structure, it is often difficult to clearly determine the target delineation. The purpose of this study was to investigate the usefulness of principal component analysis (PCA) on intra-arterial contrast enhanced dynamic CT (IADCT) images as a tool for delineating accurate target volumes for stereotactic radiosurgery of AVMs. IADCT and intravenous contrast-enhanced CT (IVCT) were used to examine 4 randomly selected cases of AVM. PCA images were generated from the IADCT data. The first component images were considered feeding artery predominant, the second component images were considered draining vein predominant, and the third component images were considered background. Target delineations were first carried out from IVCT, and then again while referring to the first and second components of the PCA images. Dose calculation simulations for radiosurgical treatment plans with IVCT and PCA images were performed. Dose volume histograms of the vein areas as well as the target volumes were compared. In all cases, the calculated target volumes based on IVCT images were larger than those based on PCA images, and the irradiation doses for the vein areas were reduced. In this study, we simulated radiosurgical treatment planning for intracranial AVM based on PCA images. By using PCA images, the irradiation doses for the vein areas were substantially reduced. (author)
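
    A sketch of how such component images can be generated from a dynamic series (assumed data layout; the study's preprocessing is not reproduced): treat each time frame as one observation, so the principal components come out as spatial maps.

        import numpy as np
        from sklearn.decomposition import PCA

        def pca_component_images(dynamic_ct, n_components=3):
            # dynamic_ct: (n_frames, H, W) array of IADCT frames
            t, h, w = dynamic_ct.shape
            X = dynamic_ct.reshape(t, h * w).astype(float)
            pca = PCA(n_components=n_components).fit(X)
            return pca.components_.reshape(n_components, h, w)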

  1. Ash content of lignites - radiometric analysis

    International Nuclear Information System (INIS)

    Leonhardt, J.; Thuemmel, H.W.

    1986-01-01

    The quality of lignites is governed by the ash content varying in dependence upon the geologic conditions. Setup and function of the radiometric devices being used for ash content analysis in the GDR are briefly described

  2. A data grid for imaging-based clinical trials

    Science.gov (United States)

    Zhou, Zheng; Chao, Sander S.; Lee, Jasper; Liu, Brent; Documet, Jorge; Huang, H. K.

    2007-03-01

    Clinical trials play a crucial role in testing new drugs or devices in modern medicine. Medical imaging has also become an important tool in clinical trials because images provide a unique and fast diagnosis with visual observation and quantitative assessment. A typical imaging-based clinical trial consists of: 1) a well-defined rigorous clinical trial protocol; 2) a radiology core that has a quality control mechanism, a biostatistics component, and a server for storing and distributing data and analysis results; and 3) many field sites that generate and send image studies to the radiology core. As the number of clinical trials increases, it becomes a challenge for a radiology core servicing multiple trials to have a server robust enough to administer and quickly distribute information to participating radiologists/clinicians worldwide. The Data Grid can satisfy the aforementioned requirements of imaging-based clinical trials. In this paper, we present a Data Grid architecture for imaging-based clinical trials. A Data Grid prototype has been implemented in the Image Processing and Informatics (IPI) Laboratory at the University of Southern California to test and evaluate performance in storing trial images and analysis results for a clinical trial. The implementation methodology and evaluation protocol of the Data Grid are presented.

  3. A novel automatic quantification method for high-content screening analysis of DNA double strand-break response.

    Science.gov (United States)

    Feng, Jingwen; Lin, Jie; Zhang, Pengquan; Yang, Songnan; Sa, Yu; Feng, Yuanming

    2017-08-29

    High-content screening is commonly used in studies of the DNA damage response. The double-strand break (DSB) is one of the most harmful types of DNA damage lesions. The conventional method used to quantify DSBs is γH2AX foci counting, which requires manual adjustment and preset parameters and is usually regarded as imprecise, time-consuming, poorly reproducible, and inaccurate. Therefore, a robust automatic alternative method is highly desired. In this manuscript, we present a new method for quantifying DSBs which involves automatic image cropping, automatic foci-segmentation and fluorescent intensity measurement. Furthermore, an additional function was added for standardizing the measurement of DSB response inhibition based on co-localization analysis. We tested the method with a well-known inhibitor of DSB response. The new method requires only one preset parameter, which effectively minimizes operator-dependent variations. Compared with conventional methods, the new method detected a higher percentage difference of foci formation between different cells, which can improve measurement accuracy. The effects of the inhibitor on DSB response were successfully quantified with the new method (p = 0.000). The advantages of this method in terms of reliability, automation and simplicity show its potential in quantitative fluorescence imaging studies and high-content screening for compounds and factors involved in DSB response.
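
    A hedged sketch of the foci-segmentation and intensity step (the paper's pipeline and its single preset parameter are not reproduced; Otsu thresholding stands in for the actual segmentation):

        from skimage.filters import threshold_otsu
        from skimage.measure import label, regionprops

        def quantify_foci(nucleus_img):
            # nucleus_img: 2-D gammaH2AX channel cropped to a single nucleus
            mask = nucleus_img > threshold_otsu(nucleus_img)
            foci = regionprops(label(mask), intensity_image=nucleus_img)
            total_intensity = sum(r.mean_intensity * r.area for r in foci)
            return len(foci), total_intensity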

  4. Silhouette-based approach of 3D image reconstruction for automated image acquisition using robotic arm

    Science.gov (United States)

    Azhar, N.; Saad, W. H. M.; Manap, N. A.; Saad, N. M.; Syafeeza, A. R.

    2017-06-01

    This study presents an approach to 3D image reconstruction using an autonomous robotic arm for the image acquisition process. A low-cost automated imaging platform is created using a pair of G15 servo motors connected in series to an Arduino UNO as the main microcontroller. Two sets of sequential images were obtained using different projection angles of the camera. The silhouette-based approach is used in this study for 3D reconstruction from the sequential images captured from several different angles of the object. In addition, an analysis of the effect of different numbers of sequential images on the accuracy of the 3D model reconstruction was carried out with a fixed projection angle of the camera. The factors affecting the 3D reconstruction are discussed, and the overall result of the analysis is summarised for the imaging platform prototype.

  5. Interpretation of medical images by model guided analysis

    International Nuclear Information System (INIS)

    Karssemeijer, N.

    1989-01-01

    Progress in the development of digital pictorial information systems stimulates a growing interest in the use of image analysis techniques in medicine. Especially when precise quantitative information is required, the use of fast and reproducible computer analysis may be more appropriate than relying on visual judgement only. Such quantitative information can be valuable, for instance, in diagnostics or in irradiation therapy planning. As medical images are mostly recorded in a prescribed way, human anatomy guarantees a common image structure for each particular type of exam. In this thesis it is investigated how to make use of this a priori knowledge to guide image analysis. For that purpose, models are developed which are suited to capture common image structure. The first part of this study is devoted to an analysis of nuclear medicine images of myocardial perfusion. In ch. 2 a model of these images is designed in order to represent characteristic image properties. It is shown that for these relatively simple images a compact symbolic description can be achieved, without significant loss of diagnostically important image properties. Possibilities for automatic interpretation of more complex images are investigated in the following chapters. The central topic is segmentation of organs. Two methods are proposed and tested on a set of abdominal X-ray CT scans. Ch. 3 describes a serial approach based on a semantic network and the use of search areas. Relational constraints are used to guide the image processing and to classify detected image segments. In chapters 4 and 5 a more general parallel approach is utilized, based on a Markov random field image model. A stochastic model used to represent prior knowledge about the spatial arrangement of organs is implemented as an external field. (author). 66 refs.; 27 figs.; 6 tabs

  6. Content Analysis of the Concept of Addiction in High School Textbooks of Iran.

    Science.gov (United States)

    Mirzamohammadi, Mohammad Hasan; Mousavi, Sayedeh Zainab; Massah, Omid; Farhoudian, Ali

    2017-01-01

    This research sought to determine how well the causes of addiction, addiction harms, and prevention of addiction have been covered in high school textbooks. We used a descriptive method to select the main components of the addiction concept and a content analysis method for analyzing the content of the textbooks. The study population comprised 61 secondary school curriculum textbooks, and the study sample consisted of 14 secondary school textbooks selected by purposeful sampling. The tool for collecting data was a "content analysis inventory" whose validity was confirmed by educational and social sciences experts and whose reliability was found to be 91%. About 67 components were prepared for content analysis and were divided into 3 categories: causes, harms, and prevention of addiction. The analysis units in this study comprised phrases, topics, examples, course topics, words, poems, images, questions, tables, and exercises. Results of the study showed that the components of the addiction concept were presented in 212 remarks in the textbooks. The degree of attention given to each of the 3 main components of the addiction concept was as follows: causes, 52 (24.52%) remarks; harms, 89 (41.98%) remarks; and prevention, 71 (33.49%) remarks. In high school textbooks, little attention has been paid to the concept of addiction, and mostly its biological dimension was addressed, while the social, personal, familial, and religious dimensions of addiction have been neglected.

  7. Page Layout Analysis of the Document Image Based on the Region Classification in a Decision Hierarchical Structure

    Directory of Open Access Journals (Sweden)

    Hossein Pourghassem

    2010-10-01

    Full Text Available The conversion of a document image to its electronic version is a very important problem in saving, searching and retrieval applications in office automation systems. For this purpose, analysis of the document image is necessary. In this paper, a hierarchical classification structure based on a two-stage segmentation algorithm is proposed. In this structure, the image is segmented using the proposed two-stage segmentation algorithm. Then, the type of the image regions, such as document and non-document regions, is determined using multiple classifiers in the hierarchical classification structure. The proposed segmentation algorithm uses two algorithms based on wavelet transform and thresholding. Texture features such as correlation, homogeneity and entropy extracted from the co-occurrence matrix, together with two new features based on the wavelet transform, are used to classify and label the regions of the image. The hierarchical classifier consists of two Multilayer Perceptron (MLP) classifiers and a Support Vector Machine (SVM) classifier. The proposed algorithm is evaluated on a database of document and non-document images collected from the Internet. The experimental results show the efficiency of the proposed approach in region segmentation and classification. The proposed algorithm achieves an accuracy rate of 97.5% in classification of the regions.

  8. Image seedling analysis to evaluate tomato seed physiological potential

    Directory of Open Access Journals (Sweden)

    Vanessa Neumann Silva

    Full Text Available Computerized seedling image analysis is one of the most recent techniques to detect differences in vigor between seed lots. The aim of this study was to verify the ability of computerized seedling image analysis by SVIS® to detect differences in vigor between tomato seed lots, compared with the information provided by traditional vigor tests. Ten lots of tomato seeds, cultivar Santa Clara, were stored for 12 months in a controlled environment at 20 ± 1 ºC and 45-50% relative humidity. The moisture content of the seeds was monitored and the physiological potential tested at 0, 6 and 12 months of storage, with the germination test, first count of germination, traditional accelerated ageing and accelerated ageing with saturated salt solution, electrical conductivity, seedling emergence, and the seed vigor imaging system (SVIS®). A completely randomized experimental design was used with four replications. The parameters obtained by computerized seedling analysis (seedling length and indexes of vigor and seedling growth) with the SVIS® software are efficient in detecting differences between tomato seed lots of high and low vigor.

  9. Performance Evaluation of Frequency Transform Based Block Classification of Compound Image Segmentation Techniques

    Science.gov (United States)

    Selwyn, Ebenezer Juliet; Florinabel, D. Jemi

    2018-04-01

    Compound image segmentation plays a vital role in the compression of computer screen images. Computer screen images are images mixed with textual, graphical, or pictorial content. In this paper, we present a comparison of two transform-based block classification approaches for compound images, based on metrics such as speed of classification, precision and recall rate. Block-based classification approaches normally divide the compound images into non-overlapping blocks of fixed size. Then a frequency transform such as the Discrete Cosine Transform (DCT) or Discrete Wavelet Transform (DWT) is applied over each block. Mean and standard deviation are computed for each 8 × 8 block and are used as a feature set to classify the compound images into text/graphics and picture/background blocks. The classification accuracy of block-classification-based segmentation techniques is measured by evaluation metrics such as precision and recall rate. Compound images with smooth backgrounds and with complex backgrounds containing text of varying size, colour and orientation are considered for testing. Experimental evidence shows that the DWT-based segmentation provides a significant improvement in recall rate and precision rate, approximately 2.3% over DCT-based segmentation, with an increase in block classification time for both smooth and complex background images.
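
    A sketch of the block feature step described above: tile the image into 8 × 8 blocks, apply a 2-D DCT, and take the mean and standard deviation of the coefficients as features (the classification rule itself is not reproduced here).

        import numpy as np
        from scipy.fft import dctn

        def block_dct_features(gray, block=8):
            # gray: 2-D grayscale image array; returns one (mean, std) row per block
            h, w = gray.shape
            feats = []
            for y in range(0, h - h % block, block):
                for x in range(0, w - w % block, block):
                    coeffs = dctn(gray[y:y + block, x:x + block].astype(float),
                                  norm="ortho")
                    feats.append((coeffs.mean(), coeffs.std()))
            return np.array(feats)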

  10. Object-based image analysis and data mining for building ontology of informal urban settlements

    Science.gov (United States)

    Khelifa, Dejrriri; Mimoun, Malki

    2012-11-01

    During recent decades, unplanned settlements have appeared around the big cities in most developing countries and, as a consequence, numerous problems have emerged. Thus the identification of different kinds of settlements is a major concern and challenge for the authorities of many countries. Very High Resolution (VHR) remotely sensed imagery has proved to be a very promising way to detect different kinds of settlements, especially through the use of new object-based image analysis (OBIA). The most important key is in understanding what characteristics make unplanned settlements differ from planned ones; most experts characterize unplanned urban areas by small building sizes at high densities, no orderly road arrangement and a lack of green spaces. Knowledge about different kinds of settlements can be captured as a domain ontology that has the potential to organize knowledge in a formal, understandable and sharable way. In this work we focus on extracting knowledge from VHR images and experts' knowledge. We used an object-based strategy by segmenting a VHR image taken over an urban area into regions of homogenous pixels at an adequate scale level and then computing spectral, spatial and textural attributes for each region to create objects. Genetic-based data mining was applied to generate highly predictive and comprehensible classification rules based on selected samples from the OBIA result. Optimized intervals of relevant attributes are found and linked with land use types to form classification rules. The unplanned areas were separated from the planned ones through analysis of the line segments detected from the input image. Finally a simple ontology was built based on the previous processing steps. The approach has been tested on VHR images of one of the biggest Algerian cities, which has grown considerably in recent decades.

  11. Image analysis for gene expression based phenotype characterization in yeast cells

    NARCIS (Netherlands)

    Tleis, M.

    2016-01-01

    Image analysis of objects in the microscope scale requires accuracy so that measurements can be used to differentiate between groups of objects that are being studied. This thesis deals with measurements in yeast biology that are obtained through microscope images. We study the algorithms and

  12. Image formation and image analysis in electron microscopy

    International Nuclear Information System (INIS)

    Heel, M. van.

    1981-01-01

    This thesis covers various aspects of image formation and image analysis in electron microscopy. The imaging of relatively strong objects in partially coherent illumination, the coherence properties of thermionic emission sources, and the detection of objects in quantum-noise-limited images are considered. IMAGIC, a fast, flexible and friendly image analysis software package, is described. Intelligent averaging of molecular images is discussed. (C.F.)

  13. GRAIN-SIZE MEASUREMENTS OF FLUVIAL GRAVEL BARS USING OBJECT-BASED IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Pedro Castro

    2018-01-01

    Full Text Available Traditional techniques for classifying the average grain size in gravel bars require manual measurement of each grain's diameter. Aiming at productivity, more efficient methods have been developed by applying remote sensing techniques and digital image processing. This research proposes an Object-Based Image Analysis methodology to classify gravel bars in fluvial channels. First, the study evaluates the performance of the multiresolution segmentation algorithm (available in the eCognition Developer software) in performing shape recognition. A linear regression model was applied to assess the correlation between the gravels' reference delineation and the gravels recognized by the segmentation algorithm. Furthermore, the supervised classification was validated by comparing the results with field data using the t-statistic test and the kappa index. Afterwards, the grain size distribution in gravel bars along the upper Bananeiras River, Brazil was mapped. The multiresolution segmentation results did not prove to be consistent across all the samples. Nonetheless, the P01 sample showed an R2 = 0.82 for the diameter estimation and R2 = 0.45 for the recognition of the elliptical fit. The t-statistic showed no significant difference between the efficiencies of the grain size classifications from the field survey data and the object-based supervised classification (t = 2.133 at a significance level of 0.05). However, the kappa index was 0.54. The analysis of both the segmentation and classification results did not prove to be replicable.
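
    The kappa index reported above corrects raw agreement for chance agreement. A minimal sketch in Python, assuming integer class labels for the field survey (reference) and the classification (predicted):

        import numpy as np

        def cohens_kappa(reference: np.ndarray, predicted: np.ndarray) -> float:
            """Cohen's kappa: observed agreement corrected for chance."""
            classes = np.unique(np.concatenate([reference, predicted]))
            index = {c: i for i, c in enumerate(classes)}
            n = len(reference)
            # Build the confusion matrix from paired labels.
            cm = np.zeros((len(classes), len(classes)))
            for r, p in zip(reference, predicted):
                cm[index[r], index[p]] += 1
            p_observed = np.trace(cm) / n
            # Chance agreement from the row (reference) and column (predicted) marginals.
            p_expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2
            return (p_observed - p_expected) / (1 - p_expected)

    A kappa of 0.54, as reported in the record, is conventionally read as moderate agreement.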

  14. Teenage girls' experience of the determinants of physical activity promotion: A theory-based qualitative content analysis.

    Science.gov (United States)

    Borhani, Mahboobe; Sadeghi, Roya; Shojaeizadeh, Davoud; Harandi, Tayebeh Fasihi; Vakili, Mohammad Ali

    2017-08-01

    The progress of technology in developed countries has changed lifestyles to sedentary ones and has increased non-communicable diseases. Identifying the factors affecting patterns of physical activity among adolescents is valuable, and it is important for changing these patterns. This study aimed to explore teenage girls' experiences regarding the determinants of physical activity promotion based on Pender's Health Promotion Model. This qualitative study is a content analysis conducted with girls from three high schools in Minoodasht city over six months, from September 2015 until the end of February 2016. The data were obtained through focus group discussions and semi-structured in-depth interviews with 48 girls ranging from 15 to 18 years old and six teachers. Data analysis was done using theory-driven qualitative content analysis. Data analysis resulted in a total of 53 primary codes, which were classified into the six predetermined classifications of Pender's Health Promotion Model (perceived benefits, perceived barriers, perceived self-efficacy of physical activity behavior, feelings related to physical activity behavior, and interpersonal and situational influences). The results showed that two classifications (perceived barriers and situational influences) were considered more important than the others in reducing levels of physical activity in adolescent girls, and that high self-efficacy promotes physical activity in adolescents. The results obtained from this study specify the determinants affecting the promotion of physical activity among adolescent girls and can help planners choose the most appropriate methods and strategies to promote physical activity among adolescent girls and to prevent chronic non-communicable diseases in this age group and gender.

  15. Improved Ordinary Measure and Image Entropy Theory based intelligent Copy Detection Method

    Directory of Open Access Journals (Sweden)

    Dengpan Ye

    2011-10-01

    Full Text Available Nowadays, more and more multimedia websites appear in social networks. This brings security problems, such as privacy, piracy, and disclosure of sensitive contents. Aiming at copyright protection, copy detection technology for multimedia contents has become a hot topic. In our previous work, a new computer-based copyright control system for detecting copied media was proposed. Based on this system, this paper proposes an improved media feature matching measure and an entropy-based copy detection method. The Levenshtein distance is used to enhance the matching degree when used as the feature matching measure in copy detection. For entropy-based copy detection, we fuse two features derived from the entropy matrix we extract. First, we extract the entropy matrix of the image and normalize it. Then, we fuse the eigenvalue feature and the transfer matrix feature of the entropy matrix. The fused features are used for image copy detection. The experiments show that, compared to using these two kinds of features singly, the feature fusion matching method is clearly more robust and effective. The fused feature achieves a high detection rate for copy images that have received attacks such as noise, compression, zooming, and rotation. Compared with the referenced methods, the proposed method is more intelligent and achieves good performance.
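
    The Levenshtein distance at the core of the matching measure is a standard dynamic program. A minimal sketch in Python, assuming the media features have been quantized into symbol sequences (an assumption; the record does not specify the encoding):

        def levenshtein(a: str, b: str) -> int:
            """Minimum number of insertions, deletions and substitutions turning a into b."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, start=1):
                curr = [i]
                for j, cb in enumerate(b, start=1):
                    cost = 0 if ca == cb else 1
                    curr.append(min(prev[j] + 1,          # deletion
                                    curr[j - 1] + 1,      # insertion
                                    prev[j - 1] + cost))  # substitution
                prev = curr
            return prev[-1]

        def similarity(a: str, b: str) -> float:
            """Normalized matching degree in [0, 1]; higher means a closer match."""
            return 1.0 - levenshtein(a, b) / max(len(a), len(b), 1)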

  16. Wavelet-Based Bayesian Methods for Image Analysis and Automatic Target Recognition

    National Research Council Canada - National Science Library

    Nowak, Robert

    2001-01-01

    .... We have developed two new techniques. First, we have developed a wavelet-based approach to image restoration and deconvolution problems using Bayesian image models and an alternating-maximization method...

  17. Content Analysis of Trends in Print Magazine Tobacco Advertisements.

    Science.gov (United States)

    Banerjee, Smita; Shuk, Elyse; Greene, Kathryn; Ostroff, Jamie

    2015-07-01

    To provide a descriptive and comparative content analysis of tobacco print magazine ads, with a focus on rhetorical and persuasive themes. Print tobacco ads for cigarettes, cigars, e-cigarettes, moist snuff, and snus (N = 171) were content analyzed for the physical composition/ad format (e.g., size of ad, image, setting, branding, warning label) and the content of the ad (e.g., rhetorical themes, persuasive themes). The theme of pathos (that elicits an emotional response) was most frequently utilized for cigarette (61%), cigar (50%), and moist snuff (50%) ads, and the theme of logos (use of logic or facts to support position) was most frequently used for e-cigarette (85%) ads. Additionally, comparative claims were most frequently used for snus (e.g., "spit-free," "smoke-free") and e-cigarette ads (e.g., "no tobacco smoke, only vapor," "no odor, no ash"). Comparative claims were also used in cigarette ads, primarily to highlight availability in different flavors (e.g., "bold," "menthol"). This study has implications for tobacco product marketing regulation, particularly around limiting tobacco advertising in publications with a large youth readership and prohibiting false or misleading labels, labeling, and advertising for tobacco products, such as modified risk (unless approved by the FDA) or therapeutic claims.

  18. Mesoscale brain explorer, a flexible python-based image analysis and visualization tool.

    Science.gov (United States)

    Haupt, Dirk; Vanni, Matthieu P; Bolanos, Federico; Mitelut, Catalin; LeDue, Jeffrey M; Murphy, Tim H

    2017-07-01

    Imaging of mesoscale brain activity is used to map interactions between brain regions. This work has benefited from the pioneering studies of Grinvald et al., who employed optical methods to image brain function by exploiting the properties of intrinsic optical signals and small-molecule voltage-sensitive dyes. Mesoscale interareal brain imaging techniques have been advanced by cell-targeted and selective recombinant indicators of neuronal activity. Spontaneous resting-state activity is often collected during mesoscale imaging to provide the basis for mapping connectivity relationships using correlation. However, the information content of mesoscale datasets is vast and is only superficially presented in manuscripts, given the need to constrain measurements to a fixed set of frequencies, regions of interest, and other parameters. We describe a new open-source tool written in Python, termed Mesoscale Brain Explorer (MBE), which provides an interface to process and explore these large datasets. The platform supports automated image processing pipelines with the ability to assess multiple trials and combine data from different animals. The tool provides functions for temporal filtering, averaging, and visualization of functional connectivity relations using time-dependent correlation. Here, we describe the tool and show applications where previously published datasets were reanalyzed using MBE.
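
    At its simplest, the correlation-based connectivity mapping mentioned above reduces to correlating one seed pixel's time course with every other pixel. A minimal sketch in Python with assumed array shapes; it illustrates the general technique, not MBE's actual API:

        import numpy as np

        def seed_correlation_map(frames: np.ndarray, seed: tuple) -> np.ndarray:
            """frames: (time, height, width) imaging stack; seed: (row, col) pixel.

            Returns a (height, width) map of Pearson r against the seed time course.
            """
            t, h, w = frames.shape
            pixels = frames.reshape(t, h * w).astype(np.float64)
            pixels = pixels - pixels.mean(axis=0)   # remove each pixel's temporal mean
            seed_ts = pixels[:, seed[0] * w + seed[1]]
            num = pixels.T @ seed_ts                # covariance numerators, one per pixel
            denom = np.linalg.norm(pixels, axis=0) * np.linalg.norm(seed_ts)
            corr = np.divide(num, denom, out=np.zeros(h * w), where=denom > 0)
            return corr.reshape(h, w)

    In practice, resting-state pipelines such as the one described would temporally filter the stack before computing such maps.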

  19. Ad-hoc Content-based Queries and Data Analysis for Virtual Observatories, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Aquilent, Inc. proposes to support ad-hoc, content-based query and data retrieval from virtual observatories (VxO) by developing 1) Higher Order Query Services that...

  20. Language Learning of Gifted Individuals: A Content Analysis Study

    Science.gov (United States)

    Gokaydin, Beria; Baglama, Basak; Uzunboylu, Huseyin

    2017-01-01

    This study aims to carry out a content analysis of the studies on language learning of gifted individuals and determine the trends in this field. Articles on language learning of gifted individuals published in the Scopus database were examined based on certain criteria including type of publication, year of publication, language, research…