WorldWideScience

Sample records for state image analysis

  1. Chemical imaging and solid state analysis at compact surfaces using UV imaging

    DEFF Research Database (Denmark)

    Wu, Jian X.; Rehder, Sönke; van den Berg, Frans

    2014-01-01

    Fast non-destructive multi-wavelength UV imaging together with multivariate image analysis was utilized to visualize distribution of chemical components and their solid state form at compact surfaces. Amorphous and crystalline solid forms of the antidiabetic compound glibenclamide ... and excipients in a non-invasive way, as well as mapping the glibenclamide solid state form. An exploratory data analysis supported the critical evaluation of the mapping results and the selection of model parameters for the chemical mapping. The present study demonstrated that the multi-wavelength UV imaging ...

  2. Can state-of-the-art HVS-based objective image quality criteria be used for image reconstruction techniques based on ROI analysis?

    Science.gov (United States)

    Dostal, P.; Krasula, L.; Klima, M.

    2012-06-01

    Various image processing techniques in multimedia technology are optimized using the visual attention feature of the human visual system. Because of spatial non-uniformity, different locations in an image are of different importance for how the image is perceived. In other words, the perceived image quality depends mainly on the quality of important locations known as regions of interest. The performance of such techniques is measured by subjective evaluation or objective image quality criteria. Many state-of-the-art objective metrics are based on HVS properties: SSIM and MS-SSIM build on image structural information, VIF on the information that the human brain can ideally gain from the reference image, and FSIM uses low-level features to assign different importance to each location in the image. Still, none of these objective metrics utilizes the analysis of regions of interest. We address the question of whether these objective metrics can be used for effective evaluation of images reconstructed by processing techniques based on ROI analysis utilizing high-level features. In this paper the authors show that the state-of-the-art objective metrics do not correlate well with subjective evaluation when demosaicing based on ROI analysis is used for reconstruction. The ROIs were computed from "ground truth" visual attention data. An algorithm combining two known demosaicing techniques on the basis of ROI location is proposed to reconstruct the ROIs in fine quality while the rest of the image is reconstructed with low quality. The color image reconstructed by this ROI approach was compared with selected demosaicing techniques by objective criteria and subjective testing. The qualitative comparison of the objective and subjective results indicates that the state-of-the-art objective metrics are still not suitable for evaluating image processing techniques based on ROI analysis and that new criteria are needed.
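
    A minimal sketch (not from the record) of the kind of comparison discussed above: a full-frame objective score versus the same score restricted to a region of interest. PSNR stands in for the HVS-based metrics named in the abstract, and the images and binary ROI mask are simulated assumptions.

```python
import numpy as np

def psnr(reference, test, mask=None, peak=255.0):
    """Peak signal-to-noise ratio, optionally restricted to a binary ROI mask."""
    reference = reference.astype(np.float64)
    test = test.astype(np.float64)
    if mask is not None:
        diff = (reference - test)[mask.astype(bool)]
    else:
        diff = (reference - test).ravel()
    mse = np.mean(diff ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Toy example: degrade only the ROI and compare global vs. ROI-restricted scores.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(128, 128)).astype(np.float64)
roi = np.zeros((128, 128), dtype=bool)
roi[32:64, 32:64] = True                          # "important" region
test = ref + roi * rng.normal(0, 20, ref.shape)   # noise only inside the ROI

print("global PSNR:", psnr(ref, test))            # looks acceptable
print("ROI PSNR:   ", psnr(ref, test, roi))       # reveals the ROI degradation
```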

  3. Ultrasonic image analysis and image-guided interventions.

    Science.gov (United States)

    Noble, J Alison; Navab, Nassir; Becher, H

    2011-08-06

    The fields of medical image analysis and computer-aided interventions deal with reducing the large volume of digital images (X-ray, computed tomography, magnetic resonance imaging (MRI), positron emission tomography and ultrasound (US)) to more meaningful clinical information using software algorithms. US is a core imaging modality employed in these areas, both in its own right and used in conjunction with the other imaging modalities. It is receiving increased interest owing to the recent introduction of three-dimensional US, significant improvements in US image quality, and better understanding of how to design algorithms which exploit the unique strengths and properties of this real-time imaging modality. This article reviews the current state of art in US image analysis and its application in image-guided interventions. The article concludes by giving a perspective from clinical cardiology which is one of the most advanced areas of clinical application of US image analysis and describing some probable future trends in this important area of ultrasonic imaging research.

  4. Spinal imaging and image analysis

    CERN Document Server

    Yao, Jianhua

    2015-01-01

    This book is instrumental to building a bridge between scientists and clinicians in the field of spine imaging by introducing state-of-the-art computational methods in the context of clinical applications.  Spine imaging via computed tomography, magnetic resonance imaging, and other radiologic imaging modalities, is essential for noninvasively visualizing and assessing spinal pathology. Computational methods support and enhance the physician’s ability to utilize these imaging techniques for diagnosis, non-invasive treatment, and intervention in clinical practice. Chapters cover a broad range of topics encompassing radiological imaging modalities, clinical imaging applications for common spine diseases, image processing, computer-aided diagnosis, quantitative analysis, data reconstruction and visualization, statistical modeling, image-guided spine intervention, and robotic surgery. This volume serves a broad audience as  contributions were written by both clinicians and researchers, which reflects the inte...

  5. Preliminary analysis of the forest health state based on multispectral images acquired by Unmanned Aerial Vehicle

    Directory of Open Access Journals (Sweden)

    Czapski Paweł

    2015-09-01

    Full Text Available The main purpose of this publication is to present the current progress of the work associated with the use of lightweight unmanned platforms for various environmental studies. Current developments in information technology, electronics and sensor miniaturisation allow multispectral cameras and scanners that previously could only be used on board aircraft and satellites to be mounted on an unmanned aerial vehicle (UAV). The Remote Sensing Division of the Institute of Aviation carries out innovative research using a multisensory platform and a lightweight unmanned vehicle to evaluate the health state of forests in Wielkopolska province. In this paper, the applicability of analysing multispectral images acquired several times during the growing season from low altitude (up to 800 m) is presented. We present remote sensing indicators computed by our software and common methods for assessing the health state of trees. The correctness of the applied methods is verified using analysis of satellite scenes acquired by the Landsat 8 OLI (Operational Land Imager) instrument.
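
    As an illustration only (the record does not name its specific indicators), NDVI is a common remote-sensing indicator of vegetation health computed from multispectral bands; a minimal sketch assuming co-registered red and near-infrared reflectance arrays:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index from co-registered NIR and red bands."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Hypothetical UAV reflectance values in [0, 1]; healthy vegetation typically gives NDVI ~0.6-0.9.
nir_band = np.array([[0.55, 0.60], [0.20, 0.65]])
red_band = np.array([[0.08, 0.07], [0.15, 0.06]])
print(ndvi(nir_band, red_band))
```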

  6. Resting-state functional magnetic resonance imaging: the impact of regression analysis.

    Science.gov (United States)

    Yeh, Chia-Jung; Tseng, Yu-Sheng; Lin, Yi-Ru; Tsai, Shang-Yueh; Huang, Teng-Yi

    2015-01-01

    To investigate the impact of regression methods on resting-state functional magnetic resonance imaging (rsfMRI). During rsfMRI preprocessing, regression analysis is considered effective for reducing the interference of physiological noise on the signal time course. However, it is unclear whether the regression method benefits rsfMRI analysis. Twenty volunteers (10 men and 10 women; aged 23.4 ± 1.5 years) participated in the experiments. We used node analysis and functional connectivity mapping to assess the brain default mode network by using five combinations of regression methods. The results show that regressing the global mean plays a major role in the preprocessing steps. When a global regression method is applied, the values of functional connectivity are significantly lower (P ≤ .01) than those calculated without a global regression. This step increases inter-subject variation and produces anticorrelated brain areas. rsfMRI data processed using regression should be interpreted carefully. The significance of the anticorrelated brain areas produced by global signal removal is unclear. Copyright © 2014 by the American Society of Neuroimaging.
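
    A minimal sketch (not the study's pipeline) of the global-mean regression step discussed above: the mean time course over all voxels is regressed out of each voxel's time series before connectivity is computed. The simulated time series and array shapes are assumptions.

```python
import numpy as np

def regress_out_global(data):
    """Remove the global mean signal from each voxel time course.

    data : array, shape (n_timepoints, n_voxels)
    """
    global_mean = data.mean(axis=1, keepdims=True)                      # (T, 1)
    regressors = np.hstack([np.ones_like(global_mean), global_mean])    # intercept + global signal
    beta, *_ = np.linalg.lstsq(regressors, data, rcond=None)
    return data - regressors @ beta                                     # residual time courses

# Connectivity between two voxels, with and without global signal regression (GSR).
rng = np.random.default_rng(1)
ts = rng.normal(size=(200, 500)) + rng.normal(size=(200, 1))   # all voxels share a global fluctuation
clean = regress_out_global(ts)
r_raw = np.corrcoef(ts[:, 0], ts[:, 1])[0, 1]
r_gsr = np.corrcoef(clean[:, 0], clean[:, 1])[0, 1]
print(f"correlation without GSR: {r_raw:.2f}, with GSR: {r_gsr:.2f}")
```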

  7. Outpatient Imaging Efficiency - State

    Data.gov (United States)

    U.S. Department of Health & Human Services — Use of medical imaging - state data. These measures give you information about hospitals' use of medical imaging tests for outpatients. Examples of medical imaging...

  8. Machine Learning Applications to Resting-State Functional MR Imaging Analysis.

    Science.gov (United States)

    Billings, John M; Eder, Maxwell; Flood, William C; Dhami, Devendra Singh; Natarajan, Sriraam; Whitlow, Christopher T

    2017-11-01

    Machine learning is one of the most exciting and rapidly expanding fields within computer science. Academic and commercial research entities are investing in machine learning methods, especially in personalized medicine via patient-level classification. There is great promise that machine learning methods combined with resting state functional MR imaging will aid in diagnosis of disease and guide potential treatment for conditions thought to be impossible to identify based on imaging alone, such as psychiatric disorders. We discuss machine learning methods and explore recent advances. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Resting state fMRI: A review on methods in resting state connectivity analysis and resting state networks.

    Science.gov (United States)

    Smitha, K A; Akhil Raja, K; Arun, K M; Rajesh, P G; Thomas, Bejoy; Kapilamoorthy, T R; Kesavadas, Chandrasekharan

    2017-08-01

    Curiosity about what happens in the brain has existed since the beginning of humankind. Functional magnetic resonance imaging is a prominent tool which helps in the non-invasive examination, localisation as well as lateralisation of brain functions such as language and memory. In recent years, there has been an apparent shift in the focus of neuroscience research towards studies of the brain at 'resting state'. Here the spotlight is on the intrinsic activity within the brain, in the absence of any sensory or cognitive stimulus. The analyses of functional brain connectivity in the state of rest have revealed different resting state networks, which depict specific functions and varied spatial topology. Although different statistical methods have been introduced to study resting state functional magnetic resonance imaging connectivity, they produce consistent results. In this article, we introduce the concept of resting state functional magnetic resonance imaging in detail, then discuss the three most widely used methods for analysis, and describe a few of the resting state networks featuring the brain regions, associated cognitive functions and clinical applications of resting state functional magnetic resonance imaging. This review aims to highlight the utility and importance of studying resting state functional magnetic resonance imaging connectivity, underlining its complementary nature to task-based functional magnetic resonance imaging.
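
    Seed-based correlation is one of the commonly used analysis approaches of the kind discussed above (the record does not list the three methods it covers); a minimal sketch, with simulated data standing in for preprocessed resting-state time series:

```python
import numpy as np

def seed_correlation_map(data, seed_index):
    """Pearson correlation of every voxel's time course with a seed voxel.

    data : array, shape (n_timepoints, n_voxels)
    """
    z = (data - data.mean(axis=0)) / data.std(axis=0)
    seed = z[:, seed_index]
    return (z * seed[:, None]).mean(axis=0)          # correlation with each voxel

rng = np.random.default_rng(2)
network = rng.normal(size=(150, 1))                  # shared fluctuation of one "network"
data = rng.normal(size=(150, 300))
data[:, :50] += network                              # first 50 voxels belong to the network
corr = seed_correlation_map(data, seed_index=0)
print("mean r inside network: %.2f, outside: %.2f" % (corr[1:50].mean(), corr[50:].mean()))
```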

  10. Retinal Imaging and Image Analysis

    Science.gov (United States)

    Abràmoff, Michael D.; Garvin, Mona K.; Sonka, Milan

    2011-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world that includes age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications. Methods for 2-D fundus imaging and techniques for 3-D optical coherence tomography (OCT) imaging are reviewed. Special attention is given to quantitative techniques for analysis of fundus photographs with a focus on clinically relevant assessment of retinal vasculature, identification of retinal lesions, assessment of optic nerve head (ONH) shape, building retinal atlases, and to automated methods for population screening for retinal diseases. A separate section is devoted to 3-D analysis of OCT images, describing methods for segmentation and analysis of retinal layers, retinal vasculature, and 2-D/3-D detection of symptomatic exudate-associated derangements, as well as to OCT-based analysis of ONH morphology and shape. Throughout the paper, aspects of image acquisition, image analysis, and clinical relevance are treated together considering their mutually interlinked relationships. PMID:22275207

  11. Ultra-high performance, solid-state, autoradiographic image digitization and analysis system

    International Nuclear Information System (INIS)

    Lear, J.L.; Pratt, J.P.; Ackermann, R.F.; Plotnick, J.; Rumley, S.

    1990-01-01

    We developed a Macintosh II-based, charge-coupled device (CCD), image digitization and analysis system for high-speed, high-resolution quantification of autoradiographic image data. A linear CCD array with 3,500 elements was attached to a precision drive assembly and mounted behind a high-uniformity lens. The drive assembly was used to sweep the array perpendicularly to its axis so that an entire 20 x 25-cm autoradiographic image-containing film could be digitized into 256 gray levels at 50-micron resolution in less than 30 sec. The scanner was interfaced to a Macintosh II computer through a specially constructed NuBus circuit board and software was developed for autoradiographic data analysis. The system was evaluated by scanning individual films multiple times, then measuring the variability of the digital data between the different scans. Image data were found to be virtually noise free. The coefficient of variation averaged less than 1%, a value significantly exceeding the accuracy of both high-speed, low-resolution, video camera (VC) systems and low-speed, high-resolution, rotating drum densitometers (RDD). Thus, the CCD scanner-Macintosh computer analysis system offers the advantage over VC systems of the ability to digitize entire films containing many autoradiograms, but with much greater speed and accuracy than achievable with RDD scanners
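
    The repeatability figure quoted above (an average coefficient of variation below 1%) can be reproduced on any stack of repeated, co-registered scans with a few lines; the simulated film and noise level below are assumptions:

```python
import numpy as np

def coefficient_of_variation(scans):
    """Mean per-pixel CV (in percent) across repeated digitizations of the same film.

    scans : array, shape (n_scans, height, width)
    """
    mean = scans.mean(axis=0)
    std = scans.std(axis=0, ddof=1)
    cv = std / np.maximum(mean, 1e-9)
    return 100.0 * cv.mean()

rng = np.random.default_rng(3)
film = rng.uniform(50, 200, size=(512, 512))
repeats = film + rng.normal(0, 0.5, size=(5, 512, 512))   # five nearly identical scans
print(f"average CV: {coefficient_of_variation(repeats):.2f}%")
```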

  12. Effect of phase-encoding direction on group analysis of resting-state functional magnetic resonance imaging.

    Science.gov (United States)

    Mori, Yasuo; Miyata, Jun; Isobe, Masanori; Son, Shuraku; Yoshihara, Yujiro; Aso, Toshihiko; Kouchiyama, Takanori; Murai, Toshiya; Takahashi, Hidehiko

    2018-05-17

    Echo-planar imaging is a common technique used in functional magnetic resonance imaging (fMRI); however, it suffers from image distortion and signal loss because of large susceptibility effects that are related to the phase-encoding direction of the scan. Despite this relationship, the majority of neuroimaging studies have not considered the influence of phase-encoding direction. Here, we aimed to clarify how phase-encoding direction can affect the outcome of an fMRI connectivity study of schizophrenia. Resting-state fMRI using anterior to posterior (A-P) and posterior to anterior (P-A) directions was used to examine 25 patients with schizophrenia (SC) and 37 matched healthy controls (HC). We conducted a functional connectivity analysis using independent component analysis and performed three group comparisons: A-P vs. P-A (all participants), SC vs. HC for the A-P and P-A datasets, and the interaction between phase-encoding direction and participant group. The estimated functional connectivity differed between the two phase-encoding directions in areas that were more extensive than those where signal loss has been reported. Although functional connectivity in the SC group was lower than that in the HC group for both directions, the A-P and P-A conditions did not exhibit the same specific pattern of differences. Further, we observed an interaction between participant group and the phase-encoding direction in the left temporo-parietal junction and left fusiform gyrus. Phase-encoding direction can influence the results of functional connectivity studies. Thus, appropriate selection and documentation of phase-encoding direction will be important in future resting-state fMRI studies. This article is protected by copyright. All rights reserved.

  13. Political Parties’ Welfare Image, Electoral Punishment and Welfare State Retrenchment

    DEFF Research Database (Denmark)

    Schumacher, Gijs; Vis, Barbara; van Kersbergen, Kees

    2013-01-01

    ... of voters supports the welfare state, the usual assumption is that retrenchment backfires equally on all political parties. This study contributes to an emerging body of research that demonstrates that this assumption is incorrect. On the basis of a regression analysis of the electoral fate of the governing parties of 14 OECD countries between 1970 and 2002, we show that most parties with a positive welfare image lose after they implemented cutbacks, whereas most parties with a negative welfare image do not. In addition, we show that positive welfare image parties in opposition gain votes, at the expense of those positive welfare image parties in government that implemented welfare state retrenchment. Comparative European Politics (2013) 11, 1-21. doi:10.1057/cep.2012.5; published online 11 June 2012.

  14. Image analysis

    International Nuclear Information System (INIS)

    Berman, M.; Bischof, L.M.; Breen, E.J.; Peden, G.M.

    1994-01-01

    This paper provides an overview of modern image analysis techniques pertinent to materials science. The usual approach in image analysis contains two basic steps: first, the image is segmented into its constituent components (e.g. individual grains), and second, measurement and quantitative analysis are performed. Usually, the segmentation part of the process is the harder of the two. Consequently, much of the paper concentrates on this aspect, reviewing both fundamental segmentation tools (commonly found in commercial image analysis packages) and more advanced segmentation tools. There is also a review of the most widely used quantitative analysis methods for measuring the size, shape and spatial arrangements of objects. Many of the segmentation and analysis methods are demonstrated using complex real-world examples. Finally, there is a discussion of hardware and software issues. 42 refs., 17 figs
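
    A minimal sketch of the two-step workflow described above (segment, then measure), using a simple global threshold and connected-component labelling; the synthetic image is an assumption, and real micrographs would need the more advanced segmentation tools the paper reviews.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(9)

# Synthetic "micrograph": a few bright circular grains on a dark background.
img = rng.normal(0.1, 0.02, size=(200, 200))
yy, xx = np.mgrid[0:200, 0:200]
for cy, cx, rad in [(50, 60, 15), (120, 140, 22), (160, 50, 10)]:
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < rad ** 2] = 0.8

# Step 1: segmentation (global threshold).  Step 2: quantitative measurement.
binary = img > 0.5
labels, n_grains = ndimage.label(binary)
sizes = ndimage.sum(binary, labels, index=range(1, n_grains + 1))        # grain areas in pixels
centroids = ndimage.center_of_mass(binary, labels, range(1, n_grains + 1))

print("grains found:", n_grains)
print("areas (px):  ", [int(s) for s in sizes])
print("centroids:   ", [(round(cy), round(cx)) for cy, cx in centroids])
```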

  15. A finite state model for respiratory motion analysis in image guided radiation therapy

    International Nuclear Information System (INIS)

    Wu Huanmei; Sharp, Gregory C; Salzberg, Betty; Kaeli, David; Shirato, Hiroki; Jiang, Steve B

    2004-01-01

    Effective image guided radiation treatment of a moving tumour requires adequate information on respiratory motion characteristics. For margin expansion, beam tracking and respiratory gating, the tumour motion must be quantified for pretreatment planning and monitored on-line. We propose a finite state model for respiratory motion analysis that captures our natural understanding of breathing stages. In this model, a regular breathing cycle is represented by three line segments, exhale, end-of-exhale and inhale, while abnormal breathing is represented by an irregular breathing state. In addition, we describe an on-line implementation of this model in one dimension. We found this model can accurately characterize a wide variety of patient breathing patterns. This model was used to describe the respiratory motion for 23 patients with peak-to-peak motion greater than 7 mm. The average root mean square error over all patients was less than 1 mm and no patient has an error worse than 1.5 mm. Our model provides a convenient tool to quantify respiratory motion characteristics, such as patterns of frequency changes and amplitude changes, and can be applied to internal or external motion, including internal tumour position, abdominal surface, diaphragm, spirometry and other surrogates
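
    A minimal sketch of the kind of state labelling the model describes, not the authors' implementation: a 1-D breathing trace is segmented into exhale, end-of-exhale and inhale from its slope, with excessive velocities flagged as irregular. The thresholds and the synthetic trace are illustrative assumptions (the paper fits line segments rather than thresholding velocity).

```python
import numpy as np

def label_breathing_states(position, dt=0.1, vel_thresh=2.0, irr_thresh=10.0):
    """Assign EXHALE / EOE / INHALE / IRREGULAR labels to a 1-D respiratory trace (mm)."""
    velocity = np.gradient(position, dt)                     # mm/s
    labels = np.full(position.shape, "EOE", dtype=object)    # end-of-exhale: near-zero velocity
    labels[velocity > vel_thresh] = "INHALE"
    labels[velocity < -vel_thresh] = "EXHALE"
    labels[np.abs(velocity) > irr_thresh] = "IRREGULAR"      # abnormally fast motion
    return labels

t = np.arange(0, 20, 0.1)
trace = 5.0 * np.sin(2 * np.pi * t / 4.0)                    # ~4 s cycle, 10 mm peak-to-peak
states = label_breathing_states(trace)
for s in ("INHALE", "EXHALE", "EOE", "IRREGULAR"):
    print(s, int((states == s).sum()), "samples")
```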

  16. A finite state model for respiratory motion analysis in image guided radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Wu Huanmei [College of Computer and Information Science, Northeastern University, Boston, MA 02115 (United States); Sharp, Gregory C [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 (United States); Salzberg, Betty [College of Computer and Information Science, Northeastern University, Boston, MA 02115 (United States); Kaeli, David [Department of Electrical and Computer Engineering, Northeastern University, Boston, MA 02115 (United States); Shirato, Hiroki [Department of Radiation Medicine, Hokkaido University School of Medicine, Sapporo (Japan); Jiang, Steve B [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 (United States)

    2004-12-07

    Effective image guided radiation treatment of a moving tumour requires adequate information on respiratory motion characteristics. For margin expansion, beam tracking and respiratory gating, the tumour motion must be quantified for pretreatment planning and monitored on-line. We propose a finite state model for respiratory motion analysis that captures our natural understanding of breathing stages. In this model, a regular breathing cycle is represented by three line segments, exhale, end-of-exhale and inhale, while abnormal breathing is represented by an irregular breathing state. In addition, we describe an on-line implementation of this model in one dimension. We found this model can accurately characterize a wide variety of patient breathing patterns. This model was used to describe the respiratory motion for 23 patients with peak-to-peak motion greater than 7 mm. The average root mean square error over all patients was less than 1 mm and no patient has an error worse than 1.5 mm. Our model provides a convenient tool to quantify respiratory motion characteristics, such as patterns of frequency changes and amplitude changes, and can be applied to internal or external motion, including internal tumour position, abdominal surface, diaphragm, spirometry and other surrogates.

  17. Photoionization of image states around metallic nanotubes

    Energy Technology Data Exchange (ETDEWEB)

    Segui, Silvina; Arista, Nestor R; Gervasoni, Juana L [Centro Atomico Bariloche (CNEA) 8400, Rio Negro (Argentina); Bocan, Gisela A, E-mail: segui@cab.cnea.gov.a, E-mail: gbocan@iafe.uba.a, E-mail: arista@cab.cnea.gov.a, E-mail: gervason@cab.cnea.gov.a [Instituto de Astronomía y Física del Espacio, CC 67, Suc 28, 1428, Ciudad Universitaria, Buenos Aires (Argentina)]

    2009-11-01

    In this work we study a theoretical approach to the ionization of electrons bound in an image state around a metallic nanotube by the impact of photons. In a close analogy to the already studied case of ionization by electron impact [1], we calculate and analyze photoionization cross sections of tubular image states [2] within a first Born approximation. We consider various situations, including different energies and polarizations of the incident photon, ejection directions of the outgoing electron, and angular momenta of the image state.

  18. Image registration based on virtual frame sequence analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chen, H.; Ng, W.S. [Nanyang Technological University, Computer Integrated Medical Intervention Laboratory, School of Mechanical and Aerospace Engineering, Singapore (Singapore); Shi, D. (Nanyang Technological University, School of Computer Engineering, Singapore, Singapore); Wee, S.B. [Tan Tock Seng Hospital, Department of General Surgery, Singapore (Singapore)

    2007-08-15

    This paper proposes a new framework for medical image registration with large nonrigid deformations, which remains one of the biggest challenges for image fusion and further analysis in many medical applications. The registration problem is formulated as recovering a deformation process with known initial and final states. To deal with large nonlinear deformations, virtual frames are inserted to model the deformation process. A time parameter is introduced and the deformation between consecutive frames is described with a linear affine transformation. Experiments are conducted with simple geometric deformations as well as complex deformations present in MRI and ultrasound images. All the deformations are nonlinear. The positive results demonstrate the effectiveness of the algorithm. The proposed framework is able to register medical images with large nonlinear deformations and is especially useful for sequential images. (orig.)

  19. A Learning State-Space Model for Image Retrieval

    Directory of Open Access Journals (Sweden)

    Lee Greg C

    2007-01-01

    Full Text Available This paper proposes an approach based on a state-space model for learning the user's concepts in image retrieval. We first design a scheme of region-based image representation based on concept units, which are integrated with different types of feature spaces and with different region scales of image segmentation. The design of the concept units aims at describing similar characteristics at a certain perspective among relevant images. We present the details of our proposed approach based on a state-space model for interactive image retrieval, including likelihood and transition models, and we also describe some experiments that show the efficacy of our proposed model. This work demonstrates the feasibility of using a state-space model to estimate the user's intuition in image retrieval.

  20. Deep Learning in Medical Image Analysis.

    Science.gov (United States)

    Shen, Dinggang; Wu, Guorong; Suk, Heung-Il

    2017-06-21

    This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.

  1. CNN Based Retinal Image Upscaling Using Zero Component Analysis

    Science.gov (United States)

    Nasonov, A.; Chesnakov, K.; Krylov, A.

    2017-05-01

    The aim of the paper is to obtain high-quality image upscaling for noisy images that are typical in medical image processing. A new training scenario for a convolutional neural network based image upscaling method is proposed. Its main idea is a novel dataset preparation method for deep learning. The dataset contains pairs of noisy low-resolution images and corresponding noiseless high-resolution images. To achieve better results at edges and textured areas, Zero Component Analysis is applied to these images. The upscaling results are compared with other state-of-the-art methods like DCCI, SI-3 and SRCNN on noisy medical ophthalmological images. Objective evaluation of the results confirms the high quality of the proposed method. Visual analysis shows that fine details and structures like blood vessels are preserved, noise level is reduced and no artifacts or non-existing details are added. These properties are essential for establishing a retinal diagnosis, so the proposed algorithm is recommended for use in real medical applications.
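
    A minimal sketch of Zero Component Analysis (ZCA) whitening as it is commonly formulated, applied here to flattened image patches; this illustrates the preprocessing idea named in the record, not the authors' training pipeline, and the patch data are simulated.

```python
import numpy as np

def zca_whiten(patches, eps=1e-2):
    """ZCA-whiten flattened image patches.

    patches : array, shape (n_patches, n_pixels)
    """
    mean = patches.mean(axis=0)
    centered = patches - mean
    cov = centered.T @ centered / len(patches)
    eigvals, eigvecs = np.linalg.eigh(cov)
    whitener = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return centered @ whitener, whitener, mean

rng = np.random.default_rng(4)
patches = rng.normal(size=(1000, 64))            # e.g. 8x8 patches, flattened
white, W, mu = zca_whiten(patches)
print("covariance after whitening ~ identity:",
      np.allclose(np.cov(white, rowvar=False), np.eye(64), atol=0.2))
```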

  2. Image formation and image analysis in electron microscopy

    International Nuclear Information System (INIS)

    Heel, M. van.

    1981-01-01

    This thesis covers various aspects of image formation and image analysis in electron microscopy. The imaging of relatively strong objects in partially coherent illumination, the coherence properties of thermionic emission sources and the detection of objects in quantum noise limited images are considered. IMAGIC, a fast, flexible and friendly image analysis software package is described. Intelligent averaging of molecular images is discussed. (C.F.)

  3. Excited-state imaging of cold atoms

    NARCIS (Netherlands)

    Sheludko, D.V.; Bell, S.C.; Vredenbregt, E.J.D.; Scholten, R.E.; Deshmukh, P.C.; Chakraborty, P.; Williams, J.F.

    2007-01-01

    We have investigated state-selective diffraction contrast imaging (DCI) of cold 85Rb atoms in the first excited (52P3/2) state. Excited-state DCI requires knowledge of the complex refractive index of the atom cloud, which was calculated numerically using a semi-classical model. The Autler-Townes

  4. Nonlinear Denoising and Analysis of Neuroimages With Kernel Principal Component Analysis and Pre-Image Estimation

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Abrahamsen, Trine Julie; Madsen, Kristoffer Hougaard

    2012-01-01

    We investigate the use of kernel principal component analysis (PCA) and the inverse problem known as pre-image estimation in neuroimaging: i) We explore kernel PCA and pre-image estimation as a means for image denoising as part of the image preprocessing pipeline. Evaluation of the denoising procedure is performed within a data-driven split-half evaluation framework. ii) We introduce manifold navigation for exploration of a nonlinear data manifold, and illustrate how pre-image estimation can be used to generate brain maps in the continuum between experimentally defined brain states/classes. We ...
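
    A minimal sketch of kernel PCA denoising with pre-image estimation; scikit-learn's KernelPCA with fit_inverse_transform=True stands in for the pre-image estimation described in the record, and the random vectors, kernel and parameter choices are assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(5)
clean = np.repeat(rng.normal(size=(10, 50)), 20, axis=0)   # 10 underlying "states", 20 samples each
noisy = clean + 0.3 * rng.normal(size=clean.shape)

# Keep a few nonlinear components, then map back to input space (pre-image estimation).
kpca = KernelPCA(n_components=10, kernel="rbf", gamma=1e-2,
                 fit_inverse_transform=True, alpha=0.1)
scores = kpca.fit_transform(noisy)
denoised = kpca.inverse_transform(scores)

# Reconstruction error relative to the noise-free data, before and after the round trip.
print("MSE before:", float(np.mean((noisy - clean) ** 2)))
print("MSE after: ", float(np.mean((denoised - clean) ** 2)))
```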

  5. Image Analysis for X-ray Imaging of Food

    DEFF Research Database (Denmark)

    Einarsdottir, Hildur

    X-ray imaging systems are increasingly used for quality and safety evaluation both within food science and production. They offer non-invasive and nondestructive penetration capabilities to image the inside of food. This thesis presents applications of a novel grating-based X-ray imaging technique ... for quality and safety evaluation of food products. In this effort the fields of statistics, image analysis and statistical learning are combined, to provide analytical tools for determining the aforementioned food traits. The work demonstrated includes a quantitative analysis of heat induced changes ... and defect detection in food. Compared to the complex three dimensional analysis of microstructure, here two dimensional images are considered, making the method applicable for an industrial setting. The advantages obtained by grating-based imaging are compared to conventional X-ray imaging, for both foreign...

  6. Transcriptome States Reflect Imaging of Aging States.

    Science.gov (United States)

    Eckley, D Mark; Coletta, Christopher E; Orlov, Nikita V; Wilson, Mark A; Iser, Wendy; Bastian, Paul; Lehrmann, Elin; Zhang, Yonqing; Becker, Kevin G; Goldberg, Ilya G

    2018-06-14

    In this study, we describe a morphological biomarker that detects multiple discrete subpopulations (or "age-states") at several chronological ages in a population of nematodes (Caenorhabditis elegans). We determined the frequencies of three healthy adult states and the timing of the transitions between them across the lifespan. We used short-lived and long-lived strains to confirm the general applicability of the state classifier and to monitor state progression. This exploration revealed healthy and unhealthy states, the former being favored in long-lived strains and the latter showing delayed onset. Short-lived strains rapidly transitioned through the putative healthy state. We previously found that age-matched animals in different age-states have distinct transcriptome profiles. We isolated animals at the beginning and end of each identified state and performed microarray analysis (principal component analysis, relative sample to sample distance measurements, and gene set enrichment analysis). In some comparisons, chronologically identical individuals were farther apart than morphologically identical individuals isolated on different days. The age-state biomarker allowed assessment of aging in a novel manner, complementary to chronological age progression. We found hsp70 and some small heat shock protein genes are expressed later in adulthood, consistent with the proteostasis collapse model.

  7. Shape analysis in medical image analysis

    CERN Document Server

    Tavares, João

    2014-01-01

    This book contains thirteen contributions from invited experts of international recognition addressing important issues in shape analysis in medical image analysis, including techniques for image segmentation, registration, modelling and classification, and applications in biology, as well as in cardiac, brain, spine, chest, lung and clinical practice. This volume treats topics such as anatomic and functional shape representation and matching; shape-based medical image segmentation; shape registration; statistical shape analysis; shape deformation; shape-based abnormity detection; shape tracking and longitudinal shape analysis; machine learning for shape modeling and analysis; shape-based computer-aided-diagnosis; shape-based medical navigation; benchmark and validation of shape representation, analysis and modeling algorithms. This work will be of interest to researchers, students, and manufacturers in the fields of artificial intelligence, bioengineering, biomechanics, computational mechanics, computationa...

  8. Retinal imaging and image analysis

    NARCIS (Netherlands)

    Abramoff, M.D.; Garvin, Mona K.; Sonka, Milan

    2010-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of

  9. Fan fault diagnosis based on symmetrized dot pattern analysis and image matching

    Science.gov (United States)

    Xu, Xiaogang; Liu, Haixiao; Zhu, Hao; Wang, Songling

    2016-07-01

    To detect the mechanical failure of fans, a new diagnostic method based on the symmetrized dot pattern (SDP) analysis and image matching is proposed. Vibration signals of 13 kinds of running states are acquired on a centrifugal fan test bed and reconstructed by the SDP technique. The SDP pattern templates of each running state are established. An image matching method is performed to diagnose the fault. In order to improve the diagnostic accuracy, the single template, multiple templates and clustering fault templates are used to perform the image matching.
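
    A minimal sketch of one common formulation of the symmetrized dot pattern (SDP) transform, which maps a vibration signal to polar-coordinate points whose shape can then be matched against templates; the lag, angular gain, number of mirror arms and the synthetic signals are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def sdp_points(signal, lag=5, arms=6, gain_deg=40.0):
    """Symmetrized dot pattern: map a 1-D signal to polar points (radius, angle in radians)."""
    x = np.asarray(signal, dtype=float)
    xn = (x - x.min()) / (x.max() - x.min())          # normalized amplitude in [0, 1]
    r = xn[:-lag]                                     # radius from sample i
    a = np.deg2rad(gain_deg) * xn[lag:]               # angular offset from sample i + lag
    radii, angles = [], []
    for m in range(arms):
        phi = 2.0 * np.pi * m / arms
        radii.extend([r, r])
        angles.extend([phi + a, phi - a])             # mirrored pair around each arm
    return np.concatenate(radii), np.concatenate(angles)

t = np.linspace(0, 1, 2000)
healthy = np.sin(2 * np.pi * 50 * t)
faulty = healthy + 0.4 * np.sin(2 * np.pi * 120 * t)  # extra harmonic as a crude "fault"
r_h, th_h = sdp_points(healthy)
r_f, th_f = sdp_points(faulty)
print("point clouds:", r_h.shape, r_f.shape)          # compare/plot against state templates
```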

  10. State of the art magnetic resonance imaging

    International Nuclear Information System (INIS)

    Weissman, J.D.

    1987-01-01

    In less than a decade Magnetic Resonance Imaging (MRI) has evolved from a laboratory demonstration to a safe and effective technique for clinical diagnosis. This evolutionary process continues. At this time 2-D and 3-D imaging of the head and body is firmly established in clinical use. Surface coil imaging, two-component chemical shift imaging, in-vivo spectroscopy and flow imaging are currently in various stages of development. The present state of the art of MRI is a function of an array of technologies: magnet, Rf coil, Rf pulse amplifier, gradient coil and driver, pulse programmer, A/D converter, computer system architecture, array processors and mass storage (both magnetic and optical). The overall product design is the result of a complex process which balances the advantages and disadvantages of each component for optimal system performance and flexibility. The author discusses the organization of a state-of-the-art MRI system. Several examples of the kinds of system interactions affecting design choices are given. (Auth.)

  11. State-selective imaging of cold atoms

    NARCIS (Netherlands)

    Sheludko, D.V.; Bell, S.C.; Anderson, R.; Hofmann, C.S.; Vredenbregt, E.J.D.; Scholten, R.E.

    2008-01-01

    Atomic coherence phenomena are usually investigated using single beam techniques without spatial resolution. Here we demonstrate state-selective imaging of cold 85Rb atoms in a three-level ladder system, where the atomic refractive index is sensitive to the quantum coherence state of the atoms. We

  12. Hyperspectral image analysis. A tutorial

    International Nuclear Information System (INIS)

    Amigo, José Manuel; Babamoradi, Hamid; Elcoroaristizabal, Saioa

    2015-01-01

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics like hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing will be exposed, and some guidelines given and discussed. Due to the broad character of current applications and the vast number of multivariate methods available, this paper has focused on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology to differentiate between several types of plastics by using Near infrared hyperspectral imaging and Partial Least Squares – Discriminant Analysis. Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case. - Highlights: • Comprehensive tutorial of Hyperspectral Image analysis. • Hierarchical discrimination of six classes of plastics containing flame retardant. • Step by step guidelines to perform class-modeling on hyperspectral images. • Fusion of multivariate data analysis and digital image processing methods. • Promising methodology for real-time detection of plastics containing flame retardant.
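
    A minimal sketch of the unfold-and-classify step the tutorial describes: the hyperspectral cube is unfolded into a pixel-by-wavelength matrix and a PLS model is fitted against one-hot class labels (PLS-DA). scikit-learn's PLSRegression stands in for a dedicated PLS-DA routine, and the cube, labels and component count are simulated assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)
n_rows, n_cols, n_bands, n_classes = 20, 20, 50, 3

# Simulated cube: each pixel's spectrum is a class-specific mean plus noise.
labels = rng.integers(0, n_classes, size=(n_rows, n_cols))
class_spectra = rng.normal(size=(n_classes, n_bands))
cube = class_spectra[labels] + 0.2 * rng.normal(size=(n_rows, n_cols, n_bands))

X = cube.reshape(-1, n_bands)              # unfold: pixels x wavelengths
Y = np.eye(n_classes)[labels.ravel()]      # one-hot class membership (PLS-DA target)

pls = PLSRegression(n_components=5)
pls.fit(X, Y)
pred = pls.predict(X).argmax(axis=1)       # assign each pixel to the class with highest response
print("training accuracy:", (pred == labels.ravel()).mean())
```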

  13. Medical image registration for analysis

    International Nuclear Information System (INIS)

    Petrovic, V.

    2006-01-01

    Full text: Image registration techniques represent a rich family of image processing and analysis tools that aim to provide spatial correspondences across sets of medical images of similar and disparate anatomies and modalities. Image registration is a fundamental and usually the first step in medical image analysis and this paper presents a number of advanced techniques as well as demonstrates some of the advanced medical image analysis techniques they make possible. A number of both rigid and non-rigid medical image alignment algorithms of equivalent and merely consistent anatomical structures respectively are presented. The algorithms are compared in terms of their practical aims, inputs, computational complexity and level of operator (e.g. diagnostician) interaction. In particular, the focus of the methods discussion is placed on the applications and practical benefits of medical image registration. Results of medical image registration on a number of different imaging modalities and anatomies are presented demonstrating the accuracy and robustness of their application. Medical image registration is quickly becoming ubiquitous in medical imaging departments with the results of such algorithms increasingly used in complex medical image analysis and diagnostics. This paper aims to demonstrate at least part of the reason why

  14. Dynamic Functional Connectivity States Between the Dorsal and Ventral Sensorimotor Networks Revealed by Dynamic Conditional Correlation Analysis of Resting-State Functional Magnetic Resonance Imaging.

    Science.gov (United States)

    Syed, Maleeha F; Lindquist, Martin A; Pillai, Jay J; Agarwal, Shruti; Gujar, Sachin K; Choe, Ann S; Caffo, Brian; Sair, Haris I

    2017-12-01

    Functional connectivity in resting-state functional magnetic resonance imaging (rs-fMRI) has received substantial attention since the initial findings of Biswal et al. Traditional network correlation metrics assume that the functional connectivity in the brain remains stationary over time. However, recent studies have shown that robust temporal fluctuations of functional connectivity among as well as within functional networks exist, challenging this assumption. In this study, these dynamic correlation differences were investigated between the dorsal and ventral sensorimotor networks by applying the dynamic conditional correlation model to rs-fMRI data of 20 healthy subjects. k-Means clustering was used to determine an optimal number of discrete connectivity states (k = 10) of the sensorimotor system across all subjects. Our analysis confirms the existence of differences in dynamic correlation between the dorsal and ventral networks, with highest connectivity found within the ventral motor network.
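
    A minimal sketch of the state-clustering idea: a sliding-window correlation is used here as a simpler stand-in for the dynamic conditional correlation model, and k-means groups the windowed connectivity values into discrete states. The window length, k = 2 and the simulated time courses are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def windowed_correlations(ts, window=30):
    """Correlation between two time courses in sliding windows.

    ts : array, shape (n_timepoints, 2)
    """
    vals = []
    for start in range(0, len(ts) - window):
        w = ts[start:start + window]
        vals.append(np.corrcoef(w[:, 0], w[:, 1])[0, 1])
    return np.asarray(vals)

rng = np.random.default_rng(7)
shared = rng.normal(size=(600, 1))
ts = rng.normal(size=(600, 2))
ts[300:] += 2.0 * shared[300:]                      # connectivity switches on halfway through

dyn = windowed_correlations(ts)
states = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(dyn.reshape(-1, 1))
print("state of first window:", states[0], "| state of last window:", states[-1])
```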

  15. Automatic selection of resting-state networks with functional magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Silvia Francesca eStorti

    2013-05-01

    Full Text Available Functional magnetic resonance imaging (fMRI) during a resting-state condition can reveal the co-activation of specific brain regions in distributed networks, called resting-state networks, which are selected by independent component analysis (ICA) of the fMRI data. One of the major difficulties with component analysis is the automatic selection of the ICA features related to brain activity. In this study we describe a method designed to automatically select networks of potential functional relevance, specifically, those regions known to be involved in motor function, visual processing, executive functioning, auditory processing, memory, and the default-mode network. To do this, image analysis was based on probabilistic ICA as implemented in FSL software. After decomposition, the optimal number of components was selected by applying a novel algorithm which takes into account, for each component, Pearson's median coefficient of skewness of the spatial maps generated by FSL, followed by clustering, segmentation, and spectral analysis. To evaluate the performance of the approach, we investigated the resting-state networks in 25 subjects. For each subject, three resting-state scans were obtained with a Siemens Allegra 3 T scanner (NYU data set). Comparison of the visually and the automatically identified neuronal networks showed that the algorithm had high accuracy (first scan: 95%, second scan: 95%, third scan: 93%) and precision (90%, 90%, 84%). The reproducibility of the networks for visual and automatic selection was very close: it was highly consistent in each subject for the default-mode network (≥ 92%) and the occipital network, which includes the medial visual cortical areas (≥ 94%), and consistent for the attention network (≥ 80%), the right and/or left lateralized frontoparietal attention networks, and the temporal-motor network (≥ 80%). The automatic selection method may be used to detect neural networks and reduce subjectivity in ICA
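
    A minimal sketch of the component-selection criterion named above, Pearson's median coefficient of skewness applied to ICA spatial maps; the simulated maps and the decision threshold are assumptions, and the clustering, segmentation and spectral-analysis steps of the full algorithm are omitted.

```python
import numpy as np

def median_skewness(spatial_map):
    """Pearson's median (second) coefficient of skewness: 3 * (mean - median) / std."""
    m = np.asarray(spatial_map, dtype=float).ravel()
    return 3.0 * (m.mean() - np.median(m)) / m.std()

rng = np.random.default_rng(8)
noise_map = rng.normal(size=10000)                 # noise component: roughly symmetric
network_map = rng.normal(size=10000)
network_map[:500] += 4.0                           # a sparse set of strongly active voxels

for name, comp in [("noise-like", noise_map), ("network-like", network_map)]:
    s = median_skewness(comp)
    print(f"{name}: skewness = {s:.2f} ->", "keep" if abs(s) > 0.2 else "reject")
```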

  16. Multimodal Nonlinear Optical Imaging for Sensitive Detection of Multiple Pharmaceutical Solid-State Forms and Surface Transformations.

    Science.gov (United States)

    Novakovic, Dunja; Saarinen, Jukka; Rojalin, Tatu; Antikainen, Osmo; Fraser-Miller, Sara J; Laaksonen, Timo; Peltonen, Leena; Isomäki, Antti; Strachan, Clare J

    2017-11-07

    Two nonlinear imaging modalities, coherent anti-Stokes Raman scattering (CARS) and sum-frequency generation (SFG), were successfully combined for sensitive multimodal imaging of multiple solid-state forms and their changes on drug tablet surfaces. Two imaging approaches were used and compared: (i) hyperspectral CARS combined with principal component analysis (PCA) and SFG imaging and (ii) simultaneous narrowband CARS and SFG imaging. Three different solid-state forms of indomethacin-the crystalline gamma and alpha forms, as well as the amorphous form-were clearly distinguished using both approaches. Simultaneous narrowband CARS and SFG imaging was faster, but hyperspectral CARS and SFG imaging has the potential to be applied to a wider variety of more complex samples. These methodologies were further used to follow crystallization of indomethacin on tablet surfaces under two storage conditions: 30 °C/23% RH and 30 °C/75% RH. Imaging with (sub)micron resolution showed that the approach allowed detection of very early stage surface crystallization. The surfaces progressively crystallized to predominantly (but not exclusively) the gamma form at lower humidity and the alpha form at higher humidity. Overall, this study suggests that multimodal nonlinear imaging is a highly sensitive, solid-state (and chemically) specific, rapid, and versatile imaging technique for understanding and hence controlling (surface) solid-state forms and their complex changes in pharmaceuticals.

  17. The Digital Image Processing And Quantitative Analysis In Microscopic Image Characterization

    International Nuclear Information System (INIS)

    Ardisasmita, M. Syamsa

    2000-01-01

    Although many electron microscopes produce digital images, not all of them are equipped with a supporting unit to process and analyse the image data quantitatively. Generally the analysis of an image has to be made visually and measurements are made manually. The development of mathematical methods for geometric analysis and pattern recognition allows automatic microscopic image analysis with a computer. Image processing programs can be used for the analysis of image texture and periodic structure by applying the Fourier transform. With the development of composite materials, Fourier analysis in the frequency domain has become important for measuring crystallographic orientation. Periodic structure analysis and crystal orientation are the key to understanding many material properties such as mechanical strength, stress, heat conductivity, resistance, capacitance and other electric and magnetic properties. This paper presents the application of digital image processing to the characterization and quantitative analysis of microscopic images.
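
    A minimal sketch of the Fourier-based texture analysis described above: the 2-D power spectrum of a periodic image concentrates energy at the spatial frequency and orientation of the structure. The synthetic stripe image stands in for a micrograph.

```python
import numpy as np

# Synthetic periodic "micrograph": stripes at a known orientation and spacing.
n = 256
y, x = np.mgrid[0:n, 0:n]
angle = np.deg2rad(30.0)
period = 16.0                                          # pixels per fringe
image = np.sin(2 * np.pi * (x * np.cos(angle) + y * np.sin(angle)) / period)

spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
spectrum[n // 2, n // 2] = 0                           # suppress the DC term
ky, kx = np.unravel_index(np.argmax(spectrum), spectrum.shape)
fy, fx = (ky - n // 2) / n, (kx - n // 2) / n          # dominant spatial frequency (cycles/pixel)
print("estimated period:", 1.0 / np.hypot(fx, fy))                   # ~16 pixels
print("estimated angle :", np.degrees(np.arctan2(fy, fx)) % 180)     # ~30 degrees
```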

  18. Oncological image analysis.

    Science.gov (United States)

    Brady, Sir Michael; Highnam, Ralph; Irving, Benjamin; Schnabel, Julia A

    2016-10-01

    Cancer is one of the world's major healthcare challenges and, as such, an important application of medical image analysis. After a brief introduction to cancer, we summarise some of the major developments in oncological image analysis over the past 20 years, but concentrating those in the authors' laboratories, and then outline opportunities and challenges for the next decade. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Towards native-state imaging in biological context in the electron microscope

    Science.gov (United States)

    Weston, Anne E.; Armer, Hannah E. J.

    2009-01-01

    Modern cell biology is reliant on light and fluorescence microscopy for analysis of cells, tissues and protein localisation. However, these powerful techniques are ultimately limited in resolution by the wavelength of light. Electron microscopes offer much greater resolution due to the shorter effective wavelength of electrons, allowing direct imaging of sub-cellular architecture. The harsh environment of the electron microscope chamber and the properties of the electron beam have led to complex chemical and mechanical preparation techniques, which distance biological samples from their native state and complicate data interpretation. Here we describe recent advances in sample preparation and instrumentation, which push the boundaries of high-resolution imaging. Cryopreparation, cryoelectron microscopy and environmental scanning electron microscopy strive to image samples in near native state. Advances in correlative microscopy and markers enable high-resolution localisation of proteins. Innovation in microscope design has pushed the boundaries of resolution to atomic scale, whilst automatic acquisition of high-resolution electron microscopy data through large volumes is finally able to place ultrastructure in biological context. PMID:19916039

  20. Gabor Analysis for Imaging

    DEFF Research Database (Denmark)

    Christensen, Ole; Feichtinger, Hans G.; Paukner, Stephan

    2015-01-01

    ... it characterizes a function by its transform over phase space, which is the time–frequency plane (TF-plane) in a musical context or the location–wave-number domain in the context of image processing. Since the transition from the signal domain to the phase space domain introduces an enormous amount of data ... of the generalities relevant for an understanding of Gabor analysis of functions on Rd. We pay special attention to the case d = 2, which is the most important case for image processing and image analysis applications. The chapter is organized as follows. Section 2 presents central tools from functional analysis ... the application of Gabor expansions to image representation is considered in Sect. 6.

  1. Artificial intelligence and medical imaging. Expert systems and image analysis

    International Nuclear Information System (INIS)

    Wackenheim, A.; Zoellner, G.; Horviller, S.; Jacqmain, T.

    1987-01-01

    This paper gives an overview of the existing systems for automated image analysis and interpretation in medical imaging, especially in radiology. The example of ORFEVRE, the system for the analysis of CAT-scan images of the cervical triplet (C3-C5) by image analysis and a subsequent expert system, is given and discussed in detail. Possible extensions are described [fr]

  2. The Galileo Solid-State Imaging experiment

    Science.gov (United States)

    Belton, M.J.S.; Klaasen, K.P.; Clary, M.C.; Anderson, J.L.; Anger, C.D.; Carr, M.H.; Chapman, C.R.; Davies, M.E.; Greeley, R.; Anderson, D.; Bolef, L.K.; Townsend, T.E.; Greenberg, R.; Head, J. W.; Neukum, G.; Pilcher, C.B.; Veverka, J.; Gierasch, P.J.; Fanale, F.P.; Ingersoll, A.P.; Masursky, H.; Morrison, D.; Pollack, James B.

    1992-01-01

    ... The dynamic range is spread over 3 gain states and an exposure range from 4.17 ms to 51.2 s. A low level of radial, third-order, geometric distortion has been measured in the raw images that is entirely due to the optical design. The distortion is of the pincushion type and amounts to about 1.2 pixels in the corners of the images. It is expected to be very stable. We discuss the measurement objectives of the SSI experiment in the Jupiter system and emphasize their relationships to those of other experiments in the Galileo project. We outline objectives for Jupiter atmospheric science, noting the relationship of SSI data to that to be returned by experiments on the atmospheric entry Probe. We also outline SSI objectives for satellite surfaces, ring structure, and 'darkside' (e.g., aurorae, lightning, etc.) experiments. Proposed cruise measurement objectives that relate to encounters at Venus, Moon, Earth, Gaspra, and, possibly, Ida are also briefly outlined. The article concludes with a description of a 'fully distributed' data analysis system (HIIPS) that SSI team members intend to use at their home institutions. We also list the nature of systematic data products that will become available to the scientific community. Finally, we append a short 'historical' note outlining the responsibilities and roles of institutions and individuals that have been involved in the 14-year development of the SSI experiment so far. © 1992 Kluwer Academic Publishers.

  3. Resting-State Functional MR Imaging for Determining Language Laterality in Intractable Epilepsy.

    Science.gov (United States)

    DeSalvo, Matthew N; Tanaka, Naoaki; Douw, Linda; Leveroni, Catherine L; Buchbinder, Bradley R; Greve, Douglas N; Stufflebeam, Steven M

    2016-10-01

    Purpose To measure the accuracy of resting-state functional magnetic resonance (MR) imaging in determining hemispheric language dominance in patients with medically intractable focal epilepsies against the results of an intracarotid amobarbital procedure (IAP). Materials and Methods This study was approved by the institutional review board, and all subjects gave signed informed consent. Data in 23 patients with medically intractable focal epilepsy were retrospectively analyzed. All 23 patients were candidates for epilepsy surgery and underwent both IAP and resting-state functional MR imaging as part of presurgical evaluation. Language dominance was determined from functional MR imaging data by calculating a laterality index (LI) after using independent component analysis. The accuracy of this method was assessed against that of IAP by using a variety of thresholds. Sensitivity and specificity were calculated by using leave-one-out cross validation. Spatial maps of language components were qualitatively compared among each hemispheric language dominance group. Results Measurement of hemispheric language dominance with resting-state functional MR imaging was highly concordant with IAP results, with up to 96% (22 of 23) accuracy, 96% (22 of 23) sensitivity, and 96% (22 of 23) specificity. Composite language component maps in patients with typical language laterality consistently included classic language areas such as the inferior frontal gyrus, the posterior superior temporal gyrus, and the inferior parietal lobule, while those of patients with atypical language laterality also included non-classical language areas such as the superior and middle frontal gyri, the insula, and the occipital cortex. Conclusion Resting-state functional MR imaging can be used to measure language laterality in patients with medically intractable focal epilepsy. (©) RSNA, 2016 Online supplemental material is available for this article.
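
    A minimal sketch of a laterality index of the usual form LI = (L - R) / (L + R), computed from summed component weights (or voxel counts) in left and right hemisphere masks; the inputs and the ±0.2 cutoff are illustrative assumptions, not the study's exact procedure.

```python
import numpy as np

def laterality_index(left_weights, right_weights):
    """LI = (L - R) / (L + R) from per-hemisphere activation weights of a language component."""
    L = float(np.sum(left_weights))
    R = float(np.sum(right_weights))
    return (L - R) / (L + R)

def classify(li, cutoff=0.2):
    if li > cutoff:
        return "left-dominant"
    if li < -cutoff:
        return "right-dominant"
    return "bilateral"

# Hypothetical summed component weights within left/right language masks.
li = laterality_index(left_weights=[412.0], right_weights=[118.0])
print(f"LI = {li:.2f} -> {classify(li)}")
```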

  4. Imaging quasiperiodic electronic states in a synthetic Penrose tiling

    Science.gov (United States)

    Collins, Laura C.; Witte, Thomas G.; Silverman, Rochelle; Green, David B.; Gomes, Kenjiro K.

    2017-06-01

    Quasicrystals possess long-range order but lack the translational symmetry of crystalline solids. In solid state physics, periodicity is one of the fundamental properties that prescribes the electronic band structure in crystals. In the absence of periodicity and the presence of quasicrystalline order, the ways that electronic states change remain a mystery. Scanning tunnelling microscopy and atomic manipulation can be used to assemble a two-dimensional quasicrystalline structure mapped upon the Penrose tiling. Here, carbon monoxide molecules are arranged on the surface of Cu(111) one at a time to form the potential landscape that mimics the ionic potential of atoms in natural materials by constraining the electrons in the two-dimensional surface state of Cu(111). The real-space images reveal the presence of the quasiperiodic order in the electronic wave functions and the Fourier analysis of our results links the energy of the resonant states to the local vertex structure of the quasicrystal.

  5. New approaches in intelligent image analysis techniques, methodologies and applications

    CERN Document Server

    Nakamatsu, Kazumi

    2016-01-01

    This book presents an Introduction and 11 independent chapters, which are devoted to various new approaches of intelligent image processing and analysis. The book also presents new methods, algorithms and applied systems for intelligent image processing, on the following basic topics: Methods for Hierarchical Image Decomposition; Intelligent Digital Signal Processing and Feature Extraction; Data Clustering and Visualization via Echo State Networks; Clustering of Natural Images in Automatic Image Annotation Systems; Control System for Remote Sensing Image Processing; Tissue Segmentation of MR Brain Images Sequence; Kidney Cysts Segmentation in CT Images; Audio Visual Attention Models in Mobile Robots Navigation; Local Adaptive Image Processing; Learning Techniques for Intelligent Access Control; Resolution Improvement in Acoustic Maps. Each chapter is self-contained with its own references. Some of the chapters are devoted to the theoretical aspects while the others are presenting the practical aspects and the...

  6. Microscopy image segmentation tool: Robust image data analysis

    Energy Technology Data Exchange (ETDEWEB)

    Valmianski, Ilya, E-mail: ivalmian@ucsd.edu; Monton, Carlos; Schuller, Ivan K. [Department of Physics and Center for Advanced Nanoscience, University of California San Diego, 9500 Gilman Drive, La Jolla, California 92093 (United States)

    2014-03-15

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  7. Microscopy image segmentation tool: Robust image data analysis

    Science.gov (United States)

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-03-01

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  8. Microscopy image segmentation tool: Robust image data analysis

    International Nuclear Information System (INIS)

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-01-01

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy

  9. Return to sender - American Images of the Nordic Welfare States and Nordic Welfare State Branding

    DEFF Research Database (Denmark)

    Marklund, C.; Petersen, Klaus

    2013-01-01

    In this article, we study the relationship between the United States of America and Norden, first showing how images of the Nordic model were constructed and reproduced in the United States from the 1920s until the 1960s. We find both utopias and dystopias in these narratives. Second, the article argues that these American images, narratives, and stereotypes did not only fulfill a function in the American debate, but were also relayed back to Norden, and affected debate, nation-branding strategies, and self-understandings there. During the Cold War, furthermore, the Nordic welfare state image...

  10. Hyperspectral image analysis. A tutorial

    DEFF Research Database (Denmark)

    Amigo Rubio, Jose Manuel; Babamoradi, Hamid; Elcoroaristizabal Martin, Saioa

    2015-01-01

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics like hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing will be exposed, and some guidelines given and discussed. Due to the broad character of current applications and the vast number of multivariate methods available, this paper has focused on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology to differentiate between several types of plastics by using Near infrared hyperspectral imaging and Partial Least Squares - Discriminant Analysis. Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case.
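
    A minimal scikit-learn sketch of PLS-DA on unfolded hyperspectral pixels follows. It is not the tutorial's worked example: the synthetic matrix stands in for NIR pixel spectra, and the number of latent variables and class labels are arbitrary illustration choices.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Synthetic stand-in for unfolded hyperspectral data:
# each row is one pixel spectrum, each column one wavelength channel.
rng = np.random.default_rng(1)
n_pixels, n_bands, n_classes = 300, 120, 3
labels = rng.integers(0, n_classes, n_pixels)
spectra = rng.normal(size=(n_pixels, n_bands)) + labels[:, None] * 0.5

# PLS-DA: regress a one-hot class matrix on the spectra,
# then assign each pixel to the class with the largest predicted score.
Y = np.eye(n_classes)[labels]
pls = PLSRegression(n_components=10)
pls.fit(spectra, Y)
predicted = pls.predict(spectra).argmax(axis=1)
print("training accuracy:", (predicted == labels).mean())
```

    In practice the number of latent variables would be chosen by cross-validation and the spectra pre-processed (e.g., SNV or derivatives) before the regression step.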

  11. Functional connectivity analysis of the brain network using resting-state fMRI

    International Nuclear Information System (INIS)

    Hayashi, Toshihiro

    2011-01-01

    Spatial patterns of spontaneous fluctuations in blood oxygenation level-dependent (BOLD) signals reflect the underlying neural architecture. The study of the brain network based on these self-organized patterns is termed resting-state functional MRI (fMRI). This review article aims at briefly reviewing a basic concept of this technology and discussing its implications for neuropsychological studies. First, the technical aspects of resting-state fMRI, including signal sources, physiological artifacts, image acquisition, and analytical methods such as seed-based correlation analysis and independent component analysis, are explained, followed by a discussion on the major resting-state networks, including the default mode network. In addition, the structure-function correlation studied using diffusion tensor imaging and resting-state fMRI is briefly discussed. Second, I have discussed the reservations and potential pitfalls of 2 major imaging methods: voxel-based lesion-symptom mapping and task fMRI. Problems encountered with voxel-based lesion-symptom mapping can be overcome by using resting-state fMRI and evaluating undamaged brain networks in patients. Regarding task fMRI in patients, I have also emphasized the importance of evaluating the baseline brain activity because the amplitude of activation in BOLD fMRI is hard to interpret as the same baseline cannot be assumed for both patient and normal groups. (author)
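
    The seed-based correlation analysis mentioned above reduces to correlating every voxel's time series with a mean seed time course. The sketch below is a generic numpy version with synthetic data; the seed location and volume size are arbitrary.

```python
import numpy as np

def seed_correlation_map(data, seed_mask):
    """Seed-based functional connectivity from a 4-D array (x, y, z, time).

    The seed time course is the mean series over the voxels in seed_mask;
    the map holds the Pearson correlation of every voxel with that seed.
    """
    seed_ts = data[seed_mask].mean(axis=0)                     # (T,)
    vox = data.reshape(-1, data.shape[-1])                     # (V, T)
    vox_c = vox - vox.mean(axis=1, keepdims=True)
    seed_c = seed_ts - seed_ts.mean()
    num = vox_c @ seed_c
    den = np.sqrt((vox_c ** 2).sum(axis=1) * (seed_c ** 2).sum())
    r = np.where(den > 0, num / den, 0.0)
    return r.reshape(data.shape[:-1])

# Toy example: 10x10x10 volume, 100 time points, seed in one corner.
rng = np.random.default_rng(2)
data = rng.normal(size=(10, 10, 10, 100))
seed = np.zeros((10, 10, 10), bool); seed[:2, :2, :2] = True
print(seed_correlation_map(data, seed).shape)
```

    Real pipelines additionally regress out nuisance signals and band-pass filter the data before computing the correlations.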

  12. Imaging mass spectrometry statistical analysis.

    Science.gov (United States)

    Jones, Emrys A; Deininger, Sören-Oliver; Hogendoorn, Pancras C W; Deelder, André M; McDonnell, Liam A

    2012-08-30

    Imaging mass spectrometry is increasingly used to identify new candidate biomarkers. This clinical application of imaging mass spectrometry is highly multidisciplinary: expertise in mass spectrometry is necessary to acquire high quality data, histology is required to accurately label the origin of each pixel's mass spectrum, disease biology is necessary to understand the potential meaning of the imaging mass spectrometry results, and statistics to assess the confidence of any findings. Imaging mass spectrometry data analysis is further complicated because of the unique nature of the data (within the mass spectrometry field); several of the assumptions implicit in the analysis of LC-MS/profiling datasets are not applicable to imaging. The very large size of imaging datasets and the many data analysis routines that have been reported, combined with inadequate training and a lack of accessible reviews, have exacerbated this problem. In this paper we provide an accessible review of the nature of imaging data and the different strategies by which the data may be analyzed. Particular attention is paid to the assumptions of the data analysis routines to ensure that the reader is apprised of their correct usage in imaging mass spectrometry research. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. A survey of MRI-based medical image analysis for brain tumor studies

    Science.gov (United States)

    Bauer, Stefan; Wiest, Roland; Nolte, Lutz-P.; Reyes, Mauricio

    2013-07-01

    MRI-based medical image analysis for brain tumor studies is gaining attention in recent times due to an increased need for efficient and objective evaluation of large amounts of data. While the pioneering approaches applying automated methods for the analysis of brain tumor images date back almost two decades, the current methods are becoming more mature and coming closer to routine clinical application. This review aims to provide a comprehensive overview by giving a brief introduction to brain tumors and imaging of brain tumors first. Then, we review the state of the art in segmentation, registration and modeling related to tumor-bearing brain images with a focus on gliomas. The objective in the segmentation is outlining the tumor including its sub-compartments and surrounding tissues, while the main challenge in registration and modeling is the handling of morphological changes caused by the tumor. The qualities of different approaches are discussed with a focus on methods that can be applied on standard clinical imaging protocols. Finally, a critical assessment of the current state is performed and future developments and trends are addressed, giving special attention to recent developments in radiological tumor assessment guidelines.

  14. A survey of MRI-based medical image analysis for brain tumor studies

    International Nuclear Information System (INIS)

    Bauer, Stefan; Nolte, Lutz-P; Reyes, Mauricio; Wiest, Roland

    2013-01-01

    MRI-based medical image analysis for brain tumor studies is gaining attention in recent times due to an increased need for efficient and objective evaluation of large amounts of data. While the pioneering approaches applying automated methods for the analysis of brain tumor images date back almost two decades, the current methods are becoming more mature and coming closer to routine clinical application. This review aims to provide a comprehensive overview by giving a brief introduction to brain tumors and imaging of brain tumors first. Then, we review the state of the art in segmentation, registration and modeling related to tumor-bearing brain images with a focus on gliomas. The objective in the segmentation is outlining the tumor including its sub-compartments and surrounding tissues, while the main challenge in registration and modeling is the handling of morphological changes caused by the tumor. The qualities of different approaches are discussed with a focus on methods that can be applied on standard clinical imaging protocols. Finally, a critical assessment of the current state is performed and future developments and trends are addressed, giving special attention to recent developments in radiological tumor assessment guidelines. (topical review)

  15. Image analysis and modeling in medical image computing. Recent developments and advances.

    Science.gov (United States)

    Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T

    2012-01-01

    Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice e.g. to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the grade of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements in clinical routine. In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models in the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications and medical images like radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body

  16. Semi-automated analysis of three-dimensional track images

    International Nuclear Information System (INIS)

    Meesen, G.; Poffijn, A.

    2001-01-01

    In the past, three-dimensional (3-d) track images in solid state detectors were difficult to obtain. With the introduction of the confocal scanning laser microscope it is now possible to record 3-d track images in a non-destructive way. These 3-d track images can later be used to measure typical track parameters. Preparing the detectors and recording the 3-d images, however, is only the first step. The second step in this process is enhancing the image quality by means of deconvolution techniques to obtain the maximum possible resolution. The third step is extracting the typical track parameters. This can be done on-screen by an experienced operator. For large sets of data, however, this manual technique is not desirable. This paper will present some techniques to analyse 3-d track data in an automated way by means of image analysis routines. Advanced thresholding techniques guarantee stable results in different recording situations. By using pre-knowledge about the track shape, reliable object identification is obtained. In case of ambiguity, manual intervention is possible

  17. Quantitative image analysis of synovial tissue

    NARCIS (Netherlands)

    van der Hall, Pascal O.; Kraan, Maarten C.; Tak, Paul Peter

    2007-01-01

    Quantitative image analysis is a form of imaging that includes microscopic histological quantification, video microscopy, image analysis, and image processing. Hallmarks are the generation of reliable, reproducible, and efficient measurements via strict calibration and step-by-step control of the

  18. MEG source imaging method using fast L1 minimum-norm and its applications to signals with brain noise and human resting-state source amplitude images.

    Science.gov (United States)

    Huang, Ming-Xiong; Huang, Charles W; Robb, Ashley; Angeles, AnneMarie; Nichols, Sharon L; Baker, Dewleen G; Song, Tao; Harrington, Deborah L; Theilmann, Rebecca J; Srinivasan, Ramesh; Heister, David; Diwakar, Mithun; Canive, Jose M; Edgar, J Christopher; Chen, Yu-Han; Ji, Zhengwei; Shen, Max; El-Gabalawy, Fady; Levy, Michael; McLay, Robert; Webb-Murphy, Jennifer; Liu, Thomas T; Drake, Angela; Lee, Roland R

    2014-01-01

    The present study developed a fast MEG source imaging technique based on Fast Vector-based Spatio-Temporal Analysis using a L1-minimum-norm (Fast-VESTAL) and then used the method to obtain the source amplitude images of resting-state magnetoencephalography (MEG) signals for different frequency bands. The Fast-VESTAL technique consists of two steps. First, L1-minimum-norm MEG source images were obtained for the dominant spatial modes of sensor-waveform covariance matrix. Next, accurate source time-courses with millisecond temporal resolution were obtained using an inverse operator constructed from the spatial source images of Step 1. Using simulations, Fast-VESTAL's performance was assessed for its 1) ability to localize multiple correlated sources; 2) ability to faithfully recover source time-courses; 3) robustness to different SNR conditions including SNR with negative dB levels; 4) capability to handle correlated brain noise; and 5) statistical maps of MEG source images. An objective pre-whitening method was also developed and integrated with Fast-VESTAL to remove correlated brain noise. Fast-VESTAL's performance was then examined in the analysis of human median-nerve MEG responses. The results demonstrated that this method easily distinguished sources in the entire somatosensory network. Next, Fast-VESTAL was applied to obtain the first whole-head MEG source-amplitude images from resting-state signals in 41 healthy control subjects, for all standard frequency bands. Comparisons between resting-state MEG sources images and known neurophysiology were provided. Additionally, in simulations and cases with MEG human responses, the results obtained from using conventional beamformer technique were compared with those from Fast-VESTAL, which highlighted the beamformer's problems of signal leaking and distorted source time-courses. © 2013.
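
    Fast-VESTAL itself is considerably more involved, but the core of its first step, an L1-penalized (sparse) fit of the dominant spatial modes of the sensor covariance against a leadfield, can be caricatured with scikit-learn's Lasso. Everything below is synthetic and illustrative: the leadfield, the two simulated sources and the regularization strength are assumptions, not the published method or data.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n_sensors, n_sources, n_times = 64, 200, 500

# Synthetic leadfield and two active sources with highly correlated time courses.
leadfield = rng.normal(size=(n_sensors, n_sources))
src_ts = np.zeros((n_sources, n_times))
t = np.linspace(0, 1, n_times)
src_ts[20] = np.sin(2 * np.pi * 10 * t)
src_ts[150] = 0.8 * np.sin(2 * np.pi * 10 * t + 0.3)
sensors = leadfield @ src_ts + 0.1 * rng.normal(size=(n_sensors, n_times))

# Step 1 (in spirit): take dominant spatial modes of the sensor covariance
# and fit each mode with an L1-penalized (sparse) source estimate.
cov = sensors @ sensors.T / n_times
eigvals, eigvecs = np.linalg.eigh(cov)
modes = eigvecs[:, -3:]              # a few dominant spatial modes

source_image = np.zeros(n_sources)
for k in range(modes.shape[1]):
    lasso = Lasso(alpha=0.05, max_iter=10000, fit_intercept=False)
    lasso.fit(leadfield, modes[:, k])
    source_image += np.abs(lasso.coef_)

print("strongest sources:", np.argsort(source_image)[-5:])
```

    Because the penalty is on the L1 norm, correlated sources are not suppressed the way they can be in a beamformer, which is the point the abstract makes about signal leakage and distorted time courses.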

  19. A survey on deep learning in medical image analysis.

    Science.gov (United States)

    Litjens, Geert; Kooi, Thijs; Bejnordi, Babak Ehteshami; Setio, Arnaud Arindra Adiyoso; Ciompi, Francesco; Ghafoorian, Mohsen; van der Laak, Jeroen A W M; van Ginneken, Bram; Sánchez, Clara I

    2017-12-01

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks. Concise overviews are provided of studies per application area: neuro, retinal, pulmonary, digital pathology, breast, cardiac, abdominal, musculoskeletal. We end with a summary of the current state-of-the-art, a critical discussion of open challenges and directions for future research. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Image Analysis Technique for Material Behavior Evaluation in Civil Structures

    Science.gov (United States)

    Moretti, Michele; Rossi, Gianluca

    2017-01-01

    The article presents a hybrid monitoring technique for the measurement of the deformation field. The goal is to obtain information about crack propagation in existing structures, for the purpose of monitoring their state of health. The measurement technique is based on the capture and analysis of a digital image set. Special markers were used on the surface of the structures; these can be removed without damaging existing structures such as historical masonry. The digital image analysis was done using software specifically designed in Matlab to follow the tracking of the markers and determine the evolution of the deformation state. The method can be used in any type of structure but is particularly suitable when it is necessary not to damage the surface of structures. A series of experiments carried out on masonry walls of the Oliverian Museum (Pesaro, Italy) and Palazzo Silvi (Perugia, Italy) have allowed the validation of the procedure developed by comparing the results with those derived from traditional measuring techniques. PMID:28773129
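
    The marker-tracking idea can be sketched as thresholding each frame, labelling the marker blobs and following their centroids between frames. The published work used purpose-built Matlab software; the Python version below is only an illustrative stand-in, and nearest-centroid matching assumes displacements are small compared with marker spacing.

```python
import numpy as np
from scipy import ndimage

def marker_centroids(image, threshold):
    """Centroids (row, col) of bright markers in a grayscale frame."""
    labels, n = ndimage.label(image > threshold)
    return np.array(ndimage.center_of_mass(image, labels, list(range(1, n + 1))))

def marker_displacements(frame0, frame1, threshold):
    """Per-marker displacement vectors between two frames (nearest-centroid match)."""
    c0 = marker_centroids(frame0, threshold)
    c1 = marker_centroids(frame1, threshold)
    disp = []
    for p in c0:
        d = np.linalg.norm(c1 - p, axis=1)
        disp.append(c1[d.argmin()] - p)
    return np.array(disp)

# Toy example: two bright markers shifted by one pixel between frames.
f0 = np.zeros((50, 50)); f0[10:13, 10:13] = 1.0; f0[30:33, 40:43] = 1.0
f1 = np.roll(f0, shift=(1, 0), axis=(0, 1))
print(marker_displacements(f0, f1, threshold=0.5))
```

    The displacement field sampled at the markers can then be differentiated to estimate strains, which is what a crack-monitoring application ultimately needs.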

  1. Resting-state functional magnetic resonance imaging for surgical planning in pediatric patients: a preliminary experience.

    Science.gov (United States)

    Roland, Jarod L; Griffin, Natalie; Hacker, Carl D; Vellimana, Ananth K; Akbari, S Hassan; Shimony, Joshua S; Smyth, Matthew D; Leuthardt, Eric C; Limbrick, David D

    2017-12-01

    OBJECTIVE Cerebral mapping for surgical planning and operative guidance is a challenging task in neurosurgery. Pediatric patients are often poor candidates for many modern mapping techniques because of inability to cooperate due to their immature age, cognitive deficits, or other factors. Resting-state functional MRI (rs-fMRI) is uniquely suited to benefit pediatric patients because it is inherently noninvasive and does not require task performance or significant cooperation. Recent advances in the field have made mapping cerebral networks possible on an individual basis for use in clinical decision making. The authors present their initial experience translating rs-fMRI into clinical practice for surgical planning in pediatric patients. METHODS The authors retrospectively reviewed cases in which the rs-fMRI analysis technique was used prior to craniotomy in pediatric patients undergoing surgery in their institution. Resting-state analysis was performed using a previously trained machine-learning algorithm for identification of resting-state networks on an individual basis. Network maps were uploaded to the clinical imaging and surgical navigation systems. Patient demographic and clinical characteristics, including need for sedation during imaging and use of task-based fMRI, were also recorded. RESULTS Twenty patients underwent rs-fMRI prior to craniotomy between December 2013 and June 2016. Their ages ranged from 1.9 to 18.4 years, and 12 were male. Five of the 20 patients also underwent task-based fMRI and one underwent awake craniotomy. Six patients required sedation to tolerate MRI acquisition, including resting-state sequences. Exemplar cases are presented including anatomical and resting-state functional imaging. CONCLUSIONS Resting-state fMRI is a rapidly advancing field of study allowing for whole brain analysis by a noninvasive modality. It is applicable to a wide range of patients and effective even under general anesthesia. The nature of resting-state

  2. Stochastic geometry for image analysis

    CERN Document Server

    Descombes, Xavier

    2013-01-01

    This book develops the stochastic geometry framework for image analysis purposes. Two main frameworks are described: marked point process and random closed sets models. We derive the main issues for defining an appropriate model. The algorithms for sampling and optimizing the models as well as for estimating parameters are reviewed. Numerous applications, covering remote sensing images, biological and medical imaging, are detailed. This book provides all the necessary tools for developing an image analysis application based on modern stochastic modeling.

  3. Neural imaging to track mental states while using an intelligent tutoring system.

    Science.gov (United States)

    Anderson, John R; Betts, Shawn; Ferris, Jennifer L; Fincham, Jon M

    2010-04-13

    Hemodynamic measures of brain activity can be used to interpret a student's mental state when they are interacting with an intelligent tutoring system. Functional magnetic resonance imaging (fMRI) data were collected while students worked with a tutoring system that taught an algebra isomorph. A cognitive model predicted the distribution of solution times from measures of problem complexity. Separately, a linear discriminant analysis used fMRI data to predict whether or not students were engaged in problem solving. A hidden Markov algorithm merged these two sources of information to predict the mental states of students during problem-solving episodes. The algorithm was trained on data from 1 day of interaction and tested with data from a later day. In terms of predicting what state a student was in during a 2-s period, the algorithm achieved 87% accuracy on the training data and 83% accuracy on the test data. The results illustrate the importance of integrating the bottom-up information from imaging data with the top-down information from a cognitive model.
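
    The merging step described above, combining per-interval state evidence from an imaging classifier with transition structure from a cognitive model, is essentially hidden Markov decoding. The sketch below is a generic Viterbi decoder, not the authors' exact formulation: the emission scores, transition matrix and state labels are synthetic placeholders.

```python
import numpy as np

def viterbi(log_emissions, log_transitions, log_prior):
    """Most likely state sequence given per-interval emission log-scores.

    log_emissions : (T, S) array, e.g. log-probabilities from an LDA classifier
    log_transitions : (S, S) array of log transition probabilities
    log_prior : (S,) array of log initial-state probabilities
    """
    T, S = log_emissions.shape
    delta = log_prior + log_emissions[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_transitions          # (prev, next)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emissions[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

# Toy example: 3 mental states, 20 two-second intervals, noisy classifier scores.
rng = np.random.default_rng(4)
true = np.repeat([0, 1, 2, 1], 5)
emis = rng.normal(size=(20, 3)); emis[np.arange(20), true] += 2.0
trans = np.log(np.full((3, 3), 0.1) + np.eye(3) * 0.7)   # sticky states
print(viterbi(emis, trans, np.log(np.ones(3) / 3)))
```

    The transition matrix plays the role of the top-down cognitive model, biasing the decoder toward plausible state sequences when the bottom-up imaging evidence is ambiguous.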

  4. Transfer function analysis of radiographic imaging systems

    International Nuclear Information System (INIS)

    Metz, C.E.; Doi, K.

    1979-01-01

    The theoretical and experimental aspects of the techniques of transfer function analysis used in radiographic imaging systems are reviewed. The mathematical principles of transfer function analysis are developed for linear, shift-invariant imaging systems, for the relation between object and image and for the image due to a sinusoidal plane wave object. The other basic mathematical principle discussed is 'Fourier analysis' and its application to an input function. Other aspects of transfer function analysis included are alternative expressions for the 'optical transfer function' of imaging systems and expressions are derived for both serial and parallel transfer image sub-systems. The applications of transfer function analysis to radiographic imaging systems are discussed in relation to the linearisation of the radiographic imaging system, the object, the geometrical unsharpness, the screen-film system unsharpness, other unsharpness effects and finally noise analysis. It is concluded that extensive theoretical, computer simulation and experimental studies have demonstrated that the techniques of transfer function analysis provide an accurate and reliable means for predicting and understanding the effects of various radiographic imaging system components in most practical diagnostic medical imaging situations. (U.K.)
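
    A standard practical outcome of this kind of transfer function analysis is the modulation transfer function (MTF), often estimated as the normalized Fourier magnitude of a measured line spread function. The sketch below uses a synthetic Gaussian LSF and an assumed pixel pitch; it is a generic illustration, not a procedure from the review.

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_pitch_mm):
    """Modulation transfer function from a sampled line spread function.

    Returns spatial frequencies (cycles/mm) and the normalized |FT| of the LSF.
    """
    lsf = lsf / lsf.sum()                       # normalize the LSF area to 1
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)
    return freqs, mtf / mtf[0]                  # MTF(0) = 1 by construction

# Example: a Gaussian LSF with sigma = 0.2 mm sampled at 0.05 mm pitch.
x = np.arange(-64, 64) * 0.05
lsf = np.exp(-x**2 / (2 * 0.2**2))
freqs, mtf = mtf_from_lsf(lsf, 0.05)
print(freqs[:5], mtf[:5])
```

    Cascading serial subsystems then amounts to multiplying their individual MTFs at each spatial frequency, which is the practical payoff of the linear, shift-invariant assumption.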

  5. REST: a toolkit for resting-state functional magnetic resonance imaging data processing.

    Directory of Open Access Journals (Sweden)

    Xiao-Wei Song

    Full Text Available Resting-state fMRI (RS-fMRI) has been drawing more and more attention in recent years. However, a publicly available, systematically integrated and easy-to-use tool for RS-fMRI data processing is still lacking. We developed a toolkit for the analysis of RS-fMRI data, namely the RESting-state fMRI data analysis Toolkit (REST). REST was developed in MATLAB with graphical user interface (GUI). After data preprocessing with SPM or AFNI, a few analytic methods can be performed in REST, including functional connectivity analysis based on linear correlation, regional homogeneity, amplitude of low frequency fluctuation (ALFF), and fractional ALFF. A few additional functions were implemented in REST, including a DICOM sorter, linear trend removal, bandpass filtering, time course extraction, regression of covariates, image calculator, statistical analysis, and slice viewer (for result visualization, multiple comparison correction, etc.). REST is an open-source package and is freely available at http://www.restfmri.net.
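
    As a simplified, stand-alone illustration of the ALFF and fractional ALFF measures listed above (not REST's implementation), the following sketch sums low-frequency spectral amplitudes of a single voxel time series; the band limits and TR are the conventional illustrative choices.

```python
import numpy as np

def alff_falff(ts, tr, band=(0.01, 0.08)):
    """ALFF and fALFF of a single voxel time series (simplified variant).

    ALFF is the sum of spectral amplitudes inside the low-frequency band;
    fALFF is that sum divided by the amplitude summed over all frequencies.
    """
    ts = ts - ts.mean()
    amp = np.abs(np.fft.rfft(ts))
    freqs = np.fft.rfftfreq(ts.size, d=tr)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    alff = amp[in_band].sum()
    falff = alff / amp.sum() if amp.sum() > 0 else 0.0
    return alff, falff

# Toy voxel: 0.05 Hz oscillation plus noise, TR = 2 s, 200 volumes.
rng = np.random.default_rng(5)
t = np.arange(200) * 2.0
ts = np.sin(2 * np.pi * 0.05 * t) + 0.5 * rng.normal(size=200)
print(alff_falff(ts, tr=2.0))
```

    In practice these maps are computed voxel-wise after detrending and are usually standardized (e.g., z-scored) across the brain before group statistics.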

  6. Tridimensional ultrasonic images analysis for the in service inspection of fast breeder reactors

    International Nuclear Information System (INIS)

    Dancre, M.

    1999-11-01

    Tridimensional image analysis provides a set of methods for the intelligent extraction of information in order to visualize, recognize or inspect objects in volumetric images. In this field of research, we are interested in algorithmic and methodological aspects of extracting surface visual information embedded in volumetric ultrasonic images. The aim is to help a non-acoustician operator, or possibly the system itself, to inspect the surfaces of the vessel and internals in Fast Breeder Reactors (FBR). Those surfaces are immersed in liquid metal, which justifies the choice of ultrasonic technology. We first present a state of the art on the visualization of volumetric ultrasonic images, methods of noise analysis, geometrical modelling for surface analysis, and curve and surface matching. These four points are then inserted into a global analysis strategy that relies on an acoustical analysis (echo recognition), an object analysis (object recognition and reconstruction) and a surface analysis (surface defect detection). Little literature can be found on ultrasonic echo recognition through image analysis. We suggest an original method that can be generalized to all images with structured and non-structured noise. From a technical point of view, this methodology applied to echo recognition turns out to be a cooperative approach between mathematical morphology and snakes (active contours). An entropy maximization technique is required for the binarization of volumetric data. (author)
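
    The entropy-maximization binarization mentioned at the end is commonly realized as a Kapur-style maximum-entropy threshold. The sketch below is a generic implementation on synthetic data, not necessarily the exact technique used in the thesis.

```python
import numpy as np

def kapur_threshold(volume, n_bins=256):
    """Maximum-entropy (Kapur) threshold for binarizing gray-level data.

    The threshold maximizes the sum of the Shannon entropies of the
    background and foreground gray-level distributions.
    """
    hist, edges = np.histogram(volume.ravel(), bins=n_bins)
    p = hist.astype(float) / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, n_bins):
        pb, pf = p[:t].sum(), p[t:].sum()
        if pb == 0 or pf == 0:
            continue
        b, f = p[:t][p[:t] > 0] / pb, p[t:][p[t:] > 0] / pf
        h = -np.sum(b * np.log(b)) - np.sum(f * np.log(f))
        if h > best_h:
            best_h, best_t = h, t
    return edges[best_t]

# Toy bimodal "volume": dim background plus a bright echo-like blob.
rng = np.random.default_rng(6)
vol = rng.normal(0.2, 0.05, size=(32, 32, 32))
vol[10:15, 10:15, 10:15] = rng.normal(0.8, 0.05, size=(5, 5, 5))
thr = kapur_threshold(vol)
print("threshold:", thr, "foreground voxels:", int((vol > thr).sum()))
```

    Unlike a fixed-amplitude cut, the entropy criterion adapts to the actual gray-level distribution, which is useful when echo amplitudes vary with the inspection geometry.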

  7. Resting-State Functional Magnetic Resonance Imaging for Language Preoperative Planning

    Science.gov (United States)

    Branco, Paulo; Seixas, Daniela; Deprez, Sabine; Kovacs, Silvia; Peeters, Ronald; Castro, São L.; Sunaert, Stefan

    2016-01-01

    Functional magnetic resonance imaging (fMRI) is a well-known non-invasive technique for the study of brain function. One of its most common clinical applications is preoperative language mapping, essential for the preservation of function in neurosurgical patients. Typically, fMRI is used to track task-related activity, but poor task performance and movement artifacts can be critical limitations in clinical settings. Recent advances in resting-state protocols open new possibilities for pre-surgical mapping of language potentially overcoming these limitations. To test the feasibility of using resting-state fMRI instead of conventional active task-based protocols, we compared results from fifteen patients with brain lesions while performing a verb-to-noun generation task and while at rest. Task-activity was measured using a general linear model analysis and independent component analysis (ICA). Resting-state networks were extracted using ICA and further classified in two ways: manually by an expert and by using an automated template matching procedure. The results revealed that the automated classification procedure correctly identified language networks as compared to the expert manual classification. We found a good overlay between task-related activity and resting-state language maps, particularly within the language regions of interest. Furthermore, resting-state language maps were as sensitive as task-related maps, and had higher specificity. Our findings suggest that resting-state protocols may be suitable to map language networks in a quick and clinically efficient way. PMID:26869899

  8. Bayesian network analysis revealed the connectivity difference of the default mode network from the resting-state to task-state

    Science.gov (United States)

    Wu, Xia; Yu, Xinyu; Yao, Li; Li, Rui

    2014-01-01

    Functional magnetic resonance imaging (fMRI) studies have converged to reveal the default mode network (DMN), a constellation of regions that display co-activation during resting-state but co-deactivation during attention-demanding tasks in the brain. Here, we employed a Bayesian network (BN) analysis method to construct a directed effective connectivity model of the DMN and compared the organizational architecture and interregional directed connections under both resting-state and task-state. The analysis results indicated that the DMN was consistently organized into two closely interacting subsystems in both resting-state and task-state. The directed connections between DMN regions, however, changed significantly from the resting-state to task-state condition. The results suggest that the DMN intrinsically maintains a relatively stable structure whether at rest or performing tasks but has different information processing mechanisms under varied states. PMID:25309414

  9. Progress in clinical research and application of resting state functional brain imaging

    International Nuclear Information System (INIS)

    Long Miaomiao; Ni Hongyan

    2013-01-01

    Resting-state functional brain imaging uses an experimental design that is free of stimulus tasks and offers various parametric maps through different data-driven post-processing methods, with endogenous BOLD signal changes as the source of imaging. The mechanisms of resting-state brain activity can be studied extensively, with improved patient compliance and clinical applicability compared with task-related functional brain imaging. Resting-state functional brain imaging can also be used as a data acquisition method, with implicit neuronal activity as a form of experimental design, to reveal the characteristic brain activity of epileptic patients. Resting-state data processing methods can even be used to analyze task-related functional MRI data, opening new horizons for task-related functional MRI studies. (authors)

  10. Image Analysis

    DEFF Research Database (Denmark)

    The 19th Scandinavian Conference on Image Analysis was held at the IT University of Copenhagen in Denmark during June 15-17, 2015. The SCIA conference series has been an ongoing biannual event for more than 30 years and over the years it has nurtured a world-class regional research and development area within the four participating Nordic countries. It is a regional meeting of the International Association for Pattern Recognition (IAPR). We would like to thank all authors who submitted works to this year's SCIA, the invited speakers, and our Program Committee. In total 67 papers were submitted. The topics of the accepted papers range from novel applications of vision systems, pattern recognition, machine learning, feature extraction, segmentation, 3D vision, to medical and biomedical image analysis. The papers originate from all the Scandinavian countries and several other European countries...

  11. MR image analysis: Longitudinal cardiac motion influences left ventricular measurements

    International Nuclear Information System (INIS)

    Berkovic, Patrick; Hemmink, Maarten; Parizel, Paul M.; Vrints, Christiaan J.; Paelinck, Bernard P.

    2010-01-01

    Background: Software for the analysis of left ventricular (LV) volumes and mass using border detection in short-axis images only, is hampered by through-plane cardiac motion. Therefore we aimed to evaluate software that involves longitudinal cardiac motion. Methods: Twenty-three consecutive patients underwent 1.5-Tesla cine magnetic resonance (MR) imaging of the entire heart in the long-axis and short-axis orientation with breath-hold steady-state free precession imaging. Offline analysis was performed using software that uses short-axis images (Medis MASS) and software that includes two-chamber and four-chamber images to involve longitudinal LV expansion and shortening (CAAS-MRV). Intraobserver and interobserver reproducibility was assessed by using Bland-Altman analysis. Results: Compared with MASS software, CAAS-MRV resulted in significantly smaller end-diastolic (156 ± 48 ml versus 167 ± 52 ml, p = 0.001) and end-systolic LV volumes (79 ± 48 ml versus 94 ± 52 ml, p < 0.001). In addition, CAAS-MRV resulted in higher LV ejection fraction (52 ± 14% versus 46 ± 13%, p < 0.001) and calculated LV mass (154 ± 52 g versus 142 ± 52 g, p = 0.004). Intraobserver and interobserver limits of agreement were similar for both methods. Conclusion: MR analysis of LV volumes and mass involving long-axis LV motion is a highly reproducible method, resulting in smaller LV volumes, higher ejection fraction and calculated LV mass.
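
    The intraobserver and interobserver reproducibility quoted above relies on Bland-Altman limits of agreement, which can be sketched in a few lines. The paired volumes below are made-up illustrative numbers, not the study's data.

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement methods.

    a, b : paired measurements (e.g., end-diastolic volumes from two packages)
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative (made-up) paired LV end-diastolic volumes in ml.
caas = [150, 162, 148, 171, 155]
mass = [160, 170, 159, 183, 166]
bias, loa = bland_altman(caas, mass)
print(f"bias = {bias:.1f} ml, limits of agreement = {loa[0]:.1f} to {loa[1]:.1f} ml")
```

    The bias captures the systematic offset between the two analysis packages, while the limits of agreement quantify how far individual paired measurements are expected to differ.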

  12. CMOS image sensors: State-of-the-art

    Science.gov (United States)

    Theuwissen, Albert J. P.

    2008-09-01

    This paper gives an overview of the state-of-the-art of CMOS image sensors. The main focus is put on the shrinkage of the pixels: what is the effect on the performance characteristics of the imagers and on the various physical parameters of the camera? How is the CMOS pixel architecture optimized to cope with the negative performance effects of the ever-shrinking pixel size? On the other hand, the smaller dimensions in CMOS technology allow further integration on column level and even on pixel level. This will make CMOS imagers even smarter than they already are.

  13. Positron emission tomography: Physics, instrumentation, and image analysis

    International Nuclear Information System (INIS)

    Porenta, G.

    1994-01-01

    Positron emission tomography (PET) is a noninvasive diagnostic technique that permits reconstruction of cross-sectional images of the human body which depict the biodistribution of PET tracer substances. A large variety of physiological PET tracers, mostly based on isotopes of carbon, nitrogen, oxygen, and fluorine, is available and allows the in vivo investigation of organ perfusion, metabolic pathways and biomolecular processes in normal and diseased states. PET cameras utilize the physical characteristics of positron decay to derive quantitative measurements of tracer concentrations, a capability that has so far been elusive for conventional SPECT (single photon emission computed tomography) imaging techniques. Due to the short half-lives of most PET isotopes, an on-site cyclotron and a radiochemistry unit are necessary to provide an adequate supply of PET tracers. While operating a PET center in the past was a complex procedure restricted to a few academic centers with ample resources, PET technology has rapidly advanced in recent years and has entered the commercial nuclear medicine market. To date, the availability of compact cyclotrons with remote computer control, automated synthesis units for PET radiochemistry, high-performance PET cameras, and user-friendly analysis workstations permits installation of a clinical PET center within most nuclear medicine facilities. This review provides simple descriptions of important aspects concerning physics, instrumentation, and image analysis in PET imaging which should be understood by medical personnel involved in the clinical operation of a PET imaging center. (author)

  14. Image-potential states and work function of graphene

    International Nuclear Information System (INIS)

    Niesner, Daniel; Fauster, Thomas

    2014-01-01

    Image-potential states of graphene on various substrates have been investigated by two-photon photoemission and scanning tunneling spectroscopy. They are used as a probe for the graphene-substrate interaction and resulting changes in the (local) work function. The latter is driven by the work function difference between graphene and the substrate. This results in a charge transfer which also contributes to core-level shifts in x-ray photoemission. In this review article, we give an overview over the theoretical models and the experimental data for image-potential states and work function of graphene on various substrates. (topical review)

  15. Digital image analysis

    DEFF Research Database (Denmark)

    Riber-Hansen, Rikke; Vainer, Ben; Steiniche, Torben

    2012-01-01

    Digital image analysis (DIA) is increasingly implemented in histopathological research to facilitate truly quantitative measurements, decrease inter-observer variation and reduce hands-on time. Originally, efforts were made to enable DIA to reproduce manually obtained results on histological slides. The topics covered here include reproducibility, application of stereology-based quantitative measurements, time consumption, optimization of histological slides, regions of interest selection and recent developments in staining and imaging techniques.

  16. Strongly Localized Image States of Spherical Graphitic Particles

    Directory of Open Access Journals (Sweden)

    Godfrey Gumbs

    2014-01-01

    Full Text Available We investigate the localization of charged particles by the image potential of spherical shells, such as fullerene buckyballs. These spherical image states exist within surface potentials formed by the competition between the attractive image potential and the repulsive centripetal force arising from the angular motion. The image potential has a power law rather than a logarithmic behavior. This leads to fundamental differences in the nature of the effective potential for the two geometries. Our calculations have shown that the captured charge is more strongly localized closest to the surface for fullerenes than for cylindrical nanotubes.

  17. Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox

    Directory of Open Access Journals (Sweden)

    Andre Santos Ribeiro

    2015-07-01

    Full Text Available Aim. In recent years, connectivity studies using neuroimaging data have increased the understanding of the organization of large-scale structural and functional brain networks. However, data analysis is time consuming as rigorous procedures must be assured, from structuring data and pre-processing to modality specific data procedures. Until now, no single toolbox was able to perform such investigations on truly multimodal image data from beginning to end, including the combination of different connectivity analyses. Thus, we have developed the Multimodal Imaging Brain Connectivity Analysis (MIBCA toolbox with the goal of diminishing time waste in data processing and to allow an innovative and comprehensive approach to brain connectivity.Materials and Methods. The MIBCA toolbox is a fully automated all-in-one connectivity toolbox that offers pre-processing, connectivity and graph theoretical analyses of multimodal image data such as diffusion-weighted imaging, functional magnetic resonance imaging (fMRI and positron emission tomography (PET. It was developed in MATLAB environment and pipelines well-known neuroimaging softwares such as Freesurfer, SPM, FSL, and Diffusion Toolkit. It further implements routines for the construction of structural, functional and effective or combined connectivity matrices, as well as, routines for the extraction and calculation of imaging and graph-theory metrics, the latter using also functions from the Brain Connectivity Toolbox. Finally, the toolbox performs group statistical analysis and enables data visualization in the form of matrices, 3D brain graphs and connectograms. In this paper the MIBCA toolbox is presented by illustrating its capabilities using multimodal image data from a group of 35 healthy subjects (19–73 years old with volumetric T1-weighted, diffusion tensor imaging, and resting state fMRI data, and 10 subjets with 18F-Altanserin PET data also.Results. It was observed both a high inter

  18. Athermal electron distribution probed by femtosecond multiphoton photoemission from image potential states

    International Nuclear Information System (INIS)

    Ferrini, Gabriele; Giannetti, Claudio; Pagliara, Stefania; Banfi, Francesco; Galimberti, Gianluca; Parmigiani, Fulvio

    2005-01-01

    Image potential states are populated through indirect, scattering-mediated multiphoton absorption induced by femtosecond laser pulses and revealed by single-photon photoemission. The measured effective mass is significantly different from that obtained with direct, resonant population. These features reveal a strong coupling of the electrons residing in the image potential state, outside the solid, with the underlying hot electron population created by the laser pulse. The coupling is mediated by a many-body scattering interaction between the image potential state electrons and bulk electrons in highly excited states

  19. Image sequence analysis

    CERN Document Server

    1981-01-01

    The processing of image sequences has a broad spectrum of important applications including target tracking, robot navigation, bandwidth compression of TV conferencing video signals, studying the motion of biological cells using microcinematography, cloud tracking, and highway traffic monitoring. Image sequence processing involves a large amount of data. However, because of the progress in computer, LSI, and VLSI technologies, we have now reached a stage when many useful processing tasks can be done in a reasonable amount of time. As a result, research and development activities in image sequence analysis have recently been growing at a rapid pace. An IEEE Computer Society Workshop on Computer Analysis of Time-Varying Imagery was held in Philadelphia, April 5-6, 1979. A related special issue of the IEEE Transactions on Pattern Analysis and Machine Intelligence was published in November 1980. The IEEE Computer magazine has also published a special issue on the subject in 1981. The purpose of this book ...

  20. A new analysis of archival images of comet 29P/Schwassmann-Wachmann 1 to constrain the rotation state of and active regions on its nucleus

    Science.gov (United States)

    Schambeau, C.; Fernández, Y.; Samarasinha, N.; Mueller, B.; Woodney, L.; Lisse, C.; Kelley, M.; Meech, K.

    2014-07-01

    Introduction: 29P/Schwassmann-Wachmann 1 (SW1) is a unique comet (and Centaur) with an almost circular orbit just outside the orbit of Jupiter. This orbit results in SW1 receiving a nearly constant insolation, thus giving a simpler environment in which to study thermal properties and behaviors of this comet's nucleus. Such knowledge is crucial for improving our understanding of coma morphology, nuclear thermal evolution, and nuclear structure. To this end, our overarching goal is to develop a thermophysical model of SW1's nucleus that makes use of realistic physical and structural properties as inputs. This model will help to explain the highly variable gas- and dust-production rates of this comet; SW1 is well known for its frequent but stochastic outbursts of mass loss [1,2,3]. Here we will report new constraints on the effective radius, beaming parameter, spin state, and location of active regions on the nucleus of SW1. Results: The analysis completed so far consists of a re-analysis of Spitzer Space Telescope thermal-IR images of SW1 from UT 2003 November 21 and 24, when SW1 was observed outside of outburst. The images are from Spitzer's IRAC 5.8-μm and 8.0-μm bands and MIPS 24.0-μm and 70-μm bands. This analysis is similar to that of Stansberry et al. [4, 5], but with data products generated from the latest Spitzer pipeline. Also, analysis of the 5.8-μm image had not been reported before. Coma removal techniques (e.g., Fernández et al. [6]) were applied to each image letting us measure the nuclear point-source contribution to each image. The measured flux densities for each band were fit with a Near Earth Asteroid Thermal Model (NEATM, [7]) and resulted in values for the effective radius of SW1's nucleus, constraints on the thermal inertia, and an IR beaming-parameter value. Current efforts have shifted to constraining the spin properties of SW1's nucleus and surface areas of activity through use of an existing Monte Carlo model [8, 9] to reproduce
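
    The NEATM fit referred to above reduces, at zero phase angle, to integrating Planck emission over a hemisphere whose temperature falls off as the fourth root of the cosine of the angle from the subsolar point. The sketch below is a simplified zero-phase evaluation with illustrative parameter values; it is not the authors' fitting code, and the albedo, beaming parameter, emissivity and radius are assumptions chosen only to show the shape of the calculation.

```python
import numpy as np

# Physical constants (SI)
H_PLANCK, C_LIGHT, K_B, SIGMA_SB = 6.626e-34, 2.998e8, 1.381e-23, 5.670e-8
SOLAR_CONST = 1361.0      # W m^-2 at 1 au
AU = 1.496e11             # m

def planck(lam, T):
    """Spectral radiance B_lambda(T) in W m^-2 m^-1 sr^-1."""
    return (2 * H_PLANCK * C_LIGHT**2 / lam**5 /
            np.expm1(H_PLANCK * C_LIGHT / (lam * K_B * T)))

def neatm_flux(lam, radius_m, r_helio_au, delta_au, albedo=0.04,
               eta=1.0, emissivity=0.9):
    """Thermal flux density (W m^-2 m^-1) at zero phase angle.

    Temperature falls as cos^(1/4) of the angle from the subsolar point;
    the night side is assumed to emit nothing.
    """
    t_ss = ((1 - albedo) * SOLAR_CONST /
            (eta * emissivity * SIGMA_SB * r_helio_au**2)) ** 0.25
    theta = np.linspace(0.0, np.pi / 2, 400, endpoint=False)
    integrand = planck(lam, t_ss * np.cos(theta)**0.25) * np.cos(theta) * np.sin(theta)
    dtheta = theta[1] - theta[0]
    return (2 * np.pi * emissivity * radius_m**2 / (delta_au * AU)**2 *
            integrand.sum() * dtheta)

# Illustrative call: a ~30 km radius nucleus near 5.7 au observed at 24 microns.
print(neatm_flux(lam=24e-6, radius_m=30e3, r_helio_au=5.7, delta_au=5.5))
```

    Fitting then amounts to adjusting the radius and beaming parameter eta until the modeled flux densities match the coma-removed nuclear fluxes in each band.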

  1. Multifractal analysis of 2D gray soil images

    Science.gov (United States)

    González-Torres, Ivan; Losada, Juan Carlos; Heck, Richard; Tarquis, Ana M.

    2015-04-01

    Soil structure, understood as the spatial arrangement of soil pores, is one of the key factors in soil modelling processes. Geometric properties of individual pores and the interpretation of their morphological parameters can be estimated from thin sections or 3D Computed Tomography images (Tarquis et al., 2003), but there is no satisfactory method to binarize these images and quantify the complexity of their spatial arrangement (Tarquis et al., 2008, Tarquis et al., 2009; Baveye et al., 2010). The objective of this work was to apply a multifractal technique, through the singularity (α) and f(α) spectra, to quantify this complexity without applying any threshold (Gónzalez-Torres, 2014). Intact soil samples were collected from four horizons of an Argisol, formed on the Tertiary Barreiras group of formations in Pernambuco state, Brazil (Itapirema Experimental Station). The natural vegetation of the region is tropical, coastal rainforest. From each horizon, showing different porosities and spatial arrangements, three adjacent samples were taken, giving a set of twelve samples. The intact soil samples were imaged using an EVS (now GE Medical, London, Canada) MS-8 MicroCT scanner with 45 μm pixel⁻¹ resolution (256x256 pixels). Though some samples required paring to fit the 64 mm diameter imaging tubes, field orientation was maintained. References Baveye, P.C., M. Laba, W. Otten, L. Bouckaert, P. Dello, R.R. Goswami, D. Grinev, A. Houston, Yaoping Hu, Jianli Liu, S. Mooney, R. Pajor, S. Sleutel, A. Tarquis, Wei Wang, Qiao Wei, Mehmet Sezgin. Observer-dependent variability of the thresholding step in the quantitative analysis of soil images and X-ray microtomography data. Geoderma, 157, 51-63, 2010. González-Torres, Iván. Theory and application of multifractal analysis methods in images for the study of soil structure. Master thesis, UPM, 2014. Tarquis, A.M., R.J. Heck, J.B. Grau, J. Fabregat, M.E. Sanchez and J.M. Antón. Influence of Thresholding in Mass and Entropy Dimension of 3-D
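
    One common box-counting implementation of the α and f(α) spectra, the direct (Chhabra-Jensen-style) estimator applied to the gray-level measure, is sketched below on a synthetic square image. It is a generic illustration, not necessarily the exact procedure of González-Torres (2014); the q range and box sizes are arbitrary choices.

```python
import numpy as np

def falpha_spectrum(image, q_values=np.linspace(-5, 5, 21),
                    box_sizes=(2, 4, 8, 16, 32)):
    """Singularity spectrum (alpha(q), f(q)) of a square 2-D grayscale image.

    Box measures are the normalized sums of gray levels; alpha and f are the
    slopes of the q-weighted log-measures against log(box size).
    """
    img = image.astype(float)
    img = img / img.sum()                         # global measure normalization
    logs_eps, num_a, num_f = [], [], []
    for eps in box_sizes:
        n = img.shape[0] // eps
        boxes = img[:n * eps, :n * eps].reshape(n, eps, n, eps).sum(axis=(1, 3)).ravel()
        p = boxes[boxes > 0]
        a_row, f_row = [], []
        for q in q_values:
            mu = p**q / np.sum(p**q)              # q-weighted normalized measure
            a_row.append(np.sum(mu * np.log(p)))
            f_row.append(np.sum(mu * np.log(mu)))
        logs_eps.append(np.log(eps))
        num_a.append(a_row)
        num_f.append(f_row)
    logs_eps = np.array(logs_eps)
    alpha = np.polyfit(logs_eps, np.array(num_a), 1)[0]   # slope for each q
    f_alpha = np.polyfit(logs_eps, np.array(num_f), 1)[0]
    return alpha, f_alpha

rng = np.random.default_rng(7)
alpha, f_alpha = falpha_spectrum(rng.random((256, 256)))
print(alpha.min(), alpha.max(), f_alpha.max())
```

    A nearly homogeneous image collapses to a point near (α, f) = (2, 2), whereas heterogeneous pore arrangements widen the spectrum, which is precisely the complexity measure the work exploits without any binarization threshold.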

  2. An image analysis system for near-infrared (NIR) fluorescence lymph imaging

    Science.gov (United States)

    Zhang, Jingdan; Zhou, Shaohua Kevin; Xiang, Xiaoyan; Rasmussen, John C.; Sevick-Muraca, Eva M.

    2011-03-01

    Quantitative analysis of lymphatic function is crucial for understanding the lymphatic system and diagnosing the associated diseases. Recently, a near-infrared (NIR) fluorescence imaging system has been developed for real-time imaging of lymphatic propulsion by intradermal injection of a microdose of a NIR fluorophore distal to the lymphatics of interest. However, the previous analysis software [3, 4] is underdeveloped, requiring extensive time and effort to analyze a NIR image sequence. In this paper, we develop a number of image processing techniques to automate the data analysis workflow, including an object tracking algorithm to stabilize the subject and remove the motion artifacts, an image representation named flow map to characterize lymphatic flow more reliably, and an automatic algorithm to compute lymph velocity and frequency of propulsion. By integrating all these techniques into a system, the analysis workflow significantly reduces the amount of required user interaction and improves the reliability of the measurement.

  3. Introduction to Medical Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Moeslund, Thomas B.

    The aim of the book is to present the fascinating world of medical image analysis in an easy and interesting way. Compared to many standard books on image analysis, the approach we have chosen is less mathematical and more casual. Some of the key algorithms are exemplified in C-code. Please note that the code

  4. Wavefront analysis for plenoptic camera imaging

    International Nuclear Information System (INIS)

    Luan Yin-Sen; Xu Bing; Yang Ping; Tang Guo-Mao

    2017-01-01

    The plenoptic camera is a single lens stereo camera which can retrieve the direction of light rays while detecting their intensity distribution. In this paper, to reveal more about plenoptic camera imaging, we present a wavefront analysis of plenoptic camera imaging from the viewpoint of physical optics rather than the ray-tracing model of geometric optics. Specifically, the wavefront imaging model of a plenoptic camera is analyzed and simulated by scalar diffraction theory and the depth estimation is redescribed based on physical optics. We simulate a set of raw plenoptic images of an object scene, thereby validating the analysis and derivations, and the difference between the imaging analysis methods based on geometric optics and physical optics is also shown in the simulations. (paper)

  5. Multimodality image analysis work station

    International Nuclear Information System (INIS)

    Ratib, O.; Huang, H.K.

    1989-01-01

    The goal of this project is to design and implement a PACS (picture archiving and communication system) workstation for quantitative analysis of multimodality images. The Macintosh II personal computer was selected for its friendly user interface, its popularity among the academic and medical community, and its low cost. The Macintosh operates as a stand alone workstation where images are imported from a central PACS server through a standard Ethernet network and saved on a local magnetic or optical disk. A video digitizer board allows for direct acquisition of images from sonograms or from digitized cine angiograms. The authors have focused their project on the exploration of new means of communicating quantitative data and information through the use of an interactive and symbolic user interface. The software developed includes a variety of image analysis, algorithms for digitized angiograms, sonograms, scintigraphic images, MR images, and CT scans

  6. Rapid Analysis and Exploration of Fluorescence Microscopy Images

    OpenAIRE

    Pavie, Benjamin; Rajaram, Satwik; Ouyang, Austin; Altschuler, Jason; Steininger, Robert J; Wu, Lani; Altschuler, Steven

    2014-01-01

    Despite rapid advances in high-throughput microscopy, quantitative image-based assays still pose significant challenges. While a variety of specialized image analysis tools are available, most traditional image-analysis-based workflows have steep learning curves (for fine tuning of analysis parameters) and result in long turnaround times between imaging and analysis. In particular, cell segmentation, the process of identifying individual cells in an image, is a major bottleneck in this regard.

  7. Short TR imaging with refocusing of the steady-state transverse magnetization

    International Nuclear Information System (INIS)

    Zur, Y.; Stokar, S.; Bendel, P.

    1987-01-01

    Repetitive application of a sequence with repetition time (TR) shorter than T2 results in a steady state in which the transverse magnetization Mt reaches a nonzero value at the end of the sequence. This value depends on the TR and flip angle as well as on the frequency offset ν of each spin isochromat. The authors present a detailed analysis of the time domain and image domain signals for sequences with short TR that employ gradient reversal echoes. Because of the dependence of Mt on ν, two distinct echoes appear in the time domain. With proper adjustment of the view gradients, each echo can be sampled separately. Image intensities derived for spins in a liquid (i.e., T1 ≈ T2) suggest enhanced signal intensity for the cerebrospinal fluid. This was confirmed experimentally
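
    For orientation, the commonly quoted on-resonance expression for the fully refocused steady-state signal (a textbook result, not reproduced from this paper, which treats the full frequency-offset dependence) is evaluated below; the tissue parameters are illustrative round numbers.

```python
import numpy as np

def ssfp_signal(flip_deg, tr_ms, t1_ms, t2_ms, m0=1.0):
    """On-resonance steady-state signal for a fully refocused (balanced) sequence.

    S = M0 * sin(a) * (1 - E1) / (1 - (E1 - E2) * cos(a) - E1 * E2),
    with E1 = exp(-TR/T1), E2 = exp(-TR/T2). Off-resonance spins deviate
    from this value, which is what produces the two distinct echoes.
    """
    a = np.deg2rad(flip_deg)
    e1, e2 = np.exp(-tr_ms / t1_ms), np.exp(-tr_ms / t2_ms)
    return m0 * np.sin(a) * (1 - e1) / (1 - (e1 - e2) * np.cos(a) - e1 * e2)

# CSF-like (long T1 and T2) versus white-matter-like tissue at TR = 10 ms, 50 deg:
print(ssfp_signal(50, 10, t1_ms=4000, t2_ms=2000),
      ssfp_signal(50, 10, t1_ms=800, t2_ms=80))
```

    Because the expression grows with the T2/T1 ratio, fluid-like spins (CSF) come out substantially brighter than parenchyma, consistent with the enhanced CSF intensity noted in the abstract.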

  8. Mesh Processing in Medical Image Analysis

    DEFF Research Database (Denmark)

    The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation....

  9. Solar Image Analysis and Visualization

    CERN Document Server

    Ireland, J

    2009-01-01

    This volume presents a selection of papers on the state of the art of image enhancement, automated feature detection, machine learning, and visualization tools in support of solar physics that focus on the challenges presented by new ground-based and space-based instrumentation. The articles and topics were inspired by the Third Solar Image Processing Workshop, held at Trinity College Dublin, Ireland, but contributions from other experts have been included as well. This book is mainly aimed at researchers and graduate students working on image processing and computer vision in astronomy and solar physics.

  10. Quantitative analysis of receptor imaging

    International Nuclear Information System (INIS)

    Fu Zhanli; Wang Rongfu

    2004-01-01

    Model-based methods for quantitative analysis of receptor imaging, including kinetic, graphical and equilibrium methods, are introduced in detail. Some technical problems facing quantitative analysis of receptor imaging, such as the correction for in vivo metabolism of the tracer and the radioactivity contribution from blood volume within the ROI, and the estimation of the nondisplaceable ligand concentration, are also reviewed briefly
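
    Of the graphical methods mentioned, the Logan plot is the most compact to sketch: after a suitable start time, the slope of the plot of integrated tissue activity over tissue activity against integrated plasma input over tissue activity estimates the total distribution volume. The curves below are simulated from a one-tissue compartment model, so the true V_T is known; the frame times, rate constants and start time t* are illustrative assumptions.

```python
import numpy as np

def logan_vt(t, ct, cp, t_star=30.0):
    """Total distribution volume V_T from a Logan plot.

    t  : frame mid-times (min)
    ct : tissue time-activity curve
    cp : metabolite-corrected plasma input function
    The slope of  int(ct)/ct  versus  int(cp)/ct  for t >= t_star estimates V_T.
    """
    int_ct = np.concatenate([[0], np.cumsum(np.diff(t) * 0.5 * (ct[1:] + ct[:-1]))])
    int_cp = np.concatenate([[0], np.cumsum(np.diff(t) * 0.5 * (cp[1:] + cp[:-1]))])
    late = t >= t_star
    x = int_cp[late] / ct[late]
    y = int_ct[late] / ct[late]
    slope, _ = np.polyfit(x, y, 1)
    return slope

# Synthetic one-tissue-compartment example: dC_T/dt = K1*Cp - k2*C_T,
# so the true V_T is K1/k2 = 2.0.
t = np.linspace(0.1, 90, 300)
cp = 10 * np.exp(-0.1 * t)
K1, k2 = 0.2, 0.1
ct = np.zeros_like(t)
for i in range(1, t.size):
    dt = t[i] - t[i - 1]
    ct[i] = ct[i - 1] + dt * (K1 * cp[i - 1] - k2 * ct[i - 1])
print(round(logan_vt(t, ct, cp), 2))   # should be close to 2.0
```

    The same trade-offs the abstract raises apply here: the plot needs a metabolite-corrected input function, and noise in the late tissue frames biases the estimated slope.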

  11. Dynamic motion analysis of fetuses with central nervous system disorders by cine magnetic resonance imaging using fast imaging employing steady-state acquisition and parallel imaging: a preliminary result.

    Science.gov (United States)

    Guo, Wan-Yuo; Ono, Shigeki; Oi, Shizuo; Shen, Shu-Huei; Wong, Tai-Tong; Chung, Hsiao-Wen; Hung, Jeng-Hsiu

    2006-08-01

    The authors present a novel cine magnetic resonance (MR) imaging, two-dimensional (2D) fast imaging employing steady-state acquisition (FIESTA) technique with parallel imaging. It achieves temporal resolution at less than half a second as well as high spatial resolution cine imaging free of motion artifacts for evaluating the dynamic motion of fetuses in utero. The information obtained is used to predict postnatal outcome. Twenty-five fetuses with anomalies were studied. Ultrasonography demonstrated severe abnormalities in five of the fetuses; the other 20 fetuses constituted a control group. The cine fetal MR imaging demonstrated fetal head, neck, trunk, extremity, and finger as well as swallowing motions. Imaging findings were evaluated and compared in fetuses with major central nervous system (CNS) anomalies in five cases and minor CNS, non-CNS, or no anomalies in 20 cases. Normal motility was observed in the latter group. For fetuses in the former group, those with abnormal motility failed to survive after delivery, whereas those with normal motility survived with functioning preserved. The power deposition of radiofrequency, presented as specific absorption rate (SAR), was calculated. The SAR of FIESTA was approximately 13 times lower than that of conventional MR imaging of fetuses obtained using single-shot fast spin echo sequences. The following conclusions are drawn: 1) Fetal motion is no longer a limitation for prenatal imaging after the implementation of parallel imaging with 2D FIESTA, 2) Cine MR imaging illustrates fetal motion in utero with high clinical reliability, 3) For cases involving major CNS anomalies, cine MR imaging provides information on extremity motility in fetuses and serves as a prognostic indicator of postnatal outcome, and 4) The cine MR used to observe fetal activity is technically 2D and conceptually three-dimensional. It provides four-dimensional information for making proper and timely obstetrical and/or postnatal management

  12. Computer-Assisted Digital Image Analysis of Plus Disease in Retinopathy of Prematurity.

    Science.gov (United States)

    Kemp, Pavlina S; VanderVeen, Deborah K

    2016-01-01

    The objective of this study is to review the current state and role of computer-assisted analysis in the diagnosis of plus disease in retinopathy of prematurity. Diagnosis and documentation of retinopathy of prematurity are increasingly being supplemented by digital imaging. The incorporation of computer-aided techniques has the potential to add valuable information and standardization regarding the presence of plus disease, an important criterion in deciding the necessity of treatment of vision-threatening retinopathy of prematurity. A review of the literature found that several published techniques examine the process and role of computer-aided analysis of plus disease in retinopathy of prematurity. These techniques use semiautomated image analysis to evaluate retinal vascular dilation and tortuosity, using calculated parameters to evaluate the presence or absence of plus disease. These values are then compared with expert consensus. The study concludes that computer-aided image analysis has the potential to use quantitative and objective criteria to act as a supplemental tool in evaluating for plus disease in the setting of retinopathy of prematurity.
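
    The published systems differ in how they quantify vascular tortuosity, but a common building block is the arc-length-to-chord-length ratio of a vessel centerline. The sketch below illustrates only that generic metric, assuming a centerline has already been extracted by some (hypothetical) segmentation step.

      import numpy as np

      def tortuosity_index(points):
          """Arc-length-to-chord-length ratio of a vessel centerline.

          points : (N, 2) array of ordered centerline coordinates extracted
                   from a segmented retinal vessel (assumed input).
          A value near 1.0 indicates a straight vessel; larger values
          indicate increasing tortuosity.
          """
          points = np.asarray(points, dtype=float)
          arc = np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))
          chord = np.linalg.norm(points[-1] - points[0])
          return arc / chord

      # toy example: a gently curving vessel segment
      x = np.linspace(0, 100, 200)
      y = 5 * np.sin(x / 15.0)
      print(f"tortuosity ~ {tortuosity_index(np.column_stack([x, y])):.3f}")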

  13. An automatic analyzer of solid state nuclear track detectors using an optic RAM as image sensor

    International Nuclear Information System (INIS)

    Staderini, E.M.; Castellano, A.

    1986-01-01

    An optic RAM is a conventional digital random access read/write dynamic memory device featuring a quartz-windowed package and memory cells regularly ordered on the chip. Such a device can be used as an image sensor because each cell retains data stored in it for a time depending on the intensity of the light incident on the cell itself. The authors have developed a system which uses an optic RAM to acquire and digitize images from electrochemically etched CR39 solid state nuclear track detectors (SSNTD) at track densities up to 5000 cm⁻². On the digital image so obtained, a microprocessor, with appropriate software, performs image analysis, filtering, track counting and evaluation. (orig.)
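
    The track-counting step described above can be illustrated with a generic thresholding and connected-component labelling scheme. The sketch below is not the authors' microprocessor code; the threshold, the minimum blob size and the toy image are assumptions for illustration only.

      import numpy as np
      from scipy import ndimage

      def count_tracks(image, threshold, min_pixels=4):
          """Count etched-track candidates in a digitized SSNTD image.

          image      : 2D array of pixel intensities (dark tracks on a
                       bright background assumed).
          threshold  : intensity below which a pixel is treated as track
                       material (assumed, e.g. from a calibration image).
          min_pixels : minimum blob size kept, to suppress noise.
          """
          image = np.asarray(image, dtype=float)
          mask = image < threshold

          # label connected dark regions and keep only blobs large enough
          labels, n = ndimage.label(mask)
          sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
          return int(np.sum(sizes >= min_pixels))

      # toy image: bright background with two dark "tracks"
      rng = np.random.default_rng(0)
      img = 200 + 5 * rng.standard_normal((64, 64))
      img[10:13, 10:13] = 50
      img[40:44, 30:33] = 60
      print("tracks counted:", count_tracks(img, threshold=120))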

  14. Information granules in image histogram analysis.

    Science.gov (United States)

    Wieclawek, Wojciech

    2018-04-01

    A concept of granular computing employed in intensity-based image enhancement is discussed. First, a weighted granular computing idea is introduced. Then, the implementation of this term in the image processing area is presented. Finally, multidimensional granular histogram analysis is introduced. The proposed approach is dedicated to digital images, especially to medical images acquired by Computed Tomography (CT). Like the histogram equalization approach, this method is based on image histogram analysis. Unlike histogram equalization, however, it works on a selected range of pixel intensities and is controlled by two parameters. Performance is tested on anonymized clinical CT series. Copyright © 2017 Elsevier Ltd. All rights reserved.
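
    The granular method itself is not reproduced here, but the idea of an enhancement that acts on a selected intensity range and is controlled by two parameters can be illustrated with a simple window/level mapping, as in the sketch below; the parameters center and width are illustrative stand-ins, not the parameters of the published method.

      import numpy as np

      def window_enhance(image, center, width):
          """Contrast enhancement restricted to a selected intensity range.

          Only pixels inside [center - width/2, center + width/2] are
          stretched to the full output range; values outside are clipped.
          """
          lo = center - width / 2.0
          hi = center + width / 2.0
          out = (np.asarray(image, dtype=float) - lo) / (hi - lo)
          return np.clip(out, 0.0, 1.0)

      # toy usage on a synthetic CT-like slice (values in Hounsfield units)
      slice_hu = np.random.default_rng(1).uniform(-1000, 1000, size=(128, 128))
      soft_tissue_view = window_enhance(slice_hu, center=40, width=400)
      print(soft_tissue_view.min(), soft_tissue_view.max())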

  15. Introduction to Medical Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Moeslund, Thomas B.

    This book is a result of a collaboration between DTU Informatics at the Technical University of Denmark and the Laboratory of Computer Vision and Media Technology at Aalborg University. It is partly based on the book "Image and Video Processing", second edition, by Thomas Moeslund. The aim of the book is to present the fascinating world of medical image analysis in an easy and interesting way. Compared to many standard books on image analysis, the approach we have chosen is less mathematical and more casual. Some of the key algorithms are exemplified in C-code. Please note that the code...

  16. Femtoelectron-Based Terahertz Imaging of Hydration State in a Proton Exchange Membrane Fuel Cell

    Science.gov (United States)

    Buaphad, P.; Thamboon, P.; Kangrang, N.; Rhodes, M. W.; Thongbai, C.

    2015-08-01

    Imbalanced water management in a proton exchange membrane (PEM) fuel cell significantly reduces the cell performance and durability. Visualization of water distribution and transport can provide greater comprehension toward optimization of the PEM fuel cell. In this work, we are interested in water flooding issues that occur in the flow channels on the cathode side of the PEM fuel cell. The sample cell was fabricated with the addition of a transparent acrylic window allowing light access, and the process of flooding formation was observed in situ via a CCD camera. We then explore the potential use of terahertz (THz) imaging, consisting of a femtoelectron-based THz source and off-angle reflective-mode imaging, to identify the presence of water in the sample cell. We present simulations of two hydration states (water and non-water areas), which are in agreement with the THz image results. A line-scan plot is utilized for quantitative analysis and for defining the spatial resolution of the image. Implementing metal mesh filtering can improve the spatial resolution of our THz imaging system.

  17. Energy-Looping Nanoparticles: Harnessing Excited-State Absorption for Deep-Tissue Imaging.

    Science.gov (United States)

    Levy, Elizabeth S; Tajon, Cheryl A; Bischof, Thomas S; Iafrati, Jillian; Fernandez-Bravo, Angel; Garfield, David J; Chamanzar, Maysamreza; Maharbiz, Michel M; Sohal, Vikaas S; Schuck, P James; Cohen, Bruce E; Chan, Emory M

    2016-09-27

    Near infrared (NIR) microscopy enables noninvasive imaging in tissue, particularly in the NIR-II spectral range (1000-1400 nm) where attenuation due to tissue scattering and absorption is minimized. Lanthanide-doped upconverting nanocrystals are promising deep-tissue imaging probes due to their photostable emission in the visible and NIR, but these materials are not efficiently excited at NIR-II wavelengths due to the dearth of lanthanide ground-state absorption transitions in this window. Here, we develop a class of lanthanide-doped imaging probes that harness an energy-looping mechanism that facilitates excitation at NIR-II wavelengths, such as 1064 nm, that are resonant with excited-state absorption transitions but not ground-state absorption. Using computational methods and combinatorial screening, we have identified Tm(3+)-doped NaYF4 nanoparticles as efficient looping systems that emit at 800 nm under continuous-wave excitation at 1064 nm. Using this benign excitation with standard confocal microscopy, energy-looping nanoparticles (ELNPs) are imaged in cultured mammalian cells and through brain tissue without autofluorescence. The 1 mm imaging depths and 2 μm feature sizes are comparable to those demonstrated by state-of-the-art multiphoton techniques, illustrating that ELNPs are a promising class of NIR probes for high-fidelity visualization in cells and tissue.

  18. Multispectral analysis of multimodal images

    Energy Technology Data Exchange (ETDEWEB)

    Kvinnsland, Yngve; Brekke, Njaal (Dept. of Surgical Sciences, Univ. of Bergen, Bergen (Norway)); Taxt, Torfinn M.; Gruener, Renate (Dept. of Biomedicine, Univ. of Bergen, Bergen (Norway))

    2009-02-15

    An increasing number of multimodal images represents a valuable increase in available image information but at the same time complicates the extraction of diagnostic information across the images. Multispectral analysis (MSA) has the potential to simplify this problem substantially, as an unlimited number of images can be combined and tissue properties across the images can be extracted automatically. Materials and methods. We have developed a software solution for MSA containing two algorithms for unsupervised classification, an EM-algorithm finding multinormal class descriptions and the k-means clustering algorithm, and two for supervised classification, a Bayesian classifier using multinormal class descriptions and a kNN-algorithm. The software has an efficient user interface for the creation and manipulation of class descriptions, and it has proper tools for displaying the results. Results. The software has been tested on different sets of images. One application is to segment cross-sectional images of brain tissue (T1- and T2-weighted MR images) into its main normal tissues and brain tumors. Another interesting set of images is the perfusion and diffusion maps, which are derived from raw MR images. The software returns segmentations that seem to be sensible. Discussion. The MSA software appears to be a valuable tool for image analysis with multimodal images at hand. It readily gives a segmentation of image volumes that visually seems to be sensible. However, to really learn how to use MSA, it will be necessary to gain more insight into what tissues the different segments contain, and the upcoming work will therefore be focused on examining the tissues through, for example, histological sections.
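
    As an illustration of the unsupervised branch of such a toolbox, the sketch below clusters voxels from co-registered modalities with k-means. It is a generic example built on scikit-learn, not the authors' software, and the synthetic volumes, normalization and class count are assumptions.

      import numpy as np
      from sklearn.cluster import KMeans

      def multispectral_kmeans(volumes, n_classes=4, seed=0):
          """Unsupervised multispectral segmentation of co-registered volumes.

          volumes   : list of arrays with identical shape (e.g. T1- and
                      T2-weighted MR volumes); each voxel becomes a feature
                      vector with one component per modality.
          n_classes : number of tissue classes to extract (assumed).
          Returns a label volume with the same shape as the inputs.
          """
          shape = volumes[0].shape
          features = np.stack([np.asarray(v, float).ravel() for v in volumes], axis=1)

          # normalize each modality so no single image dominates the distance
          features = (features - features.mean(0)) / (features.std(0) + 1e-9)

          labels = KMeans(n_clusters=n_classes, n_init=10, random_state=seed).fit_predict(features)
          return labels.reshape(shape)

      # toy usage with two synthetic "modalities"
      rng = np.random.default_rng(0)
      t1 = rng.normal(size=(32, 32, 8))
      t2 = t1 * 0.5 + rng.normal(size=(32, 32, 8))
      seg = multispectral_kmeans([t1, t2], n_classes=3)
      print(seg.shape, np.unique(seg))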

  19. An Imaging And Graphics Workstation For Image Sequence Analysis

    Science.gov (United States)

    Mostafavi, Hassan

    1990-01-01

    This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of the modern graphic-oriented workstations with the digital image acquisition, processing and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missile, stores and other flying objects in various flight regimes including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) Acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion and freeze frame display combined with digital image sharpening, noise reduction, contrast enhancement and interactive image magnification; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from both live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence data base generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.

  20. Brain-inspired algorithms for retinal image analysis

    NARCIS (Netherlands)

    ter Haar Romeny, B.M.; Bekkers, E.J.; Zhang, J.; Abbasi-Sureshjani, S.; Huang, F.; Duits, R.; Dasht Bozorg, Behdad; Berendschot, T.T.J.M.; Smit-Ockeloen, I.; Eppenhof, K.A.J.; Feng, J.; Hannink, J.; Schouten, J.; Tong, M.; Wu, H.; van Triest, J.W.; Zhu, S.; Chen, D.; He, W.; Xu, L.; Han, P.; Kang, Y.

    2016-01-01

    Retinal image analysis is a challenging problem due to the precise quantification required and the huge numbers of images produced in screening programs. This paper describes a series of innovative brain-inspired algorithms for automated retinal image analysis, recently developed for the RetinaCheck

  1. The cumulative verification image analysis tool for offline evaluation of portal images

    International Nuclear Information System (INIS)

    Wong, John; Yan Di; Michalski, Jeff; Graham, Mary; Halverson, Karen; Harms, William; Purdy, James

    1995-01-01

    Purpose: Daily portal images acquired using electronic portal imaging devices contain important information about the setup variation of the individual patient. The data can be used to evaluate the treatment and to derive corrections for the individual patient. The large volume of images also requires software tools for efficient analysis. This article describes the approach of cumulative verification image analysis (CVIA), specifically designed as an offline tool to extract quantitative information from daily portal images. Methods and Materials: The user interface, image and graphics display, and algorithms of the CVIA tool have been implemented in ANSI C using the X Window graphics standards. The tool consists of three major components: (a) definition of treatment geometry and anatomical information; (b) registration of portal images with a reference image to determine setup variation; and (c) quantitative analysis of all setup variation measurements. The CVIA tool is not automated. User interaction is required and preferred. Successful alignment of anatomies on portal images at present remains mostly dependent on clinical judgment. Predefined templates of block shapes and anatomies are used for image registration to enhance efficiency, taking advantage of the fact that much of the tool's operation is repeated in the analysis of daily portal images. Results: The CVIA tool is portable and has been implemented on workstations with different operating systems. Analysis of 20 sequential daily portal images can be completed in less than 1 h. The temporal information is used to characterize setup variation in terms of its systematic, random and time-dependent components. The cumulative information is used to derive block overlap isofrequency distributions (BOIDs), which quantify the effective coverage of the prescribed treatment area throughout the course of treatment. Finally, a set of software utilities is available to facilitate feedback of the information for
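
    The decomposition of setup variation into systematic and random components can be illustrated with the conventional population statistics based on per-patient means and standard deviations. The sketch below is a generic summary of that kind, not the CVIA implementation, and the variable names and toy data are assumptions.

      import numpy as np

      def setup_error_components(shifts_by_patient):
          """Summarize daily setup variation measured from portal images.

          shifts_by_patient : list of 1D arrays, one per patient, each holding
                              the daily setup shift along one axis (mm), as
                              would be produced by registering daily portal
                              images to the reference image.
          Returns the population mean, the systematic component Sigma
          (spread of per-patient means) and the random component sigma
          (root-mean-square of per-patient standard deviations).
          """
          means = np.array([np.mean(s) for s in shifts_by_patient])
          sds = np.array([np.std(s, ddof=1) for s in shifts_by_patient])
          overall_mean = means.mean()
          systematic = means.std(ddof=1)
          random_sigma = np.sqrt(np.mean(sds ** 2))
          return overall_mean, systematic, random_sigma

      # toy usage: three patients with 20 daily measurements each
      rng = np.random.default_rng(2)
      patients = [rng.normal(loc=m, scale=2.0, size=20) for m in (-1.0, 0.5, 2.0)]
      print(setup_error_components(patients))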

  2. Enhanced 2D-image upconversion using solid-state lasers

    DEFF Research Database (Denmark)

    Pedersen, Christian; Karamehmedovic, Emir; Dam, Jeppe Seidelin

    2009-01-01

    the image inside a nonlinear PPKTP crystal located in the high intra-cavity field of a 1342 nm solid-state Nd:YVO4 laser, an upconverted image at 488 nm is generated. We have experimentally achieved an upconversion efficiency of 40% under CW conditions. The proposed technique can be further adapted for high...

  3. Operation States Analysis of the Series-Parallel resonant Converter Working Above Resonance Frequency

    Directory of Open Access Journals (Sweden)

    Peter Dzurko

    2007-01-01

    Full Text Available An operation states analysis of a series-parallel converter working above the resonance frequency is described in the paper. Principal equations are derived for the individual operation states, and diagrams are constructed on the basis of them. The diagrams give a comprehensive picture of the converter behaviour for individual circuit parameters. The waveforms may be utilised when designing the individual parts of the inverter.

  4. Personalizing Medicine Through Hybrid Imaging and Medical Big Data Analysis

    Directory of Open Access Journals (Sweden)

    Laszlo Papp

    2018-06-01

    Full Text Available Medical imaging has evolved from a pure visualization tool to representing a primary source of analytic approaches toward in vivo disease characterization. Hybrid imaging is an integral part of this approach, as it provides complementary visual and quantitative information in the form of morphological and functional insights into the living body. As such, non-invasive imaging modalities no longer provide images only, but data, as stated recently by pioneers in the field. Today, such information, together with other, non-imaging medical data creates highly heterogeneous data sets that underpin the concept of medical big data. While the exponential growth of medical big data challenges their processing, they inherently contain information that benefits a patient-centric personalized healthcare. Novel machine learning approaches combined with high-performance distributed cloud computing technologies help explore medical big data. Such exploration and subsequent generation of knowledge require a profound understanding of the technical challenges. These challenges increase in complexity when employing hybrid, aka dual- or even multi-modality image data as input to big data repositories. This paper provides a general insight into medical big data analysis in light of the use of hybrid imaging information. First, hybrid imaging is introduced (see further contributions to this special Research Topic, also in the context of medical big data, then the technological background of machine learning as well as state-of-the-art distributed cloud computing technologies are presented, followed by the discussion of data preservation and data sharing trends. Joint data exploration endeavors in the context of in vivo radiomics and hybrid imaging will be presented. Standardization challenges of imaging protocol, delineation, feature engineering, and machine learning evaluation will be detailed. Last, the paper will provide an outlook into the future role of hybrid

  5. Image sequence analysis workstation for multipoint motion analysis

    Science.gov (United States)

    Mostafavi, Hassan

    1990-08-01

    This paper describes an application-specific engineering workstation designed and developed to analyze the motion of objects from video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing and display techniques. In addition to automation and increased throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters are used for location and tracking of more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, aircraft in flight, etc. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie loop playback, freeze frame display, and digital image enhancement; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from either live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement data base management; and 7) offline analysis software for trajectory plotting and statistical analysis.

  6. Image acquisitions, processing and analysis in the process of obtaining characteristics of horse navicular bone

    Science.gov (United States)

    Zaborowicz, M.; Włodarek, J.; Przybylak, A.; Przybył, K.; Wojcieszak, D.; Czekała, W.; Ludwiczak, A.; Boniecki, P.; Koszela, K.; Przybył, J.; Skwarcz, J.

    2015-07-01

    The aim of this study was to investigate the possibility of using methods of computer image analysis for the assessment and classification of morphological variability and the state of health of the horse navicular bone. The assumption was that the classification could be based on information contained in two-dimensional digital images of the navicular bone together with information on horse health. The first step in the research was to define the classes of analyzed bones, and then to use methods of computer image analysis to obtain characteristics from these images. These characteristics were correlated with data concerning the animal, such as: side of hooves, number of navicular syndrome (scale 0-3), type, sex, age, weight, information about lace, information about heel. This paper gives an introduction to the study of the use of neural image analysis in the diagnosis of navicular bone syndrome. The prepared method can provide an introduction to the study of a non-invasive way to assess the condition of the horse navicular bone.

  7. The design and imaging characteristics of dynamic, solid-state, flat-panel x-ray image detectors for digital fluoroscopy and fluorography

    International Nuclear Information System (INIS)

    Cowen, A.R.; Davies, A.G.; Sivananthan, M.U.

    2008-01-01

    Dynamic, flat-panel, solid-state, x-ray image detectors for use in digital fluoroscopy and fluorography emerged at the turn of the millennium. This new generation of dynamic detectors utilize a thin layer of x-ray absorptive material superimposed upon an electronic active matrix array fabricated in a film of hydrogenated amorphous silicon (a-Si:H). Dynamic solid-state detectors come in two basic designs, the indirect-conversion (x-ray scintillator based) and the direct-conversion (x-ray photoconductor based). This review explains the underlying principles and enabling technologies associated with these detector designs, and evaluates their physical imaging characteristics, comparing their performance against the long established x-ray image intensifier television (TV) system. Solid-state detectors afford a number of physical imaging benefits compared with the latter. These include zero geometrical distortion and vignetting, immunity from blooming at exposure highlights and negligible contrast loss (due to internal scatter). They also exhibit a wider dynamic range and maintain higher spatial resolution when imaging over larger fields of view. The detective quantum efficiency of indirect-conversion, dynamic, solid-state detectors is superior to that of both x-ray image intensifier TV systems and direct-conversion detectors. Dynamic solid-state detectors are playing a burgeoning role in fluoroscopy-guided diagnosis and intervention, leading to the displacement of x-ray image intensifier TV-based systems. Future trends in dynamic, solid-state, digital fluoroscopy detectors are also briefly considered. These include the growth in associated three-dimensional (3D) visualization techniques and potential improvements in dynamic detector design

  8. Complex network analysis of resting-state fMRI of the brain.

    Science.gov (United States)

    Anwar, Abdul Rauf; Hashmy, Muhammad Yousaf; Imran, Bilal; Riaz, Muhammad Hussnain; Mehdi, Sabtain Muhammad Muntazir; Muthalib, Makii; Perrey, Stephane; Deuschl, Gunther; Groppa, Sergiu; Muthuraman, Muthuraman

    2016-08-01

    Because brain activity hardly ever diminishes in healthy individuals, analysis of the resting state functionality of the brain is pertinent. Various resting state networks are active inside the idle brain at any time. Neuroimaging studies have shown that structurally distant regions of the brain can be functionally connected, and regions of the brain that are functionally connected during rest constitute a resting state network. In the present study, we employed complex network measures to estimate the presence of community structures within a network; such an estimate is termed modularity. Instead of using a traditional correlation matrix, we used a coherence matrix derived from the causality measure between different nodes. Our results show that in prolonged resting state the modularity starts to decrease. This decrease was observed in all the resting state networks and on both sides of the brain. Our study highlights the use of a coherence matrix instead of a correlation matrix for complex network analysis.
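
    Modularity can be computed in several ways; the sketch below evaluates the standard Newman modularity directly on a weighted (e.g. coherence) matrix for a given community assignment. It is a generic illustration rather than the authors' pipeline, and the toy matrix and partition are assumptions.

      import numpy as np

      def modularity(weights, communities):
          """Newman modularity of a weighted, undirected connectivity matrix.

          weights     : symmetric (N, N) matrix, e.g. pairwise coherence
                        between network node time courses (diagonal ignored).
          communities : length-N array of community labels for each node.
          """
          A = np.asarray(weights, dtype=float).copy()
          np.fill_diagonal(A, 0.0)
          k = A.sum(axis=1)             # node strengths
          two_m = A.sum()               # total weight (counted twice)
          expected = np.outer(k, k) / two_m
          same = np.equal.outer(communities, communities)
          return ((A - expected) * same).sum() / two_m

      # toy usage: two weakly coupled blocks of nodes
      rng = np.random.default_rng(3)
      C = rng.uniform(0.0, 0.2, size=(10, 10))
      C[:5, :5] += 0.6
      C[5:, 5:] += 0.6
      C = (C + C.T) / 2.0
      labels = np.array([0] * 5 + [1] * 5)
      print(f"Q = {modularity(C, labels):.3f}")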

  9. Image analysis enhancement and interpretation

    International Nuclear Information System (INIS)

    Glauert, A.M.

    1978-01-01

    The necessary practical and mathematical background is provided for the analysis of an electron microscope image in order to extract the maximum amount of structural information. Instrumental methods of image enhancement are described, including the use of the energy-selecting electron microscope and the scanning transmission electron microscope. The problems of image interpretation are considered with particular reference to the limitations imposed by radiation damage and specimen thickness. A brief survey is given of the methods for producing a three-dimensional structure from a series of two-dimensional projections, although the emphasis is on the analysis, processing and interpretation of the two-dimensional projection of a structure. (Auth.)

  10. Data Analysis Strategies in Medical Imaging.

    Science.gov (United States)

    Parmar, Chintan; Barry, Joseph D; Hosny, Ahmed; Quackenbush, John; Aerts, Hugo Jwl

    2018-03-26

    Radiographic imaging continues to be one of the most effective and clinically useful tools within oncology. Sophistication of artificial intelligence (AI) has allowed for detailed quantification of radiographic characteristics of tissues using predefined engineered algorithms or deep learning methods. Precedents in radiology as well as a wealth of research studies hint at the clinical relevance of these characteristics. However, there are critical challenges associated with the analysis of medical imaging data. While some of these challenges are specific to the imaging field, many others like reproducibility and batch effects are generic and have already been addressed in other quantitative fields such as genomics. Here, we identify these pitfalls and provide recommendations for analysis strategies of medical imaging data including data normalization, development of robust models, and rigorous statistical analyses. Adhering to these recommendations will not only improve analysis quality, but will also enhance precision medicine by allowing better integration of imaging data with other biomedical data sources. Copyright ©2018, American Association for Cancer Research.

  11. The ImageJ ecosystem: An open platform for biomedical image analysis.

    Science.gov (United States)

    Schindelin, Johannes; Rueden, Curtis T; Hiner, Mark C; Eliceiri, Kevin W

    2015-01-01

    Technology in microscopy advances rapidly, enabling increasingly affordable, faster, and more precise quantitative biomedical imaging, which necessitates correspondingly more-advanced image processing and analysis techniques. A wide range of software is available-from commercial to academic, special-purpose to Swiss army knife, small to large-but a key characteristic of software that is suitable for scientific inquiry is its accessibility. Open-source software is ideal for scientific endeavors because it can be freely inspected, modified, and redistributed; in particular, the open-software platform ImageJ has had a huge impact on the life sciences, and continues to do so. From its inception, ImageJ has grown significantly due largely to being freely available and its vibrant and helpful user community. Scientists as diverse as interested hobbyists, technical assistants, students, scientific staff, and advanced biology researchers use ImageJ on a daily basis, and exchange knowledge via its dedicated mailing list. Uses of ImageJ range from data visualization and teaching to advanced image processing and statistical analysis. The software's extensibility continues to attract biologists at all career stages as well as computer scientists who wish to effectively implement specific image-processing algorithms. In this review, we use the ImageJ project as a case study of how open-source software fosters its suites of software tools, making multitudes of image-analysis technology easily accessible to the scientific community. We specifically explore what makes ImageJ so popular, how it impacts the life sciences, how it inspires other projects, and how it is self-influenced by coevolving projects within the ImageJ ecosystem. © 2015 Wiley Periodicals, Inc.

  12. Optimization of shearography image quality analysis

    International Nuclear Information System (INIS)

    Rafhayudi Jamro

    2005-01-01

    Shearography is an optical technique based on speckle patterns to measure the deformation of the object surface, in which the fringe pattern is obtained through correlation analysis of the speckle patterns. Analysis of the fringe pattern for engineering applications is limited to qualitative measurement; therefore, further analysis leading to qualitative data involves a series of image processing steps. In this paper, the fringe pattern for qualitative analysis is discussed. The principal field of application is qualitative non-destructive testing, such as detecting discontinuities and defects in the material structure, locating fatigue zones, etc., all of which require image processing. In order to perform image optimisation successfully, the noise in the fringe pattern must be minimised and the fringe pattern itself must be maximised. This can be achieved by applying a filtering method with a kernel size ranging from 2 x 2 to 7 x 7 pixels and also applying an equalizer in the image processing. (Author)
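
    The filtering step can be illustrated with a median filter, one common choice for speckle suppression, applied with a kernel in the 2 x 2 to 7 x 7 range mentioned above. The sketch below is an illustration under those assumptions, not necessarily the filter actually used in the work.

      import numpy as np
      from scipy.ndimage import median_filter

      def denoise_fringes(fringe_map, kernel=5):
          """Suppress speckle noise in a shearography correlation-fringe map.

          fringe_map : 2D array of fringe intensities.
          kernel     : square window size; values between 2 and 7 correspond
                       to the kernel range discussed above.
          """
          return median_filter(np.asarray(fringe_map, dtype=float), size=kernel)

      # toy usage: noisy synthetic fringes
      y, x = np.mgrid[0:256, 0:256]
      fringes = 0.5 + 0.5 * np.cos(x / 10.0)
      noisy = fringes + np.random.default_rng(4).normal(0, 0.3, fringes.shape)
      clean = denoise_fringes(noisy, kernel=5)
      # filtering should reduce the deviation from the ideal fringe pattern
      print(float(np.abs(clean - fringes).mean()) < float(np.abs(noisy - fringes).mean()))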

  13. Dynamic Chest Image Analysis: Model-Based Perfusion Analysis in Dynamic Pulmonary Imaging

    Directory of Open Access Journals (Sweden)

    Kiuru Aaro

    2003-01-01

    Full Text Available The "Dynamic Chest Image Analysis" project aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the dynamic pulmonary imaging technique. We have proposed and evaluated a multiresolutional method with an explicit ventilation model for ventilation analysis. This paper presents a new model-based method for pulmonary perfusion analysis. According to perfusion properties, we first devise a novel mathematical function to form a perfusion model. A simple yet accurate approach is further introduced to extract cardiac systolic and diastolic phases from the heart, so that this cardiac information may be utilized to accelerate the perfusion analysis and improve its sensitivity in detecting pulmonary perfusion abnormalities. This makes perfusion analysis not only fast but also robust in computation; consequently, perfusion analysis becomes computationally feasible without using contrast media. Our clinical case studies with 52 patients show that this technique is effective for pulmonary embolism even without using contrast media, demonstrating consistent correlations with computed tomography (CT) and nuclear medicine (NM) studies. This fluoroscopic examination takes only about 2 seconds for a perfusion study, with a low radiation dose to the patient, involving no preparation, no radioactive isotopes, and no contrast media.

  14. Pocket Size Solid State FLASH and iPOD Drives for gigabyte storage, display and transfer of digital medical images: an introduction

    International Nuclear Information System (INIS)

    Sankaran, A.

    2008-01-01

    The transition of radiological imaging from analog to digital was closely followed by the development of the Picture Archiving and Communication System (PACS). Concomitantly, multidimensional imaging (4D and 5D, for motion and functional studies on 3D images) has presented new challenges, particularly in handling gigabyte-size images from CT, MRI and PET scanners, which generate thousands of images. The storage and analysis of these images necessitate expensive image workstations. This paper highlights the recent innovations in mass storage, display and transfer of images using miniature/pocket-size solid state FLASH and iPOD drives.

  15. Dynamic functional connectivity analysis reveals transient states of dysconnectivity in schizophrenia

    Directory of Open Access Journals (Sweden)

    E. Damaraju

    2014-01-01

    Full Text Available Schizophrenia is a psychotic disorder characterized by functional dysconnectivity or abnormal integration between distant brain regions. Recent functional imaging studies have implicated large-scale thalamo-cortical connectivity as being disrupted in patients. However, observed connectivity differences in schizophrenia have been inconsistent between studies, with reports of hyperconnectivity and hypoconnectivity between the same brain regions. Using resting state eyes-closed functional imaging and independent component analysis on multi-site data that included 151 schizophrenia patients and 163 age- and gender-matched healthy controls, we decomposed the functional brain data into 100 components and identified 47 as functionally relevant intrinsic connectivity networks. We subsequently evaluated group differences in functional network connectivity, both in a static sense, computed as the pairwise Pearson correlations between the full network time courses (5.4 minutes in length), and in a dynamic sense, computed using sliding windows (44 s in length) and k-means clustering to characterize five discrete functional connectivity states. Static connectivity analysis revealed that compared to healthy controls, patients show significantly stronger connectivity, i.e., hyperconnectivity, between the thalamus and sensory networks (auditory, motor and visual), as well as reduced connectivity (hypoconnectivity) between sensory networks from all modalities. Dynamic analysis suggests that (1) on average, schizophrenia patients spend much less time than healthy controls in states typified by strong, large-scale connectivity, and (2) abnormal connectivity patterns are more pronounced during these connectivity states. In particular, states exhibiting cortical–subcortical antagonism (anti-correlations) and strong positive connectivity between sensory networks are those that show the group differences of thalamic hyperconnectivity and sensory hypoconnectivity
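
    The dynamic part of such an analysis, sliding-window connectivity followed by k-means clustering into discrete states, can be sketched generically as below. Window and step are given in samples here (the study above specifies the window in seconds), and the toy time courses and state count are assumptions.

      import numpy as np
      from sklearn.cluster import KMeans

      def dynamic_fnc_states(timecourses, window=44, step=1, n_states=5, seed=0):
          """Sliding-window functional network connectivity and k-means states.

          timecourses : (T, N) array of network time courses (T time points,
                        N components).
          Returns the per-window state labels and the state centroids.
          """
          T, N = timecourses.shape
          iu = np.triu_indices(N, k=1)

          windows = []
          for start in range(0, T - window + 1, step):
              seg = timecourses[start:start + window]
              fc = np.corrcoef(seg, rowvar=False)      # (N, N) correlation matrix
              windows.append(fc[iu])                   # upper triangle as features
          windows = np.array(windows)

          km = KMeans(n_clusters=n_states, n_init=10, random_state=seed).fit(windows)
          return km.labels_, km.cluster_centers_

      # toy usage: 300 time points, 10 components
      rng = np.random.default_rng(5)
      tc = rng.standard_normal((300, 10))
      labels, centers = dynamic_fnc_states(tc, window=44, step=2, n_states=3)
      print(labels.shape, centers.shape)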

  16. Some developments in multivariate image analysis

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey

    Multivariate image analysis (MIA), one of the successful chemometric applications, is now used widely in different areas of science and industry. Introduced in the late 80s, it became very popular with hyperspectral imaging, where MIA is one of the most efficient tools for exploratory analysis and where the number of pixels to be analyzed can be up to several million. The main MIA tool for exploratory analysis is the score density plot – all pixels are projected into principal component space and the corresponding score plots are colorized according to their density (how many pixels are crowded in the unit area of the plot). Looking for and analyzing patterns on these plots and the original image allows interactive analysis, extraction of hidden information, building a supervised classification model, and much more. In the present work several alternative methods to original principal component analysis (PCA) for building the projection...
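
    A minimal sketch of the score density plot described above, assuming a (rows, cols, channels) image cube and plain SVD-based PCA, is given below; it is illustrative only and does not reproduce the alternative projection methods the work investigates.

      import numpy as np

      def score_density(image_cube, pc=(0, 1), bins=256):
          """Score density plot for multivariate image analysis.

          image_cube : (rows, cols, channels) multichannel or hyperspectral
                       image; every pixel is treated as an observation.
          pc         : pair of principal components to plot against each other.
          Returns the 2D histogram of pixel scores (the "density" image) and
          the score arrays themselves, which can be mapped back to pixels.
          """
          r, c, ch = image_cube.shape
          X = image_cube.reshape(-1, ch).astype(float)
          X -= X.mean(axis=0)                      # mean-center each channel

          # PCA via SVD; rows of Vt are the loadings
          _, _, Vt = np.linalg.svd(X, full_matrices=False)
          scores = X @ Vt.T[:, list(pc)]

          density, xedges, yedges = np.histogram2d(scores[:, 0], scores[:, 1], bins=bins)
          return density, scores

      # toy usage: a small synthetic 3-channel image
      cube = np.random.default_rng(6).normal(size=(64, 64, 3))
      density, scores = score_density(cube)
      print(density.shape, scores.shape)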

  17. Rapid analysis and exploration of fluorescence microscopy images.

    Science.gov (United States)

    Pavie, Benjamin; Rajaram, Satwik; Ouyang, Austin; Altschuler, Jason M; Steininger, Robert J; Wu, Lani F; Altschuler, Steven J

    2014-03-19

    Despite rapid advances in high-throughput microscopy, quantitative image-based assays still pose significant challenges. While a variety of specialized image analysis tools are available, most traditional image-analysis-based workflows have steep learning curves (for fine tuning of analysis parameters) and result in long turnaround times between imaging and analysis. In particular, cell segmentation, the process of identifying individual cells in an image, is a major bottleneck in this regard. Here we present an alternate, cell-segmentation-free workflow based on PhenoRipper, an open-source software platform designed for the rapid analysis and exploration of microscopy images. The pipeline presented here is optimized for immunofluorescence microscopy images of cell cultures and requires minimal user intervention. Within half an hour, PhenoRipper can analyze data from a typical 96-well experiment and generate image profiles. Users can then visually explore their data, perform quality control on their experiment, ensure response to perturbations and check reproducibility of replicates. This facilitates a rapid feedback cycle between analysis and experiment, which is crucial during assay optimization. This protocol is useful not just as a first pass analysis for quality control, but also may be used as an end-to-end solution, especially for screening. The workflow described here scales to large data sets such as those generated by high-throughput screens, and has been shown to group experimental conditions by phenotype accurately over a wide range of biological systems. The PhenoBrowser interface provides an intuitive framework to explore the phenotypic space and relate image properties to biological annotations. Taken together, the protocol described here will lower the barriers to adopting quantitative analysis of image based screens.

  18. UV imaging in pharmaceutical analysis

    DEFF Research Database (Denmark)

    Østergaard, Jesper

    2018-01-01

    UV imaging provides spatially and temporally resolved absorbance measurements, which are highly useful in pharmaceutical analysis. Commercial UV imaging instrumentation was originally developed as a detector for separation sciences, but the main use is in the area of in vitro dissolution...

  19. Image Analysis of Eccentric Photorefraction

    Directory of Open Access Journals (Sweden)

    J. Dušek

    2004-01-01

    Full Text Available This article deals with image and data analysis of recorded video-sequences of strabismic infants. It describes a unique noninvasive measuring system for infants based on two measuring methods (position of the first Purkinje image with relation to the centre of the lens, and eccentric photorefraction). The whole process is divided into three steps. The aim of the first step is to obtain video sequences on our special system (Eye Movement Analyser). Image analysis of the recorded sequences is performed in order to obtain curves of basic eye reactions (accommodation and convergence). The last step is to calibrate these curves to corresponding units (diopters and degrees of movement).

  20. Knowledge-based image analysis: some aspects on the analysis of images using other types of information

    Energy Technology Data Exchange (ETDEWEB)

    Eklundh, J O

    1982-01-01

    The computer vision approach to image analysis is discussed from two aspects. First, this approach is contrasted with the pattern recognition approach. Second, it is discussed how external knowledge, information and models from other fields of science and engineering can be used for image and scene analysis. In particular, the connections between computer vision and computer graphics are pointed out.

  1. Psychological factors of the image of the state in the students’ perception

    Directory of Open Access Journals (Sweden)

    Ruslana Karkovska

    2014-09-01

    Full Text Available The article is dedicated to the study of the psychological determinants of the state's image in students' perception. The results of empirical research on the influence of students' adaptation and orientation on their vision of the state are described in the article. The image of the state in the perception of student youth was analyzed according to the assessment of the state's accomplishments, the feeling of belonging to the state, the perceived sense of the state's existence, and the assessment of state power as competent, effective, credible, and strong.

  2. Malware analysis using visualized image matrices.

    Science.gov (United States)

    Han, KyoungSoo; Kang, BooJoong; Im, Eul Gyu

    2014-01-01

    This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. Particularly, our proposed methods are available for packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples and the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically with accuracy of 0.9896 and 0.9732, respectively.
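
    The exact pixel-generation and similarity rules of the published method are not reproduced here; the sketch below only illustrates the general idea of mapping opcode pairs into an RGB image matrix and comparing matrices, with all mapping choices (hashing, matrix size, cosine similarity) being assumptions for illustration.

      import hashlib
      import numpy as np

      def opcode_image_matrix(opcodes, size=64):
          """Build an RGB image matrix from an opcode sequence.

          Consecutive opcode pairs are hashed to a matrix cell and an RGB
          value, which is one straightforward way of turning opcode
          sequences into comparable images.
          """
          img = np.zeros((size, size, 3), dtype=np.uint32)
          for a, b in zip(opcodes, opcodes[1:]):
              digest = hashlib.md5(f"{a}:{b}".encode()).digest()
              x, y = digest[0] % size, digest[1] % size
              rgb = np.frombuffer(digest[2:5], dtype=np.uint8)
              img[x, y] += rgb                      # accumulate repeated pairs
          return np.clip(img, 0, 255).astype(np.uint8)

      def image_similarity(img_a, img_b):
          """Cosine similarity between two flattened image matrices."""
          a, b = img_a.ravel().astype(float), img_b.ravel().astype(float)
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

      # toy usage with two short, partially overlapping opcode traces
      trace1 = ["push", "mov", "call", "add", "jmp", "mov", "call"]
      trace2 = ["push", "mov", "call", "sub", "jmp", "mov", "call"]
      m1, m2 = opcode_image_matrix(trace1), opcode_image_matrix(trace2)
      print(f"similarity ~ {image_similarity(m1, m2):.3f}")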

  3. Malware Analysis Using Visualized Image Matrices

    Directory of Open Access Journals (Sweden)

    KyoungSoo Han

    2014-01-01

    Full Text Available This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. Particularly, our proposed methods are available for packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples and the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically with accuracy of 0.9896 and 0.9732, respectively.

  4. Quantifying biodiversity using digital cameras and automated image analysis.

    Science.gov (United States)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time-consuming aspect - the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and the amount of information being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600m. Rainfall is high, and in most areas the soil consists of deep peat (1m to 3m), populated by a mix of heather, mosses and sedges. The cameras have been continuously in operation over a 6-month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial-intelligence-based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets in the collected data. By converting digital image data into statistical composite data, it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered ways of simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions, and

  5. Rapid, low-cost, image analysis through video processing

    International Nuclear Information System (INIS)

    Levinson, R.A.; Marrs, R.W.; Grantham, D.G.

    1976-01-01

    Remote Sensing now provides the data necessary to solve many resource problems. However, many of the complex image processing and analysis functions used in analysis of remotely-sensed data are accomplished using sophisticated image analysis equipment. High cost of this equipment places many of these techniques beyond the means of most users. A new, more economical, video system capable of performing complex image analysis has now been developed. This report describes the functions, components, and operation of that system. Processing capability of the new video image analysis system includes many of the tasks previously accomplished with optical projectors and digital computers. Video capabilities include: color separation, color addition/subtraction, contrast stretch, dark level adjustment, density analysis, edge enhancement, scale matching, image mixing (addition and subtraction), image ratioing, and construction of false-color composite images. Rapid input of non-digital image data, instantaneous processing and display, relatively low initial cost, and low operating cost gives the video system a competitive advantage over digital equipment. Complex pre-processing, pattern recognition, and statistical analyses must still be handled through digital computer systems. The video system at the University of Wyoming has undergone extensive testing, comparison to other systems, and has been used successfully in practical applications ranging from analysis of x-rays and thin sections to production of color composite ratios of multispectral imagery. Potential applications are discussed including uranium exploration, petroleum exploration, tectonic studies, geologic mapping, hydrology sedimentology and petrography, anthropology, and studies on vegetation and wildlife habitat

  6. Variability in 18F-FDG PET/CT methodology of acquisition, reconstruction and analysis for oncologic imaging: state survey

    International Nuclear Information System (INIS)

    Fischer, Andreia C.F. da S.; Druzian, Aline C.; Bacelar, Alexandre; Pianta, Diego B.; Silva, Ana M. Marques da

    2016-01-01

    The SUV in 18F-FDG PET/CT oncological imaging is useful for cancer diagnosis, staging and treatment assessment. There are, however, several factors that can give rise to bias in SUV measurements. When using SUV as a diagnostic tool, one needs to minimize the variability in this measurement by standardization of patient preparation, acquisition and reconstruction parameters. The aim of this study is to evaluate the methodological variability in PET/CT acquisition in Rio Grande do Sul State. For that, in each department, a questionnaire was applied to survey technical information about the PET/CT systems and the acquisition and analysis methods utilized. All departments implement quality assurance programs consistent with (inter)national recommendations. However, the acquisition and reconstruction methods for the PET data differ. The implementation of a harmonized strategy for quantifying the SUV is suggested, in order to obtain greater reproducibility and repeatability. (author)
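
    For reference, the conventional body-weight-normalized SUV is the decay-corrected activity concentration divided by the injected dose per gram of body weight. The sketch below shows that standard formula, assuming the 18F half-life of 109.77 min and the usual 1 g/mL tissue density; protocol choices such as uptake time and reconstruction settings are exactly where the surveyed variability enters.

      def suv_body_weight(activity_bq_per_ml, injected_dose_bq, body_weight_kg,
                          uptake_time_min, half_life_min=109.77):
          """Body-weight-normalized SUV with decay correction to scan time.

          activity_bq_per_ml : measured activity concentration in the ROI
          injected_dose_bq   : injected 18F-FDG activity at injection time
          body_weight_kg     : patient weight
          uptake_time_min    : time between injection and acquisition
          """
          # decay-correct the injected dose to the acquisition time
          decayed_dose = injected_dose_bq * 0.5 ** (uptake_time_min / half_life_min)
          weight_g = body_weight_kg * 1000.0    # assumes 1 g/mL tissue density
          return activity_bq_per_ml / (decayed_dose / weight_g)

      # toy usage: 5 kBq/mL lesion, 370 MBq injected, 70 kg patient, 60 min uptake
      print(f"SUVbw ~ {suv_body_weight(5_000, 370e6, 70, 60):.2f}")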

  7. State Variation in Medical Imaging: Despite Great Variation, the Medicare Spending Decline Continues.

    Science.gov (United States)

    Rosenkrantz, Andrew B; Hughes, Danny R; Duszak, Richard

    2015-10-01

    The purpose of this study was to assess state-level trends in per beneficiary Medicare spending on medical imaging. Medicare part B 5% research identifiable files from 2004 through 2012 were used to compute national and state-by-state annual average per beneficiary spending on imaging. State-to-state geographic variation and temporal trends were analyzed. National average per beneficiary Medicare part B spending on imaging increased 7.8% annually between 2004 ($350.54) and its peak in 2006 ($405.41), then decreased 4.4% annually between 2006 and 2012 ($298.63). In 2012, annual per beneficiary spending was highest in Florida ($367.25) and New York ($355.67) and lowest in Ohio ($67.08) and Vermont ($72.78). Maximum state-to-state geographic variation increased over time, with the ratio of highest-spending state to lowest-spending state increasing from 4.0 in 2004 to 5.5 in 2012. Spending in nearly all states decreased since peaks in 2005 (six states) or 2006 (43 states). The average annual decrease among states was 5.1% ± 1.8% (range, 1.2-12.2%). The largest decrease was in Ohio. In only two states did per beneficiary spending increase (Maryland, 12.5% average annual increase since 2005; Oregon, 4.8% average annual increase since 2008). Medicare part B average per beneficiary spending on medical imaging declined in nearly every state since the 2005 and 2006 peaks, abruptly reversing previously reported trends. Spending continued to increase, however, in Maryland and Oregon. Identification of state-level variation may facilitate future investigation of the potential effect of specific and regional changes in spending on patient access and outcomes.

  8. Automated image analysis of atomic force microscopy images of rotavirus particles

    International Nuclear Information System (INIS)

    Venkataraman, S.; Allison, D.P.; Qi, H.; Morrell-Falvey, J.L.; Kallewaard, N.L.; Crowe, J.E.; Doktycz, M.J.

    2006-01-01

    A variety of biological samples can be imaged by the atomic force microscope (AFM) under environments that range from vacuum to ambient to liquid. Generally imaging is pursued to evaluate structural features of the sample or perhaps identify some structural changes in the sample that are induced by the investigator. In many cases, AFM images of sample features and induced structural changes are interpreted in general qualitative terms such as markedly smaller or larger, rougher, highly irregular, or smooth. Various manual tools can be used to analyze images and extract more quantitative data, but this is usually a cumbersome process. To facilitate quantitative AFM imaging, automated image analysis routines are being developed. Viral particles imaged in water were used as a test case to develop an algorithm that automatically extracts average dimensional information from a large set of individual particles. The extracted information allows statistical analyses of the dimensional characteristics of the particles and facilitates interpretation related to the binding of the particles to the surface. This algorithm is being extended for analysis of other biological samples and physical objects that are imaged by AFM
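
    An automated extraction of per-particle dimensions of the kind described can be sketched generically with thresholding, labelling and simple size measures. The routine below is illustrative only (threshold, pixel size and toy image are assumptions), not the authors' algorithm.

      import numpy as np
      from scipy import ndimage

      def particle_dimensions(height_map, height_threshold, pixel_size_nm=1.0):
          """Extract per-particle size statistics from an AFM height image.

          height_map       : 2D array of tip heights (nm assumed)
          height_threshold : height above the substrate used to mask particles
          pixel_size_nm    : lateral size of one pixel
          Returns a list of (max_height_nm, equivalent_diameter_nm) tuples,
          one per detected particle, from which population statistics such
          as mean and standard deviation can be computed.
          """
          heights = np.asarray(height_map, dtype=float)
          mask = heights > height_threshold
          labels, n = ndimage.label(mask)

          results = []
          for i in range(1, n + 1):
              blob = labels == i
              area_nm2 = blob.sum() * pixel_size_nm ** 2
              eq_diameter = 2.0 * np.sqrt(area_nm2 / np.pi)
              results.append((float(heights[blob].max()), float(eq_diameter)))
          return results

      # toy usage: flat substrate with two raised "particles"
      img = np.zeros((100, 100))
      img[20:30, 20:30] = 60.0
      img[60:75, 50:65] = 75.0
      for h, d in particle_dimensions(img, height_threshold=10, pixel_size_nm=2.0):
          print(f"height {h:.0f} nm, equivalent diameter {d:.1f} nm")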

  9. Automated image analysis of atomic force microscopy images of rotavirus particles

    Energy Technology Data Exchange (ETDEWEB)

    Venkataraman, S. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Allison, D.P. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Department of Biochemistry, Cellular, and Molecular Biology, University of Tennessee, Knoxville, TN 37996 (United States); Molecular Imaging Inc. Tempe, AZ, 85282 (United States); Qi, H. [Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Morrell-Falvey, J.L. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Kallewaard, N.L. [Vanderbilt University Medical Center, Nashville, TN 37232-2905 (United States); Crowe, J.E. [Vanderbilt University Medical Center, Nashville, TN 37232-2905 (United States); Doktycz, M.J. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States)]. E-mail: doktyczmj@ornl.gov

    2006-06-15

    A variety of biological samples can be imaged by the atomic force microscope (AFM) under environments that range from vacuum to ambient to liquid. Generally imaging is pursued to evaluate structural features of the sample or perhaps identify some structural changes in the sample that are induced by the investigator. In many cases, AFM images of sample features and induced structural changes are interpreted in general qualitative terms such as markedly smaller or larger, rougher, highly irregular, or smooth. Various manual tools can be used to analyze images and extract more quantitative data, but this is usually a cumbersome process. To facilitate quantitative AFM imaging, automated image analysis routines are being developed. Viral particles imaged in water were used as a test case to develop an algorithm that automatically extracts average dimensional information from a large set of individual particles. The extracted information allows statistical analyses of the dimensional characteristics of the particles and facilitates interpretation related to the binding of the particles to the surface. This algorithm is being extended for analysis of other biological samples and physical objects that are imaged by AFM.

  10. Image analysis for ophthalmological diagnosis: image processing of Corvis ST images using Matlab

    CERN Document Server

    Koprowski, Robert

    2016-01-01

    This monograph focuses on the use of analysis and processing methods for images from the Corvis® ST tonometer. The presented analysis is associated with the quantitative, repeatable and fully automatic evaluation of the response of the eye, eyeball and cornea to an air-puff. All the described algorithms were practically implemented in MATLAB®. The monograph also describes and provides the full source code designed to perform the discussed calculations. As a result, this monograph is intended for scientists, graduate students and students of computer science and bioengineering as well as doctors wishing to expand their knowledge of modern diagnostic methods assisted by various image analysis and processing methods.

  11. A Practical and Portable Solid-State Electronic Terahertz Imaging System

    Directory of Open Access Journals (Sweden)

    Ken Smart

    2016-04-01

    Full Text Available A practical, compact solid-state terahertz imaging system is presented. Various beam-guiding architectures were explored and hardware performance assessed to improve its compactness, robustness, multi-functionality and simplicity of operation. The system performance in terms of image resolution, signal-to-noise ratio, and electronic signal modulation versus an optical chopper is evaluated and discussed. The system can be conveniently switched between transmission and reflection mode according to the application. A range of imaging application scenarios was explored and images of high visual quality were obtained in both transmission and reflection modes.

  12. Separating Bulk and Surface Contributions to Electronic Excited-State Processes in Hybrid Mixed Perovskite Thin Films via Multimodal All-Optical Imaging.

    Science.gov (United States)

    Simpson, Mary Jane; Doughty, Benjamin; Das, Sanjib; Xiao, Kai; Ma, Ying-Zhong

    2017-07-20

    A comprehensive understanding of electronic excited-state phenomena underlying the impressive performance of solution-processed hybrid halide perovskite solar cells requires access to both spatially resolved electronic processes and corresponding sample morphological characteristics. Here, we demonstrate an all-optical multimodal imaging approach that enables us to obtain both electronic excited-state and morphological information on a single optical microscope platform with simultaneous high temporal and spatial resolution. Specifically, images were acquired for the same region of interest in thin films of chloride-containing mixed lead halide perovskites (CH3NH3PbI3-xClx) using femtosecond transient absorption, time-integrated photoluminescence, confocal reflectance, and transmission microscopies. Comprehensive image analysis revealed the presence of surface- and bulk-dominated contributions to the various images, which describe either spatially dependent electronic excited-state properties or morphological variations across the probed region of the thin films. These results show that PL effectively probes the species near or at the film surface.

  13. Imaging hydrated microbial extracellular polymers: Comparative analysis by electron microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Dohnalkova, A.C.; Marshall, M. J.; Arey, B. W.; Williams, K. H.; Buck, E. C.; Fredrickson, J. K.

    2011-01-01

    Microbe-mineral and microbe-metal interactions represent a major intersection between the biosphere and geosphere but require high-resolution imaging and analytical tools for investigating microscale associations. Electron microscopy has been used extensively for geomicrobial investigations, but although applied in good faith, the traditional methods of sample preparation do not preserve the native morphology of microbiological components, especially extracellular polymers. Herein, we present a direct comparative analysis of microbial interactions using conventional electron microscopy imaging at room temperature and a suite of cryogenic electron microscopy methods that image samples in a close-to-natural hydrated state. We observed in situ an irreversible transformation of the hydrated bacterial extracellular polymers during traditional dehydration-based sample preparation that resulted in their collapse into filamentous structures. Dehydration-induced polymer collapse can lead to inaccurate spatial relationships and hence could affect conclusions regarding the nature of interactions between microbial extracellular polymers and their environment.

  14. Applications of stochastic geometry in image analysis

    NARCIS (Netherlands)

    Lieshout, van M.N.M.; Kendall, W.S.; Molchanov, I.S.

    2009-01-01

    A discussion is given of various stochastic geometry models (random fields, sequential object processes, polygonal field models) which can be used in intermediate and high-level image analysis. Two examples are presented of actual image analysis problems (motion tracking in video,

  15. A report on digital image processing and analysis

    International Nuclear Information System (INIS)

    Singh, B.; Alex, J.; Haridasan, G.

    1989-01-01

    This report presents developments in software connected with digital image processing and analysis at the Centre. In image processing, one either alters grey-level values to enhance features in the image or resorts to transform-domain operations for restoration or filtering. Typical transform-domain operations such as the Karhunen-Loeve transform are statistical in nature and are used for good registration of images or template matching. Image analysis procedures segment grey-level images into images contained within selectable windows, for the purpose of estimating geometrical features of the image, such as area, perimeter and projections. In short, in image processing both the input and output are images, whereas in image analysis the input is an image and the output is a set of numbers and graphs. (author). 19 refs
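
    As a minimal illustration of the "image in, numbers out" distinction drawn above (the window coordinates and the Otsu threshold are illustrative choices, not the Centre's software):

    ```python
    # Segment a grey-level image inside a selectable window and report
    # geometrical features: area, perimeter and projections.
    import numpy as np
    from skimage import filters, measure

    def window_features(image, row_slice, col_slice):
        window = image[row_slice, col_slice]
        binary = window > filters.threshold_otsu(window)
        labels = measure.label(binary)
        regions = [{"area": r.area, "perimeter": r.perimeter}
                   for r in measure.regionprops(labels)]
        projections = {"rows": binary.sum(axis=1), "cols": binary.sum(axis=0)}
        return regions, projections   # numbers and curves, not an image

    # e.g. window_features(img, slice(100, 200), slice(50, 150))
    ```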

  16. APPLICATION OF PRINCIPAL COMPONENT ANALYSIS TO RELAXOGRAPHIC IMAGES

    International Nuclear Information System (INIS)

    STOYANOVA, R.S.; OCHS, M.F.; BROWN, T.R.; ROONEY, W.D.; LI, X.; LEE, J.H.; SPRINGER, C.S.

    1999-01-01

    Standard analysis methods for processing inversion recovery MR images traditionally have used single pixel techniques. In these techniques each pixel is independently fit to an exponential recovery, and spatial correlations in the data set are ignored. By analyzing the image as a complete dataset, improved error analysis and automatic segmentation can be achieved. Here, the authors apply principal component analysis (PCA) to a series of relaxographic images. This procedure decomposes the 3-dimensional data set into three separate images and corresponding recovery times. They attribute the 3 images to be spatial representations of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) content
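
    A minimal sketch of the PCA decomposition described above, treating each pixel's inversion-recovery curve as one observation (the library and reshaping choices are assumptions, not the authors' implementation):

    ```python
    # PCA of an inversion-recovery series of shape (n_TI, rows, cols): the
    # leading component scores form the component images (attributed above to
    # GM, WM and CSF content).
    import numpy as np
    from sklearn.decomposition import PCA

    def pca_component_images(series, n_components=3):
        n_ti, rows, cols = series.shape
        X = series.reshape(n_ti, rows * cols).T      # pixels x inversion times
        pca = PCA(n_components=n_components)
        scores = pca.fit_transform(X)                # pixels x components
        images = scores.T.reshape(n_components, rows, cols)
        return images, pca.components_               # spatial maps, temporal shapes
    ```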

  17. An Ibm PC/AT-Based Image Acquisition And Processing System For Quantitative Image Analysis

    Science.gov (United States)

    Kim, Yongmin; Alexander, Thomas

    1986-06-01

    In recent years, a large number of applications have been developed for image processing systems in the area of biological imaging. We have already finished the development of a dedicated microcomputer-based image processing and analysis system for quantitative microscopy. The system's primary function has been to facilitate and ultimately automate quantitative image analysis tasks such as the measurement of cellular DNA contents. We have recognized from this development experience, and interaction with system users, biologists and technicians, that the increasingly widespread use of image processing systems, and the development and application of new techniques for utilizing the capabilities of such systems, would generate a need for some kind of inexpensive general purpose image acquisition and processing system specially tailored for the needs of the medical community. We are currently engaged in the development and testing of hardware and software for a fairly high-performance image processing computer system based on a popular personal computer. In this paper, we describe the design and development of this system. Biological image processing computer systems have now reached a level of hardware and software refinement where they could become convenient image analysis tools for biologists. The development of a general purpose image processing system for quantitative image analysis that is inexpensive, flexible, and easy-to-use represents a significant step towards making the microscopic digital image processing techniques more widely applicable not only in a research environment as a biologist's workstation, but also in clinical environments as a diagnostic tool.

  18. CONTEXT BASED FOOD IMAGE ANALYSIS

    OpenAIRE

    He, Ye; Xu, Chang; Khanna, Nitin; Boushey, Carol J.; Delp, Edward J.

    2013-01-01

    We are developing a dietary assessment system that records daily food intake through the use of food images. Recognizing food in an image is difficult due to large visual variance with respect to eating or preparation conditions. This task becomes even more challenging when different foods have similar visual appearance. In this paper we propose to incorporate two types of contextual dietary information, food co-occurrence patterns and personalized learning models, in food image analysis to r...

  19. Analysis of fetal movements by Doppler actocardiogram and fetal B-mode imaging.

    Science.gov (United States)

    Maeda, K; Tatsumura, M; Utsu, M

    1999-12-01

    We have shown that fetal surveillance may be enhanced by use of the fetal actocardiogram and by computerized processing of fetal motion as well as fetal B-mode ultrasound imaging. The ultrasonic Doppler fetal actogram is a sensitive and objective method for detecting and recording fetal movements. Computer processing of the actograph output signals enables powerful, detailed, and convenient analysis of fetal physiologic phenomena. The actocardiogram is a useful measurement tool not only in fetal behavioral studies but also in the evaluation of fetal well-being. It reduces false-positive nonreactive NSTs and false-positive sinusoidal FHR patterns, and it is a valuable tool for predicting fetal distress. The results of intrapartum fetal monitoring are further improved by antepartum application of the actocardiogram. Quantified fetal motion analysis is a useful, objective evaluation of the embryo and fetus. This method allows monitoring of changes in fetal movement, as well as its frequency, amplitude, and duration. Furthermore, quantification of fetal motion enables evaluation of fetal behavioral states and how these states relate to other measurements, such as changes in FHR. Numeric analysis of both the fetal actogram and fetal motion from B-mode images is a promising approach to correlating fetal activity or behavior with other fetal physiologic measurements.

  20. Marginal space learning for medical image analysis efficient detection and segmentation of anatomical structures

    CERN Document Server

    Zheng, Yefeng

    2014-01-01

    Presents an award-winning image analysis technology (Thomas Edison Patent Award, MICCAI Young Investigator Award) that achieves object detection and segmentation with state-of-the-art accuracy and efficiency. Offers a flexible, machine learning-based framework applicable across multiple anatomical structures and imaging modalities. Covers thirty-five clinical applications on detecting and segmenting anatomical structures such as heart chambers and valves, blood vessels, liver, kidney, prostate, lymph nodes, and sub-cortical brain structures in CT, MRI, X-ray and ultrasound.

  1. Simultaneous PET/MR imaging in a human brain PET/MR system in 50 patients—Current state of image quality

    International Nuclear Information System (INIS)

    Schwenzer, N.F.; Stegger, L.; Bisdas, S.; Schraml, C.; Kolb, A.; Boss, A.; Müller, M.

    2012-01-01

    Objectives: The present work illustrates the current state of image quality and diagnostic accuracy in a new hybrid BrainPET/MR. Materials and methods: 50 patients with intracranial masses, head and upper neck tumors or neurodegenerative diseases were examined with a hybrid BrainPET/MR consisting of a conventional 3T MR system and an MR-compatible PET insert. Directly before PET/MR, all patients underwent a PET/CT examination with either [18F]-FDG, [11C]-methionine or [68Ga]-DOTATOC. In addition to anatomical MR scans, functional sequences were performed including diffusion tensor imaging (DTI), arterial spin labeling (ASL) and proton spectroscopy. Image quality of MR imaging was scored on a 4-point scale. PET data quality was assessed by evaluating FDG uptake and tumor delineation with [11C]-methionine and [68Ga]-DOTATOC. FDG uptake quantification accuracy was evaluated by means of ROI analysis (right and left frontal and temporo-occipital lobes). The asymmetry indices and ratios between frontal and occipital ROIs were compared. Results: In 45/50 patients, the PET/MR examination was successful. Visual analysis revealed diagnostic image quality of anatomical MR imaging (mean quality score T2 FSE: 1.27 ± 0.54; FLAIR: 1.38 ± 0.61). ASL and proton spectroscopy were possible in all cases. In DTI, dental artifacts led to one non-diagnostic dataset (mean quality score DTI: 1.32 ± 0.69; ASL: 1.10 ± 0.31). PET datasets from PET/MR and PET/CT offered comparable tumor delineation with [11C]-methionine; additional lesions were found in 2/8 [68Ga]-DOTATOC-PET examinations in the PET/MR. The mean asymmetry index revealed a high accordance between PET/MR and PET/CT (1.5 ± 2.2% vs. 0.9 ± 3.6%; mean ratio (frontal/parieto-occipital) 0.93 ± 0.08 vs. 0.96 ± 0.05), respectively. Conclusions: The hybrid BrainPET/MR allows for molecular, anatomical and functional imaging with uncompromised MR image quality and a high accordance of PET results between PET/MR and PET/CT.

  3. Tolerance analysis through computational imaging simulations

    Science.gov (United States)

    Birch, Gabriel C.; LaCasse, Charles F.; Stubbs, Jaclynn J.; Dagel, Amber L.; Bradley, Jon

    2017-11-01

    The modeling and simulation of non-traditional imaging systems require holistic consideration of the end-to-end system. We demonstrate this approach through a tolerance analysis of a random scattering lensless imaging system.

  4. Uses of software in digital image analysis: a forensic report

    Science.gov (United States)

    Sharma, Mukesh; Jha, Shailendra

    2010-02-01

    Forensic image analysis requires expertise to interpret the content of an image, or the image itself, in legal matters. Major sub-disciplines of forensic image analysis with law enforcement applications include photogrammetry, photographic comparison, content analysis and image authentication. Its applications in forensic science range from documenting crime scenes to enhancing faint or indistinct patterns such as partial fingerprints. The process of forensic image analysis can involve several different tasks, regardless of the type of image analysis performed. In this paper the authors explain these tasks, grouped into three categories: image compression, image enhancement and restoration, and measurement extraction. The tasks are illustrated with examples such as signature comparison, counterfeit currency comparison and footwear sole impressions, using the software Canvas and CorelDRAW.
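
    As one concrete example of the image enhancement and restoration category (illustrative only; this is not the workflow of the cited software):

    ```python
    # Contrast-limited adaptive histogram equalization (CLAHE) to bring out
    # faint detail such as a partial fingerprint.
    import cv2

    def enhance_faint_pattern(gray_image):
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        return clahe.apply(gray_image)   # expects an 8-bit single-channel image
    ```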

  5. An image scanner for real time analysis of spark chamber images

    International Nuclear Information System (INIS)

    Cesaroni, F.; Penso, G.; Locci, A.M.; Spano, M.A.

    1975-01-01

    This note describes the semiautomatic scanning system at LNF for the analysis of spark chamber images. From the projection of the images on the scanner table, the trajectory in real space is reconstructed

  6. Three-dimensional constructive interference in steady-state magnetic resonance imaging in syringomyelia: advantages over conventional imaging.

    Science.gov (United States)

    Roser, Florian; Ebner, Florian H; Danz, Søren; Riether, Felix; Ritz, Rainer; Dietz, Klaus; Naegele, Thomas; Tatagiba, Marcos S

    2008-05-01

    Neuroradiology has become indispensable in detecting the pathophysiology of syringomyelia. Constructive interference in steady-state (CISS) magnetic resonance (MR) imaging can provide superior contrast at the subarachnoid tissue borders. As this region is critical in preoperative evaluation, the authors hypothesized that CISS imaging would provide superior assessment of syrinx pathology and surgical planning. Based on records collected from a database of 130 patients with syringomyelia treated at the authors' institution, 59 patients were prospectively evaluated with complete neuroradiological examinations. In addition to routine acquisitions with FLAIR, T1- and T2-weighted, and contrast-enhanced MR imaging series, the authors obtained sagittal cardiac-gated sequences to visualize cerebrospinal fluid (CSF) pulsations and axial 3D CISS MR sequences to detect focal arachnoid webs. Statistical qualitative and quantitative evaluations of spinal cord/CSF contrast, spinal cord/CSF delineation, motion artifacts, and artifacts induced by pulsatile CSF flow were performed. The 3D CISS MR sequences demonstrated a contrast-to-noise ratio significantly better than that of any other routine imaging sequence and were less degraded by artifacts from pulsatile CSF flow voids. Constructive interference in steady-state MR imaging enables the neurosurgeon to accurately identify cases requiring decompression for obstructed CSF flow. Motion artifacts can be eliminated with technical variations.

  7. Application of automatic image analysis in wood science

    Science.gov (United States)

    Charles W. McMillin

    1982-01-01

    In this paper I describe an image analysis system and illustrate with examples the application of automatic quantitative measurement to wood science. Automatic image analysis, a powerful and relatively new technology, uses optical, video, electronic, and computer components to rapidly derive information from images with minimal operator interaction. Such instruments...

  8. Unbiased group-wise image registration: applications in brain fiber tract atlas construction and functional connectivity analysis.

    Science.gov (United States)

    Geng, Xiujuan; Gu, Hong; Shin, Wanyong; Ross, Thomas J; Yang, Yihong

    2011-10-01

    We propose an unbiased implicit-reference group-wise (IRG) image registration method and demonstrate its applications in the construction of a brain white matter fiber tract atlas and the analysis of resting-state functional MRI (fMRI) connectivity. Most image registration techniques pair-wise align images to a selected reference image and group analyses are performed in the reference space, which may produce bias. The proposed method jointly estimates transformations, with an elastic deformation model, registering all images to an implicit reference corresponding to the group average. The unbiased registration is applied to build a fiber tract atlas by registering a group of diffusion tensor images. Compared to reference-based registration, the IRG registration improves the fiber track overlap within the group. After applying the method in the fMRI connectivity analysis, results suggest a general improvement in functional connectivity maps at a group level in terms of larger cluster size and higher average t-scores.
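
    A toy sketch of the implicit-reference idea, restricted to translations estimated by phase correlation (the paper uses an elastic deformation model; function and variable names are illustrative):

    ```python
    # Implicit-reference group-wise alignment: the reference is never a chosen
    # subject but the evolving group average.
    import numpy as np
    from scipy.ndimage import shift as nd_shift
    from skimage.registration import phase_cross_correlation

    def groupwise_align(images, n_iter=5):
        images = [np.asarray(img, float) for img in images]
        offsets = [np.zeros(images[0].ndim) for _ in images]
        for _ in range(n_iter):
            aligned = [nd_shift(img, off) for img, off in zip(images, offsets)]
            reference = np.mean(aligned, axis=0)         # implicit reference
            for i, img in enumerate(aligned):
                delta, _, _ = phase_cross_correlation(reference, img)
                offsets[i] = offsets[i] + delta
        aligned = [nd_shift(img, off) for img, off in zip(images, offsets)]
        return aligned, np.mean(aligned, axis=0)
    ```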

  9. Imaging Acute Appendicitis: State of the Art

    Directory of Open Access Journals (Sweden)

    Diana Gaitini

    2011-01-01

    Full Text Available The goal of this review is to present the state of the art in imaging tests for the diagnosis of acute appendicitis. Relevant publications regarding the performance and advantages/disadvantages of imaging modalities for the diagnosis of appendicitis in different clinical situations were reviewed. Articles were extracted from a computerized database (MEDLINE) with the following limits activated: humans, English, core clinical journals, and published in the last five years. Reference lists of relevant studies were checked manually to identify additional related articles. Ultrasound (US) examination should be the first imaging test performed, particularly among the pediatric and young adult populations, who represent the main targets for appendicitis, as well as in pregnant patients. A positive US examination for appendicitis, an alternative diagnosis of possible gastrointestinal or urological origin, or a negative US, either showing a normal appendix or accompanied by low clinical suspicion of appendicitis, should lead to a final diagnosis. A negative or indeterminate examination with a strong clinical suspicion of appendicitis should be followed by a computed tomography (CT) scan or, alternatively, a magnetic resonance imaging (MRI) scan in a pregnant patient. A second US examination in a patient with persistent symptoms, especially if the first one was performed by a less experienced imaging professional, is a valid alternative to CT.

  10. Digital image processing and analysis human and computer vision applications with CVIPtools

    CERN Document Server

    Umbaugh, Scott E

    2010-01-01

    Section I: Introduction to Digital Image Processing and Analysis. Digital Image Processing and Analysis: Overview; Image Analysis and Computer Vision; Image Processing and Human Vision; Key Points; Exercises; References; Further Reading. Computer Imaging Systems: Imaging Systems Overview; Image Formation and Sensing; CVIPtools Software; Image Representation; Key Points; Exercises; Supplementary Exercises; References; Further Reading. Section II: Digital Image Analysis and Computer Vision. Introduction to Digital Image Analysis: Introduction; Preprocessing; Binary Image Analysis; Key Points; Exercises; Supplementary Exercises; References; Further Reading (table of contents excerpt, truncated)

  11. Digital color imaging

    CERN Document Server

    Fernandez-Maloigne, Christine; Macaire, Ludovic

    2013-01-01

    This collective work identifies the latest developments in the field of the automatic processing and analysis of digital color images. For researchers and students, it represents a critical state of the art on the scientific issues raised by the various steps constituting the chain of color image processing. It covers a wide range of topics related to computational color imaging, including color filtering and segmentation, color texture characterization, color invariants for object recognition, color and motion analysis, as well as color image and video indexing and retrieval.

  12. Surface analysis of lipids by mass spectrometry: more than just imaging.

    Science.gov (United States)

    Ellis, Shane R; Brown, Simon H; In Het Panhuis, Marc; Blanksby, Stephen J; Mitchell, Todd W

    2013-10-01

    Mass spectrometry is now an indispensable tool for lipid analysis and is arguably the driving force in the renaissance of lipid research. In its various forms, mass spectrometry is uniquely capable of resolving the extensive compositional and structural diversity of lipids in biological systems. Furthermore, it provides the ability to accurately quantify molecular-level changes in lipid populations associated with changes in metabolism and environment; bringing lipid science to the "omics" age. The recent explosion of mass spectrometry-based surface analysis techniques is fuelling further expansion of the lipidomics field. This is evidenced by the numerous papers published on the subject of mass spectrometric imaging of lipids in recent years. While imaging mass spectrometry provides new and exciting possibilities, it is but one of the many opportunities direct surface analysis offers the lipid researcher. In this review we describe the current state-of-the-art in the direct surface analysis of lipids with a focus on tissue sections, intact cells and thin-layer chromatography substrates. The suitability of these different approaches towards analysis of the major lipid classes along with their current and potential applications in the field of lipid analysis are evaluated. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Interpretation of medical images by model guided analysis

    International Nuclear Information System (INIS)

    Karssemeijer, N.

    1989-01-01

    Progress in the development of digital pictorial information systems stimulates a growing interest in the use of image analysis techniques in medicine. Especially when precise quantitative information is required, the use of fast and reproducible computer analysis may be more appropriate than relying on visual judgement alone. Such quantitative information can be valuable, for instance, in diagnostics or in irradiation therapy planning. As medical images are mostly recorded in a prescribed way, human anatomy guarantees a common image structure for each particular type of exam. In this thesis it is investigated how to make use of this a priori knowledge to guide image analysis. For that purpose, models are developed which are suited to capture common image structure. The first part of this study is devoted to an analysis of nuclear medicine images of myocardial perfusion. In ch. 2 a model of these images is designed to represent characteristic image properties. It is shown that for these relatively simple images a compact symbolic description can be achieved without significant loss of diagnostically important image properties. Possibilities for automatic interpretation of more complex images are investigated in the following chapters. The central topic is segmentation of organs. Two methods are proposed and tested on a set of abdominal X-ray CT scans. Ch. 3 describes a serial approach based on a semantic network and the use of search areas. Relational constraints are used to guide the image processing and to classify detected image segments. In ch.'s 4 and 5 a more general parallel approach is utilized, based on a Markov random field image model. A stochastic model used to represent prior knowledge about the spatial arrangement of organs is implemented as an external field. (author). 66 refs.; 27 figs.; 6 tabs
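
    A minimal sketch of the parallel, Markov-random-field style segmentation idea (a Potts smoothness prior with a Gaussian data term, relaxed by simple iterated label updates; the thesis's external anatomical field is omitted and all parameters are illustrative):

    ```python
    import numpy as np

    def mrf_segment(image, class_means, sigma=1.0, beta=1.5, n_iter=10):
        K = len(class_means)
        data_energy = np.stack([(image - m) ** 2 / (2 * sigma ** 2)
                                for m in class_means])      # K x rows x cols
        labels = np.argmin(data_energy, axis=0)             # data term only
        for _ in range(n_iter):
            padded = np.pad(labels, 1, mode="edge")
            disagree = np.zeros_like(data_energy)
            for k in range(K):
                for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
                    neigh = padded[1 + dr:padded.shape[0] - 1 + dr,
                                   1 + dc:padded.shape[1] - 1 + dc]
                    disagree[k] += (neigh != k)              # Potts penalty
            labels = np.argmin(data_energy + beta * disagree, axis=0)
        return labels
    ```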

  14. Determination of astaxanthin concentration in Rainbow trout (Oncorhynchus mykiss) by multispectral image analysis

    DEFF Research Database (Denmark)

    Frosch, Stina; Dissing, Bjørn Skovlund; Ersbøll, Bjarne Kjær

    Astaxanthin is the single most expensive constituent in salmonid fish feed. Therefore, control and optimization of the astaxanthin concentration from feed to fish is of paramount importance for a cost-effective salmonid production. Traditionally, methods for astaxanthin determination include...... extraction of astaxanthin from the minced sample into a suitable solvent such as acetone or hexane before further analysis. The existing methods have several drawbacks, including being destructive and labour-consuming. Current state-of-the-art vision systems for quality and process control in the fish...... to a larger degree than in a trichromatic image. In this study, multispectral imaging has been evaluated for characterization of the concentration of astaxanthin in rainbow trout fillets. Rainbow trout (Oncorhynchus mykiss) were filleted and imaged using a rapid multispectral imaging device...

  15. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis

    Science.gov (United States)

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Due to advancements in sensor technology, growing volumes of medical image data make it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. At the same time, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which in practice are difficult to determine. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383

  16. Time-Domain Fluorescence Lifetime Imaging Techniques Suitable for Solid-State Imaging Sensor Arrays

    Directory of Open Access Journals (Sweden)

    Robert K. Henderson

    2012-05-01

    Full Text Available We have successfully demonstrated video-rate CMOS single-photon avalanche diode (SPAD)-based cameras for fluorescence lifetime imaging microscopy (FLIM) by applying innovative FLIM algorithms. We also review and compare several time-domain techniques and solid-state FLIM systems, and adapt the proposed algorithms for massive CMOS SPAD-based arrays and hardware implementations. The theoretical error equations are derived and their performance is demonstrated on data obtained from 0.13 μm CMOS SPAD arrays and on multiple-decay data obtained from scanning PMT systems. In vivo two-photon fluorescence lifetime imaging data of FITC-albumin labeled vasculature of a P22 rat carcinosarcoma (BD9 rat window chamber) are used to test how different algorithms perform on bi-decay data. The proposed techniques are capable of producing lifetime images with sufficient contrast.
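
    As a worked example of a hardware-friendly time-domain estimator of the kind compared above, the classic two-gate rapid lifetime determination (RLD) formula is shown below; this is a textbook estimator, not necessarily one of the paper's own algorithms:

    ```python
    # Two-gate RLD: with photon counts D1 and D2 in two consecutive gates of
    # equal width dt, the lifetime is tau = dt / ln(D1 / D2).
    import numpy as np

    def rld_lifetime(gate1_counts, gate2_counts, gate_width_ns):
        ratio = np.asarray(gate1_counts, float) / np.asarray(gate2_counts, float)
        return gate_width_ns / np.log(ratio)    # per-pixel lifetime, in ns

    # e.g. 1000 counts in gate 1, 368 in gate 2 and a 2 ns gate width give
    # tau = 2 / ln(1000/368) ≈ 2.0 ns.
    ```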

  17. Wavelet analysis enables system-independent texture analysis of optical coherence tomography images

    Science.gov (United States)

    Lingley-Papadopoulos, Colleen A.; Loew, Murray H.; Zara, Jason M.

    2009-07-01

    Texture analysis for tissue characterization is a current area of optical coherence tomography (OCT) research. We discuss some of the differences between OCT systems and the effects those differences have on the resulting images and subsequent image analysis. In addition, as an example, two algorithms for the automatic recognition of bladder cancer are compared: one that was developed on a single system with no consideration for system differences, and one that was developed to address the issues associated with system differences. The first algorithm had a sensitivity of 73% and specificity of 69% when tested using leave-one-out cross-validation on data taken from a single system. When tested on images from another system with a different central wavelength, however, the method classified all images as cancerous regardless of the true pathology. By contrast, with the use of wavelet analysis and the removal of system-dependent features, the second algorithm reported sensitivity and specificity values of 87 and 58%, respectively, when trained on images taken with one imaging system and tested on images taken with another.
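
    A minimal sketch of gain-normalised wavelet texture features of the kind such an approach relies on (the wavelet family and feature set are assumptions, not the authors' exact pipeline):

    ```python
    # Sub-band energies of a 2-D discrete wavelet transform of one OCT frame,
    # normalised so that overall system gain cancels out.
    import numpy as np
    import pywt

    def wavelet_texture_features(image, wavelet="db4", level=3):
        coeffs = pywt.wavedec2(np.asarray(image, float), wavelet, level=level)
        energies = []
        for detail_level in coeffs[1:]:          # (cH, cV, cD) per level
            for band in detail_level:
                energies.append(np.mean(band ** 2))
        energies = np.array(energies)
        return energies / energies.sum()         # normalise out overall gain
    ```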

  19. Analysis of PET hypoxia imaging in the quantitative imaging for personalized cancer medicine program

    International Nuclear Information System (INIS)

    Yeung, Ivan; Driscoll, Brandon; Keller, Harald; Shek, Tina; Jaffray, David; Hedley, David

    2014-01-01

    Quantitative imaging is an important tool in clinical trials testing novel agents and strategies for cancer treatment. The Quantitative Imaging for Personalized Cancer Medicine Program (QIPCM) provides clinicians and researchers participating in multi-center clinical trials with a central repository for their imaging data. In addition, a set of tools provides standards of practice (SOPs) for end-to-end quality assurance of scanners and image analysis. The four components for data archiving and analysis are the Clinical Trials Patient Database, the Clinical Trials PACS, the data analysis engine(s) and the high-speed networks that connect them. The program provides a suite of software able to perform RECIST, dynamic MRI, CT and PET analysis. The imaging data can be accessed securely from remote sites and analyzed by researchers with these software tools, or with tools provided by the users and installed on the server. Alternatively, QIPCM provides a service for analysis of the imaging data according to developed SOPs. An example will be discussed of a clinical study in which patients with unresectable pancreatic adenocarcinoma were studied with dynamic PET-FAZA for hypoxia measurement. We successfully quantified the degree of hypoxia as well as tumor perfusion in a group of 20 patients in terms of SUV and hypoxic fraction, and found no correlation between bulk tumor perfusion and hypoxia status in this cohort. QIPCM also provides end-to-end QA testing of scanners used in multi-center clinical trials. Based on quality assurance data from multiple CT-PET scanners, we concluded that quality control of imaging is vital to the success of multi-center trials, as different imaging and reconstruction parameters in PET imaging could lead to very different results in hypoxia imaging. (author)
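
    For orientation, the two quantities reported above can be computed with the standard definitions sketched below; the tumour-to-blood cut-off of 1.2 is a commonly used FAZA value and an assumption here, not the QIPCM SOP:

    ```python
    # Body-weight SUV and a threshold-based hypoxic fraction.
    import numpy as np

    def suv(activity_bq_per_ml, injected_dose_bq, body_weight_g):
        return activity_bq_per_ml / (injected_dose_bq / body_weight_g)

    def hypoxic_fraction(tumour_voxels_bq_per_ml, blood_activity_bq_per_ml,
                         ratio_threshold=1.2):
        ratios = np.asarray(tumour_voxels_bq_per_ml) / blood_activity_bq_per_ml
        return float(np.mean(ratios > ratio_threshold))
    ```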

  20. GEOPOSITIONING PRECISION ANALYSIS OF MULTIPLE IMAGE TRIANGULATION USING LRO NAC LUNAR IMAGES

    Directory of Open Access Journals (Sweden)

    K. Di

    2016-06-01

    Full Text Available This paper presents an empirical analysis of the geopositioning precision of multiple image triangulation using Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) images at the Chang'e-3 (CE-3) landing site. Nine LROC NAC images are selected for comparative analysis of geopositioning precision. Rigorous sensor models of the images are established based on collinearity equations with interior and exterior orientation elements retrieved from the corresponding SPICE kernels. Rational polynomial coefficients (RPCs) of each image are derived by least-squares fitting using a vast number of virtual control points generated according to the rigorous sensor models. Experiments with different combinations of images are performed for comparison. The results demonstrate that the plane coordinates can achieve a precision of 0.54 m to 2.54 m, with a height precision of 0.71 m to 8.16 m, when only two images are used for three-dimensional triangulation. There is a general trend that the geopositioning precision, especially the height precision, improves as the convergence angle of the two images increases from several degrees to about 50°. However, the image matching precision should also be taken into consideration when choosing image pairs for triangulation. The precisions obtained using all nine images are 0.60 m, 0.50 m and 1.23 m in the along-track, cross-track and height directions, which are better than those of most combinations of two or more images.

  1. An application of image processing techniques in computed tomography image analysis

    DEFF Research Database (Denmark)

    McEvoy, Fintan

    2007-01-01

    number of animals and image slices, automation of the process was desirable. The open-source and free image analysis program ImageJ was used. A macro procedure was created that provided the required functionality. The macro performs a number of basic image processing procedures. These include an initial...... process designed to remove the scanning table from the image and to center the animal in the image. This is followed by placement of a vertical line segment from the mid point of the upper border of the image to the image center. Measurements are made between automatically detected outer and inner...... boundaries of subcutaneous adipose tissue along this line segment. This process was repeated as the image was rotated (with the line position remaining unchanged) so that measurements around the complete circumference were obtained. Additionally, an image was created showing all detected boundary points so...
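
    The original implementation is an ImageJ macro; purely as an illustration of the same measurement idea, a rough Python re-sketch follows (the fat Hounsfield-unit range and the rotation step are assumptions, not taken from the study):

    ```python
    # Keep a fixed line from the top-centre of the slice to its centre, rotate the
    # CT slice in steps, and measure the outermost contiguous run of fat-range
    # voxels along that line.
    import numpy as np
    from scipy.ndimage import rotate

    def fat_thickness_profile(ct_slice, step_deg=10, fat_range=(-190, -30)):
        rows, cols = ct_slice.shape
        col = cols // 2
        thicknesses = []
        for angle in range(0, 360, step_deg):
            rotated = rotate(ct_slice, angle, reshape=False, order=1, cval=-1000)
            line = rotated[:rows // 2, col]          # upper border -> centre
            idx = np.flatnonzero((line > fat_range[0]) & (line < fat_range[1]))
            if idx.size:
                runs = np.split(idx, np.where(np.diff(idx) > 1)[0] + 1)
                thicknesses.append(len(runs[0]))     # outermost fat layer, px
            else:
                thicknesses.append(0)
        return np.array(thicknesses)   # multiply by pixel spacing for mm
    ```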

  2. Imaging of lumbar degenerative disk disease: history and current state

    International Nuclear Information System (INIS)

    Emch, Todd M.; Modic, Michael T.

    2011-01-01

    One of the most common indications for performing magnetic resonance (MR) imaging of the lumbar spine is the symptom complex thought to originate as a result of degenerative disk disease. MR imaging, which has emerged as perhaps the modality of choice for imaging degenerative disk disease, can readily demonstrate disk pathology, degenerative endplate changes, facet and ligamentous hypertrophic changes, and the sequelae of instability. Its role in terms of predicting natural history of low back pain, identifying causality, or offering prognostic information is unclear. As available modalities for imaging the spine have progressed from radiography, myelography, and computed tomography to MR imaging, there have also been advances in spine surgery for degenerative disk disease. These advances are described in a temporal context for historical purposes with a focus on MR imaging's history and current state. (orig.)

  3. Reconstruction of Intima and Adventitia Models into a State Undeformed by a Catheter by Using CT, IVUS, and Biplane X-Ray Angiogram Images

    Directory of Open Access Journals (Sweden)

    Jinwon Son

    2017-01-01

    Full Text Available The number of studies on blood flow analysis using fluid-structure interaction (FSI) analysis is increasing. Although a 3D blood vessel model that includes the intima and adventitia is required for FSI analysis, it is difficult to generate one from only one type of medical imaging. In this paper, we propose a 3D modeling method for accurate FSI analysis. An intravascular ultrasound (IVUS) image is used together with biplane X-ray angiogram images to calculate the position and orientation of the blood vessel. However, these images show the blood vessel deformed by the catheter inserted into it for IVUS imaging. To eliminate such deformation, a CT image was added and the two models were registered. First, a 3D model of the undeformed intima was generated using the CT image. In the second stage, a model of the intima and adventitia deformed by the catheter was generated by combining the IVUS image and the X-ray angiogram images. A 3D model of the intima and adventitia with the catheter-induced deformation eliminated was then generated by registering these 3D blood vessel models in their different states. In addition, a 3D blood vessel model including a bifurcation was generated using the proposed method.

  4. Object-Based Image Analysis of WorldView-2 Satellite Data for the Classification of Mangrove Areas in the City of São Luís, Maranhão State, Brazil

    Science.gov (United States)

    Kux, H. J. H.; Souza, U. D. V.

    2012-07-01

    Taking into account the importance of mangrove environments for the biodiversity of coastal areas, the objective of this paper is to classify the different types of irregular human occupation in areas of mangrove vegetation in São Luís, capital of Maranhão State, Brazil, using the OBIA (Object-based Image Analysis) approach with WorldView-2 satellite data and InterIMAGE, a free image analysis software package. A methodology for the study of the area covered by mangroves in the northern portion of the city was proposed to identify the main targets of this area, such as: marsh areas (known locally as Apicum), mangrove forests, tidal channels, blockhouses (irregular constructions), embankments, paved streets and different condominiums. Initially, a databank including information on the main types of occupation and environments was established for the study area. An image fusion (multispectral bands with the panchromatic band) was performed to improve the information content of the WorldView-2 data. An ortho-rectification of the dataset followed, to allow comparison with cartographic data from the municipality, using Ground Control Points (GCPs) collected during a field survey. Using the data mining software GEODMA, a series of attributes characterizing the targets of interest was established. Afterwards the classes were structured, a knowledge model was created and the classification was performed. The OBIA approach eased the mapping of such sensitive areas, showing the irregular occupations and embankments of mangrove forests, which reduce their area and damage the marine biodiversity.

  5. Integrating fuzzy object based image analysis and ant colony optimization for road extraction from remotely sensed images

    Science.gov (United States)

    Maboudi, Mehdi; Amini, Jalal; Malihi, Shirin; Hahn, Michael

    2018-04-01

    An updated road network, as a crucial part of the transportation database, plays an important role in various applications. Thus, increasing the automation of road extraction approaches from remote sensing images has been the subject of extensive research. In this paper, we propose an object-based road extraction approach for very high resolution satellite images. Building on object-based image analysis, our approach incorporates various spatial, spectral and textural object descriptors, the capabilities of a fuzzy logic system for handling the uncertainties in road modelling, and the effectiveness and suitability of the ant colony algorithm for optimization of network-related problems. Four VHR optical satellite images acquired by the WorldView-2 and IKONOS satellites are used to evaluate the proposed approach. Evaluation of the extracted road networks shows that the average completeness, correctness and quality of the results reach 89%, 93% and 83% respectively, indicating that the proposed approach is applicable for urban road extraction. We also analyzed the sensitivity of our algorithm to different ant colony optimization parameter values. Comparison of the achieved results with the results of four state-of-the-art algorithms and quantification of the robustness of the fuzzy rule set demonstrate that the proposed approach is both efficient and transferable to other comparable images.
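
    The completeness, correctness and quality figures quoted above are conventionally computed from matched true-positive (TP), false-positive (FP) and false-negative (FN) road lengths; the standard definitions, not spelled out in the abstract, are:

    ```python
    def road_extraction_scores(tp, fp, fn):
        completeness = tp / (tp + fn)   # share of the reference that was found
        correctness = tp / (tp + fp)    # share of the extraction that is correct
        quality = tp / (tp + fp + fn)   # combined measure
        return completeness, correctness, quality

    # e.g. road_extraction_scores(tp=890, fp=67, fn=110) -> (0.89, 0.93, 0.83),
    # consistent with the averages reported above.
    ```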

  6. Frequency domain analysis of knock images

    Science.gov (United States)

    Qi, Yunliang; He, Xin; Wang, Zhi; Wang, Jianxin

    2014-12-01

    High speed imaging-based knock analysis has mainly focused on time domain information, e.g. the spark triggered flame speed, the time when end gas auto-ignition occurs and the end gas flame speed after auto-ignition. This study presents a frequency domain analysis on the knock images recorded using a high speed camera with direct photography in a rapid compression machine (RCM). To clearly visualize the pressure wave oscillation in the combustion chamber, the images were high-pass-filtered to extract the luminosity oscillation. The luminosity spectrum was then obtained by applying fast Fourier transform (FFT) to three basic colour components (red, green and blue) of the high-pass-filtered images. Compared to the pressure spectrum, the luminosity spectra better identify the resonant modes of pressure wave oscillation. More importantly, the resonant mode shapes can be clearly visualized by reconstructing the images based on the amplitudes of luminosity spectra at the corresponding resonant frequencies, which agree well with the analytical solutions for mode shapes of gas vibration in a cylindrical cavity.
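
    A sketch of the frequency-domain step described above: remove the slow luminosity trend per pixel, take the FFT along time, and reconstruct a mode-shape image from the spectral amplitude at a chosen resonant frequency (the moving-average high-pass filter is an illustrative choice):

    ```python
    import numpy as np

    def mode_shape(frames, frame_rate_hz, target_freq_hz, smooth_win=15):
        frames = np.asarray(frames, float)               # (time, rows, cols)
        kernel = np.ones(smooth_win) / smooth_win
        trend = np.apply_along_axis(
            lambda s: np.convolve(s, kernel, mode="same"), 0, frames)
        oscillation = frames - trend                     # crude temporal high-pass
        spectrum = np.fft.rfft(oscillation, axis=0)
        freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / frame_rate_hz)
        k = np.argmin(np.abs(freqs - target_freq_hz))
        # mode-shape image, frequency axis, spatially averaged spectrum
        return np.abs(spectrum[k]), freqs, np.abs(spectrum).mean(axis=(1, 2))
    ```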

  7. Are chargemaster rates for imaging studies lower in States that cap noneconomic damages (tort reform)?

    Science.gov (United States)

    Stein, Seth I; Barry, Jeffrey R; Jha, Saurabh

    2014-08-01

    To determine whether chargemaster (a list of prices for common services and procedures set by individual hospitals) rates for diagnostic imaging were lower in states that cap awards for noneconomic damages (NED) than states with unlimited awards for medical negligence. We analyzed 2011 chargemaster data from the Centers for Medicare & Medicaid, pertaining to 6 ambulatory patient classifications specific to imaging. The dataset includes outpatient imaging facilities and hospitals in 49 states and the District of Columbia. The association between caps on NED and chargemaster rates for imaging in a sample of 15,218 data points was analyzed using linear regression and two-sample t tests assuming unequal variances. In states that cap NED, the chargemaster rates were higher for the following modalities: Level II Echocardiogram without Contrast (mean charges: $2,015.60 versus $1,884.81, P = .0018); Level II Cardiac Imaging ($4,670.25 versus $4,398.58, P = .002); MRI & Magnetic Resonance Angiography without Contrast ($2,654.31 versus $2,526.74, P = .002); and Level III Diagnostic and Screening Ultrasound ($1,073.31 versus $1,027.32, P = .037). High charge-to-payment ratios were associated with states with the highest charges. There was a positive correlation between number of outpatient centers in the state and the average chargemaster rates for the state (mean chargemaster rate = 1727 + 0.79*Number of Outpatient Centers; R-squared = 0.23, P = .0004). Chargemaster rates for select imaging services are not lower in states that have capped NED. Copyright © 2014 American College of Radiology. Published by Elsevier Inc. All rights reserved.
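
    A minimal sketch of the two statistical analyses named above (a Welch two-sample t-test and a simple linear regression), run on synthetic placeholder data since the CMS chargemaster dataset is not reproduced here:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    capped_rates = rng.normal(2000, 300, 40)     # placeholder: NED-cap states
    uncapped_rates = rng.normal(1900, 300, 40)   # placeholder: no-cap states
    t_stat, p_value = stats.ttest_ind(capped_rates, uncapped_rates,
                                      equal_var=False)   # unequal variances

    n_centers = rng.integers(10, 500, 50)        # outpatient centers per state
    mean_rate = 1727 + 0.79 * n_centers + rng.normal(0, 100, 50)
    fit = stats.linregress(n_centers, mean_rate)
    # fit.slope, fit.intercept and fit.rvalue**2 correspond to the reported
    # 0.79, 1727 and R-squared values for the real data.
    ```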

  8. ImageJ-MATLAB: a bidirectional framework for scientific image analysis interoperability.

    Science.gov (United States)

    Hiner, Mark C; Rueden, Curtis T; Eliceiri, Kevin W

    2017-02-15

    ImageJ-MATLAB is a lightweight Java library facilitating bi-directional interoperability between MATLAB and ImageJ. By defining a standard for translation between matrix and image data structures, researchers are empowered to select the best tool for their image-analysis tasks. Availability and implementation: freely available extension to ImageJ2 (http://imagej.net/Downloads); installation and use instructions available at http://imagej.net/MATLAB_Scripting. Tested with ImageJ 2.0.0-rc-54, Java 1.8.0_66 and MATLAB R2015b. Contact: eliceiri@wisc.edu. Supplementary data are available at Bioinformatics online.

  9. The MicroAnalysis Toolkit: X-ray Fluorescence Image Processing Software

    International Nuclear Information System (INIS)

    Webb, S. M.

    2011-01-01

    The MicroAnalysis Toolkit is an analysis suite designed for the processing of x-ray fluorescence microprobe data. The program contains a wide variety of analysis tools, including image maps, correlation plots, simple image math, image filtering, multiple energy image fitting, semi-quantitative elemental analysis, x-ray fluorescence spectrum analysis, principal component analysis, and tomographic reconstructions. To be as widely useful as possible, data formats from many synchrotron sources can be read by the program, with more formats available by request. An overview of the most common features will be presented.

  10. Image sequence analysis in nuclear medicine: (1) Parametric imaging using statistical modelling

    International Nuclear Information System (INIS)

    Liehn, J.C.; Hannequin, P.; Valeyre, J.

    1989-01-01

    This is a review of parametric imaging methods in nuclear medicine. A parametric image is an image in which each pixel value is a function of the values of the same pixel across an image sequence. The Local Model Method fits each pixel's time-activity curve with a model whose parameter values form the parametric images. The Global Model Method models the changes between two images and is applied to image comparison. For both methods, the different models, the identification criterion, the optimization methods and the statistical properties of the images are discussed. The analysis of one or more parametric images is performed using 1D or 2D histograms. Statistically significant parametric images (images of significant variances, amplitudes and differences) are also proposed [fr]
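
    As an illustration of the Local Model Method (the mono-exponential model and the fitting routine are assumptions, not the paper's):

    ```python
    # Fit every pixel's time-activity curve with a model and store the fitted
    # parameters as parametric images.
    import numpy as np
    from scipy.optimize import curve_fit

    def monoexp(t, amplitude, rate):
        return amplitude * np.exp(-rate * t)

    def parametric_images(sequence, times):
        n_t, rows, cols = sequence.shape
        amp = np.full((rows, cols), np.nan)
        rate = np.full((rows, cols), np.nan)
        for r in range(rows):
            for c in range(cols):
                curve = sequence[:, r, c]
                try:
                    popt, _ = curve_fit(monoexp, times, curve,
                                        p0=(max(curve.max(), 1e-6), 0.1),
                                        maxfev=2000)
                    amp[r, c], rate[r, c] = popt
                except RuntimeError:        # fit failed for this pixel
                    pass
        return amp, rate                    # two parametric images
    ```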

  11. Multiplicative calculus in biomedical image analysis

    NARCIS (Netherlands)

    Florack, L.M.J.; Assen, van H.C.

    2011-01-01

    We advocate the use of an alternative calculus in biomedical image analysis, known as multiplicative (a.k.a. non-Newtonian) calculus. It provides a natural framework in problems in which positive images or positive definite matrix fields and positivity preserving operators are of interest. Indeed,
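
    For reference, the central object of the calculus referred to above, added here as an illustration and not quoted from the paper, is the multiplicative derivative of a positive function:

    ```latex
    % Differences are replaced by ratios, so positivity is preserved by construction.
    \[
      f^{*}(x) = \lim_{h \to 0} \left( \frac{f(x+h)}{f(x)} \right)^{1/h}
               = \exp\!\left( \frac{\mathrm{d}}{\mathrm{d}x} \ln f(x) \right),
      \qquad f(x) > 0 .
    \]
    ```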

  12. Breakfast food health and acute exercise: Effects on state body image.

    Science.gov (United States)

    Hayes, Jacqueline F; Giles, Grace E; Mahoney, Caroline R; Kanarek, Robin B

    2018-05-10

    Food intake and exercise have been shown to alter body satisfaction in a state-dependent manner. One-time consumption of food perceived as unhealthy can be detrimental to body satisfaction, whereas an acute bout of moderate-intensity aerobic exercise can be beneficial. The current study examined the effect of exercise on state body image and appearance-related self-esteem following consumption of isocaloric foods perceived as healthy or unhealthy in 36 female college students (18-30 years old) in the Northeastern United States. Using a randomized-controlled design, participants attended six study sessions with breakfast conditions (healthy, unhealthy, no food) and activity (exercise, quiet rest) as within-participants factors. Body image questionnaires were completed prior to breakfast condition, between breakfast and activity conditions, and following activity condition. Results showed that consumption of an unhealthy breakfast decreased appearance self-esteem and increased body size perception, whereas consumption of a healthy breakfast did not influence appearance self-esteem but increased body size perception. Exercise did not influence state body image attitudes or perceptions following meal consumption. Study findings suggest that morning meal type, but not aerobic exercise, influence body satisfaction in college-aged females. Copyright © 2018. Published by Elsevier Ltd.

  13. Plant phenomics: an overview of image acquisition technologies and image data analysis algorithms.

    Science.gov (United States)

    Perez-Sanz, Fernando; Navarro, Pedro J; Egea-Cortines, Marcos

    2017-11-01

    The study of phenomes or phenomics has been a central part of biology. The field of automatic phenotype acquisition technologies based on images has seen an important advance in the last years. As with other high-throughput technologies, it addresses a common set of problems, including data acquisition and analysis. In this review, we give an overview of the main systems developed to acquire images. We give an in-depth analysis of image processing with its major issues and the algorithms that are being used or emerging as useful to obtain data out of images in an automatic fashion. © The Author 2017. Published by Oxford University Press.

  14. Unveiling multiple solid-state transitions in pharmaceutical solid dosage forms using multi-series hyperspectral imaging and different curve resolution approaches

    DEFF Research Database (Denmark)

    Alexandrino, Guilherme L; Amigo Rubio, Jose Manuel; Khorasani, Milad Rouhi

    2017-01-01

    Solid-state transitions at the surface of pharmaceutical solid dosage forms (SDF) were monitored using multi-series hyperspectral imaging (HSI) along with Multivariate Curve Resolution – Alternating Least Squares (MCR-ALS) and Parallel Factor Analysis (PARAFAC and PARAFAC2). First, the solid-stat...

  15. Operational Automatic Remote Sensing Image Understanding Systems: Beyond Geographic Object-Based and Object-Oriented Image Analysis (GEOBIA/GEOOIA). Part 1: Introduction

    Directory of Open Access Journals (Sweden)

    Andrea Baraldi

    2012-09-01

    Full Text Available According to the existing literature and despite their commercial success, state-of-the-art two-stage non-iterative geographic object-based image analysis (GEOBIA) systems and three-stage iterative geographic object-oriented image analysis (GEOOIA) systems, where GEOOIA is a superset of GEOBIA, remain affected by a lack of productivity, general consensus and research. To outperform the degree of automation, accuracy, efficiency, robustness, scalability and timeliness of existing GEOBIA/GEOOIA systems in compliance with the Quality Assurance Framework for Earth Observation (QA4EO) guidelines, this methodological work is split into two parts. The present first paper provides a multi-disciplinary Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis of the GEOBIA/GEOOIA approaches that augments similar analyses proposed in recent years. In line with constraints stemming from human vision, this SWOT analysis promotes a shift of learning paradigm in the pre-attentive vision first stage of a remote sensing (RS) image understanding system (RS-IUS), from sub-symbolic statistical model-based (inductive) image segmentation to symbolic physical model-based (deductive) image preliminary classification. Hence, a symbolic deductive pre-attentive vision first stage accomplishes image sub-symbolic segmentation and image symbolic pre-classification simultaneously. In the second part of this work a novel hybrid (combined deductive and inductive) RS-IUS architecture featuring a symbolic deductive pre-attentive vision first stage is proposed and discussed in terms of: (a) computational theory (system design); (b) information/knowledge representation; (c) algorithm design; and (d) implementation. As proof-of-concept of a symbolic physical model-based pre-attentive vision first stage, the spectral knowledge-based, operational, near real-time Satellite Image Automatic Mapper™ (SIAM™) is selected from the existing literature. To the best of these authors' knowledge, this is the first time a

  16. Tridimensional ultrasonic images analysis for the in service inspection of fast breeder reactors; Analyse d'images tridimensionnelles ultrasonores pour l'inspection en service des reacteurs a neutrons rapides

    Energy Technology Data Exchange (ETDEWEB)

    Dancre, M

    1999-11-01

    Three-dimensional image analysis provides a set of methods for the intelligent extraction of information in order to visualize, recognize or inspect objects in volumetric images. In this field of research, we are interested in algorithmic and methodological aspects of extracting surface visual information embedded in volumetric ultrasonic images. The aim is to help a non-acoustician operator, and possibly the system itself, to inspect surfaces of the vessel and internals in Fast Breeder Reactors (FBR). These surfaces are immersed in liquid metal, which justifies the choice of ultrasonic technology. We first present a state of the art on the visualization of volumetric ultrasonic images, methods of noise analysis, geometrical modelling for surface analysis, and finally curve and surface matching. These four points are then integrated into a global analysis strategy that relies on an acoustical analysis (echo recognition), an object analysis (object recognition and reconstruction) and a surface analysis (detection of surface defects). Little literature can be found on ultrasonic echo recognition through image analysis. We suggest an original method that can be generalized to all images with structured and non-structured noise. From a technical point of view, this methodology applied to echo recognition turns out to be a cooperative approach between mathematical morphology and snakes (active contours). An entropy maximization technique is required for binarization of the volumetric data. (author)
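
    As a sketch of the entropy-maximisation binarisation step mentioned at the end of the abstract, a Kapur-style maximum-entropy threshold could look as follows (the thesis may use a different criterion):

    ```python
    # Choose the grey level that maximises the sum of the entropies of the two
    # resulting class distributions of the volume histogram.
    import numpy as np

    def max_entropy_threshold(volume, n_bins=256):
        hist, edges = np.histogram(volume.ravel(), bins=n_bins)
        p = hist.astype(float) / hist.sum()
        best_t, best_h = 1, -np.inf
        for t in range(1, n_bins - 1):
            p0, p1 = p[:t].sum(), p[t:].sum()
            if p0 <= 0 or p1 <= 0:
                continue
            q0, q1 = p[:t] / p0, p[t:] / p1
            h = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() \
                - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()
            if h > best_h:
                best_t, best_h = t, h
        return edges[best_t]

    # binary_volume = volume > max_entropy_threshold(volume)
    ```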

  17. High Density Aerial Image Matching: State-of-the-Art and Future Prospects

    Science.gov (United States)

    Haala, N.; Cavegn, S.

    2016-06-01

    Ongoing innovations in matching algorithms are continuously improving the quality of geometric surface representations generated automatically from aerial images. This development motivated the launch of the joint ISPRS/EuroSDR project "Benchmark on High Density Aerial Image Matching", which aims at the evaluation of photogrammetric 3D data capture in view of the current developments in dense multi-view stereo-image matching. Originally, the test aimed at image-based DSM computation from conventional aerial image flights for different land-use and image block configurations. The second phase then put an additional focus on high-quality, high-resolution 3D geometric data capture in complex urban areas. This includes both the extension of the test scenario to oblique aerial image flights as well as the generation of filtered point clouds as additional output of the respective multi-view reconstruction. The paper uses the preliminary outcomes of the benchmark to demonstrate the state of the art in airborne image matching with a special focus on high-quality geometric data capture in urban scenarios.

  18. Fractal-Based Image Analysis In Radiological Applications

    Science.gov (United States)

    Dellepiane, S.; Serpico, S. B.; Vernazza, G.; Viviani, R.

    1987-10-01

    We present some preliminary results of a study aimed to assess the actual effectiveness of fractal theory and to define its limitations in the area of medical image analysis for texture description, in particular, in radiological applications. A general analysis to select appropriate parameters (mask size, tolerance on fractal dimension estimation, etc.) has been performed on synthetically generated images of known fractal dimensions. Moreover, we analyzed some radiological images of human organs in which pathological areas can be observed. Input images were subdivided into blocks of 6x6 pixels; then, for each block, the fractal dimension was computed in order to create fractal images whose intensity was related to the D value, i.e., texture behaviour. Results revealed that the fractal images could point out the differences between normal and pathological tissues. By applying histogram-splitting segmentation to the fractal images, pathological areas were isolated. Two different techniques (i.e., the method developed by Pentland and the "blanket" method) were employed to obtain fractal dimension values, and the results were compared; in both cases, the appropriateness of the fractal description of the original images was verified.
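    As an illustration of the block-wise fractal mapping described above, the following sketch estimates a fractal dimension per block and assembles a coarse "fractal image". The 16×16 block size, the scale set, and the use of the differential box-counting estimator (rather than the Pentland or "blanket" methods named in the record) are assumptions made to keep the example short.

```python
import numpy as np

def dbc_fractal_dimension(block, scales=(2, 4, 8)):
    """Differential box-counting estimate of the fractal dimension
    of a grey-level intensity surface (a common estimator; illustrative only)."""
    M = block.shape[0]                     # assume a square block
    G = 256.0                              # grey-level range for 8-bit data
    log_inv_r, log_N = [], []
    for s in scales:
        if M % s:                          # only scales that tile the block exactly
            continue
        h = G * s / M                      # box height at this scale
        # split the block into (M/s) x (M/s) cells of size s x s
        cells = block.reshape(M // s, s, M // s, s).swapaxes(1, 2)
        gmax = cells.max(axis=(2, 3)).astype(float)
        gmin = cells.min(axis=(2, 3)).astype(float)
        n_r = np.ceil((gmax - gmin) / h) + 1.0   # boxes needed per cell
        log_N.append(np.log(n_r.sum()))
        log_inv_r.append(np.log(M / s))
    # slope of log N(r) versus log(1/r) approximates the fractal dimension D
    D, _ = np.polyfit(log_inv_r, log_N, 1)
    return D

def fractal_image(image, block=16):
    """Map an 8-bit grey image to a per-block fractal-dimension image."""
    H, W = image.shape
    out = np.zeros((H // block, W // block))
    for i in range(H // block):
        for j in range(W // block):
            out[i, j] = dbc_fractal_dimension(
                image[i * block:(i + 1) * block, j * block:(j + 1) * block])
    return out

# usage: fd_map = fractal_image(skimage.data.camera(), block=16)
```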

  19. Digital image analysis of NDT radiographs

    International Nuclear Information System (INIS)

    Graeme, W.A. Jr.; Eizember, A.C.; Douglass, J.

    1989-01-01

    Prior to the introduction of Charge Coupled Device (CCD) detectors the majority of image analysis performed on NDT radiographic images was done visually in the analog domain. While some film digitization was being performed, the process was often unable to capture all the usable information on the radiograph or was too time consuming. CCD technology now provides a method to digitize radiographic film images without losing the useful information captured in the original radiograph in a timely process. Incorporating that technology into a complete digital radiographic workstation allows analog radiographic information to be processed, providing additional information to the radiographer. Once in the digital domain, that data can be stored, and fused with radioscopic and other forms of digital data. The result is more productive analysis and management of radiographic inspection data. The principal function of the NDT Scan IV digital radiography system is the digitization, enhancement and storage of radiographic images

  20. Breast cancer histopathology image analysis: a review.

    Science.gov (United States)

    Veta, Mitko; Pluim, Josien P W; van Diest, Paul J; Viergever, Max A

    2014-05-01

    This paper presents an overview of methods that have been proposed for the analysis of breast cancer histopathology images. This research area has become particularly relevant with the advent of whole slide imaging (WSI) scanners, which can perform cost-effective and high-throughput histopathology slide digitization, and which aim at replacing the optical microscope as the primary tool used by pathologists. Breast cancer is the most prevalent form of cancer among women, and image analysis methods that target this disease have a huge potential to reduce the workload in a typical pathology lab and to improve the quality of the interpretation. This paper is meant as an introduction for nonexperts. It starts with an overview of the tissue preparation, staining and slide digitization processes, followed by a discussion of the different image processing techniques and applications, ranging from analysis of tissue staining to computer-aided diagnosis and prognosis of breast cancer patients.

  1. Image analysis and machine learning in digital pathology: Challenges and opportunities.

    Science.gov (United States)

    Madabhushi, Anant; Lee, George

    2016-10-01

    With the rise in whole slide scanner technology, large numbers of tissue slides are being scanned and represented and archived digitally. While digital pathology has substantial implications for telepathology, second opinions, and education, there are also huge research opportunities in image computing with this new source of "big data". It is well known that there is fundamental prognostic data embedded in pathology images. The ability to mine "sub-visual" image features from digital pathology slide images, features that may not be visually discernible by a pathologist, offers the opportunity for better quantitative modeling of disease appearance and hence possibly improved prediction of disease aggressiveness and patient outcome. However, the compelling opportunities in precision medicine offered by big digital pathology data come with their own set of computational challenges. Image analysis and computer assisted detection and diagnosis tools previously developed in the context of radiographic images are woefully inadequate to deal with the data density in high resolution digitized whole slide images. Additionally, there has been recent substantial interest in combining and fusing radiologic imaging and proteomics and genomics based measurements with features extracted from digital pathology images for better prognostic prediction of disease aggressiveness and patient outcome. Again, there is a paucity of powerful tools for combining disease specific features that manifest across multiple different length scales. The purpose of this review is to discuss developments in computational image analysis tools for predictive modeling of digital pathology images from a detection, segmentation, feature extraction, and tissue classification perspective. We discuss the emergence of new handcrafted feature approaches for improved predictive modeling of tissue appearance and also review the emergence of deep learning schemes for both object detection and tissue classification

  2. Image processing and analysis software development

    International Nuclear Information System (INIS)

    Shahnaz, R.

    1999-01-01

    The work presented in this project is aimed at developing a software package, 'IMAGE GALLERY', to investigate various image processing and analysis techniques. The work was divided into two parts, namely image processing techniques and pattern recognition, the latter further comprising character and face recognition. Various image enhancement techniques including negative imaging, contrast stretching, compression of dynamic range, neon, diffuse, emboss, etc. have been studied. Segmentation techniques including point detection, line detection and edge detection have been studied, and some of the smoothing and sharpening filters have been investigated. All these imaging techniques have been implemented in a window-based computer program written in Visual Basic. Neural network techniques based on the perceptron model have been applied for face and character recognition. (author)
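    The record lists the enhancement and segmentation operations only by name; the snippet below is a minimal Python sketch (not the Visual Basic program described) showing how a few of them, negative imaging, contrast stretching, edge detection and smoothing, can be reproduced with scikit-image on a sample image.

```python
import numpy as np
from skimage import data, exposure, filters

img = data.camera()                                    # built-in 8-bit sample image

neg = 255 - img                                        # negative imaging
p2, p98 = np.percentile(img, (2, 98))
stretched = exposure.rescale_intensity(img, in_range=(p2, p98))  # contrast stretching
edges = filters.sobel(img)                             # simple edge detection
smoothed = filters.gaussian(img, sigma=1.5)            # smoothing filter
```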

  3. On the effect of image states on resonant neutralization of hydrogen anions near metal surfaces

    International Nuclear Information System (INIS)

    Chakraborty, Himadri S.; Niederhausen, Thomas; Thumm, Uwe

    2005-01-01

    We directly assess the role of image state electronic structures on the ion-survival by comparing the resonant charge transfer dynamics of hydrogen anions near Pd(1 1 1), Pd(1 0 0), and Ag(1 1 1) surfaces. Our simulations show that image states that are degenerate with the metal conduction band favor the recapture of electrons by outgoing ions. In sharp contrast, localized image states that occur inside the band gap hinder the recapture process and thus enhance the ion-neutralization probability

  4. Analysis on the steady-state coherent synchrotron radiation with strong shielding

    International Nuclear Information System (INIS)

    Li, R.; Bohn, C.L.; Bisognano, J.J.

    1997-01-01

    There are several papers concerning shielding of coherent synchrotron radiation (CSR) emitted by a Gaussian line charge on a circular orbit centered between two parallel conducting plates. Previous asymptotic analyses in the frequency domain show that shielded steady-state CSR mainly arises from harmonics in the bunch frequency exceeding the threshold harmonic for satisfying the boundary conditions at the plates. In this paper the authors extend the frequency-domain analysis into the regime of strong shielding, in which the threshold harmonic exceeds the characteristic frequency of the bunch. The result is then compared to the shielded steady-state CSR power obtained using image charges

  5. Forensic Analysis of Digital Image Tampering

    Science.gov (United States)

    2004-12-01

    ...analysis of when each method fails, which Chapter 4 discusses. Finally, a test image containing an invisible watermark using LSB steganography is... The software used to embed the hidden watermark is Steganography Software F5 version 11+, discussed in Section 2.2. Figure 2.2 – Example of invisible watermark using Steganography Software F5; Figure 2.3 – Example of copy-move image forgery [12]; Original JPEG Image – 580 x 435 – 17.4

  6. Automated thermal mapping techniques using chromatic image analysis

    Science.gov (United States)

    Buck, Gregory M.

    1989-01-01

    Thermal imaging techniques are introduced using a chromatic image analysis system and temperature sensitive coatings. These techniques are used for thermal mapping and surface heat transfer measurements on aerothermodynamic test models in hypersonic wind tunnels. Measurements are made on complex vehicle configurations in a timely manner and at minimal expense. The image analysis system uses separate wavelength filtered images to analyze surface spectral intensity data. The system was initially developed for quantitative surface temperature mapping using two-color thermographic phosphors but was found useful in interpreting phase change paint and liquid crystal data as well.

  7. Imaging of Coulomb-Driven Quantum Hall Edge States

    KAUST Repository

    Lai, Keji

    2011-10-01

    The edges of a two-dimensional electron gas (2DEG) in the quantum Hall effect (QHE) regime are divided into alternating metallic and insulating strips, with their widths determined by the energy gaps of the QHE states and the electrostatic Coulomb interaction. Local probing of these submicrometer features, however, is challenging due to the buried 2DEG structures. Using a newly developed microwave impedance microscope, we demonstrate the real-space conductivity mapping of the edge and bulk states. The sizes, positions, and field dependence of the edge strips around the sample perimeter agree quantitatively with the self-consistent electrostatic picture. The evolution of microwave images as a function of magnetic fields provides rich microscopic information around the ν=2 QHE state. © 2011 American Physical Society.

  8. [Quantitative data analysis for live imaging of bone].

    Science.gov (United States)

    Seno, Shigeto

    Because bone is a hard tissue, it has long been difficult to observe the interior of living bone tissue. With the progress of microscopy and fluorescent-probe technology in recent years, it has become possible to observe the various activities of the cells that make up bone. On the other hand, the quantitative increase in data and the diversification and complexity of the images make it difficult to perform quantitative analysis by visual inspection, so the development of a methodology for processing microscopic images and analysing the data has been expected. In this article, we introduce the research field of bioimage informatics, which lies at the boundary of biology and information science, and then outline basic image processing technology for quantitative analysis of live imaging data of bone.

  9. Solid-state, flat-panel, digital radiography detectors and their physical imaging characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Cowen, A.R. [LXi Research, Academic Unit of Medical Physics, University of Leeds, West Yorkshire (United Kingdom)], E-mail: a.r.cowen@leeds.ac.uk; Kengyelics, S.M.; Davies, A.G. [LXi Research, Academic Unit of Medical Physics, University of Leeds, West Yorkshire (United Kingdom)

    2008-05-15

    Solid-state, digital radiography (DR) detectors, designed specifically for standard projection radiography, emerged just before the turn of the millennium. This new generation of digital image detector comprises a thin layer of x-ray absorptive material combined with an electronic active matrix array fabricated in a thin film of hydrogenated amorphous silicon (a-Si:H). DR detectors can offer both efficient (low-dose) x-ray image acquisition plus on-line readout of the latent image as electronic data. To date, solid-state, flat-panel, DR detectors have come in two principal designs, the indirect-conversion (x-ray scintillator-based) and the direct-conversion (x-ray photoconductor-based) types. This review describes the underlying principles and enabling technologies exploited by these designs of detector, and evaluates their physical imaging characteristics, comparing performance both against each other and computed radiography (CR). In standard projection radiography indirect conversion DR detectors currently offer superior physical image quality and dose efficiency compared with direct conversion DR and modern point-scan CR. These conclusions have been confirmed in the findings of clinical evaluations of DR detectors. Future trends in solid-state DR detector technologies are also briefly considered. Salient innovations include WiFi-enabled, portable DR detectors, improvements in x-ray absorber layers and developments in alternative electronic media to a-Si:H.

  10. Flame analysis using image processing techniques

    Science.gov (United States)

    Her Jie, Albert Chang; Zamli, Ahmad Faizal Ahmad; Zulazlan Shah Zulkifli, Ahmad; Yee, Joanne Lim Mun; Lim, Mooktzeng

    2018-04-01

    This paper presents image processing techniques with the use of fuzzy logic and a neural network approach to perform flame analysis. Flame diagnostics is important in industry to extract relevant information from flame images. Experimental tests are carried out in a model industrial burner with different flow rates. Flame features such as luminous and spectral parameters are extracted using image processing and the Fast Fourier Transform (FFT). Flame images are acquired using a FLIR infrared camera. Non-linearities such as thermo-acoustic oscillations and background noise affect the stability of the flame. Flame velocity is one of the important characteristics that determine the stability of the flame. In this paper, an image processing method is proposed to determine flame velocity. A power spectral density (PSD) graph is a good tool for vibration analysis, from which flame stability can be approximated. However, a more intelligent diagnostic system is needed to automatically determine flame stability. In this paper, flame features at different flow rates are compared and analyzed. The selected flame features are used as inputs to the proposed fuzzy inference system to determine flame stability. A neural network is used to test the performance of the fuzzy inference system.
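    A power spectral density of the frame-averaged flame intensity is the kind of quantity the record refers to for vibration and stability analysis. The sketch below is a minimal example using Welch's method on a synthetic intensity trace; the frame rate and the 12.5 Hz oscillation are assumed values, not data from the study.

```python
import numpy as np
from scipy.signal import welch

fs = 200.0                                   # assumed camera frame rate (Hz)
t = np.arange(0, 10, 1 / fs)
# stand-in for the mean luminous intensity of each flame frame
signal = 1.0 + 0.2 * np.sin(2 * np.pi * 12.5 * t) + 0.05 * np.random.randn(t.size)

f, psd = welch(signal, fs=fs, nperseg=512)   # power spectral density estimate
dominant = f[np.argmax(psd[1:]) + 1]         # skip the DC bin
print(f"dominant oscillation ~ {dominant:.1f} Hz")
```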

  11. Image analysis for material characterisation

    Science.gov (United States)

    Livens, Stefan

    In this thesis, a number of image analysis methods are presented as solutions to two applications concerning the characterisation of materials. Firstly, we deal with the characterisation of corrosion images, which is handled using a multiscale texture analysis method based on wavelets. We propose a feature transformation that deals with the problem of rotation invariance. Classification is performed with a Learning Vector Quantisation neural network and with a combination of outputs. In an experiment, 86.2% of the images showing either pit formation or cracking are correctly classified. Secondly, we develop an automatic system for the characterisation of silver halide microcrystals. These are flat crystals with a triangular or hexagonal base and a thickness in the 100 to 200 nm range. A light microscope is used to image them. A novel segmentation method is proposed, which allows agglomerated crystals to be separated. For the measurement of shape, the ratio between the largest and the smallest radius yields the best results. The thickness measurement is based on the interference colours that appear for light reflected at the crystals. The mean colour of different thickness populations is determined, from which a calibration curve is derived. With this, the thickness of new populations can be determined accurately.

  12. Precision Statistical Analysis of Images Based on Brightness Distribution

    Directory of Open Access Journals (Sweden)

    Muzhir Shaban Al-Ani

    2017-07-01

    Full Text Available The study of image content is considered an important topic, in which reasonable and accurate analysis of images is required. Recently, image analysis has become a vital field because of the huge number of images transferred via transmission media in our daily life. These media, crowded with images, have brought the research area of image analysis to the fore. In this paper, the implemented system passes through several steps to compute the statistical measures of standard deviation and mean values of both colour and grey images, and the last step of the proposed method compares the results obtained in the different cases of the test phase. In this paper, the statistical parameters are implemented to characterize the content of an image and its texture. Standard deviation, mean and correlation values are used to study the intensity distribution of the tested images. Reasonable results are obtained for both standard deviation and mean values via the implementation of the system. The major issue addressed in the work is brightness distribution via statistical measures applied under different types of lighting.
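    The statistical measures named in the record (mean, standard deviation and inter-channel correlation for grey and colour images) are straightforward to reproduce; the following sketch computes them with NumPy and scikit-image on a built-in sample image rather than the authors' test set.

```python
import numpy as np
from skimage import data
from skimage.color import rgb2gray

rgb = data.astronaut()                       # built-in colour sample image
grey = rgb2gray(rgb)

print("grey mean/std:", grey.mean(), grey.std())
for k, name in enumerate("RGB"):
    ch = rgb[..., k].astype(float)
    print(name, "mean/std:", ch.mean(), ch.std())

# correlation between colour channels as a simple intensity-distribution descriptor
r = rgb[..., 0].ravel().astype(float)
g = rgb[..., 1].ravel().astype(float)
print("R-G correlation:", np.corrcoef(r, g)[0, 1])
```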

  13. Methods for processing and analysis functional and anatomical brain images: computerized tomography, emission tomography and nuclear resonance imaging

    International Nuclear Information System (INIS)

    Mazoyer, B.M.

    1988-01-01

    The various methods for brain image processing and analysis are presented and compared. The following topics are developed: the physical basis of brain image comparison (nature and formation of signals; intrinsic performance of the methods; image characteristics); mathematical methods for image processing and analysis (filtering, functional parameter extraction, morphological analysis, robotics and artificial intelligence); methods for anatomical localization (neuro-anatomy atlas, proportional stereotaxic atlas, numerized atlas); methodology of cerebral image superposition (normalization, retiming); image networks [fr

  14. Analysis of rocket flight stability based on optical image measurement

    Science.gov (United States)

    Cui, Shuhua; Liu, Junhu; Shen, Si; Wang, Min; Liu, Jun

    2018-02-01

    Based on the abundant optical image measurement data available from optical measurement information, this paper puts forward a method of evaluating rocket flight stability using measurements of the characteristics of the carrier rocket in imaging. On the basis of this measurement method, the attitude parameters of the rocket body in the coordinate system are calculated from the measurement data of multiple high-speed television sets; the parameters are then converted into the rocket body angle of attack, and it is assessed whether the rocket has good flight stability, that is, whether it flies with a small angle of attack. The measurement method and the mathematical algorithm were put through a data-processing test, in which the rocket flight stability state can be observed intuitively and faults in the guidance system can be identified visually for failure analysis.

  15. Computer-based image analysis in radiological diagnostics and image-guided therapy: 3D-Reconstruction, contrast medium dynamics, surface analysis, radiation therapy and multi-modal image fusion

    International Nuclear Information System (INIS)

    Beier, J.

    2001-01-01

    This book deals with substantial subjects of postprocessing and analysis of radiological image data, a particular emphasis was put on pulmonary themes. For a multitude of purposes the developed methods and procedures can directly be transferred to other non-pulmonary applications. The work presented here is structured in 14 chapters, each describing a selected complex of research. The chapter order reflects the sequence of the processing steps starting from artefact reduction, segmentation, visualization, analysis, therapy planning and image fusion up to multimedia archiving. In particular, this includes virtual endoscopy with three different scene viewers (Chap. 6), visualizations of the lung disease bronchiectasis (Chap. 7), surface structure analysis of pulmonary tumors (Chap. 8), quantification of contrast medium dynamics from temporal 2D and 3D image sequences (Chap. 9) as well as multimodality image fusion of arbitrary tomographical data using several visualization techniques (Chap. 12). Thus, the software systems presented cover the majority of image processing applications necessary in radiology and were entirely developed, implemented and validated in the clinical routine of a university medical school. (orig.) [de

  16. Applications Of Binary Image Analysis Techniques

    Science.gov (United States)

    Tropf, H.; Enderle, E.; Kammerer, H. P.

    1983-10-01

    After discussing the conditions under which binary image analysis techniques can be used, three new applications of the fast binary image analysis system S.A.M. (Sensorsystem for Automation and Measurement) are reported: (1) The human view direction is measured at TV frame rate while the subject's head is freely movable. (2) Industrial parts hanging on a moving conveyor are classified prior to spray painting by robot. (3) In automotive wheel assembly, the eccentricity of the wheel is minimized by turning the tyre relative to the rim in order to balance the eccentricity of the components.

  17. Analysis of licensed South African diagnostic imaging equipment ...

    African Journals Online (AJOL)

    Analysis of licensed South African diagnostic imaging equipment. ... Pan African Medical Journal ... Introduction: Objective: To conduct an analysis of all registered South Africa (SA) diagnostic radiology equipment, assess the number of equipment units per capita by imaging modality, and compare SA figures with published ...

  18. Application of Image Texture Analysis for Evaluation of X-Ray Images of Fungal-Infected Maize Kernels

    DEFF Research Database (Denmark)

    Orina, Irene; Manley, Marena; Kucheryavskiy, Sergey V.

    2018-01-01

    The feasibility of image texture analysis to evaluate X-ray images of fungal-infected maize kernels was investigated. X-ray images of maize kernels infected with Fusarium verticillioides and control kernels were acquired using high-resolution X-ray micro-computed tomography. After image acquisition, first-order statistical features and grey-level co-occurrence matrix (GLCM) features (including homogeneity and contrast) were extracted from the side, front and top views of each kernel and used as inputs for principal component analysis (PCA). The first-order statistical image features gave a better separation of the control from infected kernels on day 8 post-inoculation. Classification models were developed using partial least squares discriminant analysis (PLS-DA), and accuracies of 67 and 73% were achieved using first-order statistical features and GLCM extracted features, respectively. This work provides information on the possible application of image texture as a method for analysing X-ray images.
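    A minimal sketch of the feature-extraction step described above, computing first-order statistics and a few GLCM properties per kernel view with scikit-image (the graycomatrix/graycoprops names of scikit-image 0.19+ are assumed); the exact feature list and parameters used in the study are not reproduced here.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(grey_uint8):
    """First-order statistics plus a few GLCM features for one kernel view (8-bit image)."""
    first_order = {
        "mean": grey_uint8.mean(),
        "std": grey_uint8.std(),
        "third_moment": ((grey_uint8 - grey_uint8.mean()) ** 3).mean(),
    }
    glcm = graycomatrix(grey_uint8, distances=[1],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    glcm_feats = {p: graycoprops(glcm, p).mean()
                  for p in ("contrast", "homogeneity", "energy", "correlation")}
    return {**first_order, **glcm_feats}

# usage: features = texture_features(kernel_view)  # kernel_view: 2-D uint8 array
```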

  19. Digital images segmentation: a state of art of the different methods ...

    African Journals Online (AJOL)

    An image is a planar representation of a scene or a 3D object. The primary information associated with each point of the image is transcribed in grey level or in colour. Image analysis is the set of methods which permits the extraction of pertinent information from the image according to the concerned application, to treat them ...

  20. Material State Awareness for Composites Part II: Precursor Damage Analysis and Quantification of Degraded Material Properties Using Quantitative Ultrasonic Image Correlation (QUIC)

    Science.gov (United States)

    Patra, Subir; Banerjee, Sourav

    2017-01-01

    Material state awareness of composites using conventional Nondestructive Evaluation (NDE) method is limited by finding the size and the locations of the cracks and the delamination in a composite structure. To aid the progressive failure models using the slow growth criteria, the awareness of the precursor damage state and quantification of the degraded material properties is necessary, which is challenging using the current NDE methods. To quantify the material state, a new offline NDE method is reported herein. The new method named Quantitative Ultrasonic Image Correlation (QUIC) is devised, where the concept of microcontinuum mechanics is hybrid with the experimentally measured Ultrasonic wave parameters. This unique combination resulted in a parameter called Nonlocal Damage Entropy for the precursor awareness. High frequency (more than 25 MHz) scanning acoustic microscopy is employed for the proposed QUIC. Eight woven carbon-fiber-reinforced-plastic composite specimens were tested under fatigue up to 70% of their remaining useful life. During the first 30% of the life, the proposed nonlocal damage entropy is plotted to demonstrate the degradation of the material properties via awareness of the precursor damage state. Visual proofs for the precursor damage states are provided with the digital images obtained from the micro-optical microscopy, the scanning acoustic microscopy and the scanning electron microscopy. PMID:29258256

  1. Material State Awareness for Composites Part II: Precursor Damage Analysis and Quantification of Degraded Material Properties Using Quantitative Ultrasonic Image Correlation (QUIC)

    Directory of Open Access Journals (Sweden)

    Subir Patra

    2017-12-01

    Full Text Available Material state awareness of composites using conventional Nondestructive Evaluation (NDE) method is limited by finding the size and the locations of the cracks and the delamination in a composite structure. To aid the progressive failure models using the slow growth criteria, the awareness of the precursor damage state and quantification of the degraded material properties is necessary, which is challenging using the current NDE methods. To quantify the material state, a new offline NDE method is reported herein. The new method named Quantitative Ultrasonic Image Correlation (QUIC) is devised, where the concept of microcontinuum mechanics is hybrid with the experimentally measured Ultrasonic wave parameters. This unique combination resulted in a parameter called Nonlocal Damage Entropy for the precursor awareness. High frequency (more than 25 MHz) scanning acoustic microscopy is employed for the proposed QUIC. Eight woven carbon-fiber-reinforced-plastic composite specimens were tested under fatigue up to 70% of their remaining useful life. During the first 30% of the life, the proposed nonlocal damage entropy is plotted to demonstrate the degradation of the material properties via awareness of the precursor damage state. Visual proofs for the precursor damage states are provided with the digital images obtained from the micro-optical microscopy, the scanning acoustic microscopy and the scanning electron microscopy.

  2. Multi-spectral Image Analysis for Astaxanthin Coating Classification

    DEFF Research Database (Denmark)

    Ljungqvist, Martin Georg; Ersbøll, Bjarne Kjær; Nielsen, Michael Engelbrecht

    2011-01-01

    Industrial quality inspection using image analysis on astaxanthin coating in aquaculture feed pellets is of great importance for automatic production control. In this study multi-spectral image analysis of pellets was performed using LDA, QDA, SNV and PCA on pixel level and mean value of pixels...

  3. A short introduction to image analysis - Matlab exercises

    DEFF Research Database (Denmark)

    Hansen, Michael Adsetts Edberg

    2000-01-01

    This document contains a short introduction to image analysis. In addition, small exercises have been prepared in order to support the theoretical understanding.

  4. Vaccine Images on Twitter: Analysis of What Images are Shared

    Science.gov (United States)

    Dredze, Mark

    2018-01-01

    Background Visual imagery plays a key role in health communication; however, there is little understanding of what aspects of vaccine-related images make them effective communication aids. Twitter, a popular venue for discussions related to vaccination, provides numerous images that are shared with tweets. Objective The objectives of this study were to understand how images are used in vaccine-related tweets and provide guidance with respect to the characteristics of vaccine-related images that correlate with the higher likelihood of being retweeted. Methods We collected more than one million vaccine image messages from Twitter and characterized various properties of these images using automated image analytics. We fit a logistic regression model to predict whether or not a vaccine image tweet was retweeted, thus identifying characteristics that correlate with a higher likelihood of being shared. For comparison, we built similar models for the sharing of vaccine news on Facebook and for general image tweets. Results Most vaccine-related images are duplicates (125,916/237,478; 53.02%) or taken from other sources, not necessarily created by the author of the tweet. Almost half of the images contain embedded text, and many include images of people and syringes. The visual content is highly correlated with a tweet’s textual topics. Vaccine image tweets are twice as likely to be shared as nonimage tweets. The sentiment of an image and the objects shown in the image were the predictive factors in determining whether an image was retweeted. Conclusions We are the first to study vaccine images on Twitter. Our findings suggest future directions for the study and use of vaccine imagery and may inform communication strategies around vaccination. Furthermore, our study demonstrates an effective study methodology for image analysis. PMID:29615386

  5. Vaccine Images on Twitter: Analysis of What Images are Shared.

    Science.gov (United States)

    Chen, Tao; Dredze, Mark

    2018-04-03

    Visual imagery plays a key role in health communication; however, there is little understanding of what aspects of vaccine-related images make them effective communication aids. Twitter, a popular venue for discussions related to vaccination, provides numerous images that are shared with tweets. The objectives of this study were to understand how images are used in vaccine-related tweets and provide guidance with respect to the characteristics of vaccine-related images that correlate with the higher likelihood of being retweeted. We collected more than one million vaccine image messages from Twitter and characterized various properties of these images using automated image analytics. We fit a logistic regression model to predict whether or not a vaccine image tweet was retweeted, thus identifying characteristics that correlate with a higher likelihood of being shared. For comparison, we built similar models for the sharing of vaccine news on Facebook and for general image tweets. Most vaccine-related images are duplicates (125,916/237,478; 53.02%) or taken from other sources, not necessarily created by the author of the tweet. Almost half of the images contain embedded text, and many include images of people and syringes. The visual content is highly correlated with a tweet's textual topics. Vaccine image tweets are twice as likely to be shared as nonimage tweets. The sentiment of an image and the objects shown in the image were the predictive factors in determining whether an image was retweeted. We are the first to study vaccine images on Twitter. Our findings suggest future directions for the study and use of vaccine imagery and may inform communication strategies around vaccination. Furthermore, our study demonstrates an effective study methodology for image analysis. ©Tao Chen, Mark Dredze. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 03.04.2018.
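    The modeling step described in the record is an ordinary logistic regression from per-image characteristics to a retweet indicator. The sketch below illustrates that setup with scikit-learn on synthetic features (sentiment, presence of a person, presence of a syringe, embedded text); the feature values and labels are fabricated for illustration and do not reflect the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical per-image features in the spirit of the study:
# [sentiment score, contains_person, contains_syringe, has_embedded_text]
rng = np.random.default_rng(0)
X = rng.random((5000, 4))
y = (0.8 * X[:, 0] + 0.3 * X[:, 2] + 0.2 * rng.standard_normal(5000) > 0.6).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)           # retweeted vs not retweeted
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
print("feature weights:", model.coef_)                 # which characteristics matter
```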

  6. An approach for quantitative image quality analysis for CT

    Science.gov (United States)

    Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe

    2016-03-01

    An objective and standardized approach to assess image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and to objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines and to identify strengths and weaknesses in different CT imaging technologies in transportation security. To that end we have designed, developed and constructed phantoms that allow for systematic and repeatable measurements of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB based image analysis tool kit to analyze CT generated images of phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method to generate a modified set of PCA components as compared to the standard principal component analysis (PCA) with sparse loadings in conjunction with Hotelling T2 statistical analysis method to compare, qualify, and detect faults in the tested systems.
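    As a rough illustration of the final analysis step, the sketch below combines an off-the-shelf sparse PCA with a Hotelling T² statistic on the component scores to flag anomalous scans. It uses scikit-learn's standard SparsePCA rather than the authors' modified SPCA, and the 200×88 metric matrix is synthetic.

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 88))            # 200 scans x 88 image-quality metrics (synthetic)
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize each metric

spca = SparsePCA(n_components=5, alpha=1.0, random_state=0)
scores = spca.fit_transform(X)            # sparse-loading component scores per scan

# Hotelling T^2 on the component scores; large values flag anomalous scans
centered = scores - scores.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(scores, rowvar=False))
t2 = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
threshold = np.percentile(t2, 99)
print("scans flagged:", np.where(t2 > threshold)[0])
```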

  7. The most-cited articles in pediatric imaging: a bibliometric analysis.

    Science.gov (United States)

    Hong, Su J; Lim, Kyoung J; Yoon, Dae Y; Choi, Chul S; Yun, Eun J; Seo, Young L; Cho, Young K; Yoon, Soo J; Moon, Ji Y; Baek, Sora; Lim, Yun-Jung; Lee, Kwanseop

    2017-07-27

    The number of citations that an article has received reflects its impact on the scientific community. The purpose of our study was to identify and characterize the 51 most-cited articles in pediatric imaging. Based on the database of Journal Citation Reports, we selected 350 journals that were considered as potential outlets for pediatric imaging articles. The Web of Science search tools were used to identify the most-cited articles relevant to pediatric imaging within the selected journals. The 51 most-cited articles in pediatric imaging were published between 1952 and 2011, with 1980-1989 and 2000-2009 producing 15 articles each. The number of citations ranged from 576 to 124 and the number of annual citations ranged from 49.05 to 2.56. The majority of articles were published in pediatric and related journals (n=26), originated in the United States (n=23), were original articles (n=45), used MRI as the imaging modality (n=27), and were concerned with the subspecialty of brain imaging (n=34). University College London School of Medicine (n=6) and the School of Medicine, University of California (n=4) were the leading institutions, and Reynolds EO (n=7) was the most prolific author. Our study presents a detailed list and an analysis of the most-cited articles in the field of pediatric imaging, which provides an insight into historical developments and allows for recognition of the important advances in this field.

  8. Technical guidance for the development of a solid state image sensor for human low vision image warping

    Science.gov (United States)

    Vanderspiegel, Jan

    1994-01-01

    This report surveys different technologies and approaches to realize sensors for image warping. The goal is to study the feasibility, technical aspects, and limitations of making an electronic camera with special geometries which implements certain transformations for image warping. This work was inspired by the research done by Dr. Juday at NASA Johnson Space Center on image warping. The study has looked into different solid-state technologies to fabricate image sensors. It is found that among the available technologies, CMOS is preferred over CCD technology. CMOS provides more flexibility to design different functions into the sensor, is more widely available, and is a lower cost solution. By using an architecture with row and column decoders one has the added flexibility of addressing the pixels at random, or read out only part of the image.

  9. Multi-phase imaging of intermittency at steady state using differential imaging method by X-ray micro-tomography

    Science.gov (United States)

    Gao, Y.; Lin, Q.; Bijeljic, B.; Blunt, M. J.

    2017-12-01

    To observe intermittency in consolidated rock, we image steady-state flow of brine and decane in Bentheimer sandstone. We devise an experimental method based on X-ray differential imaging to examine how changes in flow rate impact the pore-scale distribution of fluids during co-injection under dynamic flow conditions at steady state. This helps us elucidate the diverse flow regimes (connected, intermittent break-up, or continual break-up of the non-wetting phase pathways) for two capillary numbers. Relative permeability curves under both capillary- and viscous-limited conditions could also be measured. We have performed imbibition floods using oil-brine and measured steady-state relative permeability on a sandstone rock core in order to fully characterize the flow behaviour at low and high Ca. Two sets of experiments at high and low flow rates are used to explore the time evolution of the non-wetting phase cluster distribution under different flow conditions. The high flow rate is 0.5 mL/min, corresponding to a capillary number of 7.7×10-6; the low flow rate is 0.02 mL/min, corresponding to a capillary number of 3.1×10-7. A procedure based on using high-salinity brine as the contrast phase and applying differential imaging between the dry scan and the scan of the sample saturated with 30 wt% potassium iodide (KI) doped brine helps to confirm that there is no non-wetting phase in the micro-pores. The intermittent phase in the multiphase flow image at high Ca can then be quantified by taking the difference between the 30 wt% KI brine image and the scans taken at each fixed fractional flow. Using the grey-scale histogram distribution of the raw images at each condition, the oil proportion in the intermittent phase can be calculated. The pressure drop at each fractional flow at low and high Ca is measured by high-precision differential pressure sensors and used to calculate the relative permeability at the pore scale. The relative
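    For readers unfamiliar with the capillary numbers quoted above, the small calculation below illustrates the definition Ca = μv/σ. The viscosity, interfacial tension, core cross-section and porosity are illustrative assumptions, not values reported in the record, so the printed values only match the quoted 7.7×10-6 and 3.1×10-7 in order of magnitude.

```python
# Capillary number Ca = (mu * v) / sigma for a brine-decane co-injection experiment.
mu = 1.0e-3          # brine viscosity, Pa.s (assumed)
sigma = 0.035        # brine-decane interfacial tension, N/m (assumed)
area = 1.0e-4        # core cross-sectional area, m^2 (assumed)
porosity = 0.2       # assumed

for q_ml_min in (0.5, 0.02):
    q = q_ml_min * 1e-6 / 60.0          # volumetric rate, m^3/s
    v = q / (area * porosity)           # interstitial velocity, m/s
    print(f"{q_ml_min} mL/min -> Ca = {mu * v / sigma:.1e}")
```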

  10. Multivariate statistical analysis for x-ray photoelectron spectroscopy spectral imaging: Effect of image acquisition time

    International Nuclear Information System (INIS)

    Peebles, D.E.; Ohlhausen, J.A.; Kotula, P.G.; Hutton, S.; Blomfield, C.

    2004-01-01

    The acquisition of spectral images for x-ray photoelectron spectroscopy (XPS) is a relatively new approach, although it has been used with other analytical spectroscopy tools for some time. This technique provides full spectral information at every pixel of an image, in order to provide a complete chemical mapping of the imaged surface area. Multivariate statistical analysis techniques applied to the spectral image data allow the determination of chemical component species, and their distribution and concentrations, with minimal data acquisition and processing times. Some of these statistical techniques have proven to be very robust and efficient methods for deriving physically realistic chemical components without input by the user other than the spectral matrix itself. The benefits of multivariate analysis of the spectral image data include significantly improved signal to noise, improved image contrast and intensity uniformity, and improved spatial resolution - which are achieved due to the effective statistical aggregation of the large number of often noisy data points in the image. This work demonstrates the improvements in chemical component determination and contrast, signal-to-noise level, and spatial resolution that can be obtained by the application of multivariate statistical analysis to XPS spectral images
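    A common way to realize the multivariate analysis described above is to unfold the spectral image into a pixels-by-channels matrix and factor it. The sketch below does this with a plain PCA from scikit-learn on a synthetic cube, standing in for the more specialised multivariate curve resolution tools typically used for XPS spectral images; the cube dimensions are arbitrary.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for an XPS spectral image: 128 x 128 pixels x 300 energy channels
rng = np.random.default_rng(2)
cube = rng.poisson(5.0, size=(128, 128, 300)).astype(float)

ny, nx, nchan = cube.shape
D = cube.reshape(ny * nx, nchan)          # unfold to pixels x channels

pca = PCA(n_components=4)
scores = pca.fit_transform(D)             # abundance-like scores per pixel
loadings = pca.components_                # spectral shapes of the components

score_maps = scores.reshape(ny, nx, -1)   # fold back to image form for display
print("explained variance:", pca.explained_variance_ratio_)
```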

  11. An expert image analysis system for chromosome analysis application

    International Nuclear Information System (INIS)

    Wu, Q.; Suetens, P.; Oosterlinck, A.; Van den Berghe, H.

    1987-01-01

    This paper reports a recent study on applying a knowledge-based system approach as a new attempt to solve the problem of chromosome classification. A theoretical framework of an expert image analysis system is proposed, based on such a study. In this scheme, chromosome classification can be carried out under a hypothesize-and-verify paradigm, by integrating a rule-based component, in which the expertise of chromosome karyotyping is formulated, with an existing image analysis system which uses conventional pattern recognition techniques. Results from the existing system can be used to bring in hypotheses, and with the rule-based verification and modification procedures, improvement of the classification performance can be expected

  12. Research of second harmonic generation images based on texture analysis

    Science.gov (United States)

    Liu, Yao; Li, Yan; Gong, Haiming; Zhu, Xiaoqin; Huang, Zufang; Chen, Guannan

    2014-09-01

    Texture analysis plays a crucial role in identifying objects or regions of interest in an image. It has been applied to a variety of medical image processing tasks, ranging from the detection of disease and the segmentation of specific anatomical structures, to differentiation between healthy and pathological tissues. Second harmonic generation (SHG) microscopy, as a potential noninvasive tool for imaging biological tissues, has been widely used in medicine, with reduced phototoxicity and photobleaching. In this paper, we clarified the principles of texture analysis including statistical, transform, structural and model-based methods and gave examples of its applications, reviewing studies of the technique. Moreover, we tried to apply texture analysis to SHG images for the differentiation of human skin scar tissues. A texture analysis method based on local binary patterns (LBP) and the wavelet transform was used to extract texture features of SHG images from collagen in normal and abnormal scars, and the scar SHG images were then classified into normal or abnormal ones. Compared with other texture analysis methods with respect to the receiver operating characteristic analysis, LBP combined with the wavelet transform was demonstrated to achieve higher accuracy. It can provide a new way for clinical diagnosis of scar types. Finally, future developments of texture analysis of SHG images are discussed.
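    The texture descriptor described above, an LBP histogram combined with wavelet sub-band energies, can be sketched as follows with scikit-image and PyWavelets; the LBP radius, wavelet family and decomposition level are assumptions, not the settings used in the study.

```python
import numpy as np
import pywt
from skimage.feature import local_binary_pattern

def lbp_wavelet_features(grey, P=8, R=1.0, wavelet="db2", level=2):
    """Concatenate a uniform-LBP histogram with wavelet sub-band energies."""
    lbp = local_binary_pattern(grey, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)

    coeffs = pywt.wavedec2(grey.astype(float), wavelet, level=level)
    energies = [np.mean(np.square(coeffs[0]))]           # approximation energy
    for (cH, cV, cD) in coeffs[1:]:                       # detail energies per level
        energies += [np.mean(np.square(c)) for c in (cH, cV, cD)]
    return np.concatenate([hist, energies])

# usage: feature_vector = lbp_wavelet_features(shg_image)  # shg_image: 2-D grey array
```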

  13. CALIPSO: an interactive image analysis software package for desktop PACS workstations

    Science.gov (United States)

    Ratib, Osman M.; Huang, H. K.

    1990-07-01

    The purpose of this project is to develop a low cost workstation for quantitative analysis of multimodality images using a Macintosh II personal computer. In the current configuration the Macintosh operates as a stand alone workstation where images are imported either from a central PACS server through a standard Ethernet network or recorded through a video digitizer board. The CALIPSO software developed contains a large variety of basic image display and manipulation tools. We focused our effort however on the design and implementation of quantitative analysis methods that can be applied to images from different imaging modalities. Analysis modules currently implemented include: geometric and densitometric volume and ejection fraction calculation from radionuclide and cine-angiograms; Fourier analysis of cardiac wall motion; vascular stenosis measurement; color coded parametric display of regional flow distribution from dynamic coronary angiograms; and automatic analysis of myocardial distribution of radiolabelled tracers from tomoscintigraphic images. Several of these analysis tools were selected because they use similar color coded and parametric display methods to communicate quantitative data extracted from the images. 1. Rationale and objectives of the project: Developments of Picture Archiving and Communication Systems (PACS) in the clinical environment allow physicians and radiologists to assess radiographic images directly through imaging workstations. This convenient access to the images is often limited by the number of workstations available due in part to their high cost. There is also an increasing need for quantitative analysis of the images. During the past decade

  14. Appearance of the minority dz2 surface state and disappearance of the image-potential state: Criteria for clean Fe(001)

    Science.gov (United States)

    Eibl, Christian; Schmidt, Anke B.; Donath, Markus

    2012-10-01

    The unoccupied surface electronic structure of clean and oxidized Fe(001) was studied with spin-resolved inverse photoemission and target current spectroscopy. For the clean surface, we detected a dz2 surface state with minority spin character just above the Fermi level, while the image-potential surface state disappears. The opposite is observed for the ordered p(1×1)O/Fe(001) surface: the dz2-type surface state is quenched, while the image-potential state shows up as a pronounced feature. This behavior indicates enhanced surface reflectivity at the oxidized surface. The appearance and disappearance of specific unoccupied surface states prove to be decisive criteria for a clean Fe(001) surface. In addition, enhanced spin asymmetry in the unoccupied states is observed for the oxidized surface. Our results have implications for the use of clean and oxidized Fe(001) films as spin-polarization detectors.

  15. A robust state-space kinetics-guided framework for dynamic PET image reconstruction

    International Nuclear Information System (INIS)

    Tong, S; Alessio, A M; Kinahan, P E; Liu, H; Shi, P

    2011-01-01

    Dynamic PET image reconstruction is a challenging issue due to the low SNR and the large quantity of spatio-temporal data. We propose a robust state-space image reconstruction (SSIR) framework for activity reconstruction in dynamic PET. Unlike statistically-based frame-by-frame methods, tracer kinetic modeling is incorporated to provide physiological guidance for the reconstruction, harnessing the temporal information of the dynamic data. Dynamic reconstruction is formulated in a state-space representation, where a compartmental model describes the kinetic processes in a continuous-time system equation, and the imaging data are expressed in a discrete measurement equation. Tracer activity concentrations are treated as the state variables, and are estimated from the dynamic data. Sampled-data H∞ filtering is adopted for robust estimation. H∞ filtering makes no assumptions on the system and measurement statistics, and guarantees bounded estimation error for finite-energy disturbances, leading to robust performance for dynamic data with low SNR and/or errors. This alternative reconstruction approach could help us to deal with unpredictable situations in imaging (e.g. data corruption from failed detector blocks) or inaccurate noise models. Experiments on synthetic phantom and patient PET data are performed to demonstrate feasibility of the SSIR framework, and to explore its potential advantages over frame-by-frame statistical reconstruction approaches.
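    To make the state-space formulation concrete, the sketch below simulates a one-tissue compartment model and estimates the activity curve from noisy frame measurements. A plain scalar Kalman filter is used as a simple stand-in for the sampled-data H∞ filter adopted in the paper, and the kinetic constants, noise levels and input function are assumed.

```python
import numpy as np

# One-tissue compartment model: x' = K1*Cp(t) - k2*x, observed with additive noise.
K1, k2, dt = 0.1, 0.05, 1.0
F = np.exp(-k2 * dt)                       # discrete-time state transition
Q, R = 1e-4, 0.05 ** 2                     # assumed process / measurement noise variances

t = np.arange(0, 300, dt)
Cp = np.exp(-t / 60.0)                     # assumed arterial input function
rng = np.random.default_rng(0)

# simulate the "true" tissue activity and noisy frame measurements
x_true = np.zeros_like(t)
for k in range(1, t.size):
    x_true[k] = F * x_true[k - 1] + K1 * Cp[k - 1] * dt
y = x_true + rng.normal(0, np.sqrt(R), t.size)

# scalar Kalman filter (stand-in for the paper's sampled-data H-infinity filter)
x_hat, P = 0.0, 1.0
est = np.zeros_like(t)
for k in range(t.size):
    if k > 0:                              # predict
        x_hat = F * x_hat + K1 * Cp[k - 1] * dt
        P = F * P * F + Q
    K = P / (P + R)                        # update with frame measurement y[k]
    x_hat += K * (y[k] - x_hat)
    P *= (1 - K)
    est[k] = x_hat
```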

  16. Digital image analysis of X-ray television with an image digitizer

    International Nuclear Information System (INIS)

    Mochizuki, Yasuo; Akaike, Hisahiko; Ogawa, Hitoshi; Kyuma, Yukishige

    1995-01-01

    When video signals from X-ray fluoroscopy were transformed from analog to digital with an image digitizer, their digital characteristic curves, pre-sampling MTFs and digital Wiener spectra could be measured. This method was advantageous in that it was able to carry out data sampling because the pixel values inputted could be verified on a CRT. The system of image analysis by this method is inexpensive and effective in evaluating the image quality of a digital system. Also, it is expected that this method can be used as a tool for learning the measurement techniques and physical characteristics of digital image quality effectively. (author)
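    Of the three measurements mentioned, the digital Wiener (noise power) spectrum is the easiest to sketch: average the squared FFT magnitude of mean-subtracted flat-field regions and apply the usual normalisation. The pixel pitch and the synthetic white-noise frames below are assumptions for illustration only.

```python
import numpy as np

def noise_power_spectrum(flat_frames, pixel_pitch_mm=0.2):
    """2-D noise power spectrum (Wiener spectrum) from repeated flat-field frames.
    A minimal sketch: subtract each frame's mean and average |FFT|^2."""
    n, ny, nx = flat_frames.shape
    nps = np.zeros((ny, nx))
    for frame in flat_frames:
        roi = frame - frame.mean()                     # remove the mean (DC) signal
        nps += np.abs(np.fft.fft2(roi)) ** 2
    nps *= (pixel_pitch_mm ** 2) / (n * nx * ny)       # standard NPS normalisation
    freqs = np.fft.fftfreq(nx, d=pixel_pitch_mm)       # spatial frequencies (cycles/mm)
    return np.fft.fftshift(nps), np.fft.fftshift(freqs)

# usage with synthetic white noise standing in for digitised flat-field frames
frames = np.random.default_rng(0).normal(100, 2, size=(16, 256, 256))
nps2d, f = noise_power_spectrum(frames)
```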

  17. Non-invasive quality evaluation of confluent cells by image-based orientation heterogeneity analysis.

    Science.gov (United States)

    Sasaki, Kei; Sasaki, Hiroto; Takahashi, Atsuki; Kang, Siu; Yuasa, Tetsuya; Kato, Ryuji

    2016-02-01

    In recent years, cell and tissue therapies in regenerative medicine have advanced rapidly towards commercialization. However, conventional invasive cell quality assessment is incompatible with direct evaluation of the cells produced for such therapies, especially in the case of regenerative medicine products. Our group has demonstrated the potential of quantitative assessment of cell quality, using information obtained from cell images, for non-invasive real-time evaluation of regenerative medicine products. However, images of cells in the confluent state are often difficult to evaluate, because accurate recognition of individual cells is technically difficult and the morphological features of confluent cells are non-characteristic. To overcome these challenges, we developed a new image-processing algorithm, heterogeneity of orientation (H-Orient) processing, to describe the heterogeneous density of cells in the confluent state. In this algorithm, we introduced a Hessian calculation that converts pixel intensity data to orientation data and a statistical profiling calculation that evaluates the heterogeneity of orientations within an image, generating novel parameters that yield a quantitative profile of an image. Using such parameters, we tested the algorithm's performance in discriminating different qualities of cellular images with three types of clinically important cell quality check (QC) models: remaining lifespan check (QC1), manipulation error check (QC2), and differentiation potential check (QC3). Our results show that our orientation analysis algorithm could predict with high accuracy the outcomes of all types of cellular quality checks (>84% average accuracy with cross-validation). Copyright © 2015 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
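    A rough analogue of the H-Orient idea can be sketched by estimating a local orientation at every pixel and then measuring, patch by patch, how scattered those orientations are. The sketch below uses scikit-image's structure tensor in place of the Hessian-based orientation described in the paper, and the circular variance of the doubled angles as the heterogeneity score; the patch size and smoothing scale are assumptions.

```python
import numpy as np
from skimage.feature import structure_tensor

def orientation_heterogeneity(grey, patch=64, sigma=2.0):
    """Patch-wise circular variance of local orientations (0 = aligned, 1 = heterogeneous)."""
    Arr, Arc, Acc = structure_tensor(grey.astype(float), sigma=sigma, order="rc")
    theta = 0.5 * np.arctan2(2 * Arc, Acc - Arr)        # local orientation estimate
    h, w = theta.shape
    scores = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            t = theta[i:i + patch, j:j + patch]
            # circular variance of axial data (angles doubled to handle 180-deg symmetry)
            R = np.hypot(np.cos(2 * t).mean(), np.sin(2 * t).mean())
            scores.append(1.0 - R)
    return np.array(scores)

# usage: profile = orientation_heterogeneity(phase_contrast_image)
```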

  18. Image analysis in x-ray cinefluorography

    Energy Technology Data Exchange (ETDEWEB)

    Ikuse, J; Yasuhara, H; Sugimoto, H [Toshiba Corp., Kawasaki, Kanagawa (Japan)

    1979-02-01

    For the cinefluorographic image in the cardiovascular diagnostic system, the image quality is evaluated by means of MTF (Modulation Transfer Function), and object contrast by introducing the concept of x-ray spectrum analysis. On the basis of these results, further investigation is made of optimum X-ray exposure factors set for cinefluorography and the cardiovascular diagnostic system.

  19. Holy images on blades: unique swords from the State Hermitage Museum (preliminary publication

    Directory of Open Access Journals (Sweden)

    Vsevolod Obraztsov

    2013-12-01

    Full Text Available The focus of this article is a group of interesting rarities from the collection of the State Hermitage Museum: swords of the 17th-18th centuries with inscriptions in Greek and Slavonic and with images of Christian saints inlaid in gold. The authors offer general characteristics of 17 examples of this kind of arms, which are divided into several groups according to the shape of the hilt. A brief overview of the relatively few publications on this subject includes articles by Vasilii Prokhorov (1877); data from the Index of the Medieval Department of the Imperial Hermitage published by Nikodim Kondakov (1891); a catalogue of Count Sergei Sheremetev's collection of arms compiled by Eduard Lenz (1895); and a monograph by E. Astvatsaturian on Turkish arms from the collection of the State Historical Museum (2002). The authors pay special attention to the description and analysis of two swords from the Hermitage collection. One of them belonged to Count Michail Miloradovich and was presented to him in 1807 by the city of Bucharest. The second sword came to the Hermitage after the Bolshevik Revolution from the Marble Palace, the residence of the Grand Dukes Konstantinovichi. Besides the traditional inscriptions and images of the Virgin with Child crowned by angels, the blade bears a unique image of the Byzantine Emperor Nikephoros Phokas blessed by Jesus Christ with both hands. There are also two cartouches with quotations from the Psalms in Greek. The extremely rich décor of this sword and the unique depiction of the Byzantine Emperor leave no doubt that it was made on a special order. The authors connect the sword to the Greek Project initiated by the Russian Empress Catherine the Great. The main idea of the project was a restoration of the Byzantine Empire with Constantinople-Istanbul as its capital, where Grand Duke Konstantin, Catherine the Great's grandchild, would ascend to the throne. This article is a preliminary publication of a project in process

  20. Mesh Processing in Medical-Image Analysis-a Tutorial

    DEFF Research Database (Denmark)

    Levine, Joshua A.; Paulsen, Rasmus Reinhold; Zhang, Yongjie

    2012-01-01

    Medical-image analysis requires an understanding of sophisticated scanning modalities, constructing geometric models, building meshes to represent domains, and downstream biological applications. These four steps form an image-to-mesh pipeline. For research in this field to progress, the imaging...

  1. 5-ALA induced fluorescent image analysis of actinic keratosis

    Science.gov (United States)

    Cho, Yong-Jin; Bae, Youngwoo; Choi, Eung-Ho; Jung, Byungjo

    2010-02-01

    In this study, we quantitatively analyzed 5-ALA induced fluorescent images of actinic keratosis using digital fluorescent color and hyperspectral imaging modalities. UV-A was utilized to induce the fluorescent images, and actinic keratosis (AK) lesions were demarcated from the surrounding normal region with different methods. Eight subjects with AK lesions participated in this study. In the hyperspectral imaging modality, a spectral analysis method was utilized for the hyperspectral cube image and AK lesions were demarcated from the normal region. Before image acquisition, we designated the biopsy position for histopathology of the AK lesion and the surrounding normal region. Erythema index (E.I.) values on both regions were calculated from the spectral cube data. Image analysis of the subjects resulted in two different groups: the first group with a higher fluorescence signal and E.I. on the AK lesion than on the normal region; the second group with a lower fluorescence signal and without a big difference in E.I. between the two regions. In the fluorescent color image analysis of facial AK, E.I. images were calculated on both normal and AK lesions and compared with the results of the hyperspectral imaging modality. The results might indicate that the different intensities of fluorescence and E.I. among the subjects with AK can be interpreted as different phases of morphological and metabolic changes of the AK lesions.

  2. Enabling Collaborative Analysis: State Evaluation Groups, the Electronic State File, and Collaborative Analysis Tools

    International Nuclear Information System (INIS)

    Eldridge, C.; Gagne, D.; Wilson, B.; Murray, J.; Gazze, C.; Feldman, Y.; Rorif, F.

    2015-01-01

    The timely collection and analysis of all safeguards relevant information is the key to drawing and maintaining soundly-based safeguards conclusions. In this regard, the IAEA has made multidisciplinary State Evaluation Groups (SEGs) central to this process. To date, SEGs have been established for all States and tasked with developing State-level approaches (including the identification of technical objectives), drafting annual implementation plans specifying the field and headquarters activities necessary to meet technical objectives, updating the State evaluation on an ongoing basis to incorporate new information, preparing an annual evaluation summary, and recommending a safeguards conclusion to IAEA senior management. To accomplish these tasks, SEGs need to be staffed with relevant expertise and empowered with tools that allow for collaborative access to, and analysis of, disparate information sets. To ensure SEGs have the requisite expertise, members are drawn from across the Department of Safeguards based on their knowledge of relevant data sets (e.g., nuclear material accountancy, material balance evaluation, environmental sampling, satellite imagery, open source information, etc.) or their relevant technical (e.g., fuel cycle) expertise. SEG members also require access to all available safeguards relevant data on the State. To facilitate this, the IAEA is also developing a common, secure platform where all safeguards information can be electronically stored and made available for analysis (an electronic State file). The structure of this SharePoint-based system supports IAEA information collection processes, enables collaborative analysis by SEGs, and provides for management insight and review. In addition to this common platform, the Agency is developing, deploying, and/or testing sophisticated data analysis tools that can synthesize information from diverse information sources, analyze diverse datasets from multiple viewpoints (e.g., temporal, geospatial

  3. High-speed vibrational imaging and spectral analysis of lipid bodies by compound Raman microscopy.

    Science.gov (United States)

    Slipchenko, Mikhail N; Le, Thuc T; Chen, Hongtao; Cheng, Ji-Xin

    2009-05-28

    Cells store excess energy in the form of cytoplasmic lipid droplets. At present, it is unclear how different types of fatty acids contribute to the formation of lipid droplets. We describe a compound Raman microscope capable of both high-speed chemical imaging and quantitative spectral analysis on the same platform. We used a picosecond laser source to perform coherent Raman scattering imaging of a biological sample and confocal Raman spectral analysis at points of interest. The potential of the compound Raman microscope was evaluated on lipid bodies of cultured cells and live animals. Our data indicate that the in vivo fat contains much more unsaturated fatty acids (FAs) than the fat formed via de novo synthesis in 3T3-L1 cells. Furthermore, in vivo analysis of subcutaneous adipocytes and glands revealed a dramatic difference not only in the unsaturation level but also in the thermodynamic state of FAs inside their lipid bodies. Additionally, the compound Raman microscope allows tracking of the cellular uptake of a specific fatty acid and its abundance in nascent cytoplasmic lipid droplets. The high-speed vibrational imaging and spectral analysis capability renders compound Raman microscopy an indispensable analytical tool for the study of lipid-droplet biology.
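
    In the general Raman literature, lipid unsaturation is often estimated from the ratio of a band near 3015 cm-1 (=C-H stretch) to one near 2845 cm-1 (CH2 stretch). The sketch below computes such a band ratio on a synthetic spectrum; the band positions, window widths and synthetic data are assumptions and do not reproduce the authors' quantification.

```python
# Hedged sketch: a lipid unsaturation proxy computed as the ratio of two Raman
# band areas on a synthetic spectrum (band positions/widths are assumptions).
import numpy as np

def band_area(shift, intensity, center, half_width=15.0):
    """Integrate the spectrum over a window around a band centre (cm^-1)."""
    sel = (shift >= center - half_width) & (shift <= center + half_width)
    return np.sum(intensity[sel]) * (shift[1] - shift[0])

# Synthetic spectrum standing in for a measured confocal Raman spectrum.
shift = np.linspace(2700.0, 3100.0, 800)
intensity = (10.0 * np.exp(-((shift - 2845.0) / 12.0) ** 2)    # CH2 stretch
             + 2.5 * np.exp(-((shift - 3015.0) / 12.0) ** 2)   # =C-H stretch
             + 0.05 * np.random.rand(shift.size))              # noise floor

ratio = band_area(shift, intensity, 3015.0) / band_area(shift, intensity, 2845.0)
print(f"I(3015)/I(2845) unsaturation proxy: {ratio:.2f}")
```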

  4. Image analysis of microsialograms of the mouse parotid gland using digital image processing

    International Nuclear Information System (INIS)

    Yoshiura, K.; Ohki, M.; Yamada, N.

    1991-01-01

    The authors compared two digital-image feature-extraction methods for the analysis of microsialograms of the mouse parotid gland following either overfilling, experimentally induced acute sialoadenitis or irradiation. Microsialograms were digitized using a drum-scanning microdensitometer. The grey levels were then partitioned into four bands representing soft tissue and the peripheral minor, middle-sized and major ducts, and run-length and histogram analysis of the digital images was performed. Serial analysis of microsialograms during progressive filling showed that both methods depicted the structural characteristics of the ducts at each grey level. However, in the experimental groups, run-length analysis showed slight changes in the peripheral duct system more clearly. This method was therefore considered more effective than histogram analysis.
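
    As a generic stand-in for the run-length analysis described above, the sketch below partitions grey levels into four bands and counts horizontal run lengths per band; the equal-width band boundaries and the scan direction are illustrative assumptions.

```python
# Hedged sketch: grey-level run-length counting after partitioning an image
# into four grey-level bands (illustrative, not the authors' exact procedure).
import numpy as np

def run_lengths_per_band(image, n_bands=4):
    """Return {band: list of horizontal run lengths} for a banded image."""
    # Partition grey levels into equal-width bands labelled 0..n_bands-1.
    edges = np.linspace(image.min(), image.max(), n_bands + 1)
    banded = np.clip(np.digitize(image, edges[1:-1]), 0, n_bands - 1)
    runs = {b: [] for b in range(n_bands)}
    for row in banded:
        start = 0
        for col in range(1, len(row) + 1):
            if col == len(row) or row[col] != row[start]:
                runs[int(row[start])].append(col - start)
                start = col
    return runs

image = np.random.randint(0, 256, size=(64, 64))
for band, lengths in run_lengths_per_band(image).items():
    print(f"band {band}: {len(lengths)} runs, mean length {np.mean(lengths):.1f}")
```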

  5. Introduction to the Multifractal Analysis of Images

    OpenAIRE

    Lévy Véhel , Jacques

    1998-01-01

    After a brief review of some classical approaches in image segmentation, the basics of multifractal theory and its application to image analysis are presented. Practical methods for multifractal spectrum estimation are discussed and some experimental results are given.

  6. Computerised image analysis of biocrystallograms originating from agricultural products

    DEFF Research Database (Denmark)

    Andersen, Jens-Otto; Henriksen, Christian B.; Laursen, J.

    1999-01-01

    Procedures are presented for computerised image analysis of biocrystallogram images, originating from biocrystallization investigations of agricultural products. The biocrystallization method is based on the crystallographic phenomenon that when adding biological substances, such as plant extracts...... on up to eight parameters indicated strong relationships, with R2 up to 0.98. It is concluded that the procedures were able to discriminate the seven groups of images, and are applicable for biocrystallization investigations of agricultural products. Perspectives for the application of image analysis...

  7. Toward an implicit measure of emotions: ratings of abstract images reveal distinct emotional states.

    Science.gov (United States)

    Bartoszek, Gregory; Cervone, Daniel

    2017-11-01

    Although implicit tests of positive and negative affect exist, implicit measures of distinct emotional states are scarce. Three experiments examined whether a novel implicit emotion-assessment task, the rating of emotion expressed in abstract images, would reveal distinct emotional states. In Experiment 1, participants exposed to a sadness-inducing story inferred more sadness, and less happiness, in abstract images. In Experiment 2, an anger-provoking interaction increased anger ratings. In Experiment 3, compared to neutral images, spider images increased fear ratings in spider-fearful participants but not in controls. In each experiment, the implicit task indicated elevated levels of the target emotion and did not indicate elevated levels of non-target negative emotions; the task thus differentiated among emotional states of the same valence. Correlations also supported the convergent and discriminant validity of the implicit task. Supporting the possibility that heuristic processes underlie the ratings, group differences were stronger among those who responded relatively quickly.

  8. A fast chaos-based image encryption scheme with a dynamic state variables selection mechanism

    Science.gov (United States)

    Chen, Jun-xin; Zhu, Zhi-liang; Fu, Chong; Yu, Hai; Zhang, Li-bo

    2015-03-01

    In recent years, a variety of chaos-based image cryptosystems have been investigated to meet the increasing demand for real-time secure image transmission. Most of them are based on permutation-diffusion architecture, in which permutation and diffusion are two independent procedures with fixed control parameters. This property results in two flaws. (1) At least two chaotic state variables are required for encrypting one plain pixel, in permutation and diffusion stages respectively. Chaotic state variables produced with high computation complexity are not sufficiently used. (2) The key stream solely depends on the secret key, and hence the cryptosystem is vulnerable against known/chosen-plaintext attacks. In this paper, a fast chaos-based image encryption scheme with a dynamic state variables selection mechanism is proposed to enhance the security and promote the efficiency of chaos-based image cryptosystems. Experimental simulations and extensive cryptanalysis have been carried out and the results prove the superior security and high efficiency of the scheme.
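
    The permutation-diffusion architecture discussed above can be sketched with a logistic map as the chaotic generator, as below. This is a generic illustration, not the authors' dynamic state-variable selection scheme; the key values and the map parameter are arbitrary assumptions.

```python
# Hedged sketch of a generic chaos-based permutation-diffusion cipher driven
# by a logistic map. Key values and map parameter are arbitrary assumptions.
import numpy as np

def logistic_sequence(x0, r, n, burn_in=100):
    """Iterate x_{k+1} = r * x_k * (1 - x_k) and return n values after burn-in."""
    x = x0
    out = np.empty(n)
    for k in range(burn_in + n):
        x = r * x * (1.0 - x)
        if k >= burn_in:
            out[k - burn_in] = x
    return out

def encrypt(image, x0=0.3456, r=3.99):
    flat = image.astype(np.uint8).ravel()
    chaos = logistic_sequence(x0, r, 2 * flat.size)
    # Permutation stage: scramble pixel positions by sorting chaotic values.
    perm = np.argsort(chaos[:flat.size])
    permuted = flat[perm]
    # Diffusion stage: XOR each pixel with a chaotic keystream byte and the
    # previous cipher pixel, so a single plaintext change propagates onward.
    keystream = (chaos[flat.size:] * 256).astype(np.uint8)
    cipher = np.empty_like(permuted)
    prev = np.uint8(0)
    for i, p in enumerate(permuted):
        cipher[i] = p ^ keystream[i] ^ prev
        prev = cipher[i]
    return cipher.reshape(image.shape), perm

plain = np.random.randint(0, 256, size=(32, 32), dtype=np.uint8)
cipher, perm = encrypt(plain)
print("plain mean:", plain.mean(), "cipher mean:", cipher.mean())
```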

  9. Fast Depiction Invariant Visual Similarity for Content Based Image Retrieval Based on Data-driven Visual Similarity using Linear Discriminant Analysis

    Science.gov (United States)

    Wihardi, Y.; Setiawan, W.; Nugraha, E.

    2018-01-01

    In this research we build a CBIR system (CBIRS) based on a learned distance/similarity function using Linear Discriminant Analysis (LDA) and Histogram of Oriented Gradients (HoG) features. Our method is invariant to the depiction of an image, covering image-to-image, sketch-to-image, and painting-to-image similarity. LDA can decrease execution time compared to state-of-the-art methods, but it still needs improvement in terms of accuracy. The inaccuracy in our experiments arises because we did not perform a sliding-window search and because of the low number of negative samples from natural-world images.
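
    A sketch of retrieval with HoG features and a learned LDA projection is given below, on synthetic data; the image sizes, labels and nearest-neighbour indexing are assumptions standing in for the authors' pipeline rather than a reproduction of it.

```python
# Hedged sketch: content-based retrieval with HoG features projected by LDA.
# Synthetic images and labels stand in for a real database of depictions.
import numpy as np
from skimage.feature import hog
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

def hog_feature(image):
    return hog(image, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), feature_vector=True)

# Synthetic "database": 60 random 64x64 images in 3 nominal categories.
images = rng.random((60, 64, 64))
labels = np.repeat([0, 1, 2], 20)
features = np.array([hog_feature(im) for im in images])

# Learn a discriminative projection, then index the projected features.
lda = LinearDiscriminantAnalysis(n_components=2).fit(features, labels)
projected = lda.transform(features)
index = NearestNeighbors(n_neighbors=5).fit(projected)

# Query with a new image (e.g., a sketch or painting of the same object class).
query = rng.random((64, 64))
_, neighbours = index.kneighbors(lda.transform([hog_feature(query)]))
print("retrieved database indices:", neighbours[0])
```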

  10. Improving spatial resolution in quantum imaging beyond the Rayleigh diffraction limit using multiphoton W entangled states

    Energy Technology Data Exchange (ETDEWEB)

    Wen Jianming, E-mail: jianming.wen@gmail.co [National Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, Nanjing 210093 (China); Department of Physics, University of Arkansas, Fayetteville, AR 72701 (United States); Du, Shengwang [Department of Physics, Hong Kong University of Science and Technology, Clear Bay (Hong Kong); Xiao Min [National Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, Nanjing 210093 (China); Department of Physics, University of Arkansas, Fayetteville, AR 72701 (United States); School of Modern Engineering and Applied Science, Nanjing University, Nanjing 210093 (China)

    2010-08-23

    Using multiphoton entangled states, we demonstrate an improvement of spatial imaging resolution beyond the Rayleigh diffraction limit in the quantum imaging process. In particular, we examine resolution enhancement using the triphoton W state; a factor of 2 is achievable, as with the use of the Greenberger-Horne-Zeilinger state, compared to using a classical light source.
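
    For reference, the standard definitions of the three-photon entangled states mentioned above, as given in the general quantum-optics literature (the paper's own derivation of the factor-of-2 enhancement is not reproduced here):

```latex
% Standard three-photon W and GHZ states (definitions assumed from the general
% literature, not taken from the paper itself).
\[
  |W_3\rangle = \frac{1}{\sqrt{3}}\bigl(|100\rangle + |010\rangle + |001\rangle\bigr),
  \qquad
  |\mathrm{GHZ}_3\rangle = \frac{1}{\sqrt{2}}\bigl(|000\rangle + |111\rangle\bigr).
\]
```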

  11. Imaging of fast-neutron sources using solid-state track-recorder pinhole radiography

    International Nuclear Information System (INIS)

    Ruddy, F.H.; Gold, R.; Roberts, J.H.; Kaiser, B.J.; Preston, C.C.

    1983-08-01

    Pinhole imaging methods are being developed and tested for potential future use in imaging the intense neutron source of the Fusion Materials Irradiation Test (FMIT) Facility. Previously reported, extensive calibration measurements of the proton, neutron, and alpha particle response characteristics of CR-39 polymer solid state track recorders (SSTRs) are being used to interpret the results of imaging experiments using both charged particle and neutron pinhole collimators. High resolution, neutron pinhole images of a 252Cf source have been obtained in the form of neutron induced proton recoil tracks in CR-39 polymer SSTR. These imaging experiments are described as well as their potential future applications to FMIT.

  12. From Pixels to Geographic Objects in Remote Sensing Image Analysis

    NARCIS (Netherlands)

    Addink, E.A.; Van Coillie, Frieke M.B.; Jong, Steven M. de

    Traditional image analysis methods are mostly pixel-based and use the spectral differences of landscape elements at the Earth surface to classify these elements or to extract element properties from the Earth Observation image. Geographic object-based image analysis (GEOBIA) has received

  13. Excitation and characterization of image potential state electrons on quasi-free-standing graphene

    Science.gov (United States)

    Lin, Yi; Li, Yunzhe; Sadowski, Jerzy T.; Jin, Wencan; Dadap, Jerry I.; Hybertsen, Mark S.; Osgood, Richard M.

    2018-04-01

    We investigate the band structure of image potential states in quasi-free-standing graphene (QFG) monolayer islands using angle-resolved two-photon-photoemission spectroscopy. Direct probing by low-energy electron diffraction shows that QFG is formed following oxygen intercalation into the graphene-Ir(111) interface. Despite the apparent decoupling of the monolayer graphene from the Ir substrate, we find that the binding energy of the n =1 image potential state on these QFG islands increases by 0.17 eV, as compared to the original Gr/Ir(111) interface. We use calculations based on density-functional theory to construct an empirical, one-dimensional potential that quantitatively reproduces the image potential state binding energy and links the changes in the interface structure to the shift in energy. Specifically, two factors contribute comparably to this energy shift: a deeper potential well arising from the presence of intercalated oxygen adatoms and a wider potential well associated with the increase in the graphene-Ir distance. While image potential states have not been observed previously on QFG by photoemission, our paper now demonstrates that they may be strongly excited in a well-defined QFG system produced by oxygen intercalation. This opens an opportunity for studying the surface electron dynamics in QFG systems, beyond those found in typical nonintercalated graphene-on-substrate systems.

  14. Mueller matrix polarimetry imaging for breast cancer analysis (Conference Presentation)

    Science.gov (United States)

    Gribble, Adam; Vitkin, Alex

    2017-02-01

    Polarized light has many applications in biomedical imaging. The interaction of a biological sample with polarized light reveals information about its biological composition, both structural and functional. The most comprehensive type of polarimetry analysis is to measure the Mueller matrix, a polarization transfer function that completely describes how a sample interacts with polarized light. However, determination of the Mueller matrix requires tissue analysis under many different states of polarized light; a time consuming and measurement intensive process. Here we address this limitation with a new rapid polarimetry system, and use this polarimetry platform to investigate a variety of tissue changes associated with breast cancer. We have recently developed a rapid polarimetry imaging platform based on four photoelastic modulators (PEMs). The PEMs generate fast polarization modulations that allow the complete sample Mueller matrix to be imaged over a large field of view, with no moving parts. This polarimetry system is then demonstrated to be sensitive to a variety of tissue changes that are relevant to breast cancer. Specifically, we show that changes in depolarization can reveal tumor margins, and can differentiate between viable and necrotic breast cancer metastasized to the lymph nodes. Furthermore, the polarimetric property of linear retardance (related to birefringence) is dependent on collagen organization in the extracellular matrix. These findings indicate that our polarimetry platform may have future applications in fields such as breast cancer diagnosis, improving the speed and efficacy of intraoperative pathology, and providing prognostic information that may be beneficial for guiding treatment.
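
    A small numerical illustration of the Mueller-matrix formalism referred to above: the output Stokes vector is S_out = M S_in, and a scalar depolarization index can be derived from the matrix elements. The example matrix is a textbook ideal polarizer, not a tissue measurement, and the depolarization-index formula is assumed from the standard polarimetry literature.

```python
# Hedged illustration of the Mueller-matrix formalism on a textbook element.
import numpy as np

# Stokes vector of unpolarized light: [I, Q, U, V].
S_in = np.array([1.0, 0.0, 0.0, 0.0])

# Mueller matrix of an ideal horizontal linear polarizer.
M = 0.5 * np.array([[1, 1, 0, 0],
                    [1, 1, 0, 0],
                    [0, 0, 0, 0],
                    [0, 0, 0, 0]], dtype=float)

S_out = M @ S_in
print("output Stokes vector:", S_out)   # half the intensity, fully Q-polarized

# Depolarization index (assumed from the standard literature): 1 for a
# non-depolarizing element, 0 for an ideal depolarizer.
DI = np.sqrt((np.sum(M**2) - M[0, 0]**2) / (3.0 * M[0, 0]**2))
print("depolarization index:", DI)
```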

  15. Development of Image Analysis Software of MAXI

    Science.gov (United States)

    Eguchi, S.; Ueda, Y.; Hiroi, K.; Isobe, N.; Sugizaki, M.; Suzuki, M.; Tomida, H.; Maxi Team

    2010-12-01

    Monitor of All-sky X-ray Image (MAXI) is an X-ray all-sky monitor, attached to the Japanese experiment module Kibo on the International Space Station. The main scientific goals of the MAXI mission include the discovery of X-ray novae followed by prompt alerts to the community (Negoro et al., in this conference), and production of X-ray all-sky maps and new source catalogs with unprecedented sensitivities. To extract the best capabilities of the MAXI mission, we are working on the development of detailed image analysis tools. We utilize maximum likelihood fitting to a projected sky image, where we take account of the complicated detector responses, such as the background and point spread functions (PSFs). The modeling of PSFs, which strongly depend on the orbit and attitude of MAXI, is a key element in the image analysis. In this paper, we present the status of our software development.

  16. Quantitative methods for the analysis of electron microscope images

    DEFF Research Database (Denmark)

    Skands, Peter Ulrik Vallø

    1996-01-01

    The topic of this thesis is an general introduction to quantitative methods for the analysis of digital microscope images. The images presented are primarily been acquired from Scanning Electron Microscopes (SEM) and interfermeter microscopes (IFM). The topic is approached though several examples...... foundation of the thesis fall in the areas of: 1) Mathematical Morphology; 2) Distance transforms and applications; and 3) Fractal geometry. Image analysis opens in general the possibility of a quantitative and statistical well founded measurement of digital microscope images. Herein lies also the conditions...

  17. OpenComet: An automated tool for comet assay image analysis

    Directory of Open Access Journals (Sweden)

    Benjamin M. Gyori

    2014-01-01

    Full Text Available Reactive species such as free radicals are constantly generated in vivo and DNA is the most important target of oxidative stress. Oxidative DNA damage is used as a predictive biomarker to monitor the risk of development of many diseases. The comet assay is widely used for measuring oxidative DNA damage at a single cell level. The analysis of comet assay output images, however, poses considerable challenges. Commercial software is costly and restrictive, while free software generally requires laborious manual tagging of cells. This paper presents OpenComet, an open-source software tool providing automated analysis of comet assay images. It uses a novel and robust method for finding comets based on geometric shape attributes and segmenting the comet heads through image intensity profile analysis. Due to automation, OpenComet is more accurate, less prone to human bias, and faster than manual analysis. A live analysis functionality also allows users to analyze images captured directly from a microscope. We have validated OpenComet on both alkaline and neutral comet assay images as well as sample images from existing software packages. Our results show that OpenComet achieves high accuracy with significantly reduced analysis time.
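
    A hedged sketch of comet quantification from an already-segmented comet image is shown below: the column-wise intensity profile is split into head and tail, and percent DNA in tail plus a tail-moment-like score are computed. The head-finding rule and the synthetic input are illustrative assumptions, not OpenComet's profile-analysis algorithm.

```python
# Hedged sketch: simple comet-assay metrics from a cropped, segmented comet.
import numpy as np

def comet_metrics(comet, head_radius=10):
    """comet: 2-D intensity image of a single comet, tail to the right."""
    profile = comet.sum(axis=0).astype(float)      # column-wise intensity profile
    head_centre = int(np.argmax(profile))          # brightest column ~ head centre
    head_end = min(head_centre + head_radius, profile.size)
    head_dna = profile[:head_end].sum()
    tail_dna = profile[head_end:].sum()
    pct_tail = 100.0 * tail_dna / (head_dna + tail_dna)
    tail_length = profile.size - head_end
    tail_moment = tail_length * pct_tail / 100.0   # tail-moment-like proxy score
    return pct_tail, tail_moment

comet = np.random.rand(40, 120)                    # stand-in for a cropped comet
pct, tm = comet_metrics(comet)
print(f"%DNA in tail: {pct:.1f}, tail moment: {tm:.1f}")
```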

  18. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    Science.gov (United States)

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyzephenomics datasets. PMID:27141917

  19. Temporal Noise Analysis of Charge-Domain Sampling Readout Circuits for CMOS Image Sensors

    Directory of Open Access Journals (Sweden)

    Xiaoliang Ge

    2018-02-01

    Full Text Available This paper presents a temporal noise analysis of charge-domain sampling readout circuits for Complementary Metal-Oxide Semiconductor (CMOS image sensors. In order to address the trade-off between the low input-referred noise and high dynamic range, a Gm-cell-based pixel together with a charge-domain correlated-double sampling (CDS technique has been proposed to provide a way to efficiently embed a tunable conversion gain along the read-out path. Such readout topology, however, operates in a non-stationery large-signal behavior, and the statistical properties of its temporal noise are a function of time. Conventional noise analysis methods for CMOS image sensors are based on steady-state signal models, and therefore cannot be readily applied for Gm-cell-based pixels. In this paper, we develop analysis models for both thermal noise and flicker noise in Gm-cell-based pixels by employing the time-domain linear analysis approach and the non-stationary noise analysis theory, which help to quantitatively evaluate the temporal noise characteristic of Gm-cell-based pixels. Both models were numerically computed in MATLAB using design parameters of a prototype chip, and compared with both simulation and experimental results. The good agreement between the theoretical and measurement results verifies the effectiveness of the proposed noise analysis models.

  20. Breast cancer histopathology image analysis : a review

    NARCIS (Netherlands)

    Veta, M.; Pluim, J.P.W.; Diest, van P.J.; Viergever, M.A.

    2014-01-01

    This paper presents an overview of methods that have been proposed for the analysis of breast cancer histopathology images. This research area has become particularly relevant with the advent of whole slide imaging (WSI) scanners, which can perform cost-effective and high-throughput histopathology

  1. Analysis of Two-Dimensional Electrophoresis Gel Images

    DEFF Research Database (Denmark)

    Pedersen, Lars

    2002-01-01

    This thesis describes and proposes solutions to some of the currently most important problems in pattern recognition and image analysis of two-dimensional gel electrophoresis (2DGE) images. 2DGE is the leading technique to separate individual proteins in biological samples with many biological...

  2. Quantitative Analysis of Rat Dorsal Root Ganglion Neurons Cultured on Microelectrode Arrays Based on Fluorescence Microscopy Image Processing.

    Science.gov (United States)

    Mari, João Fernando; Saito, José Hiroki; Neves, Amanda Ferreira; Lotufo, Celina Monteiro da Cruz; Destro-Filho, João-Batista; Nicoletti, Maria do Carmo

    2015-12-01

    Microelectrode Arrays (MEA) are devices for long-term electrophysiological recording of extracellular spontaneous or evoked activity in in vitro neuron cultures. This work proposes and develops a framework for quantitative and morphological analysis of neuron cultures on MEAs, by processing their corresponding images, acquired by fluorescence microscopy. The neurons are segmented from the fluorescence channel images using a combination of segmentation by thresholding, watershed transform, and object classification. The positioning of microelectrodes is obtained from the transmitted light channel images using the circular Hough transform. The proposed method was applied to images of dissociated cultures of rat dorsal root ganglion (DRG) neuronal cells. The morphological and topological quantitative analysis carried out produced information regarding the state of the culture, such as population count, neuron-to-neuron and neuron-to-microelectrode distances, soma morphologies, neuron sizes, and neuron and microelectrode spatial distributions. Most analyses of microscopy images taken from neuronal cultures on MEAs consider only simple qualitative aspects. The proposed framework also aims to standardize the image processing and to compute useful quantitative measures for integrated image-signal studies and further computational simulations. As the results show, the implemented microelectrode identification method is robust, and so is the implemented neuron segmentation and classification method (with a correct segmentation rate of up to 84%). The quantitative information retrieved by the method is highly relevant to assist the integrated signal-image study of recorded electrophysiological signals as well as the physical aspects of the neuron culture on the MEA. Although the experiments deal with DRG cell images, cortical and hippocampal cell images could also be processed with small adjustments in the image processing parameter estimation.
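
    The two image-processing steps described above can be sketched as follows; the parameter values (radii, peak distances, sizes) are illustrative assumptions rather than the authors' settings.

```python
# Hedged sketch: (1) segment fluorescent somata by thresholding plus watershed;
# (2) locate circular microelectrodes with a circular Hough transform.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max, canny
from skimage.segmentation import watershed
from skimage.transform import hough_circle, hough_circle_peaks

def segment_neurons(fluorescence):
    mask = fluorescence > threshold_otsu(fluorescence)
    distance = ndi.distance_transform_edt(mask)
    # Seeds for the watershed: local maxima of the distance map.
    coords = peak_local_max(distance, min_distance=7, labels=mask)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    return watershed(-distance, markers, mask=mask)

def find_electrodes(brightfield, radii=np.arange(8, 15)):
    edges = canny(brightfield, sigma=2.0)
    accumulator = hough_circle(edges, radii)
    _, cx, cy, r = hough_circle_peaks(accumulator, radii, total_num_peaks=60)
    return np.column_stack([cx, cy, r])

labels = segment_neurons(np.random.rand(256, 256))   # synthetic stand-in image
print("neuron-like objects found:", labels.max())
```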

  3. An Integrative Object-Based Image Analysis Workflow for Uav Images

    Science.gov (United States)

    Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong

    2016-06-01

    In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of the geometric and radiometric corrections, subsequent panoramic mosaicking and hierarchical image segmentation for later Object Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm after the geometric calibration and radiometric correction, which employs a fast feature extraction and matching by combining the local difference binary descriptor and locality-sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts with the definition of an initial partition obtained by an over-segmentation algorithm, i.e., the simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the super-pixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing the post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of our proposed method.
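
    A hedged sketch of the over-segmentation step that initialises the Binary Partition Tree: SLIC superpixels with a simple per-region spectral attribute. A bundled scikit-image test image stands in for a mosaicked UAV scene, and the number of segments is an assumption.

```python
# Hedged sketch: SLIC over-segmentation with a per-superpixel intensity
# attribute, the kind of region statistic later merged within a BPT.
import numpy as np
from skimage import data, segmentation, measure, color

mosaic = data.astronaut()          # stand-in for a mosaicked UAV image

segments = segmentation.slic(mosaic, n_segments=2000, compactness=10,
                             start_label=1)

props = measure.regionprops(segments, intensity_image=color.rgb2gray(mosaic))
mean_intensity = np.array([p.mean_intensity for p in props])
print("superpixels:", segments.max(),
      "mean intensity range: %.2f-%.2f" % (mean_intensity.min(),
                                           mean_intensity.max()))
```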

  5. Structural Image Analysis of the Brain in Neuropsychology Using Magnetic Resonance Imaging (MRI) Techniques.

    Science.gov (United States)

    Bigler, Erin D

    2015-09-01

    Magnetic resonance imaging (MRI) of the brain provides exceptional image quality for visualization and neuroanatomical classification of brain structure. A variety of image analysis techniques provide both qualitative and quantitative methods to relate brain structure with neuropsychological outcome, and these are reviewed herein. Of particular importance are more automated methods that permit analysis of a broad spectrum of anatomical measures including volume, thickness and shape. The challenge for neuropsychology is which metric to use, for which disorder, and when image analysis methods should be applied to assess brain structure and pathology. A basic overview is provided as to the anatomical and pathoanatomical relations of different MRI sequences in assessing normal and abnormal findings. Some interpretive guidelines are offered, including factors related to similarity and symmetry of typical brain development along with size-normalcy features of brain anatomy related to function. The review concludes with a detailed example of various quantitative techniques applied to analyzing brain structure for neuropsychological outcome studies in traumatic brain injury.

  6. Analysis of live cell images: Methods, tools and opportunities.

    Science.gov (United States)

    Nketia, Thomas A; Sailem, Heba; Rohde, Gustavo; Machiraju, Raghu; Rittscher, Jens

    2017-02-15

    Advances in optical microscopy, biosensors and cell culturing technologies have transformed live cell imaging. Thanks to these advances, live cell imaging plays an increasingly important role in basic biology research as well as at all stages of drug development. Image analysis methods are needed to extract quantitative information from these vast and complex data sets. The aim of this review is to provide an overview of available image analysis methods for live cell imaging, in particular the required preprocessing, image segmentation, cell tracking and data visualisation methods. The potential opportunities provided by recent advances in machine learning, especially deep learning, and computer vision are discussed. This review includes an overview of the different available software packages and toolkits. Copyright © 2017. Published by Elsevier Inc.

  7. Image Sharing Technologies and Reduction of Imaging Utilization: A Systematic Review and Meta-analysis

    Science.gov (United States)

    Vest, Joshua R.; Jung, Hye-Young; Ostrovsky, Aaron; Das, Lala Tanmoy; McGinty, Geraldine B.

    2016-01-01

    Introduction: Image sharing technologies may reduce unneeded imaging by improving provider access to imaging information. A systematic review and meta-analysis were conducted to summarize the impact of image sharing technologies on patient imaging utilization. Methods: Quantitative evaluations of the effects of PACS, regional image exchange networks, interoperable electronic health records, tools for importing physical media, and health information exchange systems on utilization were identified through a systematic review of the published and gray English-language literature (2004–2014). Outcomes, standard effect sizes (ESs), settings, technology, populations, and risk of bias were abstracted from each study. The impact of image sharing technologies was summarized with random-effects meta-analysis and meta-regression models. Results: A total of 17 articles were included in the review, with a total of 42 different studies. Image sharing technology was associated with a significant decrease in repeat imaging (pooled effect size [ES] = −0.17; 95% confidence interval [CI] = [−0.25, −0.09]; P < .001), but also with a significant increase in overall imaging utilization (pooled ES = 0.20; 95% CI = [0.07, 0.32]; P = .002). For all outcomes combined, image sharing technology was not associated with utilization. Most studies were at risk for bias. Conclusions: Image sharing technology was associated with reductions in repeat and unnecessary imaging, in both the overall literature and the most rigorous studies. Stronger evidence is needed to further explore the role of specific technologies and their potential impact on various modalities, patient populations, and settings. PMID:26614882
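
    A hedged worked example of the random-effects pooling used in such a meta-analysis (the DerSimonian-Laird estimator) is given below; the study effect sizes and variances are invented for illustration and are not those of the reviewed studies.

```python
# Hedged sketch: DerSimonian-Laird random-effects pooling on made-up studies.
import numpy as np

effects = np.array([-0.22, -0.10, -0.18, -0.05])     # per-study standardized ES
variances = np.array([0.010, 0.020, 0.015, 0.030])   # per-study sampling variance

w = 1.0 / variances
fixed = np.sum(w * effects) / np.sum(w)               # fixed-effect pooled ES
Q = np.sum(w * (effects - fixed) ** 2)                # heterogeneity statistic
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(effects) - 1)) / C)         # between-study variance

w_star = 1.0 / (variances + tau2)
pooled = np.sum(w_star * effects) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
print(f"pooled ES = {pooled:.3f}, "
      f"95% CI = [{pooled - 1.96*se:.3f}, {pooled + 1.96*se:.3f}]")
```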

  8. Computed image analysis of neutron radiographs

    International Nuclear Information System (INIS)

    Dinca, M.; Anghel, E.; Preda, M.; Pavelescu, M.

    2008-01-01

    Similar to X-radiography, there is a nondestructive technique that uses neutrons as the penetrating particle, named neutron radiology. When the information is registered on a film with the help of a conversion foil (with a high cross section for neutrons) that emits secondary radiation (β, γ) creating a latent image, the technique is named neutron radiography. A radiographic industrial film that contains the image of the internal structure of an object, obtained by neutron radiography, must be subsequently analyzed to obtain qualitative and quantitative information about the structural integrity of that object. A computed analysis of a film is possible using a facility with the following main components: a film illuminator, a CCD video camera and a computer (PC) with suitable software. The qualitative analysis aims to reveal possible anomalies of the structure due to manufacturing processes or induced by operating processes (for example, irradiation in the case of nuclear fuel). The quantitative determination is based on measurements of some image parameters: dimensions, optical densities. The illuminator has been built specially for this application but can also be used for simple visual observation. The illuminated area is 9x40 cm. The frame of the system is an Abbe-type comparator (Carl Zeiss Jena), which has been adapted for this application. The video camera captures the image, which is stored and processed by the computer. A dedicated program, SIMAG-NG, has been developed at INR Pitesti which, besides the program SMTV II of the SM 5010 acquisition module, can analyze the images of a film. The major application of the system was the quantitative analysis of a film containing the images of several nuclear fuel pins alongside a dimensional standard. The system was used to measure the length of the pellets of the TRIGA nuclear fuel. (authors)

  9. Development of a Reference Image Collection Library for Histopathology Image Processing, Analysis and Decision Support Systems Research.

    Science.gov (United States)

    Kostopoulos, Spiros; Ravazoula, Panagiota; Asvestas, Pantelis; Kalatzis, Ioannis; Xenogiannopoulos, George; Cavouras, Dionisis; Glotsos, Dimitris

    2017-06-01

    Histopathology image processing, analysis and computer-aided diagnosis have been shown to be effective assisting tools towards reliable and intra-/inter-observer invariant decisions in traditional pathology. Especially for cancer patients, decisions need to be as accurate as possible in order to increase the probability of optimal treatment planning. In this study, we propose a new image collection library (HICL - Histology Image Collection Library) comprising 3831 histological images of three different diseases, for fostering research in histopathology image processing, analysis and computer-aided diagnosis. Raw data comprised 93, 116 and 55 cases of brain, breast and laryngeal cancer respectively, collected from the archives of the University Hospital of Patras, Greece. The 3831 images were generated from the most representative regions of the pathology, specified by an experienced histopathologist. The HICL Image Collection is free for access under an academic license at http://medisp.bme.teiath.gr/hicl/ . Potential exploitations of the proposed library may span a broad spectrum, such as in image processing to improve visualization, in segmentation for nuclei detection, in decision support systems for second opinion consultations, in statistical analysis for investigation of potential correlations between clinical annotations and imaging findings and, generally, in fostering research on histopathology image processing and analysis. To the best of our knowledge, the HICL constitutes the first attempt towards the creation of a reference image collection library in the field of traditional histopathology, publicly and freely available to the scientific community.

  10. Antibody-Unfolding and Metastable-State Binding in Force Spectroscopy and Recognition Imaging

    Science.gov (United States)

    Kaur, Parminder; Qiang-Fu; Fuhrmann, Alexander; Ros, Robert; Kutner, Linda Obenauer; Schneeweis, Lumelle A.; Navoa, Ryman; Steger, Kirby; Xie, Lei; Yonan, Christopher; Abraham, Ralph; Grace, Michael J.; Lindsay, Stuart

    2011-01-01

    Force spectroscopy and recognition imaging are important techniques for characterizing and mapping molecular interactions. In both cases, an antibody is pulled away from its target in times that are much less than the normal residence time of the antibody on its target. The distribution of pulling lengths in force spectroscopy shows the development of additional peaks at high loading rates, indicating that part of the antibody frequently unfolds. This propensity to unfold is reversible, indicating that exposure to high loading rates induces a structural transition to a metastable state. Weakened interactions of the antibody in this metastable state could account for reduced specificity in recognition imaging where the loading rates are always high. The much weaker interaction between the partially unfolded antibody and target, while still specific (as shown by control experiments), results in unbinding on millisecond timescales, giving rise to rapid switching noise in the recognition images. At the lower loading rates used in force spectroscopy, we still find discrepancies between the binding kinetics determined by force spectroscopy and those determined by surface plasmon resonance—possibly a consequence of the short tethers used in recognition imaging. Recognition imaging is nonetheless a powerful tool for interpreting complex atomic force microscopy images, so long as specificity is calibrated in situ, and not inferred from equilibrium binding kinetics. PMID:21190677

  11. Evaluation of Yogurt Microstructure Using Confocal Laser Scanning Microscopy and Image Analysis.

    Science.gov (United States)

    Skytte, Jacob L; Ghita, Ovidiu; Whelan, Paul F; Andersen, Ulf; Møller, Flemming; Dahl, Anders B; Larsen, Rasmus

    2015-06-01

    The microstructure of protein networks in yogurts defines important physical properties of the yogurt and hereby partly its quality. Imaging this protein network using confocal scanning laser microscopy (CSLM) has shown good results, and CSLM has become a standard measuring technique for fermented dairy products. When studying such networks, hundreds of images can be obtained, and here image analysis methods are essential for using the images in statistical analysis. Previously, methods including gray level co-occurrence matrix analysis and fractal analysis have been used with success. However, a range of other image texture characterization methods exists. These methods describe an image by a frequency distribution of predefined image features (denoted textons). Our contribution is an investigation of the choice of image analysis methods by performing a comparative study of 7 major approaches to image texture description. Here, CSLM images from a yogurt fermentation study are investigated, where production factors including fat content, protein content, heat treatment, and incubation temperature are varied. The descriptors are evaluated through nearest neighbor classification, variance analysis, and cluster analysis. Our investigation suggests that the texton-based descriptors provide a fuller description of the images compared to gray-level co-occurrence matrix descriptors and fractal analysis, while still being as applicable and in some cases as easy to tune. © 2015 Institute of Food Technologists®
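
    A hedged sketch of one of the descriptor families compared above, a grey-level co-occurrence matrix with a few Haralick-style properties, computed on a synthetic stand-in for a CSLM micrograph. (Older scikit-image releases spell the functions greycomatrix / greycoprops.)

```python
# Hedged sketch: GLCM texture descriptors on a synthetic stand-in image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

image = (np.random.rand(128, 128) * 32).astype(np.uint8)   # 32 grey levels

glcm = graycomatrix(image, distances=[1, 2], angles=[0, np.pi / 2],
                    levels=32, symmetric=True, normed=True)

for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).mean())
```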

  12. Computational medical imaging and hemodynamics framework for functional analysis and assessment of cardiovascular structures.

    Science.gov (United States)

    Wong, Kelvin K L; Wang, Defeng; Ko, Jacky K L; Mazumdar, Jagannath; Le, Thu-Thao; Ghista, Dhanjoo

    2017-03-21

    Cardiac dysfunction constitutes a common cardiovascular health issue in society, and has been an investigation topic of strong focus by researchers in the medical imaging community. Diagnostic modalities based on echocardiography, magnetic resonance imaging, chest radiography and computed tomography are common techniques that provide cardiovascular structural information to diagnose heart defects. However, functional information on cardiovascular flow, which can in fact be used to support the diagnosis of many cardiovascular diseases with a myriad of hemodynamics performance indicators, remains unexplored to its full potential. Some of these indicators constitute important cardiac functional parameters affecting the cardiovascular abnormalities. With the advancement of computer technology that facilitates high-speed computational fluid dynamics, the realization of a supporting diagnostic platform for hemodynamics quantification and analysis can be achieved. This article reviews the state-of-the-art medical imaging and high fidelity multi-physics computational analyses that together enable reconstruction of cardiovascular structures and hemodynamic flow patterns within them, such as of the left ventricle (LV) and carotid bifurcations. The combined medical imaging and hemodynamic analysis enables us to study the mechanisms of cardiovascular disease-causing dysfunctions, such as how (1) cardiomyopathy causes left ventricular remodeling and loss of contractility leading to heart failure, and (2) modeling of LV construction and simulation of intra-LV hemodynamics can enable us to determine the optimum procedure of surgical ventriculation to restore its contractility and health. This combined medical imaging and hemodynamics framework can potentially extend medical knowledge of cardiovascular defects and associated hemodynamic behavior and their surgical restoration, by means of an integrated medical image diagnostics and hemodynamic performance analysis framework.

  13. Portable Imaging Polarimeter and Imaging Experiments; TOPICAL

    International Nuclear Information System (INIS)

    PHIPPS, GARY S.; KEMME, SHANALYN A.; SWEATT, WILLIAM C.; DESCOUR, M.R.; GARCIA, J.P.; DERENIAK, E.L.

    1999-01-01

    Polarimetry is the method of recording the state of polarization of light. Imaging polarimetry extends this method to recording the spatially resolved state of polarization within a scene. Imaging-polarimetry data have the potential to improve the detection of manmade objects in natural backgrounds. We have constructed a midwave infrared complete imaging polarimeter consisting of a fixed wire-grid polarizer and rotating form-birefringent retarder. The retardance and the orientation angles of the retarder were optimized to minimize the sensitivity of the instrument to noise in the measurements. The optimal retardance was found to be 132° rather than the typical 90°. The complete imaging polarimeter utilized a liquid-nitrogen cooled PtSi camera. The fixed wire-grid polarizer was located at the cold stop inside the camera dewar. The complete imaging polarimeter was operated in the 4.42-5 µm spectral range. A series of imaging experiments was performed using as targets a surface of water, an automobile, and an aircraft. Further analysis of the polarization measurements revealed that in all three cases the magnitude of circular polarization was comparable to the noise in the calculated Stokes-vector components.

  14. Muscle contraction analysis with MRI image

    International Nuclear Information System (INIS)

    Horio, Hideyuki; Kuroda, Yoshihiro; Imura, Masataka; Oshiro, Osamu

    2010-01-01

    MRI measurement has been widely used owing to advantages such as the absence of radiation exposure and its high resolution. Among the various measurement targets, muscle is used in both research and clinical practice. However, it has been difficult to judge the static state of muscle contraction. In this study, we focused on the proton density change caused by blood vessel pressure during muscle contraction, and aimed to judge muscle contraction from the variance of the signal intensity. First, the background was removed from the measured images. Second, each signal was divided into a low-signal side and a high-signal side, and the variance values (σ_H, σ_L) and their ratio (μ) were calculated. Finally, the relaxed and strained states were judged from the ratio (μ). As a result, in the relaxed state the ratio (μ_r) was 0.9823±0.06133, and in the strained state the ratio (μ_s) was 0.7547±0.10824. A significant difference was therefore obtained between the relaxed and strained states, and judgment of the muscle strain state was possible with this method. (author)

  15. Geographic Object-Based Image Analysis: Towards a new paradigm

    NARCIS (Netherlands)

    Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.A.; Queiroz Feitosa, R.; van der Meer, F.D.; van der Werff, H.M.A.; van Coillie, F.; Tiede, A.

    2014-01-01

    The amount of scientific literature on (Geographic) Object-based Image Analysis – GEOBIA has been and still is sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature

  16. ANALYSIS OF THE RADIOMETRIC RESPONSE OF ORANGE TREE CROWN IN HYPERSPECTRAL UAV IMAGES

    Directory of Open Access Journals (Sweden)

    N. N. Imai

    2017-10-01

    Full Text Available High spatial resolution remote sensing images acquired by drones are a highly relevant data source in many applications. However, strong variations of radiometric values are difficult to correct in hyperspectral images. Honkavaara et al. (2013) presented a radiometric block adjustment method in which hyperspectral images taken from remotely piloted aerial systems (RPAS) were processed both geometrically and radiometrically to produce a georeferenced mosaic in which the standard Reflectance Factor for the nadir is represented. The plant crowns in permanent cultivation show complex variations, since the density of shadows and the irradiance of the surface vary with the geometry of illumination and the arrangement of branches and leaves. An evaluation was performed of the radiometric quality of the mosaic of an orange plantation produced from images captured by a hyperspectral imager based on a tunable Fabry-Pérot interferometer and processed with the radiometric block adjustment method. A high-resolution UAV-based hyperspectral survey was carried out in an orange-producing farm located in Santa Cruz do Rio Pardo, state of São Paulo, Brazil. A set of images in 25 narrow spectral bands with 2.5 cm GSD was acquired. Trend analysis was applied to the values of a sample of transects extracted from plants appearing in the mosaic. The results of this trend analysis on the pixels distributed along transects over the orange tree crowns showed that the reflectance factor presented only a slight trend, and the coefficients of the fitted polynomials are very small, so the quality of the mosaic is good enough for many applications.

  17. Quantitative imaging biomarkers: the application of advanced image processing and analysis to clinical and preclinical decision making.

    Science.gov (United States)

    Prescott, Jeffrey William

    2013-02-01

    The importance of medical imaging for clinical decision making has been steadily increasing over the last four decades. Recently, there has also been an emphasis on medical imaging for preclinical decision making, i.e., for use in pharmaceutical and medical device development. There is also a drive towards quantification of imaging findings by using quantitative imaging biomarkers, which can improve sensitivity, specificity, accuracy and reproducibility of imaged characteristics used for diagnostic and therapeutic decisions. An important component of the discovery, characterization, validation and application of quantitative imaging biomarkers is the extraction of information and meaning from images through image processing and subsequent analysis. However, many advanced image processing and analysis methods are not applied directly to questions of clinical interest, i.e., for diagnostic and therapeutic decision making, which is a consideration that should be closely linked to the development of such algorithms. This article is meant to address these concerns. First, quantitative imaging biomarkers are introduced by providing definitions and concepts. Then, potential applications of advanced image processing and analysis to areas of quantitative imaging biomarker research are described; specifically, research into osteoarthritis (OA), Alzheimer's disease (AD) and cancer is presented. Then, challenges in quantitative imaging biomarker research are discussed. Finally, a conceptual framework for integrating clinical and preclinical considerations into the development of quantitative imaging biomarkers and their computer-assisted methods of extraction is presented.

  18. Some selected quantitative methods of thermal image analysis in Matlab.

    Science.gov (United States)

    Koprowski, Robert

    2016-05-01

    The paper presents a new algorithm based on some selected automatic quantitative methods for analysing thermal images. It shows the practical implementation of these image analysis methods in Matlab. It enables fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods for the area of the skin of a human foot and face. The full source code of the developed application is also provided as an attachment. The main window of the program during dynamic analysis of the foot thermal image. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Energy minimization in medical image analysis: Methodologies and applications.

    Science.gov (United States)

    Zhao, Feng; Xie, Xianghua

    2016-02-01

    Energy minimization is of particular interest in medical image analysis. In the past two decades, a variety of optimization schemes have been developed. In this paper, we present a comprehensive survey of the state-of-the-art optimization approaches. These algorithms are mainly classified into two categories: continuous method and discrete method. The former includes Newton-Raphson method, gradient descent method, conjugate gradient method, proximal gradient method, coordinate descent method, and genetic algorithm-based method, while the latter covers graph cuts method, belief propagation method, tree-reweighted message passing method, linear programming method, maximum margin learning method, simulated annealing method, and iterated conditional modes method. We also discuss the minimal surface method, primal-dual method, and the multi-objective optimization method. In addition, we review several comparative studies that evaluate the performance of different minimization techniques in terms of accuracy, efficiency, or complexity. These optimization techniques are widely used in many medical applications, for example, image segmentation, registration, reconstruction, motion tracking, and compressed sensing. We thus give an overview on those applications as well. Copyright © 2015 John Wiley & Sons, Ltd.
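
    A minimal sketch of the continuous-optimization branch surveyed above: gradient descent on a Tikhonov-regularized denoising energy E(u) = ||u - f||^2 + lambda*||grad u||^2, whose gradient is 2(u - f) - 2*lambda*Laplacian(u). The step size, lambda and iteration count are arbitrary illustrative choices.

```python
# Hedged sketch: gradient descent on a simple quadratic denoising energy.
import numpy as np

def laplacian(u):
    # Periodic 5-point Laplacian via array rolls.
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

def denoise(f, lam=1.0, step=0.05, iterations=300):
    u = f.copy()
    for _ in range(iterations):
        grad_E = 2.0 * (u - f) - 2.0 * lam * laplacian(u)
        u -= step * grad_E
    return u

noisy = np.clip(0.5 + 0.2 * np.random.randn(64, 64), 0, 1)
smooth = denoise(noisy)
print("std before: %.3f, after: %.3f" % (noisy.std(), smooth.std()))
```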

  20. Multiband multi-echo imaging of simultaneous oxygenation and flow timeseries for resting state connectivity.

    Science.gov (United States)

    Cohen, Alexander D; Nencka, Andrew S; Lebel, R Marc; Wang, Yang

    2017-01-01

    A novel sequence has been introduced that combines multiband imaging with a multi-echo acquisition for simultaneous high spatial resolution pseudo-continuous arterial spin labeling (ASL) and blood-oxygenation-level dependent (BOLD) echo-planar imaging (MBME ASL/BOLD). Resting-state connectivity in healthy adult subjects was assessed using this sequence. Four echoes were acquired with a multiband acceleration of four, in order to increase spatial resolution, shorten repetition time, and reduce slice-timing effects on the ASL signal. In addition, by acquiring four echoes, advanced multi-echo independent component analysis (ME-ICA) denoising could be employed to increase the signal-to-noise ratio (SNR) and BOLD sensitivity. Seed-based and dual-regression approaches were utilized to analyze functional connectivity. Cerebral blood flow (CBF) and BOLD coupling was also evaluated by correlating the perfusion-weighted timeseries with the BOLD timeseries. These metrics were compared between single echo (E2), multi-echo combined (MEC), multi-echo combined and denoised (MECDN), and perfusion-weighted (PW) timeseries. Temporal SNR increased for the MECDN data compared to the MEC and E2 data. Connectivity also increased, in terms of correlation strength and network size, for the MECDN compared to the MEC and E2 datasets. CBF and BOLD coupling was increased in major resting-state networks, and that correlation was strongest for the MECDN datasets. These results indicate our novel MBME ASL/BOLD sequence, which collects simultaneous high-resolution ASL/BOLD data, could be a powerful tool for detecting functional connectivity and dynamic neurovascular coupling during the resting state. The collection of more than two echoes facilitates the use of ME-ICA denoising to greatly improve the quality of resting state functional connectivity MRI.

  1. Modulated electron-multiplied fluorescence lifetime imaging microscope: all-solid-state camera for fluorescence lifetime imaging.

    Science.gov (United States)

    Zhao, Qiaole; Schelen, Ben; Schouten, Raymond; van den Oever, Rein; Leenen, René; van Kuijk, Harry; Peters, Inge; Polderdijk, Frank; Bosiers, Jan; Raspe, Marcel; Jalink, Kees; Geert Sander de Jong, Jan; van Geest, Bert; Stoop, Karel; Young, Ian Ted

    2012-12-01

    We have built an all-solid-state camera that is directly modulated at the pixel level for frequency-domain fluorescence lifetime imaging microscopy (FLIM) measurements. This novel camera eliminates the need for an image intensifier through the use of an application-specific charge coupled device design in a frequency-domain FLIM system. The first stage of evaluation for the camera has been carried out. Camera characteristics such as noise distribution, dark current influence, camera gain, sampling density, sensitivity, linearity of photometric response, and optical transfer function have been studied through experiments. We are able to do lifetime measurement using our modulated, electron-multiplied fluorescence lifetime imaging microscope (MEM-FLIM) camera for various objects, e.g., fluorescein solution, fixed green fluorescent protein (GFP) cells, and GFP-actin stained live cells. A detailed comparison of a conventional microchannel plate (MCP)-based FLIM system and the MEM-FLIM system is presented. The MEM-FLIM camera shows higher resolution and a better image quality. The MEM-FLIM camera provides a new opportunity for performing frequency-domain FLIM.
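
    For reference, the standard frequency-domain lifetime estimators for a single-exponential decay, which cameras of this kind rely on, are tau_phase = tan(phi)/omega and tau_mod = sqrt(1/m^2 - 1)/omega with omega = 2*pi*f; the numbers in the sketch below are invented for illustration.

```python
# Hedged worked example: frequency-domain FLIM lifetime estimates from an
# (invented) measured phase shift and modulation depth.
import numpy as np

f = 40e6                      # modulation frequency, Hz (illustrative)
phi = np.deg2rad(45.0)        # measured phase shift
m = 0.6                       # measured modulation depth (0..1)

omega = 2 * np.pi * f
tau_phase = np.tan(phi) / omega
tau_mod = np.sqrt(1.0 / m**2 - 1.0) / omega
print(f"tau_phase = {tau_phase*1e9:.2f} ns, tau_mod = {tau_mod*1e9:.2f} ns")
```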

  2. V-SIPAL - A VIRTUAL LABORATORY FOR SATELLITE IMAGE PROCESSING AND ANALYSIS

    Directory of Open Access Journals (Sweden)

    K. M. Buddhiraju

    2012-09-01

    Full Text Available In this paper a virtual laboratory for Satellite Image Processing and Analysis (v-SIPAL) being developed at the Indian Institute of Technology Bombay is described. v-SIPAL comprises a set of experiments that are normally carried out by students learning digital processing and analysis of satellite images using commercial software. Currently, the experiments that are available on the server include Image Viewer, Image Contrast Enhancement, Image Smoothing, Edge Enhancement, Principal Component Transform, Texture Analysis by Co-occurrence Matrix method, Image Indices, Color Coordinate Transforms, Fourier Analysis, Mathematical Morphology, Unsupervised Image Classification, Supervised Image Classification and Accuracy Assessment. The virtual laboratory includes a theory module for each option of every experiment, a description of the procedure to perform each experiment, the menu to choose and perform the experiment, a module on interpretation of results when performed with a given image and pre-specified options, bibliography, links to useful internet resources and user feedback. The user can upload his/her own images for performing the experiments and can also reuse outputs of one experiment in another experiment where applicable. Some of the other experiments currently under development include georeferencing of images, data fusion, feature evaluation by divergence and J-M distance, image compression, wavelet image analysis and change detection. Additions to the theory module include self-assessment quizzes, audio-video clips on selected concepts, and a discussion of elements of visual image interpretation. V-SIPAL is at the stage of internal evaluation within IIT Bombay and will soon be open to selected educational institutions in India for evaluation.
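
    A tiny sketch of one of the listed experiments ("Image Indices"): the normalized difference vegetation index NDVI = (NIR - Red)/(NIR + Red) computed from two co-registered bands; the synthetic band values are assumptions.

```python
# Hedged sketch: NDVI from synthetic red and near-infrared reflectance bands.
import numpy as np

red = np.random.uniform(0.05, 0.2, size=(100, 100))
nir = np.random.uniform(0.3, 0.6, size=(100, 100))

ndvi = (nir - red) / (nir + red)
print("NDVI range: %.2f to %.2f" % (ndvi.min(), ndvi.max()))
```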

  3. Imaging the equilibrium state and magnetization dynamics of partially built hard disk write heads

    Energy Technology Data Exchange (ETDEWEB)

    Valkass, R. A. J., E-mail: rajv202@ex.ac.uk; Yu, W.; Shelford, L. R.; Keatley, P. S.; Loughran, T. H. J.; Hicken, R. J. [School of Physics, University of Exeter, Stocker Road, Exeter EX4 4QL (United Kingdom); Cavill, S. A. [Diamond Light Source, Harwell Science and Innovation Campus, Didcot OX11 0DE (United Kingdom); Department of Physics, University of York, Heslington, York YO10 5DD (United Kingdom); Laan, G. van der; Dhesi, S. S. [Diamond Light Source, Harwell Science and Innovation Campus, Didcot OX11 0DE (United Kingdom); Bashir, M. A.; Gubbins, M. A. [Research and Development, Seagate Technology, 1 Disc Drive, Springtown Industrial Estate, Derry BT48 0BF (United Kingdom); Czoschke, P. J.; Lopusnik, R. [Recording Heads Operation, Seagate Technology, 7801 Computer Avenue South, Bloomington, Minnesota 55435 (United States)

    2015-06-08

    Four different designs of partially built hard disk write heads with a yoke comprising four repeats of NiFe (1 nm)/CoFe (50 nm) were studied by both x-ray photoemission electron microscopy (XPEEM) and time-resolved scanning Kerr microscopy (TRSKM). These techniques were used to investigate the static equilibrium domain configuration and the magnetodynamic response across the entire structure, respectively. Simulations and previous TRSKM studies have made proposals for the equilibrium domain configuration of similar structures, but no direct observation of the equilibrium state of the writers has yet been made. In this study, static XPEEM images of the equilibrium state of writer structures were acquired using x-ray magnetic circular dichroism as the contrast mechanism. These images suggest that the crystalline anisotropy dominates the equilibrium state domain configuration, but competition with shape anisotropy ultimately determines the stability of the equilibrium state. Dynamic TRSKM images were acquired from nominally identical devices. These images suggest that a longer confluence region may hinder flux conduction from the yoke into the pole tip: the shorter confluence region exhibits clear flux beaming along the symmetry axis, whereas the longer confluence region causes flux to conduct along one edge of the writer. The observed variations in dynamic response agree well with the differences in the equilibrium magnetization configuration visible in the XPEEM images, confirming that minor variations in the geometric design of the writer structure can have significant effects on the process of flux beaming.

  4. Radiomics and its emerging role in lung cancer research, imaging biomarkers and clinical management: State of the art

    International Nuclear Information System (INIS)

    Lee, Geewon; Lee, Ho Yun; Park, Hyunjin; Schiebler, Mark L.; Beek, Edwin J.R. van; Ohno, Yoshiharu; Seo, Joon Beom; Leung, Ann

    2017-01-01

    Highlights: • Radiomics is the post-processing and analysis of large amounts of quantitative imaging features that can be derived from medical images. • Radiomics features can reflect the spatial complexity, genomic heterogeneity, and subregional identification of lung cancer. • Currently available radiomic features can be divided into four major categories. • The major challenge is to integrate radiomic data with clinical, pathological, and genomic information. - Abstract: With the development of functional imaging modalities we now have the ability to study the microenvironment of lung cancer and its genomic instability. Radiomics is defined as the use of automated or semi-automated post-processing and analysis of large amounts of quantitative imaging features that can be derived from medical images. The automated generation of these analytical features helps to quantify a number of variables in the imaging assessment of lung malignancy. These imaging features include: tumor spatial complexity, elucidation of the tumor genomic heterogeneity and composition, subregional identification in terms of tumor viability or aggressiveness, and response to chemotherapy and/or radiation. Therefore, a radiomic approach can help to reveal unique information about tumor behavior. Currently available radiomic features can be divided into four major classes: (a) morphological, (b) statistical, (c) regional, and (d) model-based. Each category yields quantitative parameters that reflect specific aspects of a tumor. The major challenge is to integrate radiomic data with clinical, pathological, and genomic information to decode the different types of tissue biology. There are many currently available radiomic studies on lung cancer for which there is a need to summarize the current state of the art.
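
    As a concrete illustration of the "statistical" feature class named above, the following sketch (Python with NumPy/SciPy; the image and mask arrays are hypothetical) computes a few first-order radiomic features from the voxels inside a tumour segmentation. It is not drawn from the review itself.

      import numpy as np
      from scipy import stats

      def first_order_features(image, mask, n_bins=64):
          """First-order (statistical) radiomic features inside a binary tumour mask."""
          voxels = image[mask > 0].astype(float)
          hist, _ = np.histogram(voxels, bins=n_bins)
          p = hist / hist.sum()
          p = p[p > 0]
          return {
              "mean": voxels.mean(),
              "variance": voxels.var(),
              "skewness": stats.skew(voxels),
              "kurtosis": stats.kurtosis(voxels),
              "entropy": -np.sum(p * np.log2(p)),   # histogram entropy
              "energy": np.sum(voxels ** 2),
          }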

  5. Radiomics and its emerging role in lung cancer research, imaging biomarkers and clinical management: State of the art

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Geewon [Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of); Department of Radiology and Medical Research Institute, Pusan National University Hospital, Pusan National University School of Medicine, Busan (Korea, Republic of); Lee, Ho Yun, E-mail: hoyunlee96@gmail.com [Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of); Park, Hyunjin [School of Electronic and Electrical Engineering and Center for Neuroscience Imaging Research, Sungkyunkwan University, Suwon (Korea, Republic of); Schiebler, Mark L. [Department of Radiology, UW-Madison School of Medicine and Public Health, Madison, WI (United States); Beek, Edwin J.R. van [Clinical Research Imaging Centre, Edinburgh Imaging, Queen's Medical Research Institute, University of Edinburgh, Edinburgh (United Kingdom); Ohno, Yoshiharu [Division of Functional and Diagnostic Imaging Research, Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe-shi 650-0017 (Japan); Advanced Biomedical Imaging Research Center, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe-shi 650-0017 (Japan); Seo, Joon Beom [Department of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul (Korea, Republic of); Leung, Ann [Department of Radiology, Stanford University, Palo Alto, CA (United States)

    2017-01-15

    Highlights: • Radiomics is the post-processing and analysis of large amounts of quantitative imaging features that can be derived from medical images. • Radiomics features can reflect the spatial complexity, genomic heterogeneity, and subregional identification of lung cancer. • Currently available radiomic features can be divided into four major categories. • The major challenge is to integrate radiomic data with clinical, pathological, and genomic information. - Abstract: With the development of functional imaging modalities we now have the ability to study the microenvironment of lung cancer and its genomic instability. Radiomics is defined as the use of automated or semi-automated post-processing and analysis of large amounts of quantitative imaging features that can be derived from medical images. The automated generation of these analytical features helps to quantify a number of variables in the imaging assessment of lung malignancy. These imaging features include: tumor spatial complexity, elucidation of the tumor genomic heterogeneity and composition, subregional identification in terms of tumor viability or aggressiveness, and response to chemotherapy and/or radiation. Therefore, a radiomic approach can help to reveal unique information about tumor behavior. Currently available radiomic features can be divided into four major classes: (a) morphological, (b) statistical, (c) regional, and (d) model-based. Each category yields quantitative parameters that reflect specific aspects of a tumor. The major challenge is to integrate radiomic data with clinical, pathological, and genomic information to decode the different types of tissue biology. There are many currently available radiomic studies on lung cancer for which there is a need to summarize the current state of the art.

  6. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    Science.gov (United States)

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.
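
    Image Harvest's own API is not reproduced in this record; the sketch below (Python with NumPy/SciPy and imageio, hypothetical file name and a simple excess-green threshold) only illustrates the kind of digital trait extraction, such as projected shoot area and plant height, that such pipelines compute from a single plant image.

      import numpy as np
      from scipy import ndimage
      from imageio import imread   # assumed available; any RGB reader would do

      def shoot_traits(path, green_excess_thresh=20):
          """Projected shoot area (pixels) and height (rows) from an RGB plant image."""
          rgb = imread(path).astype(float)            # shape (H, W, 3); hypothetical file
          # Excess-green index separates plant pixels from a neutral background
          exg = 2 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]
          plant = exg > green_excess_thresh
          # Keep the largest connected component to suppress background speckle
          labels, n = ndimage.label(plant)
          if n == 0:
              return {"area_px": 0, "height_px": 0}
          sizes = ndimage.sum(plant, labels, range(1, n + 1))
          plant = labels == (np.argmax(sizes) + 1)
          rows = np.where(plant.any(axis=1))[0]
          return {"area_px": int(plant.sum()),
                  "height_px": int(rows.max() - rows.min() + 1)}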

  7. Intrasubject registration for change analysis in medical imaging

    NARCIS (Netherlands)

    Staring, M.

    2008-01-01

    Image matching is important for the comparison of medical images. Comparison is of clinical relevance for the analysis of differences due to changes in the health of a patient. For example, when a disease is imaged at two time points, then one wants to know if it is stable, has regressed, or has progressed.

  8. Profiling stem cell states in three-dimensional biomaterial niches using high content image informatics.

    Science.gov (United States)

    Dhaliwal, Anandika; Brenner, Matthew; Wolujewicz, Paul; Zhang, Zheng; Mao, Yong; Batish, Mona; Kohn, Joachim; Moghe, Prabhas V

    2016-11-01

    A predictive framework for the evolution of stem cell biology in 3-D is currently lacking. In this study we propose deep image informatics of the nuclear biology of stem cells to elucidate how 3-D biomaterials steer stem cell lineage phenotypes. The approach is based on high content imaging informatics to capture minute variations in the 3-D spatial organization of splicing factor SC-35 in the nucleoplasm as a marker to classify emergent cell phenotypes of human mesenchymal stem cells (hMSCs). The cells were cultured in varied 3-D culture systems including hydrogels, electrospun mats and salt leached scaffolds. The approach encompasses high resolution 3-D imaging of SC-35 domains and high content image analysis (HCIA) to compute quantitative 3-D nuclear metrics for SC-35 organization in single cells in concert with machine learning approaches to construct a predictive cell-state classification model. Our findings indicate that hMSCs cultured in collagen hydrogels and induced to differentiate into osteogenic or adipogenic lineages could be classified into the three lineages (stem, adipogenic, osteogenic) with ⩾80% precision and sensitivity, within 72 h. Using this framework, the augmentation of osteogenesis by scaffold design exerted by porogen leached scaffolds was also profiled within 72 h with ∼80% high sensitivity. Furthermore, by employing 3-D SC-35 organizational metrics, differential osteogenesis induced by novel electrospun fibrous polymer mats incorporating decellularized matrix could also be elucidated and predictably modeled at just 3 days with high precision. We demonstrate that 3-D SC-35 organizational metrics can be applied to model the stem cell state in 3-D scaffolds. We propose that this methodology can robustly discern minute changes in stem cell states within complex 3-D architectures and map single cell biological readouts that are critical to assessing population level cell heterogeneity. The sustained development and validation of bioactive

  9. Digital Airborne Photogrammetry—A New Tool for Quantitative Remote Sensing?—A State-of-the-Art Review On Radiometric Aspects of Digital Photogrammetric Images

    Directory of Open Access Journals (Sweden)

    Nikolaj Veje

    2009-09-01

    The transition from film imaging to digital imaging in photogrammetric data capture is opening interesting possibilities for photogrammetric processes. A great advantage of digital sensors is their radiometric potential. This article presents a state-of-the-art review on the radiometric aspects of digital photogrammetric images. The analysis is based on a literature review and a questionnaire submitted to various interest groups related to the photogrammetric process. An important contribution to this paper is a characterization of the photogrammetric image acquisition and image product generation systems. The questionnaire revealed many weaknesses in current processes, but the future prospects of radiometrically quantitative photogrammetry are promising.

  10. [Evaluation of dental plaque by quantitative digital image analysis system].

    Science.gov (United States)

    Huang, Z; Luan, Q X

    2016-04-18

    To analyze plaque staining images using image analysis software, to verify the maneuverability, practicability and repeatability of this technique, and to evaluate the influence of different plaque stains. In the study, 30 volunteers were enrolled from the new dental students of Peking University Health Science Center in accordance with the inclusion criteria. Digital images of the anterior teeth were acquired after plaque staining, according to a standardized imaging protocol. The image analysis was performed using Image Pro Plus 7.0, and the Quigley-Hein plaque indexes of the anterior teeth were evaluated. The plaque stain area percentage and the corresponding dental plaque index were highly correlated, with a Spearman correlation coefficient of 0.776, and the consistency chart showed only a few points outside the 95% consistency boundaries. For the different plaque stains, the image analysis results showed that the difference in tooth area measurements was not significant, while the difference in plaque area measurements was significant (P<0.01). This method is easy to operate and control, highly related to the calculated percentage of plaque area and the traditional plaque index, and has good reproducibility. The plaque staining method used has little effect on image segmentation results. The more sensitive plaque stain is suggested for image analysis.

  11. Difference Image Analysis of Galactic Microlensing. I. Data Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Alcock, C.; Allsman, R. A.; Alves, D.; Axelrod, T. S.; Becker, A. C.; Bennett, D. P.; Cook, K. H.; Drake, A. J.; Freeman, K. C.; Griest, K. (and others)

    1999-08-20

    This is a preliminary report on the application of Difference Image Analysis (DIA) to Galactic bulge images. The aim of this analysis is to increase the sensitivity to the detection of gravitational microlensing. We discuss how the DIA technique simplifies the process of discovering microlensing events by detecting only objects that have variable flux. We illustrate how the DIA technique is not limited to detection of so-called "pixel lensing" events but can also be used to improve photometry for classical microlensing events by removing the effects of blending. We will present a method whereby DIA can be used to reveal the true unblended colors, positions, and light curves of microlensing events. We discuss the need for a technique to obtain the accurate microlensing timescales from blended sources and present a possible solution to this problem using the existing Hubble Space Telescope color-magnitude diagrams of the Galactic bulge and LMC. The use of such a solution with both classical and pixel microlensing searches is discussed. We show that one of the major causes of systematic noise in DIA is differential refraction. A technique for removing this systematic by effectively registering images to a common air mass is presented. Improvements to commonly used image differencing techniques are discussed. (c) 1999 The American Astronomical Society.
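
    The DIA pipeline described here includes PSF matching and differential-refraction correction that a short sketch cannot reproduce. The simplified Python/NumPy fragment below (hypothetical registered arrays, a Gaussian blur and median flux scaling standing in for a full Alard-Lupton kernel fit) shows only the core idea of flagging variables in the difference of two registered frames.

      import numpy as np
      from scipy import ndimage

      def simple_difference_detection(ref, sci, sigma_match=1.0, nsigma=5.0):
          """Subtract a registered reference from a science image and flag variables.

          ref, sci : registered 2-D images on the same pixel grid (hypothetical inputs).
          sigma_match : width of a Gaussian used as a crude stand-in for PSF matching.
          """
          # Crude PSF matching: blur the (assumed sharper) reference image
          ref_m = ndimage.gaussian_filter(ref, sigma_match)
          # Crude photometric scaling so the two frames share a common flux level
          scale = np.median(sci) / np.median(ref_m)
          diff = sci - scale * ref_m
          # Flag pixels deviating by more than nsigma from the robust residual noise
          noise = 1.4826 * np.median(np.abs(diff - np.median(diff)))  # MAD-based sigma
          candidates, n = ndimage.label(np.abs(diff) > nsigma * noise)
          return diff, candidates, n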

  12. Theoretical investigation of image states of the hydrogen covered Cu (100)

    International Nuclear Information System (INIS)

    Steslicka, M.; Zagorski, M.; Jurczyszyn, L.

    1987-08-01

    A model of atomic hydrogen-covered Cu(100) is presented and the calculated energy spectrum of localized electronic states in the X gap of Cu(100) is discussed. These states form a series of unoccupied adsorption image states (for n_α = 2, 3, ...) lying below the vacuum level V_0 and having energies E_nα which satisfy the formula E_nα = V_0 - 10/n_α² (eV). The lowest state (n_α = 1) is expected to lie about 5.5 eV below the Fermi level. (author). 19 refs, 6 figs, 1 tab
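
    As a quick consistency check of the series formula above (assuming a Cu(100) work function of about 4.6 eV, which is not stated in the record), the first few levels can be evaluated directly:

      # Energies relative to the Fermi level E_F = 0; V_0 is the vacuum level.
      V0 = 4.6                      # eV above E_F (assumed Cu(100) work function)
      for n in (1, 2, 3):
          E = V0 - 10.0 / n**2      # E_n_alpha = V_0 - 10/n_alpha^2 (eV)
          print(n, round(E, 2))     # n=1 gives about -5.4 eV, i.e. ~5.5 eV below E_F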

  13. Using spectral imaging for the analysis of abnormalities for colorectal cancer: When is it helpful?

    Science.gov (United States)

    Awan, Ruqayya; Al-Maadeed, Somaya; Al-Saady, Rafif

    2018-01-01

    The spectral imaging technique has been shown to provide more discriminative information than RGB images and has been proposed for a range of problems. There are many studies demonstrating its potential for the analysis of histopathology images for abnormality detection, but there have been discrepancies among previous studies as well. Many multispectral-based methods have been proposed for histopathology images, but the significance of using the whole multispectral cube versus a subset of bands or a single band is still debated. We performed a comprehensive analysis using individual bands and different subsets of bands to determine the effectiveness of spectral information for detecting anomalies in colorectal images. Our multispectral colorectal dataset consists of four classes, each represented by infra-red spectrum bands in addition to the visual spectrum bands. We performed our analysis of spectral imaging by stratifying the abnormalities using both spatial and spectral information. For our experiments, we used a combination of texture descriptors with an ensemble classification approach that performed best on our dataset. We applied our method to another dataset and obtained results comparable with those of the state-of-the-art method and a convolutional neural network-based method. Moreover, we explored the relationship between the number of bands and problem complexity and found that a higher number of bands is required for a complex task to achieve improved performance. Our results demonstrate a synergy between the infra-red and visual spectra, improving the classification accuracy (by 6%) on incorporating the infra-red representation. We also highlight the importance of how the dataset should be divided into training and testing sets for evaluating histopathology image-based approaches, which has not been considered in previous studies on multispectral histopathology images.

  14. Analysis of RTM extended images for VTI media

    KAUST Repository

    Li, Vladimir; Tsvankin, Ilya; Alkhalifah, Tariq Ali

    2015-01-01

    velocity analysis remain generally valid in the extended image space for complex media. The dependence of RMO on errors in the anisotropy parameters provides essential insights for anisotropic wavefield tomography using extended images.

  15. Peripheral blood smear image analysis: A comprehensive review

    Directory of Open Access Journals (Sweden)

    Emad A Mohammed

    2014-01-01

    Peripheral blood smear image examination is a part of the routine work of every laboratory. The manual examination of these images is tedious, time-consuming and suffers from interobserver variation. This has motivated researchers to develop different algorithms and methods to automate peripheral blood smear image analysis. Image analysis itself consists of a sequence of steps: image segmentation, feature extraction and selection, and pattern classification. The image segmentation step addresses the problem of extraction of the object or region of interest from the complicated peripheral blood smear image. Support vector machines (SVM) and artificial neural networks (ANNs) are two common approaches to image segmentation. Feature extraction and selection aims to derive descriptive characteristics of the extracted object, which are similar within the same object class and different between different objects. This facilitates the last step of the image analysis process: pattern classification. The goal of pattern classification is to assign a class to the selected features from a group of known classes. There are two types of classifier learning algorithms: supervised and unsupervised. Supervised learning algorithms predict the class of the object under test using training data of known classes. The training data have a predefined label for every class and the learning algorithm can utilize this data to predict the class of a test object. Unsupervised learning algorithms use unlabeled training data and divide them into groups using similarity measurements. Unsupervised learning algorithms predict the group to which a new test object belongs, based on the training data, without giving an explicit class to that object. ANN, SVM, decision tree and K-nearest neighbor are possible approaches to classification algorithms. Increased discrimination may be obtained by combining several classifiers together.
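
    None of the reviewed algorithms is specified in enough detail here to reproduce; the sketch below (Python with NumPy/SciPy for a very simple segmentation and shape features, and a scikit-learn k-nearest-neighbour classifier, all on hypothetical data) is only meant to make the segmentation, feature extraction and classification sequence concrete.

      import numpy as np
      from scipy import ndimage
      from sklearn.neighbors import KNeighborsClassifier

      def segment_and_describe(gray):
          """Toy segmentation + shape features for a blood-smear image (hypothetical)."""
          cells = gray < gray.mean()                     # stained cells are darker than plasma
          cells = ndimage.binary_opening(cells, iterations=2)
          labels, n = ndimage.label(cells)
          feats = []
          for i in range(1, n + 1):
              obj = labels == i
              area = obj.sum()
              boundary = area - ndimage.binary_erosion(obj).sum()   # rough boundary pixel count
              circularity = 4 * np.pi * area / max(boundary, 1) ** 2
              feats.append([area, boundary, circularity])
          return np.array(feats)

      # Supervised classification of the extracted objects (labels are hypothetical):
      # X_train, y_train = feature rows and expert labels from annotated smears
      # clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
      # predicted = clf.predict(segment_and_describe(new_image))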

  16. Heritability estimates on resting state fMRI data using ENIGMA analysis pipeline.

    Science.gov (United States)

    Adhikari, Bhim M; Jahanshad, Neda; Shukla, Dinesh; Glahn, David C; Blangero, John; Reynolds, Richard C; Cox, Robert W; Fieremans, Els; Veraart, Jelle; Novikov, Dmitry S; Nichols, Thomas E; Hong, L Elliot; Thompson, Paul M; Kochunov, Peter

    2018-01-01

    Big data initiatives such as the Enhancing NeuroImaging Genetics through Meta-Analysis consortium (ENIGMA) combine data collected by independent studies worldwide to achieve more generalizable estimates of effect sizes and more reliable and reproducible outcomes. Such efforts require harmonized image analysis protocols to extract phenotypes consistently. This harmonization is particularly challenging for resting state fMRI due to the wide variability of acquisition protocols and scanner platforms; this leads to site-to-site variance in quality, resolution and temporal signal-to-noise ratio (tSNR). An effective harmonization should provide optimal measures for data of different qualities. We developed a multi-site rsfMRI analysis pipeline to allow research groups around the world to process rsfMRI scans in a harmonized way, to extract consistent and quantitative measurements of connectivity and to perform coordinated statistical tests. We used the single-modality ENIGMA rsfMRI preprocessing pipeline based on model-free Marchenko-Pastur PCA-based denoising to verify and replicate resting state network heritability estimates. We analyzed two independent cohorts, GOBS (Genetics of Brain Structure) and HCP (the Human Connectome Project), which collected data using conventional and connectomics-oriented fMRI protocols, respectively. We used seed-based connectivity and dual-regression approaches to show that the rsfMRI signal is consistently heritable across twenty major functional network measures. Heritability values of 20-40% were observed across both cohorts.
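
    The ENIGMA pipeline itself is not reproduced here; this minimal NumPy sketch (hypothetical, already preprocessed 4-D data and a boolean seed mask) shows what a seed-based connectivity map, one of the two approaches mentioned, amounts to.

      import numpy as np

      def seed_connectivity(bold, seed_mask):
          """Fisher z-transformed correlation of every voxel with the mean seed time series.

          bold : ndarray, shape (X, Y, Z, T) -- preprocessed rsfMRI data (hypothetical)
          seed_mask : boolean ndarray, shape (X, Y, Z)
          """
          ts = bold[seed_mask].mean(axis=0)                    # seed time series, shape (T,)
          v = bold.reshape(-1, bold.shape[-1]).astype(float)   # voxels x time
          v = v - v.mean(axis=1, keepdims=True)
          ts = ts - ts.mean()
          denom = np.linalg.norm(v, axis=1) * np.linalg.norm(ts)
          r = (v @ ts) / np.where(denom == 0, 1, denom)
          z = np.arctanh(np.clip(r, -0.999999, 0.999999))      # Fisher z transform
          return z.reshape(bold.shape[:3])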

  17. Point defect characterization in HAADF-STEM images using multivariate statistical analysis

    International Nuclear Information System (INIS)

    Sarahan, Michael C.; Chi, Miaofang; Masiel, Daniel J.; Browning, Nigel D.

    2011-01-01

    Quantitative analysis of point defects is demonstrated through the use of multivariate statistical analysis. This analysis consists of principal component analysis for dimensional estimation and reduction, followed by independent component analysis to obtain physically meaningful, statistically independent factor images. Results from these analyses are presented in the form of factor images and scores. Factor images show characteristic intensity variations corresponding to physical structure changes, while scores relate how much those variations are present in the original data. The application of this technique is demonstrated on a set of experimental images of dislocation cores along a low-angle tilt grain boundary in strontium titanate. A relationship between chemical composition and lattice strain is highlighted in the analysis results, with picometer-scale shifts in several columns measurable from compositional changes in a separate column. -- Research Highlights: → Multivariate analysis of HAADF-STEM images. → Distinct structural variations among SrTiO3 dislocation cores. → Picometer atomic column shifts correlated with atomic column population changes.
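
    A minimal sketch of the PCA-then-ICA factorization described above, using scikit-learn on a hypothetical stack of image frames; the actual dislocation-core analysis involves preprocessing and dimensionality estimation not shown here.

      import numpy as np
      from sklearn.decomposition import PCA, FastICA

      def factor_images(stack, n_components):
          """stack : ndarray (n_images, H, W); returns factor images and per-image scores."""
          n, h, w = stack.shape
          X = stack.reshape(n, h * w).astype(float)
          # PCA for dimensionality estimation/reduction (inspect pca.explained_variance_ratio_)
          pca = PCA(n_components=n_components)
          scores_pca = pca.fit_transform(X)
          # ICA rotates the retained subspace into statistically independent factors
          ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
          scores = ica.fit_transform(scores_pca)          # how much of each factor per image
          factors = ica.mixing_.T @ pca.components_       # factor images back in pixel space
          return factors.reshape(n_components, h, w), scores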

  18. Utilizing Minkowski functionals for image analysis: a marching square algorithm

    International Nuclear Information System (INIS)

    Mantz, Hubert; Jacobs, Karin; Mecke, Klaus

    2008-01-01

    Comparing noisy experimental image data with statistical models requires a quantitative analysis of grey-scale images beyond mean values and two-point correlations. A real-space image analysis technique is introduced for digitized grey-scale images, based on Minkowski functionals of thresholded patterns. A novel feature of this marching square algorithm is the use of weighted side lengths for pixels, so that boundary lengths are captured accurately. As examples to illustrate the technique we study surface topologies emerging during the dewetting process of thin films and analyse spinodal decomposition as well as turbulent patterns in chemical reaction–diffusion systems. The grey-scale value corresponds to the height of the film or to the concentration of chemicals, respectively. Comparison with analytic calculations in stochastic geometry models reveals a remarkable agreement of the examples with a Gaussian random field. Thus, a statistical test for non-Gaussian features in experimental data becomes possible with this image analysis technique—even for small image sizes. Implementations of the software used for the analysis are offered for download
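
    The weighted-side-length refinement of the marching square algorithm is not reproduced here; the sketch below (Python/NumPy, 4-connected convention, hypothetical grey-scale input) computes the three 2-D Minkowski functionals, area, boundary length and Euler characteristic, of a thresholded pattern in the plain unweighted way.

      import numpy as np

      def minkowski_functionals(gray, threshold):
          """Area, boundary length and Euler characteristic of the thresholded pattern.

          Each foreground pixel is treated as a closed unit square (union of squares).
          """
          img = np.pad((gray >= threshold).astype(int), 1)
          area = img.sum()
          # Unit edges touching at least one foreground pixel
          e_h = np.logical_or(img[:, :-1], img[:, 1:]).sum()   # edges between column neighbours
          e_v = np.logical_or(img[:-1, :], img[1:, :]).sum()   # edges between row neighbours
          edges = e_h + e_v
          # Grid vertices touching at least one foreground pixel (2x2 neighbourhoods)
          vertices = np.logical_or.reduce([img[:-1, :-1], img[:-1, 1:],
                                           img[1:, :-1], img[1:, 1:]]).sum()
          # Boundary edges separate exactly one foreground and one background pixel
          boundary = np.logical_xor(img[:, :-1], img[:, 1:]).sum() + \
                     np.logical_xor(img[:-1, :], img[1:, :]).sum()
          euler = vertices - edges + area
          return area, boundary, euler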

  19. State-of-the-art radiation detectors for medical imaging: Demands and trends

    International Nuclear Information System (INIS)

    Darambara, Dimitra G.

    2006-01-01

    Over the last half-century a variety of significant technical advances in several scientific fields has been pointing to an exploding growth in the field of medical imaging leading to a better interpretation of more specific anatomical, biochemical and molecular pathways. In particular, the development of novel imaging detectors and readout electronics has been critical to the advancement of medical imaging allowing the invention of breakthrough platforms for simultaneous acquisition of multi-modality images at molecular level. The present paper presents a review of the challenges, demands and constraints on radiation imaging detectors imposed by the nature of the modality and the physics of the imaging source. This is followed by a concise review and perspective on various types of state-of-the-art detector technologies that have been developed to meet these requirements. Trends, prospects and new concepts for future imaging detectors are also highlighted

  20. State-of-the-art radiation detectors for medical imaging: Demands and trends

    Energy Technology Data Exchange (ETDEWEB)

    Darambara, Dimitra G. [Joint Department of Physics, Royal Marsden Foundation Trust and Institute of Cancer Research, Fulham Road, London SW3 6JJ (United Kingdom)]. E-mail: dimitra.darambara@icr.ac.uk

    2006-12-20

    Over the last half-century a variety of significant technical advances in several scientific fields has been pointing to an exploding growth in the field of medical imaging leading to a better interpretation of more specific anatomical, biochemical and molecular pathways. In particular, the development of novel imaging detectors and readout electronics has been critical to the advancement of medical imaging allowing the invention of breakthrough platforms for simultaneous acquisition of multi-modality images at molecular level. The present paper presents a review of the challenges, demands and constraints on radiation imaging detectors imposed by the nature of the modality and the physics of the imaging source. This is followed by a concise review and perspective on various types of state-of-the-art detector technologies that have been developed to meet these requirements. Trends, prospects and new concepts for future imaging detectors are also highlighted.

  1. Etching and image analysis of the microstructure in marble

    DEFF Research Database (Denmark)

    Alm, Ditte; Brix, Susanne; Howe-Rasmussen, Helle

    2005-01-01

    of grains exposed on that surface are measured on the microscope images using image analysis by the program Adobe Photoshop 7.0 with Image Processing Toolkit 4.0. The parameters measured by the program on microscope images of thin sections of two marble types are used for calculation of the coefficient...

  2. Objective analysis of image quality of video image capture systems

    Science.gov (United States)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images has been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire patterns. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using them. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give

  3. Hyperspectral Image Analysis of Food Quality

    DEFF Research Database (Denmark)

    Arngren, Morten

    Assessing the quality of food is a vital step in any food processing line to ensure the best food quality and maximum profit for the farmer and food manufacturer. Traditional quality evaluation methods are often destructive and labour-intensive procedures relying on wet chemistry or subjective human inspection. Near-infrared spectroscopy can address these issues by offering a fast and objective analysis of the food quality. A natural extension to these single-spectrum NIR systems is to include image information such that each pixel holds a NIR spectrum. This augmented image information offers several extensions to the analysis of food quality. This dissertation is concerned with hyperspectral image analysis used to assess the quality of single grain kernels. The focus is to highlight the benefits and challenges of using hyperspectral imaging for food quality presented in two research directions. Initially...

  4. Probability Density Components Analysis: A New Approach to Treatment and Classification of SAR Images

    Directory of Open Access Journals (Sweden)

    Osmar Abílio de Carvalho Júnior

    2014-04-01

    Speckle noise (salt and pepper) is inherent to synthetic aperture radar (SAR), causing the typical noise-like granular appearance that complicates image classification. In SAR image analysis, the spatial information might be a particular benefit for denoising and for mapping classes characterized by a statistical distribution of the pixel intensities from a complex and heterogeneous spectral response. This paper proposes the Probability Density Components Analysis (PDCA), a new alternative that combines filtering and frequency histograms to improve the classification procedure for single-channel synthetic aperture radar (SAR) images. This method was tested on L-band SAR data from the Advanced Land Observation System (ALOS) Phased-Array Synthetic-Aperture Radar (PALSAR) sensor. The study area is located in the Brazilian Amazon rainforest, northern Rondônia State (municipality of Candeias do Jamari), containing forest and land use patterns. The proposed algorithm uses a moving window over the image, estimating the probability density curve in different image components. Therefore, a single input image generates an output with multiple components. Initially, the multi-components should be treated by noise-reduction methods, such as maximum noise fraction (MNF) or noise-adjusted principal components (NAPCs). Both methods enable noise reduction as well as the ordering of multi-component data in terms of image quality. In this paper, the NAPC applied to the multi-components provided large reductions in the noise levels, and the color composites considering the first NAPC enhance the classification of different surface features. In the spectral classification, the Spectral Correlation Mapper and Minimum Distance classifiers were used. The results obtained were similar to the visual interpretation of optical images from TM-Landsat and Google Maps.
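
    The paper's exact estimator is not given in this record; the fragment below (Python with SciPy, hypothetical single-band SAR array) sketches one plausible reading of the moving-window probability density components, in which the local relative frequency of each intensity bin becomes one output band.

      import numpy as np
      from scipy import ndimage

      def density_components(sar, n_bins=8, window=9):
          """Turn a single-channel image into n_bins 'probability density component' bands.

          Each output band holds the local relative frequency of one intensity bin,
          estimated in a window x window neighbourhood (a simplified reading of PDCA).
          """
          edges = np.quantile(sar, np.linspace(0, 1, n_bins + 1))
          quantized = np.clip(np.digitize(sar, edges[1:-1]), 0, n_bins - 1)
          components = [ndimage.uniform_filter((quantized == k).astype(float), size=window)
                        for k in range(n_bins)]
          return np.stack(components, axis=-1)     # ready for MNF/NAPC noise ordering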

  5. Teaching image analysis at DIKU

    DEFF Research Database (Denmark)

    Johansen, Peter

    2010-01-01

    The early development of computer vision at Department of Computer Science at University of Copenhagen (DIKU) is briefly described. The different disciplines in computer vision are introduced, and the principles for teaching two courses, an image analysis course, and a robot lab class are outlined....

  6. Soft x-ray imaging by a commercial solid-state television camera

    International Nuclear Information System (INIS)

    Matsushima, I.; Koyama, K.; Tanimoto, M.; Yano, M.

    1987-01-01

    A commercial, solid-state television camera has been used to record images of soft x radiation (0.8-12 keV). The performance of the camera is theoretically analyzed and experimentally evaluated in comparison with an x-ray photographic film (Kodak direct exposure film). In the application, the camera has been used to provide image patterns of x rays from laser-produced plasmas. It is demonstrated that the camera has several advantages over x-ray photographic film

  7. From Digital Imaging to Computer Image Analysis of Fine Art

    Science.gov (United States)

    Stork, David G.

    An expanding range of techniques from computer vision, pattern recognition, image analysis, and computer graphics are being applied to problems in the history of art. The success of these efforts is enabled by the growing corpus of high-resolution multi-spectral digital images of art (primarily paintings and drawings), sophisticated computer vision methods, and most importantly the engagement of some art scholars who bring questions that may be addressed through computer methods. This paper outlines some general problem areas and opportunities in this new inter-disciplinary research program.

  8. Melanie II--a third-generation software package for analysis of two-dimensional electrophoresis images: II. Algorithms.

    Science.gov (United States)

    Appel, R D; Vargas, J R; Palagi, P M; Walther, D; Hochstrasser, D F

    1997-12-01

    After two generations of software systems for the analysis of two-dimensional electrophoresis (2-DE) images, a third generation of such software packages has recently emerged that combines state-of-the-art graphical user interfaces with comprehensive spot data analysis capabilities. A key characteristic common to most of these software packages is that many of their tools are implementations of algorithms that resulted from research areas such as image processing, vision, artificial intelligence or machine learning. This article presents the main algorithms implemented in the Melanie II 2-D PAGE software package. The applications of these algorithms, embodied as the features of the program, are explained in an accompanying article (R. D. Appel et al.; Electrophoresis 1997, 18, 2724-2734).

  9. Signal and image multiresolution analysis

    CERN Document Server

    Ouahabi, Abdelialil

    2012-01-01

    Multiresolution analysis using the wavelet transform has received considerable attention in recent years from researchers in various fields. It is a powerful tool for efficiently representing signals and images at multiple levels of detail with many inherent advantages, including compression, level-of-detail display, progressive transmission, level-of-detail editing, filtering, modeling, fractals and multifractals, etc. This book aims to provide a simple formalization and new clarity on multiresolution analysis, rendering accessible obscure techniques, and merging, unifying or completing

  10. Theoretical analysis of radiographic images by nonstationary Poisson processes

    International Nuclear Information System (INIS)

    Tanaka, Kazuo; Uchida, Suguru; Yamada, Isao.

    1980-01-01

    This paper deals with the noise analysis of radiographic images obtained in the usual fluorescent screen-film system. The theory of nonstationary Poisson processes is applied to the analysis of the radiographic images containing the object information. The ensemble averages, the autocorrelation functions, and the Wiener spectrum densities of the light-energy distribution at the fluorescent screen and of the film optical-density distribution are obtained. The detection characteristics of the system are evaluated theoretically. Numerical examples for a one-dimensional image are shown, and the results are compared with those obtained under the assumption that the object image is related to the background noise by an additive process. (author)

  11. Role of image analysis in quantitative characterisation of nuclear fuel materials

    International Nuclear Information System (INIS)

    Dubey, J.N.; Rao, T.S.; Pandey, V.D.; Majumdar, S.

    2005-01-01

    Image analysis is one of the important techniques widely used for materials characterization. It provides quantitative estimation of the microstructural features present in the material. This information is valuable for establishing the criteria for taking the fuel to high burnup. The Radiometallurgy Division has been carrying out development and fabrication of plutonium-related fuels for different types of reactors, viz. Purnima, Fast Breeder Test Reactor (FBTR), Prototype Fast Breeder Reactor (PFBR), Boiling Water Reactor (BWR), Advanced Heavy Water Reactor (AHWR), Pressurised Heavy Water Reactor (PHWR) and the KAMINI reactor. Image analysis has been carried out on microstructures of PHWR, AHWR, FBTR and KAMINI fuels. Samples were prepared as per the standard ASTM metallographic procedure. Digital images of the microstructure of these specimens were obtained using a CCD camera attached to the optical microscope. These images are stored on a computer and used for the detection and analysis of features of interest with image analysis software. The quantitative image analysis technique has been standardised and used for finding out the type of porosity and its size, shape and distribution in the above sintered oxide and carbide fuels. This technique has also been used for quantitative estimation of the different phases present in KAMINI fuel. The image analysis results are summarised and presented in this paper. (author)

  12. Dissociative Recombination of HD+ - State-to-State Experimental Investigation Using Fragment Imaging and Storage Ring Techniques

    International Nuclear Information System (INIS)

    Amitay, Z.; Baer, A.; Dahan, M.; Levin, J.; Vager, Z.; Zajfman, D.

    1998-01-01

    When a molecular ion collides with a free electron it can capture the electron and dissociate. The resulting process of Dissociative Recombination (DR) is of great significance in a wide variety of plasma environments. In this process, the capture of a free electron leads to the formation of a highly excited state of the neutral molecule, which then dissociates into neutral fragments with kinetic energy and, possibly, internal excitation depending on the energy balance of the reaction. Despite its importance, the DR process is still not completely understood theoretically. This is mainly due to the complexity of the nature and dynamics of highly excited molecular states, especially when several channels are involved, as is usually the situation in DR. From an experimental point of view, direct comparison between experiment and theory requires detailed experimental data, including knowledge of both the initial state of the molecular ion, to which DR is very sensitive, and of the final quantum states of the DR products. Inherent uncertainties in the initial vibrational excitation of the laboratory molecular ions were the main drawback of the experiments conducted over the years to study DR. Substantial progress in the understanding of the DR process was achieved with the introduction (about five years ago) of a new experimental approach based on heavy-ion storage ring techniques. In a storage ring, one can store molecular ions for a time long enough to allow complete radiative deexcitation of the initial electronic and vibrational excitation coming from the ion source. These vibrationally cold ions are then merged with an intense electron beam to measure their DR cross section. A further experimental advance was the inclusion of two- and three-dimensional molecular imaging techniques [1] for the measurement of the branching ratios to different final quantum states of the neutral DR fragments. This talk will

  13. Biomedical Imaging and Computational Modeling in Biomechanics

    CERN Document Server

    Iacoviello, Daniela

    2013-01-01

    This book collects the state-of-the-art and new trends in image analysis and biomechanics. It covers a wide field of scientific and cultural topics, ranging from remodeling of bone tissue under mechanical stimulus up to optimizing the performance of sports equipment, through patient-specific modeling in orthopedics, microtomography and its application in oral and implant research, computational modeling in the field of hip prostheses, image-based model development and analysis of the human knee joint, kinematics of the hip joint, micro-scale analysis of compositional and mechanical properties of dentin, automated techniques for cervical cell image analysis, and biomedical imaging and computational modeling in cardiovascular disease. The book will be of interest to researchers, PhD students, and graduate students with multidisciplinary interests related to image analysis and understanding, medical imaging, biomechanics, simulation and modeling, and experimental analysis.

  14. Analysis of Variance in Statistical Image Processing

    Science.gov (United States)

    Kurz, Ludwik; Hafed Benteftifa, M.

    1997-04-01

    A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.

  15. Towards automatic quantitative analysis of cardiac MR perfusion images

    NARCIS (Netherlands)

    Breeuwer, M.; Quist, M.; Spreeuwers, Lieuwe Jan; Paetsch, I.; Al-Saadi, N.; Nagel, E.

    2001-01-01

    Magnetic Resonance Imaging (MRI) is a powerful technique for imaging cardiovascular diseases. The introduction of cardiovascular MRI into clinical practice is however hampered by the lack of efficient and reliable automatic image analysis methods. This paper focuses on the automatic evaluation of

  16. Liver CT image processing: a short introduction of the technical elements.

    Science.gov (United States)

    Masutani, Y; Uozumi, K; Akahane, Masaaki; Ohtomo, Kuni

    2006-05-01

    In this paper, we describe the technical aspects of image analysis for liver diagnosis and treatment, including the state-of-the-art of liver image analysis and its applications. After discussion on modalities for liver image analysis, various technical elements for liver image analysis such as registration, segmentation, modeling, and computer-assisted detection are covered with examples performed with clinical data sets. Perspective in the imaging technologies is also reviewed and discussed.

  17. Liver CT image processing: A short introduction of the technical elements

    International Nuclear Information System (INIS)

    Masutani, Y.; Uozumi, K.; Akahane, Masaaki; Ohtomo, Kuni

    2006-01-01

    In this paper, we describe the technical aspects of image analysis for liver diagnosis and treatment, including the state-of-the-art of liver image analysis and its applications. After discussion on modalities for liver image analysis, various technical elements for liver image analysis such as registration, segmentation, modeling, and computer-assisted detection are covered with examples performed with clinical data sets. Perspective in the imaging technologies is also reviewed and discussed

  18. Knowledge-based analysis and understanding of 3D medical images

    International Nuclear Information System (INIS)

    Dhawan, A.P.; Juvvadi, S.

    1988-01-01

    The anatomical three-dimensional (3D) medical imaging modalities, such as X-ray CT and MRI, have been well recognized in diagnostic radiology for several years, while the nuclear medicine modalities, such as PET, have just started making a strong impact through functional imaging. Though PET images provide functional information about the human organs, they are hard to interpret because of the lack of anatomical information. The authors' objective is to develop a knowledge-based biomedical image analysis system which can interpret the anatomical images (such as CT). The anatomical information thus obtained can then be used in analyzing PET images of the same patient. This will not only help in interpreting PET images but will also provide a means of studying the correlation between anatomical and functional imaging. This paper presents the preliminary results of the knowledge-based biomedical image analysis system for interpreting CT images of the chest

  19. Image Post-Processing and Analysis. Chapter 17

    Energy Technology Data Exchange (ETDEWEB)

    Yushkevich, P. A. [University of Pennsylvania, Philadelphia (United States)

    2014-09-15

    For decades, scientists have used computers to enhance and analyse medical images. At first, they developed simple computer algorithms to enhance the appearance of interesting features in images, helping humans read and interpret them better. Later, they created more advanced algorithms, where the computer would not only enhance images but also participate in facilitating understanding of their content. Segmentation algorithms were developed to detect and extract specific anatomical objects in images, such as malignant lesions in mammograms. Registration algorithms were developed to align images of different modalities and to find corresponding anatomical locations in images from different subjects. These algorithms have made computer aided detection and diagnosis, computer guided surgery and other highly complex medical technologies possible. Nowadays, the field of image processing and analysis is a complex branch of science that lies at the intersection of applied mathematics, computer science, physics, statistics and biomedical sciences. This chapter will give a general overview of the most common problems in this field and the algorithms that address them.

  20. MR imaging of sickle cell patients: Comparison during pain-free and crisis states

    International Nuclear Information System (INIS)

    Brogdon, B.G.; Williams, J.P.; Mankad, V.N.; Harpen, M.D.; Moore, R.B.

    1986-01-01

    The MR imaging appearance of long bones and femoral heads of patients with sickle cell disease during a pain-free steady state and during a crisis-pain state was compared with the MR imaging appearance of matched healthy control subjects. A distinctive signal change in the marrow spaces of the long bones of patients with sickle cell disease was seen at all times. Distinct signal changes during pain crises were found in the marrow of a significant number of patients. Changes associated with aseptic necrosis, when present, did not differ from changes seen in aseptic necrosis of other causes

  1. Statistical dynamic image reconstruction in state-of-the-art high-resolution PET

    International Nuclear Information System (INIS)

    Rahmim, Arman; Cheng, J-C; Blinder, Stephan; Camborde, Maurie-Laure; Sossi, Vesna

    2005-01-01

    Modern high-resolution PET is now more than ever in need of scrutiny into the nature and limitations of the imaging modality itself as well as image reconstruction techniques. In this work, we have reviewed, analysed and addressed the following three considerations within the particular context of state-of-the-art dynamic PET imaging: (i) the typical average numbers of events per line-of-response (LOR) are now (much) less than unity; (ii) due to the physical and biological decay of the activity distribution, one requires robust and efficient reconstruction algorithms applicable to a wide range of statistics; and (iii) the computational demands in dynamic imaging are much greater (i.e., more frames to be stored and reconstructed). Within the framework of statistical image reconstruction, we have argued theoretically and shown experimentally that the sinogram non-negativity constraint (when using the delayed-coincidence and/or scatter-subtraction techniques) is especially expected to result in an overestimation bias. Subsequently, two schemes are considered: (a) subtraction techniques in which an image non-negativity constraint has been imposed and (b) implementation of random and scatter estimates inside the reconstruction algorithms, thus enabling direct processing of Poisson-distributed prompts. Both techniques are able to remove the aforementioned bias, while the latter, being better conditioned theoretically, is able to exhibit superior noise characteristics. We have also elaborated upon and verified the applicability of the accelerated list-mode image reconstruction method as a powerful solution for accurate, robust and efficient dynamic reconstructions of high-resolution data (as well as a number of additional benefits in the context of state-of-the-art PET)
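
    Scheme (b) above, direct processing of Poisson-distributed prompts, corresponds to the well-known ordinary-Poisson ML-EM update in which the random and scatter estimates enter the forward model rather than being subtracted. The toy NumPy sketch below (dense system matrix, hypothetical sizes) states that update; it is not the authors' accelerated list-mode implementation.

      import numpy as np

      def op_mlem(A, prompts, randoms, scatter, n_iter=20):
          """Ordinary-Poisson ML-EM for emission tomography (toy, dense version).

          A        : system matrix, shape (n_lors, n_voxels)
          prompts  : measured prompt coincidences per LOR (Poisson data)
          randoms, scatter : additive background estimates per LOR
          """
          lam = np.ones(A.shape[1])                 # initial image
          sens = A.sum(axis=0)                      # sensitivity image, A^T 1
          for _ in range(n_iter):
              expected = A @ lam + randoms + scatter          # forward model incl. background
              ratio = prompts / np.maximum(expected, 1e-12)
              lam *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative EM update
          return lam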

  2. Textural Analysis of Fatigue Crack Surfaces: Image Pre-processing

    Directory of Open Access Journals (Sweden)

    H. Lauschmann

    2000-01-01

    For the reconstitution of fatigue crack history, new methods of quantitative microfractography are being developed based on image processing and textural analysis. SEM magnifications between micro- and macrofractography are used. Two image pre-processing operations were suggested and proven to prepare the crack surface images for analytical treatment: 1. Normalization is used to transform the image to a stationary form. Compared to the generally used equalization, it conserves the shape of the brightness distribution and preserves the character of the texture. 2. Binarization is used to transform the grayscale image into a system of thick fibres. An objective criterion for the threshold brightness value was found as the one resulting in the maximum number of objects. Both methods were successfully applied together with the subsequent textural analysis.
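
    A minimal sketch of the objective binarization criterion described above, choosing the threshold that yields the maximum number of objects, written with NumPy/SciPy for a hypothetical normalized crack-surface image (the normalization step itself is not reproduced).

      import numpy as np
      from scipy import ndimage

      def binarize_max_objects(gray, n_levels=64):
          """Binarize at the threshold that yields the maximum number of connected objects."""
          thresholds = np.linspace(gray.min(), gray.max(), n_levels + 2)[1:-1]
          counts = [ndimage.label(gray >= t)[1] for t in thresholds]   # object count per threshold
          best = thresholds[int(np.argmax(counts))]
          return gray >= best, best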

  3. A comparison of autonomous techniques for multispectral image analysis and classification

    Science.gov (United States)

    Valdiviezo-N., Juan C.; Urcid, Gonzalo; Toxqui-Quitl, Carina; Padilla-Vivanco, Alfonso

    2012-10-01

    Multispectral imaging has given rise to important applications related to the classification and identification of objects in a scene. Because multispectral instruments can be used to estimate the reflectance of materials in the scene, these techniques constitute fundamental tools for materials analysis and quality control. In recent years, a variety of algorithms has been developed to work with multispectral data, whose main purpose has been to perform the correct classification of the objects in the scene. The present study gives a brief review of some classical techniques, as well as a novel one, that have been used for such purposes. The use of principal component analysis and K-means clustering as important classification algorithms is discussed here. Moreover, a recent method based on the min-W and max-M lattice auto-associative memories, which was proposed for endmember determination in hyperspectral imagery, is introduced as a classification method. Besides a discussion of their mathematical foundation, we emphasize their main characteristics and the results achieved for two exemplar images composed of objects similar in appearance but spectrally different. The classification results show that the first components computed from principal component analysis can be used to highlight areas with different spectral characteristics. In addition, the use of lattice auto-associative memories provides good results for materials classification even in cases where some similarities appear in the spectral responses.
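
    A minimal illustration of two of the classical techniques discussed, principal component analysis followed by K-means clustering, applied to a hypothetical multispectral cube with scikit-learn; the lattice auto-associative memory method is not sketched here.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans

      def pca_kmeans_classify(cube, n_components=3, n_classes=5):
          """cube : ndarray (H, W, bands); returns a label map of unsupervised classes."""
          h, w, b = cube.shape
          X = cube.reshape(-1, b).astype(float)
          pcs = PCA(n_components=n_components).fit_transform(X)   # highlight spectral structure
          labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(pcs)
          return labels.reshape(h, w)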

  4. Methodology for correlations between doses and detectability in standard mammographic images: application in Sao Paulo state

    International Nuclear Information System (INIS)

    Furquim, Tania Aparecida Correia

    2005-01-01

    Measurements using mammography units were performed in loco in 50 health establishments, randomly sampled from an equipment list of the Cadastro Nacional de Estabelecimentos de Saude (Brazilian National Registry of Health Establishments). For the measurements, six phantoms were utilized to establish different quality criteria and to evaluate doses for different breast thicknesses. Two different methods of measuring average glandular doses (AGD) were applied, and measurements of entrance surface doses (ESD) were also performed, in order to obtain mean values for Sao Paulo State. A study relating the distribution and properties of different mammography unit brands to doses was performed. The sensitometry of the processors allowed a quantification of the film-processing contrast index, A_g, establishing a state mean value. The phantom images allowed the evaluation of detection limits for structures such as microcalcifications, fibers, and masses, and state mean values were established for: spatial resolution (on the surface and at the glandular breast position); image contrast; and expert detection ability from phantom images in two situations: before knowing the image targets and after viewing a target map. The results were then compared to target detections in a laboratory environment. Based on the dose results, A_g, image contrast, maximum contrast, and detection ratio, a relationship between them was determined. The results show that, in Sao Paulo State, mean glandular doses were lower than reference levels considering the Wu method, and close to or above reference levels for all phantoms considering the Dance method. The ESD was always close to or above reference levels. The A_g presented a mean value of (10.42 ± 0.20) for Sao Paulo State, and the image contrast was lower than the required limits established by the phantom manufacturers. The high-contrast resolution showed that the mammography units presented the expected values of line pairs per mm in the State. The detectability evaluation of local

  5. Astronomical Image and Data Analysis

    CERN Document Server

    Starck, J.-L

    2006-01-01

    With information and scale as central themes, this comprehensive survey explains how to handle real problems in astronomical data analysis using a modern arsenal of powerful techniques. It treats those innovative methods of image, signal, and data processing that are proving to be both effective and widely relevant. The authors are leaders in this rapidly developing field and draw upon decades of experience. They have been playing leading roles in international projects such as the Virtual Observatory and the Grid. The book addresses not only students and professional astronomers and astrophysicists, but also serious amateur astronomers and specialists in earth observation, medical imaging, and data mining. The coverage includes chapters or appendices on: detection and filtering; image compression; multichannel, multiscale, and catalog data analytical methods; wavelet transforms, Picard iteration, and software tools. This second edition of Starck and Murtagh's highly appreciated reference again deals with to...

  6. Secure thin client architecture for DICOM image analysis

    Science.gov (United States)

    Mogatala, Harsha V. R.; Gallet, Jacqueline

    2005-04-01

    This paper presents a concept of Secure Thin Client (STC) Architecture for Digital Imaging and Communications in Medicine (DICOM) image analysis over the Internet. The STC Architecture provides in-depth analysis and design of customized reports for DICOM images using drag-and-drop and data warehouse technology. Using a personal computer and a common set of browsing software, STC can be used for analyzing and reporting detailed patient information, type of examination, date, computed tomography (CT) dose index, and other relevant information stored within the image header files as well as in the hospital databases. The STC Architecture is a three-tier architecture. The first tier consists of a drag-and-drop web-based interface and web server, which provide customized analysis and reporting abilities to the users. The second tier consists of an online analytical processing (OLAP) server and database system, which serves fast, real-time, aggregated multi-dimensional data using OLAP technology. The third tier consists of a smart algorithm-based software program which extracts DICOM tags from CT images (in this particular application), irrespective of CT vendor, and transfers these tags into a secure database system. This architecture provides the Winnipeg Regional Health Authority (WRHA) with quality indicators for CT examinations in its hospitals. It also provides health care professionals with an analytical tool to optimize radiation dose and image quality parameters. The information is provided to the user via a secure socket layer (SSL) and role-based security criteria over the Internet. Although this particular application has been developed for the WRHA, this paper also discusses the effort to extend the Architecture to other hospitals in the region. Any DICOM tag from any imaging modality could be tracked with this software.
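
    A hedged sketch of the kind of tag extraction performed by the third tier is given below, using pydicom and an SQLite table as stand-ins. The tag list, database schema and file handling are illustrative assumptions, not the WRHA implementation.

```python
import sqlite3
import pydicom

# Illustrative tag selection; CTDIvol is the DICOM keyword commonly used for the CT dose index.
TAGS = ["PatientID", "StudyDate", "Modality", "Manufacturer", "CTDIvol"]

def extract_tags(dicom_path):
    # Header-only read keeps the extraction fast and vendor-agnostic.
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
    return {tag: getattr(ds, tag, None) for tag in TAGS}

def store(records, db_path="ct_tags.db"):
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS ct_tags (%s)" % ", ".join(TAGS))
    con.executemany(
        "INSERT INTO ct_tags VALUES (%s)" % ", ".join("?" * len(TAGS)),
        [tuple(str(rec[tag]) for tag in TAGS) for rec in records],
    )
    con.commit()
    con.close()
```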

  7. Microarray BASICA: Background Adjustment, Segmentation, Image Compression and Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jianping Hua

    2004-01-01

    Full Text Available This paper presents microarray BASICA: an integrated image processing tool for background adjustment, segmentation, image compression, and analysis of cDNA microarray images. BASICA uses a fast Mann-Whitney test-based algorithm to segment cDNA microarray images, and performs postprocessing to eliminate segmentation irregularities. The segmentation results, along with the foreground and background intensities obtained with the background adjustment, are then used for independent compression of the foreground and background. We introduce a new distortion measurement for cDNA microarray image compression and devise a coding scheme by modifying the embedded block coding with optimized truncation (EBCOT) algorithm (Taubman, 2000) to achieve optimal rate-distortion performance in lossy coding while still maintaining outstanding lossless compression performance. Experimental results show that the bit rate required to ensure sufficiently accurate gene expression measurement varies and depends on the quality of the cDNA microarray images. For homogeneously hybridized cDNA microarray images, BASICA is able to provide, at a bit rate as low as 5 bpp, gene expression data that are 99% in agreement with those of the original 32 bpp images.

  8. Determination of fish gender using fractal analysis of ultrasound images

    DEFF Research Database (Denmark)

    McEvoy, Fintan J.; Tomkiewicz, Jonna; Støttrup, Josianne

    2009-01-01

    The gender of cod Gadus morhua can be determined by considering the complexity in their gonadal ultrasonographic appearance. The fractal dimension (DB) can be used to describe this feature in images. B-mode gonadal ultrasound images in 32 cod, where gender was known, were collected. Fractal...... by subjective analysis alone. The mean (and standard deviation) of the fractal dimension DB for male fish was 1.554 (0.073) while for female fish it was 1.468 (0.061); the difference was statistically significant (P=0.001). The area under the ROC curve was 0.84 indicating the value of fractal analysis in gender...... result. Fractal analysis is useful for gender determination in cod. This or a similar form of analysis may have wide application in veterinary imaging as a tool for quantification of complexity in images...
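
    The fractal dimension DB reported above can be estimated with a standard box-counting procedure; a minimal sketch is given below. It is a generic stand-in rather than the authors' software, and the binarisation step and box sizes are illustrative assumptions.

```python
import numpy as np

def box_counting_dimension(binary_image):
    """Estimate DB as the slope of log(count) versus log(1/box size)."""
    sizes = [2, 4, 8, 16, 32, 64]
    counts = []
    for s in sizes:
        h = (binary_image.shape[0] // s) * s
        w = (binary_image.shape[1] // s) * s
        blocks = binary_image[:h, :w].reshape(h // s, s, w // s, s)
        # A box is counted if it contains at least one foreground pixel.
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

if __name__ == "__main__":
    img = np.random.rand(256, 256) > 0.98   # sparse synthetic pattern, e.g. an edge map
    print("Estimated DB:", round(box_counting_dimension(img), 3))
```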

  9. Web Based Distributed Coastal Image Analysis System, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This project develops Web based distributed image analysis system processing the Moderate Resolution Imaging Spectroradiometer (MODIS) data to provide decision...

  10. Ultrahigh-resolution imaging of the human brain with phase-cycled balanced steady-state free precession at 7 T.

    Science.gov (United States)

    Zeineh, Michael M; Parekh, Mansi B; Zaharchuk, Greg; Su, Jason H; Rosenberg, Jarrett; Fischbein, Nancy J; Rutt, Brian K

    2014-05-01

    The objectives of this study were to acquire ultra-high resolution images of the brain using balanced steady-state free precession (bSSFP) at 7 T and to identify the potential utility of this sequence. Eight volunteers participated in this study after providing informed consent. Each volunteer was scanned with 8 phase cycles of bSSFP at 0.4-mm isotropic resolution using 0.5 number of excitations and 2-dimensional parallel acceleration of 1.75 × 1.75. Each phase cycle required 5 minutes of scanning, with pauses between the phase cycles allowing short periods of rest. The individual phase cycles were aligned and then averaged. The same volunteers underwent scanning using 3-dimensional (3D) multiecho gradient recalled echo at 0.8-mm isotropic resolution, 3D Cube T2 at 0.7-mm isotropic resolution, and thin-section coronal oblique T2-weighted fast spin echo at 0.22 × 0.22 × 2.0-mm resolution for comparison. Two neuroradiologists assessed image quality and potential research and clinical utility. The volunteers generally tolerated the scan sessions well, and composite high-resolution bSSFP images were produced for each volunteer. Rater analysis demonstrated that bSSFP had a superior 3D visualization of the microarchitecture of the hippocampus, very good contrast to delineate the borders of the subthalamic nucleus, and relatively good B1 homogeneity throughout. In addition to an excellent visualization of the cerebellum, subtle details of the brain and skull base anatomy were also easier to identify on the bSSFP images, including the line of Gennari, membrane of Liliequist, and cranial nerves. Balanced steady-state free precession had a strong iron contrast similar to or better than the comparison sequences. However, cortical gray-white contrast was significantly better with Cube T2 and T2-weighted fast spin echo. Balanced steady-state free precession can facilitate ultrahigh-resolution imaging of the brain. Although total imaging times are long, the individually short

  11. Working to make an image: an analysis of three Philip Morris corporate image media campaigns.

    Science.gov (United States)

    Szczypka, Glen; Wakefield, Melanie A; Emery, Sherry; Terry-McElrath, Yvonne M; Flay, Brian R; Chaloupka, Frank J

    2007-10-01

    To describe the nature and timing of, and population exposure to, Philip Morris USA's three explicit corporate image television advertising campaigns and to explore the motivations behind each campaign. Analysis of television ratings from the largest 75 media markets in the United States, which measure the reach and frequency of population exposure to advertising; copies of all televised commercials produced by Philip Morris; and tobacco industry documents, which provide insights into the specific goals of each campaign. Household exposure to "Working to Make a Difference: the People of Philip Morris" averaged 5.37 ads/month for 27 months from 1999-2001; the "Tobacco Settlement" campaign averaged 10.05 ads/month for three months in 2000; and "PMUSA" averaged 3.11 ads/month for the last six months of 2003. The percentage of advertising exposure purchased in news programming in order to reach opinion leaders increased over the three campaigns, at 20%, 39% and 60%, respectively. These public relations campaigns were designed to counter negative images, increase brand recognition, and improve the financial viability of the company. Only one early media campaign focused on issues other than tobacco, whereas subsequent campaigns were specifically concerned with tobacco issues and more targeted to opinion leaders. The size and timing of the advertising buys appeared to be strategically crafted to maximise advertising exposure for these population subgroups during critical threats to Philip Morris's public image.

  12. Phase contribution of image potential on empty quantum well States in pb islands on the cu(111) surface.

    Science.gov (United States)

    Yang, M C; Lin, C L; Su, W B; Lin, S P; Lu, S M; Lin, H Y; Chang, C S; Hsu, W K; Tsong, Tien T

    2009-05-15

    We use scanning tunneling spectroscopy to explore the quantum well states in Pb islands grown on a Cu(111) surface. Our observations demonstrate that the empty quantum well states, whose energy levels lie more than 1.2 eV above the Fermi level, are significantly affected by the image potential. As the quantum number increases, the energy separation between adjacent states shrinks rather than widens, contrary to the prediction for a square potential well. By simply introducing a phase factor to account for the effect of the image potential, the shrinking of the energy separation can be reasonably explained with the phase accumulation model. The model also reveals that there exists a quantum regime above the Pb surface in which the image potential vanishes. Moreover, the quasi-image-potential state in the tunneling gap is quenched because of the existence of the quantum well states.

  13. Imaging of continuum states of the He22+ quasimolecule

    International Nuclear Information System (INIS)

    Schmidt, L. Ph. H.; Schoeffler, M. S.; Stiebing, K. E.; Schmidt-Boecking, H.; Doerner, R.; Afaneh, F.; Weber, Th.

    2007-01-01

    Using cold target recoil ion momentum spectroscopy (COLTRIMS) we have investigated the production of one free electron in slow He2+ + He(1s2) collisions. At projectile velocities between 0.6 and 1.06 a.u. (9-28 keV/u), the fully differential cross section was measured state-selectively with respect to the second electron, which is bound either at the target or the projectile. We provide a comprehensive data set comprising state-selective total cross sections, scattering-angle-dependent single differential cross sections, and fully differential cross sections. We show that the momentum distributions of the electron in the continuum image the relevant molecular orbitals for the reaction channel under consideration. By choosing the bound-electron final state at the target or projectile and the impact parameter, we can select these orbitals and manipulate their relative phase

  14. Time series analysis of brain regional volume by MR image

    International Nuclear Information System (INIS)

    Tanaka, Mika; Tarusawa, Ayaka; Nihei, Mitsuyo; Fukami, Tadanori; Yuasa, Tetsuya; Wu, Jin; Ishiwata, Kiichi; Ishii, Kenji

    2010-01-01

    The present study proposed a methodology for time series analysis of the volumes of the frontal, parietal, temporal and occipital lobes and the cerebellum, because such volumetric reports along the course of an individual's aging have scarcely been presented. The subjects analyzed were brain images of 20 healthy volunteers (2 males and 18 females, average age 69.0 y), for whom T1-weighted 3D SPGR (spoiled gradient recalled in the steady state) acquisitions with a GE SIGNA EXCITE HD 1.5T machine were conducted 4 times over a time series of 42-50 months. The image size was 256 x 256 x (86-124) voxels with a digitization level of 16 bits. As the template for the regions, the standard gray matter atlas (icbn452 atlas probability gray) and its labeled version (icbn.Labels), provided by the UCLA Laboratory of Neuro Imaging, were used for individual standardization. Segmentation, normalization and coregistration were performed with the MR imaging software SPM8 (Statistical Parametric Mapping 8). Regional volumes were calculated as the percentage ratio of their voxel counts to the whole-brain voxel count. It was found that the regional volumes decreased with aging in all of the lobes examined and in the cerebellum, at average rates per year of -0.11, -0.07, -0.04, -0.02, and -0.03 percent, respectively. The procedure for calculating the regional volumes, which has hitherto been operated manually, can be conducted automatically for an individual brain using the standard atlases above. (T.T.)
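
    A minimal sketch of the volume calculation described above (regional voxel count as a percentage of the whole-brain voxel count) is given below. The label image path and the label values are illustrative assumptions and do not follow the atlas naming used in the study.

```python
import numpy as np
import nibabel as nib

# Hypothetical label values for an already segmented, atlas-labelled volume.
REGION_LABELS = {"frontal": 1, "parietal": 2, "temporal": 3,
                 "occipital": 4, "cerebellum": 5}

def regional_volume_percent(label_image_path):
    labels = np.asarray(nib.load(label_image_path).dataobj)
    brain_voxels = np.count_nonzero(labels)          # all labelled (non-zero) voxels
    return {name: 100.0 * np.count_nonzero(labels == value) / brain_voxels
            for name, value in REGION_LABELS.items()}
```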

  15. Body image disturbance in adults treated for cancer - a concept analysis.

    Science.gov (United States)

    Rhoten, Bethany A

    2016-05-01

    To report an analysis of the concept of body image disturbance in adults who have been treated for cancer as a phenomenon of interest to nurses. Although the concept of body image disturbance has been clearly defined in adolescents and adults with eating disorders, adults who have been treated for cancer may also experience body image disturbance. In this context, the concept of body image disturbance has not been clearly defined. Concept analysis. PubMed, Psychological Information Database and Cumulative Index of Nursing and Allied Health Literature were searched for publications from 1937 - 2015. Search terms included body image, cancer, body image disturbance, adult and concept analysis. Walker and Avant's 8-step method of concept analysis was used. The defining attributes of body image disturbance in adults who have been treated for cancer are: (1) self-perception of a change in appearance and displeasure with the change or perceived change in appearance; (2) decline in an area of function; and (3) psychological distress regarding changes in appearance and/or function. This concept analysis provides a foundation for the development of multidimensional assessment tools and interventions to alleviate body image disturbance in this population. A better understanding of body image disturbance in adults treated for cancer will assist nurses and other clinicians in identifying this phenomenon and nurse scientists in developing instruments that accurately measure this condition, along with interventions that will promote a better quality of life for survivors. © 2016 John Wiley & Sons Ltd.

  16. Methods in quantitative image analysis.

    Science.gov (United States)

    Oberholzer, M; Ostreicher, M; Christen, H; Brühlmann, M

    1996-05-01

    The main steps of image analysis are image capturing, image storage (compression), correction of imaging defects (e.g. non-uniform illumination, electronic noise, glare effect), image enhancement, segmentation of objects in the image and image measurements. Digitisation is made by a camera. The most modern types include a frame-grabber, converting the analog signal into digital (numerical) information. The numerical information consists of the grey values describing the brightness of every point within the image, named a pixel. The information is stored in bits. Eight bits are summarised in one byte. Therefore, grey values can take one of 256 (2^8) values, between 0 and 255. The human eye seems to be quite content with a display of 6-bit images (corresponding to 64 different grey values). In a digitised image, the pixel grey values can vary within regions that are uniform in the original scene: the image is noisy. The noise is mainly manifested in the background of the image. For an optimal discrimination between different objects or features in an image, uniformity of illumination across the whole image is required. These defects can be minimised by shading correction [subtraction of a background (white) image from the original image, pixel by pixel, or division of the original image by the background image]. The brightness of an image represented by its grey values can be analysed for every single pixel or for a group of pixels. The most frequently used pixel-based image descriptors are optical density, integrated optical density, the histogram of the grey values, mean grey value and entropy. The distribution of the grey values within an image is one of its most important characteristics. However, the histogram gives no information about the texture of the image. The simplest way to improve the contrast of an image is to expand the brightness scale by spreading the histogram out to the full available range. Rules for transforming the grey value
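
    The shading correction described above (pixel-wise subtraction of a background image, or division by it) can be sketched in a few lines. The rescaling to the 8-bit range and the epsilon guard are illustrative choices, not part of the original text.

```python
import numpy as np

def shading_correct(image, background, mode="divide"):
    """Correct non-uniform illumination with a background (white) image."""
    image = image.astype(float)
    background = background.astype(float)
    if mode == "subtract":
        corrected = image - background
    else:
        # Division by the background image; the epsilon avoids division by zero
        # and the mean restores the overall brightness level.
        corrected = image / np.maximum(background, 1e-6) * background.mean()
    # Clip back to the 8-bit grey-value range (0-255) discussed in the text.
    return np.clip(corrected, 0, 255).astype(np.uint8)
```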

  17. Image analysis and machine learning for detecting malaria.

    Science.gov (United States)

    Poostchi, Mahdieh; Silamut, Kamolrat; Maude, Richard J; Jaeger, Stefan; Thoma, George

    2018-04-01

    Malaria remains a major burden on global health, with roughly 200 million cases worldwide and more than 400,000 deaths per year. Besides biomedical research and political efforts, modern information technology is playing a key role in many attempts at fighting the disease. In particular, inadequate malaria diagnosis has been one of the barriers to successful mortality reduction. To improve diagnosis, image analysis software and machine learning methods have been used to quantify parasitemia in microscopic blood slides. This article gives an overview of these techniques and discusses the current developments in image analysis and machine learning for microscopic malaria diagnosis. We organize the different approaches published in the literature according to the techniques used for imaging, image preprocessing, parasite detection and cell segmentation, feature computation, and automatic cell classification. Readers will find the different techniques listed in tables, with the relevant articles cited next to them, for both thin and thick blood smear images. We also discuss the latest developments in sections devoted to deep learning and smartphone technology for future malaria diagnosis. Published by Elsevier Inc.

  18. Basic strategies for valid cytometry using image analysis

    NARCIS (Netherlands)

    Jonker, A.; Geerts, W. J.; Chieco, P.; Moorman, A. F.; Lamers, W. H.; van Noorden, C. J.

    1997-01-01

    The present review provides a starting point for setting up an image analysis system for quantitative densitometry and absorbance or fluorescence measurements in cell preparations, tissue sections or gels. Guidelines for instrumental settings that are essential for the valid application of image

  19. Chromatic Image Analysis For Quantitative Thermal Mapping

    Science.gov (United States)

    Buck, Gregory M.

    1995-01-01

    Chromatic image analysis system (CIAS) developed for use in noncontact measurements of temperatures on aerothermodynamic models in hypersonic wind tunnels. Based on concept of temperature coupled to shift in color spectrum for optical measurement. Video camera images fluorescence emitted by phosphor-coated model at two wavelengths. Temperature map of model then computed from relative brightnesses in video images of model at those wavelengths. Eliminates need for intrusive, time-consuming, contact temperature measurements by gauges, making it possible to map temperatures on complex surfaces in timely manner and at reduced cost.
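
    A toy illustration of the two-wavelength principle described above is sketched below: the pixel-wise ratio of the two fluorescence images is converted to temperature through a calibration curve. The calibration points are invented placeholders, not phosphor data from the system described.

```python
import numpy as np

# Hypothetical calibration: fluorescence intensity ratio -> temperature (K).
CAL_RATIO = np.array([0.2, 0.5, 1.0, 1.8, 3.0])
CAL_TEMP = np.array([300.0, 350.0, 400.0, 450.0, 500.0])

def temperature_map(img_wavelength1, img_wavelength2):
    # Pixel-wise brightness ratio between the two wavelength images.
    ratio = img_wavelength1.astype(float) / np.maximum(img_wavelength2.astype(float), 1e-6)
    # Convert the ratio to temperature by interpolating on the calibration curve.
    return np.interp(ratio, CAL_RATIO, CAL_TEMP)
```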

  20. Mathematical foundations of image processing and analysis

    CERN Document Server

    Pinoli, Jean-Charles

    2014-01-01

    Mathematical Imaging is currently a rapidly growing field in applied mathematics, with an increasing need for theoretical mathematics. This book, the second of two volumes, emphasizes the role of mathematics as a rigorous basis for imaging sciences. It provides a comprehensive and convenient overview of the key mathematical concepts, notions, tools and frameworks involved in the various fields of gray-tone and binary image processing and analysis, by proposing a large, but coherent, set of symbols and notations, a complete list of subjects and a detailed bibliography. It establishes a bridg

  1. Feasibility and limitation of constructive interference in steady-state (CISS) MR imaging in neonates with lumbosacral myeloschisis

    Energy Technology Data Exchange (ETDEWEB)

    Hashiguchi, Kimiaki; Morioka, Takato; Yoshida, Fumiaki; Miyagi, Yasushi; Nagata, Shinji; Sasaki, Tomio [Kyushu University, Department of Neurosurgery, Graduate School of Medical Sciences, Fukuoka (Japan); Mihara, Futoshi; Yoshiura, Takashi [Kyushu University, Department of Clinical Radiology, Graduate School of Medical Sciences, Fukuoka (Japan)

    2007-07-15

    The aim of this study was to evaluate three-dimensional Fourier transformation-constructive interference in steady-state (CISS) imaging as a preoperative anatomical evaluation of the relationship between the placode, spinal nerve roots, CSF space, and the myelomeningocele sac in neonates with lumbosacral myeloschisis. Five consecutive patients with lumbosacral myeloschisis were included in this study. Magnetic resonance (MR) CISS, conventional T1-weighted (T1-W) and T2-weighted (T2-W) images were acquired on the day of birth to compare the anatomical findings with each sequence. We also performed curvilinear reconstruction of the CISS images, which can be reconstructed along the curved spinal cord and neural placode. Neural placodes were demonstrated in two patients on T1-W images and in three patients on T2-W images. T2-W images revealed a small number of nerve roots in two patients, while no nerve roots were demonstrated on T1-W images. In contrast, CISS images clearly demonstrated neural placodes and spinal nerve roots in four patients. These findings were in accordance with intraoperative findings. Curvilinear CISS images demonstrated the neuroanatomy around the myeloschisis in one slice. The resulting images were degraded by a band artifact that obstructed fine anatomical analysis of the nerve roots in the ventral CSF space. The placode and nerve roots could not be visualized in one patient in whom the CSF space was narrow due to the collapse of the myelomeningocele sac. MR CISS imaging is superior to T1-W and T2-W imaging for demonstrating the neural placode and nerve roots, although problems remain in terms of artifacts. (orig.)

  2. Low Cost Desktop Image Analysis Workstation With Enhanced Interactive User Interface

    Science.gov (United States)

    Ratib, Osman M.; Huang, H. K.

    1989-05-01

    A multimodality picture archiving and communication system (PACS) is in routine clinical use in the UCLA Radiology Department. Several types of workstations are currently implemented for this PACS. Among them, the Apple Macintosh II personal computer was recently chosen to serve as a desktop workstation for display and analysis of radiological images. This personal computer was selected mainly because of its extremely friendly user interface, its popularity among the academic and medical community, and its low cost. In comparison to other microcomputer-based systems, the Macintosh II offers the following advantages: the extreme standardization of its user interface, file system and networking, and the availability of a very large variety of commercial software packages. In the current configuration the Macintosh II operates as a stand-alone workstation where images are imported from a centralized PACS server through an Ethernet network using a standard TCP-IP protocol, and stored locally on magnetic disk. The use of high-resolution screens (1024x768 pixels x 8 bits) offers sufficient performance for image display and analysis. We focused our project on the design and implementation of a variety of image analysis algorithms ranging from automated structure and edge detection to sophisticated dynamic analysis of sequential images. Specific analysis programs were developed for ultrasound images, digitized angiograms, MRI and CT tomographic images, and scintigraphic images.

  3. Automated detection of regions of interest for tissue microarray experiments: an image texture analysis

    International Nuclear Information System (INIS)

    Karaçali, Bilge; Tözeren, Aydin

    2007-01-01

    Recent research with tissue microarrays has led to rapid progress toward quantifying the expression of large sets of biomarkers in normal and diseased tissue. However, standard procedures for sampling tissue for molecular profiling have not yet been established. This study presents a high-throughput analysis of texture heterogeneity on breast tissue images for the purpose of identifying regions of interest in the tissue for molecular profiling via tissue microarray technology. The image texture of breast histology slides was described in terms of three parameters: the percentage of area occupied in an image block by chromatin (B), the percentage occupied by stroma-like regions (P), and a statistical heterogeneity index H commonly used in image analysis. Texture parameters were defined and computed for each of the thousands of image blocks in our dataset using both gray-scale and color segmentation. The image blocks were then classified into three categories using the texture feature parameters in a novel statistical learning algorithm. These categories are as follows: image blocks specific to normal breast tissue, blocks specific to cancerous tissue, and image blocks that are non-specific to normal and disease states. Gray-scale and color segmentation techniques led to the identification of the same regions in histology slides as cancer-specific. Moreover, the image blocks identified as cancer-specific belonged to those cell-crowded regions in whole-section image slides that were marked by two pathologists as regions of interest for further histological studies. These results indicate the high efficiency of our automated method for identifying pathologic regions of interest on histology slides. Automation of critical region identification will help minimize the inter-rater variability among different raters (pathologists), as the hundreds of tumors that are used to develop an array have typically been evaluated (graded) by different pathologists. The region of interest
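
    A generic sketch of the three block-level parameters named above is given below, with simple intensity thresholds standing in for the chromatin and stroma segmentation and a grey-level entropy standing in for the heterogeneity index H; the thresholds, bin count and entropy choice are illustrative assumptions rather than the published definitions.

```python
import numpy as np

def block_texture_features(gray_block, chromatin_thr=80, stroma_thr=180):
    """Return (B, P, H) for one grey-scale image block (values 0-255)."""
    n = gray_block.size
    b = 100.0 * np.count_nonzero(gray_block < chromatin_thr) / n   # chromatin-like area
    p = 100.0 * np.count_nonzero(gray_block > stroma_thr) / n      # stroma-like area
    # Shannon entropy of the grey-level histogram as a heterogeneity measure.
    hist, _ = np.histogram(gray_block, bins=32, range=(0, 255))
    probs = hist[hist > 0] / hist.sum()
    h = -np.sum(probs * np.log2(probs))
    return b, p, h
```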

  4. Morphometric image analysis of giant vesicles

    DEFF Research Database (Denmark)

    Husen, Peter Rasmussen; Arriaga, Laura; Monroy, Francisco

    2012-01-01

    We have developed a strategy to determine lengths and orientations of tie lines in the coexistence region of liquid-ordered and liquid-disordered phases of cholesterol-containing ternary lipid mixtures. The method combines confocal-fluorescence-microscopy image stacks of giant unilamellar vesicles...... (GUVs), a dedicated 3D-image analysis, and a quantitative analysis based on equilibrium thermodynamic considerations. This approach was tested in GUVs composed of 1,2-dioleoyl-sn-glycero-3-phosphocholine/1,2-palmitoyl-sn-glycero-3-phosphocholine/cholesterol. In general, our results show a reasonable...... agreement with previously reported data obtained by other methods. For example, our computed tie lines were found to be nonhorizontal, indicating a difference in cholesterol content in the coexisting phases. This new, to our knowledge, analytical strategy offers a way to further exploit fluorescence...

  5. Standardization of Image Quality Analysis – ISO 19264

    DEFF Research Database (Denmark)

    Wüller, Dietmar; Kejser, Ulla Bøgvad

    2016-01-01

    There are a variety of image quality analysis tools available for the archiving world, which are based on different test charts and analysis algorithms. ISO has formed a working group in 2012 to harmonize these approaches and create a standard way of analyzing the image quality for archiving...... systems. This has resulted in three documents that have been or are going to be published soon. ISO 19262 defines the terms used in the area of image capture to unify the language. ISO 19263 describes the workflow issues and provides detailed information on how the measurements are done. Last...... but not least ISO 19264 describes the measurements in detail and provides aims and tolerance levels for the different aspects. This paper will present the new ISO 19264 technical specification to analyze image quality based on a single capture of a multi-pattern test chart, and discuss the reasoning behind its...

  6. Telemetry Timing Analysis for Image Reconstruction of Kompsat Spacecraft

    Directory of Open Access Journals (Sweden)

    Jin-Ho Lee

    2000-06-01

    Full Text Available The KOMPSAT (KOrea Multi-Purpose SATellite) has two optical imaging instruments called EOC (Electro-Optical Camera) and OSMI (Ocean Scanning Multispectral Imager). The image data of these instruments are transmitted to the ground station and restored correctly after post-processing with the telemetry data transferred from the KOMPSAT spacecraft. The major timing information of the KOMPSAT is the OBT (On-Board Time), which is formatted by the on-board computer of the spacecraft, based on the 1 Hz sync pulse coming from the GPS receiver. The OBT is transmitted to the ground station with the house-keeping telemetry data of the spacecraft, while it is distributed to the instruments via the 1553B data bus for synchronization during imaging and formatting. The timing information contained in the spacecraft telemetry data has a direct relation to the image data of the instruments, and must be well understood to obtain a more accurate image. This paper addresses the timing analysis of the KOMPSAT spacecraft and instruments, including the gyro data timing analysis, for the correct restoration of the EOC and OSMI image data at the ground station.

  7. Analysis and clinical usefullness of cardiac ECT images

    International Nuclear Information System (INIS)

    Hayashi, Makoto; Kagawa, Masaaki; Yamada, Yukinori

    1983-01-01

    We evaluated myocardial ECT images and ECG-gated cardiac blood-pool ECT images both in basic studies and clinically. ROC curves were used to evaluate the accuracy of diagnosing myocardial infarction (MI). The diagnostic accuracy for MI was superior with myocardial ECT images, and ECT assessment does not require special skill or experience. With ECT, the whole extent of an MI defect can be observed better than with planar images. The left ventricular end-diastolic volume (LVEDV) estimated from ECT agreed with the contrast ventriculography volume, which is one step toward automatic analysis of cardiac volume. (author)

  8. Fourier analysis: from cloaking to imaging

    Science.gov (United States)

    Wu, Kedi; Cheng, Qiluan; Wang, Guo Ping

    2016-04-01

    Regarding invisibility cloaks as an optical imaging system, we present a Fourier approach that analytically unifies both Pendry cloaks and complementary media-based invisibility cloaks into one kind of cloak. By synthesizing different transfer functions, we can construct different devices to realize a series of interesting functions such as hiding objects (events), creating illusions, and performing perfect imaging. In this article, we give a brief review of recent work applying the Fourier approach to the analysis of invisibility cloaks and of optical imaging through scattering layers. We show that, to construct devices that conceal an object, no materials with extreme properties are required, making most, if not all, of the above functions realizable using naturally occurring materials. As examples, we experimentally verify a method of directionally hiding distant objects and creating illusions using all-dielectric materials, and further demonstrate a non-invasive method of imaging objects completely hidden by scattering layers.

  9. A software package for biomedical image processing and analysis

    International Nuclear Information System (INIS)

    Goncalves, J.G.M.; Mealha, O.

    1988-01-01

    The decreasing cost of computing power and the introduction of low-cost imaging boards justify the increasing number of applications of digital image processing techniques in the area of biomedicine. There is, however, a large software gap to be filled between the application and the equipment. The requirements to bridge this gap are twofold: good knowledge of the hardware provided and its interface to the host computer, and expertise in digital image processing and analysis techniques. A software package incorporating these two requirements was developed using the C programming language, in order to create a user-friendly image processing programming environment. The software package can be considered in two different ways: as a data structure adapted to image processing and analysis, which acts as the backbone and the standard of communication for all the software; and as a set of routines implementing the basic algorithms used in image processing and analysis. Hardware dependency is restricted to a single module upon which all hardware calls are based. The data structure that was built has four main features: it is hierarchical, open, object oriented, and has object-dependent dimensions. Considering the vast amount of memory needed by imaging applications and the memory available in small imaging systems, an effective image memory management scheme was implemented. This software package has been in use for more than one and a half years by users with different applications. It proved to be an excellent tool for helping people become familiar with the system, and for standardizing and exchanging software, while preserving the flexibility that allows for users' specific implementations. The philosophy of the software package is discussed and the data structure that was built is described in detail

  10. Terahertz spectroscopy and imaging for cultural heritage management: state of art and perspectives

    Science.gov (United States)

    Catapano, Ilaria; Soldovieri, Francesco

    2014-05-01

    Non-invasive diagnostic tools able to provide information on the materials and preservation state of artworks are crucial to help conservators, archaeologists and anthropologists plan and carry out their tasks properly. In this frame, technological solutions exploiting Terahertz (THz) radiation, i.e., working at frequencies ranging from 0.1 to 10 THz, are currently attracting considerable attention as complementary techniques to classical analysis methodologies based on electromagnetic radiation from X-rays to the mid-infrared [1]. The main advantage offered by THz spectroscopy and imaging systems is their capability of providing information useful to determine the construction modality, the life history and the conservation state of artworks, as well as to identify previous restoration actions [1,2]. In particular, unlike mid- and near-infrared spectroscopy, which provides fingerprint absorption spectra depending on intramolecular behavior, THz spectroscopy is related to the structure of the molecules of the investigated object. Hence, it can discriminate, for instance, the different materials mixed in a paint [1,2]. Moreover, THz radiation is able to penetrate several materials which are opaque at both visible and infrared wavelengths, such as varnish, paint, plaster, paper, wood, plastic, and so on. Accordingly, it is useful to detect hidden objects and to characterize the inner structure of the artwork under test, even in depth, while avoiding core drillings. In this frame, THz systems allow us to discriminate different layers of materials present in artworks such as paintings, to obtain images providing information on the construction technique, and to discover risk factors affecting the preservation state, such as non-visible cracks, hidden molds and air gaps between the paint layer and the underlying structure. Furthermore, as they adopt non-ionizing radiation, THz systems offer the non-trivial benefit of negligible long term risks to the

  11. Dysconnection topography in schizophrenia revealed with state-space analysis of EEG.

    Science.gov (United States)

    Jalili, Mahdi; Lavoie, Suzie; Deppen, Patricia; Meuli, Reto; Do, Kim Q; Cuénod, Michel; Hasler, Martin; De Feo, Oscar; Knyazeva, Maria G

    2007-10-24

    The dysconnection hypothesis has been proposed to account for pathophysiological mechanisms underlying schizophrenia. Widespread structural changes suggesting abnormal connectivity in schizophrenia have been imaged. A functional counterpart of the structural maps would be the EEG synchronization maps. However, due to the limits of currently used bivariate methods, functional correlates of dysconnection are limited to the isolated measurements of synchronization between preselected pairs of EEG signals. To reveal a whole-head synchronization topography in schizophrenia, we applied a new method of multivariate synchronization analysis called S-estimator to the resting dense-array (128 channels) EEG obtained from 14 patients and 14 controls. This method determines synchronization from the embedding dimension in a state-space domain based on the theoretical consequence of the cooperative behavior of simultaneous time series-the shrinking of the state-space embedding dimension. The S-estimator imaging revealed a specific synchronization landscape in schizophrenia patients. Its main features included bilaterally increased synchronization over temporal brain regions and decreased synchronization over the postcentral/parietal region neighboring the midline. The synchronization topography was stable over the course of several months and correlated with the severity of schizophrenia symptoms. In particular, direct correlations linked positive, negative, and general psychopathological symptoms to the hyper-synchronized temporal clusters over both hemispheres. Along with these correlations, general psychopathological symptoms inversely correlated within the hypo-synchronized postcentral midline region. While being similar to the structural maps of cortical changes in schizophrenia, the S-maps go beyond the topography limits, demonstrating a novel aspect of the abnormalities of functional cooperation: namely, regionally reduced or enhanced connectivity. The new method of
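
    A simplified, generic computation in the spirit of the S-estimator described above is sketched below: synchronization is scored from the eigenvalue spectrum of the channel correlation matrix via the entropy of the normalised eigenvalues. This is a stand-in following the commonly cited formulation, not the authors' state-space code.

```python
import numpy as np

def s_estimator(eeg):
    """eeg: array of shape (channels, samples). Returns a score in [0, 1]."""
    m = eeg.shape[0]
    corr = np.corrcoef(eeg)                     # channel-by-channel correlation matrix
    eigvals = np.linalg.eigvalsh(corr)
    lam = np.clip(eigvals, 1e-12, None) / m     # normalised spectrum, sums to ~1
    entropy = -np.sum(lam * np.log(lam))
    return 1.0 - entropy / np.log(m)            # ~0: no synchronization, ~1: full synchronization

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    common = rng.standard_normal(1000)
    eeg = common + 0.5 * rng.standard_normal((8, 1000))   # partially synchronised channels
    print(round(s_estimator(eeg), 3))
```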

  12. Dysconnection topography in schizophrenia revealed with state-space analysis of EEG.

    Directory of Open Access Journals (Sweden)

    Mahdi Jalili

    2007-10-01

    Full Text Available The dysconnection hypothesis has been proposed to account for pathophysiological mechanisms underlying schizophrenia. Widespread structural changes suggesting abnormal connectivity in schizophrenia have been imaged. A functional counterpart of the structural maps would be the EEG synchronization maps. However, due to the limits of currently used bivariate methods, functional correlates of dysconnection are limited to the isolated measurements of synchronization between preselected pairs of EEG signals. To reveal a whole-head synchronization topography in schizophrenia, we applied a new method of multivariate synchronization analysis called S-estimator to the resting dense-array (128 channels) EEG obtained from 14 patients and 14 controls. This method determines synchronization from the embedding dimension in a state-space domain based on the theoretical consequence of the cooperative behavior of simultaneous time series: the shrinking of the state-space embedding dimension. The S-estimator imaging revealed a specific synchronization landscape in schizophrenia patients. Its main features included bilaterally increased synchronization over temporal brain regions and decreased synchronization over the postcentral/parietal region neighboring the midline. The synchronization topography was stable over the course of several months and correlated with the severity of schizophrenia symptoms. In particular, direct correlations linked positive, negative, and general psychopathological symptoms to the hyper-synchronized temporal clusters over both hemispheres. Along with these correlations, general psychopathological symptoms inversely correlated within the hypo-synchronized postcentral midline region. While being similar to the structural maps of cortical changes in schizophrenia, the S-maps go beyond the topography limits, demonstrating a novel aspect of the abnormalities of functional cooperation: namely, regionally reduced or enhanced connectivity. The new

  13. State-Space Formulation for Circuit Analysis

    Science.gov (United States)

    Martinez-Marin, T.

    2010-01-01

    This paper presents a new state-space approach for temporal analysis of electrical circuits. The method systematically obtains the state-space formulation of nondegenerate linear networks without using concepts of topology. It employs nodal/mesh systematic analysis to reduce the number of undesired variables. This approach helps students to…
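
    As a worked illustration of the state-space form referred to in the record above, the sketch below writes a series RLC circuit as dx/dt = A x + B u (states: capacitor voltage and inductor current) and integrates its step response numerically. The component values and the 5 V step input are assumed for the example only.

```python
import numpy as np
from scipy.integrate import solve_ivp

R, L, C = 1.0, 0.5, 1e-3          # ohms, henries, farads (assumed values)

# States: x = [v_C, i_L]; input u = source voltage.
# C dv_C/dt = i_L ; L di_L/dt = u - R i_L - v_C
A = np.array([[0.0, 1.0 / C],
              [-1.0 / L, -R / L]])
B = np.array([0.0, 1.0 / L])

def rhs(t, x, u=5.0):             # 5 V step input
    return A @ x + B * u

sol = solve_ivp(rhs, (0.0, 0.2), y0=[0.0, 0.0], max_step=1e-4)
print("final capacitor voltage:", round(sol.y[0, -1], 3), "V")
```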

  14. Automated daily quality control analysis for mammography in a multi-unit imaging center.

    Science.gov (United States)

    Sundell, Veli-Matti; Mäkelä, Teemu; Meaney, Alexander; Kaasalainen, Touko; Savolainen, Sauli

    2018-01-01

    Background: The high requirements for mammography image quality necessitate a systematic quality assurance process. Digital imaging allows automation of the image quality analysis, which can potentially improve repeatability and objectivity compared to a visual evaluation made by the users. Purpose: To develop an automatic image quality analysis software for daily mammography quality control in a multi-unit imaging center. Material and Methods: An automated image quality analysis software using the discrete wavelet transform and multiresolution analysis was developed for the American College of Radiology accreditation phantom. The software was validated by analyzing 60 randomly selected phantom images from six mammography systems and 20 phantom images with different dose levels from one mammography system. The results were compared to a visual analysis made by four reviewers. Additionally, long-term image quality trends of a full-field digital mammography system and a computed radiography mammography system were investigated. Results: The automated software produced feature detection levels comparable to visual analysis. The agreement was good in the case of fibers, while the software detected somewhat more microcalcifications and characteristic masses. Long-term follow-up via a quality assurance web portal demonstrated the feasibility of using the software for monitoring the performance of mammography systems in a multi-unit imaging center. Conclusion: Automated image quality analysis enables monitoring the performance of digital mammography systems in an efficient, centralized manner.
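
    In the spirit of the wavelet-based analysis described above (not the published implementation), the sketch below decomposes a phantom image with a 2D discrete wavelet transform and reports the detail-band energy per resolution level as a crude structure-visibility score. The wavelet, decomposition level and scoring rule are illustrative assumptions.

```python
import numpy as np
import pywt

def detail_energy_score(phantom_image, wavelet="db2", level=3):
    """Energy of the wavelet detail sub-bands per decomposition level."""
    coeffs = pywt.wavedec2(phantom_image.astype(float), wavelet, level=level)
    # coeffs[0] is the approximation; the rest are (cH, cV, cD) detail tuples,
    # ordered from the coarsest to the finest level.
    return [float(sum(np.sum(band ** 2) for band in detail)) for detail in coeffs[1:]]
```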

  15. Resting-state abnormalities in amnestic mild cognitive impairment: a meta-analysis.

    Science.gov (United States)

    Lau, W K W; Leung, M-K; Lee, T M C; Law, A C K

    2016-04-26

    Amnestic mild cognitive impairment (aMCI) is a prodromal stage of Alzheimer's disease (AD). As no effective drug can cure AD, early diagnosis and intervention for aMCI are urgently needed. The standard diagnostic procedure for aMCI primarily relies on subjective neuropsychological examinations that require the judgment of experienced clinicians. The development of other objective and reliable aMCI markers, such as neural markers, is therefore required. Previous neuroimaging findings revealed various abnormalities in resting-state activity in MCI patients, but the findings have been inconsistent. The current study provides an updated activation likelihood estimation meta-analysis of resting-state functional magnetic resonance imaging (fMRI) data on aMCI. We searched the MEDLINE/PubMed databases for whole-brain resting-state fMRI studies on aMCI published until March 2015. We included 21 whole-brain resting-state fMRI studies that reported a total of 156 distinct foci. Significant regional resting-state differences were consistently found in aMCI patients relative to controls, including the posterior cingulate cortex, right angular gyrus, right parahippocampal gyrus, left fusiform gyrus, left supramarginal gyrus and bilateral middle temporal gyri. Our findings suggest that abnormalities in the resting-state activity of these regions may serve as neuroimaging markers for aMCI.

  16. Design and validation of Segment - freely available software for cardiovascular image analysis

    International Nuclear Information System (INIS)

    Heiberg, Einar; Sjögren, Jane; Ugander, Martin; Carlsson, Marcus; Engblom, Henrik; Arheden, Håkan

    2010-01-01

    Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page (http://segment.heiberg.se). Segment

  17. Quantitative analysis and classification of AFM images of human hair.

    Science.gov (United States)

    Gurden, S P; Monteiro, V F; Longo, E; Ferreira, M M C

    2004-07-01

    The surface topography of human hair, as defined by the outer layer of cellular sheets, termed cuticles, largely determines the cosmetic properties of the hair. The condition of the cuticles is of great cosmetic importance, but also has the potential to aid diagnosis in the medical and forensic sciences. Atomic force microscopy (AFM) has been demonstrated to offer unique advantages for analysis of the hair surface, mainly due to the high image resolution and the ease of sample preparation. This article presents an algorithm for the automatic analysis of AFM images of human hair. The cuticular structure is characterized using a series of descriptors, such as step height, tilt angle and cuticle density, allowing quantitative analysis and comparison of different images. The usefulness of this approach is demonstrated by a classification study. Thirty-eight AFM images were measured, comprising (a) untreated and bleached hair samples, and (b) samples from the root and distal ends of the hair fibre. The multivariate classification technique partial least squares discriminant analysis was used to test the ability of the algorithm to characterize the images according to the properties of the hair samples. Most of the images (86%) were found to be classified correctly.
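
    The discriminant step can be illustrated with a small PLS-DA stand-in: PLS regression on a binary class code, as sketched below with scikit-learn. The feature matrix, class split and component count are synthetic assumptions, not the measured cuticle descriptors.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.standard_normal((38, 5))           # 38 images x 5 cuticle descriptors (synthetic)
y = (np.arange(38) < 19).astype(float)     # 0 = untreated, 1 = bleached (assumed split)
X[y == 1] += 0.8                           # inject a class difference for the demo

# PLS regression on a binary class code, thresholded at 0.5, as a simple PLS-DA.
pls = PLSRegression(n_components=2).fit(X, y)
predicted = (pls.predict(X).ravel() > 0.5).astype(int)
print("training accuracy:", np.mean(predicted == y))
```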

  18. SIMA: Python software for analysis of dynamic fluorescence imaging data

    Directory of Open Access Journals (Sweden)

    Patrick eKaifosh

    2014-09-01

    Full Text Available Fluorescence imaging is a powerful method for monitoring dynamic signals in the nervous system. However, analysis of dynamic fluorescence imaging data remains burdensome, in part due to the shortage of available software tools. To address this need, we have developed SIMA, an open source Python package that facilitates common analysis tasks related to fluorescence imaging. Functionality of this package includes correction of motion artifacts occurring during in vivo imaging with laser-scanning microscopy, segmentation of imaged fields into regions of interest (ROIs), and extraction of signals from the segmented ROIs. We have also developed a graphical user interface (GUI) for manual editing of the automatically segmented ROIs and automated registration of ROIs across multiple imaging datasets. This software has been designed with flexibility in mind to allow for future extension with different analysis methods and potential integration with other packages. Software, documentation, and source code for the SIMA package and ROI Buddy GUI are freely available at http://www.losonczylab.org/sima/.

  19. Multisource Images Analysis Using Collaborative Clustering

    Directory of Open Access Journals (Sweden)

    Pierre Gançarski

    2008-04-01

    Full Text Available The development of very high-resolution (VHR) satellite imagery has produced a huge amount of data. The multiplication of satellites embedding different types of sensors provides a lot of heterogeneous images. Consequently, the image analyst often has many different images available, representing the same area of the Earth's surface. These images can be from different dates, produced by different sensors, or even at different resolutions. The lack of machine learning tools able to use all these representations in an overall process constrains the analyst to a sequential analysis of these various images. In order to use all the information available simultaneously, we propose a framework where different algorithms can use different views of the scene. Each one works on a different remotely sensed image and thus produces different and useful information. These algorithms work together in a collaborative way through an automatic and mutual refinement of their results, so that all the results have almost the same number of clusters, which are statistically similar. Finally, a unique result is produced, representing a consensus among the information obtained by each clustering method on its own image. The unified result and the complementarity of the single results (i.e., the agreement between the clustering methods as well as the disagreement) lead to a better understanding of the scene. The experiments carried out on multispectral remote sensing images have shown that this method is efficient at extracting relevant information and at improving scene understanding.

  20. Image quality preferences among radiographers and radiologists. A conjoint analysis

    International Nuclear Information System (INIS)

    Ween, Borgny; Kristoffersen, Doris Tove; Hamilton, Glenys A.; Olsen, Dag Rune

    2005-01-01

    Purpose: The aim of this study was to investigate the image quality preferences among radiographers and radiologists. The radiographers' preferences are mainly related to technical parameters, whereas radiologists assess image quality based on diagnostic value. Methods: A conjoint analysis was undertaken to survey image quality preferences; the study included 37 respondents: 19 radiographers and 18 radiologists. Digital urograms were post-processed into 8 images with different properties of image quality for 3 different patients. The respondents were asked to rank the images according to their personally perceived subjective image quality. Results: Nearly half of the radiographers and radiologists were consistent in their ranking of the image characterised as 'very best image quality'. The analysis showed, moreover, that chosen filtration level and image intensity were responsible for 72% and 28% of the preferences, respectively. The corresponding figures for each of the two professions were 76% and 24% for the radiographers, and 68% and 32% for the radiologists. In addition, there were larger variations in image preferences among the radiologists, as compared to the radiographers. Conclusions: Radiographers revealed a more consistent preference than the radiologists with respect to image quality. There is a potential for image quality improvement by developing sets of image property criteria

  1. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation

    International Nuclear Information System (INIS)

    Zhao, Zhanqi; Möller, Knut; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich

    2014-01-01

    Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with a step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton–Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GRC) and (4) GREIT with individual thorax geometry (GRT). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal–Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices that are validated for images from one reconstruction algorithm are also valid for other reconstruction algorithms. (paper)
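
    As an illustration of index (d) above, a minimal computation of the global inhomogeneity (GI) index is sketched below, following the commonly cited definition (median-referenced spread of the tidal impedance variation within the lung region, normalised by its sum). The array shapes and the lung mask are placeholders, not data from the study.

```python
import numpy as np

def global_inhomogeneity_index(tidal_image, lung_mask):
    """tidal_image: pixel-wise tidal impedance variation; lung_mask: boolean array."""
    di = tidal_image[lung_mask]
    # Median-referenced spread, normalised by the total tidal impedance variation.
    return float(np.sum(np.abs(di - np.median(di))) / np.sum(di))

if __name__ == "__main__":
    img = np.abs(np.random.randn(32, 32))      # synthetic tidal variation image
    mask = np.zeros((32, 32), dtype=bool)
    mask[8:24, 4:28] = True                    # placeholder lung region
    print(round(global_inhomogeneity_index(img, mask), 3))
```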

  2. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.

    Science.gov (United States)

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2014-06-01

    Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with a step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices that are validated for images from one reconstruction algorithm are also valid for other reconstruction algorithms.

  3. State Space Analysis of Hierarchical Coloured Petri Nets

    DEFF Research Database (Denmark)

    Christensen, Søren; Kristensen, Lars Michael

    2003-01-01

    In this paper, we consider state space analysis of Coloured Petri Nets. It is well-known that almost all dynamic properties of the considered system can be verified when the state space is finite. However, state space analysis is more than just formulating a set of formal requirements and invoking a verification tool: we describe an approach in which formal verification, partial state spaces, and analysis by means of graphical feedback and simulation are integrated entities. The focus of the paper is twofold: the support for graphical feedback and the way it has been integrated with simulation, and the underlying algorithms and data structures supporting computation and storage of state spaces, which exploit the hierarchical structure of the models.

  4. Determination of binder distributions in green-state ceramics by NMR imaging

    International Nuclear Information System (INIS)

    Garrido, L.; Ackerman, J.L.; Ellingson, W.A.; Weyand, J.D.

    1988-03-01

    The manufacture of reliable high-performance structural ceramics requires a good understanding of the different steps involved in the process. The presence of nonuniformities in the distribution of the polymeric binder could give rise to local fluctuations of density that could produce failure of the ceramic piece. Specimens prepared from Al2O3 with 15% and 2.5% w/w binder were imaged using NMR in order to measure binder distribution maps. Results show that NMR imaging could be a useful technique to nondestructively evaluate the quality of green-state specimens. 5 refs., 5 figs

  5. Remote Sensing Digital Image Analysis An Introduction

    CERN Document Server

    Richards, John A

    2013-01-01

    Remote Sensing Digital Image Analysis provides the non-specialist with a treatment of the quantitative analysis of satellite and aircraft derived remotely sensed data. Since the first edition of the book there have been significant developments in the algorithms used for the processing and analysis of remote sensing imagery; nevertheless many of the fundamentals have substantially remained the same.  This new edition presents material that has retained value since those early days, along with new techniques that can be incorporated into an operational framework for the analysis of remote sensing data. The book is designed as a teaching text for the senior undergraduate and postgraduate student, and as a fundamental treatment for those engaged in research using digital image processing in remote sensing.  The presentation level is for the mathematical non-specialist.  Since the very great number of operational users of remote sensing come from the earth sciences communities, the text is pitched at a leve...

  6. Discriminative Nonlinear Analysis Operator Learning: When Cosparse Model Meets Image Classification.

    Science.gov (United States)

    Wen, Zaidao; Hou, Biao; Jiao, Licheng

    2017-05-03

    Dictionary learning frameworks based on the linear synthesis model have achieved remarkable performance in image classification over the last decade. Behaving as a generative feature model, however, the synthesis model suffers from some intrinsic deficiencies. In this paper, we propose a novel parametric nonlinear analysis cosparse model (NACM) with which a unique feature vector can be extracted much more efficiently. Additionally, we show that NACM is capable of simultaneously learning the task-adapted feature transformation and a regularization that encodes our preferences, domain prior knowledge and task-oriented supervised information into the features. The proposed NACM is devoted to the classification task as a discriminative feature model and yields a novel discriminative nonlinear analysis operator learning framework (DNAOL). Theoretical analysis and experimental results demonstrate that DNAOL not only achieves better, or at least competitive, classification accuracies compared with state-of-the-art algorithms, but also dramatically reduces the time complexity of both the training and testing phases.
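
    To make the analysis (cosparse) viewpoint concrete, the sketch below applies a given analysis operator to a vectorised image patch and keeps only the largest responses. This is a generic illustration of analysis-model coding, not the NACM/DNAOL training procedure itself; the operator omega and the retained fraction are assumed inputs.

        import numpy as np

        def cosparse_code(omega, x, keep_fraction=0.1):
            """Analysis coefficients of a signal x under operator omega,
            hard-thresholded so that only the strongest responses survive."""
            z = omega @ x                              # analysis (cosparse) coefficients
            k = max(1, int(keep_fraction * z.size))
            idx = np.argsort(np.abs(z))[-k:]           # indices of the k largest magnitudes
            code = np.zeros_like(z)
            code[idx] = z[idx]
            return code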

  7. Morphological images analysis and chromosomic aberrations classification based on fuzzy logic

    International Nuclear Information System (INIS)

    Souza, Leonardo Peres

    2011-01-01

    This work implemented a methodology for automating the image analysis of chromosomes of human cells irradiated at the IEA-R1 nuclear reactor (located at IPEN, Sao Paulo, Brazil) and therefore subject to morphological aberrations. The methodology is intended as a tool to help cytogeneticists identify, characterize and classify chromosomes during metaphase analysis. Its development included the creation of a software application based on artificial intelligence techniques, using fuzzy logic combined with image processing techniques. The developed application was named CHRIMAN and is composed of modules that implement the methodological steps required for an automated analysis. The first step is the standardization of the two-dimensional digital image acquisition procedure, achieved by coupling a simple digital camera to the ocular of the conventional metaphase analysis microscope. The second step covers image treatment through the application of digital filters, and the storage and organization of information obtained both from the image content itself and from selected extracted features, for later use in pattern recognition algorithms. The third step consists of characterizing, counting and classifying the stored digital images and the extracted feature information. The accuracy in the recognition of chromosome images is 93.9%. The classification is based on the classical standards of Buckton [1973] and supports geneticists in the chromosome analysis procedure, decreasing analysis time and creating the conditions to include this method in a broader system for evaluating human cell damage due to ionizing radiation exposure. (author)

  8. Optical imaging of mitochondrial redox state in rodent model of retinitis pigmentosa

    Science.gov (United States)

    Maleki, Sepideh; Gopalakrishnan, Sandeep; Ghanian, Zahra; Sepehr, Reyhaneh; Schmitt, Heather; Eells, Janis; Ranji, Mahsa

    2013-01-01

    Oxidative stress (OS) and mitochondrial dysfunction contribute to photoreceptor cell loss in retinal degenerative disorders. The metabolic state of the retina in a rodent model of retinitis pigmentosa (RP) was investigated using a cryo-fluorescence imaging technique. The mitochondrial metabolic coenzymes nicotinamide adenine dinucleotide (NADH) and flavin adenine dinucleotide (FAD) are autofluorescent and can be monitored without exogenous labels using optical techniques. The cryo-fluorescence redox imaging technique provides a quantitative assessment of metabolism. More specifically, the ratio of the fluorescence intensities of these fluorophores (NADH/FAD), the NADH redox ratio (RR), is a marker of the metabolic state of the tissue. The NADH RR and retinal function were examined in an established rodent model of RP, the P23H rat, and compared with those of nondystrophic Sprague-Dawley (SD) rats. The mean NADH RR values were 1.11±0.03 in the normal SD retina and 0.841±0.01 in the P23H retina, indicating increased OS in the P23H retina. Electroretinographic data revealed a significant reduction in photoreceptor function in P23H animals compared to normal SD rats. Thus, cryo-fluorescence redox imaging was used as a quantitative marker of OS in eyes from transgenic rats and demonstrated that alterations in the oxidative state of eyes occur during the early stages of RP.
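
    A minimal sketch of the pixel-wise NADH redox ratio described above, assuming two co-registered fluorescence images (NADH and FAD channels) are already available as NumPy arrays; the epsilon guard against division by zero is an implementation detail added here, not part of the cited method.

        import numpy as np

        def nadh_redox_ratio(nadh_img, fad_img, eps=1e-6):
            """Pixel-wise redox ratio RR = NADH / FAD and its mean over the image."""
            rr = nadh_img.astype(float) / (fad_img.astype(float) + eps)
            return rr, float(np.nanmean(rr))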

  9. On the applicability of numerical image mapping for PIV image analysis near curved interfaces

    International Nuclear Information System (INIS)

    Masullo, Alessandro; Theunissen, Raf

    2017-01-01

    This paper scrutinises the general suitability of image mapping for particle image velocimetry (PIV) applications. Image mapping can improve PIV measurement accuracy by eliminating overlap between the PIV interrogation windows and an interface, as illustrated by some examples in the literature. Image mapping transforms the PIV images using a curvilinear interface-fitted mesh prior to performing the PIV cross correlation. However, degrading effects due to particle image deformation and the Jacobian transformation inherent in the mapping along curvilinear grid lines have never been deeply investigated. Here, the implementation of image mapping from mesh generation to image resampling is presented in detail, and related error sources are analysed. Systematic comparison with standard PIV approaches shows that image mapping is effective only in a very limited set of flow conditions and geometries, and depends strongly on a priori knowledge of the boundary shape and streamlines. In particular, with strongly curved geometries or streamlines that are not parallel to the interface, the image-mapping approach is easily outperformed by more traditional image analysis methodologies invoking suitable spatial relocation of the obtained displacement vector. (paper)

  10. New approach to gallbladder ultrasonic images analysis and lesions recognition.

    Science.gov (United States)

    Bodzioch, Sławomir; Ogiela, Marek R

    2009-03-01

    This paper presents a new approach to gallbladder ultrasonic image processing and analysis aimed at detecting disease symptoms in the processed images. First, the paper presents a new method for filtering gallbladder contours from USG images. A major stage in this filtration is to segment and section off the areas occupied by the organ. In most cases this procedure is based on filtration that plays a key role in the process of diagnosing pathological changes. Unfortunately, ultrasound images are among the most troublesome to analyse owing to the echogenic inconsistency of the structures under observation. The paper provides an inventive algorithm for the holistic extraction of gallbladder image contours, based on rank filtration as well as on the analysis of histogram sections of the examined organs. The second part concerns detecting lesion symptoms of the gallbladder. Automating a diagnostic process always comes down to developing algorithms that analyze the object of the diagnosis and verify the occurrence of symptoms related to a given condition. Usually the final stage is to make a diagnosis based on the detected symptoms. This last stage can be carried out either through dedicated expert systems or through a more classic pattern analysis approach, such as using rules to determine the illness based on the detected symptoms. This paper discusses the pattern analysis algorithms for gallbladder image interpretation towards classification of the most frequent illness symptoms of this organ.

  11. Monitoring of urban growth in the state of Hidalgo using Landsat images

    Directory of Open Access Journals (Sweden)

    Laura Cano Salinas

    2017-03-01

    Given this background, this paper is focused on the generation of geographic information for regional urban planning. The overall aim is to examine the urban growth rate during the period 2000-2014 in the state of Hidalgo, Mexico, and to identify potential areas of expansion from Landsat images. The methodology was based on techniques of remote sensing and a Geographical Information System (GIS). The inputs used were six Landsat scenes: three for the year 2000 and three for 2014. Image processing was performed in ERDAS Imagine® 9.1 and the spatial analysis of statewide urban coverage in ArcGIS 10.0 by ESRI®. First, the radiometric correction was made and the urban polygons for the year 2000 were obtained through supervised classification. The 2014 urban layer was digitized manually due to the spectral incompatibility between the bands of the Landsat 5 and 7 sensors and the Landsat 8 sensor. Then, a road density map was built and the spatial relationship of the urban centers with the road influence area was evaluated. For the year 2000, 103 urban polygons were mapped, whilst for 2014 ten more polygons were identified, with a minimum mapped area of 24 ha. The main results indicate that the urban area in the state increased by 72.3 km2 from 2000 to 2014. This represents an average growth rate of 1.8% per year. The most widespread municipalities are located in the region of Valle del Mezquital; however, Mineral de la Reforma, Tetepango, Tizayuca and Pachuca showed growth rates of 183.44%, 102%, 94% and 68.5% over the fourteen years, respectively. According to the road density map, these municipalities are located in the areas of greatest infrastructure influence, such as the Arco Norte highway. These findings lead us to conclude that the Mezquital Valley and the Basin of Mexico are potential areas of urban spreading, associated with road development in central Mexico.

  12. Extended -Regular Sequence for Automated Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jin Hee-Jeong

    2006-01-01

    Full Text Available Microarray study enables us to obtain hundreds of thousands of expressions of genes or genotypes at once, and it is an indispensable technology for genome research. The first step is the analysis of scanned microarray images. This is the most important procedure for obtaining biologically reliable data. Currently most microarray image processing systems require burdensome manual block/spot indexing work. Since the amount of experimental data is increasing very quickly, automated microarray image analysis software becomes important. In this paper, we propose two automated methods for analyzing microarray images. First, we propose the extended -regular sequence to index blocks and spots, which enables a novel automatic gridding procedure. Second, we provide a methodology, hierarchical metagrid alignment, to allow reliable and efficient batch processing for a set of microarray images. Experimental results show that the proposed methods are more reliable and convenient than the commercial tools.

  13. Fourier analysis: from cloaking to imaging

    International Nuclear Information System (INIS)

    Wu, Kedi; Ping Wang, Guo; Cheng, Qiluan

    2016-01-01

    Regarding invisibility cloaks as an optical imaging system, we present a Fourier approach to analytically unify both Pendry cloaks and complementary-media-based invisibility cloaks into one kind of cloak. By synthesizing different transfer functions, we can construct different devices to realize a series of interesting functions such as hiding objects (events), creating illusions, and performing perfect imaging. In this article, we give a brief review of recent work applying the Fourier approach to the analysis of invisibility cloaks and optical imaging through scattering layers. We show that, to construct devices to conceal an object, no constructive materials with extreme properties are required, making most, if not all, of the above functions realizable by using naturally occurring materials. As instances, we experimentally verify a method of directionally hiding distant objects and create illusions by using all-dielectric materials, and further demonstrate a non-invasive method of imaging objects completely hidden by scattering layers. (review)

  14. An instructional guide for leaf color analysis using digital imaging software

    Science.gov (United States)

    Paula F. Murakami; Michelle R. Turner; Abby K. van den Berg; Paul G. Schaberg

    2005-01-01

    Digital color analysis has become an increasingly popular and cost-effective method utilized by resource managers and scientists for evaluating foliar nutrition and health in response to environmental stresses. We developed and tested a new method of digital image analysis that uses Scion Image or NIH image public domain software to quantify leaf color. This...

  15. Planning applications in image analysis

    Science.gov (United States)

    Boddy, Mark; White, Jim; Goldman, Robert; Short, Nick, Jr.

    1994-01-01

    We describe two interim results from an ongoing effort to automate the acquisition, analysis, archiving, and distribution of satellite earth science data. Both results are applications of Artificial Intelligence planning research to the automatic generation of processing steps for image analysis tasks. First, we have constructed a linear conditional planner (CPed), used to generate conditional processing plans. Second, we have extended an existing hierarchical planning system to make use of durations, resources, and deadlines, thus supporting the automatic generation of processing steps in time and resource-constrained environments.

  16. Spatial compression algorithm for the analysis of very large multivariate images

    Science.gov (United States)

    Keenan, Michael R [Albuquerque, NM

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
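
    The sketch below illustrates the general idea of wavelet-based spatial compression of a multivariate image, retaining only a small fraction of the largest coefficients per channel before any further analysis. It uses PyWavelets and is a simplified stand-in for the patented algorithm described above; the wavelet choice, decomposition level and retention fraction are arbitrary assumptions.

        import numpy as np
        import pywt  # PyWavelets

        def spatially_compress(cube, wavelet="db4", level=2, keep=0.05):
            """Keep only the largest wavelet coefficients of each channel.

            cube : (rows, cols, channels) multivariate image
            keep : fraction of coefficients retained per channel
            """
            out = np.empty(cube.shape, dtype=float)
            for c in range(cube.shape[2]):
                coeffs = pywt.wavedec2(cube[:, :, c], wavelet, level=level)
                arr, slices = pywt.coeffs_to_array(coeffs)
                thresh = np.quantile(np.abs(arr), 1.0 - keep)
                arr[np.abs(arr) < thresh] = 0.0          # discard small coefficients
                kept = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
                rec = pywt.waverec2(kept, wavelet)
                out[:, :, c] = rec[:cube.shape[0], :cube.shape[1]]
            return out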

  17. Image analysis of multiple moving wood pieces in real time

    Science.gov (United States)

    Wang, Weixing

    2006-02-01

    This paper presents algorithms for image processing and image analysis of wood piece materials. The algorithms were designed for automatic detection of wood pieces on a moving conveyor belt or a truck. When the wood objects are moving, the hard task is to trace the contours of the objects in an optimal way. To make the algorithms work efficiently in the plant, a flexible online system was designed and developed, which mainly consists of image acquisition, image processing, object delineation and analysis. A number of newly developed algorithms can delineate wood objects with high accuracy and high speed, and in the wood piece analysis part, each wood piece can be characterized by a number of visual parameters which can also be used for constructing experimental models directly in the system.

  18. Fast Virtual Fractional Flow Reserve Based Upon Steady-State Computational Fluid Dynamics Analysis

    Directory of Open Access Journals (Sweden)

    Paul D. Morris, PhD

    2017-08-01

    Full Text Available Fractional flow reserve (FFR)-guided percutaneous intervention is superior to standard assessment but remains underused. The authors have developed a novel “pseudotransient” analysis protocol for computing virtual fractional flow reserve (vFFR) based upon angiographic images and steady-state computational fluid dynamics. This protocol generates vFFR results in 189 s (cf. >24 h for transient analysis) using a desktop PC, with <1% error relative to that of full transient computational fluid dynamics analysis. Sensitivity analysis demonstrated that physiological lesion significance was influenced less by coronary or lesion anatomy (33%) and more by microvascular physiology (59%). If coronary microvascular resistance can be estimated, vFFR can be accurately computed in less time than it takes to make invasive measurements.

  19. Automated image analysis for quantification of filamentous bacteria

    DEFF Research Database (Denmark)

    Fredborg, Marlene; Rosenvinge, Flemming Schønning; Spillum, Erik

    2015-01-01

    in systems relying on colorimetry or turbidometry (such as Vitek-2, Phoenix, MicroScan WalkAway). The objective was to examine an automated image analysis algorithm for quantification of filamentous bacteria using the 3D digital microscopy imaging system, oCelloScope. Results Three E. coli strains displaying...

  20. Occupancy Analysis of Sports Arenas Using Thermal Imaging

    DEFF Research Database (Denmark)

    Gade, Rikke; Jørgensen, Anders; Moeslund, Thomas B.

    2012-01-01

    This paper presents a system for automatic analysis of the occupancy of sports arenas. By using a thermal camera for image capturing the number of persons and their location on the court are found without violating any privacy issues. The images are binarised with an automatic threshold method...
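
    As a rough illustration of the kind of processing described (automatic thresholding of thermal frames followed by counting), the following sketch binarises a frame with Otsu's method and counts sufficiently large warm blobs. The threshold choice, the minimum blob area and the assumption of one blob per person are simplifications for this example, not the authors' exact pipeline.

        import numpy as np
        from skimage.filters import threshold_otsu
        from skimage.measure import label

        def count_warm_blobs(thermal_frame, min_area=30):
            """Binarise a thermal frame and count connected warm regions."""
            mask = thermal_frame > threshold_otsu(thermal_frame)
            labelled = label(mask)
            sizes = np.bincount(labelled.ravel())[1:]   # label 0 is background
            return int(np.sum(sizes >= min_area))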

  1. Study of TCP densification via image analysis

    International Nuclear Information System (INIS)

    Silva, R.C.; Alencastro, F.S.; Oliveira, R.N.; Soares, G.A.

    2011-01-01

    Among ceramic materials that mimic human bone, β-type tri-calcium phosphate (β-TCP) has shown appropriate chemical stability and a superior resorption rate when compared to hydroxyapatite. In order to increase its mechanical strength, the material is sintered under controlled time and temperature conditions to obtain densification without phase change. In the present work, tablets were produced via uniaxial compression and then sintered at 1150°C for 2 h. The analysis via XRD and FTIR showed that the sintered tablets were composed only of β-TCP. The SEM images were used for quantification of grain size and volume fraction of pores via digital image analysis. The tablets showed a small pore fraction (between 0.67% and 6.38%) and a homogeneous grain size distribution (∼2 μm). Therefore, the analysis method seems viable to quantify porosity and grain size. (author)
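
    A hedged sketch of how pore fraction and an equivalent grain diameter might be quantified from a segmented SEM image with scikit-image; the Otsu threshold, the assumption that pores appear dark, and the use of connected components instead of a proper grain-boundary (e.g. watershed) segmentation are simplifications for illustration only.

        import numpy as np
        from skimage.filters import threshold_otsu
        from skimage.measure import label, regionprops

        def porosity_and_grain_size(sem_gray, pores_are_dark=True):
            """Pore area fraction and mean equivalent grain diameter (in pixels)."""
            t = threshold_otsu(sem_gray)
            pores = sem_gray < t if pores_are_dark else sem_gray > t
            porosity = float(pores.mean())
            grains = label(~pores)
            diameters = [r.equivalent_diameter for r in regionprops(grains)]
            mean_diameter = float(np.mean(diameters)) if diameters else 0.0
            return porosity, mean_diameter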

  2. Paediatric x-ray radiation dose reduction and image quality analysis.

    Science.gov (United States)

    Martin, L; Ruddlesden, R; Makepeace, C; Robinson, L; Mistry, T; Starritt, H

    2013-09-01

    Collaboration of multiple staff groups has resulted in significant reduction in the risk of radiation-induced cancer from radiographic x-ray exposure during childhood. In this study at an acute NHS hospital trust, a preliminary audit identified initial exposure factors. These were compared with European and UK guidance, leading to the introduction of new factors that were in compliance with European guidance on x-ray tube potentials. Image quality was assessed using standard anatomical criteria scoring, and visual grading characteristics analysis assessed the impact on image quality of changes in exposure factors. This analysis determined the acceptability of gradual radiation dose reduction below the European and UK guidance levels. Chest and pelvis exposures were optimised, achieving dose reduction for each age group, with 7%-55% decrease in critical organ dose. Clinicians confirmed diagnostic image quality throughout the iterative process. Analysis of images acquired with preliminary and final exposure factors indicated an average visual grading analysis result of 0.5, demonstrating equivalent image quality. The optimisation process and final radiation doses are reported for Carestream computed radiography to aid other hospitals in minimising radiation risks to children.

  3. Paediatric x-ray radiation dose reduction and image quality analysis

    International Nuclear Information System (INIS)

    Martin, L; Ruddlesden, R; Mistry, T; Starritt, H; Makepeace, C; Robinson, L

    2013-01-01

    Collaboration of multiple staff groups has resulted in significant reduction in the risk of radiation-induced cancer from radiographic x-ray exposure during childhood. In this study at an acute NHS hospital trust, a preliminary audit identified initial exposure factors. These were compared with European and UK guidance, leading to the introduction of new factors that were in compliance with European guidance on x-ray tube potentials. Image quality was assessed using standard anatomical criteria scoring, and visual grading characteristics analysis assessed the impact on image quality of changes in exposure factors. This analysis determined the acceptability of gradual radiation dose reduction below the European and UK guidance levels. Chest and pelvis exposures were optimised, achieving dose reduction for each age group, with 7%–55% decrease in critical organ dose. Clinicians confirmed diagnostic image quality throughout the iterative process. Analysis of images acquired with preliminary and final exposure factors indicated an average visual grading analysis result of 0.5, demonstrating equivalent image quality. The optimisation process and final radiation doses are reported for Carestream computed radiography to aid other hospitals in minimising radiation risks to children. (paper)

  4. Different Imaging Strategies in Patients With Possible Basilar Artery Occlusion: Cost-Effectiveness Analysis.

    Science.gov (United States)

    Beyer, Sebastian E; Hunink, Myriam G; Schöberl, Florian; von Baumgarten, Louisa; Petersen, Steffen E; Dichgans, Martin; Janssen, Hendrik; Ertl-Wagner, Birgit; Reiser, Maximilian F; Sommer, Wieland H

    2015-07-01

    This study evaluated the cost-effectiveness of different noninvasive imaging strategies in patients with possible basilar artery occlusion. A Markov decision analytic model was used to evaluate long-term outcomes resulting from strategies using computed tomographic angiography (CTA), magnetic resonance imaging, nonenhanced CT, or duplex ultrasound with intravenous (IV) thrombolysis being administered after positive findings. The analysis was performed from the societal perspective based on US recommendations. Input parameters were derived from the literature. Costs were obtained from United States costing sources and published literature. Outcomes were lifetime costs, quality-adjusted life-years (QALYs), incremental cost-effectiveness ratios, and net monetary benefits, with a willingness-to-pay threshold of $80,000 per QALY. The strategy with the highest net monetary benefit was considered the most cost-effective. Extensive deterministic and probabilistic sensitivity analyses were performed to explore the effect of varying parameter values. In the reference case analysis, CTA dominated all other imaging strategies. CTA yielded 0.02 QALYs more than magnetic resonance imaging and 0.04 QALYs more than duplex ultrasound followed by CTA. At a willingness-to-pay threshold of $80,000 per QALY, CTA yielded the highest net monetary benefits. The probability that CTA is cost-effective was 96% at a willingness-to-pay threshold of $80,000/QALY. Sensitivity analyses showed that duplex ultrasound was cost-effective only for a prior probability of ≤0.02 and that these results were only minimally influenced by duplex ultrasound sensitivity and specificity. Nonenhanced CT and magnetic resonance imaging never became the most cost-effective strategy. Our results suggest that CTA in patients with possible basilar artery occlusion is cost-effective. © 2015 The Authors.
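
    The decision rule in such an analysis is conveniently expressed as a net monetary benefit at the chosen willingness-to-pay threshold; the strategy with the highest value is preferred. The sketch below uses hypothetical QALY and cost figures purely to show the calculation; they are not the study's results.

        def net_monetary_benefit(qalys, cost, wtp=80_000):
            """Net monetary benefit at a willingness-to-pay of $80,000 per QALY."""
            return wtp * qalys - cost

        # Hypothetical lifetime outcomes per strategy: (QALYs, cost in USD).
        strategies = {"CTA": (9.50, 41_000), "MRI": (9.48, 42_500), "NECT": (9.30, 39_000)}
        best = max(strategies, key=lambda s: net_monetary_benefit(*strategies[s]))
        print(best)   # the preferred strategy under these assumed inputs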

  5. Image enhancement of x-ray microscope using frequency spectrum analysis

    International Nuclear Information System (INIS)

    Li Wenjie; Chen Jie; Tian Jinping; Zhang Xiaobo; Liu Gang; Tian Yangchao; Liu Yijin; Wu Ziyu

    2009-01-01

    We demonstrate a new method for x-ray microscope image enhancement using frequency spectrum analysis. Fine sample characteristics are well enhanced, with homogeneous visibility and better contrast, from a single image. This method is easy to implement and really helps to improve the quality of images taken by our imaging system.
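
    The abstract does not spell out the filter itself, so the sketch below shows only a generic frequency-domain enhancement of the kind implied: transform the image, boost higher spatial frequencies relative to the low-frequency background, and transform back. The Gaussian high-boost profile and its parameters are assumptions for illustration, not the authors' method.

        import numpy as np

        def high_boost_filter(img, cutoff=0.05, boost=1.5):
            """Generic frequency-domain sharpening: keep the low-frequency
            background and amplify higher spatial frequencies."""
            ny, nx = img.shape
            spec = np.fft.fftshift(np.fft.fft2(img))
            y, x = np.ogrid[:ny, :nx]
            r = np.sqrt(((y - ny // 2) / ny) ** 2 + ((x - nx // 2) / nx) ** 2)
            gain = 1.0 + boost * (1.0 - np.exp(-(r / cutoff) ** 2))   # Gaussian high-boost
            return np.real(np.fft.ifft2(np.fft.ifftshift(spec * gain)))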

  6. Image enhancement of x-ray microscope using frequency spectrum analysis

    Energy Technology Data Exchange (ETDEWEB)

    Li Wenjie; Chen Jie; Tian Jinping; Zhang Xiaobo; Liu Gang; Tian Yangchao [National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei, Anhui 230029 (China); Liu Yijin; Wu Ziyu, E-mail: wuzy@ihep.ac.c, E-mail: ychtian@ustc.edu.c [Institute of High Energy Physics, Chinese Academy of Science, Beijing 100049 (China)

    2009-09-01

    We demonstrate a new method for x-ray microscope image enhancement using frequency spectrum analysis. Fine sample characteristics are well enhanced, with homogeneous visibility and better contrast, from a single image. This method is easy to implement and really helps to improve the quality of images taken by our imaging system.

  7. ANALYSIS OF SST IMAGES BY WEIGHTED ENSEMBLE TRANSFORM KALMAN FILTER

    OpenAIRE

    Sai , Gorthi; Beyou , Sébastien; Memin , Etienne

    2011-01-01

    International audience; This paper presents a novel, efficient scheme for the analysis of Sea Surface Temperature (SST) ocean images. We consider the estimation of the velocity fields and vorticity values from a sequence of oceanic images. The contribution of this paper lies in proposing a novel, robust and simple approach based on the Weighted Ensemble Transform Kalman filter (WETKF) data assimilation technique for the analysis of real SST images, that may contain coast regions or large areas of ...

  8. A Blind Adaptive Color Image Watermarking Scheme Based on Principal Component Analysis, Singular Value Decomposition and Human Visual System

    Directory of Open Access Journals (Sweden)

    M. Imran

    2017-09-01

    Full Text Available A blind adaptive color image watermarking scheme based on principal component analysis, singular value decomposition, and the human visual system is proposed. The use of principal component analysis to decorrelate the three color channels of the host image improves the perceptual quality of the watermarked image, whereas the human visual system model and a fuzzy inference system help to improve both imperceptibility and robustness by selecting an adaptive scaling factor, so that areas more prone to noise can be embedded with more information than less prone areas. To achieve security, the location of watermark embedding is kept secret and used as a key at the time of watermark extraction; for capacity, both singular values and singular vectors are involved in the watermark embedding process. As a result, four contradictory requirements (imperceptibility, robustness, security and capacity) are achieved, as suggested by the results. Both subjective and objective methods are used to examine the performance of the proposed scheme. For subjective analysis, the watermarked images and the watermarks extracted from attacked watermarked images are shown. For objective analysis of the proposed scheme in terms of imperceptibility, peak signal-to-noise ratio, the structural similarity index, visual information fidelity and normalized color difference are used, whereas for objective analysis in terms of robustness, normalized correlation, bit error rate, normalized Hamming distance and global authentication rate are used. Security is checked by using different keys to extract the watermark. The proposed scheme is compared with state-of-the-art watermarking techniques and shows better performance, as suggested by the results.
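
    The first step of the scheme, decorrelating the three colour channels with principal component analysis, can be sketched as follows. Only the channel-decorrelation step is shown, under the assumption of an RGB host image; the SVD embedding, fuzzy scaling-factor selection and extraction-key handling are not reproduced here.

        import numpy as np

        def pca_decorrelate_rgb(img):
            """Project RGB pixels onto the principal components of the 3x3
            channel covariance; returns the decorrelated planes and the basis."""
            h, w, _ = img.shape
            x = img.reshape(-1, 3).astype(float)
            x -= x.mean(axis=0)
            cov = np.cov(x, rowvar=False)
            _, vecs = np.linalg.eigh(cov)             # ascending eigenvalues
            basis = vecs[:, ::-1]                     # strongest component first
            return (x @ basis).reshape(h, w, 3), basis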

  9. Stuttering as a trait or state - an ALE meta-analysis of neuroimaging studies.

    Science.gov (United States)

    Belyk, Michel; Kraft, Shelly Jo; Brown, Steven

    2015-01-01

    Stuttering is a speech disorder characterised by repetitions, prolongations and blocks that disrupt the forward movement of speech. An earlier meta-analysis of brain imaging studies of stuttering (Brown et al., 2005) revealed a general trend towards rightward lateralization of brain activations and hyperactivity in the larynx motor cortex bilaterally. The present study sought not only to update that meta-analysis with recent work but to introduce an important distinction not present in the first study, namely the difference between 'trait' and 'state' stuttering. The analysis of trait stuttering compares people who stutter (PWS) with people who do not stutter when behaviour is controlled for, i.e., when speech is fluent in both groups. In contrast, the analysis of state stuttering examines PWS during episodes of stuttered speech compared with episodes of fluent speech. Seventeen studies were analysed using activation likelihood estimation. Trait stuttering was characterised by the well-known rightward shift in lateralization for language and speech areas. State stuttering revealed a more diverse pattern. Abnormal activation of larynx and lip motor cortex was common to the two analyses. State stuttering was associated with overactivation in the right hemisphere larynx and lip motor cortex. Trait stuttering was associated with overactivation of lip motor cortex in the right hemisphere but underactivation of larynx motor cortex in the left hemisphere. These results support a large literature highlighting laryngeal and lip involvement in the symptomatology of stuttering, and disambiguate two possible sources of activation in neuroimaging studies of persistent developmental stuttering. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  10. General Staining and Segmentation Procedures for High Content Imaging and Analysis.

    Science.gov (United States)

    Chambers, Kevin M; Mandavilli, Bhaskar S; Dolman, Nick J; Janes, Michael S

    2018-01-01

    Automated quantitative fluorescence microscopy, also known as high content imaging (HCI), is a rapidly growing analytical approach in cell biology. Because automated image analysis relies heavily on robust demarcation of cells and subcellular regions, reliable methods for labeling cells is a critical component of the HCI workflow. Labeling of cells for image segmentation is typically performed with fluorescent probes that bind DNA for nuclear-based cell demarcation or with those which react with proteins for image analysis based on whole cell staining. These reagents, along with instrument and software settings, play an important role in the successful segmentation of cells in a population for automated and quantitative image analysis. In this chapter, we describe standard procedures for labeling and image segmentation in both live and fixed cell samples. The chapter will also provide troubleshooting guidelines for some of the common problems associated with these aspects of HCI.

  11. Vector sparse representation of color image using quaternion matrix analysis.

    Science.gov (United States)

    Xu, Yi; Yu, Licheng; Xu, Hongteng; Zhang, Hao; Nguyen, Truong

    2015-04-01

    Traditional sparse image models treat a color image pixel as a scalar, representing the color channels separately or concatenating them as a monochrome image. In this paper, we propose a vector sparse representation model for color images using quaternion matrix analysis. As a new tool for color image representation, its potential applications in several image-processing tasks are presented, including color image reconstruction, denoising, inpainting, and super-resolution. The proposed model represents the color image as a quaternion matrix, and a quaternion-based dictionary learning algorithm is presented using the K-quaternion singular value decomposition (QSVD) (generalized K-means clustering for QSVD) method. It conducts the sparse basis selection in quaternion space, which uniformly transforms the channel images to an orthogonal color space. In this new color space, it is significant that the inherent color structures can be completely preserved during vector reconstruction. Moreover, the proposed sparse model is more efficient compared with current sparse models for image restoration tasks due to the lower redundancy between the atoms of different color channels. The experimental results demonstrate that the proposed sparse image model avoids the hue bias issue successfully and shows its potential as a general and powerful tool in the color image analysis and processing domain.
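
    To show what treating an RGB pixel as a quaternion means in practice, the sketch below encodes a colour image as a pure quaternion array and implements the Hamilton product used when such quantities are multiplied. The K-QSVD dictionary learning itself is not reproduced, and the [real, i, j, k] storage layout is an assumption of this example.

        import numpy as np

        def rgb_to_quaternion(img):
            """Encode an RGB image as a pure quaternion q = R*i + G*j + B*k,
            stored as a (rows, cols, 4) array [real, i, j, k]."""
            q = np.zeros(img.shape[:2] + (4,), dtype=float)
            q[..., 1:] = img.astype(float)   # real part stays zero for a pure quaternion
            return q

        def qmul(a, b):
            """Hamilton product of two quaternion arrays of shape (..., 4)."""
            w1, x1, y1, z1 = np.moveaxis(a, -1, 0)
            w2, x2, y2, z2 = np.moveaxis(b, -1, 0)
            return np.stack([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                             w1*x2 + x1*w2 + y1*z2 - z1*y2,
                             w1*y2 - x1*z2 + y1*w2 + z1*x2,
                             w1*z2 + x1*y2 - y1*x2 + z1*w2], axis=-1)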

  12. Imaging analysis of direct alanine uptake by rice seedlings

    International Nuclear Information System (INIS)

    Nihei, Naoto; Masuda, Sayaka; Rai, Hiroki; Nakanishi, Tomoko M.

    2008-01-01

    We present the uptake of alanine, an amino acid, by a rice seedling to study the basic mechanism of organic fertilizer effectiveness in organic farming. Analysis of 14C-alanine images obtained by the Imaging Plate method showed that rice grown in a culture solution containing alanine as a nitrogen source absorbed alanine approximately two times faster than rice grown with NH4+. It is suggested that an active transport ability was induced in the roots of the rice seedling by the presence of alanine in the rhizosphere. The alanine uptake images of the rice roots were acquired every 5 minutes successively by the real-time autoradiography system we developed. The analysis of the successive images showed that alanine uptake was not uniform throughout the root but especially active at the root tip. (author)

  13. A hyperspectral image analysis workbench for environmental science applications

    Energy Technology Data Exchange (ETDEWEB)

    Christiansen, J.H.; Zawada, D.G.; Simunich, K.L.; Slater, J.C.

    1992-10-01

    A significant challenge to the information sciences is to provide more powerful and accessible means to exploit the enormous wealth of data available from high-resolution imaging spectrometry, or "hyperspectral" imagery, for analysis, for mapping purposes, and for input to environmental modeling applications. As an initial response to this challenge, Argonne's Advanced Computer Applications Center has developed a workstation-based prototype software workbench which employs AI techniques and other advanced approaches to deduce surface characteristics and extract features from the hyperspectral images. Among its current capabilities, the prototype system can classify pixels by abstract surface type. The classification process employs neural network analysis of inputs which include pixel spectra and a variety of processed image metrics, including image "texture spectra" derived from fractal signatures computed for subimage tiles at each wavelength.

  14. Intrinsic Resting-State Functional Connectivity in the Human Spinal Cord at 3.0 T.

    Science.gov (United States)

    San Emeterio Nateras, Oscar; Yu, Fang; Muir, Eric R; Bazan, Carlos; Franklin, Crystal G; Li, Wei; Li, Jinqi; Lancaster, Jack L; Duong, Timothy Q

    2016-04-01

    To apply resting-state functional magnetic resonance (MR) imaging to map functional connectivity of the human spinal cord. Studies were performed in nine self-declared healthy volunteers with informed consent and institutional review board approval. Resting-state functional MR imaging was performed to map functional connectivity of the human cervical spinal cord from C1 to C4 at 1 × 1 × 3-mm resolution with a 3.0-T clinical MR imaging unit. Independent component analysis (ICA) was performed to derive resting-state functional MR imaging z-score maps rendered on two-dimensional and three-dimensional images. Seed-based analysis was performed for cross validation with ICA networks by using Pearson correlation. Reproducibility analysis of resting-state functional MR imaging maps from four repeated trials in a single participant yielded a mean z score of 6 ± 1. Resting-state functional MR imaging with a 3.0-T clinical MR imaging unit and standard MR imaging protocols and hardware reveals prominent functional connectivity patterns within the spinal cord gray matter, consistent with known functional and anatomic layouts of the spinal cord.
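
    A minimal sketch of spatial ICA on a motion-corrected, detrended resting-state series, assuming the data have already been masked and reshaped to a (timepoints x voxels) matrix; scikit-learn's FastICA stands in here for whatever ICA implementation the study used, and the component count is arbitrary.

        import numpy as np
        from sklearn.decomposition import FastICA

        def spatial_ica_maps(bold, n_components=10):
            """Independent spatial maps from a (timepoints, voxels) BOLD matrix.
            Voxels are treated as samples so the extracted sources are spatial."""
            ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
            sources = ica.fit_transform(bold.T)        # shape: (voxels, n_components)
            maps = sources.T                           # one spatial map per row
            maps = (maps - maps.mean(axis=1, keepdims=True)) / maps.std(axis=1, keepdims=True)
            return maps                                # z-scored maps, (n_components, voxels)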

  15. Mammographic quantitative image analysis and biologic image composition for breast lesion characterization and classification

    Energy Technology Data Exchange (ETDEWEB)

    Drukker, Karen, E-mail: kdrukker@uchicago.edu; Giger, Maryellen L.; Li, Hui [Department of Radiology, University of Chicago, Chicago, Illinois 60637 (United States); Duewer, Fred; Malkov, Serghei; Joe, Bonnie; Kerlikowske, Karla; Shepherd, John A. [Radiology Department, University of California, San Francisco, California 94143 (United States); Flowers, Chris I. [Department of Radiology, University of South Florida, Tampa, Florida 33612 (United States); Drukteinis, Jennifer S. [Department of Radiology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida 33612 (United States)

    2014-03-15

    Purpose: To investigate whether biologic image composition of mammographic lesions can improve upon existing mammographic quantitative image analysis (QIA) in estimating the probability of malignancy. Methods: The study population consisted of 45 breast lesions imaged with dual-energy mammography prior to breast biopsy with final diagnosis resulting in 10 invasive ductal carcinomas, 5 ductal carcinomas in situ, 11 fibroadenomas, and 19 other benign diagnoses. Analysis was threefold: (1) The raw low-energy mammographic images were analyzed with an established in-house QIA method, “QIA alone,” (2) the three-compartment breast (3CB) composition measure—derived from the dual-energy mammography—of water, lipid, and protein thickness were assessed, “3CB alone”, and (3) information from QIA and 3CB was combined, “QIA + 3CB.” Analysis was initiated from radiologist-indicated lesion centers and was otherwise fully automated. Steps of the QIA and 3CB methods were lesion segmentation, characterization, and subsequent classification for malignancy in leave-one-case-out cross-validation. Performance assessment included box plots, Bland–Altman plots, and Receiver Operating Characteristic (ROC) analysis. Results: The area under the ROC curve (AUC) for distinguishing between benign and malignant lesions (invasive and DCIS) was 0.81 (standard error 0.07) for the “QIA alone” method, 0.72 (0.07) for “3CB alone” method, and 0.86 (0.04) for “QIA+3CB” combined. The difference in AUC was 0.043 between “QIA + 3CB” and “QIA alone” but failed to reach statistical significance (95% confidence interval [–0.17 to + 0.26]). Conclusions: In this pilot study analyzing the new 3CB imaging modality, knowledge of the composition of breast lesions and their periphery appeared additive in combination with existing mammographic QIA methods for the distinction between different benign and malignant lesion types.

  16. Complete Bell-state analysis for a single-photon hybrid entangled state

    International Nuclear Information System (INIS)

    Sheng Yu-Bo; Zhou Lan; Cheng Wei-Wen; Gong Long-Yan; Wang Lei; Zhao Sheng-Mei

    2013-01-01

    We propose a scheme capable of performing complete Bell-state analysis for a single-photon hybrid entangled state. Our single-photon state is encoded in both polarization and frequency degrees of freedom. The setup of the scheme is composed of polarizing beam splitters, half wave plates, frequency shifters, and independent wavelength division multiplexers, which are feasible using current technology. We also show that with this setup we can perform complete two-photon Bell-state analysis schemes for polarization degrees of freedom. Moreover, it can also be used to perform the teleportation scheme between different degrees of freedom. This setup may allow extensive applications in current quantum communications

  17. Determination of the polarization states of an arbitrary polarized terahertz beam: vectorial vortex analysis.

    Science.gov (United States)

    Wakayama, Toshitaka; Higashiguchi, Takeshi; Oikawa, Hiroki; Sakaue, Kazuyuki; Washio, Masakazu; Yonemura, Motoki; Yoshizawa, Toru; Tyo, J Scott; Otani, Yukitoshi

    2015-03-24

    Vectorial vortex analysis is used to determine the polarization states of an arbitrarily polarized terahertz (0.1-1.6 THz) beam using THz achromatic axially symmetric wave (TAS) plates, which have a phase retardance of Δ = 163° and are made of polytetrafluorethylene. Polarized THz beams are converted into THz vectorial vortex beams with no spatial or wavelength dispersion, and the unknown polarization states of the incident THz beams are reconstructed. The polarization determination is also demonstrated at frequencies of 0.16 and 0.36 THz. The results obtained by solving the inverse source problem agree with the values used in the experiments. This vectorial vortex analysis enables a determination of the polarization states of the incident THz beam from the THz image. The polarization states of the beams are estimated after they pass through the TAS plates. The results validate this new approach to polarization detection for intense THz sources. It could find application in such cutting edge areas of physics as nonlinear THz photonics and plasmon excitation, because TAS plates not only instantaneously elucidate the polarization of an enclosed THz beam but can also passively control THz vectorial vortex beams.

  18. Digital transplantation pathology: combining whole slide imaging, multiplex staining and automated image analysis.

    Science.gov (United States)

    Isse, K; Lesniak, A; Grama, K; Roysam, B; Minervini, M I; Demetris, A J

    2012-01-01

    Conventional histopathology is the gold standard for allograft monitoring, but its value proposition is increasingly questioned. "-Omics" analysis of tissues, peripheral blood and fluids and targeted serologic studies provide mechanistic insights into allograft injury not currently provided by conventional histology. Microscopic biopsy analysis, however, provides valuable and unique information: (a) spatial-temporal relationships; (b) rare events/cells; (c) complex structural context; and (d) integration into a "systems" model. Nevertheless, except for immunostaining, no transformative advancements have "modernized" routine microscopy in over 100 years. Pathologists now team with hardware and software engineers to exploit remarkable developments in digital imaging, nanoparticle multiplex staining, and computational image analysis software to bridge the traditional histology-global "-omic" analyses gap. Included are side-by-side comparisons, objective biopsy finding quantification, multiplexing, automated image analysis, and electronic data and resource sharing. Current utilization for teaching, quality assurance, conferencing, consultations, research and clinical trials is evolving toward implementation for low-volume, high-complexity clinical services like transplantation pathology. Cost, complexities of implementation, fluid/evolving standards, and unsettled medical/legal and regulatory issues remain as challenges. Regardless, challenges will be overcome and these technologies will enable transplant pathologists to increase information extraction from tissue specimens and contribute to cross-platform biomarker discovery for improved outcomes. ©Copyright 2011 The American Society of Transplantation and the American Society of Transplant Surgeons.

  19. Computerized analysis of brain perfusion parameter images

    International Nuclear Information System (INIS)

    Turowski, B.; Haenggi, D.; Wittsack, H.J.; Beck, A.; Aurich, V.

    2007-01-01

    Purpose: The development of a computerized method which allows a direct quantitative comparison of perfusion parameters. The display should allow a clear direct comparison of brain perfusion parameters in different vascular territories and over the course of time. The analysis is intended to be the basis for further evaluation of cerebral vasospasm after subarachnoid hemorrhage (SAH). The method should permit early diagnosis of cerebral vasospasm. Materials and Methods: The Angiotux 2D-ECCET software was developed with a close cooperation between computer scientists and clinicians. Starting from parameter images of brain perfusion, the cortex was marked, segmented and assigned to definite vascular territories. The underlying values were averages for each segment and were displayed in a graph. If a follow-up was available, the mean values of the perfusion parameters were displayed in relation to time. The method was developed under consideration of CT perfusion values but is applicable for other methods of perfusion imaging. Results: Computerized analysis of brain perfusion parameter images allows an immediate comparison of these parameters and follow-up of mean values in a clear and concise manner. Values are related to definite vascular territories. The tabular output facilitates further statistic evaluations. The computerized analysis is precisely reproducible, i. e., repetitions result in exactly the same output. (orig.)
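
    The core of such an analysis, averaging a perfusion parameter over each predefined vascular territory, reduces to a few lines; the sketch below assumes the cortex has already been segmented into an integer label image (0 = background) co-registered with the parameter map, which is not part of the cited software itself.

        import numpy as np

        def territory_means(param_map, territory_labels):
            """Mean perfusion value (e.g. CBF, MTT) per labelled vascular territory."""
            return {int(t): float(param_map[territory_labels == t].mean())
                    for t in np.unique(territory_labels) if t != 0}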

  20. Evaluation of diffuse hepatic diseases by integrated image, SPECT and numerical taxonomic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kobayashi, Shin

    1987-02-01

    In 135 patients with various hepatic diseases, cardiopulmonary circulation and hepatic accumulation of the activity were collected for 100 sec after bolus injection of 111-222 MBq (3-6 mCi) of 99mTc-phytate, and then integrated as a single image. Anterior, right lateral and posterior planar images, and hepatosplenic SPECT images were obtained thereafter. The lung-to-liver count ratio (P/L) was estimated from the integrated image. Liver volume (HV), spleen volume (SV) and the liver-to-spleen count ratio (MHC/MSC) were calculated using the data obtained by SPECT. P/L was useful as an index of effective hepatic blood flow. MHC/MSC was closely correlated with the grade of portal hypertension. HV or SV alone showed low clinical value in discriminating liver diseases. Principal component analysis was applied to the 4 above-mentioned radionuclide data and the following 11 laboratory data: total serum protein, serum albumin, glutamate oxaloacetate transaminase (GOT), glutamate pyruvate transaminase (GPT), lactic dehydrogenase (LDH), alkaline phosphatase (AL-P), zinc sulfate turbidity test (ZTT), thymol turbidity test (TTT), γ-glutamyl transpeptidase (γ-GTP), cholinesterase (Ch-E), and total bilirubin (T-Bil). These fifteen data were condensed to 5 principal components, and then cluster analysis was carried out among the 135 patients. The subjects were classified into 7 small groups. In groups (G) I to III, the frequency of liver cirrhosis was high, whereas from GIV to GVII the frequency of normal cases increased gradually. From the above results, cluster analysis seemed to reflect the pathophysiological state and the grade of the disease. This method might be useful for estimating the grade of damage in diffuse hepatic disease and could provide a good objective evaluation in follow-up studies. (J.P.N.).
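
    The flavour of the statistical part (condensing fifteen variables into five principal components and then grouping patients) can be sketched as below. Scikit-learn's KMeans is used here merely as a stand-in for the cluster analysis reported in the study, and standardisation of the mixed radionuclide/laboratory variables is an added assumption.

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        def condense_and_cluster(data, n_components=5, n_clusters=7):
            """data: (patients, variables) matrix of radionuclide and laboratory values."""
            z = StandardScaler().fit_transform(data)        # put variables on a common scale
            scores = PCA(n_components=n_components).fit_transform(z)
            labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(scores)
            return scores, labels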

  1. Combination of diffusion tensor and functional magnetic resonance imaging during recovery from the vegetative state

    Directory of Open Access Journals (Sweden)

    Fernández-Espejo Davinia

    2010-09-01

    Full Text Available Abstract Background: The rate of recovery from the vegetative state (VS) is low. Currently, little is known of the mechanisms and cerebral changes that accompany those relatively rare cases of good recovery. Here, we combined functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI) to study the evolution of one VS patient at one month post-ictus and again twelve months later when he had recovered consciousness. Methods: fMRI was used to investigate cortical responses to passive language stimulation as well as task-induced deactivations related to the default-mode network. DTI was used to assess the integrity of the global white matter and the arcuate fasciculus. We also performed a neuropsychological assessment at the time of the second MRI examination in order to characterize the profile of cognitive deficits. Results: fMRI analysis revealed anatomically appropriate activation to speech in both the first and the second scans but a reduced pattern of task-induced deactivations in the first scan. In the second scan, following the recovery of consciousness, this pattern became more similar to that classically described for the default-mode network. DTI analysis revealed relative preservation of the arcuate fasciculus and of the global normal-appearing white matter at both time points. The neuropsychological assessment revealed recovery of receptive linguistic functioning by 12 months post-ictus. Conclusions: These results suggest that the combination of different structural and functional imaging modalities may provide a powerful means for assessing the mechanisms involved in the recovery from the VS.

  2. Image decomposition as a tool for validating stress analysis models

    Directory of Open Access Journals (Sweden)

    Mottershead J.

    2010-06-01

    Full Text Available It is good practice to validate analytical and numerical models used in stress analysis for engineering design by comparison with measurements obtained from real components either in service or in the laboratory. In reality, this critical step is often neglected or reduced to placing a single strain gage at the predicted hot-spot of stress. Modern techniques of optical analysis allow full-field maps of displacement, strain and/or stress to be obtained from real components with relative ease and at modest cost. However, validations continue to be performed only at predicted and/or observed hot-spots, and most of the wealth of data is ignored. It is proposed that image decomposition methods, commonly employed in techniques such as fingerprinting and iris recognition, can be employed to validate stress analysis models by comparing all of the key features in the data from the experiment and the model. Image decomposition techniques such as Zernike moments and Fourier transforms have been used to decompose full-field distributions of strain generated from optical techniques such as digital image correlation and thermoelastic stress analysis, as well as from analytical and numerical models, by treating the strain distributions as images. The result of the decomposition is 10^1 to 10^2 image descriptors instead of the 10^5 or 10^6 pixels in the original data. As a consequence, it is relatively easy to make a statistical comparison of the image descriptors from the experiment and from the analytical/numerical model and to provide a quantitative assessment of the stress analysis.
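
    As an example of image decomposition in the sense used above, the sketch below reduces a full-field strain map to a short vector of low-order 2-D Fourier coefficient magnitudes; the block size is arbitrary, and Zernike or other moment bases would be handled analogously. Comparing model and experiment then reduces to a statistical test on two short descriptor vectors.

        import numpy as np

        def fourier_descriptors(field, k=10):
            """Keep the k x k lowest-frequency 2-D Fourier coefficients of a full-field map."""
            spec = np.fft.fftshift(np.fft.fft2(field))
            cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
            block = spec[cy - k // 2: cy + (k + 1) // 2, cx - k // 2: cx + (k + 1) // 2]
            return np.abs(block).ravel()      # ~1e2 descriptors instead of 1e5-1e6 pixels

        # e.g. np.corrcoef(fourier_descriptors(model_strain),
        #                  fourier_descriptors(measured_strain))[0, 1]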

  3. The analysis of image feature robustness using cometcloud

    Directory of Open Access Journals (Sweden)

    Xin Qi

    2012-01-01

    Full Text Available The robustness of image features is a very important consideration in quantitative image analysis. The objective of this paper is to investigate the robustness of a range of image texture features using hematoxylin-stained breast tissue microarray slides, which are assessed while simulating different imaging challenges including defocus, changes in magnification, and variations in illumination, noise, compression, distortion, and rotation. We employed five texture analysis methods and tested them while introducing all of the challenges listed above. The texture features that were evaluated include the co-occurrence matrix, center-symmetric auto-correlation, the texture feature coding method, the local binary pattern, and textons. Due to the independence of each transformation and texture descriptor, a network-structured combination was proposed and deployed on the Rutgers private cloud. The experiments utilized 20 randomly selected tissue microarray cores. All the combinations of the image transformations and deformations were calculated, and the whole feature extraction procedure was completed in 70 minutes using a cloud equipped with 20 nodes. Center-symmetric auto-correlation outperforms all the other four texture descriptors but also requires the longest computational time. It is roughly 10 times slower than the local binary pattern and textons. From a speed perspective, both the local binary pattern and texton features provided excellent performance for classification and content-based image retrieval.
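
    For one of the descriptors above, the local binary pattern, the robustness test boils down to comparing feature histograms before and after each simulated degradation; a minimal scikit-image sketch (with assumed radius and neighbourhood parameters) follows.

        import numpy as np
        from skimage.feature import local_binary_pattern

        def lbp_histogram(gray, radius=1, n_points=8):
            """Uniform LBP histogram for one grayscale image tile."""
            codes = local_binary_pattern(gray, n_points, radius, method="uniform")
            hist, _ = np.histogram(codes, bins=np.arange(n_points + 3), density=True)
            return hist   # compare histograms before/after a simulated degradation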

  4. CARS hyperspectral imaging of cartilage aiming for state discrimination of cell

    Science.gov (United States)

    Shiozawa, Manabu; Shirai, Masataka; Izumisawa, Junko; Tanabe, Maiko; Watanabe, Koichi

    2016-03-01

    Non-invasive cell analyses are increasingly important for the medical field. A CARS microscope is a non-invasive imaging instrument that enables images indicating molecular distribution to be obtained. Some studies on discrimination of cell state using CARS images of lipid have been reported. However, due to low signal intensity, it is still challenging to obtain images of the fingerprint region (800~1800 cm-1), in which many spectral peaks correspond to the composition of a cell. Here, to identify cell differentiation using multiplex CARS, we investigated hyperspectral imaging of the fingerprint region of living cells. To perform multiplex CARS, we used a prototype of a compact light source, which consists of a microchip laser, a single-mode fiber, and a photonic crystal fiber to generate supercontinuum light. Assuming application to regenerative medicine, we chose a cartilage cell, whose differentiation is difficult to identify from changes in cell morphology. Because one of the major components of cartilage is collagen, we focused on the distribution of proline, which accounts for approximately 20% of collagen in general. The spectrum quality was improved by optical adjustments of the power branching ratio and the divergence of the broadband Stokes light. Hyperspectral images were successfully obtained with these improvements. The periphery of a cartilage cell was highlighted in the CARS image of proline, suggesting correspondence with collagen generated as extracellular matrix. These results indicate the potential of cell analysis using CARS hyperspectral imaging.

  5. Method for evaluation of human induced pluripotent stem cell quality using image analysis based on the biological morphology of cells.

    Science.gov (United States)

    Wakui, Takashi; Matsumoto, Tsuyoshi; Matsubara, Kenta; Kawasaki, Tomoyuki; Yamaguchi, Hiroshi; Akutsu, Hidenori

    2017-10-01

    We propose an image analysis method for quality evaluation of human pluripotent stem cells based on biologically interpretable features. It is important to maintain the undifferentiated state of induced pluripotent stem cells (iPSCs) while culturing the cells during propagation. Cell culture experts visually select good quality cells exhibiting the morphological features characteristic of undifferentiated cells. Experts have empirically determined that these features comprise prominent and abundant nucleoli, less intercellular spacing, and fewer differentiating cellular nuclei. We quantified these features based on experts' visual inspection of phase contrast images of iPSCs and found that these features are effective for evaluating iPSC quality. We then developed an iPSC quality evaluation method using an image analysis technique. The method allowed accurate classification, equivalent to visual inspection by experts, of three iPSC cell lines.

  6. Quantitative Image Simulation and Analysis of Nanoparticles

    DEFF Research Database (Denmark)

    Madsen, Jacob; Hansen, Thomas Willum

    High-resolution transmission electron microscopy (HRTEM) has become a routine analysis tool for structural characterization at atomic resolution, and with the recent development of in-situ TEMs, it is now possible to study catalytic nanoparticles under reaction conditions. However, the connection between an experimental image and the underlying...... physical phenomena or structure is not always straightforward. The aim of this thesis is to use image simulation to better understand observations from HRTEM images. Surface strain is known to be important for the performance of nanoparticles. Using simulation, we estimate the precision and accuracy...... of strain measurements from TEM images, and investigate the stability of these measurements to microscope parameters. This is followed by our efforts toward simulating metal nanoparticles on a metal-oxide support using the Charge Optimized Many Body (COMB) interatomic potential. The simulated interface...

  7. Simultaneous dual-radionuclide myocardial perfusion imaging with a solid-state dedicated cardiac camera.

    Science.gov (United States)

    Ben-Haim, Simona; Kacperski, Krzysztof; Hain, Sharon; Van Gramberg, Dean; Hutton, Brian F; Erlandsson, Kjell; Sharir, Tali; Roth, Nathaniel; Waddington, Wendy A; Berman, Daniel S; Ell, Peter J

    2010-08-01

    We compared simultaneous dual-radionuclide (DR) stress and rest myocardial perfusion imaging (MPI) with a novel solid-state cardiac camera and a conventional SPECT camera with separate stress and rest acquisitions. Of 27 consecutive patients recruited, 24 (64.5±11.8 years of age, 16 men) were injected with 74 MBq of (201)Tl (rest) and 250 MBq of (99m)Tc-MIBI (stress). Conventional MPI acquisition times for stress and rest are 21 min and 16 min, respectively. Rest (201)Tl for 6 min and simultaneous DR 15-min list-mode gated scans were performed on a D-SPECT cardiac scanner. In 11 patients DR D-SPECT was performed first, and in 13 patients conventional stress (99m)Tc-MIBI SPECT imaging was performed followed by DR D-SPECT. The DR D-SPECT data were processed using a spill-over and scatter correction method. DR D-SPECT images were compared with rest (201)Tl D-SPECT and with conventional SPECT images by visual analysis employing the 17-segment model and a five-point scale (0 normal, 4 absent) to calculate the summed stress and rest scores. Image quality was assessed on a four-point scale (1 poor, 4 very good) and gut activity was assessed on a four-point scale (0 none, 3 high). Conventional MPI studies were abnormal at stress in 17 patients and at rest in 9 patients. In the 17 abnormal stress studies DR D-SPECT MPI showed 113 abnormal segments and conventional MPI showed 93 abnormal segments. In the nine abnormal rest studies DR D-SPECT showed 45 abnormal segments and conventional MPI showed 48 abnormal segments. The summed stress and rest scores on conventional SPECT and DR D-SPECT were highly correlated (r=0.9790 and 0.9694, respectively). The summed scores of rest (201)Tl D-SPECT and DR D-SPECT were also highly correlated (r=0.9968). Stress perfusion defects were significantly larger on stress DR D-SPECT images, and five of these patients were imaged earlier by D-SPECT than by conventional SPECT. Fast and high-quality simultaneous DR MPI is feasible with D-SPECT in a

  8. Computational methods in molecular imaging technologies

    CERN Document Server

    Gunjan, Vinit Kumar; Venkatesh, C; Amarnath, M

    2017-01-01

    This book highlights the experimental investigations that have been carried out on magnetic resonance imaging and computed tomography (MRI & CT) images using state-of-the-art computational image processing techniques, and tabulates the statistical values wherever necessary. In a very simple and straightforward way, it explains how image processing methods are used to improve the quality of medical images and facilitate analysis. It offers a valuable resource for researchers, engineers, medical doctors and bioinformatics experts alike.

  9. Fractal analysis in radiological and nuclear medicine perfusion imaging: a systematic review

    Energy Technology Data Exchange (ETDEWEB)

    Michallek, Florian; Dewey, Marc [Humboldt-Universitaet zu Berlin, Freie Universitaet Berlin, Charite - Universitaetsmedizin Berlin, Medical School, Department of Radiology, Berlin (Germany)

    2014-01-15

    To provide an overview of recent research in fractal analysis of tissue perfusion imaging, using standard radiological and nuclear medicine imaging techniques including computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, positron emission tomography (PET) and single-photon emission computed tomography (SPECT) and to discuss implications for different fields of application. A systematic review of fractal analysis for tissue perfusion imaging was performed by searching the databases MEDLINE (via PubMed), EMBASE (via Ovid) and ISI Web of Science. Thirty-seven eligible studies were identified. Fractal analysis was performed on perfusion imaging of tumours, lung, myocardium, kidney, skeletal muscle and cerebral diseases. Clinically, different aspects of tumour perfusion and cerebral diseases were successfully evaluated including detection and classification. In physiological settings, it was shown that perfusion under different conditions and in various organs can be properly described using fractal analysis. Fractal analysis is a suitable method for quantifying heterogeneity from radiological and nuclear medicine perfusion images under a variety of conditions and in different organs. Further research is required to exploit physiologically proven fractal behaviour in the clinical setting. (orig.)

  10. Fractal analysis in radiological and nuclear medicine perfusion imaging: a systematic review

    International Nuclear Information System (INIS)

    Michallek, Florian; Dewey, Marc

    2014-01-01

    To provide an overview of recent research in fractal analysis of tissue perfusion imaging, using standard radiological and nuclear medicine imaging techniques including computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, positron emission tomography (PET) and single-photon emission computed tomography (SPECT) and to discuss implications for different fields of application. A systematic review of fractal analysis for tissue perfusion imaging was performed by searching the databases MEDLINE (via PubMed), EMBASE (via Ovid) and ISI Web of Science. Thirty-seven eligible studies were identified. Fractal analysis was performed on perfusion imaging of tumours, lung, myocardium, kidney, skeletal muscle and cerebral diseases. Clinically, different aspects of tumour perfusion and cerebral diseases were successfully evaluated including detection and classification. In physiological settings, it was shown that perfusion under different conditions and in various organs can be properly described using fractal analysis. Fractal analysis is a suitable method for quantifying heterogeneity from radiological and nuclear medicine perfusion images under a variety of conditions and in different organs. Further research is required to exploit physiologically proven fractal behaviour in the clinical setting. (orig.)

  11. State of the art: noninvasive imaging and management of neurovascular trauma

    Directory of Open Access Journals (Sweden)

    Cothren C Clay

    2007-01-01

    Full Text Available Neurotrauma represents a significant public health problem, accounting for a substantial proportion of the morbidity and mortality associated with all traumatic injuries. Both blunt and penetrating injuries to cervicocerebral vessels are clinically important and are likely more common than previously recognized. Imaging is an important component of the evaluation of individuals presenting with potential neurovascular injuries, all the more so because many of these vascular injuries are clinically silent. Management of injuries, particularly those caused by blunt trauma, is constantly evolving. This article addresses the current state of imaging and treatment of such injuries.

  12. Independent component analysis based filtering for penumbral imaging

    International Nuclear Information System (INIS)

    Chen Yenwei; Han Xianhua; Nozaki, Shinya

    2004-01-01

    We propose a filtering method based on independent component analysis (ICA) for Poisson noise reduction. In the proposed filtering, the image is first transformed to the ICA domain and the noise components are then removed by soft thresholding (shrinkage). The proposed filter, which is used as a preprocessing step before reconstruction, has been successfully applied to penumbral imaging. Both simulation and experimental results show that the reconstructed image is dramatically improved in comparison to reconstruction without the noise-removal filter.
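
    A simplified stand-in for the ICA-domain soft thresholding described above (not the authors' implementation), assuming scikit-learn and NumPy: image patches are transformed with FastICA, shrunk, and transformed back.

```python
# Minimal sketch of ICA-domain soft thresholding for Poisson-noise reduction
# (illustrative; the image, patch size, and threshold are arbitrary choices).
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

rng = np.random.default_rng(2)
clean = np.outer(np.hanning(64), np.hanning(64))    # smooth stand-in image
noisy = rng.poisson(clean * 50.0) / 50.0            # Poisson-noise corrupted copy

patches = extract_patches_2d(noisy, (8, 8))         # all overlapping 8x8 patches
X = patches.reshape(len(patches), -1)

ica = FastICA(n_components=32, random_state=0, max_iter=1000)
S = ica.fit_transform(X)                            # ICA-domain coefficients

tau = 0.5 * np.std(S)                               # heuristic shrinkage threshold
S_shrunk = np.sign(S) * np.maximum(np.abs(S) - tau, 0.0)   # soft thresholding

X_denoised = ica.inverse_transform(S_shrunk)
denoised = reconstruct_from_patches_2d(X_denoised.reshape(patches.shape), noisy.shape)
print("residual RMS:", float(np.sqrt(np.mean((denoised - clean) ** 2))))
```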

  13. MORPHOLOGY BY IMAGE ANALYSIS K. Belaroui and M. N Pons ...

    African Journals Online (AJOL)

    31 Dec. 2012 ... Keywords: characterization; particle size; morphology; image analysis; porous media. 1. INTRODUCTION. The power of image analysis as ... into a digital image by means of an analog-to-digital (A/D) converter. The points of the image are arranged on a square grid, ...

  14. Traffic analysis and control using image processing

    Science.gov (United States)

    Senthilkumar, K.; Ellappan, Vijayan; Arun, A. R.

    2017-11-01

    This paper presents work to date on traffic analysis and control and describes an approach to regulating traffic using image processing and MATLAB. The approach compares reference images with images of the street captured in order to determine the traffic level percentage and to set the traffic signal timing accordingly, thereby reducing stoppage at traffic lights. The concept aims to address real-life street scenarios by enriching traffic lights with image receivers such as HD cameras and image processors. The input is then imported into MATLAB to calculate the traffic on the roads, and the results are used to adjust the traffic light timings on a particular street; compared with other similar proposals, the added value is in addressing a real, large-scale instance.
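
    A minimal sketch of the comparison step described above, assuming NumPy; the reference and current frames are synthetic stand-ins for camera images, and the mapping from traffic level to green time is purely illustrative.

```python
# Minimal sketch: estimate a traffic-level percentage by differencing the current
# camera frame against a reference image of the empty street (illustrative only).
import numpy as np

def traffic_level(reference_gray, current_gray, diff_threshold=0.15):
    """Fraction of road pixels that differ noticeably from the empty-road reference."""
    diff = np.abs(current_gray.astype(float) - reference_gray.astype(float)) / 255.0
    occupied = diff > diff_threshold
    return 100.0 * occupied.mean()

def green_time(level_percent, min_s=10, max_s=60):
    """Map the traffic level percentage to a green-light duration in seconds."""
    return min_s + (max_s - min_s) * min(level_percent, 100.0) / 100.0

rng = np.random.default_rng(3)
reference = rng.integers(0, 50, size=(240, 320), dtype=np.uint8)   # empty street
current = reference.copy()
current[100:180, 80:260] = 200                                     # "vehicles" region

level = traffic_level(reference, current)
print(f"traffic level: {level:.1f}%  ->  green time: {green_time(level):.0f} s")
```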

  15. Developments in Dynamic Analysis for quantitative PIXE true elemental imaging

    International Nuclear Information System (INIS)

    Ryan, C.G.

    2001-01-01

    Dynamic Analysis (DA) is a method for projecting quantitative major and trace element images from PIXE event data-streams (off-line or on-line) obtained using the Nuclear Microprobe. The method separates full elemental spectral signatures to produce images that strongly reject artifacts due to overlapping elements, detector effects (such as escape peaks and tailing) and background. The images are also quantitative, stored in ppm-charge units, enabling images to be directly interrogated for the concentrations of all elements in areas of the images. Recent advances in the method include the correction for changing X-ray yields due to varying sample compositions across the image area and the construction of statistical variance images. The resulting accuracy of major element concentrations extracted directly from these images is better than 3% relative as determined from comparisons with electron microprobe point analysis. These results are complemented by error estimates derived from the variance images together with detection limits. This paper provides an update of research on these issues, introduces new software designed to make DA more accessible, and illustrates the application of the method to selected geological problems.

  16. Feed particle size evaluation: conventional approach versus digital holography based image analysis

    Directory of Open Access Journals (Sweden)

    Vittorio Dell’Orto

    2010-01-01

    Full Text Available The aim of this study was to evaluate the application of an image analysis approach based on digital holography for defining particle size, in comparison with the sieve shaker method (sieving method) as the reference method. For this purpose, ground corn meal was analyzed by a Retsch VS 1000 sieve shaker and by the image analysis approach based on digital holography. Particle sizes from digital holography were compared with results obtained by screen (sieving) analysis for each size class using a cumulative distribution plot. Comparison between particle size values obtained by the sieving method and by image analysis indicated that the values were comparable in terms of particle size information, introducing a potential application for digital holography and image analysis in the feed industry.

  17. Altered Gray Matter Volume and Resting-State Connectivity in Individuals With Internet Gaming Disorder: A Voxel-Based Morphometry and Resting-State Functional Magnetic Resonance Imaging Study

    Science.gov (United States)

    Seok, Ji-Woo; Sohn, Jin-Hun

    2018-01-01

    Neuroimaging studies on the characteristics of individuals with Internet gaming disorder (IGD) have been accumulating due to growing concerns regarding the psychological and social problems associated with Internet use. However, relatively little is known about the brain characteristics underlying IGD, such as the associated functional connectivity and structure. The aim of this study was to investigate alterations in gray matter (GM) volume and functional connectivity during resting state in individuals with IGD using voxel-based morphometry and a resting-state connectivity analysis. The participants included 20 individuals with IGD and 20 age- and sex-matched healthy controls. Resting-state functional and structural images were acquired for all participants using 3 T magnetic resonance imaging. We also measured the severity of IGD and impulsivity using psychological scales. The results show that IGD severity was positively correlated with GM volume in the left caudate (p < 0.05, corrected for multiple comparisons), and negatively associated with functional connectivity between the left caudate and the right middle frontal gyrus (p < 0.05, corrected for multiple comparisons). This study demonstrates that IGD is associated with neuroanatomical changes in the right middle frontal cortex and the left caudate. These are important brain regions for reward and cognitive control processes, and structural and functional abnormalities in these regions have been reported for other addictions, such as substance abuse and pathological gambling. The findings suggest that structural deficits and resting-state functional impairments in the frontostriatal network may be associated with IGD and provide new insights into the underlying neural mechanisms of IGD. PMID:29636704

  18. Altered Gray Matter Volume and Resting-State Connectivity in Individuals With Internet Gaming Disorder: A Voxel-Based Morphometry and Resting-State Functional Magnetic Resonance Imaging Study

    Directory of Open Access Journals (Sweden)

    Ji-Woo Seok

    2018-03-01

    Full Text Available Neuroimaging studies on the characteristics of individuals with Internet gaming disorder (IGD) have been accumulating due to growing concerns regarding the psychological and social problems associated with Internet use. However, relatively little is known about the brain characteristics underlying IGD, such as the associated functional connectivity and structure. The aim of this study was to investigate alterations in gray matter (GM) volume and functional connectivity during resting state in individuals with IGD using voxel-based morphometry and a resting-state connectivity analysis. The participants included 20 individuals with IGD and 20 age- and sex-matched healthy controls. Resting-state functional and structural images were acquired for all participants using 3 T magnetic resonance imaging. We also measured the severity of IGD and impulsivity using psychological scales. The results show that IGD severity was positively correlated with GM volume in the left caudate (p < 0.05, corrected for multiple comparisons), and negatively associated with functional connectivity between the left caudate and the right middle frontal gyrus (p < 0.05, corrected for multiple comparisons). This study demonstrates that IGD is associated with neuroanatomical changes in the right middle frontal cortex and the left caudate. These are important brain regions for reward and cognitive control processes, and structural and functional abnormalities in these regions have been reported for other addictions, such as substance abuse and pathological gambling. The findings suggest that structural deficits and resting-state functional impairments in the frontostriatal network may be associated with IGD and provide new insights into the underlying neural mechanisms of IGD.

  19. Quantitative analysis of γ-oryzanol content in cold pressed rice bran oil by TLC-image analysis method.

    Science.gov (United States)

    Sakunpak, Apirak; Suksaeree, Jirapornchai; Monton, Chaowalit; Pathompak, Pathamaporn; Kraisintu, Krisana

    2014-02-01

    To develop and validate an image analysis method for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. TLC-densitometric and TLC-image analysis methods were developed, validated, and used for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. The results obtained by these two different quantification methods were compared by paired t-test. Both assays provided good linearity, accuracy, reproducibility and selectivity for determination of γ-oryzanol. The TLC-densitometric and TLC-image analysis methods provided a similar reproducibility, accuracy and selectivity for the quantitative determination of γ-oryzanol in cold pressed rice bran oil. A statistical comparison of the quantitative determinations of γ-oryzanol in samples did not show any statistically significant difference between TLC-densitometric and TLC-image analysis methods. As both methods were found to be equal, they therefore can be used for the determination of γ-oryzanol in cold pressed rice bran oil.
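
    The paired comparison of the two methods described above can be sketched as follows, assuming SciPy; the listed concentrations are made-up numbers for illustration only.

```python
# Minimal sketch: paired t-test between gamma-oryzanol contents measured by the
# two methods on the same samples (hypothetical values, illustration only).
from scipy import stats

densitometric = [1.62, 1.58, 1.71, 1.49, 1.66, 1.55]   # hypothetical % w/w values
image_analysis = [1.60, 1.59, 1.69, 1.51, 1.64, 1.57]

t_stat, p_value = stats.ttest_rel(densitometric, image_analysis)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# p > 0.05 would indicate no statistically significant difference between the methods.
```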

  20. An optimal big data workflow for biomedical image analysis

    Directory of Open Access Journals (Sweden)

    Aurelle Tchagna Kouanou

    Full Text Available Background and objective: In the medical field, data volume is growing rapidly, and traditional methods cannot manage it efficiently. In biomedical computation, the continuing challenges are the management, analysis, and storage of biomedical data. Nowadays, big data technology plays a significant role in the management, organization, and analysis of data, using machine learning and artificial intelligence techniques; it also allows quick access to data using NoSQL databases. Big data technologies thus include new frameworks for processing medical data such as biomedical images. It becomes very important to develop methods and/or architectures based on big data technologies for the complete processing of biomedical image data. Method: This paper describes big data analytics for biomedical images, shows examples reported in the literature, briefly discusses new methods used in processing, and offers conclusions. We argue for adapting and extending related work in the field of big data software, using the Hadoop and Spark frameworks, which provide an optimal and efficient architecture for biomedical image analysis. This paper thus gives a broad overview of big data analytics to automate biomedical image diagnosis. A workflow with optimal methods and algorithms for each step is proposed. Results: Two architectures for image classification are suggested. We use the Hadoop framework to design the first, and the Spark framework for the second. The proposed Spark architecture allows us to develop appropriate and efficient methods to leverage a large number of images for classification, which can be customized with respect to each other. Conclusions: The proposed architectures are more complete, easier to use, and adaptable in all of the steps from conception. The obtained Spark architecture is the most complete, because it facilitates the implementation of algorithms with its embedded libraries. Keywords: Biomedical images, Big

  1. A hyperspectral image analysis workbench for environmental science applications

    Energy Technology Data Exchange (ETDEWEB)

    Christiansen, J.H.; Zawada, D.G.; Simunich, K.L.; Slater, J.C.

    1992-01-01

    A significant challenge to the information sciences is to provide more powerful and accessible means to exploit the enormous wealth of data available from high-resolution imaging spectrometry, or "hyperspectral" imagery, for analysis, for mapping purposes, and for input to environmental modeling applications. As an initial response to this challenge, Argonne's Advanced Computer Applications Center has developed a workstation-based prototype software workbench which employs AI techniques and other advanced approaches to deduce surface characteristics and extract features from the hyperspectral images. Among its current capabilities, the prototype system can classify pixels by abstract surface type. The classification process employs neural network analysis of inputs which include pixel spectra and a variety of processed image metrics, including "image texture spectra" derived from fractal signatures computed for subimage tiles at each wavelength.
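
    A toy stand-in for the workbench's neural-network pixel classification, assuming scikit-learn; the synthetic spectra and surface classes below are invented for illustration.

```python
# Minimal sketch: a small neural network classifies synthetic pixel "spectra"
# into abstract surface types (illustrative stand-in data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
n_pixels, n_bands = 2000, 64

# Two synthetic surface classes with different mean spectra plus noise.
class_means = np.stack([np.linspace(0.2, 0.8, n_bands), np.linspace(0.8, 0.2, n_bands)])
labels = rng.integers(0, 2, size=n_pixels)
spectra = class_means[labels] + 0.1 * rng.normal(size=(n_pixels, n_bands))

X_train, X_test, y_train, y_test = train_test_split(spectra, labels, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("pixel classification accuracy:", clf.score(X_test, y_test))
```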

  2. Three-dimensional analysis and display of medical images

    International Nuclear Information System (INIS)

    Bajcsy, R.

    1985-01-01

    Until recently, the most common medical images were X-rays on film analyzed by an expert, usually a radiologist, who used, in addition to his/her visual perceptual abilities, knowledge obtained through medical studies, and experience. Today, however, with the advent of various imaging techniques, X-ray computerized axial tomographs (CAT), positron emission tomographs (PET), ultrasound tomographs, nuclear magnetic resonance tomographs (NMR), just to mention a few, the images are generated by computers and displayed on computer-controlled devices; so it is appropriate to think about more quantitative and perhaps automated ways of data analysis. Furthermore, since the data are generated by computer, it is only natural to take advantage of the computer for analysis purposes. In addition, using the computer, one can analyze more data and relate different modalities from the same subject, such as, for example, comparing the CAT images with PET images from the same subject. In the next section (The PET Scanner) the authors shall only briefly mention with appropriate references the modeling of the positron emission tomographic scanner, since this imaging technique is not as widely described in the literature as the CAT scanner. The modeling of the interpreter is not going to be mentioned, since it is a topic that by itself deserves a full paper; see, for example, Pizer [1981]. The thrust of this chapter is on modeling the organs that are being imaged and the matching techniques between the model and the data. The image data is from CAT and PET scans. Although the authors believe that their techniques are applicable to any organ of the human body, the examples are only from the brain

  3. Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model

    Science.gov (United States)

    Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato

    2018-02-01

    This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.
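
    A standard Gaussian Markov random field formulation is written below as a hedged sketch of the kind of model the study describes (the notation is generic and not necessarily the authors' exact prior); in this notation, hyper-parameter estimation roughly corresponds to minimizing F, the free-energy quantity the abstract refers to.

```latex
% Generic Gaussian MRF sketch: x = true image, y = observed (possibly averaged)
% data, alpha and beta = hyper-parameters, <i,j> = neighbouring pixel pairs.
\begin{align}
  p(\mathbf{x}\mid\mathbf{y},\alpha,\beta)
    &\propto \exp\!\left(-\frac{\beta}{2}\sum_i (y_i - x_i)^2
       -\frac{\alpha}{2}\sum_{\langle i,j\rangle}(x_i - x_j)^2\right),\\
  F(\alpha,\beta)
    &= -\ln \int \exp\!\left(-\frac{\beta}{2}\sum_i (y_i - x_i)^2
       -\frac{\alpha}{2}\sum_{\langle i,j\rangle}(x_i - x_j)^2\right)\,\mathrm{d}\mathbf{x}.
\end{align}
```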

  4. Development of image analysis software for quantification of viable cells in microchips.

    Science.gov (United States)

    Georg, Maximilian; Fernández-Cabada, Tamara; Bourguignon, Natalia; Karp, Paola; Peñaherrera, Ana B; Helguera, Gustavo; Lerner, Betiana; Pérez, Maximiliano S; Mertelsmann, Roland

    2018-01-01

    Over the past few years, image analysis has emerged as a powerful tool for analyzing various cell biology parameters in an unprecedented and highly specific manner. The amount of data that is generated requires automated methods for the processing and analysis of all the resulting information. The software available so far is suitable for the processing of fluorescence and phase contrast images, but often does not provide good results for transmission light microscopy images, due to the intrinsic variability of the acquisition technique itself (adjustment of brightness/contrast, for instance) and the variability between image acquisitions introduced by operators and equipment. In this contribution, we present an image processing software package, Python based image analysis for cell growth (PIACG), that is able to calculate, in a highly efficient way, the total area of the well occupied by cells with fusiform and rounded morphology in response to different concentrations of fetal bovine serum in microfluidic chips, from transmission light microscopy images.
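
    A simplified sketch of the area-quantification idea described above (not PIACG's actual code), assuming scikit-image; the synthetic frame stands in for a transmission-light image of a well.

```python
# Minimal sketch: threshold a transmission-light image and report the fraction
# of the well area occupied by cells (illustrative stand-in data).
import numpy as np
from skimage.filters import threshold_otsu, gaussian

rng = np.random.default_rng(5)
frame = rng.normal(0.4, 0.05, size=(512, 512))       # background of a synthetic well image
frame[200:320, 150:400] += 0.3                        # brighter "cell-covered" region
frame = gaussian(frame, sigma=2)                      # mimic optical blur

threshold = threshold_otsu(frame)
cell_mask = frame > threshold
occupied_percent = 100.0 * cell_mask.mean()
print(f"area occupied by cells: {occupied_percent:.1f}%")
```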

  5. Semiautomated analysis of optical coherence tomography crystalline lens images under simulated accommodation.

    Science.gov (United States)

    Kim, Eon; Ehrmann, Klaus; Uhlhorn, Stephen; Borja, David; Arrieta-Quintero, Esdras; Parel, Jean-Marie

    2011-05-01

    Presbyopia is an age-related, gradual loss of accommodation, mainly due to changes in the crystalline lens. As part of research efforts to understand and cure this condition, ex vivo, cross-sectional optical coherence tomography images of crystalline lenses were obtained by using the Ex-Vivo Accommodation Simulator (EVAS II) instrument and analyzed to extract their physical and optical properties. Various filters and edge detection methods were applied to isolate the edge contour. An ellipse is fitted to the lens outline to obtain a central reference point for transforming the pixel data into the analysis coordinate system. This allows for the fitting of a high-order equation to obtain a mathematical description of the edge contour, which obeys constraints of continuity as well as zero to infinite surface slopes from apex to equator. Geometrical parameters of the lens were determined for the lens images captured at different accommodative states. Various curve fitting functions were developed to mathematically describe the anterior and posterior surfaces of the lens. Their differences were evaluated and their suitability for extracting optical performance of the lens was assessed. The robustness of these algorithms was tested by analyzing the same images multiple times.

  6. 3D Image Analysis of Geomaterials using Confocal Microscopy

    Science.gov (United States)

    Mulukutla, G.; Proussevitch, A.; Sahagian, D.

    2009-05-01

    Confocal microscopy is one of the most significant advances in optical microscopy of the last century. It is widely used in biological sciences, but its application to geomaterials has lagged due to a number of technical problems. Potentially, the technique can perform non-invasive testing of a laser-illuminated sample that fluoresces, using a unique optical sectioning capability that rejects out-of-focus light reaching the confocal aperture. Fluorescence in geomaterials is commonly induced using epoxy doped with a fluorochrome that is impregnated into the sample to enable discrimination of various features such as void space or material boundaries. However, for many geomaterials, this method cannot be used because they do not naturally fluoresce and because epoxy cannot be impregnated into inaccessible parts of the sample due to lack of permeability. As a result, the confocal images of most geomaterials that have not been pre-processed with extensive sample preparation techniques are of poor quality and lack the image and edge contrast necessary to apply any commonly used segmentation techniques for a quantitative study of their features such as vesicularity, internal structure, etc. In our present work, we are developing a methodology to conduct a quantitative 3D analysis of images of geomaterials collected using a confocal microscope with a minimal amount of prior sample preparation and no addition of fluorescence. Two sample geomaterials, a volcanic melt sample and a crystal chip containing fluid inclusions, are used to assess the feasibility of the method. A step-by-step process of image analysis includes application of image filtration to enhance the edges or material interfaces and is based on two segmentation techniques: geodesic active contours and region competition. Both techniques have been applied extensively to the analysis of medical MRI images to segment anatomical structures. Preliminary analysis suggests that there is distortion in the
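
    One of the named segmentation techniques can be sketched with the morphological geodesic-active-contour implementation in scikit-image; the synthetic disk below stands in for a vesicle in a confocal slice, and the parameters are illustrative only, not the study's pipeline.

```python
# Minimal sketch: morphological geodesic active contours applied to a synthetic
# "vesicle" image (illustrative stand-in for a confocal slice).
import numpy as np
from skimage.segmentation import morphological_geodesic_active_contour, inverse_gaussian_gradient

# Synthetic slice: a bright disk on a dark background plus a little noise.
yy, xx = np.mgrid[0:128, 0:128]
image = ((xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2).astype(float)
image += 0.1 * np.random.default_rng(6).normal(size=image.shape)

gimage = inverse_gaussian_gradient(image)     # edge-stopping function

init = np.zeros(image.shape, dtype=np.int8)   # initial level set: a large box
init[8:-8, 8:-8] = 1

segmentation = morphological_geodesic_active_contour(gimage, 100, init_level_set=init,
                                                     smoothing=1, balloon=-1)
print("segmented area (pixels):", int(segmentation.sum()))
```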

  7. Semivariogram Analysis of Bone Images Implemented on FPGA Architectures.

    Science.gov (United States)

    Shirvaikar, Mukul; Lagadapati, Yamuna; Dong, Xuanliang

    2017-03-01

    Osteoporotic fractures are a major concern for the healthcare of elderly and female populations. Early diagnosis of patients with a high risk of osteoporotic fractures can be enhanced by introducing second-order statistical analysis of bone image data using techniques such as variogram analysis. Such analysis is computationally intensive, thereby creating an impediment for introduction into imaging machines found in common clinical settings. This paper investigates the fast implementation of the semivariogram algorithm, which has been proven to be effective in modeling bone strength, and should be of interest to readers in the areas of computer-aided diagnosis and quantitative image analysis. The semivariogram is a statistical measure of the spatial distribution of data, and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real-time implementation of the algorithm. The semivariance, γ(h), is defined as half the expected squared difference of pixel values between any two data locations separated by a lag distance h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O(n^2). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs also tend to operate at relatively modest clock rates measured in a few hundreds of megahertz. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. The current
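
    The definition above can be sketched directly in NumPy as a plain reference version of the empirical semivariogram for horizontal lags (the FPGA architectures in the paper accelerate essentially this computation); the synthetic window is invented for illustration.

```python
# Minimal sketch: gamma(h) = 0.5 * mean squared difference of pixel pairs at
# horizontal lag h, over a 2-D image window (illustrative stand-in data).
import numpy as np

def semivariogram_horizontal(image, max_lag):
    image = image.astype(float)
    gammas = []
    for h in range(1, max_lag + 1):
        diff = image[:, h:] - image[:, :-h]        # all pixel pairs at horizontal lag h
        gammas.append(0.5 * np.mean(diff ** 2))
    return np.array(gammas)

rng = np.random.default_rng(7)
window = rng.normal(size=(64, 64)).cumsum(axis=1)   # horizontally correlated synthetic texture
print(np.round(semivariogram_horizontal(window, max_lag=5), 3))
```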

  8. Imaging spectroscopic analysis at the Advanced Light Source

    International Nuclear Information System (INIS)

    MacDowell, A. A.; Warwick, T.; Anders, S.; Lamble, G.M.; Martin, M.C.; McKinney, W.R.; Padmore, H.A.

    1999-01-01

    One of the major advances at the high brightness third generation synchrotrons is the dramatic improvement of imaging capability. There is a large multi-disciplinary effort underway at the ALS to develop imaging X-ray, UV and infra-red spectroscopic analysis on a spatial scale from a few microns to 10 nm. These developments make use of light that varies in energy from 6 meV to 15 keV. Imaging and spectroscopy are finding applications in surface science, bulk materials analysis, semiconductor structures, particulate contaminants, magnetic thin films, biology and environmental science. This article is an overview and status report from the developers of some of these techniques at the ALS. The following table lists all the currently available microscopes at the ALS. This article will describe some of the microscopes and some of the early applications

  9. Visual Analytics Applied to Image Analysis : From Segmentation to Classification

    NARCIS (Netherlands)

    Rauber, Paulo

    2017-01-01

    Image analysis is the field of study concerned with extracting information from images. This field is immensely important for commercial and scientific applications, from identifying people in photographs to recognizing diseases in medical images. The goal behind the work presented in this thesis is

  10. Analysis of engineering drawings and raster map images

    CERN Document Server

    Henderson, Thomas C

    2013-01-01

    Presents up-to-date methods and algorithms for the automated analysis of engineering drawings and digital cartographic maps Discusses automatic engineering drawing and map analysis techniques Covers detailed accounts of the use of unsupervised segmentation algorithms to map images

  11. Deep machine learning provides state-of-the-art performance in image-based plant phenotyping.

    Science.gov (United States)

    Pound, Michael P; Atkinson, Jonathan A; Townsend, Alexandra J; Wilson, Michael H; Griffiths, Marcus; Jackson, Aaron S; Bulat, Adrian; Tzimiropoulos, Georgios; Wells, Darren M; Murchie, Erik H; Pridmore, Tony P; French, Andrew P

    2017-10-01

    In plant phenotyping, it has become important to be able to measure many features on large image sets in order to aid genetic discovery. The size of the datasets, now often captured robotically, often precludes manual inspection, hence the motivation for finding a fully automated approach. Deep learning is an emerging field that promises unparalleled results on many data analysis problems. Building on artificial neural networks, deep approaches have many more hidden layers in the network, and hence have greater discriminative and predictive power. We demonstrate the use of such approaches as part of a plant phenotyping pipeline. We show the success offered by such techniques when applied to the challenging problem of image-based plant phenotyping and demonstrate state-of-the-art results (>97% accuracy) for root and shoot feature identification and localization. We use fully automated trait identification using deep learning to identify quantitative trait loci in root architecture datasets. The majority (12 out of 14) of manually identified quantitative trait loci were also discovered using our automated approach based on deep learning detection to locate plant features. We have shown deep learning-based phenotyping to have very good detection and localization accuracy in validation and testing image sets. We have shown that such features can be used to derive meaningful biological traits, which in turn can be used in quantitative trait loci discovery pipelines. This process can be completely automated. We predict a paradigm shift in image-based phenotyping brought about by such deep learning approaches, given sufficient training sets. © The Authors 2017. Published by Oxford University Press.

  12. Operational Automatic Remote Sensing Image Understanding Systems: Beyond Geographic Object-Based and Object-Oriented Image Analysis (GEOBIA/GEOOIA. Part 2: Novel system Architecture, Information/Knowledge Representation, Algorithm Design and Implementation

    Directory of Open Access Journals (Sweden)

    Luigi Boschetti

    2012-09-01

    Full Text Available According to literature and despite their commercial success, state-of-the-art two-stage non-iterative geographic object-based image analysis (GEOBIA) systems and three-stage iterative geographic object-oriented image analysis (GEOOIA) systems, where GEOOIA/GEOBIA, remain affected by a lack of productivity, general consensus and research. To outperform the Quality Indexes of Operativeness (OQIs) of existing GEOBIA/GEOOIA systems in compliance with the Quality Assurance Framework for Earth Observation (QA4EO) guidelines, this methodological work is split into two parts. Based on an original multi-disciplinary Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis of the GEOBIA/GEOOIA approaches, the first part of this work promotes a shift of learning paradigm in the pre-attentive vision first stage of a remote sensing (RS) image understanding system (RS-IUS), from sub-symbolic statistical model-based (inductive) image segmentation to symbolic physical model-based (deductive) image preliminary classification capable of accomplishing image sub-symbolic segmentation and image symbolic pre-classification simultaneously. In the present second part of this work, a novel hybrid (combined deductive and inductive) RS-IUS architecture featuring a symbolic deductive pre-attentive vision first stage is proposed and discussed in terms of: (a) computational theory (system design), (b) information/knowledge representation, (c) algorithm design and (d) implementation. As proof-of-concept of a symbolic physical model-based pre-attentive vision first stage, the spectral knowledge-based, operational, near real-time, multi-sensor, multi-resolution, application-independent Satellite Image Automatic Mapper™ (SIAM™) is selected from existing literature. To the best of these authors’ knowledge, this is the first time a symbolic syntactic inference system, like SIAM™, is made available to the RS community for operational use in a RS-IUS pre-attentive vision first stage

  13. Lipase Production in Solid-State Fermentation Monitoring Biomass Growth of Aspergillus niger Using Digital Image Processing

    Science.gov (United States)

    Dutra, Julio C. V.; da Terzi, Selma C.; Bevilaqua, Juliana Vaz; Damaso, Mônica C. T.; Couri, Sônia; Langone, Marta A. P.; Senna, Lilian F.

    The aim of this study was to monitor the biomass growth of Aspergillus niger in solid-state fermentation (SSF) for lipase production using digital image processing technique. The strain A. niger 11T53A14 was cultivated in SSF using wheat bran as support, which was enriched with 0.91% (m/v) of ammonium sulfate. The addition of several vegetable oils (castor, soybean, olive, corn, and palm oils) was investigated to enhance lipase production. The maximum lipase activity was obtained using 2% (m/m) castor oil. In these conditions, the growth was evaluated each 24 h for 5 days by the glycosamine content analysis and digital image processing. Lipase activity was also determined. The results indicated that the digital image process technique can be used to monitor biomass growth in a SSF process and to correlate biomass growth and enzyme activity. In addition, the immobilized esterification lipase activity was determined for the butyl oleate synthesis, with and without 50% v/v hexane, resulting in 650 and 120 U/g, respectively. The enzyme was also used for transesterification of soybean oil and ethanol with maximum yield of 2.4%, after 30 min of reaction.

  14. Lipase production in solid-state fermentation monitoring biomass growth of aspergillus niger using digital image processing.

    Science.gov (United States)

    Dutra, Júlio C V; da C Terzi, Selma; Bevilaqua, Juliana Vaz; Damaso, Mônica C T; Couri, Sônia; Langone, Marta A P; Senna, Lilian F

    2008-03-01

    The aim of this study was to monitor the biomass growth of Aspergillus niger in solid-state fermentation (SSF) for lipase production using digital image processing technique. The strain A. niger 11T53A14 was cultivated in SSF using wheat bran as support, which was enriched with 0.91% (m/v) of ammonium sulfate. The addition of several vegetable oils (castor, soybean, olive, corn, and palm oils) was investigated to enhance lipase production. The maximum lipase activity was obtained using 2% (m/m) castor oil. In these conditions, the growth was evaluated each 24 h for 5 days by the glycosamine content analysis and digital image processing. Lipase activity was also determined. The results indicated that the digital image process technique can be used to monitor biomass growth in a SSF process and to correlate biomass growth and enzyme activity. In addition, the immobilized esterification lipase activity was determined for the butyl oleate synthesis, with and without 50% v/v hexane, resulting in 650 and 120 U/g, respectively. The enzyme was also used for transesterification of soybean oil and ethanol with maximum yield of 2.4%, after 30 min of reaction.

  15. GANALYZER: A TOOL FOR AUTOMATIC GALAXY IMAGE ANALYSIS

    International Nuclear Information System (INIS)

    Shamir, Lior

    2011-01-01

    We describe Ganalyzer, a model-based tool that can automatically analyze and classify galaxy images. Ganalyzer works by separating the galaxy pixels from the background pixels, finding the center and radius of the galaxy, generating the radial intensity plot, and then computing the slopes of the peaks detected in the radial intensity plot to measure the spirality of the galaxy and determine its morphological class. Unlike algorithms that are based on machine learning, Ganalyzer is based on measuring the spirality of the galaxy, a task that is difficult to perform manually, and in many cases can provide a more accurate analysis compared to manual observation. Ganalyzer is simple to use, and can be easily embedded into other image analysis applications. Another advantage is its speed, which allows it to analyze ∼10,000,000 galaxy images in five days using a standard modern desktop computer. These capabilities can make Ganalyzer a useful tool in analyzing large data sets of galaxy images collected by autonomous sky surveys such as SDSS, LSST, or DES. The software is available for free download at http://vfacstaff.ltu.edu/lshamir/downloads/ganalyzer, and the data used in the experiment are available at http://vfacstaff.ltu.edu/lshamir/downloads/ganalyzer/GalaxyImages.zip.
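
    The radial-intensity step described above can be sketched as follows, assuming NumPy; the synthetic exponential disc stands in for a centred galaxy image, and this is an illustration rather than Ganalyzer's own code.

```python
# Minimal sketch: average pixel intensity in thin annuli around the image centre
# to build a radial intensity profile (illustrative stand-in data).
import numpy as np

def radial_intensity_profile(image, n_bins=30):
    cy, cx = (np.asarray(image.shape) - 1) / 2.0
    yy, xx = np.indices(image.shape)
    r = np.hypot(yy - cy, xx - cx)
    edges = np.linspace(0, r.max(), n_bins + 1)
    profile = [image[(r >= lo) & (r < hi)].mean() for lo, hi in zip(edges[:-1], edges[1:])]
    return np.array(profile)

yy, xx = np.mgrid[0:101, 0:101]
galaxy = np.exp(-np.hypot(xx - 50, yy - 50) / 15.0)   # synthetic exponential disc
print(np.round(radial_intensity_profile(galaxy, n_bins=10), 3))
```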

  16. Ganalyzer: A Tool for Automatic Galaxy Image Analysis

    Science.gov (United States)

    Shamir, Lior

    2011-08-01

    We describe Ganalyzer, a model-based tool that can automatically analyze and classify galaxy images. Ganalyzer works by separating the galaxy pixels from the background pixels, finding the center and radius of the galaxy, generating the radial intensity plot, and then computing the slopes of the peaks detected in the radial intensity plot to measure the spirality of the galaxy and determine its morphological class. Unlike algorithms that are based on machine learning, Ganalyzer is based on measuring the spirality of the galaxy, a task that is difficult to perform manually, and in many cases can provide a more accurate analysis compared to manual observation. Ganalyzer is simple to use, and can be easily embedded into other image analysis applications. Another advantage is its speed, which allows it to analyze ~10,000,000 galaxy images in five days using a standard modern desktop computer. These capabilities can make Ganalyzer a useful tool in analyzing large data sets of galaxy images collected by autonomous sky surveys such as SDSS, LSST, or DES. The software is available for free download at http://vfacstaff.ltu.edu/lshamir/downloads/ganalyzer, and the data used in the experiment are available at http://vfacstaff.ltu.edu/lshamir/downloads/ganalyzer/GalaxyImages.zip.

  17. Fiji: an open-source platform for biological-image analysis.

    Science.gov (United States)

    Schindelin, Johannes; Arganda-Carreras, Ignacio; Frise, Erwin; Kaynig, Verena; Longair, Mark; Pietzsch, Tobias; Preibisch, Stephan; Rueden, Curtis; Saalfeld, Stephan; Schmid, Benjamin; Tinevez, Jean-Yves; White, Daniel James; Hartenstein, Volker; Eliceiri, Kevin; Tomancak, Pavel; Cardona, Albert

    2012-06-28

    Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities.

  18. Image Processing Tools for Improved Visualization and Analysis of Remotely Sensed Images for Agriculture and Forest Classifications

    OpenAIRE

    SINHA G. R.

    2017-01-01

    This paper suggests image processing tools for improved visualization and better analysis of remotely sensed images. Methods are already available in the literature for this purpose, but the most important limitation among them is a lack of robustness. We propose an optimal method for image enhancement using fuzzy-based approaches and a few optimization tools. The segmented images subsequently obtained after de-noising will be classified into distinct information and th...

  19. The Current State and Path Forward For Enterprise Image Viewing: HIMSS-SIIM Collaborative White Paper.

    Science.gov (United States)

    Roth, Christopher J; Lannum, Louis M; Dennison, Donald K; Towbin, Alexander J

    2016-10-01

    Clinical specialties have widely varied needs for diagnostic image interpretation, and clinical image and video image consumption. Enterprise viewers are being deployed as part of electronic health record implementations to present the broad spectrum of clinical imaging and multimedia content created in routine medical practice today. This white paper will describe the enterprise viewer use cases, drivers of recent growth, technical considerations, functionality differences between enterprise and specialty viewers, and likely future states. This white paper is aimed at CMIOs and CIOs interested in optimizing the image-enablement of their electronic health record or those who may be struggling with the many clinical image viewers their enterprises may employ today.

  20. Analysis of high-throughput plant image data with the information system IAP

    Directory of Open Access Journals (Sweden)

    Klukas Christian

    2012-06-01

    Full Text Available This work presents a sophisticated information system, the Integrated Analysis Platform (IAP), an approach supporting large-scale image analysis for different species and imaging systems. In its current form, IAP supports the investigation of Maize, Barley and Arabidopsis plants based on images obtained in different spectra.

  1. Insight into dynamic genome imaging: Canonical framework identification and high-throughput analysis.

    Science.gov (United States)

    Ronquist, Scott; Meixner, Walter; Rajapakse, Indika; Snyder, John

    2017-07-01

    The human genome is dynamic in structure, complicating researcher's attempts at fully understanding it. Time series "Fluorescent in situ Hybridization" (FISH) imaging has increased our ability to observe genome structure, but due to cell type and experimental variability this data is often noisy and difficult to analyze. Furthermore, computational analysis techniques are needed for homolog discrimination and canonical framework detection, in the case of time-series images. In this paper we introduce novel ideas for nucleus imaging analysis, present findings extracted using dynamic genome imaging, and propose an objective algorithm for high-throughput, time-series FISH imaging. While a canonical framework could not be detected beyond statistical significance in the analyzed dataset, a mathematical framework for detection has been outlined with extension to 3D image analysis. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Analysis of Magnetic Resonance Image Signal Fluctuations Acquired During MR-Guided Radiotherapy.

    Science.gov (United States)

    Breto, Adrian L; Padgett, Kyle R; Ford, John C; Kwon, Deukwoo; Chang, Channing; Fuss, Martin; Stoyanova, Radka; Mellon, Eric A

    2018-03-28

    Magnetic resonance-guided radiotherapy (MRgRT) is a new and evolving treatment modality that allows unprecedented visualization of the tumor and surrounding anatomy. MRgRT includes daily 3D magnetic resonance imaging (MRI) for setup and rapidly repeated near real-time MRI scans during treatment for target tracking. One of the more exciting potential benefits of MRgRT is the ability to analyze serial MRIs to monitor treatment response or predict outcomes. A typical radiation treatment (RT) over the span of 10-15 minutes on the MRIdian system (ViewRay, Cleveland, OH) yields thousands of "cine" images, each acquired in 250 ms. These unique data allow a glimpse of image intensity changes during RT delivery. In this report, we analyze cine images from a single-fraction RT of a glioblastoma patient on the ViewRay platform in order to characterize the dynamic signal changes occurring during treatment delivery. The individual frames in the cines were saved into DICOM format and read into an MIM image analysis platform (MIM Software, Cleveland, OH) as a time series. The three possible states of the three Cobalt-60 radiation sources (OFF, READY, and ON) were also recorded. An in-house Java plugin for MIM was created in order to perform principal component analysis (PCA) on each of the datasets. The analysis yielded a first principal component related to a monotonic signal increase over the course of the treatment fraction. We found several distortion patterns in the data that we postulate result from perturbation of the magnetic field by the moving metal parts of the platform while treatment was being administered. The largest variations were detected when all Cobalt-60 sources were OFF. During this phase of the treatment, the gantry and multi-leaf collimators (MLCs) are moving. Conversely, when all Cobalt-60 sources were in the ON position, the image signal fluctuations were minimal, relating to very little mechanical motion. At this phase, the gantry, the MLCs, and sources are fixed
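
    The PCA step described above can be sketched generically, assuming scikit-learn (this is a stand-in for the in-house MIM plugin, not its code); each cine frame is flattened into one observation, and the synthetic series below, a slow drift plus noise, stands in for the measured frames.

```python
# Minimal sketch: PCA over a cine frame time series, each frame treated as one
# flattened observation (illustrative stand-in data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
n_frames, h, w = 200, 32, 32
t = np.linspace(0, 1, n_frames)[:, None]

# Synthetic cine series: a slow monotonic intensity drift plus random fluctuations.
frames = 0.5 + 0.2 * t + 0.05 * rng.normal(size=(n_frames, h * w))

pca = PCA(n_components=3)
scores = pca.fit_transform(frames)     # per-frame scores of the leading components
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
print("correlation of PC1 with time:",
      round(float(np.corrcoef(scores[:, 0], t.ravel())[0, 1]), 3))
```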

  3. Image processing in radiology

    International Nuclear Information System (INIS)

    Dammann, F.

    2002-01-01

    Medical imaging processing and analysis methods have significantly improved during recent years and are now being increasingly used in clinical applications. Preprocessing algorithms are used to influence image contrast and noise. Three-dimensional visualization techniques including volume rendering and virtual endoscopy are increasingly available to evaluate sectional imaging data sets. Registration techniques have been developed to merge different examination modalities. Structures of interest can be extracted from the image data sets by various segmentation methods. Segmented structures are used for automated quantification analysis as well as for three-dimensional therapy planning, simulation and intervention guidance, including medical modelling, virtual reality environments, surgical robots and navigation systems. These newly developed methods require specialized skills for the production and postprocessing of radiological imaging data as well as new definitions of the roles of the traditional specialities. The aim of this article is to give an overview of the state-of-the-art of medical imaging processing methods, practical implications for the radiologist's daily work and future aspects. (orig.) [de]

  4. Container-Based Clinical Solutions for Portable and Reproducible Image Analysis.

    Science.gov (United States)

    Matelsky, Jordan; Kiar, Gregory; Johnson, Erik; Rivera, Corban; Toma, Michael; Gray-Roncal, William

    2018-05-08

    Medical imaging analysis depends on the reproducibility of complex computation. Linux containers enable the abstraction, installation, and configuration of environments so that software can be both distributed in self-contained images and used repeatably by tool consumers. While several initiatives in neuroimaging have adopted approaches for creating and sharing more reliable scientific methods and findings, Linux containers are not yet mainstream in clinical settings. We explore related technologies and their efficacy in this setting, highlight important shortcomings, demonstrate a simple use-case, and endorse the use of Linux containers for medical image analysis.

  5. THE INFLUENCE OF PRICE AND SERVICE QUALITY OF BRAND IMAGE AND ITS IMPACT ON CUSTOMER SATISFACTION GOJEK (STUDENTS STUDY ON A STATE UNIVERSITY OF JAKARTA

    Directory of Open Access Journals (Sweden)

    Mohammad Rizan

    2015-09-01

    Full Text Available The purposes of the research are to: test empirically the influence of price on brand image and its impact on Gojek customer satisfaction, test empirically the influence of service quality on brand image and its impact on Gojek customer satisfaction, test empirically the influence of price on Gojek customer satisfaction, test empirically the influence of service quality on Gojek customer satisfaction, and test empirically the influence of brand image on Gojek customer satisfaction. This study used confirmatory factor analysis. The research was conducted at the State University of Jakarta and used purposive sampling techniques, while the data collection technique used questionnaires, with SPSS and SEM LISREL for data processing. The results show a significant influence of price and service quality on brand image and its impact on customer satisfaction.

  6. REDOX IMAGING OF THE p53-DEPENDENT MITOCHONDRIAL REDOX STATE IN COLON CANCER EX VIVO

    Science.gov (United States)

    XU, HE N.; FENG, MIN; MOON, LILY; DOLLOFF, NATHAN; EL-DEIRY, WAFIK; LI, LIN Z.

    2015-01-01

    The mitochondrial redox state of colon cancer and its heterogeneity at the tissue level have not been previously reported. Nor has how p53 regulates mitochondrial respiration been measured at the (deep) tissue level, presumably due to the unavailability of technology with sufficient spatial resolution and tissue penetration depth. Our prior work demonstrated that the mitochondrial redox state and its intratumor heterogeneity are associated with cancer aggressiveness in human melanoma and breast cancer in mouse models, with the more metastatic tumors exhibiting localized regions of more oxidized redox state. Using the Chance redox scanner with an in-plane spatial resolution of 200 μm, we imaged the mitochondrial redox state of the wild-type p53 colon tumors (HCT116 p53 wt) and the p53-deleted colon tumors (HCT116 p53−/−) by collecting the fluorescence signals of nicotinamide adenine dinucleotide (NADH) and oxidized flavoproteins [Fp, including flavin adenine dinucleotide (FAD)] from the mouse xenografts snap-frozen at low temperature. Our results show that: (1) both tumor lines have a significant degree of intratumor heterogeneity of the redox state, typically exhibiting a distinct bi-modal distribution that either correlates with the spatial core–rim pattern or the “hot/cold” oxidation-reduction patches; (2) the p53−/− group is significantly more heterogeneous in the mitochondrial redox state and has a more oxidized tumor core compared to the p53 wt group when the tumor sizes of the two groups are matched; (3) the tumor size dependence of the redox indices (such as Fp and the Fp redox ratio) is significant in the p53−/− group, with the larger ones being more oxidized and more heterogeneous in their redox state, particularly more oxidized in the tumor central regions; (4) the H&E staining images of tumor sections grossly correlate with the redox images. The present work is the first to reveal at the submillimeter scale the intratumor heterogeneity pattern

  7. MO-FG-202-06: Improving the Performance of Gamma Analysis QA with Radiomics- Based Image Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wootton, L; Nyflot, M; Ford, E [University of Washington Department of Radiation Oncology, Seattle, WA (United States); Chaovalitwongse, A [University of Washington Department of Industrial and Systems Engineering, Seattle, Washington (United States); University of Washington Department of Radiology, Seattle, WA (United States); Li, N [University of Washington Department of Industrial and Systems Engineering, Seattle, Washington (United States)

    2016-06-15

    Purpose: The use of gamma analysis for IMRT quality assurance has well-known limitations. Traditionally, a simple thresholding technique is used to evaluate passing criteria. However, like any image, the gamma distribution is rich in information which thresholding mostly discards. We therefore propose a novel method of analyzing gamma images that uses quantitative image features borrowed from radiomics, with the goal of improving error detection. Methods: 368 gamma images were generated from 184 clinical IMRT beams. For each beam the dose to a phantom was measured with EPID dosimetry and compared to the TPS dose calculated with and without normally distributed (2 mm sigma) errors in MLC positions. The magnitude of 17 intensity-histogram and size-zone radiomic features was derived from each image. The features that differed most significantly between image sets were determined with ROC analysis. A linear machine-learning model was trained on these features to classify images as with or without errors on 180 gamma images. The model was then applied to an independent validation set of 188 additional gamma distributions, half with and half without errors. Results: The most significant features for detecting errors were histogram kurtosis (p=0.007) and three size-zone metrics (p<1e-6 for each). The size-zone metrics detected clusters of high gamma-value pixels under mispositioned MLCs. The model applied to the validation set had an AUC of 0.8, compared to 0.56 for traditional gamma analysis with the decision threshold restricted to 98% or less. Conclusion: A radiomics-based image analysis method was developed that is more effective in detecting errors than traditional gamma analysis. Though the pilot study here considers only MLC position errors, radiomics-based methods for other error types are being developed, which may provide better error detection and useful information on the source of detected errors. This work was partially supported by a grant from the Agency for
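
    To make the feature idea above concrete, the sketch below (an assumption, not the authors' implementation) computes histogram kurtosis and a crude size-zone surrogate from a 2D gamma map and scores a linear classifier by ROC AUC; the function names, the failure threshold of 1.0, and the two-feature set are hypothetical simplifications of the 17-feature pipeline described in the record.

      # Minimal sketch: radiomics-style features from 2D gamma maps plus a linear classifier.
      import numpy as np
      from scipy.stats import kurtosis
      from scipy.ndimage import label
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      def gamma_features(gamma_map, fail_threshold=1.0):
          """Histogram kurtosis plus a simple size-zone surrogate:
          the largest connected cluster of failing (gamma > threshold) pixels."""
          k = kurtosis(gamma_map.ravel())
          labeled, n = label(gamma_map > fail_threshold)
          largest_zone = max((np.sum(labeled == i) for i in range(1, n + 1)), default=0)
          return np.array([k, largest_zone], dtype=float)

      def train_and_score(train_maps, y_train, test_maps, y_test):
          """train_maps/test_maps: lists of 2D gamma arrays; y: 1 = error introduced, 0 = error-free."""
          X_train = np.stack([gamma_features(g) for g in train_maps])
          X_test = np.stack([gamma_features(g) for g in test_maps])
          clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
          return roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])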

  8. Infrared and visible image fusion based on robust principal component analysis and compressed sensing

    Science.gov (United States)

    Li, Jun; Song, Minghui; Peng, Yuanxi

    2018-03-01

    Current infrared and visible image fusion methods do not achieve adequate information extraction, i.e., they cannot extract the target information from infrared images while retaining the background information from visible images. Moreover, most of them have high complexity and are time-consuming. This paper proposes an efficient image fusion framework for infrared and visible images on the basis of robust principal component analysis (RPCA) and compressed sensing (CS). The novel framework consists of three phases. First, RPCA decomposition is applied to the infrared and visible images to obtain their sparse and low-rank components, which represent the salient features and background information of the images, respectively. Second, the sparse and low-rank coefficients are fused by different strategies. On the one hand, the measurements of the sparse coefficients are obtained by the random Gaussian matrix, and they are then fused by the standard deviation (SD) based fusion rule. Next, the fused sparse component is obtained by reconstructing the result of the fused measurement using the fast continuous linearized augmented Lagrangian algorithm (FCLALM). On the other hand, the low-rank coefficients are fused using the max-absolute rule. Subsequently, the fused image is superposed by the fused sparse and low-rank components. For comparison, several popular fusion algorithms are tested experimentally. By comparing the fused results subjectively and objectively, we find that the proposed framework can extract the infrared targets while retaining the background information in the visible images. Thus, it exhibits state-of-the-art performance in terms of both fusion effects and timeliness.
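
    The following sketch illustrates the decomposition-plus-fusion idea described above under stated simplifications: RPCA is implemented as principal component pursuit by inexact ALM, low-rank parts are fused by the max-absolute rule, and the compressed-sensing measurement and FCLALM reconstruction of the sparse parts is omitted (the sparse parts are also fused by max-absolute here). It is not the paper's pipeline.

      # Minimal sketch: RPCA decomposition (inexact ALM) and a simplified fusion of the components.
      import numpy as np

      def rpca(M, lam=None, tol=1e-7, max_iter=500):
          """Principal component pursuit: M = L (low rank) + S (sparse)."""
          m, n = M.shape
          lam = lam or 1.0 / np.sqrt(max(m, n))
          norm_M = np.linalg.norm(M)
          mu = 1.25 / np.linalg.norm(M, 2)
          rho = 1.5
          Y = np.zeros_like(M)
          L = np.zeros_like(M)
          S = np.zeros_like(M)
          for _ in range(max_iter):
              # singular-value thresholding for the low-rank component
              U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
              L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
              # soft thresholding for the sparse component
              T = M - L + Y / mu
              S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
              Z = M - L - S
              Y = Y + mu * Z
              mu = rho * mu
              if np.linalg.norm(Z) / norm_M < tol:
                  break
          return L, S

      def fuse(ir, vis):
          """Fuse co-registered infrared and visible images given as 2D float arrays."""
          L_ir, S_ir = rpca(ir)
          L_vis, S_vis = rpca(vis)
          L_f = np.where(np.abs(L_ir) >= np.abs(L_vis), L_ir, L_vis)  # max-absolute rule
          S_f = np.where(np.abs(S_ir) >= np.abs(S_vis), S_ir, S_vis)  # simplification of the CS/SD step
          return L_f + S_f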

  9. Galileo spacecraft solid-state imaging system view of Antarctica

    Science.gov (United States)

    1990-01-01

    Galileo spacecraft solid-state imaging system view of Antarctica was taken during its first encounter with the Earth. This color picture of Antarctica is part of a mosaic of pictures covering the entire polar continent showing the Ross Ice Shelf and its border with the sea and mountains poking through the ice near the McMurdo Station. From top to bottom, the frame looks across about half of Antarctica. View provided by the Jet Propulsion Laboratory (JPL) with alternate number P-37297.

  10. A Survey on Deep Learning in Medical Image Analysis

    NARCIS (Netherlands)

    Litjens, G.J.; Kooi, T.; Ehteshami Bejnordi, B.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; Laak, J.A.W.M. van der; Ginneken, B. van; Sanchez, C.I.

    2017-01-01

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared

  11. Development of motion image prediction method using principal component analysis

    International Nuclear Information System (INIS)

    Chhatkuli, Ritu Bhusal; Demachi, Kazuyuki; Kawai, Masaki; Sakakibara, Hiroshi; Kamiaka, Kazuma

    2012-01-01

    Respiratory motion can limit the accuracy of the irradiated area during lung cancer radiation therapy. Many methods have been introduced to minimize the impact of healthy tissue irradiation due to lung tumor motion. The purpose of this research is to develop an algorithm for the improvement of image guided radiation therapy by the prediction of motion images. We predict the motion images by using principal component analysis (PCA) and the multi-channel singular spectral analysis (MSSA) method. The images/movies were successfully predicted and verified using the developed algorithm. With the proposed prediction method it is possible to forecast the tumor images over the next breathing period. The implementation of this method in real time is believed to be significant for a higher level of tumor tracking, including the detection of sudden abdominal changes during radiation therapy. (author)
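
    As a rough illustration of PCA-based frame prediction (an assumption, not the authors' PCA/MSSA algorithm: a simple linear extrapolation stands in for the MSSA forecasting step, and the function and parameter names are hypothetical):

      # Minimal sketch: project a frame sequence onto principal components, extrapolate
      # each component score one step ahead, and reconstruct the predicted frame.
      import numpy as np
      from sklearn.decomposition import PCA

      def predict_next_frame(frames, n_components=5, fit_window=10):
          """frames: array of shape (T, H, W); returns the predicted frame (H, W)."""
          T, H, W = frames.shape
          X = frames.reshape(T, -1)
          pca = PCA(n_components=n_components)
          scores = pca.fit_transform(X)                      # (T, n_components)
          t = np.arange(T)
          next_scores = np.empty(n_components)
          for c in range(n_components):
              # linear extrapolation of each component's recent trajectory
              coef = np.polyfit(t[-fit_window:], scores[-fit_window:, c], deg=1)
              next_scores[c] = np.polyval(coef, T)
          predicted = pca.inverse_transform(next_scores[None, :])[0]
          return predicted.reshape(H, W)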

  12. Subsurface offset behaviour in velocity analysis with extended reflectivity images

    NARCIS (Netherlands)

    Mulder, W.A.

    2013-01-01

    Migration velocity analysis with the constant-density acoustic wave equation can be accomplished by the focusing of extended migration images, obtained by introducing a subsurface shift in the imaging condition. A reflector in a wrong velocity model will show up as a curve in the extended image. In

  13. Bacterial growth on surfaces: Automated image analysis for quantification of growth rate-related parameters

    DEFF Research Database (Denmark)

    Møller, S.; Sternberg, Claus; Poulsen, L. K.

    1995-01-01

    species-specific hybridizations with fluorescence-labelled ribosomal probes to estimate the single-cell concentration of RNA. By automated analysis of digitized images of stained cells, we determined four independent growth rate-related parameters: cellular RNA and DNA contents, cell volume......, and the frequency of dividing cells in a cell population. These parameters were used to compare physiological states of liquid-suspended and surfacegrowing Pseudomonas putida KT2442 in chemostat cultures. The major finding is that the correlation between substrate availability and cellular growth rate found...

  14. Fast and objective detection and analysis of structures in downhole images

    Science.gov (United States)

    Wedge, Daniel; Holden, Eun-Jung; Dentith, Mike; Spadaccini, Nick

    2017-09-01

    Downhole acoustic and optical televiewer images, and formation microimager (FMI) logs are important datasets for structural and geotechnical analyses for the mineral and petroleum industries. Within these data, dipping planar structures appear as sinusoids, often in incomplete form and in abundance. Their detection is a labour intensive and hence expensive task and as such is a significant bottleneck in data processing, as companies may have hundreds of kilometres of logs to process each year. We present an image analysis system that harnesses the power of automated image analysis and provides an interactive user interface to support the analysis of televiewer images by users with different objectives. Our algorithm rapidly produces repeatable, objective results. We have embedded it in an interactive workflow to complement geologists' intuition and experience in interpreting data, to improve efficiency and to assist, rather than replace, the geologist. The main contributions include a new image quality assessment technique for highlighting image areas most suited to automated structure detection and for detecting boundaries of geological zones, and a novel sinusoid detection algorithm for detecting and selecting sinusoids with given confidence levels. Further tools are provided for rapid analysis and further detection of structures, e.g. limited to specific orientations.
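
    For context, a dipping plane intersecting a borehole traces a sinusoid in the unwrapped image, so fitting depth(theta) = C + A*sin(theta) + B*cos(theta) to picked edge points is a linear least-squares problem. The sketch below assumes that model and uses a crude residual-based score; it is not the paper's detector, and the names are hypothetical.

      # Minimal sketch: least-squares fit of a sinusoid to edge points from an unwrapped borehole image.
      import numpy as np

      def fit_sinusoid(theta, depth):
          """theta in radians (0..2*pi around the hole), depth of picked edge points."""
          G = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
          (A, B, C), *_ = np.linalg.lstsq(G, depth, rcond=None)
          amplitude = np.hypot(A, B)          # relates to the structure's dip
          phase = np.arctan2(B, A)            # relates to the dip azimuth
          residual = depth - G @ np.array([A, B, C])
          confidence = 1.0 - residual.std() / (depth.std() + 1e-12)  # crude fit-quality score
          return dict(amplitude=amplitude, phase=phase, offset=C, confidence=confidence)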

  15. Multi spectral imaging analysis for meat spoilage discrimination

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Carstensen, Jens Michael; Papadopoulou, Olga

    classification methods: Naive Bayes Classifier as a reference model, Canonical Discriminant Analysis (CDA) and Support Vector Classification (SVC). As the final step, generalization of the models was performed using k-fold validation (k=10). Results showed that image analysis provided good discrimination of meat......In the present study, fresh beef fillets were purchased from a local butcher shop and stored aerobically and in modified atmosphere packaging (MAP, CO2 40%/O2 30%/N2 30%) at six different temperatures (0, 4, 8, 12, 16 and 20°C). Microbiological analysis in terms of total viable counts (TVC......) was performed in parallel with videometer image snapshots and sensory analysis. Odour and colour characteristics of meat were determined by a test panel and attributed into three pre-characterized quality classes, namely Fresh; Semi Fresh and Spoiled during the days of its shelf life. So far, different...

  16. Correlating two-photon excited fluorescence imaging of breast cancer cellular redox state with seahorse flux analysis of normalized cellular oxygen consumption

    Science.gov (United States)

    Hou, Jue; Wright, Heather J.; Chan, Nicole; Tran, Richard; Razorenova, Olga V.; Potma, Eric O.; Tromberg, Bruce J.

    2016-06-01

    Two-photon excited fluorescence (TPEF) imaging of the cellular cofactors nicotinamide adenine dinucleotide and oxidized flavin adenine dinucleotide is widely used to measure cellular metabolism, both in normal and pathological cells and tissues. When dual-wavelength excitation is used, ratiometric TPEF imaging of the intrinsic cofactor fluorescence provides a metabolic index of cells-the "optical redox ratio" (ORR). With increased interest in understanding and controlling cellular metabolism in cancer, there is a need to evaluate the performance of ORR in malignant cells. We compare TPEF metabolic imaging with seahorse flux analysis of cellular oxygen consumption in two different breast cancer cell lines (MCF-7 and MDA-MB-231). We monitor metabolic index in living cells under both normal culture conditions and, for MCF-7, in response to cell respiration inhibitors and uncouplers. We observe a significant correlation between the TPEF-derived ORR and the flux analyzer measurements (R=0.7901, p<0.001). Our results confirm that the ORR is a valid dynamic index of cell metabolism under a range of oxygen consumption conditions relevant for cancer imaging.
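
    A minimal sketch of the ratio-and-correlation idea described above (an illustration, not the study's processing chain: per-sample mean ORR is a simplification, and the function names are hypothetical):

      # Minimal sketch: per-pixel optical redox ratio FAD/(FAD+NADH) and a Pearson
      # correlation of per-sample mean ORR against flux-analyzer oxygen consumption.
      import numpy as np
      from scipy.stats import pearsonr

      def optical_redox_ratio(fad_img, nadh_img, eps=1e-9):
          return fad_img / (fad_img + nadh_img + eps)

      def correlate_with_ocr(fad_imgs, nadh_imgs, ocr_values):
          """fad_imgs/nadh_imgs: lists of co-registered 2D arrays, one pair per sample;
          ocr_values: per-sample oxygen consumption rates from the flux analyzer."""
          mean_orr = [optical_redox_ratio(f, n).mean() for f, n in zip(fad_imgs, nadh_imgs)]
          r, p = pearsonr(mean_orr, ocr_values)
          return r, p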

  17. Computer analysis of gallbladder ultrasonic images towards recognition of pathological lesions

    Science.gov (United States)

    Ogiela, M. R.; Bodzioch, S.

    2011-06-01

    This paper presents a new approach to gallbladder ultrasonic image processing and analysis towards automatic detection and interpretation of disease symptoms on processed US images. First, the paper presents a new heuristic method of filtering gallbladder contours from images. A major stage in this filtration is to segment and section off areas occupied by the organ. The paper provides an algorithm for the holistic extraction of gallbladder image contours, based on rank filtration as well as on the analysis of line profile sections of the tested organs. The second part concerns detecting the most important lesion symptoms of the gallbladder. Automating a process of diagnosis always comes down to developing algorithms used to analyze the object of such diagnosis and verify the occurrence of symptoms related to a given condition. The methodology of computer analysis of US gallbladder images presented here is clearly utilitarian in nature and, after standardisation, can be used as a technique for supporting the diagnostics of selected gallbladder disorders using the images of this organ.

  18. PIZZARO: Forensic analysis and restoration of image and video data

    Czech Academy of Sciences Publication Activity Database

    Kamenický, Jan; Bartoš, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozámský, Adam; Saic, Stanislav; Šroubek, Filip; Šorel, Michal; Zita, Aleš; Zitová, Barbara; Šíma, Z.; Švarc, P.; Hořínek, J.

    2016-01-01

    Vol. 264, No. 1 (2016), pp. 153-166 ISSN 0379-0738 R&D Projects: GA MV VG20102013064; GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords : Image forensic analysis * Image restoration * Image tampering detection * Image source identification Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.989, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/kamenicky-0459504.pdf

  19. Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases.

    Science.gov (United States)

    Janowczyk, Andrew; Madabhushi, Anant

    2016-01-01

    Deep learning (DL) is a representation learning approach ideally suited for image analysis challenges in digital pathology (DP). The variety of image analysis tasks in the context of DP includes detection and counting (e.g., mitotic events), segmentation (e.g., nuclei), and tissue classification (e.g., cancerous vs. non-cancerous). Unfortunately, issues with slide preparation, variations in staining and scanning across sites, and vendor platforms, as well as biological variance, such as the presentation of different grades of disease, make these image analysis tasks particularly challenging. Traditional approaches, wherein domain-specific cues are manually identified and developed into task-specific "handcrafted" features, can require extensive tuning to accommodate these variances. However, DL takes a more domain agnostic approach combining both feature discovery and implementation to maximally discriminate between the classes of interest. While DL approaches have performed well in a few DP related image analysis tasks, such as detection and tissue classification, the currently available open source tools and tutorials do not provide guidance on challenges such as (a) selecting appropriate magnification, (b) managing errors in annotations in the training (or learning) dataset, and (c) identifying a suitable training set containing information rich exemplars. These foundational concepts, which are needed to successfully translate the DL paradigm to DP tasks, are non-trivial for (i) DL experts with minimal digital histology experience, and (ii) DP and image processing experts with minimal DL experience, to derive on their own, thus meriting a dedicated tutorial. This paper investigates these concepts through seven unique DP tasks as use cases to elucidate techniques needed to produce comparable, and in many cases, superior to results from the state-of-the-art hand-crafted feature-based classification approaches. Specifically, in this tutorial on DL for DP image

  20. Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases

    Directory of Open Access Journals (Sweden)

    Andrew Janowczyk

    2016-01-01

    Background: Deep learning (DL) is a representation learning approach ideally suited for image analysis challenges in digital pathology (DP). The variety of image analysis tasks in the context of DP includes detection and counting (e.g., mitotic events), segmentation (e.g., nuclei), and tissue classification (e.g., cancerous vs. non-cancerous). Unfortunately, issues with slide preparation, variations in staining and scanning across sites, and vendor platforms, as well as biological variance, such as the presentation of different grades of disease, make these image analysis tasks particularly challenging. Traditional approaches, wherein domain-specific cues are manually identified and developed into task-specific "handcrafted" features, can require extensive tuning to accommodate these variances. However, DL takes a more domain agnostic approach combining both feature discovery and implementation to maximally discriminate between the classes of interest. While DL approaches have performed well in a few DP related image analysis tasks, such as detection and tissue classification, the currently available open source tools and tutorials do not provide guidance on challenges such as (a) selecting appropriate magnification, (b) managing errors in annotations in the training (or learning) dataset, and (c) identifying a suitable training set containing information rich exemplars. These foundational concepts, which are needed to successfully translate the DL paradigm to DP tasks, are non-trivial for (i) DL experts with minimal digital histology experience, and (ii) DP and image processing experts with minimal DL experience, to derive on their own, thus meriting a dedicated tutorial. Aims: This paper investigates these concepts through seven unique DP tasks as use cases to elucidate techniques needed to produce comparable, and in many cases, superior to results from the state-of-the-art hand-crafted feature-based classification approaches. Results: Specifically, in

  1. Image analysis in the evaluation of the physiological potential of maize seeds1

    Directory of Open Access Journals (Sweden)

    Crislaine Aparecida Gomes Pinto

    The Seed Analysis System (SAS) is used in the image analysis of seeds and seedlings, and has the potential for use in the control of seed quality. The aim of this research was to adapt the methodology of image analysis of maize seedlings by SAS, and to verify the potential use of this equipment in the evaluation of the physiological potential of maize seeds. Nine batches of two maize hybrids were characterised by means of the following tests and determinations: germination, first count, accelerated ageing, cold test, seedling emergence at 25 and 30ºC, and speed of emergence index. The image analysis experiment was carried out in a factorial scheme of 9 batches x 4 methods of analysis of the seedling images (with and without the use of NWF as substrate, and with and without manual correction of the images). Images of the seedlings were evaluated using the average lengths of the coleoptile, roots and seedlings, and by the automatic and manual indices of vigour, uniformity and growth produced by the SAS. Use of blue NWF affects the initial development of maize seedlings. The physiological potential of maize seeds can be evaluated in seedlings which are seeded on white paper towels at a temperature of 25 °C and evaluated on the third day. Image analysis should be carried out with the SAS software using automatic calibration and with no correction of the seedling images. Use of SAS equipment for the analysis of seedling images is a potential tool in evaluating the physiological quality of maize seeds.

  2. Extending a prototype knowledge- and object-based image analysis model to coarser spatial resolution imagery: an example from the Missouri River

    Science.gov (United States)

    Strong, Laurence L.

    2012-01-01

    A prototype knowledge- and object-based image analysis model was developed to inventory and map least tern and piping plover habitat on the Missouri River, USA. The model has been used to inventory the state of sandbars annually for 4 segments of the Missouri River since 2006 using QuickBird imagery. Interpretation of the state of sandbars is difficult when images for the segment are acquired at different river stages and different states of vegetation phenology and canopy cover. Concurrent QuickBird and RapidEye images were classified using the model and the spatial correspondence of classes in the land cover and sandbar maps were analysed for the spatial extent of the images and at nest locations for both bird species. Omission and commission errors were low for unvegetated land cover classes used for nesting by both bird species and for land cover types with continuous vegetation cover and water. Errors were larger for land cover classes characterized by a mixture of sand and vegetation. Sandbar classification decisions are made using information on land cover class proportions and disagreement between sandbar classes was resolved using fuzzy membership possibilities. Regression analysis of area for a paired sample of 47 sandbars indicated an average positive bias, 1.15 ha, for RapidEye that did not vary with sandbar size. RapidEye has potential to reduce temporal uncertainty about least tern and piping plover habitat but would not be suitable for mapping sandbar erosion, and characterization of sandbar shapes or vegetation patches at fine spatial resolution.

  4. Resting-state theta band connectivity and graph analysis in generalized social anxiety disorder.

    Science.gov (United States)

    Xing, Mengqi; Tadayonnejad, Reza; MacNamara, Annmarie; Ajilore, Olusola; DiGangi, Julia; Phan, K Luan; Leow, Alex; Klumpp, Heide

    2017-01-01

    Functional magnetic resonance imaging (fMRI) resting-state studies show generalized social anxiety disorder (gSAD) is associated with disturbances in networks involved in emotion regulation, emotion processing, and perceptual functions, suggesting a network framework is integral to elucidating the pathophysiology of gSAD. However, fMRI does not measure the fast dynamic interconnections of functional networks. Therefore, we examined whole-brain functional connectomics with electroencephalogram (EEG) during resting-state. Resting-state EEG data was recorded for 32 patients with gSAD and 32 demographically-matched healthy controls (HC). Sensor-level connectivity analysis was applied on EEG data by using Weighted Phase Lag Index (WPLI) and graph analysis based on WPLI was used to determine clustering coefficient and characteristic path length to estimate local integration and global segregation of networks. WPLI results showed increased oscillatory midline coherence in the theta frequency band indicating higher connectivity in the gSAD relative to HC group during rest. Additionally, WPLI values positively correlated with state anxiety levels within the gSAD group but not the HC group. Our graph theory based connectomics analysis demonstrated increased clustering coefficient and decreased characteristic path length in theta-based whole brain functional organization in subjects with gSAD compared to HC. Theta-dependent interconnectivity was associated with state anxiety in gSAD and an increase in information processing efficiency in gSAD (compared to controls). Results may represent enhanced baseline self-focused attention, which is consistent with cognitive models of gSAD and fMRI studies implicating emotion dysregulation and disturbances in task negative networks (e.g., default mode network) in gSAD.
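
    A minimal sketch of the two analysis steps named above, under simplifying assumptions (a single band-passed epoch, the analytic-signal cross-spectrum for WPLI, and networkx for the graph metrics); it is not the study's pipeline, and the helper names are hypothetical.

      # Minimal sketch: sensor-level WPLI followed by weighted clustering coefficient
      # and characteristic path length.
      import numpy as np
      import networkx as nx
      from scipy.signal import hilbert

      def wpli_matrix(data):
          """data: (n_channels, n_samples) theta-band filtered EEG."""
          analytic = hilbert(data, axis=1)
          n = data.shape[0]
          W = np.zeros((n, n))
          for i in range(n):
              for j in range(i + 1, n):
                  im = np.imag(analytic[i] * np.conj(analytic[j]))  # imaginary part of the cross-spectrum
                  W[i, j] = W[j, i] = np.abs(im.mean()) / (np.abs(im).mean() + 1e-12)
          return W

      def graph_metrics(W):
          G = nx.from_numpy_array(W)
          clustering = nx.average_clustering(G, weight="weight")
          # use 1/weight as a distance so stronger connections are "shorter"
          for u, v, d in G.edges(data=True):
              d["distance"] = 1.0 / (d["weight"] + 1e-12)
          path_length = nx.average_shortest_path_length(G, weight="distance")
          return clustering, path_length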

  5. Adaptive multiresolution Hermite-Binomial filters for image edge and texture analysis

    NARCIS (Netherlands)

    Gu, Y.H.; Katsaggelos, A.K.

    1994-01-01

    A new multiresolution image analysis approach using adaptive Hermite-Binomial filters is presented in this paper. According to the local image structural and textural properties, the analysis filter kernels are made adaptive both in their scales and orders. Applications of such an adaptive filtering

  6. Digital image sequence processing, compression, and analysis

    CERN Document Server

    Reed, Todd R

    2004-01-01

    Introduction (Todd R. Reed); Content-Based Image Sequence Representation (Pedro M. Q. Aguiar, Radu S. Jasinschi, José M. F. Moura, and Charnchai Pluempitiwiriyawej); The Computation of Motion (Christoph Stiller, Sören Kammel, Jan Horn, and Thao Dang); Motion Analysis and Displacement Estimation in the Frequency Domain (Luca Lucchese and Guido Maria Cortelazzo); Quality of Service Assessment in New Generation Wireless Video Communications (Gaetano Giunta); Error Concealment in Digital Video (Francesco G.B. De Natale); Image Sequence Restoration: A Wider Perspective (Anil Kokaram); Video Summarization (Cuneyt M. Taskiran and Edward

  7. Practical considerations of image analysis and quantification of signal transduction IHC staining.

    Science.gov (United States)

    Grunkin, Michael; Raundahl, Jakob; Foged, Niels T

    2011-01-01

    The dramatic increase in computer processing power, in combination with the availability of high-quality digital cameras during the last 10 years, has fertilized the ground for quantitative microscopy based on digital image analysis. With the present introduction of robust scanners for whole slide imaging in both research and routine, the benefits of automation and objectivity in the analysis of tissue sections will be even more obvious. For in situ studies of signal transduction, the combination of tissue microarrays, immunohistochemistry, digital imaging, and quantitative image analysis will be central operations. However, immunohistochemistry is a multistep procedure with many technical pitfalls leading to intra- and interlaboratory variability of its outcome. The resulting variations in staining intensity and disruption of original morphology are an extra challenge for the image analysis software, which therefore preferably should be dedicated to the detection and quantification of histomorphometrical end points.

  8. Transfer function analysis of positron-emitting tracer imaging system (PETIS) data

    International Nuclear Information System (INIS)

    Keutgen, N.; Matsuhashi, S.; Mizuniwa, C.; Ito, T.; Fujimura, T.; Ishioka, N.S.; Watanabe, S.; Sekine, T.; Uchida, H.; Hashimoto, S.

    2002-01-01

    Quantitative analysis of the two-dimensional image data obtained with the positron-emitting tracer imaging system (PETIS) for plant physiology has been carried out using a transfer function analysis method. While a cut leaf base of Chinese chive (Allium tuberosum Rottler) or a cut stem of soybean (Glycine max L.) was immersed in an aqueous solution containing the [¹⁸F]F⁻ ion or [¹³N]NO₃⁻ ion, tracer images of the leaf of Chinese chive and the trifoliate of soybean were recorded with PETIS. From the time sequence of images, the tracer transfer function was estimated, from which the speed of tracer transport and the fraction moved between specified image positions were deduced

  9. Multifractal analysis of three-dimensional histogram from color images

    International Nuclear Information System (INIS)

    Chauveau, Julien; Rousseau, David; Richard, Paul; Chapeau-Blondeau, Francois

    2010-01-01

    Natural images, especially color or multicomponent images, are complex information-carrying signals. To contribute to the characterization of this complexity, we investigate the possibility of multiscale organization in the colorimetric structure of natural images. This is realized by means of a multifractal analysis applied to the three-dimensional histogram from natural color images. The observed behaviors are confronted to those of reference models with known multifractal properties. We use for this purpose synthetic random images with trivial monofractal behavior, and multidimensional multiplicative cascades known for their actual multifractal behavior. The behaviors observed on natural images exhibit similarities with those of the multifractal multiplicative cascades and display the signature of elaborate multiscale organizations stemming from the histograms of natural color images. This type of characterization of colorimetric properties can be helpful to various tasks of digital image processing, as for instance modeling, classification, indexing.
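
    As an illustration of multifractal characterization of a colour histogram (an assumption, not the paper's estimator; the bin widths, moment orders and function names are placeholders), generalized dimensions D_q can be estimated by box counting over the 3-D RGB histogram at several bin sizes:

      # Minimal sketch: generalized dimensions D_q of an RGB image's 3-D colour histogram.
      import numpy as np

      def generalized_dimensions(rgb, qs=(-2, 0, 2), n_bins_list=(8, 16, 32, 64)):
          """rgb: (H, W, 3) uint8 image. Returns {q: D_q estimate}."""
          pixels = rgb.reshape(-1, 3).astype(float)
          log_eps, log_Z = [], {q: [] for q in qs}
          for n_bins in n_bins_list:
              hist, _ = np.histogramdd(pixels, bins=(n_bins,) * 3, range=[(0, 256)] * 3)
              p = hist[hist > 0] / hist.sum()
              eps = 256.0 / n_bins                    # box (bin) size in colour space
              log_eps.append(np.log(eps))
              for q in qs:
                  if q == 1:
                      log_Z[q].append(np.sum(p * np.log(p)))   # sum p*ln(p): its slope vs ln(eps) is D_1
                  else:
                      log_Z[q].append(np.log(np.sum(p ** q)))  # partition function Z(q, eps)
          D = {}
          for q in qs:
              slope = np.polyfit(log_eps, log_Z[q], 1)[0]      # tau(q) from Z ~ eps^tau(q)
              D[q] = slope if q == 1 else slope / (q - 1)
          return D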

  10. Low frequency fluctuations in resting-state functional magnetic resonance imaging and their applications

    International Nuclear Information System (INIS)

    Küblböck, M.

    2015-01-01

    Over the course of the last two decades, functional magnetic resonance imaging (fMRI) has emerged as a widely used, highly accepted and very popular method for the assessment of neuronal activity in the human brain. It is a completely non-invasive imaging technique with high temporal resolution, which relies on the measurement of local differences in magnetic susceptibility between oxygenated and deoxygenated blood. Therefore, fMRI can be regarded as an indirect measure of neuronal activity via measurement of localised changes in cerebral blood flow and cerebral oxygen consumption. Maps of neuronal activity are calculated from fMRI data acquired either in the presence of an explicit task (task-based fMRI) or in absence of a task (resting-state fMRI). While in task-based fMRI task-specific patterns of brain activity are subject to research, resting-state fMRI reveals fundamental networks of intrinsic brain activity. These networks are characterized by low-frequency oscillations in the power spectrum of resting-state fMRI data. In the present work, we first introduce the physical principles and the technical background that allow us to measure these changes in blood oxygenation, followed by an introduction to the blood oxygenation level dependent (BOLD) effect and to analysis methods for both task-based and resting-state fMRI data. We also analyse the temporal signal-to-noise ratio (tSNR) of a novel 2D-EPI sequence, which allows the experimenter to acquire several slices simultaneously in order to assess the optimal parameter settings for this sequence at 3T. We then proceed to investigate the temporal properties of measures for the amplitude of low-frequency oscillations in resting-state fMRI data, which are regarded as potential biomarkers for a wide range of mental diseases in various clinical studies and show the high stability and robustness of these data, which are important prerequisites for application as a biomarker as well as their dependency on head motion

  11. Facial Image Analysis in Anthropology: A Review

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2011-01-01

    Vol. 49, No. 2 (2011), pp. 141-153 ISSN 0323-1119 Institutional support: RVO:67985807 Keywords : face * computer-assisted methods * template matching * geometric morphometrics * robust image analysis Subject RIV: IN - Informatics, Computer Science

  12. Measurement of the quantum superposition state of an imaging ensemble of photons prepared in orbital angular momentum states using a phase-diversity method

    International Nuclear Information System (INIS)

    Uribe-Patarroyo, Nestor; Alvarez-Herrero, Alberto; Belenguer, Tomas

    2010-01-01

    We propose the use of a phase-diversity technique to estimate the orbital angular momentum (OAM) superposition state of an ensemble of photons that passes through an optical system, proceeding from an extended object. The phase-diversity technique permits the estimation of the optical transfer function (OTF) of an imaging optical system. As the OTF is derived directly from the wave-front characteristics of the observed light, we redefine the phase-diversity technique in terms of a superposition of OAM states. We test this new technique experimentally and find coherent results among different tests, which gives us confidence in the estimation of the photon ensemble state. We find that this technique not only allows us to estimate the square of the amplitude of each OAM state, but also the relative phases among all states, thus providing complete information about the quantum state of the photons. This technique could be used to measure the OAM spectrum of extended objects in astronomy or in an optical communication scheme using OAM states. In this sense, the use of extended images could lead to new techniques in which the communication is further multiplexed along the field.

  13. Flexibility analysis in adolescent idiopathic scoliosis on side-bending images using the EOS imaging system.

    Science.gov (United States)

    Hirsch, C; Ilharreborde, B; Mazda, K

    2016-06-01

    Analysis of preoperative flexibility in adolescent idiopathic scoliosis (AIS) is essential to classify the curves, determine their structurality, and select the fusion levels during preoperative planning. Side-bending x-rays are the gold standard for the analysis of preoperative flexibility. The objective of this study was to examine the feasibility and performance of side-bending images taken in the standing position using the EOS imaging system. All patients who underwent preoperative assessment between April 2012 and January 2013 for AIS were prospectively included in the study. The work-up included standing AP and lateral EOS x-rays of the spine, standard side-bending x-rays in the supine position, and standing bending x-rays in the EOS booth. The irradiation dose was measured for each of the tests. Two-dimensional reducibility of the Cobb angle was measured on both types of bending x-rays. The results were based on the 50 patients in the study. No significant difference was demonstrated for reducibility of the Cobb angle between the standing side-bending images with the EOS imaging system and those in the supine position for all types of Lenke deformation. The irradiation dose was five times lower during the EOS bending imaging. The standing side-bending images in the EOS device provided the same results as the supine images, with five times less irradiation. They should therefore be used in clinical routine.

  14. Pathological diagnosis of bladder cancer by image analysis of hypericin induced fluorescence cystoscopic images

    Science.gov (United States)

    Kah, James C. Y.; Olivo, Malini C.; Lau, Weber K. O.; Sheppard, Colin J. R.

    2005-08-01

    Photodynamic diagnosis of bladder carcinoma based on hypericin fluorescence cystoscopy has been shown to have a higher degree of sensitivity for the detection of flat bladder carcinoma compared to white light cystoscopy. This study investigates the potential of photosensitizer hypericin-induced fluorescence for performing non-invasive optical biopsy to grade bladder cancer in vivo using fluorescence cystoscopic image analysis, without surgical resection for tissue biopsy. The correlation between tissue fluorescence and histopathology of diseased tissue was explored, and a diagnostic algorithm based on fluorescence image analysis was developed to classify the bladder cancer without surgical resection for tissue biopsy. Preliminary results suggest a correlation between tissue fluorescence and bladder cancer grade. Combining both the red-to-blue and red-to-green intensity ratios into a 2D scatter plot yields an average sensitivity and specificity of around 70% and 85%, respectively, for pathological cancer grading of the three different grades of bladder cancer. Therefore, the diagnostic algorithm based on colorimetric intensity ratio analysis of hypericin fluorescence cystoscopic images developed in this preliminary study shows promising potential to optically diagnose and grade bladder cancer in vivo.

  15. Product code optimization for determinate state LDPC decoding in robust image transmission.

    Science.gov (United States)

    Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G

    2006-08-01

    We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.

  16. Cutting-edge analysis of extracellular microparticles using ImageStream(X) imaging flow cytometry.

    Science.gov (United States)

    Headland, Sarah E; Jones, Hefin R; D'Sa, Adelina S V; Perretti, Mauro; Norling, Lucy V

    2014-06-10

    Interest in extracellular vesicle biology has exploded in the past decade, since these microstructures seem endowed with multiple roles, from blood coagulation to inter-cellular communication in pathophysiology. In order for microparticle research to evolve as a preclinical and clinical tool, accurate quantification of microparticle levels is a fundamental requirement, but their size and the complexity of sample fluids present major technical challenges. Flow cytometry is commonly used, but suffers from low sensitivity and accuracy. Use of Amnis ImageStream(X) Mk II imaging flow cytometer afforded accurate analysis of calibration beads ranging from 1 μm to 20 nm; and microparticles, which could be observed and quantified in whole blood, platelet-rich and platelet-free plasma and in leukocyte supernatants. Another advantage was the minimal sample preparation and volume required. Use of this high throughput analyzer allowed simultaneous phenotypic definition of the parent cells and offspring microparticles along with real time microparticle generation kinetics. With the current paucity of reliable techniques for the analysis of microparticles, we propose that the ImageStream(X) could be used effectively to advance this scientific field.

  17. Multi-Resolution Wavelet-Transformed Image Analysis of Histological Sections of Breast Carcinomas

    Directory of Open Access Journals (Sweden)

    Hae-Gil Hwang

    2005-01-01

    Multi-resolution images of histological sections of breast cancer tissue were analyzed using texture features of Haar- and Daubechies-transform wavelets. Tissue samples analyzed were from ductal regions of the breast and included benign ductal hyperplasia, ductal carcinoma in situ (DCIS), and invasive ductal carcinoma (CA). To assess the correlation between computerized image analysis and visual analysis by a pathologist, we created a two-step classification system based on feature extraction and classification. In the feature extraction step, we extracted texture features from wavelet-transformed images at 10× magnification. In the classification step, we applied two types of classifiers to the extracted features, namely a statistics-based multivariate (discriminant) analysis and a neural network. Using features from second-level Haar transform wavelet images in combination with discriminant analysis, we obtained classification accuracies of 96.67 and 87.78% for the training and testing set (90 images each), respectively. We conclude that the best classifier of carcinomas in histological sections of breast tissue is based on the texture features from the second-level Haar transform wavelet images used in a discriminant function.
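
    A minimal sketch of the wavelet-texture-plus-discriminant-analysis workflow described above, under stated assumptions: PyWavelets (pywt) and scikit-learn are available, and the energy/entropy features per second-level Haar subband are illustrative rather than the study's exact feature set.

      # Minimal sketch: second-level Haar wavelet texture features classified with LDA.
      import numpy as np
      import pywt
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def haar_texture_features(gray, level=2):
          """gray: 2D float array (one image or tile). Returns a 1-D feature vector."""
          coeffs = pywt.wavedec2(gray, wavelet="haar", level=level)
          feats = []
          for detail in coeffs[1:]:           # (cH, cV, cD) subband tuples per level
              for band in detail:
                  energy = np.mean(band ** 2)
                  p = np.abs(band).ravel()
                  p = p / (p.sum() + 1e-12)
                  entropy = -np.sum(p * np.log2(p + 1e-12))
                  feats.extend([energy, entropy])
          return np.array(feats)

      # Usage idea (hypothetical variable names):
      #   X = np.stack([haar_texture_features(img) for img in images]); y = labels
      #   clf = LinearDiscriminantAnalysis().fit(X_train, y_train); accuracy = clf.score(X_test, y_test)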

  18. Mathematical methods in time series analysis and digital image processing

    CERN Document Server

    Kurths, J; Maass, P; Timmer, J

    2008-01-01

    The aim of this volume is to bring together research directions in theoretical signal and imaging processing developed rather independently in electrical engineering, theoretical physics, mathematics and the computer sciences. In particular, mathematically justified algorithms and methods, their mathematical analysis, and the connections between methods from time series analysis and image processing are reviewed. An interdisciplinary comparison of these methods, drawing upon common sets of test problems from medicine and geophysical/environmental sciences, is also addressed. This volume coherently summarizes work carried out in the field of theoretical signal and image processing. It focuses on non-linear and non-parametric models for time series as well as on adaptive methods in image processing.

  19. Complete chromogen separation and analysis in double immunohistochemical stains using Photoshop-based image analysis.

    Science.gov (United States)

    Lehr, H A; van der Loos, C M; Teeling, P; Gown, A M

    1999-01-01

    Simultaneous detection of two different antigens on paraffin-embedded and frozen tissues can be accomplished by double immunohistochemistry. However, many double chromogen systems suffer from signal overlap, precluding definite signal quantification. To separate and quantitatively analyze the different chromogens, we imported images into a Macintosh computer using a CCD camera attached to a diagnostic microscope and used Photoshop software for the recognition, selection, and separation of colors. We show here that Photoshop-based image analysis allows complete separation of chromogens not only on the basis of their RGB spectral characteristics, but also on the basis of information concerning saturation, hue, and luminosity intrinsic to the digitized images. We demonstrate that Photoshop-based image analysis provides superior results compared to color separation using bandpass filters. Quantification of the individual chromogens is then provided by Photoshop using the Histogram command, which supplies information on the luminosity (corresponding to gray levels of black-and-white images) and on the number of pixels as a measure of spatial distribution. (J Histochem Cytochem 47:119-125, 1999)
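
    The record describes an interactive Photoshop workflow; the sketch below is a scripted analogue (an assumption, not the authors' procedure) that gates stained pixels by hue and saturation using scikit-image and reports pixel counts and mean luminosity per chromogen. The hue ranges are placeholders to be tuned per stain.

      # Minimal sketch: colour separation of two chromogens by HSV gating.
      import numpy as np
      from skimage.color import rgb2hsv

      def separate_chromogens(rgb, hue_ranges):
          """rgb: (H, W, 3) float image in [0, 1]. hue_ranges: dict name -> (h_lo, h_hi) in [0, 1].
          Returns per-chromogen pixel count and mean 'luminosity' (HSV value channel)."""
          hsv = rgb2hsv(rgb)
          hue, sat, val = hsv[..., 0], hsv[..., 1], hsv[..., 2]
          stained = sat > 0.15                       # ignore unstained / white background
          results = {}
          for name, (lo, hi) in hue_ranges.items():
              mask = stained & (hue >= lo) & (hue <= hi)
              results[name] = dict(pixels=int(mask.sum()),
                                   mean_luminosity=float(val[mask].mean()) if mask.any() else None)
          return results

      # e.g. separate_chromogens(img, {"brown": (0.05, 0.12), "red": (0.95, 1.00)})  # illustrative ranges only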

  20. Simultaneous dual-radionuclide myocardial perfusion imaging with a solid-state dedicated cardiac camera

    International Nuclear Information System (INIS)

    Ben-Haim, Simona; Kacperski, Krzysztof; Hain, Sharon; Van Gramberg, Dean; Hutton, Brian F.; Erlandsson, Kjell; Waddington, Wendy A.; Ell, Peter J.; Sharir, Tali; Roth, Nathaniel; Berman, Daniel S.

    2010-01-01

    We compared simultaneous dual-radionuclide (DR) stress and rest myocardial perfusion imaging (MPI) with a novel solid-state cardiac camera and a conventional SPECT camera with separate stress and rest acquisitions. Of 27 consecutive patients recruited, 24 (64.5±11.8 years of age, 16 men) were injected with 74 MBq of ²⁰¹Tl (rest) and 250 MBq ⁹⁹ᵐTc-MIBI (stress). Conventional MPI acquisition times for stress and rest are 21 min and 16 min, respectively. Rest ²⁰¹Tl for 6 min and simultaneous DR 15-min list mode gated scans were performed on a D-SPECT cardiac scanner. In 11 patients DR D-SPECT was performed first and in 13 patients conventional stress ⁹⁹ᵐTc-MIBI SPECT imaging was performed followed by DR D-SPECT. The DR D-SPECT data were processed using a spill-over and scatter correction method. DR D-SPECT images were compared with rest ²⁰¹Tl D-SPECT and with conventional SPECT images by visual analysis employing the 17-segment model and a five-point scale (0 normal, 4 absent) to calculate the summed stress and rest scores. Image quality was assessed on a four-point scale (1 poor, 4 very good) and gut activity was assessed on a four-point scale (0 none, 3 high). Conventional MPI studies were abnormal at stress in 17 patients and at rest in 9 patients. In the 17 abnormal stress studies DR D-SPECT MPI showed 113 abnormal segments and conventional MPI showed 93 abnormal segments. In the nine abnormal rest studies DR D-SPECT showed 45 abnormal segments and conventional MPI showed 48 abnormal segments. The summed stress and rest scores on conventional SPECT and DR D-SPECT were highly correlated (r=0.9790 and 0.9694, respectively). The summed scores of rest ²⁰¹Tl D-SPECT and DR D-SPECT were also highly correlated (r=0.9968, p ²⁰¹Tl D-SPECT acquisition. (orig.)

  1. Towards a framework for agent-based image analysis of remote-sensing data.

    Science.gov (United States)

    Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera

    2015-04-03

    Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: Especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects' properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear a high potential of transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which is exhaustively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adopt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA).

  2. Image-potential states on the metallic (111) surface of bismuth

    International Nuclear Information System (INIS)

    Muntwiler, Matthias; Zhu, X-Y

    2008-01-01

    An extended series (up to n=6, in quantum beats) of image-potential states (IPS) is observed in time-resolved two-photon photoelectron (TR-2PPE) spectroscopy of the Bi(111) surface. Although mainly located in the vacuum, these states probe various properties of the electronic structure of the surface as reflected in their energetics and dynamics. Based on the observation of IPS a projected gap in the surface normal direction is inferred in the region from 3.57 to 4.27 eV above the Fermi level. Despite this band gap, the lifetimes of the IPS are shorter than on comparable metals, which is an indication of the metallic character of the Bi(111) surface.

  3. Direct identification of pure penicillium species using image analysis

    DEFF Research Database (Denmark)

    Dørge, Thorsten Carlheim; Carstensen, Jens Michael; Frisvad, Jens Christian

    2000-01-01

    This paper presents a method for direct identification of fungal species solely by means of digital image analysis of colonies as seen after growth on a standard medium. The method described is completely automated and hence objective once digital images of the reference fungi have been establish...

  4. Error analysis of filtering operations in pixel-duplicated images of diabetic retinopathy

    Science.gov (United States)

    Mehrubeoglu, Mehrube; McLauchlan, Lifford

    2010-08-01

    In this paper, diabetic retinopathy is chosen as a sample target image to demonstrate the effectiveness of image enlargement through pixel duplication in identifying regions of interest. Pixel duplication is presented as a simpler alternative to data interpolation techniques for detecting small structures in the images. A comparative analysis is performed on different image processing schemes applied to both original and pixel-duplicated images. Structures of interest are detected and classification parameters optimized for minimum false positive detection in the original and enlarged retinal pictures. The error analysis demonstrates the advantages as well as the shortcomings of pixel duplication in image enhancement when spatial averaging operations (smoothing filters) are also applied.
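
    A minimal sketch of the pixel-duplication enlargement discussed above (the smoothing step and filter size are illustrative, not the paper's exact scheme):

      # Minimal sketch: integer-factor enlargement by pixel duplication, followed by smoothing.
      import numpy as np
      from scipy.ndimage import uniform_filter

      def duplicate_pixels(img, factor=2):
          """Enlarge a 2D image by integer pixel duplication (no interpolation)."""
          return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

      # Smoothing after duplication averages across duplicated blocks, which can blur
      # the small structures the enlargement was meant to expose:
      #   enlarged = duplicate_pixels(retina, 2)
      #   smoothed = uniform_filter(enlarged, size=3)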

  5. Automated discrimination of lower and higher grade gliomas based on histopathological image analysis

    Directory of Open Access Journals (Sweden)

    Hojjat Seyed Mousavi

    2015-01-01

    and MVP regions, (2) overall classification accuracy into LGG and HGG categories, and (3) receiver operating characteristic curves which can facilitate a desirable trade-off between HGG detection and false-alarm rates. Conclusion: The proposed method demonstrates fairly high accuracy and compares favorably against best-known alternatives such as the state-of-the-art WND-CHARM feature set provided by NIH combined with a powerful support vector machine classifier. Our results reveal that the proposed method can be beneficial to a clinician in effectively separating histopathology slides into LGG and HGG categories, particularly where the analysis of a large number of slides is needed. Our work also reveals that MVP regions are much harder to detect than Pseudopalisading Necrosis, and increasing the accuracy of automated image processing for MVP detection emerges as a significant future research direction.

  6. Imaging the Where and When of Tic Generation and Resting State Networks in Adult Tourette Patients

    Directory of Open Access Journals (Sweden)

    Irene eNeuner

    2014-05-01

    Introduction: Tourette syndrome (TS) is a neuropsychiatric disorder with the core phenomenon of tics, whose origin and temporal pattern are unclear. We investigated the When and Where of tic generation and resting state networks (RSNs) via functional magnetic resonance imaging (fMRI). Methods: Tic-related activity and the underlying resting state networks in adult TS were studied within one fMRI session. Participants were instructed to lie in the scanner and to let tics occur freely. Tic onset times, as determined by video observation, were used as regressors and added to preceding time-bins of one second duration each to detect prior activation. RSNs were identified by independent component analysis (ICA) and correlated to disease severity by means of dual regression. Results: Two seconds before a tic, the supplementary motor area (SMA), ventral primary motor cortex, primary sensorimotor cortex and parietal operculum exhibited activation; one second before a tic, the anterior cingulate, putamen, insula, amygdala, cerebellum and the extrastriatal-visual cortex exhibited activation; with tic onset, the thalamus, central operculum, primary motor and somatosensory cortices exhibited activation. Analysis of resting state data resulted in 21 components including the so-called default-mode network. Network strength in the SMA regions of two premotor ICA maps that were also active prior to tic occurrence correlated significantly with disease severity according to the Yale Global Tic Severity Scale (YGTSS) scores. Discussion: We demonstrate that the temporal pattern of tic generation follows the cortico-striato-thalamo-cortical circuit, and that cortical structures precede subcortical activation. The analysis of spontaneous fluctuations highlights the role of cortical premotor structures. Our study corroborates the notion of TS as a network disorder in which abnormal resting state network activity might contribute to the generation of tics in SMA.

  7. Image analysis of ocular fundus for retinopathy characterization

    Energy Technology Data Exchange (ETDEWEB)

    Ushizima, Daniela; Cuadros, Jorge

    2010-02-05

    Automated analysis of ocular fundus images is a common procedure in countries such as England, including both non-emergency examination and retinal screening of patients with diabetes mellitus. This involves digital image capture and transmission of the images to a digital reading center for evaluation and treatment referral. In collaboration with the Optometry Department, University of California, Berkeley, we have tested computer vision algorithms to segment vessels and lesions in ground-truth data (DRIVE database) and in hundreds of non-macula-centered, non-uniformly illuminated views of the eye fundus from the EyePACS program. Methods under investigation involve mathematical morphology (Figure 1) for image enhancement and pattern matching. Recently, we have focused on more efficient techniques to model the ocular fundus vasculature (Figure 2), using deformable contours. Preliminary results show accurate segmentation of vessels and a high true-positive rate for microaneurysms.
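
    A minimal sketch of morphological vessel enhancement of the kind referred to above (a generic approach, not the project's exact pipeline; scikit-image is assumed, and the structuring-element radius is a placeholder):

      # Minimal sketch: enhance dark retinal vessels on the green channel with a black top-hat, then threshold.
      import numpy as np
      from skimage.morphology import black_tophat, disk
      from skimage.filters import threshold_otsu

      def segment_vessels(rgb, selem_radius=8):
          """rgb: (H, W, 3) fundus image. Returns a boolean vessel mask."""
          green = rgb[..., 1].astype(float)
          # vessels are darker than the background; the black top-hat highlights them
          enhanced = black_tophat(green, disk(selem_radius))
          return enhanced > threshold_otsu(enhanced)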

  8. Material Property Estimation for Direct Detection of DNAPL using Integrated Ground-Penetrating Radar Velocity, Imaging and Attribute Analysis

    Energy Technology Data Exchange (ETDEWEB)

    John H. Bradford; Stephen Holbrook; Scott B. Smithson

    2004-12-09

    The focus of this project is direct detection of DNAPLs, specifically chlorinated solvents, via material property estimation from multi-fold surface ground-penetrating radar (GPR) data. We combine state-of-the-art GPR processing methodology with quantitative attribute analysis and material property estimation to determine the location and extent of residual and/or pooled DNAPL in both the vadose and saturated zones. An important byproduct of our research is state-of-the-art imaging, which allows us to pinpoint attribute anomalies, characterize stratigraphy, identify fracture zones, and locate buried objects.

  9. NDVI and Panchromatic Image Correlation Using Texture Analysis

    Science.gov (United States)

    2010-03-01

    [Extraction residue from the source document: Figure 5, "Spectral reflectance of vegetation and soil from 0.4 to 1.1 µm (from Perry...)"; a note that these spectral differences should help the classification methods classify kelp; and a reference fragment: "(1988). Image processing software for imaging spectrometry analysis. Remote Sensing of Environment, 24: 201–210. Perry, C., & Lautenschlager, L. F".]

  10. Microstructural evolution of uranium dioxide following compression creep tests: An EBSD and image analysis study

    Energy Technology Data Exchange (ETDEWEB)

    Iltis, X., E-mail: xaviere.iltis@cea.fr [CEA, DEN, DEC, Cadarache, 13108 Saint-Paul-Lez-Durance (France); Gey, N. [Laboratoire d’Etude des Microstructures et de Mécanique des Matériaux (LEM3), CNRS UMR 7239, Université de Lorraine, Ile du Saulcy, 57045 Metz Cedex 1 (France); Cagna, C. [CEA, DEN, DEC, Cadarache, 13108 Saint-Paul-Lez-Durance (France); Hazotte, A. [Laboratoire d’Etude des Microstructures et de Mécanique des Matériaux (LEM3), CNRS UMR 7239, Université de Lorraine, Ile du Saulcy, 57045 Metz Cedex 1 (France); Sornay, Ph. [CEA, DEN, DEC, Cadarache, 13108 Saint-Paul-Lez-Durance (France)

    2015-01-15

    Highlights: • Image analysis and EBSD are performed on creep-tested UO₂ pellets. • Development of intergranular voids with increasing strain is quantified. • EBSD evidences a sub-structuration process within the grains and quantifies it. • Creep mechanisms are discussed on the basis of these results. - Abstract: Sintered UO₂ pellets with relatively large grains (∼25 μm) are tested at 1500 °C under a compressive stress of 50 MPa, at different deformation levels up to 12%. Electron backscatter diffraction (EBSD) is used to follow the evolution, with deformation, of grains (size, shape, orientation) and sub-grains. Analyses of SEM images are performed to characterize the emergence of a population of micron-sized voids. For the considered microstructure and test conditions, the results show that the deformation process of UO₂ globally corresponds to grain boundary sliding, partly accommodated by dislocational creep within the grains, leading to a highly sub-structured state.

  11. Proposal for a remote sensing trophic state index based upon Thematic Mapper/Landsat images

    Directory of Open Access Journals (Sweden)

    Evlyn Márcia Leão de Moraes Novo

    2013-12-01

    Full Text Available This work proposes a trophic state index based on the remote sensing retrieval of chlorophyll-α concentration. For that, in situ Bidirectional Reflectance Factor (BRF) data acquired in the Ibitinga reservoir were resampled to match Landsat/TM simulated spectral bands (TM_sim bands) and used to run linear correlations with concurrent measurements of chlorophyll-α concentration. Monte Carlo simulation was then applied to select the most suitable model relating chlorophyll-α concentration and simulated TM/Landsat reflectance. The TM4_sim/TM3_sim ratio provided the best model, with an R² value of 0.78. The model was then inverted to create a look-up table (LUT) relating TM4_sim/TM3_sim ratio intervals to chlorophyll-α concentration trophic state classes covering the entire range measured in the reservoir. Atmospherically corrected Landsat TM images converted to surface reflectance were then used to generate a TM4/TM3 ratio image. The ratio image frequency distribution encompassed the range of TM4_sim/TM3_sim ratios, indicating agreement between in situ and satellite data and supporting the use of satellite data to map the chlorophyll-α concentration trophic state distribution in the reservoir. Based on that, the LUT was applied to a Landsat/TM ratio image to map the spatial distribution of chlorophyll-α trophic state classes in the Ibitinga reservoir. Despite the stochastic selection of the TM4_sim/TM3_sim ratio as the best input variable for modeling the chlorophyll-α concentration, it has a physical basis: a high concentration of phytoplankton increases the reflectance in the near-infrared (TM4) and decreases the reflectance in the red (TM3). The band ratio, therefore, enhances the relationship between chlorophyll-α concentration and remotely sensed reflectance.
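
    The band-ratio/look-up-table idea can be sketched in a few lines of NumPy; the class boundaries and class names below are placeholders, not the values derived in the paper.

```python
import numpy as np

# Hypothetical class boundaries on the TM4/TM3 ratio; the paper derives its
# LUT from an inverted chlorophyll-a model, so these numbers are placeholders.
RATIO_EDGES = np.array([0.8, 1.2, 1.8, 2.5])          # class boundaries
CLASS_NAMES = ["oligotrophic", "mesotrophic", "eutrophic",
               "supereutrophic", "hypereutrophic"]

def trophic_state_map(tm3, tm4, eps=1e-6):
    """Map surface-reflectance bands TM3 (red) and TM4 (NIR) to
    trophic-state classes via a band-ratio look-up table."""
    ratio = tm4 / (tm3 + eps)                          # TM4/TM3 ratio image
    classes = np.digitize(ratio, RATIO_EDGES)          # 0 .. len(RATIO_EDGES)
    return ratio, classes

# Hypothetical usage on random reflectance values
tm3 = np.random.uniform(0.01, 0.10, (100, 100))
tm4 = np.random.uniform(0.01, 0.20, (100, 100))
ratio, cls = trophic_state_map(tm3, tm4)
for k, name in enumerate(CLASS_NAMES):
    print(name, np.sum(cls == k))
```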

  12. A framework for noise-power spectrum analysis of multidimensional images

    International Nuclear Information System (INIS)

    Siewerdsen, J.H.; Cunningham, I.A.; Jaffray, D.A.

    2002-01-01

    A methodological framework for experimental analysis of the noise-power spectrum (NPS) of multidimensional images is presented that employs well-known properties of the n-dimensional (nD) Fourier transform. The approach is generalized to n dimensions, reducing to familiar cases for n=1 (e.g., time series) and n=2 (e.g., projection radiography) and demonstrated experimentally for two cases in which n=3 (viz., using an active matrix flat-panel imager for x-ray fluoroscopy and cone-beam CT to form three-dimensional (3D) images in spatiotemporal and volumetric domains, respectively). The relationship between fully nD NPS analysis and various techniques for analyzing a 'central slice' of the NPS is formulated in a manner that is directly applicable to measured nD data, highlights the effects of correlation, and renders issues of NPS normalization transparent. The spatiotemporal NPS of fluoroscopic images is analyzed under varying conditions of temporal correlation (image lag) to investigate the degree to which the NPS is reduced by such correlation. For first-frame image lag of ∼5-8 %, the NPS is reduced by ∼20% compared to the lag-free case. A simple model is presented that results in an approximate rule of thumb for computing the effect of image lag on NPS under conditions of spatiotemporal separability. The volumetric NPS of cone-beam CT images is analyzed under varying conditions of spatial correlation, controlled by adjustment of the reconstruction filter. The volumetric NPS is found to be highly asymmetric, exhibiting a ramp characteristic in transverse planes (typical of filtered back-projection) and a band-limited characteristic in the longitudinal direction (resulting from low-pass characteristics of the imager). Such asymmetry could have implications regarding the detectability of structures visualized in transverse versus sagittal or coronal planes. In all cases, appreciation of the full dimensionality of the image data is essential to obtaining
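
    A minimal NumPy sketch of the nD NPS estimator implied by this framework is shown below; it assumes a stack of repeated noise-only realizations and uses synthetic white noise, for which the mean NPS should approach the variance times the product of the sampling intervals.

```python
import numpy as np

def noise_power_spectrum(realizations, pixel_spacing):
    """Estimate the n-dimensional NPS from repeated noise-only realizations.

    realizations  : array of shape (R, N1, ..., Nn) - R noise images of
                    identical size
    pixel_spacing : sequence of n sampling intervals (e.g., mm, s)
    NPS(f) = (prod(spacing) / prod(N)) * <|FFT(I - mean(I))|^2>
    """
    realizations = np.asarray(realizations, dtype=float)
    data = realizations - realizations.mean(axis=0)      # remove fixed pattern
    axes = tuple(range(1, data.ndim))
    ft = np.fft.fftn(data, axes=axes)
    scale = np.prod(pixel_spacing) / np.prod(data.shape[1:])
    return scale * np.mean(np.abs(ft) ** 2, axis=0)

# Hypothetical usage: 20 realizations of 3D (x, y, t) white noise
stack = np.random.normal(0.0, 1.0, size=(20, 64, 64, 32))
nps = noise_power_spectrum(stack, pixel_spacing=(0.2, 0.2, 0.033))  # mm, mm, s
print(nps.shape, nps.mean())   # mean approaches sigma^2 * prod(spacing)
```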

  13. Satellite image analysis and a hybrid ESSS/ANN model to forecast solar irradiance in the tropics

    International Nuclear Information System (INIS)

    Dong, Zibo; Yang, Dazhi; Reindl, Thomas; Walsh, Wilfred M.

    2014-01-01

    Highlights: • Satellite image analysis is performed and the cloud cover index is classified using self-organizing maps (SOM). • The ESSS model is used to forecast the cloud cover index. • Solar irradiance is estimated using a multi-layer perceptron (MLP). • The proposed model shows better accuracy than the other investigated models. - Abstract: We forecast hourly solar irradiance time series using satellite image analysis and a hybrid exponential smoothing state space (ESSS) model together with artificial neural networks (ANN). Since cloud cover is the major factor affecting solar irradiance, cloud detection and classification are crucial for forecasting solar irradiance. Geostationary satellite images provide cloud information, allowing a cloud cover index to be derived and analysed using self-organizing maps (SOM). Owing to the stochastic nature of cloud generation in tropical regions, the ESSS model is used to forecast the cloud cover index. Among the ANN models considered, we favour the multi-layer perceptron (MLP) to derive solar irradiance from the cloud cover index. This hybrid model has been used to forecast hourly solar irradiance in Singapore, and the technique is found to outperform traditional forecasting models.
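
    A sketch of the two-stage idea is given below, substituting statsmodels' Holt-Winters exponential smoothing for the paper's ESSS formulation and scikit-learn's MLPRegressor for the ANN; the hourly cloud-cover index and irradiance series are synthetic.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic hourly cloud-cover index (0 = clear, 1 = overcast) and irradiance
hours = np.arange(500)
cloud = np.clip(0.5 + 0.3 * np.sin(2 * np.pi * hours / 24)
                + 0.1 * rng.standard_normal(500), 0, 1)
irradiance = 900 * (1 - 0.75 * cloud) + 30 * rng.standard_normal(500)

# Stage 1: forecast the cloud index with exponential smoothing (standing in
# for the exponential smoothing state space model of the paper)
es = ExponentialSmoothing(cloud[:-24], trend=None,
                          seasonal="add", seasonal_periods=24).fit()
cloud_forecast = es.forecast(24)

# Stage 2: MLP maps cloud index -> irradiance, trained on historical pairs
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(cloud[:-24].reshape(-1, 1),
                                       irradiance[:-24])
irr_forecast = mlp.predict(np.asarray(cloud_forecast).reshape(-1, 1))
print(irr_forecast[:6].round(1))
```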

  14. Current state of molecular imaging research

    International Nuclear Information System (INIS)

    Grimm, J.; Wunder, A.

    2005-01-01

    The recent years have seen significant advances in both molecular biology, allowing the identification of genes and pathways related to disease, and imaging technologies that allow for improved spatial and temporal resolution, enhanced sensitivity, better depth penetration, improved image processing, and beneficial combinations of different imaging modalities. These advances have led to a paradigm shift in the scope of diagnostic imaging. The traditional role of radiological diagnostic imaging is to define gross anatomy and structure in order to detect pathological abnormalities. Available contrast agents are mostly non-specific and can be used to image physiological processes such as changes in blood volume, flow, and perfusion but not to demonstrate pathological alterations at molecular levels. However, alterations at the anatomical-morphological level are relatively late manifestations of underlying molecular changes. Using molecular probes or markers that bind specifically to molecular targets allows for the non-invasive visualization and quantitation of biological processes such as gene expression, apoptosis, or angiogenesis at the molecular level within intact living organisms. This rapidly evolving, multidisciplinary approach, referred to as molecular imaging, promises to enable early diagnosis, can provide improved classification of stage and severity of disease, an objective assessment of treatment efficacy, and a reliable prognosis. Furthermore, molecular imaging is an important tool for the evaluation of physiological and pathophysiological processes, and for the development of new therapies. This article comprises a review of current technologies of molecular imaging, describes the development of contrast agents and various imaging modalities, new applications in specific disease models, and potential future developments. (orig.)

  15. Label-free cell-cycle analysis by high-throughput quantitative phase time-stretch imaging flow cytometry

    Science.gov (United States)

    Mok, Aaron T. Y.; Lee, Kelvin C. M.; Wong, Kenneth K. Y.; Tsia, Kevin K.

    2018-02-01

    Biophysical properties of cells could complement and correlate with biochemical markers to characterize a multitude of cellular states. Changes in cell size, dry mass and subcellular morphology, for instance, are relevant to cell-cycle progression, which is prevalently evaluated by DNA-targeted fluorescence measurements. Quantitative-phase microscopy (QPM) is among the effective biophysical phenotyping tools that can quantify cell sizes and sub-cellular dry mass density distributions of single cells at high spatial resolution. However, limited camera frame rate, and thus imaging throughput, makes QPM incompatible with high-throughput flow cytometry - a gold standard in multiparametric cell-based assays. Here we present a high-throughput approach for label-free analysis of the cell cycle based on quantitative-phase time-stretch imaging flow cytometry at a throughput of > 10,000 cells/s. Our time-stretch QPM system enables sub-cellular resolution even at high speed, allowing us to extract a multitude (at least 24) of single-cell biophysical phenotypes (from both amplitude and phase images). Those phenotypes can be combined to track cell-cycle progression based on a t-distributed stochastic neighbor embedding (t-SNE) algorithm. Using multivariate analysis of variance (MANOVA) discriminant analysis, cell-cycle phases can also be predicted label-free with high accuracy at >90% in the G1 and G2 phases, and >80% in the S phase. We anticipate that high-throughput label-free cell-cycle characterization could open new approaches for large-scale single-cell analysis, bringing new mechanistic insights into complex biological processes including disease pathogenesis.
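
    The phenotype-embedding and phase-prediction steps can be sketched as follows, with synthetic data standing in for the 24 biophysical phenotypes and linear discriminant analysis standing in for the MANOVA-based discriminant step.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic stand-in for the single-cell biophysical phenotypes: 900 cells x
# 24 features, with three classes playing the role of G1, S and G2 phases.
n_per_class, n_features = 300, 24
X = np.vstack([rng.normal(loc=mu, scale=1.0, size=(n_per_class, n_features))
               for mu in (0.0, 0.6, 1.2)])
y = np.repeat(["G1", "S", "G2"], n_per_class)

# 2-D t-SNE embedding for visualising cell-cycle progression
embedding = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(X)
print("embedding shape:", embedding.shape)

# Linear discriminant analysis as a simple stand-in for the MANOVA-based
# discriminant step: predict phase labels from the raw phenotypes
lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=5)
print("cross-validated phase accuracy: %.2f" % acc.mean())
```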

  16. Sensory analysis for magnetic resonance-image analysis: Using human perception and cognition to segment and assess the interior of potatoes

    DEFF Research Database (Denmark)

    Martens, Harald; Thybo, A.K.; Andersen, H.J.

    2002-01-01

    were developed by the panel during preliminary training sessions, and consisted in definitions of various biological compartments inside the tubers. The results from the sensory and the computer-assisted image analyses of the shape and interior structure of the tubers were related to the experimental...... able to detect differences between varieties as well as storage times. The sensory image analysis gave better discrimination between varieties than the computer-assisted image analysis presently employed, and was easier to interpret. Some sensory descriptors could be predicted from the computer......-assisted image analysis. The present results offer new information about using sensory analysis of MR-images not only for food science but also for medical applications for analysing MR and X-ray images and for training of personnel, such as radiologists and radiographers. (C) 2002 Elsevier Science Ltd....

  17. Quantitative Analysis in Nuclear Medicine Imaging

    CERN Document Server

    2006-01-01

    This book provides a review of image analysis techniques as they are applied in the field of diagnostic and therapeutic nuclear medicine. Driven in part by the remarkable increase in computing power and its ready and inexpensive availability, this is a relatively new yet rapidly expanding field. Likewise, although the use of radionuclides for diagnosis and therapy has origins dating back almost to the discovery of natural radioactivity itself, radionuclide therapy and, in particular, targeted radionuclide therapy has only recently emerged as a promising approach for therapy of cancer and, to a lesser extent, other diseases. An effort has, therefore, been made to place the reviews provided in this book in a broader context. This is reflected by the inclusion of introductory chapters that address basic principles of nuclear medicine imaging, followed by an overview of issues that are closely related to quantitative nuclear imaging and its potential role in diagnostic and therapeutic applications. ...

  18. Bipolar mood state reflected in cortico-amygdala resting state connectivity: A cohort and longitudinal study.

    Science.gov (United States)

    Brady, Roscoe O; Margolis, Allison; Masters, Grace A; Keshavan, Matcheri; Öngür, Dost

    2017-08-01

    Using resting-state functional magnetic resonance imaging (rsfMRI), we previously compared cohorts of bipolar I subjects in a manic state to those in a euthymic state to identify mood state-specific patterns of cortico-amygdala connectivity. Our results suggested that mania is reflected in the disruption of emotion regulation circuits. We sought to replicate this finding in a group of subjects with bipolar disorder imaged longitudinally across states of mania and euthymia. METHODS: We divided our subjects into three groups: 26 subjects imaged in a manic state, 21 subjects imaged in a euthymic state, and 10 subjects imaged longitudinally across both mood states. We measured differences in amygdala connectivity between the mania and euthymia cohorts. We then used these regions of altered connectivity to examine connectivity in the longitudinal bipolar group using a within-subjects design. Our findings in the mania vs euthymia cohort comparison were replicated in the longitudinal analysis. Bipolar mania was differentiated from euthymia by decreased connectivity between the amygdala and pre-genual anterior cingulate cortex. Mania was also characterized by increased connectivity between the amygdala and the supplementary motor area, a region normally anti-correlated with the amygdala in emotion regulation tasks. Stringent controls for movement effects limited the number of subjects in the longitudinal sample. In this first report of rsfMRI conducted longitudinally across mood states, we find that previously observed between-group differences in amygdala connectivity are also found longitudinally within subjects. These results suggest that resting state cortico-amygdala connectivity is a biomarker of mood state in bipolar disorder. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Quantitative analysis of γ-oryzanol content in cold pressed rice bran oil by TLC-image analysis method

    OpenAIRE

    Sakunpak, Apirak; Suksaeree, Jirapornchai; Monton, Chaowalit; Pathompak, Pathamaporn; Kraisintu, Krisana

    2014-01-01

    Objective: To develop and validate an image analysis method for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. Methods: TLC-densitometric and TLC-image analysis methods were developed, validated, and used for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. The results obtained by these two different quantification methods were compared by paired t-test. Results: Both assays provided good linearity, accuracy, reproducibility and selectivity for dete...
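
    The paired t-test used to compare the two quantification methods can be reproduced with SciPy; the concentration values below are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical gamma-oryzanol concentrations (mg/g) for the same samples
# quantified by the two methods; the study compared TLC-densitometry with
# TLC-image analysis on identical plates.
densitometry   = np.array([2.91, 3.05, 2.88, 3.12, 2.97, 3.01])
image_analysis = np.array([2.89, 3.08, 2.85, 3.15, 2.95, 3.03])

t_stat, p_value = stats.ttest_rel(densitometry, image_analysis)
print(f"paired t = {t_stat:.3f}, p = {p_value:.3f}")
# p > 0.05 would indicate no significant difference between the two methods
```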

  20. Total Mini-Mental State Examination score and regional cerebral blood flow using Z score imaging and automated ROI analysis software in subjects with memory impairment

    International Nuclear Information System (INIS)

    Ikeda, Eiji; Shiozaki, Kazumasa; Takahashi, Nobukazu; Togo, Takashi; Odawara, Toshinari; Oka, Takashi; Inoue, Tomio; Hirayasu, Yoshio

    2008-01-01

    The Mini-Mental State Examination (MMSE) is considered a useful supplementary method to diagnose dementia and evaluate the severity of cognitive disturbance. However, the region of the cerebrum that correlates with the MMSE score is not clear. Recently, a new method was developed to analyze regional cerebral blood flow (rCBF) using a Z score imaging system (eZIS). This system shows changes of rCBF when compared with a normal database. In addition, a three-dimensional stereotaxic region of interest (ROI) template (3DSRT), fully automated ROI analysis software was developed. The objective of this study was to investigate the correlation between rCBF changes and total MMSE score using these new methods. The association between total MMSE score and rCBF changes was investigated in 24 patients (mean age±standard deviation (SD) 71.5±9.2 years; 6 men and 18 women) with memory impairment using eZIS and 3DSRT. Step-wise multiple regression analysis was used for multivariate analysis, with the total MMSE score as the dependent variable and rCBF change in 24 areas as the independent variable. Total MMSE score was significantly correlated only with the reduction of left hippocampal perfusion but not with right (P<0.01). Total MMSE score is an important indicator of left hippocampal function. (author)
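
    A minimal sketch of a forward step-wise regression of MMSE score on regional rCBF changes is given below (statsmodels OLS, synthetic data, only four of the 24 ROIs); it illustrates the statistical approach, not the authors' software.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(y, X, alpha=0.05):
    """Simple forward step-wise OLS: add the predictor with the smallest
    p-value at each step while it stays below `alpha`."""
    selected, remaining = [], list(X.columns)
    while remaining:
        pvals = {}
        for col in remaining:
            model = sm.OLS(y, sm.add_constant(X[selected + [col]])).fit()
            pvals[col] = model.pvalues[col]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical data: MMSE scores and rCBF changes in a few example ROIs
rng = np.random.default_rng(2)
X = pd.DataFrame(rng.normal(size=(24, 4)),
                 columns=["L_hippocampus", "R_hippocampus",
                          "L_parietal", "R_parietal"])
mmse = 24 - 3.0 * X["L_hippocampus"] + rng.normal(scale=1.0, size=24)
print(forward_stepwise(mmse, X))      # expected to pick L_hippocampus
```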

  1. Detailed analysis of latencies in image-based dynamic MLC tracking

    International Nuclear Information System (INIS)

    Poulsen, Per Rugaard; Cho, Byungchul; Sawant, Amit; Ruan, Dan; Keall, Paul J.

    2010-01-01

    Purpose: Previous measurements of the accuracy of image-based real-time dynamic multileaf collimator (DMLC) tracking show that the major contributor to errors is latency, i.e., the delay between target motion and MLC response. Therefore the purpose of this work was to develop a method for detailed analysis of latency contributions during image-based DMLC tracking. Methods: A prototype DMLC tracking system integrated with a linear accelerator was used for tracking a phantom with an embedded fiducial marker during treatment delivery. The phantom performed a sinusoidal motion. Real-time target localization was based on x-ray images acquired either with a portal imager or a kV imager mounted orthogonal to the treatment beam. Each image was stored in a file on the imaging workstation. A marker segmentation program opened the image file, determined the marker position in the image, and transferred it to the DMLC tracking program. This program estimated the three-dimensional target position by a single-imager method and adjusted the MLC aperture to the target position. Imaging intervals ΔT image from 150 to 1000 ms were investigated for both kV and MV imaging. After the experiments, the recorded images were synchronized with MLC log files generated by the MLC controller and tracking log files generated by the tracking program. This synchronization allowed temporal analysis of the information flow for each individual image from acquisition to completed MLC adjustment. The synchronization also allowed investigation of the MLC adjustment dynamics on a considerably finer time scale than the 50 ms time resolution of the MLC log files. Results: For ΔT image =150 ms, the total time from image acquisition to completed MLC adjustment was 380±9 ms for MV and 420±12 ms for kV images. The main part of this time was from image acquisition to completed image file writing (272 ms for MV and 309 ms for kV). Image file opening (38 ms), marker segmentation (4 ms), MLC position

  2. Detailed analysis of latencies in image-based dynamic MLC tracking

    Energy Technology Data Exchange (ETDEWEB)

    Poulsen, Per Rugaard; Cho, Byungchul; Sawant, Amit; Ruan, Dan; Keall, Paul J. [Department of Radiation Oncology, Stanford University, Stanford, California 94305 and Department of Oncology and Department of Medical Physics, Aarhus University Hospital, 8000 Aarhus (Denmark); Department of Radiation Oncology, Stanford University, Stanford, California 94305 and Department of Radiation Oncology, Asan Medical Center, Seoul 138-736 (Korea, Republic of); Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States)

    2010-09-15

    Purpose: Previous measurements of the accuracy of image-based real-time dynamic multileaf collimator (DMLC) tracking show that the major contributor to errors is latency, i.e., the delay between target motion and MLC response. Therefore the purpose of this work was to develop a method for detailed analysis of latency contributions during image-based DMLC tracking. Methods: A prototype DMLC tracking system integrated with a linear accelerator was used for tracking a phantom with an embedded fiducial marker during treatment delivery. The phantom performed a sinusoidal motion. Real-time target localization was based on x-ray images acquired either with a portal imager or a kV imager mounted orthogonal to the treatment beam. Each image was stored in a file on the imaging workstation. A marker segmentation program opened the image file, determined the marker position in the image, and transferred it to the DMLC tracking program. This program estimated the three-dimensional target position by a single-imager method and adjusted the MLC aperture to the target position. Imaging intervals ΔT_image from 150 to 1000 ms were investigated for both kV and MV imaging. After the experiments, the recorded images were synchronized with MLC log files generated by the MLC controller and tracking log files generated by the tracking program. This synchronization allowed temporal analysis of the information flow for each individual image from acquisition to completed MLC adjustment. The synchronization also allowed investigation of the MLC adjustment dynamics on a considerably finer time scale than the 50 ms time resolution of the MLC log files. Results: For ΔT_image = 150 ms, the total time from image acquisition to completed MLC adjustment was 380 ± 9 ms for MV and 420 ± 12 ms for kV images. The main part of this time was from image acquisition to completed image file writing (272 ms for MV and 309 ms for kV). Image file opening (38 ms), marker segmentation (4 ms
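
    The paper derives its latency budget by synchronizing image files with MLC and tracking log files; as a generic illustration of how an overall latency could be estimated from a sinusoidal phantom experiment, the NumPy sketch below cross-correlates a target trace with a delayed MLC trace (all signals synthetic).

```python
import numpy as np

def estimate_latency(target, mlc, dt):
    """Estimate system latency as the lag (in seconds) that maximises the
    cross-correlation between the target trajectory and the MLC trajectory.

    target, mlc : equally sampled 1-D position traces
    dt          : sampling interval in seconds
    """
    t = target - target.mean()
    m = mlc - mlc.mean()
    corr = np.correlate(m, t, mode="full")      # positive lag = MLC delayed
    lags = np.arange(-len(t) + 1, len(t))
    return lags[np.argmax(corr)] * dt

# Hypothetical sinusoidal phantom motion (period 4 s) with a 0.4 s MLC delay
dt = 0.05
time = np.arange(0, 60, dt)
target = 10 * np.sin(2 * np.pi * time / 4.0)
mlc = 10 * np.sin(2 * np.pi * (time - 0.4) / 4.0) + 0.3 * np.random.randn(time.size)
print(f"estimated latency: {estimate_latency(target, mlc, dt):.2f} s")
```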

  3. Dimensionality Reduction of Hyperspectral Image with Graph-Based Discriminant Analysis Considering Spectral Similarity

    Directory of Open Access Journals (Sweden)

    Fubiao Feng

    2017-03-01

    Full Text Available Recently, graph embedding has drawn great attention for dimensionality reduction in hyperspectral imagery. For example, locality preserving projection (LPP) utilizes the typical Euclidean distance in a heat kernel to create an affinity matrix and projects the high-dimensional data into a lower-dimensional space. However, the Euclidean distance is not sufficiently correlated with the intrinsic spectral variation of a material, which may result in an inappropriate graph representation. In this work, a graph-based discriminant analysis with spectral similarity (GDA-SS) measurement is proposed, which fully considers how spectral curves change across bands. Experimental results based on real hyperspectral images demonstrate that the proposed method is superior to traditional methods, such as supervised LPP, and the state-of-the-art sparse graph-based discriminant analysis (SGDA).
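
    The contrast between a Euclidean heat-kernel affinity and a spectral-similarity-based affinity can be sketched as follows; the spectral-angle kernel is one common choice and is not necessarily the exact GDA-SS measure.

```python
import numpy as np

def heat_kernel_affinity(X, sigma=1.0):
    """Classical LPP-style affinity: Gaussian heat kernel on Euclidean distance."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-np.maximum(d2, 0) / (2 * sigma**2))

def spectral_angle_affinity(X, sigma=0.2):
    """Affinity based on the spectral angle between pixel spectra, which is
    insensitive to overall brightness and closer in spirit to a
    spectral-similarity measure."""
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    cos = np.clip(Xn @ Xn.T, -1.0, 1.0)
    angle = np.arccos(cos)                       # radians, 0 = identical shape
    return np.exp(-angle**2 / (2 * sigma**2))

# Hypothetical hyperspectral pixels: 200 pixels x 100 bands
X = np.random.rand(200, 100)
W_euc = heat_kernel_affinity(X, sigma=1.0)
W_sam = spectral_angle_affinity(X, sigma=0.2)
print(W_euc.shape, W_sam.shape)
```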

  4. Automated image analysis in the study of collagenous colitis

    DEFF Research Database (Denmark)

    Fiehn, Anne-Marie Kanstrup; Kristensson, Martin; Engel, Ulla

    2016-01-01

    PURPOSE: The aim of this study was to develop an automated image analysis software to measure the thickness of the subepithelial collagenous band in colon biopsies with collagenous colitis (CC) and incomplete CC (CCi). The software measures the thickness of the collagenous band on microscopic...... slides stained with Van Gieson (VG). PATIENTS AND METHODS: A training set consisting of ten biopsies diagnosed as CC, CCi, and normal colon mucosa was used to develop the automated image analysis (VG app) to match the assessment by a pathologist. The study set consisted of biopsies from 75 patients...

  5. Automated rice leaf disease detection using color image analysis

    Science.gov (United States)

    Pugoy, Reinald Adrian D. L.; Mariano, Vladimir Y.

    2011-06-01

    In rice-related institutions such as the International Rice Research Institute, assessing the health condition of a rice plant through its leaves, which is usually done as a manual eyeball exercise, is important to come up with good nutrient and disease management strategies. In this paper, an automated system that can detect diseases present in a rice leaf using color image analysis is presented. In the system, the outlier region is first obtained from a rice leaf image to be tested using histogram intersection between the test and healthy rice leaf images. Upon obtaining the outlier, it is then subjected to a threshold-based K-means clustering algorithm to group related regions into clusters. Then, these clusters are subjected to further analysis to finally determine the suspected diseases of the rice leaf.
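
    The two ingredients named above - histogram intersection against a healthy reference and K-means grouping of outlier colours - can be sketched as follows with synthetic leaf images.

```python
import numpy as np
from sklearn.cluster import KMeans

def histogram_intersection(h1, h2):
    """Similarity of two normalised histograms (1 = identical)."""
    return np.minimum(h1, h2).sum()

def colour_histogram(rgb, bins=16):
    hist, _ = np.histogramdd(rgb.reshape(-1, 3), bins=bins, range=[(0, 256)] * 3)
    return hist.ravel() / hist.sum()

# Hypothetical leaf images (H x W x 3, uint8); a real pipeline would compare a
# test leaf against a reference healthy leaf.
healthy = (np.random.rand(128, 128, 3) * 255).astype(np.uint8)
test = healthy.copy()
test[40:80, 40:80, 0] = 200                       # simulated brownish lesion

sim = histogram_intersection(colour_histogram(healthy), colour_histogram(test))
print(f"histogram intersection: {sim:.3f}")       # < 1 indicates outlier colours

# Cluster the test-leaf pixels so that lesion-like colours group together
pixels = test.reshape(-1, 3).astype(float)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pixels)
print(np.bincount(labels))                        # pixels per colour cluster
```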

  6. Intramuscular leukemic relapse: clinical signs and imaging findings. A multicentric analysis

    Energy Technology Data Exchange (ETDEWEB)

    Surov, Alexey [Martin Luther University Halle-Wittenberg, Department of Radiology, Halle (Germany); University of Leipzig, Department of Diagnostic and Interventional Radiology, Leipzig (Germany); Kiratli, Hayyam [Hacettepe University School of Medicine, Department of Ophthalmology, Ankara (Turkey); Im, Soo Ah [Seoul St. Mary' s Hospital, Department of Radiology, Seoul (Korea, Republic of); Manabe, Yasuhiro [National Hospital Organization Okayama Medical Center, Department of Neurology, Okayama (Japan); O' Neill, Alibhe; Shinagare, Atul B. [Brigham and Women' s Hospital, Department of Radiology, Boston, MA (United States); Spielmann, Rolf Peter [Martin Luther University Halle-Wittenberg, Department of Radiology, Halle (Germany)

    2014-09-26

    Leukemia is a group of malignant diseases involving the peripheral blood and bone marrow. Extramedullary tumor manifestations can also occur in leukemia; these more often involve lymph nodes, skin, and bones. Intramuscular leukemic relapse (ILR) is very unusual. The aim of this analysis was to summarize the reported data regarding clinical signs and radiological features of ILR. The PubMed database was searched for publications related to ILR. After an analysis of all identified articles, 20 publications matched the inclusion criteria. The authors of the 20 publications were contacted and provided imaging of their cases for review. The following were recorded: age, gender, primary diagnosis, clinical signs, pattern, localization and size of the intramuscular leukemic relapse. Images of 16 patients were provided [8 computed tomography (CT) images and 15 magnetic resonance images, MRI]. Furthermore, one patient with ILR was identified in our institutional database. Therefore, images of 17 patients were available for further analysis. Overall, 32 cases with ILR were included in the analysis. In most cases acute myeloid leukemia was diagnosed. Most ILRs were localized in the extremities (44 %) and in the extraocular muscles (44 %). Clinically, ILR manifested as local pain, swelling and muscle weakness. Radiologically, ILR presented most frequently with diffuse muscle infiltration. On postcontrast CT/MRI, most lesions demonstrated homogeneous enhancement. ILRs were hypo-/isointense on T1w and hyperintense on T2w images. ILR manifests commonly as focal pain, swelling and muscle weakness. ILR predominantly involved the extraocular musculature and the extremities. Radiologically, diffuse muscle infiltration was the most common imaging finding. (orig.)

  7. Low-level processing for real-time image analysis

    Science.gov (United States)

    Eskenazi, R.; Wilf, J. M.

    1979-01-01

    A system that detects object outlines in television images in real time is described. A high-speed pipeline processor transforms the raw image into an edge map and a microprocessor, which is integrated into the system, clusters the edges, and represents them as chain codes. Image statistics, useful for higher level tasks such as pattern recognition, are computed by the microprocessor. Peak intensity and peak gradient values are extracted within a programmable window and are used for iris and focus control. The algorithms implemented in hardware and the pipeline processor architecture are described. The strategy for partitioning functions in the pipeline was chosen to make the implementation modular. The microprocessor interface allows flexible and adaptive control of the feature extraction process. The software algorithms for clustering edge segments, creating chain codes, and computing image statistics are also discussed. A strategy for real time image analysis that uses this system is given.

  8. Automated Image Analysis of Offshore Infrastructure Marine Biofouling

    Directory of Open Access Journals (Sweden)

    Kate Gormley

    2018-01-01

    Full Text Available In the UK, some of the oldest oil and gas installations have been in the water for over 40 years and have considerable colonisation by marine organisms, which may lead to both industry challenges and/or potential biodiversity benefits (e.g., artificial reefs). The project objective was to test the use of an automated image analysis software (CoralNet) on images of marine biofouling from offshore platforms on the UK continental shelf, with the aim of (i) training the software to identify the main marine biofouling organisms on UK platforms; (ii) testing the software performance on 3 platforms under 3 different analysis criteria (methods A–C); (iii) calculating the percentage cover of marine biofouling organisms and (iv) providing recommendations to industry. Following software training with 857 images, and testing of three platforms, results showed that the diversity of the three platforms ranged from low (in the central North Sea) to moderate (in the northern North Sea). The two central North Sea platforms were dominated by the plumose anemone Metridium dianthus, and the northern North Sea platform showed less obvious species domination. Three different analysis criteria were created, in which the method of point selection, the number of points assessed and the confidence-level threshold (CT) varied: (method A) random selection of 20 points with a CT of 80%, (method B) stratified random selection of 50 points with a CT of 90% and (method C) a grid approach of 100 points with a CT of 90%. Performed across the three platforms, the results showed that there were no significant differences across the majority of species and comparison pairs. No significant difference (across all species) was noted between confirmed annotation methods (A, B and C). It was considered that the software performed well for the classification of the main fouling species in the North Sea. Overall, the study showed that the use of automated image analysis software may enable a more efficient and consistent
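
    Once the software has assigned a label and a confidence to each point, percentage cover per taxon reduces to a filtered count; the sketch below uses hypothetical point annotations and a 90% confidence threshold.

```python
import numpy as np
import pandas as pd

# Hypothetical point annotations for one platform image set: each row is one
# classified point with the label suggested by the software and its confidence.
rng = np.random.default_rng(3)
labels = rng.choice(["Metridium dianthus", "bare steel", "hydroid", "tubeworm"],
                    size=500, p=[0.45, 0.25, 0.20, 0.10])
confidence = rng.uniform(0.5, 1.0, size=500)
points = pd.DataFrame({"label": labels, "confidence": confidence})

def percent_cover(points, confidence_threshold=0.9):
    """Percentage cover per taxon, using only points at or above the confidence
    threshold (points below it would go to manual confirmation)."""
    accepted = points[points["confidence"] >= confidence_threshold]
    return 100 * accepted["label"].value_counts() / len(accepted)

print(percent_cover(points, confidence_threshold=0.9).round(1))
```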

  9. Automated MicroSPECT/MicroCT Image Analysis of the Mouse Thyroid Gland.

    Science.gov (United States)

    Cheng, Peng; Hollingsworth, Brynn; Scarberry, Daniel; Shen, Daniel H; Powell, Kimerly; Smart, Sean C; Beech, John; Sheng, Xiaochao; Kirschner, Lawrence S; Menq, Chia-Hsiang; Jhiang, Sissy M

    2017-11-01

    The ability of thyroid follicular cells to take up iodine enables the use of radioactive iodine (RAI) for imaging and targeted killing of RAI-avid thyroid cancer following thyroidectomy. To facilitate identifying novel strategies to improve ¹³¹I therapeutic efficacy for patients with RAI refractory disease, it is desired to optimize image acquisition and analysis for preclinical mouse models of thyroid cancer. A customized mouse cradle was designed and used for microSPECT/CT image acquisition at 1 hour (t1) and 24 hours (t24) post injection of ¹²³I, which mainly reflect RAI influx/efflux equilibrium and RAI retention in the thyroid, respectively. FVB/N mice with normal thyroid glands and TgBRAF V600E mice with thyroid tumors were imaged. In-house CTViewer software was developed to streamline image analysis with new capabilities, along with display of 3D voxel-based ¹²³I gamma photon intensity in MATLAB. The customized mouse cradle facilitates consistent tissue configuration among image acquisitions such that rigid body registration can be applied to align serial images of the same mouse via the in-house CTViewer software. CTViewer is designed specifically to streamline SPECT/CT image analysis with functions tailored to quantify thyroid radioiodine uptake. Automatic segmentation of thyroid volumes of interest (VOI) from adjacent salivary glands in t1 images is enabled by superimposing the thyroid VOI from the t24 image onto the corresponding aligned t1 image. The extent of heterogeneity in ¹²³I accumulation within thyroid VOIs can be visualized by 3D display of voxel-based ¹²³I gamma photon intensity. MicroSPECT/CT image acquisition and analysis for thyroidal RAI uptake is greatly improved by the cradle and the CTViewer software, respectively. Furthermore, the approach of superimposing thyroid VOIs from t24 images to select thyroid VOIs on corresponding aligned t1 images can be applied to studies in which the target tissue has differential radiotracer retention

  10. Open source tools for fluorescent imaging.

    Science.gov (United States)

    Hamilton, Nicholas A

    2012-01-01

    As microscopy becomes increasingly automated and imaging expands in the spatial and time dimensions, quantitative analysis tools for fluorescent imaging are becoming critical to remove both bottlenecks in throughput as well as fully extract and exploit the information contained in the imaging. In recent years there has been a flurry of activity in the development of bio-image analysis tools and methods with the result that there are now many high-quality, well-documented, and well-supported open source bio-image analysis projects with large user bases that cover essentially every aspect from image capture to publication. These open source solutions are now providing a viable alternative to commercial solutions. More importantly, they are forming an interoperable and interconnected network of tools that allow data and analysis methods to be shared between many of the major projects. Just as researchers build on, transmit, and verify knowledge through publication, open source analysis methods and software are creating a foundation that can be built upon, transmitted, and verified. Here we describe many of the major projects, their capabilities, and features. We also give an overview of the current state of open source software for fluorescent microscopy analysis and the many reasons to use and develop open source methods. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. Image analysis to evaluate the browning degree of banana (Musa spp.) peel.

    Science.gov (United States)

    Cho, Jeong-Seok; Lee, Hyeon-Jeong; Park, Jung-Hoon; Sung, Jun-Hyung; Choi, Ji-Young; Moon, Kwang-Deog

    2016-03-01

    Image analysis was applied to examine banana peel browning. The banana samples were divided into 3 treatment groups: no treatment and normal packaging (Cont); CO2 gas exchange packaging (CO); and normal packaging with an ethylene generator (ET). We confirmed that the browning of banana peels developed more quickly in the CO group than in the other groups, based on a sensory test and enzyme assay. The G (green) and CIE L*, a*, and b* values obtained from the image analysis sharply increased or decreased in the CO group, and these colour values showed high correlation coefficients (>0.9) with the sensory test results. CIE L*a*b* values measured with a colorimeter also showed high correlation coefficients, but comparatively lower than those of the image analysis. Based on this analysis, browning of the banana occurred more quickly with CO2 gas exchange packaging, and image analysis can be used to evaluate the browning of banana peels. Copyright © 2015 Elsevier Ltd. All rights reserved.
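
    The colour-feature extraction and the correlation with sensory scores can be sketched as follows; the RGB-to-CIELAB conversion uses scikit-image, and the a* values and sensory scores are hypothetical.

```python
import numpy as np
from skimage import color
from scipy import stats

def peel_colour_features(rgb_image):
    """Mean G and CIE L*, a*, b* over a banana peel image (RGB floats in [0, 1])."""
    lab = color.rgb2lab(rgb_image)
    return {
        "G":  rgb_image[..., 1].mean(),
        "L*": lab[..., 0].mean(),
        "a*": lab[..., 1].mean(),
        "b*": lab[..., 2].mean(),
    }

# Placeholder peel image
rgb = np.random.rand(64, 64, 3)
print(peel_colour_features(rgb))

# Hypothetical data: one image-derived colour value and one sensory browning
# score per storage day, used to compute a correlation coefficient as in the study.
a_star  = np.array([2.1, 3.5, 5.0, 7.8, 10.4, 13.9])   # increases with browning
sensory = np.array([1.0, 1.5, 2.3, 3.1, 4.0, 4.8])     # panel browning score
r, p = stats.pearsonr(a_star, sensory)
print(f"r = {r:.3f}, p = {p:.4f}")
```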

  12. Cardiac imaging: working towards fully-automated machine analysis & interpretation.

    Science.gov (United States)

    Slomka, Piotr J; Dey, Damini; Sitek, Arkadiusz; Motwani, Manish; Berman, Daniel S; Germano, Guido

    2017-03-01

    Non-invasive imaging plays a critical role in managing patients with cardiovascular disease. Although subjective visual interpretation remains the clinical mainstay, quantitative analysis facilitates objective, evidence-based management, and advances in clinical research. This has driven developments in computing and software tools aimed at achieving fully automated image processing and quantitative analysis. In parallel, machine learning techniques have been used to rapidly integrate large amounts of clinical and quantitative imaging data to provide highly personalized individual patient-based conclusions. Areas covered: This review summarizes recent advances in automated quantitative imaging in cardiology and describes the latest techniques which incorporate machine learning principles. The review focuses on the cardiac imaging techniques which are in wide clinical use. It also discusses key issues and obstacles for these tools to become utilized in mainstream clinical practice. Expert commentary: Fully-automated processing and high-level computer interpretation of cardiac imaging are becoming a reality. Application of machine learning to the vast amounts of quantitative data generated per scan and integration with clinical data also facilitates a move to more patient-specific interpretation. These developments are unlikely to replace interpreting physicians but will provide them with highly accurate tools to detect disease, risk-stratify, and optimize patient-specific treatment. However, with each technological advance, we move further from human dependence and closer to fully-automated machine interpretation.

  13. Mediman: Object oriented programming approach for medical image analysis

    International Nuclear Information System (INIS)

    Coppens, A.; Sibomana, M.; Bol, A.; Michel, C.

    1993-01-01

    Mediman is a new image analysis package which has been developed to analyze quantitatively Positron Emission Tomography (PET) data. It is object-oriented, written in C++, and its user interface is based on InterViews, on top of which new classes have been added. Mediman accesses data using external data representation or an import/export mechanism which avoids data duplication. Multimodality studies are organized in a simple database which includes images, headers, color tables, lists and objects of interest (OOIs) and history files. Stored color table parameters allow focusing directly on the interesting portion of the dynamic range. Lists allow organizing the study according to modality, acquisition protocol, time and spatial properties. OOIs (points, lines and regions) are stored in absolute 3-D coordinates, allowing correlation with other co-registered imaging modalities such as MRI or SPECT. OOIs have visualization properties and are organized into groups. Quantitative ROI analysis of anatomic images consists of position, distance and volume calculations on selected OOIs. An image calculator is connected to Mediman. Quantitation of metabolic images is performed via profiles, sectorization, time activity curves and kinetic modeling. Mediman is menu and mouse driven; macro-commands can be registered and replayed. Its interface is customizable through a configuration file. The benefits of the object-oriented approach are discussed from a development point of view.

  14. Texture analysis of computed tomography images of acute ischemic stroke patients

    International Nuclear Information System (INIS)

    Oliveira, M.S.; Castellano, G.; Fernandes, P.T.; Avelar, W.M.; Santos, S.L.M.; Li, L.M.

    2009-01-01

    Computed tomography (CT) images are routinely used to assess ischemic brain stroke in the acute phase. They can provide important clues about whether to treat the patient by thrombolysis with tissue plasminogen activator. However, in the acute phase, the lesions may be difficult to detect in the images using standard visual analysis. The objective of the present study was to determine if texture analysis techniques applied to CT images of stroke patients could differentiate between normal tissue and affected areas that usually go unperceived under visual analysis. We performed a pilot study in which texture analysis, based on the gray level co-occurrence matrix, was applied to the CT brain images of 5 patients and of 5 control subjects and the results were compared by discriminant analysis. Thirteen regions of interest, regarding areas that may be potentially affected by ischemic stroke, were selected for calculation of texture parameters. All regions of interest for all subjects were classified as lesional or non-lesional tissue by an expert neuroradiologist. Visual assessment of the discriminant analysis graphs showed differences in the values of texture parameters between patients and controls, and also between texture parameters for lesional and non-lesional tissue of the patients. This suggests that texture analysis can indeed be a useful tool to help neurologists in the early assessment of ischemic stroke and quantification of the extent of the affected areas. (author)
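
    A gray-level co-occurrence matrix (GLCM) feature extraction of the kind described can be sketched with scikit-image (function names assume a recent release where they are spelled graycomatrix/graycoprops); the two ROIs below are synthetic stand-ins for normal and lesional tissue.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi, distances=(1,), angles=(0, np.pi / 2), levels=64):
    """Gray-level co-occurrence features for one region of interest.

    roi : 2-D array of CT values; rescaled here to `levels` gray levels.
    """
    lo, hi = roi.min(), roi.max()
    q = np.round((roi - lo) / (hi - lo + 1e-9) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=distances, angles=angles,
                        levels=levels, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

# Hypothetical ROIs: "normal" tissue vs. a slightly smoother "lesional" patch
rng = np.random.default_rng(4)
normal = rng.normal(35, 6, size=(32, 32))                 # HU-like values
lesion = rng.normal(30, 3, size=(32, 32))
print("normal :", glcm_features(normal))
print("lesion :", glcm_features(lesion))
```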

  15. Application of forensic image analysis in accident investigations.

    Science.gov (United States)

    Verolme, Ellen; Mieremet, Arjan

    2017-09-01

    Forensic investigations are primarily meant to obtain objective answers that can be used for criminal prosecution. Accident analyses are usually performed to learn from incidents and to prevent similar events from occurring in the future. Although the primary goal may be different, the steps in which information is gathered, interpreted and weighed are similar in both types of investigations, implying that forensic techniques can be of use in accident investigations as well. The use in accident investigations usually means that more information can be obtained from the available information than when used in criminal investigations, since the latter require a higher evidence level. In this paper, we demonstrate the applicability of forensic techniques for accident investigations by presenting a number of cases from one specific field of expertise: image analysis. With the rapid spread of digital devices and new media, a wealth of image material and other digital information has become available for accident investigators. We show that much information can be distilled from footage by using forensic image analysis techniques. These applications show that image analysis provides information that is crucial for obtaining the sequence of events and the two- and three-dimensional geometry of an accident. Since accident investigation focuses primarily on learning from accidents and prevention of future accidents, and less on the blame that is crucial for criminal investigations, the field of application of these forensic tools may be broader than would be the case in purely legal sense. This is an important notion for future accident investigations. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Phase Image Analysis in Conduction Disturbance Patients

    Energy Technology Data Exchange (ETDEWEB)

    Kwark, Byeng Su; Choi, Si Wan; Kang, Seung Sik; Park, Ki Nam; Lee, Kang Wook; Jeon, Eun Seok; Park, Chong Hun [Chung Nam University Hospital, Daejeon (Korea, Republic of)

    1994-03-15

    It is known that the normal His-Purkinje system provides for nearly synchronous activation of the right (RV) and left (LV) ventricles. When His-Purkinje conduction is abnormal, the resulting sequence of ventricular contraction must be correspondingly abnormal. These abnormal mechanical consequences were difficult to demonstrate because of the complexity and the rapidity of the events. To determine the relationship between phase changes and abnormalities of ventricular conduction, we performed phase image analysis of Tc-RBC gated blood pool scintigrams in patients with intraventricular conduction disturbances (24 complete left bundle branch block (C-LBBB), 15 complete right bundle branch block (C-RBBB), 13 Wolff-Parkinson-White syndrome (WPW), 10 controls). The results were as follows: 1) The ejection fraction (EF), peak ejection rate (PER), and peak filling rate (PFR) of the LV in gated blood pool scintigraphy (GBPS) were significantly lower in patients with C-LBBB than in controls (44.4 ± 13.9% vs 69.9 ± 4.2%, 2.48 ± 0.98 vs 3.51 ± 0.62, 1.76 ± 0.71 vs 3.38 ± 0.92, respectively, p<0.05). 2) In the phase angle analysis of the LV, the standard deviation (SD), full width at half maximum of the phase angle (FWHM), and range of phase angle were significantly increased in patients with C-LBBB compared with controls (20.6 ± 18.1 vs 8.6 ± 1.8, 22.5 ± 9.2 vs 16.0 ± 3.9, 95.7 ± 31.7 vs 51.3 ± 5.4, respectively, p<0.05). 3) There was no significant difference in EF, PER, or PFR between patients with the Wolff-Parkinson-White syndrome and controls. 4) The standard deviation and range of phase angle were significantly higher in patients with WPW syndrome than in controls (10.6 ± 2.6 vs 8.6 ± 1.8, p<0.05, and 69.8 ± 11.7 vs 51.3 ± 5.4, p<0.001, respectively); however, there was no difference between the two groups in the full width at half maximum. 5) Phase image analysis revealed a relatively uniform phase across both ventricles in patients with normal conduction, but a markedly delayed phase in the left ventricle

  17. 3-D Image Analysis of Fluorescent Drug Binding

    Directory of Open Access Journals (Sweden)

    M. Raquel Miquel

    2005-01-01

    Full Text Available Fluorescent ligands provide the means of studying receptors in whole tissues using confocal laser scanning microscopy and have advantages over antibody- or non-fluorescence-based methods. Confocal microscopy provides large volumes of images to be measured. Histogram analysis of 3-D image volumes is proposed as a method of graphically displaying large amounts of volumetric image data to be quickly analyzed and compared. The fluorescent ligand BODIPY FL-prazosin (QAPB) was used in mouse aorta. Histogram analysis reports the amount of ligand-receptor binding under different conditions, and the technique is sensitive enough to detect changes in receptor availability after antagonist incubation or genetic manipulations. QAPB binding was concentration dependent, causing concentration-related rightward shifts in the histogram. In the presence of 10 μM phenoxybenzamine (blocking agent), the QAPB (50 nM) histogram overlaps the autofluorescence curve. The histogram obtained for the α1D-knockout aorta lay to the left of that of the control and α1B-knockout aorta, indicating a reduction in α1D receptors. We have shown, for the first time, that it is possible to graphically display binding of a fluorescent drug to a biological tissue. Although our application is specific to adrenergic receptors, the general method could be applied to any volumetric, fluorescence-image-based assay.

  18. Automatic analysis of the micronucleus test in primary human lymphocytes using image analysis.

    Science.gov (United States)

    Frieauff, W; Martus, H J; Suter, W; Elhajouji, A

    2013-01-01

    The in vitro micronucleus test (MNT) is a well-established test for early screening of new chemical entities in industrial toxicology. For assessing the clastogenic or aneugenic potential of a test compound, micronucleus induction in cells has been shown repeatedly to be a sensitive and a specific parameter. Various automated systems to replace the tedious and time-consuming visual slide analysis procedure as well as flow cytometric approaches have been discussed. The ROBIAS (Robotic Image Analysis System) for both automatic cytotoxicity assessment and micronucleus detection in human lymphocytes was developed at Novartis where the assay has been used to validate positive results obtained in the MNT in TK6 cells, which serves as the primary screening system for genotoxicity profiling in early drug development. In addition, the in vitro MNT has become an accepted alternative to support clinical studies and will be used for regulatory purposes as well. The comparison of visual with automatic analysis results showed a high degree of concordance for 25 independent experiments conducted for the profiling of 12 compounds. For concentration series of cyclophosphamide and carbendazim, a very good correlation between automatic and visual analysis by two examiners could be established, both for the relative division index used as cytotoxicity parameter, as well as for micronuclei scoring in mono- and binucleated cells. Generally, false-positive micronucleus decisions could be controlled by fast and simple relocation of the automatically detected patterns. The possibility to analyse 24 slides within 65h by automatic analysis over the weekend and the high reproducibility of the results make automatic image processing a powerful tool for the micronucleus analysis in primary human lymphocytes. The automated slide analysis for the MNT in human lymphocytes complements the portfolio of image analysis applications on ROBIAS which is supporting various assays at Novartis.

  19. Architectural design and analysis of a programmable image processor

    International Nuclear Information System (INIS)

    Siyal, M.Y.; Chowdhry, B.S.; Rajput, A.Q.K.

    2003-01-01

    In this paper we present the architectural design and analysis of a programmable image processor, nicknamed Snake. The processor was designed with a high degree of parallelism to speed up a range of image processing operations. Data parallelism found in array processors has been incorporated into the architecture of the proposed processor. The implementation of commonly used image processing algorithms and their performance evaluation are also discussed. The performance of Snake is also compared with that of other types of processor architectures. (author)

  20. Image seedling analysis to evaluate tomato seed physiological potential

    Directory of Open Access Journals (Sweden)

    Vanessa Neumann Silva

    Full Text Available Computerized seedling image analysis is one of the most recent techniques for detecting differences in vigor between seed lots. The aim of this study was to verify the ability of computerized seedling image analysis with SVIS® to detect differences in vigor between tomato seed lots, compared with the information provided by traditional vigor tests. Ten lots of tomato seeds, cultivar Santa Clara, were stored for 12 months in a controlled environment at 20 ± 1 ºC and 45-50% relative air humidity. The moisture content of the seeds was monitored and the physiological potential was tested at 0, 6 and 12 months of storage, using the germination test, first count of germination, accelerated ageing (traditional and with saturated salt solution), electrical conductivity, seedling emergence and the seed vigor imaging system (SVIS®). A completely randomized experimental design was used with four replications. The parameters obtained by the computerized seedling analysis (seedling length and the indexes of vigor and seedling growth) with the SVIS® software are efficient in detecting differences between tomato seed lots of high and low vigor.