WorldWideScience

Sample records for photoshop-based image analysis

  1. Complete chromogen separation and analysis in double immunohistochemical stains using Photoshop-based image analysis.

    Science.gov (United States)

    Lehr, H A; van der Loos, C M; Teeling, P; Gown, A M

    1999-01-01

    Simultaneous detection of two different antigens on paraffin-embedded and frozen tissues can be accomplished by double immunohistochemistry. However, many double chromogen systems suffer from signal overlap, precluding definite signal quantification. To separate and quantitatively analyze the different chromogens, we imported images into a Macintosh computer using a CCD camera attached to a diagnostic microscope and used Photoshop software for the recognition, selection, and separation of colors. We show here that Photoshop-based image analysis allows complete separation of chromogens not only on the basis of their RGB spectral characteristics, but also on the basis of information concerning saturation, hue, and luminosity intrinsic to the digitized images. We demonstrate that Photoshop-based image analysis provides superior results compared to color separation using bandpass filters. Quantification of the individual chromogens is then provided by Photoshop using the Histogram command, which supplies information on the luminosity (corresponding to gray levels of black-and-white images) and on the number of pixels as a measure of spatial distribution. (J Histochem Cytochem 47:119-125, 1999)
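
    The workflow described above (select pixels by hue and saturation, then read pixel counts and luminosity from the Histogram command) maps onto a few lines of array code. The following is a minimal Python sketch, not the authors' tool; the hue windows for the two chromogens, the background thresholds, and the file name are illustrative assumptions that would need calibration on control slides.

    ```python
    import numpy as np
    from PIL import Image
    from matplotlib.colors import rgb_to_hsv

    # Load the double-stained field and convert to HSV (hue/saturation/value in [0, 1]).
    rgb = np.asarray(Image.open("double_stain.tif").convert("RGB"), dtype=float) / 255.0
    hue, sat, val = np.moveaxis(rgb_to_hsv(rgb), -1, 0)

    # Stained (non-background) pixels: reasonably saturated and not near-white.
    stained = (sat > 0.25) & (val < 0.9)

    # Split the stained pixels into two assumed hue windows, one per chromogen.
    red_mask = stained & ((hue < 0.05) | (hue > 0.95))    # red chromogen
    brown_mask = stained & (hue > 0.05) & (hue < 0.15)    # brown, DAB-like chromogen

    # Histogram-style readout: pixel count (spatial extent) and mean luminosity.
    for name, mask in (("red", red_mask), ("brown", brown_mask)):
        n = int(mask.sum())
        mean_lum = float(val[mask].mean()) if n else float("nan")
        print(f"{name}: {n} pixels, mean luminosity {mean_lum:.3f}")
    ```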

  2. Quantification of video-taped images in microcirculation research using inexpensive imaging software (Adobe Photoshop).

    Science.gov (United States)

    Brunner, J; Krummenauer, F; Lehr, H A

    2000-04-01

    Study end-points in microcirculation research are usually video-taped images rather than numeric computer print-outs. Analysis of these video-taped images for the quantification of microcirculatory parameters usually requires computer-based image analysis systems. Most software programs for image analysis are custom-made, expensive, and limited in their applicability to selected parameters and study end-points. We demonstrate herein that inexpensive, commercially available computer software (Adobe Photoshop), run on a Macintosh G3 computer with a built-in graphic capture board, provides versatile, easy-to-use tools for the quantification of digitized video images. Using images obtained by intravital fluorescence microscopy from the pre- and postischemic muscle microcirculation in the skinfold chamber model in hamsters, Photoshop allows simple and rapid quantification (i) of microvessel diameters, (ii) of the functional capillary density, and (iii) of postischemic leakage of FITC-labeled high molecular weight dextran from postcapillary venules. We present evidence of the technical accuracy of the software tools and of a high degree of interobserver reliability. Inexpensive, commercially available imaging programs (i.e., Adobe Photoshop) provide versatile tools for image analysis with a wide range of potential applications in microcirculation research.

  3. Beauty is only photoshop deep: legislating models' BMIs and photoshopping images.

    Science.gov (United States)

    Krawitz, Marilyn

    2014-06-01

    Many women struggle with poor body image and eating disorders due, in part, to images of very thin women and photoshopped bodies in the media and advertisements. In 2013, Israel's Act Limiting Weight in the Modelling Industry, 5772-2012, came into effect. Known as the Photoshop Law, it requires all models in Israel who are over 18 years old to have a body mass index of 18.5 or higher. The Israeli government was the first government in the world to legislate on this issue. Australia has a voluntary Code of Conduct that is similar to the Photoshop Law. This article argues that the Australian government should follow Israel's lead and pass a law similar to the Photoshop Law because the Code is not sufficiently binding.

  4. Photoshop-based image analysis of canine articular cartilage after subchondral damage.

    Science.gov (United States)

    Lahm, A; Uhl, M; Lehr, H A; Ihling, C; Kreuz, P C; Haberstroh, J

    2004-09-01

    The validity of histopathological grading is a major problem in the assessment of articular cartilage. Calculating the cumulative strength of signal intensity of different stains gives information regarding the amount of proteoglycan, glycoproteins, etc. Using this system, we examined the medium-term effect of subchondral lesions on initially healthy articular cartilage. After cadaver studies, an animal model was created in 12 beagle dogs to produce pure subchondral damage under MRI control, without affecting the articular cartilage. Quantification of the different stains was performed 6 months after subchondral trauma using Photoshop-based image analysis (pixel analysis) with the Histogram command. FLASH 3D sequences revealed intact cartilage after impact in all cases. The best detection of subchondral fractures was achieved with fat-suppressed TIRM sequences. Semiquantitative image analysis showed changes in proteoglycan and glycoprotein quantities in 9 of 12 samples that had not shown any evidence of damage during the initial examination. Correlation analysis showed a loss of the physiological distribution of proteoglycans and glycoproteins in the different zones of articular cartilage. Currently available software programs can be applied for comparative analysis of histologic stains of hyaline cartilage. After subchondral fractures, significant changes occur in the cartilage itself within 6 months.

  5. Semi-automated relative quantification of cell culture contamination with mycoplasma by Photoshop-based image analysis on immunofluorescence preparations.

    Science.gov (United States)

    Kumar, Ashok; Yerneni, Lakshmana K

    2009-01-01

    Mycoplasma contamination in cell culture is a serious setback for the cell-culturist. The experiments undertaken using contaminated cell cultures are known to yield unreliable or false results due to various morphological, biochemical and genetic effects. Earlier surveys revealed incidences of mycoplasma contamination in cell cultures to range from 15 to 80%. Out of a vast array of methods for detecting mycoplasma in cell culture, the cytological methods directly demonstrate the contaminating organism present in association with the cultured cells. In this investigation, we report the adoption of a cytological immunofluorescence assay (IFA), in an attempt to obtain a semi-automated relative quantification of contamination by employing the user-friendly Photoshop-based image analysis. The study performed on 77 cell cultures randomly collected from various laboratories revealed mycoplasma contamination in 18 cell cultures simultaneously by IFA and Hoechst DNA fluorochrome staining methods. It was observed that the Photoshop-based image analysis on IFA stained slides was very valuable as a sensitive tool in providing quantitative assessment on the extent of contamination both per se and in comparison to cellularity of cell cultures. The technique could be useful in estimating the efficacy of anti-mycoplasma agents during decontaminating measures.
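
    As a rough illustration of the "relative quantification" idea (IFA signal normalized to the cellularity of the culture), the sketch below thresholds two fluorescence channels and reports their area ratio. The channel assignments (green for the anti-mycoplasma IFA signal, blue for a nuclear counterstain) and the threshold values are assumptions for the example, not the authors' settings.

    ```python
    import numpy as np
    from PIL import Image

    # Load one immunofluorescence field (file name assumed).
    img = np.asarray(Image.open("ifa_field.tif").convert("RGB"), dtype=float)
    green, blue = img[..., 1], img[..., 2]

    myco_area = np.count_nonzero(green > 60)   # mycoplasma fluorescence pixels
    cell_area = np.count_nonzero(blue > 60)    # nuclear (cellularity) pixels

    # Contamination expressed relative to the cellularity of the culture.
    index = myco_area / max(cell_area, 1)
    print(f"mycoplasma/cellularity area ratio: {index:.3f}")
    ```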

  6. Changes in content and synthesis of collagen types and proteoglycans in osteoarthritis of the knee joint and comparison of quantitative analysis with Photoshop-based image analysis.

    Science.gov (United States)

    Lahm, Andreas; Mrosek, Eike; Spank, Heiko; Erggelet, Christoph; Kasch, Richard; Esser, Jan; Merk, Harry

    2010-04-01

    The different cartilage layers vary in their synthesis of proteoglycan and of the distinct collagen types, with the predominant collagen type II and its associated collagens, e.g. types IX and XI, produced by normal chondrocytes. It has been demonstrated that proteoglycan decreases in degenerative tissue and that a switch from collagen type II to type I occurs. The aim of this study was to evaluate the correlation of real-time (RT)-PCR and Photoshop-based image analysis in detecting such lesions and to find new aspects of their distribution. We performed immunohistochemistry and histology on cartilage tissue samples from 20 patients suffering from osteoarthritis compared with 20 healthy biopsies. Furthermore, we quantified our results on the gene expression of collagen types I and II and aggrecan with the help of real-time (RT)-PCR. Proteoglycan content was measured colorimetrically. Using Adobe Photoshop, the digitized images of histology and immunohistochemistry stains of collagen types I and II were stored on an external data storage device. The area occupied by any specific colour range can be specified and compared in a relative manner directly from the histogram using the "magic wand tool" together with the "select similar" command. In the image histogram, gray levels or luminosity (colour) of all pixels within the selected area, including mean, median and standard deviation, etc., are depicted. Statistical analysis was performed using the t test. With the help of immunohistochemistry, RT-PCR and quantitative RT-PCR we found that not only collagen type II but also collagen type I is synthesized by the cells of the diseased cartilage tissue, shown by increasing amounts of collagen type I mRNA, especially in the later stages of osteoarthritis. A decrease of collagen type II is visible especially in the upper fibrillated area of the advanced osteoarthritic samples, which leads to an overall decrease. Analysis of proteoglycan showed a loss of overall content and a quite uniform staining in…

  7. [Landmark-based automatic registration of serial cross-sectional images of Chinese digital human using Photoshop and Matlab software].

    Science.gov (United States)

    Su, Xiu-yun; Pei, Guo-xian; Yu, Bin; Hu, Yan-ling; Li, Jin; Huang, Qian; Li, Xu; Zhang, Yuan-zhi

    2007-12-01

    This paper describes automatic registration of the serial cross-sectional images of the Chinese digital human by a projective registration method based on landmarks, using the commercially available software Photoshop and Matlab. During cadaver embedment for acquisition of the Chinese digital human images, 4 rods were placed parallel to the vertical axis of the frozen cadaver to allow orientation. Projective distortion of the rod positions on the cross-sectional images was inevitable due to even slight changes of the relative position of the camera. The original cross-sectional images were first processed using Photoshop software to obtain the images of the orientation rods, and the centroid coordinate of every rod image was acquired with Matlab software. With the average coordinate value of the rods as the fiducial point, the two-dimensional projective transformation coefficients of each image were determined. Projective transformation was then carried out and projective distortion was eliminated from each original serial image. The rectified cross-sectional images were again processed using Photoshop to obtain the image of the first orientation rod, the coordinate value of the first rod image was calculated using Matlab software, and the cross-sectional images were cut into images of the same size according to the first rod's spatial coordinate, to achieve automatic registration of the serial cross-sectional images. Using Photoshop and Matlab software, projective transformation can accurately accomplish registration of the serial images with simpler calculation and easier computer processing.
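
    The projective correction described here is a homography estimated from four point correspondences (the rod centroids in a slice versus their reference positions). A small numpy sketch of that step is shown below; the coordinates are invented, and in the paper the centroids come from the Photoshop/Matlab rod-extraction steps.

    ```python
    import numpy as np

    def fit_homography(src, dst):
        """Solve the 3x3 projective transform H (with h33 = 1) from 4 point pairs."""
        A, b = [], []
        for (x, y), (u, v) in zip(src, dst):
            A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
            A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
        h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
        return np.append(h, 1.0).reshape(3, 3)

    # Rod centroids measured in one slice vs. their reference positions (invented).
    rods_in_slice = [(102.3, 98.7), (1948.1, 101.2), (1951.6, 2043.9), (99.8, 2040.4)]
    rods_reference = [(100.0, 100.0), (1950.0, 100.0), (1950.0, 2045.0), (100.0, 2045.0)]
    H = fit_homography(rods_in_slice, rods_reference)

    def warp_point(H, x, y):
        # Rectified coordinates; apply per pixel, or hand H to an image resampler.
        u, v, w = H @ np.array([x, y, 1.0])
        return u / w, v / w

    print(warp_point(H, 102.3, 98.7))  # maps back to approximately (100.0, 100.0)
    ```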

  8. Adobe Photoshop CC for photographers a professional image editor's guide to the creative use of Photoshop for the Macintosh and PC

    CERN Document Server

    Evening, Martin

    2013-01-01

    Martin Evening, Photoshop hall-of-famer and acclaimed digital imaging professional, has revamped his much-admired Photoshop for Photographers book for an eleventh edition, to include detailed instruction for all of the updates to Photoshop CC on Adobe's Creative Cloud. This comprehensive guide covers all the tools and techniques serious photographers need to know when using Photoshop, from workflow guidance to core skills to advanced techniques for professional results. Using clear, succinct instruction and real world examples, this guide is the essential reference for Photoshop users of all levels.

  9. Digital Imaging: An Adobe Photoshop Course

    Science.gov (United States)

    Cobb, Kristine

    2007-01-01

    This article introduces digital imaging, an Adobe Photoshop course at Shrewsbury High School in Shrewsbury, Massachusetts. Students are able to earn art credits to graduate by successfully completing the course. Digital imaging must cover art criteria as well as technical skills. The course begins with tutorials created by the instructor and other…

  10. Agreement between clinical estimation and a new quantitative analysis by Photoshop software in fundus and angiographic image variables.

    Science.gov (United States)

    Ramezani, Alireza; Ahmadieh, Hamid; Azarmina, Mohsen; Soheilian, Masoud; Dehghan, Mohammad H; Mohebbi, Mohammad R

    2009-12-01

    To evaluate the validity of a new method for the quantitative analysis of fundus or angiographic images using Photoshop 7.0 (Adobe, USA) software by comparing it with clinical evaluation. Four hundred and eighteen fundus and angiographic images of diabetic patients were evaluated by three retina specialists and then analyzed by computer using Photoshop 7.0 software. Four variables were selected for comparison: amount of hard exudates (HE) on color pictures, amount of HE on red-free pictures, severity of leakage, and the size of the foveal avascular zone (FAZ). The coefficients of agreement (Kappa) between the two methods in the amount of HE on color and red-free photographs were 85% (0.69) and 79% (0.59), respectively. The agreement for severity of leakage was 72% (0.46). In the two methods for the evaluation of the FAZ size using the magic wand and lasso software tools, the agreement was 54% (0.09) and 89% (0.77), respectively. Agreement in the estimation of the FAZ size by the magnetic lasso tool was excellent and was almost as good in the quantification of HE on color and on red-free images. Considering the agreement of this new technique for the measurement of variables in fundus images using Photoshop software with the clinical evaluation, this method seems to have sufficient validity to be used for the quantitative analysis of HE, leakage, and FAZ size on the angiograms of diabetic patients.
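
    The agreement figures quoted above combine percent agreement with Cohen's kappa. For reference, kappa for two raters over the same images is a short computation; the grading lists below are made-up examples, not study data.

    ```python
    from collections import Counter

    def cohens_kappa(a, b):
        """Cohen's kappa for two raters over the same items."""
        n = len(a)
        po = sum(x == y for x, y in zip(a, b)) / n                  # observed agreement
        ca, cb = Counter(a), Counter(b)
        pe = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n**2   # chance agreement
        return (po - pe) / (1 - pe)

    # Hypothetical severity-of-leakage gradings: clinician vs. Photoshop analysis.
    clinical = ["mild", "mild", "severe", "none", "severe", "mild"]
    photoshop = ["mild", "severe", "severe", "none", "severe", "mild"]
    print(f"kappa = {cohens_kappa(clinical, photoshop):.2f}")
    ```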

  11. Adobe Photoshop images software in the verification of radiation portal

    International Nuclear Information System (INIS)

    Ouyang Shuigen; Wang Xiaohu; Liu Zhiqiang; Wei Xiyi; Qi Yong

    2010-01-01

    Objective: To investigate the value of Adobe Photoshop image software in the verification of radiation portals. Methods: The portal and simulation films or CT reconstruction images were imported into a computer using a scanner. The image size, gray scale and contrast scale were adjusted with Adobe Photoshop image software, then image registration and measurement were completed. Results: By comparison between portal image and simulation image, the set-up errors in the right-left, superior-inferior and anterior-posterior directions were (1.11 ± 1.37) mm, (1.33 ± 1.25) mm and (0.83 ± 0.79) mm in the head and neck; (1.44 ± 1.03) mm, (1.6 ± 1.52) mm and (1.34 ± 1.17) mm in the thorax; (1.53 ± 0.86) mm, (1.83 ± 1.19) mm and (1.67 ± 0.68) mm in the abdomen; (1.93 ± 1.83) mm, (1.59 ± 1.07) mm and (0.85 ± 0.72) mm in the pelvic cavity. Conclusions: Accurate radiation portal verification and position measurement can be completed by using Adobe Photoshop, which is a simple, safe and reliable method. (authors)

  12. Ultrasound estimates of muscle quality in older adults: reliability and comparison of Photoshop and ImageJ for the grayscale analysis of muscle echogenicity.

    Science.gov (United States)

    Harris-Love, Michael O; Seamon, Bryant A; Teixeira, Carla; Ismail, Catheeja

    2016-01-01

    Background. Quantitative diagnostic ultrasound imaging has been proposed as a method of estimating muscle quality using measures of echogenicity. The Rectangular Marquee Tool (RMT) and the Free Hand Tool (FHT) are two types of editing features used in Photoshop and ImageJ for determining a region of interest (ROI) within an ultrasound image. The primary objective of this study is to determine the intrarater and interrater reliability of Photoshop and ImageJ for the estimate of muscle tissue echogenicity in older adults via grayscale histogram analysis. The secondary objective is to compare the mean grayscale values obtained using both the RMT and FHT methods across both image analysis platforms. Methods. This cross-sectional observational study features 18 community-dwelling men (age = 61.5 ± 2.32 years). Longitudinal views of the rectus femoris were captured using B-mode ultrasound. The ROI for each scan was selected by 2 examiners using the RMT and FHT methods from each software program. Their reliability is assessed using intraclass correlation coefficients (ICCs) and the standard error of the measurement (SEM). Measurement agreement for these values is depicted using Bland-Altman plots. A paired t-test is used to determine mean differences in echogenicity expressed as grayscale values using the RMT and FHT methods to select the post-image acquisition ROI. The degree of association among ROI selection methods and image analysis platforms is analyzed using the coefficient of determination (R²). Results. The raters demonstrated excellent intrarater and interrater reliability using the RMT and FHT methods across both platforms (lower bound 95% CI ICC = .97-.99, p < .001). Mean differences between the echogenicity estimates obtained with the RMT and FHT methods were .87 grayscale levels (95% CI [.54-1.21], p < .0001) using data obtained with both programs. The SEM for Photoshop was .97 and 1.05 grayscale levels when using the RMT and FHT ROI selection methods, respectively. Comparatively, the SEM values were .72 and .81 grayscale levels, respectively, when using the RMT and FHT ROI selection methods in ImageJ. Uniform coefficients of determination (R² = .96…
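
    The echogenicity estimate itself is just the mean gray level inside the ROI, and the reported SEM relates to the reliability coefficient through SEM = SD * sqrt(1 - ICC). Below is a sketch with the file name, ROI coordinates, and reliability values as placeholder assumptions.

    ```python
    import numpy as np
    from PIL import Image

    gray = np.asarray(Image.open("rectus_femoris.png").convert("L"), dtype=float)

    # RMT-style ROI: a rectangle (rows 120-220, cols 200-400; assumed coordinates).
    rmt_roi = gray[120:220, 200:400]
    print(f"RMT echogenicity: {rmt_roi.mean():.1f} gray levels (SD {rmt_roi.std():.1f})")

    # FHT-style ROI: any boolean mask standing in for a freehand muscle outline.
    mask = np.zeros_like(gray, dtype=bool)
    mask[130:210, 210:390] = True
    print(f"FHT echogenicity: {gray[mask].mean():.1f} gray levels")

    # Standard error of measurement from a reliability coefficient (values assumed).
    sd_between, icc = 10.4, 0.99
    print(f"SEM ~ {sd_between * (1 - icc) ** 0.5:.2f} gray levels")
    ```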

  13. Evaluation of photoshop based image analysis in cytologic diagnosis of pleural fluid in comparison with conventional modalities.

    Science.gov (United States)

    Jafarian, Amir Hossein; Tasbandi, Aida; Mohamadian Roshan, Nema

    2018-04-19

    The aim of this study is to investigate and compare the results of digital image analysis of pleural effusion cytology samples with conventional modalities. In this cross-sectional study, 53 pleural fluid cytology smears from the Qaem hospital pathology department, located in Mashhad, Iran, were investigated. Prior to digital analysis, all specimens were evaluated by two pathologists and categorized into three groups: benign, suspicious, and malignant. Digital images of the cytology slides were captured using an Olympus microscope and Olympus DP3 digital camera. Appropriate images (n = 130) were separately imported into Adobe Photoshop CS5, and parameters including area and perimeter, circularity, Gray Value mean, integrated density, and nucleus to cytoplasm area ratio were analyzed. Gray Value mean, nucleus to cytoplasm area ratio, and circularity showed the best sensitivity and specificity rates as well as significant differences between all groups. Nucleus area and perimeter also differed significantly between the suspicious and malignant groups on the one hand and the benign group on the other, whereas there was no such difference between the suspicious and malignant groups. We concluded that digital image analysis is a welcome addition to research on pleural fluid smears, as it can provide quantitative data for various comparisons and reduce interobserver variation, which could assist pathologists in achieving a more accurate diagnosis. © 2018 Wiley Periodicals, Inc.
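
    The listed parameters reduce to a few formulas once nucleus and cytoplasm masks exist: circularity = 4πA/P², integrated density = area × mean gray value. The sketch below derives the masks from simple gray-level thresholds (assumed values; the study's regions were delineated in Photoshop) and uses scikit-image for the per-nucleus measurements; the N/C ratio here is a whole-field approximation rather than a per-cell pairing.

    ```python
    import numpy as np
    from PIL import Image
    from skimage.measure import label, regionprops

    gray = np.asarray(Image.open("cytology_field.tif").convert("L"))
    nuclei = label(gray < 90)                  # dark nuclei (assumed threshold)
    cytoplasm_area = np.count_nonzero((gray >= 90) & (gray < 180))

    for region in regionprops(nuclei, intensity_image=gray):
        if region.area < 50:                   # skip debris
            continue
        circularity = 4 * np.pi * region.area / region.perimeter ** 2
        integrated_density = region.area * region.mean_intensity
        nc_ratio = region.area / max(cytoplasm_area, 1)   # per-field approximation
        print(f"area={region.area}  perim={region.perimeter:.1f}  "
              f"circ={circularity:.2f}  intden={integrated_density:.0f}  "
              f"N/C={nc_ratio:.3f}")
    ```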

  14. Adobe Photoshop CC for photographers

    CERN Document Server

    Evening, Martin

    2014-01-01

    Adobe Photoshop for Photographers 2014 Release by Photoshop hall-of-famer and acclaimed digital imaging professional Martin Evening has been fully updated to include detailed instruction for all of the updates to Photoshop CC 2014 on Adobe's Creative Cloud, including significant new features, such as Focus Area selections, enhanced Content-Aware filling, and new Spin and Path blur gallery effects. This guide covers all the tools and techniques photographers and professional image editors need to know when using Photoshop, from workflow guidance to core skills to advanced techniques for professional results.

  15. Clean Up Your Image: A Beginner's Guide to Scanning and Photoshop

    Science.gov (United States)

    Stitzer, Michael S.

    2005-01-01

    In this article, the author addresses the key steps of scanning and illustrates the process with screen shots taken from a Macintosh G4 Powerbook computer running OSX and Adobe Photoshop 7.0. After reviewing scanning procedures, the author describes how to use Photoshop 7.0 to manipulate a scanned image. This activity gives students a good general…

  16. Photoshop Elements 10 For Dummies

    CERN Document Server

    Obermeier, Barbara

    2011-01-01

    Perfect your photos and images with this "focused" guide to the latest version of Photoshop Elements. For most of us, the professional-level Photoshop is overkill for our needs. Amateur photographers and photo enthusiasts turn to Photoshop Elements for a powerful but simpler way to edit and retouch their snapshots. Photoshop Elements 10 For Dummies, fully updated and revised for the latest release of this software product, helps you navigate Elements to create, edit, fix, share, and organize the high-quality images you desire. Full color pages bring the techniques to life and make taking great…

  17. Photoshop CS5 for dummies

    CERN Document Server

    Bauer, Peter

    2010-01-01

    The bestselling guide to the leading image-editing software, fully updated. Previous editions of this For Dummies guide have sold more than 650,000 copies. Richly illustrated in full color, this edition covers all the updates in the newest version of Photoshop, the gold standard for image-editing programs. Used by professional photographers, graphic designers, and Web designers as well as hobbyists, Photoshop has more than four million users worldwide. Photoshop is the image-editing software preferred by professional photographers and designers around the world; the latest version…

  18. Adobe Photoshop CS4 for Photographers The Ultimate Workshop

    CERN Document Server

    Evening, Martin

    2009-01-01

    Professional commercial photographer and digital imager Jeff Schewe (based in Chicago, USA) has teamed up with best-selling Photoshop author Martin Evening to create this goldmine of information for advanced Photoshop users. Building on Martin Evening's successful Adobe Photoshop for Photographers series of titles, this new guide takes the same winning approach and applies it to a professional Photoshop workflow. Highly visual, with clear, step-by-step tutorials, this advanced guide will really appeal to those who want to see how the experts approach Photoshop, producing…

  19. Photoshop CC for dummies

    CERN Document Server

    Bauer, Peter

    2013-01-01

    Stretch your creativity beyond the cloud with this fully-updated Photoshop guide! Photoshop puts amazing design and photo-editing tools in the hands of creative professionals and hobbyists everywhere, and the latest version - Photoshop CC - is packed with even more powerful tools to help you manage and enhance your images. This friendly, full-color guide introduces you to the basics of Photoshop CC and provides clear explanations of the menus, panels, tools, options, and shortcuts you'll use the most. Plus, you'll learn valuable tips for fixing common photo flaws, improving…

  20. Ultrasound estimates of muscle quality in older adults: reliability and comparison of Photoshop and ImageJ for the grayscale analysis of muscle echogenicity

    Directory of Open Access Journals (Sweden)

    Michael O. Harris-Love

    2016-02-01

    Full Text Available Background. Quantitative diagnostic ultrasound imaging has been proposed as a method of estimating muscle quality using measures of echogenicity. The Rectangular Marquee Tool (RMT) and the Free Hand Tool (FHT) are two types of editing features used in Photoshop and ImageJ for determining a region of interest (ROI) within an ultrasound image. The primary objective of this study is to determine the intrarater and interrater reliability of Photoshop and ImageJ for the estimate of muscle tissue echogenicity in older adults via grayscale histogram analysis. The secondary objective is to compare the mean grayscale values obtained using both the RMT and FHT methods across both image analysis platforms. Methods. This cross-sectional observational study features 18 community-dwelling men (age = 61.5 ± 2.32 years). Longitudinal views of the rectus femoris were captured using B-mode ultrasound. The ROI for each scan was selected by 2 examiners using the RMT and FHT methods from each software program. Their reliability is assessed using intraclass correlation coefficients (ICCs) and the standard error of the measurement (SEM). Measurement agreement for these values is depicted using Bland-Altman plots. A paired t-test is used to determine mean differences in echogenicity expressed as grayscale values using the RMT and FHT methods to select the post-image acquisition ROI. The degree of association among ROI selection methods and image analysis platforms is analyzed using the coefficient of determination (R²). Results. The raters demonstrated excellent intrarater and interrater reliability using the RMT and FHT methods across both platforms (lower bound 95% CI ICC = .97–.99, p < .001). Mean differences between the echogenicity estimates obtained with the RMT and FHT methods were .87 grayscale levels (95% CI [.54–1.21], p < .0001) using data obtained with both programs. The SEM for Photoshop was .97 and 1.05 grayscale levels when using the RMT and FHT ROI selection methods, respectively. Comparatively, the SEM values were .72 and .81 grayscale levels, respectively, when using the RMT and FHT ROI selection methods in ImageJ. Uniform coefficients of determination (R² = .96…

  1. Power, speed & automation with Adobe Photoshop

    CERN Document Server

    Scott, Geoff

    2012-01-01

    This is a must for the serious Photoshop user! Power, Speed & Automation explores how to customize and automate Photoshop to increase your speed and productivity. With numerous step-by-step instructions, the authors, two of Adobe's own software developers, walk you through the steps to best tailor Photoshop's interface to your personal workflow; write and apply Actions; and use batching and scripts to process large numbers of images quickly and automatically. You will learn how to build your own dialogs and panels to improve your production workflows in Photoshop, the secrets of changing…

  2. The Photoshop Darkroom 2 creative digital transformations

    CERN Document Server

    Davis, Harold

    2011-01-01

    Award-winning photography/design team Harold and Phyllis Davis are back with a brand new volume in their Photoshop Darkroom series. Picking up where their best-selling first book left off, The Photoshop Darkroom 2: Advanced Digital Post-Processing will show you everything you need to know to take your digital imaging skills to the next level. Great photographers know that the best images begin well before the shutter clicks, and certainly well before Photoshop boots up. Harold takes a step back and shares his helpful tips for capturing the most compelling images possible by keeping in mind…

  3. Teach yourself visually Photoshop CC

    CERN Document Server

    Wooldridge, Mike

    2013-01-01

    Get savvy with the newest features and enhancements of Photoshop CC. The newest version of Photoshop boasts enhanced and new features that afford you some amazing and creative ways to create images with impact, and this popular guide gets visual learners up to speed quickly. Packed with colorful screen shots that illustrate the step-by-step instructions, this visual guide is perfect for Photoshop newcomers as well as experienced users who are looking for beginning to intermediate-level techniques to give their projects the "wow" factor! Veteran and bestselling authors Mike Wooldridge…

  4. Using photoshop filters to create anatomic line-art medical images.

    Science.gov (United States)

    Kirsch, Jacobo; Geller, Brian S

    2006-08-01

    There are multiple ways to obtain anatomic drawings suitable for publication or presentations. This article demonstrates how to use Photoshop to alter digital radiologic images to create line-art illustrations in a quick and easy way. We present two simple-to-use methods; however, not every image can be adequately transformed, and personal preferences and specific changes need to be applied to each image to obtain the desired result. Medical illustrators have always played a major role in radiology and the medical education process. Whether used to teach a complex surgical or radiologic procedure, to define typical or atypical patterns of the spread of disease, or to illustrate normal or aberrant anatomy, medical illustration significantly affects learning. However, if you are not an accomplished illustrator, the alternatives can be expensive (contacting a professional medical illustrator or buying an already existing stock of digital images) or simply not necessarily applicable to what you are trying to communicate. The purpose of this article is to demonstrate how, by using Photoshop (Adobe Systems, San Jose, CA) to alter digital radiologic images, we can create line-art illustrations in a quick, inexpensive, and easy way in preparation for electronic presentations and publication.

  5. Photoshop CS6 all-in-one for dummies

    CERN Document Server

    Obermeier, Barbara

    2012-01-01

    Everything you need to know about the newest version of Photoshop packed into one For Dummies guide. Photoshop is the world's most popular image editing software, with more than four million users worldwide. Professional photographers, graphic designers, and Web designers as well as photo hobbyists need to learn the fundamentals and master the newest features of the latest version of Photoshop - Photoshop CS6. This complete all-in-one reference makes it easy, with eight self-contained minibooks covering each aspect of Photoshop. Helps you familiarize yourself with the latest Photoshop…

  6. Focus On Photoshop Elements Focus on the Fundamentals

    CERN Document Server

    Asch, David

    2011-01-01

    Are you bewildered by the advanced editing options available in Photoshop Elements? Do you want to get the most out of your image without going bleary-eyed in front of a computer screen? This handy guide explains the ins and outs of using Photoshop Elements without having to spend hours staring at the screen. Using a fabulous combination of easy-to-follow advice and step-by-step instructions, Focus On Photoshop Elements gives great advice on setting up, storing and sharing your image library and teaches you the basics of RAW image processing and color correction, plus shows you how to edit…

  7. Digitally quantifying cerebral hemorrhage using Photoshop and Image J.

    Science.gov (United States)

    Tang, Xian Nan; Berman, Ari Ethan; Swanson, Raymond Alan; Yenari, Midori Anne

    2010-07-15

    A spectrophotometric hemoglobin assay is widely used to estimate the extent of brain hemorrhage by measuring the amount of hemoglobin in the brain. However, this method requires using the entire brain sample, leaving none for histology or other assays. Other widely used measures of gross brain hemorrhage are generally semi-quantitative and can miss subtle differences. Semi-quantitative brain hemorrhage scales may also be subject to bias. Here, we present a method to digitally quantify brain hemorrhage using Photoshop and Image J, and compare this method to the spectrophotometric hemoglobin assay. Male Sprague-Dawley rats received varying amounts of autologous blood injected into the cerebral hemispheres in order to generate different sized hematomas. 24 h later, the brains were harvested, sectioned, photographed, and then prepared for the hemoglobin assay. From the brain section photographs, pixels containing hemorrhage were identified in Photoshop and their optical intensity was measured with Image J. Identification of hemorrhage size using optical intensities correlated strongly with the hemoglobin assay (R = 0.94). We conclude that our method can accurately quantify the extent of hemorrhage. An advantage of this technique is that brain tissue can be used for additional studies. Published by Elsevier B.V.
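
    In outline, the two-step method is a color-based pixel selection (the Photoshop step) followed by an intensity integration over the selected pixels (the Image J step). Below is a minimal stand-in, with an assumed red-dominance rule and file name rather than the authors' exact selection criteria.

    ```python
    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("brain_section.jpg").convert("RGB"), dtype=float)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]

    # Hemorrhage pixels: red-dominant and reasonably bright in red (assumed rule).
    hemorrhage = (r > 100) & (r > 1.4 * g) & (r > 1.4 * b)
    area_px = int(hemorrhage.sum())

    # Optical intensity: darker pixels weigh more; integrate over the mask.
    intensity = (255.0 - img.mean(axis=-1))[hemorrhage].sum()
    print(f"hemorrhage area: {area_px} px, integrated optical intensity: {intensity:.0f}")
    ```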

  8. Optimising the measurement of bruises in children across conventional and cross polarized images using segmentation analysis techniques in Image J, Photoshop and circle diameter measurements.

    Science.gov (United States)

    Harris, C; Alcock, A; Trefan, L; Nuttall, D; Evans, S T; Maguire, S; Kemp, A M

    2018-02-01

    Bruising is a common abusive injury in children, and it is standard practice to image and measure bruises, yet there is no current standard for measuring bruise size consistently. We aim to identify the optimal method of measuring photographic images of bruises, including computerised measurement techniques. 24 children aged … were recruited; their bruises were photographed with conventional and cross-polarized imaging and measured in vivo and with computerised techniques, including Image J segmentation analysis (maximum Feret diameter), circle diameter measurements, and Photoshop 'ruler' software (Photoshop diameter). Inter- and intra-observer effects were determined by two individuals repeating 11 electronic measurements, and the relevant intraclass correlation coefficients (ICCs) were used to establish reliability. Spearman's rank correlation was used to compare in vivo with computerised measurements; a comparison of measurement techniques across imaging modalities was conducted using Kolmogorov-Smirnov tests. Significance was set at p < 0.05. Correlations were > 0.5 for all techniques, with maximum Feret diameter and maximum Photoshop diameter on conventional images having the strongest correlation with in vivo measurements. There were significant differences between in vivo and computer-aided measurements, but none between different computer-aided measurement techniques. Overall, computer-aided measurements appeared larger than in vivo. Inter- and intra-observer agreement was high for all maximum diameter measurements (ICCs > 0.7). Whilst there are minimal differences between measurements of images obtained, the most consistent results were obtained when conventional images, segmented by Image J software, were measured with a Feret diameter. This is therefore proposed as a standard for future research and forensic practice, with the proviso that all computer-aided measurements appear larger than in vivo. Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
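
    The maximum Feret diameter used above is the largest distance between any two points of the segmented bruise outline. A small sketch follows (mask and scale invented; real masks would come from Image J-style segmentation).

    ```python
    import numpy as np
    from scipy.spatial import ConvexHull
    from scipy.spatial.distance import pdist

    def max_feret(mask: np.ndarray) -> float:
        """Largest pairwise distance between mask pixels (the diameter endpoints
        always lie on the convex hull, so the brute force stays small)."""
        ys, xs = np.nonzero(mask)
        pts = np.column_stack([xs, ys]).astype(float)
        hull = pts[ConvexHull(pts).vertices]
        return pdist(hull).max()

    mask = np.zeros((200, 200), dtype=bool)
    mask[60:140, 50:160] = True                      # stand-in for a segmented bruise
    print(f"max Feret diameter: {max_feret(mask):.1f} px")  # ~ diagonal of the block
    ```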

  9. Adobe Photoshop CS6 for photographers

    CERN Document Server

    Evening, Martin

    2012-01-01

    Renowned photographer and Photoshop hall-of-famer Martin Evening returns with his comprehensive guide to Photoshop. This acclaimed work covers everything from the core aspects of working in Photoshop to advanced techniques for refined workflows and professional results. Using concise advice, clear instruction and real world examples, this essential guide will give you the skills, regardless of your experience, to create professional quality results. A robust accompanying website features sample images, tutorial videos, bonus chapters and a plethora of extra resources. Quite simply, this is…

  10. A novel method for measuring anterior segment area of the eye on ultrasound biomicroscopic images using photoshop.

    Directory of Open Access Journals (Sweden)

    Zhonghao Wang

    Full Text Available To describe a novel method for quantitative measurement of area parameters in ocular anterior segment ultrasound biomicroscopy (UBM) images using Photoshop software and to assess its intraobserver and interobserver reproducibility. Twenty healthy volunteers with wide angles and twenty patients with narrow or closed angles were consecutively recruited. UBM images were obtained and analyzed using Photoshop software by two physicians with different levels of training on two occasions. Borders of anterior segment structures including cornea, iris, lens, and zonules in the UBM image were semi-automatically defined by the Magnetic Lasso Tool in the Photoshop software according to the pixel contrast and modified by the observers. Anterior chamber area (ACA), posterior chamber area (PCA), iris cross-section area (ICA) and angle recess area (ARA) were drawn and measured. The intraobserver and interobserver reproducibilities of the anterior segment area parameters and scleral spur location were assessed by limits of agreement, coefficient of variation (CV), and intraclass correlation coefficient (ICC). All of the parameters were successfully measured by Photoshop. The intraobserver and interobserver reproducibilities of ACA, PCA, and ICA were good, with no more than 5% CV and more than 0.95 ICC, while the CVs of ARA were within 20%. The intraobserver and interobserver reproducibilities for defining the spur location were more than 0.97 ICCs. Although the operating times for both observers were less than 3 minutes per image, there was a significant difference in the measuring time between the two observers with different levels of training (p<0.001). Measurements of ocular anterior segment areas on UBM images by Photoshop showed good intraobserver and interobserver reproducibilities. The methodology was easy to adopt and effective in measuring.

  11. A novel method for measuring anterior segment area of the eye on ultrasound biomicroscopic images using photoshop.

    Science.gov (United States)

    Wang, Zhonghao; Liang, Xuanwei; Wu, Ziqiang; Lin, Jialiu; Huang, Jingjing

    2015-01-01

    To describe a novel method for quantitative measurement of area parameters in ocular anterior segment ultrasound biomicroscopy (UBM) images using Photoshop software and to assess its intraobserver and interobserver reproducibility. Twenty healthy volunteers with wide angles and twenty patients with narrow or closed angles were consecutively recruited. UBM images were obtained and analyzed using Photoshop software by two physicians with different levels of training on two occasions. Borders of anterior segment structures including cornea, iris, lens, and zonules in the UBM image were semi-automatically defined by the Magnetic Lasso Tool in the Photoshop software according to the pixel contrast and modified by the observers. Anterior chamber area (ACA), posterior chamber area (PCA), iris cross-section area (ICA) and angle recess area (ARA) were drawn and measured. The intraobserver and interobserver reproducibilities of the anterior segment area parameters and scleral spur location were assessed by limits of agreement, coefficient of variation (CV), and intraclass correlation coefficient (ICC). All of the parameters were successfully measured by Photoshop. The intraobserver and interobserver reproducibilities of ACA, PCA, and ICA were good, with no more than 5% CV and more than 0.95 ICC, while the CVs of ARA were within 20%. The intraobserver and interobserver reproducibilities for defining the spur location were more than 0.97 ICCs. Although the operating times for both observers were less than 3 minutes per image, there was a significant difference in the measuring time between the two observers with different levels of training (p < 0.001). Measurements of ocular anterior segment areas on UBM images by Photoshop showed good intraobserver and interobserver reproducibilities. The methodology was easy to adopt and effective in measuring.
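
    Each area parameter ultimately comes from a closed outline, so once the Magnetic Lasso vertices are exported the area is the shoelace formula times the square of the pixel scale. A sketch with an invented vertex list and an assumed calibration of 0.02 mm per pixel:

    ```python
    import numpy as np

    def polygon_area(vertices, mm_per_px=1.0):
        """Shoelace area of a closed polygon, converted to mm^2."""
        x, y = np.asarray(vertices, float).T
        px_area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
        return px_area * mm_per_px ** 2

    # Hypothetical anterior chamber outline in pixel coordinates.
    anterior_chamber = [(120, 80), (400, 70), (430, 210), (260, 260), (95, 215)]
    print(f"ACA ~ {polygon_area(anterior_chamber, mm_per_px=0.02):.3f} mm^2")
    ```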

  12. Photoshop Elements 10 All-in-One For Dummies

    CERN Document Server

    Obermeier, Barbara

    2011-01-01

    Create your photo vision with the latest version of Photoshop Elements. Photoshop Elements is the top-selling consumer photo editing software, and Adobe continues to add innovative features that allow digital photo enthusiasts to do it all. This value-packed reference combines nine content-rich minibooks in one complete package. User-friendly and detailed, it covers the key features and tools that beginner and experienced users need to create high-quality images for print, e-mail, and the web using the latest release of Photoshop Elements - Photoshop Elements 10. Presented in full color, this reference…

  13. Adobe Photoshop CS6 bible

    CERN Document Server

    Dayley, Brad

    2012-01-01

    The comprehensive, soup-to-nuts guide to Photoshop, fully updated. Photoshop CS6, used for both print and digital media, is the industry leader in image-editing software. The newest version adds some exciting new features, and this bestselling guide has been revised to cover each of them, along with all the basic information you need to get started. Learn to use all the tools, including the histogram palette, Lens Blur, Match Color, and the color replacement tool, as well as keyboard shortcuts. Then master retouching and color correction, work with Camera Raw images, prepare photos for print…

  14. Dielectric barrier discharge image processing by Photoshop

    Science.gov (United States)

    Dong, Lifang; Li, Xuechen; Yin, Zengqian; Zhang, Qingli

    2001-09-01

    In this paper, the filamentary pattern of a dielectric barrier discharge has been processed using Photoshop, and the coordinates of each filament can be obtained. Two different Photoshop-based methods have been used to analyze the spatial order of the pattern formation in the dielectric barrier discharge. The results show that the distance between neighboring filaments at U = 14 kV and d = 0.9 mm is about 1.8 mm. Within the experimental error, the results from the two different methods are similar.
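
    Once the filament coordinates have been extracted, the reported neighbor spacing is a nearest-neighbor statistic. A sketch with invented coordinates (the paper's mean spacing at U = 14 kV, d = 0.9 mm is about 1.8 mm):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    # Filament centers in mm, e.g. exported from the Photoshop coordinate readout.
    filaments_mm = np.array([[0.0, 0.0], [1.8, 0.1], [0.9, 1.6],
                             [2.7, 1.7], [1.8, 3.2], [0.0, 3.3]])

    tree = cKDTree(filaments_mm)
    d, _ = tree.query(filaments_mm, k=2)      # column 0 is each point itself
    print(f"mean nearest-neighbor spacing: {d[:, 1].mean():.2f} mm")
    ```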

  15. Application of photoshop-based image analysis to quantification of hormone receptor expression in breast cancer.

    Science.gov (United States)

    Lehr, H A; Mankoff, D A; Corwin, D; Santeusanio, G; Gown, A M

    1997-11-01

    The benefit of quantifying estrogen receptor (ER) and progesterone receptor (PR) expression in breast cancer is well established. However, in routine breast cancer diagnosis, receptor expression is often quantified in arbitrary scores with high inter- and intraobserver variability. In this study we tested the validity of an image analysis system employing inexpensive, commercially available computer software on a personal computer. In a series of 28 invasive ductal breast cancers, immunohistochemical determinations of ER and PR were performed, along with biochemical analyses on fresh tumor homogenates, by the dextran-coated charcoal technique (DCC) and by enzyme immunoassay (EIA). From each immunohistochemical slide, three representative tumor fields (×20 objective) were captured and digitized with a Macintosh personal computer. Using the tools of Photoshop software, optical density plots of tumor cell nuclei were generated and, after background subtraction, were used as an index of immunostaining intensity. This immunostaining index showed a strong semilogarithmic correlation with biochemical receptor assessments of ER (DCC, r = 0.70, p < 0.001; EIA, r = 0.76, p < 0.001) and even better of PR (DCC, r = 0.86, p < 0.01; EIA, r = 0.80, p < 0.001). A strong linear correlation of ER and PR quantification was also seen between DCC and EIA techniques (ER, r = 0.62, p < 0.001; PR, r = 0.92, p < 0.001). This study demonstrates that a simple, inexpensive, commercially available software program can be accurately applied to the quantification of immunohistochemical hormone receptor studies.
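
    The immunostaining index described here is, in essence, the mean optical density of nuclear pixels minus a background level. A sketch using OD = -log10(I/255) per pixel, with the masks reduced to assumed gray-level thresholds instead of manually selected nuclei:

    ```python
    import numpy as np
    from PIL import Image

    gray = np.asarray(Image.open("er_stain_field.tif").convert("L"), dtype=float)
    od = -np.log10(np.clip(gray, 1, 255) / 255.0)     # optical density per pixel

    nuclei_mask = gray < 100                          # stained nuclei (assumed threshold)
    background_mask = gray > 200                      # clear stroma / background

    index = od[nuclei_mask].mean() - od[background_mask].mean()
    print(f"immunostaining index (mean OD above background): {index:.3f}")
    ```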

  16. Teach yourself visually Adobe Photoshop CS6

    CERN Document Server

    Wooldridge, Mike

    2012-01-01

    Gets visual learners up to speed on the newest enhancements in Photoshop Photoshop is constantly evolving, and the newest version offers great new tools for photographers. This popular guide gets visual learners up to speed quickly; previous editions have sold more than 150,000 copies. With colorful screen shots illustrating the step-by-step instructions, this book is perfect for Photoshop newcomers and for visual learners who are upgrading from an earlier version. It covers setting up the software, importing images from the camera, using all the tools, creating an online gallery, and more. C

  17. Application of Photoshop and Scion Image analysis to quantification of signals in histochemistry, immunocytochemistry and hybridocytochemistry.

    Science.gov (United States)

    Tolivia, Jorge; Navarro, Ana; del Valle, Eva; Perez, Cristina; Ordoñez, Cristina; Martínez, Eva

    2006-02-01

    To describe a simple method to achieve the differential selection and subsequent quantification of signal strength using only one section. Several methods for performing quantitative histochemistry, immunocytochemistry or hybridocytochemistry, without use of specific commercial image analysis systems, rely on pixel-counting algorithms, which do not provide information on the amount of chromogen present in the section. Other techniques use complex algorithms to calculate the cumulative signal strength using two consecutive sections. To separate the chromogen signal we used the "Color range" option of the Adobe Photoshop program, which provides a specific file for a particular chromogen selection that can be applied to similar sections. The measurement of the signal strength of the specific staining is achieved with the Scion Image software program. The method described in this paper can also be applied to simultaneous detection of different signals on the same section, or to different parameters (area of particles, number of particles, etc.) when the "Analyze particles" tool of the Scion program is used.

  18. Software Aids for radiologists: Part 1, Useful Photoshop skills.

    Science.gov (United States)

    Gross, Joel A; Thapa, Mahesh M

    2012-12-01

    The purpose of this review is to describe the use of several essential techniques and tools in Adobe Photoshop image-editing software. The techniques shown expand on those previously described in the radiologic literature. Radiologists, especially those with minimal experience with image-editing software, can quickly apply a few essential Photoshop tools to minimize the frustration that can result from attempting to navigate a complex user interface.

  19. Focus On Adobe Photoshop Focus on the Fundamentals

    CERN Document Server

    Hilz, Corey

    2011-01-01

    This no-nonsense, highly affordable, and inspiring guide walks photographers new to Photoshop through the end to end Photoshop workflow. Starting from the moment you download your images off your memory card, photographer Corey Hilz guides you through importing and organizing your photos in Bridge, demonstrating how to give each photo ratings and keywords to make searching through your photos a snap. He then details the basics of editing photos in both Camera Raw and Photoshop, including how to correct exposure, make color and tonal adjustments, retouch flaws and imperfections, and much more.

  20. Image editing with Adobe Photoshop 6.0.

    Science.gov (United States)

    Caruso, Ronald D; Postel, Gregory C

    2002-01-01

    The authors introduce Photoshop 6.0 for radiologists and demonstrate basic techniques of editing gray-scale cross-sectional images intended for publication and for incorporation into computerized presentations. For basic editing of gray-scale cross-sectional images, the Tools palette and the History/Actions palette pair should be displayed. The History palette may be used to undo a step or series of steps. The Actions palette is a menu of user-defined macros that save time by automating an action or series of actions. Converting an image to 8-bit gray scale is the first editing function. Cropping is the next action. Both decrease file size. Use of the smallest file size necessary for the purpose at hand is recommended. Final file size for gray-scale cross-sectional neuroradiologic images (8-bit, single-layer TIFF [tagged image file format] at 300 pixels per inch) intended for publication varies from about 700 Kbytes to 3 Mbytes. Final file size for incorporation into computerized presentations is about 10-100 Kbytes (8-bit, single-layer, gray-scale, high-quality JPEG [Joint Photographic Experts Group]), depending on source and intended use. Editing and annotating images before they are inserted into presentation software is highly recommended, both for convenience and flexibility. Radiologists should find that image editing can be carried out very rapidly once the basic steps are learned and automated. Copyright RSNA, 2002
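
    The editing sequence the authors recommend (convert to 8-bit grayscale, crop, then save one file per destination) can be scripted as well. A sketch with an invented file name and crop box, using a 300 ppi TIFF for print and a high-quality JPEG for slides, following the article's size guidance:

    ```python
    from PIL import Image

    img = Image.open("axial_ct.tif").convert("L")      # 8-bit grayscale
    img = img.crop((40, 40, 552, 552))                 # trim to the region of interest

    img.save("figure_print.tif", dpi=(300, 300))       # publication: TIFF at 300 ppi
    img.save("slide.jpg", quality=90)                  # presentation: high-quality JPEG
    ```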

  1. Techniques on semiautomatic segmentation using the Adobe Photoshop

    Science.gov (United States)

    Park, Jin Seo; Chung, Min Suk; Hwang, Sung Bae

    2005-04-01

    The purpose of this research is to enable anybody to semiautomatically segment the anatomical structures in MRIs, CTs, and other medical images on a personal computer. The segmented images are used for making three-dimensional images, which are helpful in medical education and research. To achieve this purpose, the following trials were performed. The entire body of a volunteer was MR scanned to make 557 MRIs, which were transferred to a personal computer. On Adobe Photoshop, contours of 19 anatomical structures in the MRIs were semiautomatically drawn using the MAGNETIC LASSO TOOL and successively corrected manually using either the LASSO TOOL or the DIRECT SELECTION TOOL to make 557 segmented images. In a like manner, 11 anatomical structures in the 8,500 anatomical images were segmented. Also, 12 brain and 10 heart anatomical structures in anatomical images were segmented. Proper segmentation was verified by making and examining the coronal, sagittal, and three-dimensional images from the segmented images. During semiautomatic segmentation on Adobe Photoshop, a suitable algorithm could be used, the extent of automation could be regulated, a convenient user interface could be used, and software bugs rarely occurred. The techniques of semiautomatic segmentation using Adobe Photoshop are expected to be widely used for segmentation of the anatomical structures in various medical images.

  2. Contrast enhancement of bite mark images using the grayscale mixer in ACR in Photoshop®.

    Science.gov (United States)

    Evans, Sam; Noorbhai, Suzanne; Lawson, Zoe; Stacey-Jones, Seren; Carabott, Romina

    2013-05-01

    Enhanced images may improve bite mark edge definition, assisting forensic analysis. Current contrast enhancement involves color extraction, with layered images viewed by channel. A novel technique producing a single enhanced image using the grayscale mix panel within Adobe Camera Raw® has been developed and assessed here, allowing adjustment of multiple color channels simultaneously. Stage 1 measured RGB values in 72 versions of a color chart image; the eight sliders in Photoshop® were adjusted at 25% intervals, and all corresponding colors were affected. Stage 2 used a bite mark image and found that only the red, orange, and yellow sliders had discernible effects. Stage 3 assessed modality preference between color, grayscale, and enhanced images; on average, the 22 survey participants chose the enhanced image as better defined for nine out of 10 bite marks. The study has shown potential benefits of this new technique. However, further research is needed before its use in the analysis of bite marks. © 2013 American Academy of Forensic Sciences.
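
    Conceptually, a grayscale mixer is a per-channel weighted conversion to a single gray image. The sketch below is a simplified three-channel analogue of ACR's eight hue-based sliders, with the weights (boosted red) chosen only for illustration:

    ```python
    import numpy as np
    from PIL import Image

    rgb = np.asarray(Image.open("bitemark.jpg").convert("RGB"), dtype=float)

    # Mixer "sliders": per-channel weights, normalized to sum to 1.
    w_red, w_green, w_blue = 0.8, 0.15, 0.05

    gray = w_red * rgb[..., 0] + w_green * rgb[..., 1] + w_blue * rgb[..., 2]
    Image.fromarray(np.clip(gray, 0, 255).astype(np.uint8)).save("bitemark_enhanced.png")
    ```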

  3. The Adobe Photoshop layers book

    CERN Document Server

    Lynch, Richard

    2011-01-01

    Layers are the building blocks for working in Photoshop. With the correct use of the Layers Tool, you can edit individual components of your images nondestructively to ensure that your end result is a combination of the best parts of your work. Despite how important it is for successful Photoshop work, the Layers Tool is one of the most often misused and misunderstood features within this powerful software program. This book will show you absolutely everything you need to know to work with layers, including how to use masks, blending, modes and layer management. You'll learn professional techniques…

  4. GrinLine identification using digital imaging and Adobe Photoshop.

    Science.gov (United States)

    Bollinger, Susan A; Brumit, Paula C; Schrader, Bruce A; Senn, David R

    2009-03-01

    The purpose of this study was to outline a method by which an antemortem photograph of a victim can be critically compared with a postmortem photograph in an effort to facilitate the identification process. Ten subjects, between 27 and 55 years old, provided historical pictures of themselves exhibiting a broad smile showing anterior teeth to some extent (a grin). These photos were termed "antemortem" for the purpose of the study. A digital camera was used to take a current photo of each subject's grin. These photos represented the "postmortem" images. A single subject's "postmortem" photo set was randomly selected to be the "unknown victim." The combined data of the unknown and the 10 antemortem subjects were digitally stored and, using Adobe Photoshop software, the images were sized and oriented for comparative analysis. The goal was to devise a technique that could facilitate the accurate determination of which "antemortem" subject was the "unknown." The generation of antemortem digital overlays of the teeth visible in a grin and the comparison of those overlays to the images of the postmortem dentition is the foundation of the technique. The comparisons made using the GrinLine Identification Technique may assist medical examiners and coroners in making identifications or exclusions.

  5. Evaluation of chronic periapical lesions by digital subtraction radiography by using Adobe Photoshop CS: a technical report.

    Science.gov (United States)

    Carvalho, Fabiola B; Gonçalves, Marcelo; Tanomaru-Filho, Mário

    2007-04-01

    The purpose of this study was to describe a new technique using Adobe Photoshop CS (San Jose, CA) image-analysis software to evaluate the radiographic changes of chronic periapical lesions after root canal treatment by digital subtraction radiography. Thirteen upper anterior human teeth with pulp necrosis and a radiographic image of a chronic periapical lesion were endodontically treated and radiographed 0, 2, 4, and 6 months after root canal treatment using a film holder. The radiographic films were automatically developed and digitized. The radiographic images taken 0, 2, 4, and 6 months after root canal therapy were submitted to digital subtraction in pairs (0 and 2 months, 2 and 4 months, and 4 and 6 months) by choosing the "Image," "Calculations," "Subtract," and "New Document" commands from the Adobe Photoshop CS menus. The resulting images showed areas of periapical healing in all cases. According to this methodology, the healing or expansion of periapical lesions can be evaluated by means of digital subtraction radiography using Adobe Photoshop CS software.
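
    Digital subtraction radiography reduces to subtracting the aligned baseline film from the follow-up and offsetting the result to mid-gray, so that unchanged anatomy stays neutral and periapical change shows as lighter or darker regions. A numpy sketch of that step, mirroring Photoshop's Calculations Subtract with offset 128 and scale 1 (file names assumed):

    ```python
    import numpy as np
    from PIL import Image

    # Aligned, digitized films from two recall visits (hypothetical file names).
    baseline = np.asarray(Image.open("month0.png").convert("L"), dtype=np.int16)
    followup = np.asarray(Image.open("month2.png").convert("L"), dtype=np.int16)

    # Difference offset to mid-gray: 128 = no change, lighter/darker = bone change.
    diff = np.clip(followup - baseline + 128, 0, 255).astype(np.uint8)
    Image.fromarray(diff).save("subtraction_0_vs_2.png")
    ```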

  6. Adobe Photoshop CS6 digital classroom

    CERN Document Server

    Smith, Jennifer

    2012-01-01

    A complete training package on the newest version of Photoshop! The Digital Classroom series combines a full-color book with a full-featured DVD, resulting in a complete training package written by expert instructors. Photoshop is the industry standard for image editing, and this guide gets photographers, commercial designers, web developers, fine artists, and serious hobbyists up to speed on the newest version. It includes 13 self-paced lessons that allow you to progress at your own speed, with complete lesson files and tutorials on the DVD. Topics include Camera RAW, masks and layers…

  7. Geometrical verification system using Adobe Photoshop in radiotherapy.

    Science.gov (United States)

    Ishiyama, Hiromichi; Suzuki, Koji; Niino, Keiji; Hosoya, Takaaki; Hayakawa, Kazushige

    2005-02-01

    Adobe Photoshop is used worldwide and is useful for comparing portal films with simulation films. It is possible to scan images and then view them simultaneously with this software. The purpose of this study was to assess the accuracy of a geometrical verification system using Adobe Photoshop. We prepared the following two conditions for verification. Under one condition, films were hung on light boxes, and examiners measured distances between the isocenter on simulation films and that on portal films by adjusting the bony structures. Under the other condition, films were scanned into a computer and displayed using Adobe Photoshop, and examiners measured distances between the isocenter on simulation films and those on portal films by adjusting the bony structures. To obtain control data, lead balls were used as fiducial points for matching the films accurately. The errors, defined as the differences between the control data and the measurement data, were assessed. Errors of the data obtained using Adobe Photoshop were significantly smaller than those of the data obtained from films on light boxes (p < 0.05). This method of verification is available on any PC with Adobe Photoshop and is useful for improving the accuracy of verification.

  8. Photoshop Elements 10 Top 100 Simplified Tips and Tricks

    CERN Document Server

    Sheppard, Rob

    2011-01-01

    A visual guide to getting the most out of Photoshop Elements 10. If you understand the basics of Photoshop Elements, you'll love this collection of 100 must-know tips and tricks. Two-page tutorials, full-color screen shots, and step-by-step instructions make it easy to see and follow the directions, helping you to get the very most from this top-selling image-editing software. This guide catches you up on Photoshop Elements 10, covers features you may not have known about, and alerts you to a slew of cool effects and techniques. Explains techniques, best practices, and creative ways to transform…

  9. Photoshop tips and tricks every facial plastic surgeon should know.

    Science.gov (United States)

    Hamilton, Grant S

    2010-05-01

    Postprocessing of patient photographs is an important skill for the facial plastic surgeon. Postprocessing is intended to optimize the image, not change the surgical result. This article refers to use of Photoshop CS3 (Adobe Systems Incorporated, San Jose, CA, USA) for descriptions, but any recent version of Photoshop is sufficiently similar. Topics covered are types of camera, shooting formats, color balance, alignment of preoperative and postoperative photographs, and preparing figures for publication. Each section presents step-by-step guidance and instructions along with a graphic depiction of the computer screen and Photoshop tools under discussion. Copyright 2010 Elsevier Inc. All rights reserved.

  10. Technical report on semiautomatic segmentation using the Adobe Photoshop.

    Science.gov (United States)

    Park, Jin Seo; Chung, Min Suk; Hwang, Sung Bae; Lee, Yong Sook; Har, Dong-Hwan

    2005-12-01

    The purpose of this research is to enable users to semiautomatically segment anatomical structures in magnetic resonance images (MRIs), computed tomography (CT) scans, and other medical images on a personal computer. The segmented images are used for making 3D images, which are helpful in medical education and research. To achieve this purpose, the following trials were performed. The entire body of a volunteer was scanned to make 557 MRIs. In Adobe Photoshop, contours of 19 anatomical structures in the MRIs were semiautomatically drawn using the MAGNETIC LASSO TOOL and manually corrected using either the LASSO TOOL or the DIRECT SELECTION TOOL to make 557 segmented images. In a similar manner, 13 anatomical structures in 8,590 anatomical images were segmented. Proper segmentation was verified by making 3D images from the segmented images. Semiautomatic segmentation using Adobe Photoshop is expected to be widely used for segmentation of anatomical structures in various medical images.

  11. Preprocessing with Photoshop Software on Microscopic Images of A549 Cells in Epithelial-Mesenchymal Transition.

    Science.gov (United States)

    Ren, Zhou-Xin; Yu, Hai-Bin; Shen, Jun-Ling; Li, Ya; Li, Jian-Sheng

    2015-06-01

    To establish a preprocessing method for cell morphometry in microscopic images of A549 cells in epithelial-mesenchymal transition (EMT). Adobe Photoshop CS2 (Adobe Systems, Inc.) was used for preprocessing the images. First, all images were processed for size uniformity and high distinguishability between the cell and background areas. Then, a blank image of the same size was overlaid with a grid, and the cross points of the grid were marked in a distinct color. The blank image was merged with each processed image. In the merged images, the cells containing one or more cross points were chosen, their areas were enclosed and filled with a distinct color, and all areas other than the chosen cellular areas were changed to a single uniform hue. Three observers quantified the roundness of cells in images preprocessed with this method (IPP) or without it (Controls). Furthermore, one observer measured the roundness three times with each of the two methods. The results of IPPs and Controls were compared for repeatability and reproducibility. Compared with the Control method, the IPP method yielded a higher number and a higher percentage of identically chosen cells in an image across the three observers. The relative average deviation values of roundness, whether for three observers or for one observer, were significantly higher in Controls than in IPPs. With preprocessing in Photoshop, the choice of a cell from an image was more objective, regular, and accurate, increasing the reproducibility and repeatability of morphometry of A549 cells in epithelial-to-mesenchymal transition.
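    A sketch of the grid-sampling idea in Python (not the authors' code; it assumes the cells have already been segmented into a binary mask, the step done by hand in Photoshop above, and uses scikit-image for the measurements). Roundness is taken as 4*pi*area/perimeter^2, which is approximately 1 for a circle:

    import numpy as np
    from skimage import measure

    def grid_sampled_roundness(mask, spacing=50):
        """mask: 2-D boolean array, True inside segmented cells."""
        labels = measure.label(mask)
        ys, xs = np.meshgrid(np.arange(0, mask.shape[0], spacing),
                             np.arange(0, mask.shape[1], spacing),
                             indexing="ij")
        chosen = set(labels[ys, xs].ravel()) - {0}  # cells hit by a cross point
        return {r.label: 4 * np.pi * r.area / r.perimeter ** 2
                for r in measure.regionprops(labels)
                if r.label in chosen and r.perimeter > 0}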

  12. Photoshop Elements 5 The Missing Manual

    CERN Document Server

    Brundage, Barbara

    2006-01-01

    Anyone still think that Adobe Photoshop Elements is a toy version of the real thing? As the most popular photo-editing program on the market, Photoshop Elements not only has Photoshop's marvelous powers, but also has capabilities the mothership lacks. Each new version includes more tools designed specifically for today's consumer digital photo enthusiasts. The latest edition, Photoshop Elements 5, solidifies the reputation of this superb and inexpensive product with new scrapbook features, a link to online photo services, and many other improvements. In fact, there's so much to Photoshop Elements…

  13. Adobe Photoshop CS5 for photographers

    CERN Document Server

    Evening, Martin

    2010-01-01

    With the new edition of this proven bestseller, Photoshop users can master the power of Photoshop CS5 with internationally renowned photographer and Photoshop hall-of-famer Martin Evening by their side. In this acclaimed reference work, Martin covers everything from the core aspects of working in Photoshop to advanced techniques for professional results. Subjects covered include organizing a digital workflow, improving creativity, output, automating Photoshop, and using Camera RAW. The style of the book is extremely clear, with real examples, diagrams, illustrations, and step-by-step ex…

  14. The clinical application of Photoshop in image post-processing of no-gap-lower-limb digital photography

    International Nuclear Information System (INIS)

    Zhang Ziqi; Wang Longhua; Feng Min; Gu Jianping; Lu Lingquan; Gui Jianchao; Wang Liming

    2006-01-01

    Objective: To explore the value of Photoshop in the post-processing of digital total lower-limb X-ray photography, so as to obtain a more reasonable and accurate image. Methods: Digital total lower-limb X-ray photography was performed in 34 patients, and the films were printed. The digital images were then converted into a no-gap total lower-limb photograph with Adobe Photoshop CS software and printed on A4 sheets. The Q angles (the angle between the line of the femoral axis and the line through the central points of the femoral head, knee joint, and ankle joint) were measured, and the films and pages were evaluated separately, by radiologists and orthopedists. The Q angles were compared. Results: There were 25 cases of retrograde osteoarthritis of the knee joint in 19 patients (6 patients had bilateral involvement), 15 cases of rheumatoid osteoarthritis in 12 patients, 1 case of malformation, 1 case of traumatic osteoarthritis, and 1 case of TB. Twenty-six of these patients underwent knee joint replacement operations. The Q angle was (6.3±0.8) degrees on films and (6.1±0.3) degrees on sheets; there was no significant difference between the two methods (paired t-test, t=0.022, P>0.5). Conclusion: Photoshop software can readily produce an optimal no-gap total lower-limb photograph satisfying the diagnostic and operative needs of orthopedics. (authors)
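    The Q-angle measurement reduces to the angle between two lines defined by marked points on the stitched image. A sketch (coordinates hypothetical):

    import math

    def angle_between(p1, p2, q1, q2):
        """Angle in degrees between line p1->p2 and line q1->q2."""
        a1 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
        a2 = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
        deg = abs(math.degrees(a1 - a2)) % 180.0
        return min(deg, 180.0 - deg)

    # e.g. femoral axis vs. the line through the femoral head and ankle centres:
    # angle_between((512, 80), (530, 400), (512, 80), (545, 1900))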

  15. Adobe Photoshop CS5 for Photographers The Ultimate Workshop

    CERN Document Server

    Evening, Martin

    2010-01-01

    If you already have a good knowledge of Adobe Photoshop and are looking to advance your skills, Adobe Photoshop CS5 for Photographers: The Ultimate Workshop is the book you've been waiting for.  Renowned photographers Martin Evening and Jeff Schewe impart their Photoshop tips and workflow, showing you how to use a vast array of rarely seen advanced Photoshop techniques.  Whether the subject is serious retouching work, weird and wonderful compositions, or planning a shoot before you've even picked up a camera, you can be sure that the advice is based on years of practical experience.

  16. Photoshop Elements 12 all-in-one for dummies

    CERN Document Server

    Obermeier, Barbara

    2013-01-01

    9 books in 1 Getting Started with ElementsOrganizer FundamentalsImage EssentialsSelectionsPainting, Drawing, and TypingWorking with Layers and MasksFilters, Effects, Styles, and DistortionsRetouching and EnhancingCreating and Sharing with Elements Create extraordinary photos with Photoshop Elements 12 and this friendly guide! These days, we're practically never without a camera at hand - even if it's just a cellphone. Whatever you shoot with, Photoshop Elements can help you make your shots look their best. The nine easy-to-follow minibooks in this guide will help you organize, edit, create, a

  17. How to optimize radiological images captured from digital cameras, using the Adobe Photoshop 6.0 program.

    Science.gov (United States)

    Chalazonitis, A N; Koumarianos, D; Tzovara, J; Chronopoulos, P

    2003-06-01

    Over the past decade, the technology that permits images to be digitized and the reduction in the cost of digital equipment have allowed quick digital transfer of any conventional radiological film. Images can then be transferred to a personal computer, and several software programs are available that can manipulate their digital appearance. In this article, the fundamentals of digital imaging are discussed, as well as the wide variety of optional adjustments that the Adobe Photoshop 6.0 (Adobe Systems, San Jose, CA) program offers to present radiological images with satisfactory digital imaging quality.

  18. The ESA/ESO/NASA Photoshop FITS Liberator 3: Have your say on new features

    Science.gov (United States)

    Nielsen, L. H.; Christensen, L. L.; Hurt, R. L.; Nielsen, K.; Johansen, T.

    2008-06-01

    The popular, free ESA/ESO/NASA Photoshop FITS Liberator image processing software (a plugin for Adobe Photoshop) is about to get simpler, faster and more user-friendly! Here we would like to solicit inputs from the community of users.

  19. The Photoshop CS4 Companion for Photographers

    CERN Document Server

    Story, Derrick

    2009-01-01

    "Derrick shows that Photoshop can be friendly as well as powerful. In part, he does that by focusing photographers on the essential steps of an efficient workflow. With this guide in hand, you'll quickly learn how to leverage Photoshop CS4's features to organize and improve your pictures."-- John Nack, Principal Product Manager, Adobe Photoshop & BridgeMany photographers -- even the pros -- feel overwhelmed by all the editing options Photoshop provides. The Photoshop CS4 Companion for Photographers pares it down to only the tools you'll need most often, and shows you how to use those tools as

  20. Hepatic volumetry with PhotoShop in personal computer.

    Science.gov (United States)

    Lu, Yi; Wu, Zheng; Liu, Chang; Wang, Hao-Hua

    2004-02-01

    A convenient way to determine liver volume, or the volume of a tumor in the liver, is in great demand among hepatobiliary surgeons, since many aspects of clinical work require knowledge of liver volumetry. At present, several methods are used to measure liver volume, such as computed tomography (CT) scans, three-dimensional ultrasound volumetric systems [1], and 3-dimensional sonography [2,3]. However, these often fail to give surgeons the volumetric information they need, and a new way of measuring liver volume that surgeons can operate themselves is urgently required. We therefore devised a new method that uses PhotoShop on a personal computer to measure liver volume. A whole CT film was converted into a high-quality digitized image by digital camera or scanner, and the digitized image was imported into a personal computer as a JPEG file and opened in PhotoShop. After the edge of the area of interest is outlined, its pixel count divided by the pixel count of 1 cm2 yields the actual area in square centimeters. If the section thickness of the CT scan is 1 cm, the sum of the areas of the liver or tumor over all sections is then the volume of the liver or tumor. Comparison of 10 hepatic volumes obtained by this method with those obtained by a GE Prospeed CT scanner showed good correlation between the two groups. The volumes of three right lobes were calculated by this method before lobectomy, and their real volumes were obtained postoperatively with a volumenometer; the variation was limited to 5%. Hepatic volume obtained by PhotoShop is reliable. This method can measure hepatic volume well enough to meet clinical demand, and many parameters such as liver resection rate and graft volume can be derived. The disadvantage of this method is the step of copying pixel values from PhotoShop to Microsoft Excel.
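    The arithmetic of the method is a sum of per-slice areas (pixel count of the selection divided by the pixel count of 1 cm2) multiplied by the slice thickness. A sketch, assuming the traced regions are available as binary masks:

    import numpy as np

    def volume_from_masks(masks, pixels_per_cm2, slice_thickness_cm=1.0):
        """masks: list of 2-D boolean arrays, one traced region per CT slice."""
        areas_cm2 = [m.sum() / pixels_per_cm2 for m in masks]
        return sum(areas_cm2) * slice_thickness_cm  # volume in cm^3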

  1. The application of image processing software: Photoshop in environmental design

    Science.gov (United States)

    Dong, Baohua; Zhang, Chunmi; Zhuo, Chen

    2011-02-01

    In the process of environmental design and creation, the design sketch holds a very important position: it not only illuminates the design's idea and concept but also shows the design's visual effects to the client. In the field of environmental design, computer-aided design has made significant progress. Many types of specialized software for rendering environmental designs and for artistic post-processing have been implemented, and with the use of this software, working efficiency has greatly increased and drawings have become more specific and more specialized. By analyzing the application of Photoshop image processing software in environmental design, and by comparing and contrasting traditional hand drawing with drawing using modern technology, this essay explores the way for computer technology to play a bigger role in environmental design.

  2. A Technique Using Calibrated Photography and Photoshop for Accurate Shade Analysis and Communication.

    Science.gov (United States)

    McLaren, Edward A; Figueira, Johan; Goldstein, Ronald E

    2017-02-01

    This article reviews the critical aspects of controlling the shade-taking environment and discusses various modalities introduced throughout the years to acquire and communicate shade information. Demonstrating a highly calibrated digital photographic technique for capturing shade information, this article shows how to use Photoshop® to standardize images and extract color information from the tooth and shade tab for use by a ceramist for an accurate shade-matching restoration.
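    A sketch of the sampling step (crop boxes and file name hypothetical; the article's own workflow uses Photoshop tools rather than a script): averaging the color of a patch of tooth and of the shade tab from one standardized photograph:

    import numpy as np
    from PIL import Image

    def mean_rgb(path, box):
        """box: (left, upper, right, lower) crop rectangle in pixels."""
        patch = np.asarray(Image.open(path).convert("RGB").crop(box), dtype=float)
        return tuple(patch.reshape(-1, 3).mean(axis=0))

    # tooth = mean_rgb("smile.jpg", (1200, 800, 1300, 900))
    # tab = mean_rgb("smile.jpg", (400, 820, 500, 920))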

  3. Photoshop CS3 RAW Transforming your RAW data into works of art

    CERN Document Server

    Aaland, Mikkel

    2008-01-01

    Because RAW files remain virtually untouched by in-camera processing, working with them has given digital photographers greater flexibility and control during the editing process -- for those who are familiar enough with the format. Camera RAW, the plug-in for Adobe Photoshop CS3, has emerged as one of the best and most familiar tools for editing RAW images, and the best way to master this workflow is with Photoshop CS3 RAW. Award-winning author Mikkel Aaland explores the entire RAW process, from the practical reasons to shoot RAW, to managing the images with the new features of Bridge 2.0…

  4. Photoshop Elements 9 the missing manual

    CERN Document Server

    Brundage, Barbara

    2010-01-01

    Elements 9 offers much of Photoshop's power without the huge price tag. It's an ideal tool for most image-editing buffs -- including scrapbookers, photographers, and aspiring graphic artists. But Elements still doesn't come with a decent manual. This bestselling book will help you get the most out of the program, from the basics to advanced tips for both Windows and Mac. Quickly learn your way around. Customize Elements to suit your working style. Get to work right away. Import, organize, and make quick image fixes with ease. Retouch any image. Learn how to repair and restore your old and damaged…

  5. Photoshop Elements 10 The Missing Manual

    CERN Document Server

    Brundage, Barbara

    2011-01-01

    Elements 10 offers much of Photoshop's power without the huge price tag. It's a great tool for most image-editing buffs -- whether you're a photographer, scrapbooker, or aspiring graphic artist. But Elements still doesn't come with a useful manual. This bestselling book helps you get the most out of the program, from the basics to advanced tips for both Windows and Mac users. The important stuff you need to know: Quickly learn your way around. Customize Elements to suit your working style. Get to work right away. Import, organize, and make quick image fixes with ease. Retouch any image. Learn how…

  6. Photoshop Lightroom 4 FAQs

    CERN Document Server

    Sholik, Stan

    2012-01-01

    Get the answers to 365 of the most commonly asked questions about Lightroom. Photographers who are getting acquainted with Photoshop Lightroom and all its advantages for managing large quantities of images will find this handy book an indispensable resource. Veteran photographer Stan Sholik answers 365 of the most frequently asked questions about the new Lightroom 4 in an informative, practical format, making it easy to find what you're looking for and put the information to use. Sample photos illustrate questions and answers, and a quick-reference guide provides easy access to must-have information…

  7. Using Photoshop with images created by a confocal system.

    Science.gov (United States)

    Sedgewick, Jerry

    2014-01-01

    Many pure colors and grayscale tones that result from confocal imaging are not reproducible on output devices such as printing presses, laptop projectors, and laser-jet printers. Part of the difficulty in predicting which colors and tones will reproduce lies both in the computer display and in the display of unreproducible colors chosen for fluorophores. The use of a grayscale display for confocal channels and a LUT display to show saturated (clipped) tonal values aids visualization in the former instance and image integrity in the latter. Computer monitors used for post-processing, in order to conform the image to the output device, can be placed in darkened rooms, and the display gamma can be set to create darker shadow regions and to control the display of color. These conditions aid the visualization of images so that blacks are set to grayer values that are more amenable to faithful reproduction. Preferences can be set in Photoshop for consistent display of colors, along with other settings that optimize the use of memory. The Info window is opened so that tonal information can be shown via readouts. Images saved as indexed color are converted to grayscale or RGB Color, 16-bit is converted to 8-bit when desired, and colorized images from confocal software are returned to grayscale and re-colorized according to the presented methods so that reproducible colors result. Images may also be sharpened and noise reduced, or more than one image layered to show colocalization according to specific methods. Images are then converted to CMYK (Cyan, Magenta, Yellow, and Black) for subsequent assignment of pigment percentages for printing presses. Changes to single images and to multiple images from image stacks are automated for efficient and consistent image processing. Some additional changes are made to images destined for 3D visualization to better separate regions of interest from the background. Files are returned to image stacks, saved and…

  8. Technobabble: Photoshop 6 Converges Web, Print Photograph-Editing Capabilities.

    Science.gov (United States)

    Communication: Journalism Education Today, 2001

    2001-01-01

    Discusses the newly-released Adobe Photoshop 6, and its use in student publications. Notes its refined text-handling capabilities, a more user-friendly interface, integrated vector functions, easier preparation of Web images, and new and more powerful layer functions. (SR)

  9. Teach yourself visually Photoshop Elements 12

    CERN Document Server

    Wooldridge, Mike

    2013-01-01

    Are you a visual learner? Do you prefer instructions that show you how to do something -- and skip the long-winded explanations? If so, then this book is for you. Open it up and you'll find clear, step-by-step screen shots that show you how to tackle more than 160 Photoshop Elements tasks. Each task-based spread covers a single technique, sure to help you get up and running on Photoshop Elements 12 in no time. You'll learn to: use both the Organizer and Editor; import photos from various sources; enhance lighting and color; restore old photos and add effects; and save, back up, and share photos. Designed f…

  10. Etching and image analysis of the microstructure in marble

    DEFF Research Database (Denmark)

    Alm, Ditte; Brix, Susanne; Howe-Rasmussen, Helle

    2005-01-01

    …of grains exposed on that surface are measured on the microscope images using image analysis in the program Adobe Photoshop 7.0 with Image Processing Toolkit 4.0. The parameters measured by the program on microscope images of thin sections of two marble types are used for calculation of the coefficient…

  11. Preparing Colorful Astronomical Images and Illustrations

    Science.gov (United States)

    Levay, Z. G.; Frattare, L. M.

    2001-12-01

    We present techniques for using mainstream graphics software, specifically Adobe Photoshop and Illustrator, for producing composite color images and illustrations from astronomical data. These techniques have been used with numerous images from the Hubble Space Telescope to produce printed and web-based news, education and public presentation products as well as illustrations for technical publication. While Photoshop is not intended for quantitative analysis of full dynamic range data (as are IRAF or IDL, for example), we have had much success applying Photoshop's numerous, versatile tools to work with scaled images, masks, text and graphics in multiple semi-transparent layers and channels. These features, along with its user-oriented, visual interface, provide convenient tools to produce high-quality, full-color images and graphics for printed and on-line publication and presentation.
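    The core of the layering workflow, assigning three scaled grayscale exposures to the R, G, and B channels, can be sketched as follows (the percentile stretch values are assumptions, not from the paper):

    import numpy as np
    from PIL import Image

    def composite(red_path, green_path, blue_path):
        def stretch(path):
            a = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
            lo, hi = np.percentile(a, (0.5, 99.5))  # clip extreme pixels
            scale = 255.0 / max(hi - lo, 1e-6)
            return np.clip((a - lo) * scale, 0, 255).astype(np.uint8)
        rgb = np.dstack([stretch(p) for p in (red_path, green_path, blue_path)])
        return Image.fromarray(rgb, mode="RGB")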

  12. Ten Steps to Create Virtual Smile Design Templates With Adobe Photoshop® CS6.

    Science.gov (United States)

    Sundar, Manoj Kumar; Chelliah, Venkataraman

    2018-03-01

    Computer design software has become a primary tool for communication among the dentist, patient, and ceramist. Virtual smile design can be carried out using various software programs, most of which use assorted forms of teeth templates that are based on the concept of the "golden proportion." Despite current advances in 3-dimensional imaging and smile designing, many clinicians still employ conventional design methods and analog (ie, man-made) mock-ups in assessing and establishing esthetic makeovers. To simplify virtual smile designing, the teeth templates should be readily available, yet no literature has provided details on how to create these templates. This article explains a technique for creating different forms of teeth templates using Adobe Photoshop® CS6 that can then be used for smile design purposes, either in Photoshop or in Microsoft PowerPoint. Clinically, the various smile design templates created using set proportions in Adobe Photoshop CS6 are a valuable resource in diagnosis, treatment planning, and communicating with patients and ceramists, thus providing a platform for a successful esthetic rehabilitation.

  13. Preparing Colorful Astronomical Images II

    Science.gov (United States)

    Levay, Z. G.; Frattare, L. M.

    2002-12-01

    We present additional techniques for using mainstream graphics software (Adobe Photoshop and Illustrator) to produce composite color images and illustrations from astronomical data. These techniques have been used on numerous images from the Hubble Space Telescope to produce photographic, print and web-based products for news, education and public presentation as well as illustrations for technical publication. We expand on a previous paper to present more detail and additional techniques, taking advantage of new or improved features available in the latest software versions. While Photoshop is not intended for quantitative analysis of full dynamic range data (as are IRAF or IDL, for example), we have had much success applying Photoshop's numerous, versatile tools to work with scaled images, masks, text and graphics in multiple semi-transparent layers and channels.

  14. Photoshop CC top 100 simplified tips and tricks

    CERN Document Server

    Sholik, Stan

    2013-01-01

    Take your Photoshop skill set to the next level with these essential techniques. If you're already familiar with Photoshop basics and are ready to learn some new tips, tricks, and techniques, then this is the book for you! Full-color, step-by-step instructions take you beyond the essentials and show you how to make the most of the newest features of Photoshop CC (Creative Cloud). Beautiful photos will inspire you to experiment with Photoshop's features, and numbered instructions make the techniques easy to learn. Encourages you to expand your skill set with creative, or…

  15. An easy and inexpensive method for quantitative analysis of endothelial damage by using vital dye staining and Adobe Photoshop software.

    Science.gov (United States)

    Saad, Hisham A; Terry, Mark A; Shamie, Neda; Chen, Edwin S; Friend, Daniel F; Holiman, Jeffrey D; Stoeger, Christopher

    2008-08-01

    We developed a simple, practical, and inexpensive technique to analyze areas of endothelial cell loss and/or damage over the entire corneal area after vital dye staining by using a readily available, off-the-shelf, consumer software program, Adobe Photoshop. The purpose of this article is to convey a method of quantifying areas of cell loss and/or damage. Descemet-stripping automated endothelial keratoplasty corneal transplant surgery was performed by using 5 precut corneas on a human cadaver eye. Corneas were removed and stained with trypan blue and alizarin red S and subsequently photographed. Quantitative assessment of endothelial damage was performed by using Adobe Photoshop 7.0 software. The average difference for cell area damage for analyses performed by 1 observer twice was 1.41%. For analyses performed by 2 observers, the average difference was 1.71%. Three masked observers were 100% successful in matching the randomized stained corneas to their randomized processed Adobe images. Vital dye staining of corneal endothelial cells can be combined with Adobe Photoshop software to yield a quantitative assessment of areas of acute endothelial cell loss and/or damage. This described technique holds promise for a more consistent and accurate method to evaluate the surgical trauma to the endothelial cell layer in laboratory models. This method of quantitative analysis can probably be generalized to any area of research that involves areas that are differentiated by color or contrast.
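    A sketch of the counting step (the color thresholds are illustrative assumptions, not the authors' values): the damaged fraction is the share of pixels falling inside the stain's color range, which is what the Photoshop selection provides:

    import numpy as np
    from PIL import Image

    def percent_stained(path):
        rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.int16)
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        stained = (b > r + 20) & (b > g + 20)  # crude trypan-blue criterion
        return 100.0 * stained.sum() / stained.size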

  16. An evaluation of the subtraction photoshop software accuracy to detect minor changes in optical density by radiovisiography

    Directory of Open Access Journals (Sweden)

    Talaeipour AR.

    2004-06-01

    Full Text Available. Statement of Problem: Subtraction is a newly presented radiographic technique to detect minor density changes that are not visible on conventional radiography. Purpose: The aim of this in-vitro study was to evaluate the efficacy of Photoshop subtraction software for detecting minor density changes between two dental images. Materials and Methods: In this research, five dried human mandibles were held in a fixed position while thin aluminium sheets were superimposed on each mandible over the first and second molar regions. A reference image, without an aluminium sheet, was obtained from each mandible; subsequently, a series of 20 images with aluminium sheets ranging from 50 μm to 500 μm was recorded with a radiovisiography (RVG) system. Initial images were subtracted from subsequent ones with the Photoshop subtraction software. The difference in density between the two images at the first and second molar sites was related to the aluminium sheets. The optical density of the aluminium sheets was determined by densitometer. Results: In the present study, 6.6% optical density changes, at a minimum aluminium thickness of 300 μm, could be detected by the Photoshop software. Conclusion: The findings of this study showed that the accuracy of the Photoshop subtraction software was equal to that of conventional subtraction software. Additionally, the accuracy of this software proved suitable for clinical investigation of small localized changes in alveolar bone.

  17. Effect of software manipulation (Photoshop) of digitised retinal images on the grading of diabetic retinopathy.

    Science.gov (United States)

    George, L D; Lusty, J; Owens, D R; Ollerton, R L

    1999-08-01

    To determine whether software processing of digitised retinal images using a "sharpen" filter improves the ability to grade diabetic retinopathy. 150 macula-centred retinal images were taken as 35 mm colour transparencies representing a spectrum of diabetic retinopathy, digitised, and graded in random order before and after the application of a sharpen filter (Adobe Photoshop). Digital enhancement of contrast and brightness was performed, and a ×2 digital zoom was utilised. The grades from the unenhanced and enhanced digitised images were compared with the same retinal fields viewed as slides. Overall agreement in retinopathy grade from the digitised images improved from 83.3% (125/150) to 94.0% (141/150), with sight-threatening diabetic retinopathy (STDR) correctly identified in 95.5% (84/88) and 98.9% (87/88) of cases when using unenhanced and enhanced images respectively. In total, five images were overgraded and four undergraded from the enhanced images, compared with 17 and eight images respectively when using unenhanced images. This study demonstrates that the already good agreement in grading performance can be further improved by software manipulation or processing of digitised retinal images.
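    The enhancement pipeline described above (sharpen, then contrast and brightness) has a direct analogue in Pillow; the parameter values below are assumptions, not those of the study:

    from PIL import Image, ImageEnhance, ImageFilter

    def enhance_retinal_image(path):
        img = Image.open(path)
        img = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150))
        img = ImageEnhance.Contrast(img).enhance(1.2)     # +20% contrast
        img = ImageEnhance.Brightness(img).enhance(1.05)  # +5% brightness
        return img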

  18. [Use of Adobe Photoshop software in medical criminology].

    Science.gov (United States)

    Nikitin, S A; Demidov, I V

    2000-01-01

    Describes the method of comparative analysis of various objects in practical medical criminology and making of high-quality photographs with the use of Adobe Photoshop software. Options of the software needed for expert evaluations are enumerated.

  19. Appling Andragogy Theory in Photoshop Training Programs

    Science.gov (United States)

    Alajlan, Abdulrahman Saad

    2015-01-01

    Andragogy is a strategy for teaching adults that can be applied to Photoshop training. Photoshop workshops are frequented by adult learners, and thus andragogical models for instruction would be extremely helpful for prospective trainers looking to improve their classroom designs. Adult learners are much different than child learners, given the…

  20. An Introductory Adobe Photoshop Tutorial Built with Adobe Flash CS3

    OpenAIRE

    Mayoka, Rio

    2011-01-01

    This study aims to build an application that can serve as a learning aid for Adobe Photoshop, presenting several basic introductory Adobe Photoshop topics. The application realizes the idea of an interactive animated tutorial. It was built with Adobe Flash CS3 and can be run with Flash Player. The application can help its users understand the basics of Adobe Photoshop, especially the tools in Adob…

  1. An evaluation on the accuracy of the indirect digital images densitometry by modified Photoshop software

    Directory of Open Access Journals (Sweden)

    Bashizadeh Fakhar H.

    2004-06-01

    Full Text Available. Statement of Problem: One of the major goals in most dental research is to measure bone destruction or deposition due to the progression or regression of disease. The failure of the human eye to detect minor radiographic density changes has led to more accurate methods such as optical densitometry and direct or indirect digital densitometry. Purpose: The aim of this study was to determine the accuracy of a newly proposed method of indirect digital densitometry using modified Photoshop software. Materials and Methods: Radiographs of 37 samples of urografin solution at three concentrations (12.5%, 25%, and 37.5%) were taken on no. 2 dental radiographic films and digitized by a scanner. A region of 800×800 pixels was cropped from each image, compressed with the Joint Photographic Experts Group (JPEG) compression algorithm, and saved. These new images were then put into registration with a new algorithm using MATLAB software version 6.1, which assigned each image an average pixel value (between 0 and 255). The association between concentration and the calculated value for each image was tested with regression analysis, and the significance of the differences between calculated values was analyzed by ANOVA; Tukey HSD and Cronbach's alpha were used whenever needed. Results: Regression analysis revealed a significant correlation between concentration and calculated average pixel value (r=0.883). The differences between the average pixel values for different concentrations were significant (P=0.0001). Pixel values showed good intra-sample and intra-group repeatability (Cronbach's alpha: a=99.96%, a=99.68%). Conclusion: This method, owing to its high accuracy, ease of use, and independence from a densitometer, can be considered a suitable alternative to conventional densitometry methods.

  2. The Photoshop Smile Design technique (part 1): digital dental photography.

    Science.gov (United States)

    McLaren, Edward A; Garber, David A; Figueira, Johan

    2013-01-01

    The proliferation of digital photography and imaging devices is enhancing clinicians' ability to visually document patients' intraoral conditions. By understanding the elements of esthetics and learning how to incorporate technology applications into clinical dentistry, clinicians can predictably plan smile design and communicate anticipated results to patients and ceramists alike. This article discusses camera, lens, and flash selection and setup, and how to execute specific types of images using the Adobe Photoshop Smile Design (PSD) technique.

  3. A comparative study of 2 computer-assisted methods of quantifying brightfield microscopy images.

    Science.gov (United States)

    Tse, George H; Marson, Lorna P

    2013-10-01

    Immunohistochemistry continues to be a powerful tool for the detection of antigens. Several commercially available software packages allow image analysis; however, these can be complex, require a relatively high level of computer skills, and can be expensive. We compared 2 commonly available software packages, Adobe Photoshop CS6 and ImageJ, in their ability to quantify percentage positive area after picrosirius red (PSR) staining and 3,3'-diaminobenzidine (DAB) staining. On analysis of DAB-stained B cells in the mouse spleen, with a biotinylated primary rat anti-mouse-B220 antibody, there was no significant difference between converting brightfield microscopy images to binary images to measure black and white pixels using ImageJ and measuring a range of brown pixels with Photoshop (Student t test, P=0.243; correlation r=0.985). When analyzing mouse kidney allografts stained with PSR, Photoshop achieved a greater interquartile range while maintaining a lower 10th percentile value compared with ImageJ. A lower 10th percentile reflects that Photoshop analysis is better at analyzing tissues with low levels of positive pixels, which is particularly relevant for control tissues or negative controls, whereas ImageJ analysis of the same images would yield spuriously high levels of positivity. Furthermore, comparing the 2 methods by Bland-Altman plot revealed that the methodologies did not agree when measuring images with a higher percentage of positive staining, and correlation was poor (r=0.804). We conclude that for computer-assisted analysis of images of DAB-stained tissue there is no difference between using Photoshop or ImageJ. However, for analysis of color images where differentiation into a binary pattern is not easy, such as with PSR, Photoshop is superior at identifying higher levels of positivity while maintaining differentiation of low levels of positive staining.
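    The two strategies compared above can be sketched side by side (thresholds are illustrative assumptions): a brown color range, the Photoshop-style route, versus a global binary threshold, the ImageJ-style route, each reported as percent positive area:

    import numpy as np
    from PIL import Image

    def percent_positive(path):
        rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.int16)
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        brown = (r > g) & (g > b) & (r - b > 30)  # color-range route
        gray = np.asarray(Image.open(path).convert("L"))
        binary = gray < 140                       # binary-threshold route
        return 100.0 * brown.mean(), 100.0 * binary.mean()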

  4. Differentiation between chronic hepatitis and normal liver by grayscale ultrasound tissue quantification using Adobe Photoshop (5.0)

    International Nuclear Information System (INIS)

    Choi, Jong Cheol; Oh, Jong Young; Lim, Jong Uk; Nam, Kyung Jin

    2001-01-01

    To evaluate whether there was any difference in the brightness of echogenicity on grayscale ultrasound imaging between livers with chronic hepatitis and normal livers, using Adobe Photoshop 5.0. Seventy-five patients with pathologically proven chronic hepatitis and twenty normal volunteers were included in this study. The Adobe Photoshop 5.0 histogram was used to measure image brightness. The measured brightness of the liver was divided by that of the kidney, and the resulting ratio was compared between the chronic hepatitis and normal control groups. In addition, the degree of fibrosis was evaluated. The difference in brightness between the normal liver and the liver with chronic hepatitis was statistically significant, but no statistically significant relationship was observed between the brightness of the liver and the degree of fibrosis. Tissue echo quantification using Adobe Photoshop 5.0 may be a helpful diagnostic method for patients with chronic hepatitis.

  5. Face recognition using elastic grid matching through photoshop: A new approach

    Directory of Open Access Journals (Sweden)

    Manavpreet Kaur

    2015-12-01

    Full Text Available. Computing grids promise to be an efficacious, economical, and scalable means of image identification. In this paper, we propose a grid-based face recognition approach employing a general template matching method to solve the time-consuming face recognition problem. A new approach was employed in which a grid was prepared for a specific individual over his photograph using Adobe Photoshop CS5 software. The background was later removed, and the grid prepared by merging layers was used as a template for image matching or comparison. This approach is computationally efficient, has high recognition rates, and is able to identify a person with minimal effort and in a short time, even from photographs taken at different magnifications and from different distances.

  6. Adobe photoshop quantification (PSQ) rather than point-counting: A rapid and precise method for quantifying rock textural data and porosities

    Science.gov (United States)

    Zhang, Xuefeng; Liu, Bo; Wang, Jieqiong; Zhang, Zhe; Shi, Kaibo; Wu, Shuanglin

    2014-08-01

    Commonly used petrological quantification methods are visual estimation, point-counting, and image analysis. In this article, however, an Adobe Photoshop-based analysis method (PSQ) is recommended for quantifying rock textural data and porosities. Adobe Photoshop provides versatile tools for selecting an area of interest, and the pixel count of a selection can be read and used to calculate its area percentage. Adobe Photoshop can therefore be used to rapidly quantify textural components, such as the content of grains, cements, and porosities, including total porosity and porosities of different genetic types. This method was named Adobe Photoshop Quantification (PSQ). The workflow of the PSQ method is introduced using oolitic dolomite samples from the Triassic Feixianguan Formation, northeastern Sichuan Basin, China, as an example. The method was tested by comparison with Folk's and Shvetsov's "standard" diagrams. In both cases, there was close agreement between the "standard" percentages and those determined by the PSQ method, with very small counting and operator errors, small standard deviations, and high confidence levels. The porosities quantified by PSQ were also evaluated against those determined by the whole-rock helium gas expansion method to test for specimen errors. Results show that the porosities quantified by PSQ correlate well with the porosities determined by the conventional helium gas expansion method. The generally small discrepancies (mostly ranging from -3% to 3%) are caused by microporosity, which leads to a systematic underestimation of about 2%, and/or by macroporosity, causing underestimation or overestimation in different cases. Adobe Photoshop can be used to quantify rock textural components and porosities; the method has been shown to be precise and accurate, and it is time-saving compared with the usual methods.

  7. Photoshop CS5 restoration and retouching for digital photographers only

    CERN Document Server

    Fitzgerald, Mark

    2010-01-01

    Adobe Photoshop CS5 Restoration and Retouching For Digital Photographers Only is the complete guide to restoration and retouching. Whether you're new to Photoshop, or if you've been using it for years, you'll learn lots of new tricks that will help put the beauty back into cherished family photos, and turn new photos into frameable works of art. Follow Adobe Certified Photoshop Expert Mark Fitzgerald as he guides you through the restoration and retouching workflows. Begin by learning about basic concepts, such as proper tonal and color adjustment, selections, and masking. Then learn t…

  8. Bodyshop The PhotoShop Retouching Guide for the Face and Body

    CERN Document Server

    Nitzsche, Birgit

    2011-01-01

    The book Photoshop users need to get bodies into shape. This full-color book will show how both the newest and previous versions of Photoshop can be used to retouch and enhance the entire human form. From body contouring to changing hairstyles to adding makeup and fixing nails, this book will be a must-have reference for anyone who uses Photoshop to fix people pictures. The companion DVD includes before-and-after views of all pictures from the book, additional setting files for individual workshops, and trial versions of several Nik Multimedia filters.

  9. Digital quantification of fibrosis in liver biopsy sections: description of a new method by Photoshop software.

    Science.gov (United States)

    Dahab, Gamal M; Kheriza, Mohamed M; El-Beltagi, Hussien M; Fouda, Abdel-Motaal M; El-Din, Osama A Sharaf

    2004-01-01

    The precise quantification of fibrous tissue in liver biopsy sections is extremely important in the classification, diagnosis, and grading of chronic liver disease, as well as in evaluating the response to antifibrotic therapy. Because recently described methods of digital image analysis of fibrosis in liver biopsy sections have major flaws, including the use of outdated image-processing techniques, inadequate precision, and an inability to detect and quantify perisinusoidal fibrosis, we developed a new computerized image analysis technique for liver biopsy sections based on Adobe Photoshop software. We prepared an experimental model of liver fibrosis by treating rats with oral CCl4 for 6 weeks. After staining liver sections with Masson's trichrome, a series of computer operations was performed, including (i) reconstitution of seamless wide-field images from a number of acquired fields of liver sections; (ii) image size and resolution adjustment; (iii) color correction; (iv) digital selection of a specified color range representing all fibrous tissue in the image; and (v) extraction and calculation. This technique is fully computerized, with no manual interference at any step, and thus could be very reliable for objectively quantifying any pattern of fibrosis in liver biopsy sections and for assessing the response to antifibrotic therapy. It could also be a valuable tool in the precise assessment of antifibrotic therapy in other tissues, regardless of the pattern of tissue or fibrosis.

  10. Geometric correction of radiographic images using general purpose image processing program

    International Nuclear Information System (INIS)

    Kim, Eun Kyung; Cheong, Ji Seong; Lee, Sang Hoon

    1994-01-01

    The present study was undertaken to compare images geometrically corrected by general-purpose image processing programs for the Apple Macintosh II computer (NIH Image, Adobe Photoshop) with images standardized by an individualized, custom-fabricated alignment instrument. Two non-standardized periapical films, with an XCP film holder only, were taken at the lower molar region of 19 volunteers. Two standardized periapical films, with a customized XCP film holder carrying impression material on the bite-block, were taken for each person. Geometric correction was performed with Adobe Photoshop and NIH Image: specifically, the arbitrary image rotation function of Adobe Photoshop and the subtraction-with-transparency function of NIH Image were utilized. The standard deviations of grey values of the subtracted images were used to measure image similarity. The average standard deviation of grey values of subtracted images in the standardized group was slightly lower than that of the corrected group; however, the difference was found to be statistically insignificant (p>0.05). It is considered that NIH Image and Adobe Photoshop can be used for correction of non-standardized films taken with an XCP film holder in the lower molar region.
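    The study's two key operations, arbitrary rotation to register one film and subtraction to compare it with another, with the standard deviation of the grey-value difference as the similarity score (lower = more similar), can be sketched as:

    import numpy as np
    from PIL import Image

    def registration_score(reference_path, follow_up_path, angle_deg):
        ref = Image.open(reference_path).convert("L")
        mov = Image.open(follow_up_path).convert("L").rotate(
            angle_deg, resample=Image.BICUBIC, expand=False)
        diff = (np.asarray(ref, dtype=np.int16)
                - np.asarray(mov, dtype=np.int16))
        return float(diff.std())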

  11. Image analysis software versus direct anthropometry for breast measurements.

    Science.gov (United States)

    Quieregatto, Paulo Rogério; Hochman, Bernardo; Furtado, Fabianne; Machado, Aline Fernanda Perez; Sabino Neto, Miguel; Ferreira, Lydia Masako

    2014-10-01

    To compare breast measurements performed using the software packages ImageTool®, AutoCAD®, and Adobe Photoshop® with direct anthropometric measurements. Points were marked on the breasts and arms of 40 volunteer women aged between 18 and 60 years. Connecting the points defined seven linear segments and one angular measurement on each half of the body, plus one medial segment common to both body halves. The volunteers were photographed in a standardized manner. Photogrammetric measurements were performed by three independent observers using the three software packages and compared to direct anthropometric measurements made with calipers and a protractor. Measurements obtained with AutoCAD® were the most reproducible and those made with ImageTool® were the most similar to direct anthropometry, while measurements with Adobe Photoshop® showed the largest differences. Except for angular measurements, significant differences were found between measurements of line segments made using the three software packages and those obtained by direct anthropometry. AutoCAD® provided the highest precision and intermediate accuracy; ImageTool® had the highest accuracy and lowest precision; and Adobe Photoshop® showed intermediate precision and the worst accuracy among the three software packages.

  12. The hen's egg chorioallantoic membrane (HET-CAM) test to predict the ophthalmic irritation potential of a cysteamine-containing gel: Quantification using Photoshop® and ImageJ.

    Science.gov (United States)

    McKenzie, Barbara; Kay, Graeme; Matthews, Kerr H; Knott, Rachel M; Cairns, Donald

    2015-07-25

    A modified hen's egg chorioallantoic membrane (HET-CAM) test has been developed, combining ImageJ analysis with Adobe® Photoshop®. The irritation potential of an ophthalmic medicine can be quantified using this method by monitoring damage to blood vessels. The evaluation of a cysteamine-containing hyaluronate gel is reported. The results demonstrated that the novel gel formulation is non-irritant to the ocular tissues, in line with saline solution (the negative control). In conclusion, this modification of the established HET-CAM test can quantify the damage to minute blood vessels. These results offer the possibility of formulating cysteamine in an ocularly applicable gel. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. How to cheat in Photoshop Elements 12 release your imagination

    CERN Document Server

    Asch, David

    2014-01-01

    Have you ever wanted to summon magical powers? Appear in a graphic novel? Or control the weather and seasons? There's a whole world of opportunity out there for creating fun photomontages, powerful panoramas, and dynamic distortions. How to Cheat in Photoshop Elements 12 starts you at the basics of photomontage with selection techniques, layers, and transformations; leading up to full-length projects for creating magazine covers, fantasy scenes, poster artwork, and much, much more. This book also features: a dedicated website where you can download images and tutorial videos that show you how to ex…

  14. Evaluation of a new electronic preoperative reference marker for toric intraocular lens implantation by two different methods of analysis: Adobe Photoshop versus iTrace.

    Science.gov (United States)

    Farooqui, Javed Hussain; Sharma, Mansi; Koul, Archana; Dutta, Ranjan; Shroff, Noshir Minoo

    2017-01-01

    The aim of this study was to compare two different methods of analysis of preoperative reference marking for toric intraocular lens (IOL) implantation after marking with an electronic marker. The study was conducted at the Cataract and IOL Implantation Service, Shroff Eye Centre, New Delhi, India. Fifty-two eyes of thirty patients planned for toric IOL implantation were included. All patients had preoperative marking performed with an electronic preoperative two-step toric IOL reference marker (ASICO AE-2929), with reference marks placed at the 3- and 9-o'clock positions. The marks were analyzed with two systems. First, slit-lamp photographs were taken and analyzed using Adobe Photoshop (version 7.0). Second, the Tracey iTrace Visual Function Analyzer (version 5.1.1) was used to capture the corneal topography examination, and the positions of the marks were noted. The amount of alignment error was calculated. The mean absolute rotation error was 2.38 ± 1.78° by Photoshop and 2.87 ± 2.03° by iTrace, a difference that was not statistically significant (P = 0.215). A rotation error ≤3° was found in 72.7% of eyes by Photoshop and 61.4% by iTrace (P = 0.359), and ≤5° in 90.9% and 81.8%, respectively (P = 0.344). There was no significant difference in the absolute amount of rotation between eyes analyzed by either method. The difference in reference mark positions when analyzed by the two systems suggests the presence of varying cyclotorsion at different points in time. Both analysis methods showed approximately 3° of alignment error, which could contribute to a 10% loss of the astigmatic correction of a toric IOL; this can be further compounded by intraoperative marking errors and the final placement of the IOL in the bag.

  15. The complete raw workflow guide how to get the most from your raw images in Adobe Camera Raw, Lightroom, Photoshop, and Elements

    CERN Document Server

    Andrews, Philip

    2007-01-01

    One of the most important technologies a photographer can master is shooting and working with raw images. However, figuring out the best way to work with raw files can be confusing and overwhelming. What's the advantage to working in raw? How do you manage, organize, and store raw files? What's the best way to process your files to meet your photographic needs? How do Photoshop, Lightroom, and Adobe Camera Raw work together? Is it possible to keep your photos in the raw format and still enhance them extensively? Philip Andrews answers these questions and more in his all-new essential raw workflow…

  16. An Image Analysis Method for the Precise Selection and Quantitation of Fluorescently Labeled Cellular Constituents

    Science.gov (United States)

    Agley, Chibeza C.; Velloso, Cristiana P.; Lazarus, Norman R.

    2012-01-01

    The accurate measurement of the morphological characteristics of cells with nonuniform conformations presents difficulties. We report here a straightforward method using immunofluorescent staining and the commercially available imaging program Adobe Photoshop, which allows objective and precise information to be gathered on irregularly shaped cells. We have applied this measurement technique to the analysis of human muscle cells and their immunologically marked intracellular constituents, as these cells are prone to adopting a highly branched phenotype in culture. Use of this method can be used to overcome many of the long-standing limitations of conventional approaches for quantifying muscle cell size in vitro. In addition, wider applications of Photoshop as a quantitative and semiquantitative tool in immunocytochemistry are explored. PMID:22511600

  17. Measurement of facial movements with Photoshop software during treatment of facial nerve palsy*

    Science.gov (United States)

    Pourmomeny, Abbas Ali; Zadmehr, Hassan; Hossaini, Mohsen

    2011-01-01

    BACKGROUND: Evaluating the function of the facial nerve is essential in order to determine the influence of various treatment methods. The aim of this study was to evaluate and assess the agreement of a Photoshop scaling system versus the facial grading system (FGS). METHODS: In this semi-experimental study, thirty subjects with facial nerve paralysis were recruited. The evaluation of all patients before and after treatment was performed by FGS and by Photoshop measurements. RESULTS: The mean values of FGS before and after treatment were 35 ± 25 and 67 ± 24, respectively. In the Photoshop assessment, the mean changes of facial expression on the impaired side relative to the normal side, in the rest position and in three main movements of the face, were 3.4 ± 0.55 and 4.04 ± 0.49 millimeters before and after treatment, respectively. Measurement with Photoshop was more objective than the FGS; therefore, it may be recommended to use this method instead. PMID:22973325

  18. Measurement of facial movements with Photoshop software during treatment of facial nerve palsy.

    Science.gov (United States)

    Pourmomeny, Abbas Ali; Zadmehr, Hassan; Hossaini, Mohsen

    2011-10-01

    Evaluating the function of the facial nerve is essential in order to determine the influence of various treatment methods. The aim of this study was to evaluate and assess the agreement of a Photoshop scaling system versus the facial grading system (FGS). In this semi-experimental study, thirty subjects with facial nerve paralysis were recruited. The evaluation of all patients before and after treatment was performed by FGS and by Photoshop measurements. The mean values of FGS before and after treatment were 35 ± 25 and 67 ± 24, respectively. In the Photoshop assessment, the mean changes of facial expression on the impaired side relative to the normal side, in the rest position and in three main movements of the face, were 3.4 ± 0.55 and 4.04 ± 0.49 millimeters before and after treatment, respectively. Measurement with Photoshop was more objective than the FGS; therefore, it may be recommended to use this method instead.

  19. Creating animated GIF files for electronic presentations using Photoshop.

    Science.gov (United States)

    Yam, Chun-Shan; Kruskal, Jonathan; Larson, Michael

    2007-05-01

    Our objective is to present a simple method for converting movie clips to animated GIFs (graphics interchange format) using Photoshop. Although animated GIF is a more reliable format than movie clips (e.g., AVI and QuickTime) for presenting dynamic data sets in PowerPoint presentations, this output format is not available on most radiology workstations. Therefore, many academic radiologists still experience the problem of incompatible codecs and missing file links when trying to show movie clips in their PowerPoint presentations. One way to resolve this issue is to convert the movie clips to animated GIFs. In this article, we provide a simple method for this conversion using Photoshop--a common software application used by radiologists.
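    For readers without Photoshop, the same conversion can be scripted (a sketch; it assumes the movie clip has already been exported as numbered still frames):

    import glob
    from PIL import Image

    frames = [Image.open(f).convert("P", palette=Image.ADAPTIVE)
              for f in sorted(glob.glob("frames/*.png"))]
    frames[0].save("cine_loop.gif", save_all=True, append_images=frames[1:],
                   duration=100, loop=0)  # 100 ms per frame, loop forever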

  20. Morphing images to demonstrate potential surgical outcomes.

    Science.gov (United States)

    Hamilton, Grant S

    2010-05-01

    Morphing patient images to offer some demonstration of the intended surgical outcome can support shared expectations between patient and facial plastic surgeon. As part of the preoperative consultation, showing a patient an image that compares their face before surgery with what is planned after surgery can greatly enhance the surgical experience. This article refers to use of Photoshop CS3 for tutorial descriptions but any recent version of Photoshop is sufficiently similar. Among the topics covered are creating a before-and-after, rhinoplasty imaging, face- and brow-lift imaging, and removing wrinkles. Each section presents a step-by-step tutorial with graphic images demonstrating the computer screen and Photoshop tools. Copyright 2010 Elsevier Inc. All rights reserved.

  1. Photoshop Elements 6 The Missing Manual

    CERN Document Server

    Brundage, Barbara

    2009-01-01

    With Photoshop Elements 6, the most popular photo-editing program on Earth just keeps getting better. It's perfect for scrapbooking, email-ready slideshows, Web galleries, you name it. But knowing what to do and when is tricky. That's why our Missing Manual is the bestselling book on the topic. This fully revised guide explains not only how the tools and commands work, but when to use them.

  2. Reliability and Validity of the Footprint Assessment Method Using Photoshop CS5 Software.

    Science.gov (United States)

    Gutiérrez-Vilahú, Lourdes; Massó-Ortigosa, Núria; Costa-Tutusaus, Lluís; Guerra-Balic, Myriam

    2015-05-01

    Several sophisticated methods of footprint analysis currently exist. However, it is sometimes useful to apply standard, well-validated measurement methods that are quick and easy to use. We sought to assess the reliability and validity of a new method of footprint assessment in a healthy population using Photoshop CS5 software (Adobe Systems Inc, San Jose, California). Forty-two footprints, corresponding to 21 healthy individuals (11 men with a mean ± SD age of 20.45 ± 2.16 years and 10 women with a mean ± SD age of 20.00 ± 1.70 years), were analyzed. Footprints were recorded in a static bipedal standing position using optical podography and digital photography. Three trials were performed for each participant. The Hernández-Corvo, Chippaux-Smirak, and Staheli indices and the Clarke angle were calculated manually and by a computerized method using Photoshop CS5 software. Test-retest was used to determine reliability. Validity was assessed by intraclass correlation coefficient (ICC). The reliability test for all of the indices showed high values (ICC, 0.98-0.99). Moreover, the validity test clearly showed no difference between techniques (ICC, 0.99-1). The reliability and validity of a method to measure, assess, and record the podometric indices using Photoshop CS5 software has been demonstrated. This provides a quick and accurate tool useful for the digital recording of morphostatic foot study parameters and their control.
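
For readers who want the arithmetic behind one of these indices, the sketch below computes the Chippaux-Smirak index (minimum midfoot width as a percentage of maximum forefoot width) from a binarized footprint mask in NumPy; the orientation convention, thresholding, and toe removal are assumptions, and the study itself made these measurements in Photoshop CS5.

```python
# Sketch: Chippaux-Smirak index from a binary footprint mask (toes cropped out).
# Assumes rows run from the metatarsal (forefoot) end of the print to the heel.
import numpy as np

def chippaux_smirak(mask: np.ndarray) -> float:
    widths = mask.sum(axis=1)                    # print width on each row, in pixels
    rows = np.nonzero(widths)[0]                 # rows that contain any footprint
    n = len(rows)
    forefoot = widths[rows[: n // 3]]            # first third: metatarsal region
    midfoot = widths[rows[n // 3 : 2 * n // 3]]  # middle third: arch region
    return 100.0 * midfoot.min() / forefoot.max()
```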

  3. DICOM to print, 35-mm slides, web, and video projector: tutorial using Adobe Photoshop.

    Science.gov (United States)

    Gurney, Jud W

    2002-10-01

    Preparing images for publication has traditionally dealt with film and the photographic process. With picture archiving and communication systems, many departments will no longer produce film. This changes how images are produced for publication. DICOM, the file format for radiographic images, has to be converted and then prepared for traditional publication, 35-mm slides, the newest techniques of video projection, and the World Wide Web. Tagged Image File Format (TIFF) is the common format for traditional print publication, whereas Joint Photographic Experts Group (JPEG) is the current file format for the World Wide Web. Each medium has specific requirements that can be met with a common image-editing program such as Adobe Photoshop (Adobe Systems, San Jose, CA). High-resolution images are required for print, a process that requires interpolation; the Internet, however, requires images with a small file size for rapid transmission. The resolution of each output differs, and the image resolution must be optimized to match the output of the publishing medium.
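
The sketch below mirrors the output-specific preparation described above (interpolated high-resolution TIFF for print, small JPEG for the web) using pydicom and Pillow in Python; the file names, min-max 8-bit scaling, target sizes, and DPI values are illustrative assumptions rather than the article's exact recipe.

```python
# Sketch: one DICOM image prepared for print (TIFF) and for the web (JPEG).
# File names, the 8-bit window scaling, sizes, and DPI are illustrative only.
import numpy as np
import pydicom
from PIL import Image

ds = pydicom.dcmread("chest.dcm")
px = ds.pixel_array.astype(float)
rng = (px.max() - px.min()) or 1.0
im = Image.fromarray(((px - px.min()) * 255.0 / rng).astype(np.uint8))

# Print: interpolate up to higher resolution and embed 300 dpi metadata.
im.resize((im.width * 2, im.height * 2), Image.BICUBIC).save(
    "figure.tif", dpi=(300, 300))

# Web: small pixel dimensions and JPEG compression for fast transmission.
im.resize((800, int(800 * im.height / im.width))).save("figure.jpg", quality=80)
```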

  4. Predictive images of postoperative levator resection outcome using image processing software

    Directory of Open Access Journals (Sweden)

    Mawatari Y

    2016-09-01

    Full Text Available Yuki Mawatari,1 Mikiko Fukushima2 1Igo Ophthalmic Clinic, Kagoshima, 2Department of Ophthalmology, Faculty of Life Science, Kumamoto University, Chuo-ku, Kumamoto, Japan Purpose: This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Methods: Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of levator aponeurosis and Müller’s muscle complex (levator resection). Predictive images were prepared from preoperative photographs using the image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Results: Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) positively responded to the usefulness of processed images to predict postoperative appearance. Conclusion: Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery. Keywords: levator resection, blepharoptosis, image processing, Adobe Photoshop®

  5. Reliability and Validity of the Footprint Assessment Method Using Photoshop CS5 Software in Young People with Down Syndrome.

    Science.gov (United States)

    Gutiérrez-Vilahú, Lourdes; Massó-Ortigosa, Núria; Rey-Abella, Ferran; Costa-Tutusaus, Lluís; Guerra-Balic, Myriam

    2016-05-01

    People with Down syndrome present skeletal abnormalities in their feet that can be analyzed by commonly used gold standard indices (the Hernández-Corvo index, the Chippaux-Smirak index, the Staheli arch index, and the Clarke angle) based on footprint measurements. The use of Photoshop CS5 software (Adobe Systems Software Ireland Ltd, Dublin, Ireland) to measure footprints has been validated in the general population. The present study aimed to assess the reliability and validity of this footprint assessment technique in the population with Down syndrome. Using optical podography and photography, 44 footprints from 22 patients with Down syndrome (11 men [mean ± SD age, 23.82 ± 3.12 years] and 11 women [mean ± SD age, 24.82 ± 6.81 years]) were recorded in a static bipedal standing position. A blinded observer performed the measurements using a validated manual method three times during the 4-month study, with 2 months between measurements. Test-retest was used to check the reliability of the Photoshop CS5 software measurements. Validity and reliability were obtained by intraclass correlation coefficient (ICC). The reliability test for all of the indices showed very good values for the Photoshop CS5 method (ICC, 0.982-0.995). Validity testing also found no differences between the techniques (ICC, 0.988-0.999). The Photoshop CS5 software method is reliable and valid for the study of footprints in young people with Down syndrome.

  6. Adobe Photoshop Elements 11 for photographers

    CERN Document Server

    Andrews, Philip

    2013-01-01

    To coincide with some of the biggest changes in Photoshop Elements for years, Philip Andrews completely revises his bestselling title to include all the new features of this release. See how the new interface works alongside new tools, techniques and workflows to make editing, enhancing and sharing your pictures easier than ever. And as always, he introduces the changed and improved features with colorful illustrations and the clear step-by-step instruction that has made his books the go-to titles for photographers the world over. In this edition Andrews highlights the following ...

  7. How to cheat in Photoshop CC

    CERN Document Server

    Caplin, Steve

    2013-01-01

    Have you ever struggled to make the vision in your mind come to life on your screen? Then this book can help you realise your goal. In this comprehensive revision of the best-selling How To Cheat in Photoshop, photomontage guru Steve Caplin shows you how to get optimum results in minimum time, by cheating your way to success. As a professional digital artist, Steve knows all about creating great work under pressure. In this book he combines detailed step-by-step instructions with invaluable real-world hints, tips, and advice to really let your creativity run wild. Fully updated to ...

  8. Distribution of dendritic cells expressing dendritic cell-specific ICAM-3-grabbing non-integrin (DC-SIGN, CD209): Morphological analysis using a novel Photoshop-aided multiple immunohistochemistry technique.

    Science.gov (United States)

    Masuda, Akihiro; Nishikawa, Toshio

    2014-08-01

    The distribution of dendritic cells (DCs) expressing DC-specific ICAM-3-grabbing non-integrin (DC-SIGN, CD209) and the morphological interaction of DC-SIGN⁺ DCs with other cells, especially B cells, in tonsillar and other lymphoid tissues were investigated by multiple immunohistochemistry (IHC) using the graphics editing program Photoshop, which enabled staining with 4 or more antibodies in formalin-fixed paraffin sections. Images obtained by repetition of conventional IHC using diaminobenzidine color development in a tissue section were processed on Photoshop for multiple staining. DC-SIGN⁺ DCs were present in the area around the lymphoid follicles and formed a DC-SIGN⁺ DC-rich area, and these cells contacted not only T cells, fascin⁺ DCs, and blood vessels but also several subsets of B cells simultaneously, including naïve and memory B cells. DC-SIGN⁺ DCs may play an important role in the regulation of the immune response mediated by not only T cells but also B cells. The multiple IHC method introduced in the present study is a simple and useful method for analyzing details of complex structures. Because this method can be applied to routinely processed paraffin sections with conventional IHC with diaminobenzidine, it can be applied to a wide variety of archival specimens.

  9. An image-based method to measure all-terrain vehicle dimensions for engineering safety purposes.

    Science.gov (United States)

    Jennissen, Charles A; Miller, Nathan S; Tang, Kaiyang; Denning, Gerene M

    2014-04-01

    All-terrain vehicle (ATV) crashes are a serious public health and safety concern. Engineering approaches that address ATV injury prevention are critically needed. Avenues to pursue include evidence-based seat design that decreases risky behaviours, such as carrying passengers and operation of adult-size vehicles by children. The goal of this study was to create and validate an image-based method to measure ATV seat length and placement. Publicly available ATV images were downloaded. Adobe Photoshop was then used to generate a vertical grid through the centre of the vehicle, to define the grid scale using the manufacturer's reported wheelbase, and to determine seat length and placement relative to the front and rear axles using this scale. Images that yielded a difference greater than 5% between the calculated and the manufacturer's reported ATV lengths were excluded from further analysis. For the 77 images that met inclusion criteria, the mean±SD for the difference in calculated versus reported vehicle length was 1.8%±1.2%. The Pearson correlation coefficients for comparing image-based seat lengths determined by two independent measurers (20 models) and image-based lengths versus lengths measured at dealerships (12 models) were 0.95 and 0.96, respectively. The image-based method provides accurate and reproducible results for determining ATV measurements, including seat length and placement. This method greatly expands the number of ATV models that can be studied, and may be generalisable to other motor vehicle types. These measurements can be used to guide engineering approaches that improve ATV safety design.
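
The core of the method is a single calibration step: the manufacturer's reported wheelbase fixes the metres-per-pixel scale, after which any pixel span on the image can be converted to a physical length. A minimal sketch with invented numbers:

```python
# Sketch of the wheelbase calibration described above; all values are invented.
wheelbase_m = 1.27            # manufacturer-reported wheelbase
wheelbase_px = 620.0          # axle-to-axle distance measured on the image
scale = wheelbase_m / wheelbase_px       # metres per pixel

seat_px = 410.0               # seat front-to-rear span measured on the image
print(f"seat length ~ {seat_px * scale:.2f} m")

# Inclusion check used in the study: calculated overall length must be
# within 5% of the manufacturer's reported length.
overall_px, reported_len_m = 1064.0, 2.18
diff = abs(overall_px * scale - reported_len_m) / reported_len_m
print(f"length difference: {diff:.1%} (exclude image if > 5%)")
```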

  10. Semiquantitative analysis of ECM molecules in the different cartilage layers in early and advanced osteoarthritis of the knee joint.

    Science.gov (United States)

    Lahm, Andreas; Kasch, Richard; Mrosek, Eike; Spank, Heiko; Erggelet, Christoph; Esser, Jan; Merk, Harry

    2012-05-01

    The study was conducted to examine the expression of collagen types I and II in the different cartilage layers, in relation to other ECM molecules, during the progression of early osteoarthritic degeneration in human articular cartilage (AC). Quantitative real-time (RT)-PCR and colorimetric techniques were used for calibration of Photoshop-based image analysis in detecting such lesions. Immunohistochemistry and histology were performed on 40 cartilage tissue samples showing mild (ICRS grade 1b) or moderate/advanced (ICRS grade 3a or 3b) osteoarthritis (20 each), compared with 15 healthy biopsies. Furthermore, we quantified our results on the gene expression of collagen types I and II and aggrecan with the help of real-time (RT)-PCR. Proteoglycan content was measured colorimetrically. The digitized images of histology and immunohistochemistry stains were analyzed with Photoshop software. The t-test and Spearman correlation analysis were used for statistical analysis. In the earliest stages of AC deterioration, the loss of collagen type II was associated with the appearance of collagen type I, shown by increasing amounts of collagen type I mRNA. During subsequent stages, a progressive loss of structural integrity was associated with increasing deposition of collagen type I as part of a natural healing response. A decrease of collagen type II is visible especially in the upper fibrillated area of the advanced osteoarthritic samples, which then leads to an overall decrease. Analysis of proteoglycan showed losses of the overall content and a loss of the classical zonal formation. Correlation analysis of the proteoglycan Photoshop measurements with the RT-PCR results revealed strong correlation for Safranin O and collagen type I, medium for collagen type II, alcian blue, and glycoprotein, but weak correlation with the PCR aggrecan results. Photoshop-based image analysis might become a valuable supplement to well-known histopathological grading systems of lesioned articular cartilage.
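
Photoshop-style histogram quantification of a stain reduces to two numbers per selection: a pixel count (spatial extent) and a mean luminosity (staining intensity). The sketch below reproduces that idea in NumPy on an RGB section image; the red-stain selection thresholds and file name are crude illustrative assumptions, not the calibrated procedure of the study.

```python
# Sketch: Photoshop-style stain quantification in NumPy. The red-stain
# thresholds and the file name are crude, hypothetical choices.
import numpy as np
from PIL import Image

rgb = np.asarray(Image.open("cartilage_section.png").convert("RGB"), dtype=float)
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

stain = (r > 120) & (r > 1.4 * g) & (r > 1.4 * b)  # crude red-stain selection
luminosity = 0.299 * r + 0.587 * g + 0.114 * b     # Photoshop-style luminosity

print("positive pixels:", int(stain.sum()))                 # spatial extent
print("mean luminosity:", float(luminosity[stain].mean()))  # staining intensity
```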

  11. Game Art Complete All-in-One; Learn Maya, 3ds Max, ZBrush, and Photoshop Winning Techniques

    CERN Document Server

    Gahan, Andrew

    2008-01-01

    A compilation of key chapters from the top Focal game art books available today - in the areas of Max, Maya, Photoshop, and ZBrush. The chapters provide the CG Artist with an excellent sampling of essential techniques that every 3D artist needs to create stunning game art. Game artists will be able to master the modeling, rendering, rigging, and texturing techniques they need - with advice from Focal's best and brightest authors. Artists can learn hundreds of tips, tricks and shortcuts in Max, Maya, Photoshop, ZBrush - all within the covers of one complete, inspiring reference.

  12. Enhancing Architectural Drawings and Models with Photoshop

    CERN Document Server

    Onstott, Scott

    2010-01-01

    Transform your CAD drawings into powerful presentations. This one-of-a-kind book shows you how to use Photoshop to turn CAD drawings and BIM models into artistic presentations with captivating animations, videos, and dynamic 3D imagery. The techniques apply to all leading architectural design software, including AutoCAD, Revit, and 3ds Max Design. Video tutorials on the DVD improve your learning curve and let you compare your work with the author's. Turn CAD drawings and BIM models into powerful presentations featuring animation, videos, and 3D imagery for enhanced client appeal. Craft interactive pa...

  13. Comparison of mosaicking techniques for airborne images from consumer-grade cameras

    Science.gov (United States)

    Song, Huaibo; Yang, Chenghai; Zhang, Jian; Hoffmann, Wesley Clint; He, Dongjian; Thomasson, J. Alex

    2016-01-01

    Images captured from airborne imaging systems can be mosaicked for diverse remote sensing applications. The objective of this study was to identify appropriate mosaicking techniques and software to generate mosaicked images for use by aerial applicators and other users. Three software packages (Photoshop CC, Autostitch, and Pix4Dmapper) were selected for mosaicking airborne images acquired from a large cropping area. Ground control points were collected for georeferencing the mosaicked images and for evaluating the accuracy of eight mosaicking techniques. Analysis and accuracy assessment showed that Pix4Dmapper can be the first choice if georeferenced imagery with high accuracy is required. The spherical method in Photoshop CC can be an alternative for cost considerations, and Autostitch can be used to quickly mosaic images with reduced spatial resolution. The results also showed that the accuracy of image mosaicking techniques could be greatly affected by the size of the imaging area or the number of images, and that the accuracy would be higher for a small area than for a large area. The results from this study will provide useful information for the selection of image mosaicking software and techniques for aerial applicators and other users.
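
None of the three commercial packages exposes its algorithm, but the general mosaicking step they perform can be sketched with OpenCV's high-level stitching API, as below; the flight-line folder and SCANS mode are assumptions, and this is a generic stand-in rather than any of the tools compared in the study.

```python
# Sketch: generic panorama stitching with OpenCV as a stand-in for the
# commercial tools compared above; folder name and mode are assumptions.
import glob
import cv2

images = [cv2.imread(p) for p in sorted(glob.glob("flightline/*.jpg"))]
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)  # SCANS mode suits nadir imagery
status, mosaic = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("mosaic.jpg", mosaic)
else:
    print("stitching failed, status code:", status)
```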

  14. FITS Liberator: Image processing software

    Science.gov (United States)

    Lindberg Christensen, Lars; Nielsen, Lars Holm; Nielsen, Kaspar K.; Johansen, Teis; Hurt, Robert; de Martin, David

    2012-06-01

    The ESA/ESO/NASA FITS Liberator makes it possible to process and edit astronomical science data in the FITS format to produce stunning images of the universe. Formerly a plugin for Adobe Photoshop, the current version of FITS Liberator is a stand-alone application and no longer requires Photoshop. This image processing software makes it possible to create color images using raw observations from a range of telescopes; the FITS Liberator continues to support the FITS and PDS formats, preferred by astronomers and planetary scientists respectively, which enables data to be processed from a wide range of telescopes and planetary probes, including ESO's Very Large Telescope, the NASA/ESA Hubble Space Telescope, NASA's Spitzer Space Telescope, ESA's XMM-Newton Telescope and Cassini-Huygens or Mars Reconnaissance Orbiter.

  15. Detection and Evaluation of Skin Disorders by One of Photogrammetric Image Analysis Methods

    Science.gov (United States)

    Güçin, M.; Patias, P.; Altan, M. O.

    2012-08-01

    Abnormalities of the skin may vary from simple acne to painful wounds that affect a person's quality of life. Detection of these kinds of disorders in early stages, followed by evaluation of the abnormalities, is of high importance. Here, photogrammetry offers a non-contact solution by providing geometrically highly accurate data. Photogrammetry, first used for topographic purposes, has through terrestrial photogrammetry also become a useful technique in non-topographic applications (Wolf et al., 2000). Moreover, as its use has extended in parallel with developments in technology, analogue photographs have been replaced with digital images, and digital image processing techniques allow modification of digital images using filters, registration processes, etc. In addition, photogrammetry (using the same coordinate system established by registration of images) can serve as a tool for the comparison of temporal imaging data. The aim of this study is to examine several digital image processing techniques, in particular digital filters, which might be useful for detecting skin disorders. We examine software that is affordable to purchase and user friendly, requiring neither expertise nor pre-training. Since this is preliminary work for subsequent and deeper studies, Adobe Photoshop 7.0 is used as the present software. In addition, Adobe Photoshop released the DesAcc plug-in with the CS3 version, providing full compatibility with DICOM (Digital Imaging and Communications in Medicine) and PACS (Picture Archiving and Communication System), which enables doctors to store all medical data together with the relevant images and share them if necessary.

  16. DETECTION AND EVALUATION OF SKIN DISORDERS BY ONE OF PHOTOGRAMMETRIC IMAGE ANALYSIS METHODS

    Directory of Open Access Journals (Sweden)

    M. Güçin

    2012-08-01

    Full Text Available Abnormalities of the skin may vary from simple acne to painful wounds that affect a person's quality of life. Detection of these kinds of disorders in early stages, followed by evaluation of the abnormalities, is of high importance. Here, photogrammetry offers a non-contact solution by providing geometrically highly accurate data. Photogrammetry, first used for topographic purposes, has through terrestrial photogrammetry also become a useful technique in non-topographic applications (Wolf et al., 2000). Moreover, as its use has extended in parallel with developments in technology, analogue photographs have been replaced with digital images, and digital image processing techniques allow modification of digital images using filters, registration processes, etc. In addition, photogrammetry (using the same coordinate system established by registration of images) can serve as a tool for the comparison of temporal imaging data. The aim of this study is to examine several digital image processing techniques, in particular digital filters, which might be useful for detecting skin disorders. We examine software that is affordable to purchase and user friendly, requiring neither expertise nor pre-training. Since this is preliminary work for subsequent and deeper studies, Adobe Photoshop 7.0 is used as the present software. In addition, Adobe Photoshop released the DesAcc plug-in with the CS3 version, providing full compatibility with DICOM (Digital Imaging and Communications in Medicine) and PACS (Picture Archiving and Communication System), which enables doctors to store all medical data together with the relevant images and share them if necessary.

  17. Karyotype analysis of three Solanum plants using combined PI-DAPI ...

    African Journals Online (AJOL)

    ajl yemi

    2011-12-19

    ... OLYMPUS epifluorescence microscope, and their images were captured with a CoolSNAP-CCD video camera using MetaImaging Series software. In this study, Adobe Photoshop software was used to take photos of the chromosomes, and karyotype analysis was performed by the methods of Li and Chen (1985).

  18. Image Montaging for Creating a Virtual Pathology Slide: An Innovative and Economical Tool to Obtain a Whole Slide Image.

    Science.gov (United States)

    Banavar, Spoorthi Ravi; Chippagiri, Prashanthi; Pandurangappa, Rohit; Annavajjula, Saileela; Rajashekaraiah, Premalatha Bidadi

    2016-01-01

    Background. Microscopes are omnipresent throughout the field of biological research. With microscopes one can see in detail what is going on at the cellular level in tissues. Though it is a ubiquitous tool, the limitation is that with high magnification there is a small field of view. It is often advantageous to see an entire sample at high magnification. Over the years, technological advancements in optics have helped to provide solutions to this limitation of microscopes by creating the so-called dedicated "slide scanners," which can provide a "whole slide digital image." These scanners can provide a seamless, large-field-of-view, high-resolution image of an entire tissue section. The only disadvantage of such a complete slide imaging system is its outrageous cost, thereby hindering its practical use by most laboratories, especially in developing and low-resource countries. Methods. In a quest for a substitute, we tried the commonly used image editing software Adobe Photoshop, along with a basic image capturing device attached to a trinocular microscope, to create a digital pathology slide. Results. The seamless image created using Adobe Photoshop maintained its diagnostic quality. Conclusion. With time and effort, photomicrographs obtained from a basic camera-microscope setup can be combined and merged in Adobe Photoshop to create a whole slide digital image of practically usable quality at negligible cost.

  19. Comparison between a new computer program and the reference software for gray-scale median analysis of atherosclerotic carotid plaques.

    Science.gov (United States)

    Casella, Ivan Benaduce; Fukushima, Rodrigo Bono; Marques, Anita Battistini de Azevedo; Cury, Marcus Vinícius Martins; Presti, Calógero

    2015-03-01

    To compare a new dedicated software program and Adobe Photoshop for gray-scale median (GSM) analysis of B-mode images of carotid plaques. A series of 42 carotid plaques generating ≥50% diameter stenosis was evaluated by a single observer. The best segment for visualization of internal carotid artery plaque was identified on a single longitudinal view and images were recorded in JPEG format. Plaque analysis was performed by both programs. After normalization of image intensity (blood = 0, adventitial layer = 190), histograms were obtained after manual delineation of plaque. Results were compared with nonparametric Wilcoxon signed rank test and Kendall tau-b correlation analysis. GSM ranged from 00 to 100 with Adobe Photoshop and from 00 to 96 with IMTPC, with a high grade of similarity between image pairs, and a highly significant correlation (R = 0.94, p < .0001). IMTPC software appears suitable for the GSM analysis of carotid plaques. © 2014 Wiley Periodicals, Inc.
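
The GSM computation itself is a linear normalization followed by a median, as sketched below in NumPy under the normalization stated in the abstract (blood = 0, adventitia = 190); the ROI inputs are assumed to come from manual delineation, and neither program's internals are reproduced here.

```python
# Sketch of the normalization stated above: blood -> 0, adventitia -> 190,
# then the median gray level inside the delineated plaque. ROI inputs are
# assumed to be boolean masks from manual delineation.
import numpy as np

def gsm(gray: np.ndarray, blood_roi, adventitia_roi, plaque_mask) -> float:
    b = gray[blood_roi].mean()                   # blood reference level
    a = gray[adventitia_roi].mean()              # adventitia reference level
    norm = np.clip((gray - b) * 190.0 / (a - b), 0, 255)
    return float(np.median(norm[plaque_mask]))   # gray-scale median of the plaque
```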

  20. Dynamic Chest Image Analysis: Model-Based Perfusion Analysis in Dynamic Pulmonary Imaging

    Directory of Open Access Journals (Sweden)

    Kiuru Aaro

    2003-01-01

    Full Text Available The "Dynamic Chest Image Analysis" project aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the dynamic pulmonary imaging technique. We have proposed and evaluated a multiresolutional method with an explicit ventilation model for ventilation analysis. This paper presents a new model-based method for pulmonary perfusion analysis. According to perfusion properties, we first devise a novel mathematical function to form a perfusion model. A simple yet accurate approach is further introduced to extract cardiac systolic and diastolic phases from the heart, so that this cardiac information may be utilized to accelerate the perfusion analysis and improve its sensitivity in detecting pulmonary perfusion abnormalities. This makes perfusion analysis not only fast but also robust in computation; consequently, perfusion analysis becomes computationally feasible without using contrast media. Our clinical case studies with 52 patients show that this technique is effective for pulmonary embolism even without using contrast media, demonstrating consistent correlations with computed tomography (CT) and nuclear medicine (NM) studies. This fluoroscopic examination takes only about 2 seconds for a perfusion study, with only a low radiation dose to the patient, involving no preparation, no radioactive isotopes, and no contrast media.

  1. Stress analysis in oral obturator prostheses: imaging photoelastic

    Science.gov (United States)

    Pesqueira, Aldiéris Alves; Goiato, Marcelo Coelho; dos Santos, Daniela Micheline; Haddad, Marcela Filié; Andreotti, Agda Marobo; Moreno, Amália

    2013-06-01

    Maxillary defects resulting from cancer, trauma, and congenital malformation affect chewing efficiency and denture retention in these patients. The use of implant-retained palatal obturator dentures has improved the self-esteem and quality of life of several subjects. We evaluated the stress distribution of implant-retained palatal obturator dentures with different attachment systems by using photoelastic image analysis. Two photoelastic models of the maxilla with oral-sinus-nasal communication were fabricated. One model received three implants on the left side of the alveolar ridge (incisive, canine, and first molar regions) and the other did not receive implants. Afterwards, a conventional palatal obturator denture (control) and two implant-retained palatal obturator dentures with different attachment systems (O-ring; bar-clip) were constructed. Models were placed in a circular polariscope and a 100-N axial load was applied in three different regions (incisive, canine, and first molar regions) using a universal testing machine. The results were photographed and analyzed qualitatively using software (Adobe Photoshop). The bar-clip system exhibited the highest stress concentration, followed by the O-ring system and the conventional denture (control). Images generated by the photoelastic method help in oral rehabilitation planning.

  2. Predictive images of postoperative levator resection outcome using image processing software.

    Science.gov (United States)

    Mawatari, Yuki; Fukushima, Mikiko

    2016-01-01

    This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of levator aponeurosis and Müller's muscle complex (levator resection). Predictive images were prepared from preoperative photographs using the image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) positively responded to the usefulness of processed images to predict postoperative appearance. Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery.

  3. Geographic Object-Based Image Analysis: Towards a new paradigm

    NARCIS (Netherlands)

    Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.A.; Queiroz Feitosa, R.; van der Meer, F.D.; van der Werff, H.M.A.; van Coillie, F.; Tiede, A.

    2014-01-01

    The amount of scientific literature on (Geographic) Object-based Image Analysis – GEOBIA has been and still is sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature extraction.

  4. Web Based Distributed Coastal Image Analysis System, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This project develops Web based distributed image analysis system processing the Moderate Resolution Imaging Spectroradiometer (MODIS) data to provide decision...

  5. Research of second harmonic generation images based on texture analysis

    Science.gov (United States)

    Liu, Yao; Li, Yan; Gong, Haiming; Zhu, Xiaoqin; Huang, Zufang; Chen, Guannan

    2014-09-01

    Texture analysis plays a crucial role in identifying objects or regions of interest in an image. It has been applied to a variety of medical image processing tasks, ranging from the detection of disease and the segmentation of specific anatomical structures to differentiation between healthy and pathological tissues. Second harmonic generation (SHG) microscopy, as a potential noninvasive tool for imaging biological tissues, has been widely used in medicine, with reduced phototoxicity and photobleaching. In this paper, we clarified the principles of texture analysis, including statistical, transform, structural, and model-based methods, and gave examples of its applications, reviewing studies of the technique. Moreover, we applied texture analysis to SHG images for the differentiation of human skin scar tissues. A texture analysis method based on local binary patterns (LBP) and the wavelet transform was used to extract texture features of SHG images from collagen in normal and abnormal scars, and the scar SHG images were then classified into normal or abnormal ones. Compared with other texture analysis methods with respect to receiver operating characteristic analysis, LBP combined with the wavelet transform was demonstrated to achieve higher accuracy. It can provide a new way for clinical diagnosis of scar types. Finally, future developments of texture analysis in SHG images are discussed.
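
As a concrete example of the LBP half of the feature extraction, the sketch below builds a uniform-LBP histogram descriptor with scikit-image; the parameter choices (P = 8, R = 1) are common defaults rather than the paper's settings, and the wavelet stage is omitted.

```python
# Sketch: uniform-LBP histogram descriptor with scikit-image; P=8, R=1 are
# common defaults, not necessarily the parameters used in the paper.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray: np.ndarray, P: int = 8, R: float = 1.0) -> np.ndarray:
    codes = local_binary_pattern(gray, P, R, method="uniform")  # P+2 code values
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist  # normalized texture feature vector for a classifier
```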

  6. Asian Rhinoplasty: Preoperative Simulation and Planning Using Adobe Photoshop.

    Science.gov (United States)

    Kiranantawat, Kidakorn; Nguyen, Anh H

    2015-11-01

    A rhinoplasty in Asians differs from a rhinoplasty performed in patients of other ethnicities. Surgeons should understand the concept of Asian beauty, the nasal anatomy of Asians, and common problems encountered while operating on the Asian nose. With this understanding, surgeons can set appropriate goals, choose proper operative procedures, and provide an outcome that satisfies patients. In this article, the authors define the concept of an Asian rhinoplasty (a paradigm shift from the traditional on-top augmentation rhinoplasty to a structurally integrated augmentation rhinoplasty) and provide a step-by-step procedure for the use of Adobe Photoshop as a preoperative program to simulate the expected surgical outcome for patients and to develop a preoperative plan for surgeons.

  7. An Ibm PC/AT-Based Image Acquisition And Processing System For Quantitative Image Analysis

    Science.gov (United States)

    Kim, Yongmin; Alexander, Thomas

    1986-06-01

    In recent years, a large number of applications have been developed for image processing systems in the area of biological imaging. We have already finished the development of a dedicated microcomputer-based image processing and analysis system for quantitative microscopy. The system's primary function has been to facilitate and ultimately automate quantitative image analysis tasks such as the measurement of cellular DNA contents. We have recognized from this development experience, and interaction with system users, biologists and technicians, that the increasingly widespread use of image processing systems, and the development and application of new techniques for utilizing the capabilities of such systems, would generate a need for some kind of inexpensive general purpose image acquisition and processing system specially tailored for the needs of the medical community. We are currently engaged in the development and testing of hardware and software for a fairly high-performance image processing computer system based on a popular personal computer. In this paper, we describe the design and development of this system. Biological image processing computer systems have now reached a level of hardware and software refinement where they could become convenient image analysis tools for biologists. The development of a general purpose image processing system for quantitative image analysis that is inexpensive, flexible, and easy-to-use represents a significant step towards making the microscopic digital image processing techniques more widely applicable not only in a research environment as a biologist's workstation, but also in clinical environments as a diagnostic tool.

  8. An Integrative Object-Based Image Analysis Workflow for Uav Images

    Science.gov (United States)

    Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong

    2016-06-01

    In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of the geometric and radiometric corrections, subsequent panoramic mosaicking and hierarchical image segmentation for later Object Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm after the geometric calibration and radiometric correction, which employs a fast feature extraction and matching by combining the local difference binary descriptor and the local sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts by the definition of an initial partition obtained by an over-segmentation algorithm, i.e., the simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the super-pixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing the post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of our proposed method.

  9. AN INTEGRATIVE OBJECT-BASED IMAGE ANALYSIS WORKFLOW FOR UAV IMAGES

    Directory of Open Access Journals (Sweden)

    H. Yu

    2016-06-01

    Full Text Available In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of the geometric and radiometric corrections, subsequent panoramic mosaicking and hierarchical image segmentation for later Object Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm after the geometric calibration and radiometric correction, which employs a fast feature extraction and matching by combining the local difference binary descriptor and the local sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts by the definition of an initial partition obtained by an over-segmentation algorithm, i.e., the simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the super-pixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing the post-seismic UAV images of the 2013 Ya’an earthquake demonstrate the effectiveness and efficiency of our proposed method.
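
The SLIC over-segmentation that initializes the BPT can be reproduced in a few lines with scikit-image, as sketched below; the tile file name, number of segments, and compactness are illustrative assumptions, and the BPT construction itself is not shown.

```python
# Sketch: SLIC over-segmentation with scikit-image; the tile name,
# segment count, and compactness are illustrative assumptions.
from skimage import io, segmentation

image = io.imread("mosaic_tile.png")
superpixels = segmentation.slic(image, n_segments=2000, compactness=10.0,
                                start_label=1)
overlay = segmentation.mark_boundaries(image, superpixels)  # visual check
io.imsave("superpixels.png", (overlay * 255).astype("uint8"))
```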

  10. Can state-of-the-art HVS-based objective image quality criteria be used for image reconstruction techniques based on ROI analysis?

    Science.gov (United States)

    Dostal, P.; Krasula, L.; Klima, M.

    2012-06-01

    Various image processing techniques in multimedia technology are optimized using the visual attention feature of the human visual system. Spatial non-uniformity means that different locations in an image are of different importance for perception. In other words, the perceived image quality depends mainly on the quality of important locations known as regions of interest (ROI). The performance of such techniques is measured by subjective evaluation or by objective image quality criteria. Many state-of-the-art objective metrics are based on HVS properties: SSIM and MS-SSIM are based on image structural information, VIF on the information that the human brain can ideally gain from the reference image, and FSIM utilizes low-level features to assign different importance to each location in the image. Still, none of these objective metrics utilizes the analysis of regions of interest. We address the question of whether these objective metrics can be used for effective evaluation of images reconstructed by processing techniques based on ROI analysis utilizing high-level features. In this paper, the authors show that the state-of-the-art objective metrics do not correlate well with subjective evaluation when demosaicing based on ROI analysis is used for reconstruction. The ROI were computed from "ground truth" visual attention data. An algorithm combining two known demosaicing techniques on the basis of ROI location is proposed, reconstructing the ROI in fine quality while the rest of the image is reconstructed at low quality. The color image reconstructed by this ROI approach was compared with selected demosaicing techniques by objective criteria and subjective testing. The qualitative comparison of the objective and subjective results indicates that the state-of-the-art objective metrics are still not suitable for evaluating image processing techniques based on ROI analysis, and new criteria are needed.

  11. Smart phone: a popular device supports amylase activity assay in fisheries research.

    Science.gov (United States)

    Thongprajukaew, Karun; Choodum, Aree; Sa-E, Barunee; Hayee, Ummah

    2014-11-15

    Colourimetric determination of amylase activity was developed based on a standard dinitrosalicylic acid (DNS) staining method, using maltose as the analyte. Intensities and absorbances of red, green and blue (RGB) were obtained with iPhone imaging and Adobe Photoshop image analysis. The correlation between green intensity and analyte concentration was highly significant, and the developed method showed excellent analytical accuracy. The common iPhone has sufficient imaging ability for accurate quantification of maltose concentrations. Detection limits, sensitivity and linearity were comparable to a spectrophotometric method, but with better inter-day precision. In quantifying amylase specific activity from a commercial source (P>0.02) and fish samples (P>0.05), differences compared with spectrophotometric measurements were not significant. We have demonstrated that iPhone imaging with image analysis in Adobe Photoshop has potential for field and laboratory studies of amylase. Copyright © 2014 Elsevier Ltd. All rights reserved.
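
The underlying calibration idea (mean green-channel intensity regressed against known maltose standards) can be sketched in Python as below; the ROI choice, file names, and standard concentrations are invented for illustration and do not reproduce the authors' exact protocol.

```python
# Sketch: green-channel calibration against maltose standards; the ROI,
# file names, and concentrations are invented for illustration.
import numpy as np
from PIL import Image

def mean_green(path: str) -> float:
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    h, w = rgb.shape[:2]
    roi = rgb[h // 3 : 2 * h // 3, w // 3 : 2 * w // 3]  # central patch of the well
    return float(roi[..., 1].mean())                     # channel 1 = green

standards = np.array([0.0, 0.5, 1.0, 1.5, 2.0])          # mg/mL, hypothetical
greens = np.array([mean_green(f"std_{i}.jpg") for i in range(len(standards))])
slope, intercept = np.polyfit(greens, standards, 1)      # linear calibration

print(f"maltose ~ {slope * mean_green('sample.jpg') + intercept:.2f} mg/mL")
```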

  12. Knowledge-based analysis and understanding of 3D medical images

    International Nuclear Information System (INIS)

    Dhawan, A.P.; Juvvadi, S.

    1988-01-01

    The anatomical three-dimensional (3D) medical imaging modalities, such as X-ray CT and MRI, have been well recognized in diagnostic radiology for several years, while the nuclear medicine modalities, such as PET, have just started making a strong impact through functional imaging. Though PET images provide functional information about the human organs, they are hard to interpret because of the lack of anatomical information. The authors' objective is to develop a knowledge-based biomedical image analysis system which can interpret anatomical images (such as CT). The anatomical information thus obtained can then be used in analyzing PET images of the same patient. This will not only help in interpreting PET images but will also provide a means of studying the correlation between anatomical and functional imaging. This paper presents the preliminary results of the knowledge-based biomedical image analysis system for interpreting CT images of the chest.

  13. A REGION-BASED MULTI-SCALE APPROACH FOR OBJECT-BASED IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    T. Kavzoglu

    2016-06-01

    Full Text Available Within the last two decades, object-based image analysis (OBIA), considering objects (i.e., groups of pixels) instead of pixels, has gained popularity and attracted increasing interest. The most important stage of OBIA is image segmentation, which groups spectrally similar adjacent pixels considering not only spectral features but also spatial and textural features. Although there are several parameters (scale, shape, compactness, and band weights) to be set by the analyst, the scale parameter stands out as the most important parameter in the segmentation process. Estimating the optimal scale parameter is crucially important to increase the classification accuracy, which depends on image resolution, image object size, and the characteristics of the study area. In this study, two scale-selection strategies were implemented in the image segmentation process using a pan-sharpened QuickBird-2 image. The first strategy estimates optimal scale parameters for eight sub-regions. For this purpose, the local variance/rate of change (LV-RoC) graphs produced by the ESP-2 tool were analysed to determine fine, moderate, and coarse scales for each region. In the second strategy, the image was segmented using the three candidate scale values (fine, moderate, coarse) determined from the LV-RoC graph calculated for the whole image. The nearest neighbour classifier was applied in all segmentation experiments, and an equal number of pixels was randomly selected to calculate accuracy metrics (overall accuracy and kappa coefficient). Comparison of region-based and image-based segmentation was carried out on the classified images; region-based multi-scale OBIA produced significantly more accurate results than image-based single-scale OBIA. The difference in classification accuracy reached 10% in terms of overall accuracy.

  14. Towards a framework for agent-based image analysis of remote-sensing data.

    Science.gov (United States)

    Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera

    2015-04-03

    Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: Especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects' properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear a high potential of transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which is exhaustively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adopt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA).

  15. Comparison of JPEG and wavelet compression on intraoral digital radiographic images

    International Nuclear Information System (INIS)

    Kim, Eun Kyung

    2004-01-01

    To determine the proper image compression method and ratio without image quality degradation in intraoral digital radiographic images, comparing the discrete cosine transform (DCT)-based JPEG with the wavelet-based JPEG 2000 algorithm. Thirty extracted sound teeth and thirty extracted teeth with occlusal caries were used for this study. Twenty plaster blocks were made with three teeth each. They were radiographically exposed using CDR sensors (Schick Inc., Long Island, USA). Digital images were compressed to JPEG format using Adobe Photoshop v. 7.0 and to JPEG 2000 format using the Jasper program, with compression ratios of 5 : 1, 9 : 1, 14 : 1, and 28 : 1 each. To evaluate lesion detectability, receiver operating characteristic (ROC) analysis was performed by three oral and maxillofacial radiologists. To evaluate image quality, all the compressed images were assessed subjectively using 5 grades, in comparison to the original uncompressed images. Compressed images up to a compression ratio of 14 : 1 in JPEG and 28 : 1 in JPEG 2000 showed nearly the same lesion detectability as the original images. In the subjective assessment of image quality, images up to a compression ratio of 9 : 1 in JPEG and 14 : 1 in JPEG 2000 showed minute mean paired differences from the original images. The results showed that the clinically acceptable compression ratios were up to 9 : 1 for JPEG and 14 : 1 for JPEG 2000. The wavelet-based JPEG 2000 is a better compression method than DCT-based JPEG for intraoral digital radiographic images.

  16. Independent component analysis based filtering for penumbral imaging

    International Nuclear Information System (INIS)

    Chen Yenwei; Han Xianhua; Nozaki, Shinya

    2004-01-01

    We propose a filtering method based on independent component analysis (ICA) for Poisson noise reduction. In the proposed filtering, the image is first transformed to the ICA domain, and the noise components are then removed by soft thresholding (shrinkage). The proposed filter, used as a preprocessing step before reconstruction, has been successfully applied to penumbral imaging. Both simulation results and experimental results show that the reconstructed image is dramatically improved in comparison to reconstruction without the noise-removing filter.
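
A minimal version of ICA-domain shrinkage can be sketched with scikit-learn's FastICA on overlapping image patches, as below; the patch size, threshold, and patch-averaging reconstruction are assumptions, and the authors' estimator may differ in detail.

```python
# Sketch: ICA-domain soft thresholding on overlapping patches; patch size,
# threshold, and averaging reconstruction are assumptions (suits small images).
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def ica_denoise(img: np.ndarray, patch: int = 8, thr: float = 0.1) -> np.ndarray:
    X = extract_patches_2d(img, (patch, patch)).reshape(-1, patch * patch)
    ica = FastICA(n_components=patch * patch, max_iter=500, random_state=0)
    S = ica.fit_transform(X)                           # to the ICA domain
    S = np.sign(S) * np.maximum(np.abs(S) - thr, 0.0)  # soft thresholding (shrinkage)
    Xd = ica.inverse_transform(S).reshape(-1, patch, patch)
    return reconstruct_from_patches_2d(Xd, img.shape)  # average overlapping patches
```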

  17. Cnn Based Retinal Image Upscaling Using Zero Component Analysis

    Science.gov (United States)

    Nasonov, A.; Chesnakov, K.; Krylov, A.

    2017-05-01

    The aim of the paper is to obtain high-quality image upscaling for noisy images that are typical in medical image processing. A new training scenario for a convolutional neural network based image upscaling method is proposed. Its main idea is a novel dataset preparation method for deep learning. The dataset contains pairs of noisy low-resolution images and corresponding noiseless high-resolution images. To achieve better results at edges and textured areas, Zero Component Analysis is applied to these images. The upscaling results are compared with other state-of-the-art methods like DCCI, SI-3 and SRCNN on noisy medical ophthalmological images. Objective evaluation of the results confirms the high quality of the proposed method. Visual analysis shows that fine details and structures like blood vessels are preserved, the noise level is reduced, and no artifacts or non-existing details are added. These properties are essential in establishing a retinal diagnosis, so the proposed algorithm is recommended for use in real medical applications.
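
"Zero Component Analysis" here refers to ZCA whitening of the training data. The sketch below shows the standard ZCA transform in NumPy (covariance eigendecomposition, inverse-square-root scaling, rotation back); the epsilon regularizer and the flattened-patch data layout are assumptions.

```python
# Sketch: ZCA whitening of flattened training patches; eps regularizes
# near-zero eigenvalues and is a tunable assumption.
import numpy as np

def zca_whiten(X: np.ndarray, eps: float = 1e-2) -> np.ndarray:
    """X: (n_samples, n_features) array of flattened image patches."""
    Xc = X - X.mean(axis=0)                          # zero-centre each feature
    cov = Xc.T @ Xc / Xc.shape[0]                    # feature covariance
    U, S, _ = np.linalg.svd(cov)                     # cov = U diag(S) U^T
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T    # ZCA whitening matrix
    return Xc @ W
```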

  18. Quantitative measurement of holographic image quality using Adobe Photoshop

    International Nuclear Information System (INIS)

    Wesly, E

    2013-01-01

    Measurement of the characteristics of image holograms with regard to diffraction efficiency and signal-to-noise ratio is demonstrated, using readily available digital cameras and image editing software. Illustrations and case studies, using currently available holographic recording materials, are presented.

  19. Quantitative measurement of holographic image quality using Adobe Photoshop

    Science.gov (United States)

    Wesly, E.

    2013-02-01

    Measurement of the characteristics of image holograms with regard to diffraction efficiency and signal-to-noise ratio is demonstrated, using readily available digital cameras and image editing software. Illustrations and case studies, using currently available holographic recording materials, are presented.

  20. Chain of evidence generation for contrast enhancement in digital image forensics

    Science.gov (United States)

    Battiato, Sebastiano; Messina, Giuseppe; Strano, Daniela

    2010-01-01

    The quality of the images obtained by digital cameras has improved greatly since digital cameras' early days. Unfortunately, it is not unusual in image forensics to find wrongly exposed pictures. This is mainly due to obsolete techniques or old technologies, but also to backlight conditions. To bring out otherwise invisible details, stretching of the image contrast is required. Forensic rules for producing evidence require complete documentation of the processing steps, enabling replication of the entire process. The automation of enhancement techniques is thus quite difficult and needs to be carefully documented. This work presents an automatic procedure to find contrast enhancement settings, allowing both image correction and automatic script generation. The technique is based on a preprocessing step which extracts the features of the image and selects correction parameters. The parameters are then saved in JavaScript code that is used in the second step of the approach to correct the image. The generated script is Adobe Photoshop compliant (Photoshop being widely used in image forensics analysis), thus permitting replication of the enhancement steps. Experiments on a dataset of images are also reported, showing the effectiveness of the proposed methodology.
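
The reproducibility idea (derive the stretch parameters automatically, record them, then apply them) can be sketched in Python as below; the paper emits Photoshop-compliant JavaScript, whereas this sketch simply logs the chosen black/white points to JSON and applies the same linear stretch, with the percentile choice and file names as assumptions.

```python
# Sketch: automatic stretch parameters logged for reproducibility, then applied.
# Percentile limits and file names are assumptions; the paper emits Photoshop-
# compliant JavaScript rather than JSON.
import json
import numpy as np
from PIL import Image

img = np.asarray(Image.open("evidence.png").convert("L"), dtype=float)
lo, hi = np.percentile(img, (1, 99))          # robust black/white points

with open("enhancement_log.json", "w") as f:
    json.dump({"black_point": float(lo), "white_point": float(hi)}, f, indent=2)

stretched = np.clip((img - lo) * 255.0 / max(hi - lo, 1.0), 0, 255)
Image.fromarray(stretched.astype(np.uint8)).save("evidence_stretched.png")
```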

  1. Feed particle size evaluation: conventional approach versus digital holography based image analysis

    Directory of Open Access Journals (Sweden)

    Vittorio Dell’Orto

    2010-01-01

    Full Text Available The aim of this study was to evaluate the application of an image analysis approach based on digital holography for defining particle size, in comparison with the sieve shaker method (sieving) as the reference method. For this purpose, ground corn meal was analyzed by a Retsch VS 1000 sieve shaker and by the image analysis approach based on digital holography. Particle sizes from digital holography were compared with results obtained by screen (sieving) analysis for each size class using a cumulative distribution plot. Comparison between particle size values obtained by the sieving method and by image analysis indicated that the values were comparable in terms of particle size information, suggesting a potential application for digital holography and image analysis in the feed industry.

  2. Image-Analysis Based on Seed Phenomics in Sesame

    Directory of Open Access Journals (Sweden)

    Prasad R.

    2014-10-01

    Full Text Available The seed coat (testa) structure of twenty-three cultivated (Sesamum indicum L.) and six wild sesame (S. occidentale Regel & Heer., S. mulayanum Nair, S. prostratum Retz., S. radiatum Schumach. & Thonn., S. angustifolium (Oliv.) Engl. and S. schinzianum Asch.) germplasm accessions was analyzed from digital and Scanning Electron Microscopy (SEM) images with dedicated software, using the descriptors for computer-based seed image analysis, to understand the diversity of seed morphometric traits, which can later be extended to screen and evaluate improved genotypes of sesame. Seeds of wild sesame species could conveniently be distinguished from cultivated varieties based on shape and architectural analysis. Results indicated discrete cut-off values to identify the definite shape and contour of the seed for a desirable sesame genotype, along with the conventional practice of selecting lighter-colored testa.

  3. Knowledge-based low-level image analysis for computer vision systems

    Science.gov (United States)

    Dhawan, Atam P.; Baxi, Himanshu; Ranganath, M. V.

    1988-01-01

    Two algorithms for entry-level image analysis and preliminary segmentation are proposed which are flexible enough to incorporate local properties of the image. The first algorithm involves pyramid-based multiresolution processing and a strategy to define and use interlevel and intralevel link strengths. The second algorithm, which is designed for selected window processing, extracts regions adaptively using local histograms. The preliminary segmentation and a set of features are employed as the input to an efficient rule-based low-level analysis system, resulting in suboptimal meaningful segmentation.

  4. Physics-based deformable organisms for medical image analysis

    Science.gov (United States)

    Hamarneh, Ghassan; McIntosh, Chris

    2005-04-01

    Previously, "Deformable organisms" were introduced as a novel paradigm for medical image analysis that uses artificial life modelling concepts. Deformable organisms were designed to complement the classical bottom-up deformable models methodologies (geometrical and physical layers), with top-down intelligent deformation control mechanisms (behavioral and cognitive layers). However, a true physical layer was absent and in order to complete medical image segmentation tasks, deformable organisms relied on pure geometry-based shape deformations guided by sensory data, prior structural knowledge, and expert-generated schedules of behaviors. In this paper we introduce the use of physics-based shape deformations within the deformable organisms framework yielding additional robustness by allowing intuitive real-time user guidance and interaction when necessary. We present the results of applying our physics-based deformable organisms, with an underlying dynamic spring-mass mesh model, to segmenting and labelling the corpus callosum in 2D midsagittal magnetic resonance images.

  5. Morphological images analysis and chromosomic aberrations classification based on fuzzy logic

    International Nuclear Information System (INIS)

    Souza, Leonardo Peres

    2011-01-01

    This work implemented a methodology for automating the analysis of images of chromosomes from human cells irradiated at the IEA-R1 nuclear reactor (located at IPEN, Sao Paulo, Brazil) and therefore subject to morphological aberrations. The methodology is intended as a tool to help cytogeneticists in the identification, characterization and classification of chromosomes during metaphase analysis. Its development included the creation of a software application based on artificial intelligence techniques, using fuzzy logic combined with image processing techniques. The developed application was named CHRIMAN and is composed of modules containing the methodological steps that are important requirements for achieving an automated analysis. The first step is the standardization of the bi-dimensional digital image acquisition procedure, achieved by coupling a simple digital camera to the ocular of the conventional metaphase-analysis microscope. The second step concerns image treatment, achieved through the application of digital filters, and the storage and organization of information obtained both from the image content itself and from selected extracted features, for further use in pattern recognition algorithms. The third step consists of characterizing, counting and classifying the stored digital images and the extracted feature information. The accuracy in the recognition of chromosome images is 93.9%. The classification is based on the classical standards of Buckton [1973], and supports geneticists in the chromosome analysis procedure, decreasing analysis time and creating conditions for including this method in a broader system for evaluating human cell damage due to ionizing radiation exposure. (author)

  6. iScreen: Image-Based High-Content RNAi Screening Analysis Tools.

    Science.gov (United States)

    Zhong, Rui; Dong, Xiaonan; Levine, Beth; Xie, Yang; Xiao, Guanghua

    2015-09-01

    High-throughput RNA interference (RNAi) screening has opened up a path to investigating functional genomics in a genome-wide pattern. However, such studies are often restricted to assays that have a single readout format. Recently, advanced image technologies have been coupled with high-throughput RNAi screening to develop high-content screening, in which one or more cell image(s), instead of a single readout, were generated from each well. This image-based high-content screening technology has led to genome-wide functional annotation in a wider spectrum of biological research studies, as well as in drug and target discovery, so that complex cellular phenotypes can be measured in a multiparametric format. Despite these advances, data analysis and visualization tools are still largely lacking for these types of experiments. Therefore, we developed iScreen (image-Based High-content RNAi Screening Analysis Tool), an R package for the statistical modeling and visualization of image-based high-content RNAi screening. Two case studies were used to demonstrate the capability and efficiency of the iScreen package. iScreen is available for download on CRAN (http://cran.cnr.berkeley.edu/web/packages/iScreen/index.html). The user manual is also available as a supplementary document. © 2014 Society for Laboratory Automation and Screening.

  7. Geographic Object-Based Image Analysis - Towards a new paradigm.

    Science.gov (United States)

    Blaschke, Thomas; Hay, Geoffrey J; Kelly, Maggi; Lang, Stefan; Hofmann, Peter; Addink, Elisabeth; Queiroz Feitosa, Raul; van der Meer, Freek; van der Werff, Harald; van Coillie, Frieke; Tiede, Dirk

    2014-01-01

    The amount of scientific literature on (Geographic) Object-based Image Analysis - GEOBIA has been and still is sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature extraction approaches. This article investigates these developments and their implications and asks whether or not this is a new paradigm in remote sensing and Geographic Information Science (GIScience). We first discuss several limitations of prevailing per-pixel methods when applied to high resolution images. Then we explore the paradigm concept developed by Kuhn (1962) and discuss whether GEOBIA can be regarded as a paradigm according to this definition. We crystallize core concepts of GEOBIA, including the role of objects, of ontologies and the multiplicity of scales, and we discuss how these conceptual developments support important methods in remote sensing such as change detection and accuracy assessment. The ramifications of the different theoretical foundations between the 'per-pixel paradigm' and GEOBIA are analysed, as are some of the challenges along this path from pixels, to objects, to geo-intelligence. Based on several paradigm indications as defined by Kuhn and based on an analysis of peer-reviewed scientific literature we conclude that GEOBIA is a new and evolving paradigm.

  8. Image-based RSA: Roentgen stereophotogrammetric analysis based on 2D-3D image registration.

    Science.gov (United States)

    de Bruin, P W; Kaptein, B L; Stoel, B C; Reiber, J H C; Rozing, P M; Valstar, E R

    2008-01-01

    Image-based Roentgen stereophotogrammetric analysis (IBRSA) integrates 2D-3D image registration and conventional RSA. Instead of radiopaque RSA bone markers, IBRSA uses 3D CT data, from which digitally reconstructed radiographs (DRRs) are generated. Using 2D-3D image registration, the 3D pose of the CT is iteratively adjusted such that the generated DRRs resemble the 2D RSA images as closely as possible, according to an image matching metric. Effectively, by registering all 2D follow-up moments to the same 3D CT, the CT volume functions as common ground. In two experiments, using RSA and using a micromanipulator as gold standard, IBRSA has been validated on cadaveric and sawbone scapula radiographs, and good matching results have been achieved. The accuracy was lower than that of marker-based RSA, but higher than that of standard in vivo RSA. Because IBRSA does not require radiopaque markers, it adds functionality to the RSA method by opening new directions and possibilities for research, such as dynamic analyses using fluoroscopy on subjects without markers and computer navigation applications.
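    The registration loop (render, compare with an image metric, adjust the pose, repeat) can be sketched in a deliberately simplified 2D analogue; real IBRSA renders DRRs from CT and optimizes a 3D pose against the RSA radiographs. All shapes and parameters below are toy values:

```python
import numpy as np
from scipy.ndimage import rotate, shift
from scipy.optimize import minimize

def ncc(a, b):
    """Normalized cross correlation, the kind of matching metric iterated on."""
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# A toy "bone" template and a target radiograph with an unknown pose offset
template = np.zeros((64, 64))
template[20:44, 28:36] = 1.0
target = shift(rotate(template, 5.0, reshape=False), (3.0, -2.0))

def cost(pose):
    tx, ty, angle = pose
    moved = shift(rotate(template, angle, reshape=False), (tx, ty))
    return -ncc(moved, target)           # maximize similarity

result = minimize(cost, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
print("recovered pose (tx, ty, degrees):", np.round(result.x, 1))
```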

  9. Wizard CD Plus and ProTaper Universal: analysis of apical transportation using new software.

    Science.gov (United States)

    Giannastasio, Daiana; Rosa, Ricardo Abreu da; Peres, Bernardo Urbanetto; Barreto, Mirela Sangoi; Dotto, Gustavo Nogara; Kuga, Milton Carlos; Pereira, Jefferson Ricardo; Só, Marcus Vinícius Reis

    2013-01-01

    This study has two aims: 1) to evaluate the apical transportation of the Wizard CD Plus and ProTaper Universal after preparation of simulated root canals; 2) to compare the ability of new software (Regeemy) with Adobe Photoshop in superposing and subtracting images. Twenty-five simulated root canals in acrylic-resin blocks (with 20º curvature) underwent cone beam computed tomography before and after preparation with the rotary systems (70 kVp, 4 mA, 10 s and with the 8×8 cm FoV selection). Canals were prepared up to the F2 (ProTaper) and 24.04 (Wizard CD Plus) instruments and the working length was established at 15 mm. The tomographic images were imported into the iCAT Vision software and CorelDraw for standardization. The superposition of pre- and post-instrumentation images from both systems was performed using Regeemy and Adobe Photoshop. The apical transportation was measured in millimetres using Image J. Five acrylic resin blocks were used to validate the superposition achieved by the software. Student's t-test for independent samples was used to evaluate the apical transportation achieved by the rotary systems using each software package individually. Student's t-test for paired samples was used to compare the ability of each software package in superposing and subtracting images from one rotary system at a time. The values obtained with Regeemy and Adobe Photoshop were similar for both rotary systems (P>0.05). ProTaper Universal and Wizard CD Plus promoted similar apical transportation regardless of the software used for image superposition and subtraction (P>0.05). Wizard CD Plus and ProTaper Universal promoted little apical transportation. Regeemy is a feasible software tool for superposing and subtracting images and appears to be an alternative to Adobe Photoshop.

  10. Wizard CD Plus and ProTaper Universal: analysis of apical transportation using new software

    Science.gov (United States)

    GIANNASTASIO, Daiana; da ROSA, Ricardo Abreu; PERES, Bernardo Urbanetto; BARRETO, Mirela Sangoi; DOTTO, Gustavo Nogara; KUGA, Milton Carlos; PEREIRA, Jefferson Ricardo; SÓ, Marcus Vinícius Reis

    2013-01-01

    Objective This study has two aims: 1) to evaluate the apical transportation of the Wizard CD Plus and ProTaper Universal after preparation of simulated root canals; 2) to compare the ability of new software (Regeemy) with Adobe Photoshop in superposing and subtracting images. Material and Methods Twenty-five simulated root canals in acrylic-resin blocks (with 20º curvature) underwent cone beam computed tomography before and after preparation with the rotary systems (70 kVp, 4 mA, 10 s and with the 8×8 cm FoV selection). Canals were prepared up to the F2 (ProTaper) and 24.04 (Wizard CD Plus) instruments and the working length was established at 15 mm. The tomographic images were imported into the iCAT Vision software and CorelDraw for standardization. The superposition of pre- and post-instrumentation images from both systems was performed using Regeemy and Adobe Photoshop. The apical transportation was measured in millimetres using Image J. Five acrylic resin blocks were used to validate the superposition achieved by the software. Student's t-test for independent samples was used to evaluate the apical transportation achieved by the rotary systems using each software package individually. Student's t-test for paired samples was used to compare the ability of each software package in superposing and subtracting images from one rotary system at a time. Results The values obtained with Regeemy and Adobe Photoshop were similar for both rotary systems (P>0.05). ProTaper Universal and Wizard CD Plus promoted similar apical transportation regardless of the software used for image superposition and subtraction (P>0.05). Conclusion Wizard CD Plus and ProTaper Universal promoted little apical transportation. Regeemy is a feasible software tool for superposing and subtracting images and appears to be an alternative to Adobe Photoshop. PMID:24212994

  11. Wizard CD Plus and ProTaper Universal: analysis of apical transportation using new software

    Directory of Open Access Journals (Sweden)

    Daiana Giannastasio

    2013-09-01

    Full Text Available OBJECTIVE: This study has two aims: 1) to evaluate the apical transportation of the Wizard CD Plus and ProTaper Universal after preparation of simulated root canals; 2) to compare the ability of new software (Regeemy) with Adobe Photoshop in superposing and subtracting images. MATERIAL AND METHODS: Twenty-five simulated root canals in acrylic-resin blocks (with 20º curvature) underwent cone beam computed tomography before and after preparation with the rotary systems (70 kVp, 4 mA, 10 s and with the 8×8 cm FoV selection). Canals were prepared up to the F2 (ProTaper) and 24.04 (Wizard CD Plus) instruments and the working length was established at 15 mm. The tomographic images were imported into the iCAT Vision software and CorelDraw for standardization. The superposition of pre- and post-instrumentation images from both systems was performed using Regeemy and Adobe Photoshop. The apical transportation was measured in millimetres using Image J. Five acrylic resin blocks were used to validate the superposition achieved by the software. Student's t-test for independent samples was used to evaluate the apical transportation achieved by the rotary systems using each software package individually. Student's t-test for paired samples was used to compare the ability of each software package in superposing and subtracting images from one rotary system at a time. RESULTS: The values obtained with Regeemy and Adobe Photoshop were similar for both rotary systems (P>0.05). ProTaper Universal and Wizard CD Plus promoted similar apical transportation regardless of the software used for image superposition and subtraction (P>0.05). CONCLUSION: Wizard CD Plus and ProTaper Universal promoted little apical transportation. Regeemy is a feasible software tool for superposing and subtracting images and appears to be an alternative to Adobe Photoshop.
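    The superposition-and-subtraction step used in this study can be sketched outside the named tools; the code assumes the pre- and post-instrumentation images are already spatially aligned (the job Regeemy and Photoshop performed) and uses placeholder file names:

```python
import numpy as np
from PIL import Image

# Placeholder file names; images assumed already aligned.
pre = np.asarray(Image.open("pre_instrumentation.png").convert("L"), dtype=np.int16)
post = np.asarray(Image.open("post_instrumentation.png").convert("L"), dtype=np.int16)

# Changed canal walls show up bright in the absolute difference
diff = np.abs(post - pre).astype(np.uint8)
Image.fromarray(diff).save("subtraction.png")
# Apical transportation can then be measured on the subtraction image,
# e.g., in ImageJ, as the paper does.
```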

  12. Analysis and improvement of a chaos-based image encryption algorithm

    International Nuclear Information System (INIS)

    Xiao Di; Liao Xiaofeng; Wei Pengcheng

    2009-01-01

    The security of digital images has attracted much attention recently. In Guan et al. [Guan Z, Huang F, Guan W. Chaos-based image encryption algorithm. Phys Lett A 2005; 346: 153-7.], a chaos-based image encryption algorithm was proposed. In this paper, the cause of potential flaws in the original algorithm is analyzed in detail, and the corresponding enhancement measures are proposed. Both theoretical analysis and computer simulation indicate that the improved algorithm can overcome these flaws and maintain all the merits of the original one.
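    The paper's specific cipher and its cryptanalysis are not reproduced here, but the generic structure such schemes share, a chaotic map acting as a keystream generator, can be sketched as follows; the logistic-map parameters and the stand-in image are illustrative:

```python
import numpy as np

def logistic_keystream(x0: float, r: float, n: int) -> np.ndarray:
    """Generate n keystream bytes from logistic-map iterates."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF
    return out

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in image

ks = logistic_keystream(x0=0.3456, r=3.99, n=image.size).reshape(image.shape)
cipher = image ^ ks                  # encrypt: XOR with the chaotic keystream
plain = cipher ^ ks                  # decrypt with the same key (x0, r)
assert np.array_equal(plain, image)
print("round-trip OK; cipher mean:", cipher.mean())
```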

  13. Detailed analysis of latencies in image-based dynamic MLC tracking

    International Nuclear Information System (INIS)

    Poulsen, Per Rugaard; Cho, Byungchul; Sawant, Amit; Ruan, Dan; Keall, Paul J.

    2010-01-01

    Purpose: Previous measurements of the accuracy of image-based real-time dynamic multileaf collimator (DMLC) tracking show that the major contributor to errors is latency, i.e., the delay between target motion and MLC response. Therefore the purpose of this work was to develop a method for detailed analysis of latency contributions during image-based DMLC tracking. Methods: A prototype DMLC tracking system integrated with a linear accelerator was used for tracking a phantom with an embedded fiducial marker during treatment delivery. The phantom performed a sinusoidal motion. Real-time target localization was based on x-ray images acquired either with a portal imager or a kV imager mounted orthogonal to the treatment beam. Each image was stored in a file on the imaging workstation. A marker segmentation program opened the image file, determined the marker position in the image, and transferred it to the DMLC tracking program. This program estimated the three-dimensional target position by a single-imager method and adjusted the MLC aperture to the target position. Imaging intervals ΔT_image from 150 to 1000 ms were investigated for both kV and MV imaging. After the experiments, the recorded images were synchronized with MLC log files generated by the MLC controller and tracking log files generated by the tracking program. This synchronization allowed temporal analysis of the information flow for each individual image from acquisition to completed MLC adjustment. The synchronization also allowed investigation of the MLC adjustment dynamics on a considerably finer time scale than the 50 ms time resolution of the MLC log files. Results: For ΔT_image = 150 ms, the total time from image acquisition to completed MLC adjustment was 380±9 ms for MV and 420±12 ms for kV images. The main part of this time was from image acquisition to completed image file writing (272 ms for MV and 309 ms for kV). Image file opening (38 ms), marker segmentation (4 ms), MLC position

  14. Detailed analysis of latencies in image-based dynamic MLC tracking

    Energy Technology Data Exchange (ETDEWEB)

    Poulsen, Per Rugaard; Cho, Byungchul; Sawant, Amit; Ruan, Dan; Keall, Paul J. [Department of Radiation Oncology, Stanford University, Stanford, California 94305 and Department of Oncology and Department of Medical Physics, Aarhus University Hospital, 8000 Aarhus (Denmark); Department of Radiation Oncology, Stanford University, Stanford, California 94305 and Department of Radiation Oncology, Asan Medical Center, Seoul 138-736 (Korea, Republic of); Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States)

    2010-09-15

    Purpose: Previous measurements of the accuracy of image-based real-time dynamic multileaf collimator (DMLC) tracking show that the major contributor to errors is latency, i.e., the delay between target motion and MLC response. Therefore the purpose of this work was to develop a method for detailed analysis of latency contributions during image-based DMLC tracking. Methods: A prototype DMLC tracking system integrated with a linear accelerator was used for tracking a phantom with an embedded fiducial marker during treatment delivery. The phantom performed a sinusoidal motion. Real-time target localization was based on x-ray images acquired either with a portal imager or a kV imager mounted orthogonal to the treatment beam. Each image was stored in a file on the imaging workstation. A marker segmentation program opened the image file, determined the marker position in the image, and transferred it to the DMLC tracking program. This program estimated the three-dimensional target position by a single-imager method and adjusted the MLC aperture to the target position. Imaging intervals ΔT_image from 150 to 1000 ms were investigated for both kV and MV imaging. After the experiments, the recorded images were synchronized with MLC log files generated by the MLC controller and tracking log files generated by the tracking program. This synchronization allowed temporal analysis of the information flow for each individual image from acquisition to completed MLC adjustment. The synchronization also allowed investigation of the MLC adjustment dynamics on a considerably finer time scale than the 50 ms time resolution of the MLC log files. Results: For ΔT_image = 150 ms, the total time from image acquisition to completed MLC adjustment was 380±9 ms for MV and 420±12 ms for kV images. The main part of this time was from image acquisition to completed image file writing (272 ms for MV and 309 ms for kV). Image file opening (38 ms), marker segmentation (4 ms
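    The MV stage timings quoted in the abstract (272, 38 and 4 ms) can be assembled into a simple latency budget. The 66 ms remainder attributed below to position transfer and MLC adjustment is inferred from the 380 ms total, not a figure stated in the text:

```python
# Per-image latency budget for MV imaging, from the stage timings above.
stages = {
    "acquisition -> image file written": 272.0,   # from the abstract
    "image file opening": 38.0,                   # from the abstract
    "marker segmentation": 4.0,                   # from the abstract
    "position transfer + MLC adjustment": 66.0,   # inferred remainder
}
total = sum(stages.values())                      # 380 ms, matching the text
for name, dt in stages.items():
    print(f"{name:36s} {dt:6.1f} ms ({100 * dt / total:4.1f}%)")
print(f"{'total latency':36s} {total:6.1f} ms")
```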

  15. Mapping Fire Severity Using Imaging Spectroscopy and Kernel Based Image Analysis

    Science.gov (United States)

    Prasad, S.; Cui, M.; Zhang, Y.; Veraverbeke, S.

    2014-12-01

    Improved spatial representation of within-burn heterogeneity after wildfires is paramount to effective land management decisions and more accurate fire emissions estimates. In this work, we demonstrate the feasibility and efficacy of airborne imaging spectroscopy (hyperspectral imagery) for quantifying wildfire burn severity, using kernel-based image analysis techniques. Two different airborne hyperspectral datasets, acquired over the 2011 Canyon and 2013 Rim fires in California using the Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) sensor, were used in this study. The Rim Fire, covering parts of Yosemite National Park, started on August 17, 2013, and was the third largest fire in California's history. The Canyon Fire occurred in the Tehachapi Mountains and started on September 4, 2011. In addition to post-fire data for both fires, half of the Rim Fire was also covered by pre-fire images. Fire severity was measured in the field using the Geo Composite Burn Index (GeoCBI). The field data were utilized to train and validate our models, and the trained models, in conjunction with the imaging spectroscopy data, were used for GeoCBI estimation over wide geographical regions. This work presents an approach for using remotely sensed imagery combined with GeoCBI field data to map fire scars based on non-linear (kernel-based) epsilon-Support Vector Regression (e-SVR), which was used to learn the relationship between spectra and GeoCBI in a kernel-induced feature space. Classification of healthy vegetation versus fire-affected areas based on morphological multi-attribute profiles was also studied. The availability of pre- and post-fire imaging spectroscopy data over the Rim Fire provided a unique opportunity to evaluate the performance of bi-temporal imaging spectroscopy for assessing post-fire effects. This type of data is currently constrained because of limited airborne acquisitions before a fire, but will become widespread with future spaceborne sensors such as those on
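    The e-SVR step, learning a mapping from pixel spectra to GeoCBI in a kernel-induced feature space, can be sketched with scikit-learn on synthetic stand-ins for the AVIRIS spectra and field scores:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# Synthetic stand-ins: 300 pixels x 224 AVIRIS-like bands, with a fake
# GeoCBI score driven by one band plus noise.
rng = np.random.default_rng(0)
spectra = rng.random((300, 224))
geocbi = 3.0 * spectra[:, 50] + rng.normal(0.0, 0.1, 300)

X_train, X_test, y_train, y_test = train_test_split(spectra, geocbi, random_state=0)
model = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X_train, y_train)   # e-SVR
print("R^2 on held-out pixels:", round(model.score(X_test, y_test), 3))
```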

  16. CONTEXT BASED FOOD IMAGE ANALYSIS

    OpenAIRE

    He, Ye; Xu, Chang; Khanna, Nitin; Boushey, Carol J.; Delp, Edward J.

    2013-01-01

    We are developing a dietary assessment system that records daily food intake through the use of food images. Recognizing food in an image is difficult due to large visual variance with respect to eating or preparation conditions. This task becomes even more challenging when different foods have similar visual appearance. In this paper we propose to incorporate two types of contextual dietary information, food co-occurrence patterns and personalized learning models, in food image analysis to r...

  17. Tracking Color Shift in Ballpoint Pen Ink Using Photoshop Assisted Spectroscopy: A Nondestructive Technique Developed to Rehouse a Nobel Laureate's Manuscript.

    Science.gov (United States)

    Wright, Kristi; Herro, Holly

    2016-01-01

    Many historically and culturally significant documents from the mid-to-late twentieth century were written in ballpoint pen inks, which contain light-sensitive dyes that present problems for collection custodians and paper conservators. The conservation staff at the National Library of Medicine (NLM), National Institutes of Health, conducted a multiphase project on the chemistry and aging of ballpoint pen ink that culminated in the development of a new method to detect aging of ballpoint pen ink while examining a variety of storage environments. NLM staff determined that ballpoint pen ink color shift can be detected noninvasively using image editing software. Instructions are provided on how to detect color shift in digitized materials using a technique developed specifically for this project: Photoshop Assisted Spectroscopy. The study results offer collection custodians storage options for historic documents containing ballpoint pen ink.
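    The core of the technique, reading the colour of the same ink patch in two digitizations and quantifying the difference, can be approximated outside Photoshop. A hedged sketch with placeholder file names and region coordinates, staying in RGB for simplicity:

```python
import numpy as np
from PIL import Image

def mean_rgb(path: str, box: tuple) -> np.ndarray:
    """Average colour of a sampled ink region (Photoshop's eyedropper
    readings of the same patch play this role in the published method)."""
    region = Image.open(path).convert("RGB").crop(box)
    return np.asarray(region, dtype=float).reshape(-1, 3).mean(axis=0)

# Placeholder scans of the same document made six years apart
patch = (100, 100, 140, 140)             # left, upper, right, lower
before = mean_rgb("scan_2010.png", patch)
after = mean_rgb("scan_2016.png", patch)

shift = np.linalg.norm(after - before)   # Euclidean colour difference
print(f"RGB shift: {shift:.1f} (larger values suggest dye fading)")
```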

  18. Shape analysis in medical image analysis

    CERN Document Server

    Tavares, João

    2014-01-01

    This book contains thirteen contributions from invited experts of international recognition addressing important issues in shape analysis in medical image analysis, including techniques for image segmentation, registration, modelling and classification, and applications in biology, as well as in cardiac, brain, spine, chest, lung and clinical practice. This volume treats topics such as anatomic and functional shape representation and matching; shape-based medical image segmentation; shape registration; statistical shape analysis; shape deformation; shape-based abnormity detection; shape tracking and longitudinal shape analysis; machine learning for shape modeling and analysis; shape-based computer-aided-diagnosis; shape-based medical navigation; benchmark and validation of shape representation, analysis and modeling algorithms. This work will be of interest to researchers, students, and manufacturers in the fields of artificial intelligence, bioengineering, biomechanics, computational mechanics, computationa...

  19. Retinal Imaging and Image Analysis

    Science.gov (United States)

    Abràmoff, Michael D.; Garvin, Mona K.; Sonka, Milan

    2011-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world that includes age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications. Methods for 2-D fundus imaging and techniques for 3-D optical coherence tomography (OCT) imaging are reviewed. Special attention is given to quantitative techniques for analysis of fundus photographs with a focus on clinically relevant assessment of retinal vasculature, identification of retinal lesions, assessment of optic nerve head (ONH) shape, building retinal atlases, and to automated methods for population screening for retinal diseases. A separate section is devoted to 3-D analysis of OCT images, describing methods for segmentation and analysis of retinal layers, retinal vasculature, and 2-D/3-D detection of symptomatic exudate-associated derangements, as well as to OCT-based analysis of ONH morphology and shape. Throughout the paper, aspects of image acquisition, image analysis, and clinical relevance are treated together considering their mutually interlinked relationships. PMID:22275207

  20. Object-Based Image Analysis Beyond Remote Sensing - the Human Perspective

    Science.gov (United States)

    Blaschke, T.; Lang, S.; Tiede, D.; Papadakis, M.; Györi, A.

    2016-06-01

    We introduce a prototypical methodological framework for a place-based GIS-RS system for the spatial delineation of place while incorporating spatial analysis and mapping techniques using methods from different fields such as environmental psychology, geography, and computer science. The methodological lynchpin for this to happen - when aiming to delineate place in terms of objects - is object-based image analysis (OBIA).

  1. Geographic Object-Based Image Analysis – Towards a new paradigm

    Science.gov (United States)

    Blaschke, Thomas; Hay, Geoffrey J.; Kelly, Maggi; Lang, Stefan; Hofmann, Peter; Addink, Elisabeth; Queiroz Feitosa, Raul; van der Meer, Freek; van der Werff, Harald; van Coillie, Frieke; Tiede, Dirk

    2014-01-01

    The amount of scientific literature on (Geographic) Object-based Image Analysis – GEOBIA has been and still is sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature extraction approaches. This article investigates these developments and their implications and asks whether or not this is a new paradigm in remote sensing and Geographic Information Science (GIScience). We first discuss several limitations of prevailing per-pixel methods when applied to high resolution images. Then we explore the paradigm concept developed by Kuhn (1962) and discuss whether GEOBIA can be regarded as a paradigm according to this definition. We crystallize core concepts of GEOBIA, including the role of objects, of ontologies and the multiplicity of scales, and we discuss how these conceptual developments support important methods in remote sensing such as change detection and accuracy assessment. The ramifications of the different theoretical foundations between the ‘per-pixel paradigm’ and GEOBIA are analysed, as are some of the challenges along this path from pixels, to objects, to geo-intelligence. Based on several paradigm indications as defined by Kuhn and based on an analysis of peer-reviewed scientific literature we conclude that GEOBIA is a new and evolving paradigm. PMID:24623958

  2. Students’ needs of Computer Science: learning about image processing

    Directory of Open Access Journals (Sweden)

    Juana Marlen Tellez Reinoso

    2009-12-01

    Full Text Available Learning image processing, specifically in the Adobe Photoshop application, is one of the objectives of the Bachelor of Education in Computer Science degree, intended to guarantee the preparation of students as future professionals and to help every citizen of our country attain a comprehensive general culture. For that purpose, a tutorial-type computer application entitled "Learning Image Processing" is proposed.

  3. A Cost-Effective Transparency-Based Digital Imaging for Efficient and Accurate Wound Area Measurement

    Science.gov (United States)

    Li, Pei-Nan; Li, Hong; Wu, Mo-Li; Wang, Shou-Yu; Kong, Qing-You; Zhang, Zhen; Sun, Yuan; Liu, Jia; Lv, De-Cheng

    2012-01-01

    Wound measurement is an objective and direct way to trace the course of wound healing and to evaluate therapeutic efficacy. Nevertheless, the accuracy and efficiency of the current measurement methods need to be improved. Taking advantage of the reliability of transparency tracing and the accuracy of computer-aided digital imaging, a transparency-based digital imaging approach was established, by which data from 340 wound tracings were collected from 6 experimental groups (8 rats/group) at 8 experimental time points (Days 1, 3, 5, 7, 10, 12, 14 and 16) and orderly archived onto a transparency model sheet. This sheet was scanned and its image was saved in JPG form. Since a set of standard area units from 1 mm2 to 1 cm2 was integrated into the sheet, the tracing areas in the JPG image were measured directly, using the “Magnetic lasso” tool in the Adobe Photoshop program. The pixel values (PVs) of individual outlined regions were obtained and recorded at an average speed of 27 seconds/region. All PV data were saved in an Excel form and their corresponding areas were calculated simultaneously by the formula Y (PV of the outlined region)/X (PV of standard area unit) × Z (area of standard unit). It took one researcher less than 3 hours to finish the area calculation of 340 regions. In contrast, over 3 hours were expended by three skillful researchers to accomplish the same work with the traditional transparency-based method. Moreover, unlike the results obtained traditionally, little variation was found among the data calculated by different persons and with standard area units of different sizes and shapes. Given its accurate, reproducible and efficient properties, this transparency-based digital imaging approach would be of significant value in basic wound healing research and clinical practice. PMID:22666449
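    The area formula is worth a worked example. The pixel values below are illustrative, not the study's data:

```python
# Worked example of the paper's area formula:
#   area = (PV of outlined region / PV of standard unit) x (area of unit)
pv_region = 13450          # pixels inside a traced wound outline
pv_unit = 2690             # pixels inside the 1 cm^2 standard unit on the sheet
unit_area_cm2 = 1.0

wound_area_cm2 = pv_region / pv_unit * unit_area_cm2
print(f"wound area: {wound_area_cm2:.2f} cm^2")   # -> 5.00 cm^2
```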

  4. Clinical Applications of a CT Window Blending Algorithm: RADIO (Relative Attenuation-Dependent Image Overlay).

    Science.gov (United States)

    Mandell, Jacob C; Khurana, Bharti; Folio, Les R; Hyun, Hyewon; Smith, Stacy E; Dunne, Ruth M; Andriole, Katherine P

    2017-06-01

    A methodology is described using Adobe Photoshop and Adobe ExtendScript to process DICOM images with a Relative Attenuation-Dependent Image Overlay (RADIO) algorithm to visualize the full dynamic range of CT in one view, without requiring a change in window and level settings. The potential clinical uses for such an algorithm are described in a pictorial overview, including applications in emergency radiology, oncologic imaging, and nuclear medicine and molecular imaging.
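    The general idea of blending several CT windows into a single view can be sketched as follows; the 50/50 blend is a naive stand-in, not Adobe's RADIO weighting, and the slice is synthetic:

```python
import numpy as np

def apply_window(hu: np.ndarray, center: float, width: float) -> np.ndarray:
    """Map Hounsfield units to [0, 1] with a linear window."""
    lo = center - width / 2.0
    return np.clip((hu - lo) / width, 0.0, 1.0)

# Synthetic slice in Hounsfield units (placeholder for DICOM pixel data)
rng = np.random.default_rng(0)
hu = rng.normal(40.0, 300.0, size=(128, 128))

soft = apply_window(hu, center=40, width=400)     # soft-tissue window
bone = apply_window(hu, center=500, width=1500)   # bone window
blended = 0.5 * soft + 0.5 * bone                 # naive overlay of both windows
print("blended range:", blended.min(), blended.max())
```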

  5. Photoshop® Assisted Spectroscopy: An Economical and Non-Destructive Method for Tracking Color Shift.

    Science.gov (United States)

    Wright, Kristi; Herro, Holly

    Many historically and culturally significant objects from the mid-to-late 20th century were created with media that contain light-sensitive dyes, which present problems for collection custodians and conservators. The conservation staff at the National Library of Medicine (NLM), National Institutes of Health, conducted a multi-phase project on the aging of ballpoint pen ink in a variety of enclosure types that ultimately culminated in the development of a new method to detect color shift in documents with light-sensitive media. This article offers instructions on how to detect color shift in digitized materials using Photoshop® Assisted Spectroscopy.

  6. Measurements of simulated periodontal bone defects in inverted digital image and film-based radiograph: an in vitro study

    International Nuclear Information System (INIS)

    Molon, Rafael Scaf; Morais Camillo, Juliana Aparecida Najarro Dearo; Ferreira, Mauricio Goncalves; Loffredo, Leonor Castro Monteiro; Scaf, Gulnara; Sakakura, Celso Eduardo

    2012-01-01

    This study was performed to compare inverted digital images and film-based images of dry pig mandibles for measuring periodontal bone defect depth. Forty 2-wall bone defects were made in the proximal region of the premolars in dry pig mandibles. The digital and conventional radiographs were taken using a Schick sensor and Kodak F-speed intraoral film. Image manipulation (inversion) was performed using Adobe Photoshop 7.0 software. Four trained examiners made all of the radiographic measurements in millimeters, three times each, from the cementoenamel junction to the most apical extension of the bone loss, with both types of images: inverted digital and film. The measurements were also made on the dry mandibles using a periodontal probe and digital caliper. Student's t-test was used to compare the depth measurements obtained from the two types of images with direct visual measurement on the dry mandibles. A significance level of 0.05 for a 95% confidence interval was used for each comparison. There was a significant difference between depth measurements in the inverted digital images and direct visual measurements (p>|t|=0.0039), with means of 6.29 mm (IC 95%: 6.04-6.54) and 6.79 mm (IC 95%: 6.45-7.11), respectively. There was a non-significant difference between the film-based radiographs and direct visual measurements (p>|t|=0.4950), with means of 6.64 mm (IC 95%: 6.40-6.89) and 6.79 mm (IC 95%: 6.45-7.11), respectively. The periodontal bone defect measurements in the inverted digital images were smaller than those in the film-based radiographs, underestimating the amount of bone loss.

  7. Residual stress distribution analysis of heat treated APS TBC using image based modelling.

    Science.gov (United States)

    Li, Chun; Zhang, Xun; Chen, Ying; Carr, James; Jacques, Simon; Behnsen, Julia; di Michiel, Marco; Xiao, Ping; Cernik, Robert

    2017-08-01

    We carried out a residual stress distribution analysis in an APS TBC throughout the depth of the coating. The samples were heat treated at 1150 °C for 190 h, and the data analysis used image-based modelling built on real 3D images measured by Computed Tomography (CT). The stress distribution in several 2D slices from the 3D model is included in this paper, as well as the stress distribution along several paths shown on the slices. Our analysis can explain the occurrence of the "jump" features near the interface between the top coat and the bond coat. These features in the residual stress distribution trend were measured (as a function of depth) by high-energy synchrotron XRD, as shown in our related research article entitled 'Understanding the Residual Stress Distribution through the Thickness of Atmosphere Plasma Sprayed (APS) Thermal Barrier Coatings (TBCs) by high energy Synchrotron XRD; Digital Image Correlation (DIC) and Image Based Modelling' (Li et al., 2017) [1].

  8. Image Analysis for X-ray Imaging of Food

    DEFF Research Database (Denmark)

    Einarsdottir, Hildur

    X-ray imaging systems are increasingly used for quality and safety evaluation both within food science and production. They offer non-invasive and nondestructive penetration capabilities to image the inside of food. This thesis presents applications of a novel grating-based X-ray imaging technique for quality and safety evaluation of food products. In this effort the fields of statistics, image analysis and statistical learning are combined to provide analytical tools for determining the aforementioned food traits. The work demonstrated includes a quantitative analysis of heat-induced changes and defect detection in food. Compared to the complex three-dimensional analysis of microstructure, here two-dimensional images are considered, making the method applicable in an industrial setting. The advantages obtained by grating-based imaging are compared to conventional X-ray imaging, for both foreign...

  9. Artistic image analysis using graph-based learning approaches.

    Science.gov (United States)

    Carneiro, Gustavo

    2013-08-01

    We introduce a new methodology for the problem of artistic image analysis, which among other tasks, involves the automatic identification of visual classes present in an art work. In this paper, we advocate the idea that artistic image analysis must explore a graph that captures the network of artistic influences by computing the similarities in terms of appearance and manual annotation. One of the novelties of our methodology is the proposed formulation that is a principled way of combining these two similarities in a single graph. Using this graph, we show that an efficient random walk algorithm based on an inverted label propagation formulation produces more accurate annotation and retrieval results compared with the following baseline algorithms: bag of visual words, label propagation, matrix completion, and structural learning. We also show that the proposed approach leads to a more efficient inference and training procedures. This experiment is run on a database containing 988 artistic images (with 49 visual classification problems divided into a multiclass problem with 27 classes and 48 binary problems), where we show the inference and training running times, and quantitative comparisons with respect to several retrieval and annotation performance measures.
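    Standard label propagation on a combined similarity graph gives the flavour of the approach; the authors' inverted formulation and the appearance/annotation similarities themselves are not reproduced here, and all numbers below are toy values:

```python
import numpy as np

def sym(M):
    return (M + M.T) / 2.0

# Toy similarity graph over five images; W mixes an "appearance" and an
# "annotation" similarity, echoing the paper's combined-graph idea.
W_app = np.random.default_rng(0).random((5, 5))
W_ann = np.random.default_rng(1).random((5, 5))
W = 0.5 * sym(W_app) + 0.5 * sym(W_ann)
np.fill_diagonal(W, 0.0)

S = W / W.sum(axis=1, keepdims=True)     # row-normalized transition matrix
Y = np.zeros((5, 2))
Y[0, 0] = Y[1, 1] = 1.0                  # two labelled images, two classes

F = Y.copy()
for _ in range(50):                      # iterate F = a*S@F + (1-a)*Y
    F = 0.8 * S @ F + 0.2 * Y
print("predicted class per image:", F.argmax(axis=1))
```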

  10. Change detection for synthetic aperture radar images based on pattern and intensity distinctiveness analysis

    Science.gov (United States)

    Wang, Xiao; Gao, Feng; Dong, Junyu; Qi, Qiang

    2018-04-01

    Synthetic aperture radar (SAR) imaging is independent of atmospheric conditions, which makes SAR an ideal image source for change detection. Existing methods directly analyze all regions in the speckle-noise-contaminated difference image, so their performance is easily affected by small noisy regions. In this paper, we propose a novel saliency-guided change detection framework based on pattern and intensity distinctiveness analysis. The saliency analysis step removes small noisy regions and therefore makes the proposed method more robust to speckle noise. In the proposed method, the log-ratio operator is first utilized to obtain a difference image (DI). Then, a saliency detection method based on pattern and intensity distinctiveness analysis is utilized to obtain the changed-region candidates. Finally, principal component analysis and k-means clustering are employed to analyze the pixels in the changed-region candidates, and the final change map is obtained by classifying these pixels into the changed or unchanged class. Experimental results on two real SAR image datasets demonstrate the effectiveness of the proposed method.
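    The log-ratio, PCA and k-means backbone of such pipelines can be sketched as follows (the saliency step that screens out small noisy regions is omitted, and the images are synthetic):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Synthetic speckled images with a simulated change region
rng = np.random.default_rng(0)
t1 = rng.gamma(4.0, 25.0, size=(100, 100))
t2 = t1.copy()
t2[40:60, 40:60] *= 3.0

di = np.abs(np.log(t2 + 1.0) - np.log(t1 + 1.0))  # log-ratio difference image

# 5x5 neighbourhood of every pixel as its feature vector
pad = np.pad(di, 2, mode="edge")
patches = np.stack(
    [pad[i:i + 100, j:j + 100].ravel() for i in range(5) for j in range(5)],
    axis=1,
)
features = PCA(n_components=3).fit_transform(patches)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
change_map = labels.reshape(100, 100)
# The smaller cluster is the changed class in this toy example
print("changed pixels:", int(min(change_map.sum(), (1 - change_map).sum())))
```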

  11. Image analysis and modeling in medical image computing. Recent developments and advances.

    Science.gov (United States)

    Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T

    2012-01-01

    Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice e.g. to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the grade of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements in clinical routine. In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models in the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications and medical images like radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body

  12. Fractal-Based Image Analysis In Radiological Applications

    Science.gov (United States)

    Dellepiane, S.; Serpico, S. B.; Vernazza, G.; Viviani, R.

    1987-10-01

    We present some preliminary results of a study aimed to assess the actual effectiveness of fractal theory and to define its limitations in the area of medical image analysis for texture description, in particular, in radiological applications. A general analysis to select appropriate parameters (mask size, tolerance on fractal dimension estimation, etc.) has been performed on synthetically generated images of known fractal dimensions. Moreover, we analyzed some radiological images of human organs in which pathological areas can be observed. Input images were subdivided into blocks of 6x6 pixels; then, for each block, the fractal dimension was computed in order to create fractal images whose intensity was related to the D value, i.e., texture behaviour. Results revealed that the fractal images could point out the differences between normal and pathological tissues. By applying histogram-splitting segmentation to the fractal images, pathological areas were isolated. Two different techniques (i.e., the method developed by Pentland and the "blanket" method) were employed to obtain fractal dimension values, and the results were compared; in both cases, the appropriateness of the fractal description of the original images was verified.
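    A simple box-counting estimator conveys the per-block idea; it is a stand-in for the Pentland and "blanket" methods the authors actually compared:

```python
import numpy as np

def box_counting_dimension(mask: np.ndarray) -> float:
    """Estimate the fractal (box-counting) dimension of a binary pattern."""
    sizes = [2, 4, 8, 16, 32]
    counts = []
    for s in sizes:
        h, w = mask.shape
        grid = mask[: h - h % s, : w - w % s].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(grid.any(axis=(1, 3))))
    # slope of log(count) vs log(1/size) gives the dimension
    return np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]

# A filled square should come out near dimension 2
mask = np.zeros((128, 128), dtype=bool)
mask[32:96, 32:96] = True
print(round(box_counting_dimension(mask), 2))
```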

  13. A combined use of multispectral and SAR images for ship detection and characterization through object based image analysis

    Science.gov (United States)

    Aiello, Martina; Gianinetto, Marco

    2017-10-01

    Marine routes carry a huge portion of commercial and human traffic, so surveillance, security and environmental protection themes are gaining increasing importance. Able to overcome the limits imposed by terrestrial means of monitoring, ship detection from satellite has recently prompted renewed interest in continuous monitoring of illegal activities. This paper describes an automatic Object Based Image Analysis (OBIA) approach to detect vessels made of different materials in various sea environments. The combined use of multispectral and SAR images allows for regular observation unrestricted by lighting and atmospheric conditions, with complementarity in terms of geographic coverage and geometric detail. The method adopts a region-growing algorithm to segment the image into homogeneous objects, which are then classified through a decision tree algorithm based on spectral and geometrical properties. A spatial analysis then retrieves the vessels' position, length and heading parameters, and a speed range is associated. The image processing chain is optimized by selecting image tiles through a statistical index. Vessel candidates are detected over amplitude SAR images using an adaptive-threshold Constant False Alarm Rate (CFAR) algorithm prior to the object-based analysis. Validation is carried out by comparing the retrieved parameters with the information provided by the Automatic Identification System (AIS), when available, or with manual measurement when AIS data are not available. The estimation of length shows R2=0.85 and the estimation of heading R2=0.92, computed as the average of the R2 values obtained for the optical and radar images.
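    The adaptive-threshold CFAR stage can be illustrated with a cell-averaging variant on a synthetic amplitude image; the window size and multiplier are illustrative:

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Synthetic sea clutter with a bright ship-like target
rng = np.random.default_rng(0)
sea = rng.rayleigh(scale=1.0, size=(200, 200))
sea[95:100, 95:105] += 8.0

# A pixel is a ship candidate if it exceeds the local background
# mean by k standard deviations (cell-averaging CFAR style).
mu = uniform_filter(sea, size=41)
var = uniform_filter(sea ** 2, size=41) - mu ** 2
candidates = sea > mu + 5.0 * np.sqrt(np.clip(var, 0.0, None))
print("candidate pixels:", int(candidates.sum()))
```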

  14. Measurements of simulated periodontal bone defects in inverted digital image and film-based radiograph: an in vitro study

    Energy Technology Data Exchange (ETDEWEB)

    Molon, Rafael Scaf; Morais Camillo, Juliana Aparecida Najarro Dearo; Ferreira, Mauricio Goncalves; Loffredo, Leonor Castro Monteiro; Scaf, Gulnara [Araraquara Dental School, Universidade Estadual Paulista, Sao Paulo (Brazil); Sakakura, Celso Eduardo [Barretos Dental School, Barretos Educational Fundation, Sao Paulo (Brazil)

    2012-09-15

    This study was performed to compare inverted digital images and film-based images of dry pig mandibles for measuring periodontal bone defect depth. Forty 2-wall bone defects were made in the proximal region of the premolars in dry pig mandibles. The digital and conventional radiographs were taken using a Schick sensor and Kodak F-speed intraoral film. Image manipulation (inversion) was performed using Adobe Photoshop 7.0 software. Four trained examiners made all of the radiographic measurements in millimeters, three times each, from the cementoenamel junction to the most apical extension of the bone loss, with both types of images: inverted digital and film. The measurements were also made on the dry mandibles using a periodontal probe and digital caliper. Student's t-test was used to compare the depth measurements obtained from the two types of images with direct visual measurement on the dry mandibles. A significance level of 0.05 for a 95% confidence interval was used for each comparison. There was a significant difference between depth measurements in the inverted digital images and direct visual measurements (p>|t|=0.0039), with means of 6.29 mm (IC 95%: 6.04-6.54) and 6.79 mm (IC 95%: 6.45-7.11), respectively. There was a non-significant difference between the film-based radiographs and direct visual measurements (p>|t|=0.4950), with means of 6.64 mm (IC 95%: 6.40-6.89) and 6.79 mm (IC 95%: 6.45-7.11), respectively. The periodontal bone defect measurements in the inverted digital images were smaller than those in the film-based radiographs, underestimating the amount of bone loss.

  15. MO-FG-202-06: Improving the Performance of Gamma Analysis QA with Radiomics- Based Image Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wootton, L; Nyflot, M; Ford, E [University of Washington Department of Radiation Oncology, Seattle, WA (United States); Chaovalitwongse, A [University of Washington Department of Industrial and Systems Engineering, Seattle, Washington (United States); University of Washington Department of Radiology, Seattle, WA (United States); Li, N [University of Washington Department of Industrial and Systems Engineering, Seattle, Washington (United States)

    2016-06-15

    Purpose: The use of gamma analysis for IMRT quality assurance has well-known limitations. Traditionally, a simple thresholding technique is used to evaluate passing criteria. However, like any image, the gamma distribution is rich in information, most of which thresholding discards. We therefore propose a novel method of analyzing gamma images that uses quantitative image features borrowed from radiomics, with the goal of improving error detection. Methods: 368 gamma images were generated from 184 clinical IMRT beams. For each beam the dose to a phantom was measured with EPID dosimetry and compared to the TPS dose calculated with and without normally distributed (2 mm sigma) errors in MLC positions. Seventeen intensity-histogram and size-zone radiomic features were derived from each image. The features that differed most significantly between image sets were determined with ROC analysis. A linear machine-learning model was trained on these features to classify images as with or without errors using 180 gamma images. The model was then applied to an independent validation set of 188 additional gamma distributions, half with and half without errors. Results: The most significant features for detecting errors were histogram kurtosis (p=0.007) and three size-zone metrics (p<1e-6 for each). The size-zone metrics detected clusters of high gamma-value pixels under mispositioned MLCs. The model applied to the validation set had an AUC of 0.8, compared to 0.56 for traditional gamma analysis with the decision threshold restricted to 98% or less. Conclusion: A radiomics-based image analysis method was developed that is more effective in detecting errors than traditional gamma analysis. Though the pilot study here considers only MLC position errors, radiomics-based methods for other error types are being developed, which may provide better error detection and useful information on the source of detected errors. This work was partially supported by a grant from the Agency for
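    A hedged sketch of the feature-plus-classifier idea: histogram kurtosis and a size-zone style feature computed from connected high-gamma regions, fed to a linear model. The synthetic gamma maps and thresholds are illustrative, not the study's data or its exact 17 features:

```python
import numpy as np
from scipy import ndimage, stats
from sklearn.linear_model import LogisticRegression

def radiomic_features(gamma: np.ndarray) -> list:
    """Histogram kurtosis plus a large-zone emphasis over connected
    regions where gamma exceeds 1."""
    labels, n = ndimage.label(gamma > 1.0)
    zones = np.bincount(labels.ravel())[1:].astype(float)   # zone sizes
    large_zone = float(np.mean(zones ** 2)) if n else 0.0
    return [float(stats.kurtosis(gamma.ravel())), large_zone]

# Synthetic stand-ins: error-free maps are noise; "error" maps add a
# hot cluster mimicking a mispositioned leaf.
rng = np.random.default_rng(0)
X, y = [], []
for i in range(60):
    g = np.abs(rng.normal(0.3, 0.15, size=(64, 64)))
    if i % 2:
        g[20:24, 10:40] += 1.2
    X.append(radiomic_features(g))
    y.append(i % 2)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```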

  16. Data to Pictures to Data: Outreach Imaging Software and Metadata

    Science.gov (United States)

    Levay, Z.

    2011-07-01

    A convergence between astronomy science and digital photography has enabled a steady stream of visually rich imagery from state-of-the-art data. The accessibility of hardware and software has facilitated an explosion of astronomical images for outreach, from space-based observatories, ground-based professional facilities and among the vibrant amateur astrophotography community. Producing imagery from science data involves a combination of custom software to understand FITS data (FITS Liberator), off-the-shelf, industry-standard software to composite multi-wavelength data and edit digital photographs (Adobe Photoshop), and application of photo/image-processing techniques. Some additional effort is needed to close the loop and enable this imagery to be conveniently available for various purposes beyond web and print publication. The metadata paradigms in digital photography are now complying with FITS and science software to carry information such as keyword tags and world coordinates, enabling these images to be usable in more sophisticated, imaginative ways exemplified by Sky in Google Earth and World Wide Telescope.

  17. Image edge detection based tool condition monitoring with morphological component analysis.

    Science.gov (United States)

    Yu, Xiaolong; Lin, Xin; Dai, Yiquan; Zhu, Kunpeng

    2017-07-01

    The measurement and monitoring of tool condition are key to product precision in automated manufacturing. To meet this need, this study proposes a novel tool wear monitoring approach based on edge detection in the monitored image. Image edge detection is a fundamental tool for obtaining image features. The approach extracts the tool edge with morphological component analysis: through decomposition of the original tool wear image, it reduces the influence of texture and noise on edge measurement. Based on sparse representation of the target image and edge detection, the approach can accurately extract the tool wear edge with a continuous and complete contour, which is convenient for characterizing tool conditions. Compared to established algorithms in the literature, the approach improves the integrity and connectivity of edges, and the results show that it achieves better geometric accuracy and a lower error rate in the estimation of tool conditions. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Perceptual security of encrypted images based on wavelet scaling analysis

    Science.gov (United States)

    Vargas-Olmos, C.; Murguía, J. S.; Ramírez-Torres, M. T.; Mejía Carlos, M.; Rosu, H. C.; González-Aguilar, H.

    2016-08-01

    The scaling behavior of the pixel fluctuations of encrypted images is evaluated by using detrended fluctuation analysis (DFA) based on wavelets, a modern technique that has recently been used successfully for a wide range of natural phenomena and technological processes. As encryption algorithms, we use the Advanced Encryption Standard (AES) in RBT mode and two versions of a cryptosystem based on cellular automata, with the encryption process applied both fully and partially by selecting different bitplanes. In all cases, the results show that encrypted images in which no understandable information can be visually appreciated, and whose pixels look totally random, present a persistent scaling behavior with the scaling exponent α close to 0.5, implying no correlation between pixels when the DFA with wavelets is applied. This suggests that the scaling exponents of encrypted images can be used as a perceptual security criterion, in the sense that when their values are close to 0.5 (the white-noise value) the encrypted images are more secure also from the perceptual point of view.
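    Classical DFA (polynomial detrending rather than the paper's wavelet variant) suffices to show why a well-encrypted image should give alpha near 0.5:

```python
import numpy as np

def dfa_alpha(x: np.ndarray, scales=(8, 16, 32, 64, 128)) -> float:
    """Classical DFA scaling exponent; the interpretation of alpha
    matches the wavelet-based variant used in the paper."""
    profile = np.cumsum(x - x.mean())
    flucts = []
    for s in scales:
        n = len(profile) // s
        f2 = []
        for seg in profile[: n * s].reshape(n, s):
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            f2.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

# Pixels of a well-encrypted image behave like white noise -> alpha near 0.5
rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, 65536).astype(float)
print(round(dfa_alpha(pixels), 2))
```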

  19. Image Registration Algorithm Based on Parallax Constraint and Clustering Analysis

    Science.gov (United States)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-01-01

    To resolve the problems of slow computation and low matching accuracy in image registration, a new image registration algorithm based on a parallax constraint and clustering analysis is proposed. Firstly, the Harris corner detection algorithm is used to extract the feature points of the two images. Secondly, the Normalized Cross-Correlation (NCC) function is used to perform approximate matching of the feature points, yielding the initial feature pairs. Then, according to the parallax constraint condition, the initial feature pairs are preprocessed by the K-means clustering algorithm, which removes feature point pairs with obvious errors from the approximate matching step. Finally, the Random Sample Consensus (RANSAC) algorithm is adopted to optimize the feature points and obtain the final matching result, realizing fast and accurate image registration. The experimental results show that the proposed algorithm can improve the accuracy of image matching while ensuring real-time performance.
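    A pipeline in this spirit can be sketched with OpenCV and scikit-learn; ORB features with Hamming matching stand in for the paper's Harris-plus-NCC stage, and the file names are placeholders:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

# Placeholder input images of the same scene from two viewpoints
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

src = np.float32([k1[m.queryIdx].pt for m in matches])
dst = np.float32([k2[m.trainIdx].pt for m in matches])

# Keep only the dominant displacement cluster (rough parallax consistency)
disp = dst - src
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(disp)
keep = labels == np.bincount(labels).argmax()

H, inliers = cv2.findHomography(src[keep], dst[keep], cv2.RANSAC, 3.0)
print("inlier matches:", int(inliers.sum()))
```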

  20. Benchmarking the Applicability of Ontology in Geographic Object-Based Image Analysis

    Directory of Open Access Journals (Sweden)

    Sachit Rajbhandari

    2017-11-01

    Full Text Available In Geographic Object-based Image Analysis (GEOBIA), identification of image objects is normally achieved using rule-based classification techniques supported by appropriate domain knowledge. However, GEOBIA currently lacks a systematic method to formalise the domain knowledge required for image object identification. Ontology provides a representation vocabulary for characterising domain-specific classes. This study proposes an ontological framework that conceptualises domain knowledge in order to support the application of rule-based classifications. The proposed ontological framework is tested with a landslide case study. The Web Ontology Language (OWL) is used to construct an ontology in the landslide domain. The segmented image objects with extracted features are incorporated into the ontology as instances. The classification rules are written in Semantic Web Rule Language (SWRL) and executed using a semantic reasoner to assign instances to appropriate landslide classes. Machine learning techniques are used to predict new threshold values for feature attributes in the rules. Our framework is compared with published work on landslide detection where ontology was not used for the image classification. Our results demonstrate that a classification derived from the ontological framework accords with non-ontological methods. This study benchmarks the ontological method providing an alternative approach for image classification in the case study of landslides.

  1. Fun and Games with Photoshop: Using Image Editors To Change Photographic Meaning.

    Science.gov (United States)

    Croft, Richard S.

    The introduction of techniques for digitizing photographic images, as well as the subsequent development of powerful image-editing software, has both broadened the possibilities of altering photographs and brought the means for doing so within the reach of many. This article is an informal review of the ways image-editing software can be used to…

  2. DIGITALLY QUANTIFYING CEREBRAL HEMORRHAGE USING PHOTOSHOP® AND IMAGE J

    Science.gov (United States)

    Tang, Xian Nan; Berman, Ari Ethan; Swanson, Raymond Alan; Yenari, Midori Anne

    2010-01-01

    A spectrophotometric hemoglobin assay is widely used to estimate the extent of brain hemorrhage by measuring the amount of hemoglobin in the brain. However, this method requires using the entire brain sample, leaving none for histology or other assays. Other widely used measures of gross brain hemorrhage are generally semi-quantitative and can miss subtle differences. Semi-quantitative brain hemorrhage scales may also be subject to bias. Here, we present a method to digitally quantify brain hemorrhage using Photoshop and Image J, and compared this method to the spectrophotometric hemoglobin assay. Male Sprague-Dawley rats received varying amounts of autologous blood injected into the cerebral hemispheres in order to generate different sized hematomas. 24 hours later, the brains were harvested, sectioned, photographed then prepared for the hemoglobin assay. From the brain section photographs, pixels containing hemorrhage were identified by Photoshop® and the optical intensity was measured by Image J. Identification of hemorrhage size using optical intensities strongly correlated to the hemoglobin assay (R=0.94). We conclude that our method can accurately quantify the extent of hemorrhage. An advantage of this technique is that brain tissue can be used for additional studies. PMID:20452374
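
    A hedged Python equivalent of this Photoshop-plus-ImageJ workflow might look like the sketch below: reddish pixels stand in for the Photoshop colour selection, and the mean grey level of those pixels stands in for the ImageJ optical intensity. The file name and both thresholds are hypothetical.

        import numpy as np
        from PIL import Image

        def quantify_hemorrhage(path, red_min=120, dominance=1.4):
            rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float)
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            # "Hemorrhage" pixels: strongly red and red-dominant over green/blue
            mask = (r > red_min) & (r > dominance * g) & (r > dominance * b)
            gray = rgb.mean(axis=2)
            area_px = int(mask.sum())            # spatial extent (pixel count)
            intensity = float(gray[mask].mean()) if area_px else 0.0
            return area_px, intensity

        print(quantify_hemorrhage("brain_section.png"))  # hypothetical photograph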

  3. Refusing the Stereotype: Decoding Negative Gender Imagery through a School-Based Digital Media Literacy Program

    Science.gov (United States)

    Berman, Naomi; White, Alexandra

    2013-01-01

    The media plays a significant role in shaping cultural norms and attitudes, concomitantly reinforcing "body" and "beauty" ideals and gender stereotypes. Unrealistic, photoshopped and stereotyped images used by the media, advertising and fashion industries influence young people's body image and impact on their feelings of body…

  4. Learning representative features for facial images based on a modified principal component analysis

    Science.gov (United States)

    Averkin, Anton; Potapov, Alexey

    2013-05-01

    The paper is devoted to facial image analysis and particularly deals with the problem of automatically evaluating the attractiveness of human faces. We propose a new approach for automatic construction of a feature space based on a modified principal component analysis. The input data sets for the algorithm are learning sets of facial images rated by one person. The proposed approach allows one to extract features of that individual's subjective perception of face beauty and to predict attractiveness values for new facial images that were not included in the learning set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness ratings equals 0.89. This means that the proposed approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.
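
    As a loose sketch of this idea, the snippet below builds a feature space with ordinary PCA (standing in for the paper's modified PCA) and fits a linear model to one rater's scores; the array shapes and random data are placeholders for aligned face images and real ratings.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        X = rng.random((100, 64 * 64))        # flattened, aligned face images
        y = rng.random(100)                   # one rater's attractiveness scores

        pca = PCA(n_components=20).fit(X)     # feature space from training faces
        model = LinearRegression().fit(pca.transform(X), y)

        x_new = rng.random((1, 64 * 64))      # an unseen face
        print(model.predict(pca.transform(x_new)))  # predicted attractiveness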

  5. Precision Statistical Analysis of Images Based on Brightness Distribution

    Directory of Open Access Journals (Sweden)

    Muzhir Shaban Al-Ani

    2017-07-01

    Full Text Available Studying the content of images is an important topic through which reasonable and accurate analysis of images is achieved. Image analysis has recently become a vital field because of the huge number of images transferred via transmission media in our daily life, and these crowded media highlight image analysis as a research area. In this paper, the implemented system passes through several steps to compute the statistical measures of standard deviation and mean for both color and grey images, and the last step of the proposed method compares the obtained results across the different cases of the test phase. The statistical parameters are implemented to characterize the content of an image and its texture: standard deviation, mean and correlation values are used to study the intensity distribution of the tested images. Reasonable results are obtained for both standard deviation and mean through the implementation of the system. The major issue addressed in this work is brightness distribution via statistical measures under different types of lighting.
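
    The statistical measures the paper relies on reduce to a few array operations; a minimal sketch (with a hypothetical file name) follows.

        import numpy as np
        from PIL import Image

        img = np.asarray(Image.open("test.png").convert("RGB"), dtype=float)
        gray = img.mean(axis=2)

        print("grey mean/std:", gray.mean(), gray.std())
        for i, ch in enumerate("RGB"):        # per-channel brightness statistics
            print(ch, "mean/std:", img[..., i].mean(), img[..., i].std())

        # Channel correlation, as used to study the intensity distribution
        print("R-G corr:", np.corrcoef(img[..., 0].ravel(),
                                       img[..., 1].ravel())[0, 1])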

  6. New interpretations of the Fort Clark State Historic Site based on aerial color and thermal infrared imagery

    Science.gov (United States)

    Heller, Andrew Roland

    The Fort Clark State Historic Site (32ME2) is a well-known site on the upper Missouri River, North Dakota. The site was the location of two Euroamerican trading posts and a large Mandan-Arikara earthlodge village. In 2004, Dr. Kenneth L. Kvamme and Dr. Tommy Hailey surveyed the site using aerial color and thermal infrared imagery collected from a powered parachute. Individual images were stitched together into large image mosaics and registered to Wood's 1993 interpretive map of the site using Adobe Photoshop. The analysis of those image mosaics resulted in the identification of more than 1,500 archaeological features, including as many as 124 earthlodges.

  7. Image based SAR product simulation for analysis

    Science.gov (United States)

    Domik, G.; Leberl, F.

    1987-01-01

    SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new method of product simulation is described that also employs a real SAR input image; this can be denoted 'image-based simulation'. Different methods to perform this SAR prediction are presented and their advantages and disadvantages discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used to verify the concept: input images from ascending orbits were converted into images from a descending orbit, and the results are compared to the available real imagery to verify that the prediction technique produces meaningful image data.

  8. Performance Analysis of Segmentation of Hyperspectral Images Based on Color Image Segmentation

    Directory of Open Access Journals (Sweden)

    Praveen Agarwal

    2017-06-01

    Full Text Available Image segmentation is a fundamental approach in the field of image processing whose use depends on the application. This paper proposes an original and simple segmentation strategy based on the EM approach that resolves many informatics problems for hyperspectral images observed by airborne sensors. In a first step, the input color textured image is simplified into a color image without texture. The final segmentation is then achieved by a spatial color segmentation using a feature vector built from the set of color values around the pixel to be classified. The spatial constraint takes into account the inherent spatial relationships of any image and its color. This approach provides an effective PSNR for the segmented image. The segmented images are compared with the Watershed and Region Growing algorithms, show better performance, and provide effective segmentation for spectral and medical images.
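
    The EM step can be sketched with a Gaussian mixture fitted to per-pixel spectral vectors, as below; the cube dimensions and component count are arbitrary assumptions, and the paper's spatial constraint (neighbourhood colour values in the feature vector) is omitted for brevity.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rows, cols, bands = 64, 64, 10
        cube = np.random.rand(rows, cols, bands)   # stand-in hyperspectral image

        X = cube.reshape(-1, bands)                # one feature vector per pixel
        gmm = GaussianMixture(n_components=4, covariance_type="diag").fit(X)  # EM
        labels = gmm.predict(X).reshape(rows, cols)  # per-pixel class map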

  9. Automatic comic page image understanding based on edge segment analysis

    Science.gov (United States)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai

    2013-12-01

    Comic page image understanding aims to analyse the layout of the comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique to produce the digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and further identify the reading order of these storyboards. The proposed method is performed on a data set consisting of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms the existing methods.

  10. Edge enhancement and noise suppression for infrared image based on feature analysis

    Science.gov (United States)

    Jiang, Meng

    2018-06-01

    Infrared images often suffer from background noise, blurred edges, few details and low signal-to-noise ratios. To improve infrared image quality, it is essential to suppress noise and enhance edges simultaneously. To achieve this, we propose a novel algorithm based on feature analysis in the shearlet domain. First, we introduce the theory and advantages of the shearlet transform, a form of multi-scale geometric analysis (MGA). Second, after analyzing the defects of the traditional thresholding technique for noise suppression, we propose a novel feature extraction that distinguishes image structures from noise well and use it to improve the traditional thresholding technique. Third, by computing the correlations between neighboring shearlet coefficients, feature attribute maps identifying weak details and strong edges are built to improve generalized unsharp masking (GUM). Finally, experimental results on infrared images captured in different scenes demonstrate that the proposed algorithm suppresses noise efficiently and enhances image edges adaptively.

  11. Cloud solution for histopathological image analysis using region of interest based compression.

    Science.gov (United States)

    Kanakatte, Aparna; Subramanya, Rakshith; Delampady, Ashik; Nayak, Rajarama; Purushothaman, Balamuralidhar; Gubbi, Jayavardhana

    2017-07-01

    Recent technological gains have led to the adoption of innovative cloud-based solutions in the medical imaging field. Once a medical image is acquired, it can be viewed, modified, annotated and shared on many devices. This advancement is mainly due to the introduction of cloud computing in the medical domain. Tissue pathology images are complex and are normally collected at different focal lengths using a microscope. A single whole-slide image contains many multi-resolution images stored in a pyramidal structure, with the highest-resolution image at the base and the smallest thumbnail image at the top of the pyramid. The highest-resolution image is used for tissue pathology diagnosis and analysis. Transferring and storing such huge images is a big challenge, and compression is a very useful and effective technique to reduce their size. As pathology images are used for diagnosis, no information can be lost during compression (lossless compression). A novel method of extracting the tissue region and applying lossless compression to this region and lossy compression to the empty regions is proposed in this paper. The resulting compression ratio, with lossless compression of the tissue region, is in an acceptable range, allowing efficient storage and transmission to and from the cloud.
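
    The core idea — lossless coding inside the tissue mask, lossy coding outside — can be sketched as two layers, as below; the file names, the crude whiteness threshold for tissue detection, and the JPEG quality are all assumptions.

        import numpy as np
        from PIL import Image

        arr = np.asarray(Image.open("slide.png").convert("RGB"))  # hypothetical tile

        # Crude tissue mask: non-white pixels (real systems detect tissue better)
        mask = arr.mean(axis=2) < 230

        # Lossless layer: tissue kept, background zeroed, stored as PNG
        tissue = np.where(mask[..., None], arr, 0).astype(np.uint8)
        Image.fromarray(tissue).save("tissue_lossless.png")

        # Lossy layer: background only, stored as a low-quality JPEG
        background = np.where(mask[..., None], 0, arr).astype(np.uint8)
        Image.fromarray(background).save("background_lossy.jpg", quality=30)
        # Reconstruction decodes both layers and recombines them via the mask.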

  12. Semi-supervised learning based probabilistic latent semantic analysis for automatic image annotation

    Institute of Scientific and Technical Information of China (English)

    Tian Dongping

    2017-01-01

    In recent years, the multimedia annotation problem has been attracting significant research attention in the multimedia and computer vision areas, especially automatic image annotation, whose purpose is to provide an efficient and effective searching environment for users to query their images more easily. In this paper, a semi-supervised learning based probabilistic latent semantic analysis (PLSA) model for automatic image annotation is presented. Since it is often hard to obtain or create labeled images in large quantities while unlabeled ones are easier to collect, a transductive support vector machine (TSVM) is exploited to enhance the quality of the training image data. Then, because different image features with different magnitudes result in different annotation performance, a Gaussian normalization method is utilized to normalize the different features extracted from effective image regions segmented by the normalized cuts algorithm, so as to preserve the intrinsic content of the images as completely as possible. Finally, a PLSA model with asymmetric modalities is constructed based on the expectation maximization (EM) algorithm to predict a candidate set of annotations with confidence scores. Extensive experiments on the general-purpose Corel5k dataset demonstrate that the proposed model can significantly improve the performance of traditional PLSA for the task of automatic image annotation.

  13. Knowledge-based image analysis: some aspects on the analysis of images using other types of information

    Energy Technology Data Exchange (ETDEWEB)

    Eklundh, J O

    1982-01-01

    The computer vision approach to image analysis is discussed from two aspects. First, this approach is contrasted with the pattern recognition approach. Second, it is discussed how external knowledge and information, and models from other fields of science and engineering, can be used for image and scene analysis. In particular, the connections between computer vision and computer graphics are pointed out.

  14. COLOR IMAGE RETRIEVAL BASED ON FEATURE FUSION THROUGH MULTIPLE LINEAR REGRESSION ANALYSIS

    Directory of Open Access Journals (Sweden)

    K. Seetharaman

    2015-08-01

    Full Text Available This paper proposes a novel technique based on feature fusion using multiple linear regression analysis, with the least-squares estimation method employed to estimate the parameters. The given input query image is segmented into various regions according to the structure of the image. The color and texture features are extracted from each region of the query image, and the features are fused together using the multiple linear regression model. The estimated parameters of the model, which is fitted to the features, form a vector called the feature vector. The Canberra distance measure is adopted to compare the feature vectors of the query and target images, and the F-measure is applied to evaluate the performance of the proposed technique. The results show that the proposed technique is comparable to other existing techniques.
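
    One illustrative reading of this model is sketched below: regress one feature set on the other by least squares, take the estimated coefficients as the region's feature vector, and compare vectors with the Canberra distance. The feature dimensions and random inputs are placeholders, not the paper's actual formulation.

        import numpy as np
        from scipy.spatial.distance import canberra

        def feature_vector(color_feats, texture_feats):
            """Least-squares regression parameters as a fused signature."""
            X = np.column_stack([np.ones(len(color_feats)), color_feats])
            beta, *_ = np.linalg.lstsq(X, texture_feats, rcond=None)
            return beta.ravel()

        q = feature_vector(np.random.rand(8, 3), np.random.rand(8))  # query region
        t = feature_vector(np.random.rand(8, 3), np.random.rand(8))  # target region
        print(canberra(q, t))   # smaller distance = better match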

  15. Invariant moments based convolutional neural networks for image analysis

    Directory of Open Access Journals (Sweden)

    Vijayalakshmi G.V. Mahesh

    2017-01-01

    Full Text Available The paper proposes a method using convolutional neural networks to effectively evaluate the discrimination between face and non-face patterns, gender classification using facial images, and facial expression recognition. The novelty of the method lies in initializing the trainable convolution kernel coefficients from Zernike moments of varying order. The performance of the proposed method was compared with a convolutional neural network architecture that used random kernels as initial training parameters. The multilevel configuration of Zernike moments was significant in extracting the shape information suitable for hierarchical feature learning in image analysis and classification. Furthermore, the results showed an outstanding performance of Zernike-moment-based kernels in terms of computation time and classification accuracy.

  16. Rapid Analysis and Exploration of Fluorescence Microscopy Images

    OpenAIRE

    Pavie, Benjamin; Rajaram, Satwik; Ouyang, Austin; Altschuler, Jason; Steininger, Robert J; Wu, Lani; Altschuler, Steven

    2014-01-01

    Despite rapid advances in high-throughput microscopy, quantitative image-based assays still pose significant challenges. While a variety of specialized image analysis tools are available, most traditional image-analysis-based workflows have steep learning curves (for fine tuning of analysis parameters) and result in long turnaround times between imaging and analysis. In particular, cell segmentation, the process of identifying individual cells in an image, is a major bottleneck in this regard.

  17. An ion beam analysis software based on ImageJ

    International Nuclear Information System (INIS)

    Udalagama, C.; Chen, X.; Bettiol, A.A.; Watt, F.

    2013-01-01

    The suite of techniques (RBS, STIM, ERDS, PIXE, IL, IF,…) available in ion beam analysis yields a variety of rich information. Typically, after the initial challenge of acquiring data, we are then faced with the task of having to extract relevant information or to present the data in a format with the greatest impact. This process sometimes requires developing new software tools. When faced with such situations, the usual practice at the Centre for Ion Beam Applications (CIBA) in Singapore has been to use our computational expertise to develop ad hoc software tools as and when we need them. It then became apparent that the whole ion beam community could benefit from such tools; specifically from a common software toolset that can be developed and maintained by everyone, with freedom to use and allowance to modify. In addition to the benefits of ready-made tools and sharing the onus of development, this also opens up the possibility for collaborators to access and analyse ion beam data without having to depend on an ion beam lab, which has the virtue of making ion beam techniques accessible to a broader scientific community. We have identified ImageJ as an appropriate software base on which to develop such a common toolset. In addition to being in the public domain and set up for collaborative tool development, ImageJ is accompanied by hundreds of modules (plugins) that allow great breadth in analysis. The present work is the first step towards integrating ion beam analysis into ImageJ. Some of the features of the current version of the ImageJ ‘ion beam’ plugin are: (1) reading list mode or event-by-event files, (2) energy gates/sorts, (3) sort stacks, (4) colour function, (5) real time map updating, (6) real time colour updating and (7) median and average map creation

  18. An ion beam analysis software based on ImageJ

    Energy Technology Data Exchange (ETDEWEB)

    Udalagama, C., E-mail: chammika@nus.edu.sg [Centre for Ion Beam Applications (CIBA), Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117 542 (Singapore); Chen, X.; Bettiol, A.A.; Watt, F. [Centre for Ion Beam Applications (CIBA), Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117 542 (Singapore)

    2013-07-01

    The suite of techniques (RBS, STIM, ERDS, PIXE, IL, IF,…) available in ion beam analysis yields a variety of rich information. Typically, after the initial challenge of acquiring data, we are then faced with the task of having to extract relevant information or to present the data in a format with the greatest impact. This process sometimes requires developing new software tools. When faced with such situations, the usual practice at the Centre for Ion Beam Applications (CIBA) in Singapore has been to use our computational expertise to develop ad hoc software tools as and when we need them. It then became apparent that the whole ion beam community could benefit from such tools; specifically from a common software toolset that can be developed and maintained by everyone, with freedom to use and allowance to modify. In addition to the benefits of ready-made tools and sharing the onus of development, this also opens up the possibility for collaborators to access and analyse ion beam data without having to depend on an ion beam lab, which has the virtue of making ion beam techniques accessible to a broader scientific community. We have identified ImageJ as an appropriate software base on which to develop such a common toolset. In addition to being in the public domain and set up for collaborative tool development, ImageJ is accompanied by hundreds of modules (plugins) that allow great breadth in analysis. The present work is the first step towards integrating ion beam analysis into ImageJ. Some of the features of the current version of the ImageJ ‘ion beam’ plugin are: (1) reading list mode or event-by-event files, (2) energy gates/sorts, (3) sort stacks, (4) colour function, (5) real time map updating, (6) real time colour updating and (7) median and average map creation.

  19. Comparison of grey scale median (GSM) measurement in ultrasound images of human carotid plaques using two different softwares.

    Science.gov (United States)

    Östling, Gerd; Persson, Margaretha; Hedblad, Bo; Gonçalves, Isabel

    2013-11-01

    Grey scale median (GSM) measured on ultrasound images of carotid plaques has been used for several years in research to find the vulnerable plaque. Centres have used different software packages and different methods for GSM measurement, which has resulted in a wide range of GSM values and cut-off values for the detection of the vulnerable plaque. The aim of this study was to compare the values obtained with two different software packages, using different standardization methods, for the measurement of GSM on ultrasound images of human carotid plaques. GSM was measured with Adobe Photoshop® and with Artery Measurement System (AMS) on duplex ultrasound images of 100 consecutive medium- to large-sized carotid plaques of the Beta-blocker Cholesterol-lowering Asymptomatic Plaque Study (BCAPS). The mean values of GSM were 35.2 ± 19.3 and 55.8 ± 22.5 for Adobe Photoshop® and AMS, respectively; the mean difference was 20.45 (95% CI: 19.17-21.73). Although the absolute values of GSM differed, the agreement between the two measurements was good (correlation coefficient 0.95), and a kappa analysis revealed a value of 0.68 when studying quartiles of GSM. The intra-observer variability was 1.9% for AMS and 2.5% for Adobe Photoshop. The difference between software packages and standardization methods must be taken into consideration when comparing studies. To avoid these problems, researchers should come to a consensus regarding software and standardization method for GSM measurement on ultrasound images of plaques in the arteries. © 2013 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
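
    A sketch of one common GSM convention follows: grey values are linearly rescaled so that blood maps to 0 and the adventitia to 190, and the median of the plaque pixels is reported. Centres differ in exactly these choices, which is the point of the study, so the reference values and inputs here are assumptions.

        import numpy as np

        def grey_scale_median(plaque_pixels, blood, adventitia):
            """GSM after linear normalization (blood -> 0, adventitia -> 190)."""
            norm = (plaque_pixels - blood) * 190.0 / (adventitia - blood)
            return float(np.median(np.clip(norm, 0, 255)))

        # plaque_pixels: grey values inside the delineated plaque ROI (stand-in)
        pixels = np.random.randint(10, 120, 5000).astype(float)
        print(grey_scale_median(pixels, blood=5, adventitia=185))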

  20. Fan fault diagnosis based on symmetrized dot pattern analysis and image matching

    Science.gov (United States)

    Xu, Xiaogang; Liu, Haixiao; Zhu, Hao; Wang, Songling

    2016-07-01

    To detect the mechanical failure of fans, a new diagnostic method based on the symmetrized dot pattern (SDP) analysis and image matching is proposed. Vibration signals of 13 kinds of running states are acquired on a centrifugal fan test bed and reconstructed by the SDP technique. The SDP pattern templates of each running state are established. An image matching method is performed to diagnose the fault. In order to improve the diagnostic accuracy, the single template, multiple templates and clustering fault templates are used to perform the image matching.

  1. Accuracy of lung nodule density on HRCT: analysis by PSF-based image simulation.

    Science.gov (United States)

    Ohno, Ken; Ohkubo, Masaki; Marasinghe, Janaka C; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi

    2012-11-08

    A computed tomography (CT) image simulation technique based on the point spread function (PSF) was applied to analyze the accuracy of CT-based clinical evaluations of lung nodule density. The PSF of the CT system was measured and used to perform the lung nodule image simulation. The simulated image was then resampled at intervals equal to the pixel size and the slice interval found in clinical high-resolution CT (HRCT) images. On those images, the nodule density was measured by placing a region of interest (ROI) commonly used in routine clinical practice, and the measured value was compared with the true value (the known density of the object function used in the image simulation). It was quantitatively determined that the measured nodule density depended on the nodule diameter and the image reconstruction parameters (kernel and slice thickness). In addition, the measured density fluctuated depending on the offset between the nodule center and the image voxel center. This fluctuation was reduced by decreasing the slice interval (i.e., with the use of overlapping reconstruction), leading to a stable density evaluation. Our proposed method of PSF-based image simulation accompanied by resampling enables a quantitative analysis of the accuracy of CT-based evaluations of lung nodule density. These results could potentially reveal clinical misreadings in diagnosis and lead to more accurate and precise density evaluations. They would also be of value for determining the optimum scan and reconstruction parameters, such as image reconstruction kernels and slice thicknesses/intervals.

  2. Image quality assessment based on multiscale geometric analysis.

    Science.gov (United States)

    Gao, Xinbo; Lu, Wen; Tao, Dacheng; Li, Xuelong

    2009-07-01

    Reduced-reference (RR) image quality assessment (IQA) has been recognized as an effective and efficient way to predict the visual quality of distorted images. The current standard is the wavelet-domain natural image statistics model (WNISM), which applies the Kullback-Leibler divergence between the marginal distributions of wavelet coefficients of the reference and distorted images to measure the image distortion. However, WNISM fails to consider the statistical correlations of wavelet coefficients in different subbands and the visual response characteristics of the mammalian cortical simple cells. In addition, wavelet transforms are optimal greedy approximations to extract singularity structures, so they fail to explicitly extract the image geometric information, e.g., lines and curves. Finally, wavelet coefficients are dense for smooth image edge contours. In this paper, to target the aforementioned problems in IQA, we develop a novel framework for IQA to mimic the human visual system (HVS) by incorporating the merits from multiscale geometric analysis (MGA), contrast sensitivity function (CSF), and the Weber's law of just noticeable difference (JND). In the proposed framework, MGA is utilized to decompose images and then extract features to mimic the multichannel structure of HVS. Additionally, MGA offers a series of transforms including wavelet, curvelet, bandelet, contourlet, wavelet-based contourlet transform (WBCT), and hybrid wavelets and directional filter banks (HWD), and different transforms capture different types of image geometric information. CSF is applied to weight coefficients obtained by MGA to simulate the appearance of images to observers by taking into account many of the nonlinearities inherent in HVS. JND is finally introduced to produce a noticeable variation in sensory experience. Thorough empirical studies are carried out upon the LIVE database against subjective mean opinion score (MOS) and demonstrate that 1) the proposed framework has

  3. A comparative study between xerographic, computer-assisted overlay generation and animated-superimposition methods in bite mark analyses.

    Science.gov (United States)

    Tai, Meng Wei; Chong, Zhen Feng; Asif, Muhammad Khan; Rahmat, Rabiah A; Nambiar, Phrabhakaran

    2016-09-01

    This study compared the suitability and precision of xerographic and computer-assisted methods for bite mark investigations. Eleven subjects were asked to bite on their forearm and the bite marks were photographically recorded. Alginate impressions of the subjects' dentition were taken and their casts were made using dental stone. The overlays generated by the xerographic method were obtained by photocopying the subjects' casts and transferring the incisal edge outlines onto a transparent sheet. The bite mark images were imported into Adobe Photoshop® software and printed to life-size. Bite mark analyses using the xerographically generated overlays were done by manually comparing an overlay to the corresponding printed bite mark image. In the computer-assisted method, the subjects' casts were scanned into Adobe Photoshop®, and bite mark analyses using computer-assisted overlay generation were done by matching an overlay to the corresponding bite mark image digitally within Adobe Photoshop®. A further comparison method superimposed the cast images on the corresponding bite mark images employing Adobe Photoshop® CS6 and GIF-Animator©. During analysis, each precision-determining criterion was given a score in the range 0-3, with the score increasing with better matching. The Kruskal-Wallis H test showed a significant difference between the three sets of data (H=18.761, p<0.05). In conclusion, bite mark analysis using the computer-assisted animated-superimposition method was the most accurate, followed by computer-assisted overlay generation and lastly the xerographic method. The superior precision contributed by the digital method is discernible despite human skin being a poor recording medium for bite marks. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  4. Bundle Adjustment-Based Stability Analysis Method with a Case Study of a Dual Fluoroscopy Imaging System

    Science.gov (United States)

    Al-Durgham, K.; Lichti, D. D.; Detchev, I.; Kuntze, G.; Ronsky, J. L.

    2018-05-01

    A fundamental task in photogrammetry is the temporal stability analysis of a camera or imaging system's calibration parameters. This is essential to validate the repeatability of the parameter estimation, to detect any behavioural changes in the camera/imaging system and to ensure precise photogrammetric products. Many stability analysis methods exist in the photogrammetric literature; each has a different methodological basis, with its own advantages and disadvantages. This paper presents a simple and rigorous stability analysis method that can be straightforwardly implemented for a single camera or an imaging system with multiple cameras. The basic collinearity model is used to capture differences between two calibration datasets and to establish the stability analysis methodology. Geometric simulation is used as a tool to derive image and object space scenarios. Experiments were performed on real calibration datasets from a dual fluoroscopy (DF; X-ray-based) imaging system. The calibration data consisted of hundreds of images and thousands of image observations from six temporal points over a two-day period for a precise evaluation of the DF system stability. The stability of the DF system was found to be within a range of 0.01 to 0.66 mm in terms of 3D coordinate root-mean-square error (RMSE) for single-camera analysis, and 0.07 to 0.19 mm for dual-camera analysis. To the best of the authors' knowledge, this work is the first to address the topic of DF stability analysis.

  5. Operational Automatic Remote Sensing Image Understanding Systems: Beyond Geographic Object-Based and Object-Oriented Image Analysis (GEOBIA/GEOOIA. Part 1: Introduction

    Directory of Open Access Journals (Sweden)

    Andrea Baraldi

    2012-09-01

    Full Text Available According to existing literature and despite their commercial success, state-of-the-art two-stage non-iterative geographic object-based image analysis (GEOBIA) systems and three-stage iterative geographic object-oriented image analysis (GEOOIA) systems, where GEOOIA ⊃ GEOBIA, remain affected by a lack of productivity, general consensus and research. To outperform the degree of automation, accuracy, efficiency, robustness, scalability and timeliness of existing GEOBIA/GEOOIA systems in compliance with the Quality Assurance Framework for Earth Observation (QA4EO) guidelines, this methodological work is split into two parts. The present first paper provides a multi-disciplinary Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis of the GEOBIA/GEOOIA approaches that augments similar analyses proposed in recent years. In line with constraints stemming from human vision, this SWOT analysis promotes a shift of learning paradigm in the pre-attentive vision first stage of a remote sensing (RS) image understanding system (RS-IUS), from sub-symbolic statistical model-based (inductive) image segmentation to symbolic physical model-based (deductive) image preliminary classification. Hence, a symbolic deductive pre-attentive vision first stage accomplishes image sub-symbolic segmentation and image symbolic pre-classification simultaneously. In the second part of this work a novel hybrid (combined deductive and inductive) RS-IUS architecture featuring a symbolic deductive pre-attentive vision first stage is proposed and discussed in terms of: (a) computational theory (system design); (b) information/knowledge representation; (c) algorithm design; and (d) implementation. As proof-of-concept of the symbolic physical model-based pre-attentive vision first stage, the spectral knowledge-based, operational, near real-time Satellite Image Automatic Mapper™ (SIAM™) is selected from existing literature. To the best of these authors' knowledge, this is the first time a

  6. Video Bioinformatics Analysis of Human Embryonic Stem Cell Colony Growth

    Science.gov (United States)

    Lin, Sabrina; Fonteno, Shawn; Satish, Shruthi; Bhanu, Bir; Talbot, Prue

    2010-01-01

    Because video data are complex and are comprised of many images, mining information from video material is difficult to do without the aid of computer software. Video bioinformatics is a powerful quantitative approach for extracting spatio-temporal data from video images using computer software to perform data mining and analysis. In this article, we introduce a video bioinformatics method for quantifying the growth of human embryonic stem cells (hESC) by analyzing time-lapse videos collected in a Nikon BioStation CT incubator equipped with a camera for video imaging. In our experiments, hESC colonies that were attached to Matrigel were filmed for 48 hours in the BioStation CT. To determine the rate of growth of these colonies, recipes were developed using CL-Quant software, which enables users to extract various types of data from video images. To accurately evaluate colony growth, three recipes were created: the first segmented the image into the colony and background, the second enhanced the image to define colonies throughout the video sequence accurately, and the third measured the number of pixels in the colony over time. The three recipes were run in sequence on video data collected in a BioStation CT to analyze the rate of growth of individual hESC colonies over 48 hours. To verify the truthfulness of the CL-Quant recipes, the same data were analyzed manually using Adobe Photoshop software. When the data obtained using the CL-Quant recipes and Photoshop were compared, the results were virtually identical, indicating the CL-Quant recipes were truthful. The method described here could be applied to any video data to measure growth rates of hESC or other cells that grow in colonies. In addition, other video bioinformatics recipes can be developed in the future for other cell processes such as migration, apoptosis, and cell adhesion. PMID:20495527

  7. IMAGE ANALYSIS BASED ON EDGE DETECTION TECHNIQUES

    Institute of Scientific and Technical Information of China (English)

    纳瑟; 刘重庆

    2002-01-01

    A method that incorporates edge detection, Markov Random Fields (MRF), watershed segmentation and merging techniques is presented for performing image segmentation and edge detection tasks. It first applies an edge detection technique to obtain a Difference In Strength (DIS) map. An initial segmentation result is obtained based on K-means clustering and the minimum distance, and the region process is then modeled by an MRF to obtain an image that contains different intensity regions. The gradient values are calculated and the watershed technique is applied. The DIS value is calculated for each pixel to define all the edges (weak or strong) in the image, yielding the DIS map. This serves as prior knowledge about likely region boundaries for the next step (MRF), which gives an image containing all the edge and region information. In the MRF model, the gray level l at pixel location i in an image X depends on the gray levels of neighboring pixels. The segmentation results are improved by using the watershed algorithm. After all pixels of the segmented regions are processed, a map of primitive regions with edges is generated, and the edge map is obtained using a merge process based on averaged intensity mean values. Common edge detectors that work on the MRF-segmented image are used and the results are compared. The segmentation and edge detection result is one closed boundary per actual region in the image.

  8. Integrating fuzzy object based image analysis and ant colony optimization for road extraction from remotely sensed images

    Science.gov (United States)

    Maboudi, Mehdi; Amini, Jalal; Malihi, Shirin; Hahn, Michael

    2018-04-01

    An updated road network, as a crucial part of the transportation database, plays an important role in various applications. Thus, increasing the automation of road extraction approaches from remote sensing images has been the subject of extensive research. In this paper, we propose an object-based road extraction approach from very high resolution satellite images. Based on object-based image analysis, our approach incorporates various spatial, spectral, and textural object descriptors, the capabilities of the fuzzy logic system for handling the uncertainties in road modelling, and the effectiveness and suitability of the ant colony algorithm for optimization of network-related problems. Four VHR optical satellite images acquired by the Worldview-2 and IKONOS satellites are used to evaluate the proposed approach. Evaluation of the extracted road networks shows that the average completeness, correctness, and quality of the results can reach 89%, 93% and 83% respectively, indicating that the proposed approach is applicable for urban road extraction. We also analyzed the sensitivity of our algorithm to different ant colony optimization parameter values. Comparison of the achieved results with the results of four state-of-the-art algorithms and quantifying the robustness of the fuzzy rule set demonstrate that the proposed approach is both efficient and transferable to other comparable images.

  9. A NEW FRAMEWORK FOR OBJECT-BASED IMAGE ANALYSIS BASED ON SEGMENTATION SCALE SPACE AND RANDOM FOREST CLASSIFIER

    Directory of Open Access Journals (Sweden)

    A. Hadavand

    2015-12-01

    Full Text Available In this paper a new object-based framework is developed to automate scale selection in image segmentation. The quality of image objects has an important impact on further analyses. Due to the strong dependency of segmentation results on the scale parameter, choosing the best value for this parameter, for each class, becomes a main challenge in object-based image analysis. We propose a new framework which employs a pixel-based land cover map to estimate the initial scale dedicated to each class. These scales are used to build a segmentation scale space (SSS), a hierarchy of image objects. Optimization of the SSS with respect to NDVI and DSM values in each super-object is used to obtain the best scale in local regions of the image scene. Optimized SSS segmentations are finally classified to produce the final land cover map. A very high resolution aerial image and digital surface model provided by the ISPRS 2D semantic labelling dataset are used in our experiments. The result of our proposed method is comparable to that of the ESP tool, a well-known method for estimating the segmentation scale, and marginally improved the overall accuracy of classification from 79% to 80%.

  10. Developing a methodology for three-dimensional correlation of PET-CT images and whole-mount histopathology in non-small-cell lung cancer.

    Science.gov (United States)

    Dahele, M; Hwang, D; Peressotti, C; Sun, L; Kusano, M; Okhai, S; Darling, G; Yaffe, M; Caldwell, C; Mah, K; Hornby, J; Ehrlich, L; Raphael, S; Tsao, M; Behzadi, A; Weigensberg, C; Ung, Y C

    2008-10-01

    Understanding the three-dimensional (3D) volumetric relationship between imaging and the functional or histopathologic heterogeneity of tumours is a key concept in the development of image-guided radiotherapy. Our aim was to develop a methodologic framework to enable the reconstruction of resected lung specimens containing non-small-cell lung cancer (NSCLC), to register the result in 3D with diagnostic imaging, and to import the reconstruction into a radiation treatment planning system. We recruited 12 patients for an investigation of radiology-pathology correlation (RPC) in NSCLC. Before resection, imaging by positron emission tomography (PET) or computed tomography (CT) was obtained. Resected specimens were formalin-fixed for 1-24 hours before sectioning at 3-mm to 10-mm intervals. To try to retain the original shape, we embedded the specimens in agar before sectioning. Consecutive sections were laid out for photography and manually adjusted to maintain shape. Following embedding, the tissue blocks underwent whole-mount sectioning (4-μm sections) and staining with hematoxylin and eosin. Large histopathology slides were used to whole-mount entire sections for digitization, with the correct sequence maintained to assist subsequent reconstruction. Using Photoshop (Adobe Systems Incorporated, San Jose, CA, U.S.A.), contours were placed on the photographic images to represent the external borders of each section and the extent of macroscopic disease. Sections were stacked in sequence and manually oriented in Photoshop. The macroscopic tumour contours were then transferred to MATLAB (The Mathworks, Natick, MA, U.S.A.) and stacked, producing 3D surface renderings of the resected specimen and embedded gross tumour. To evaluate the microscopic extent of disease, customized "tile-based" and commercial confocal panoramic laser scanning (TISSUEscope: Biomedical Photometrics, Waterloo, ON) systems were used to generate digital images of whole-mount histopathology sections.

  11. Wavefront analysis for plenoptic camera imaging

    International Nuclear Information System (INIS)

    Luan Yin-Sen; Xu Bing; Yang Ping; Tang Guo-Mao

    2017-01-01

    The plenoptic camera is a single-lens stereo camera which can retrieve the direction of light rays while detecting their intensity distribution. In this paper, to reveal more about plenoptic camera imaging, we present a wavefront analysis from the perspective of physical optics rather than the ray-tracing model of geometric optics. Specifically, the wavefront imaging model of a plenoptic camera is analyzed and simulated with scalar diffraction theory, and depth estimation is re-derived on the basis of physical optics. We simulate a set of raw plenoptic images of an object scene, thereby validating the analysis and derivations; the differences between imaging analyses based on geometric optics and physical optics are also shown in the simulations. (paper)

  12. Context-based coding of bilevel images enhanced by digital straight line analysis

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2006-01-01

    , or segmentation maps are also encoded efficiently. The algorithm is not targeted at document images with text, which can be coded efficiently with dictionary-based techniques as in JBIG2. The scheme is based on a local analysis of the digital straightness of the causal part of the object boundary, which is used... in the context definition for arithmetic encoding. Tested on individual images of standard TV resolution binary shapes and the binary layers of a digital map, the proposed algorithm outperforms PWC, JBIG, JBIG2, and MPEG-4 CAE. On the binary shapes, the code lengths are reduced by 21%, 27%, 28%, and 41...

  13. A Framework for Reproducible Latent Fingerprint Enhancements.

    Science.gov (United States)

    Carasso, Alfred S

    2014-01-01

    Photoshop processing of latent fingerprints is the preferred methodology among law enforcement forensic experts, but that approach is not fully reproducible and may lead to questionable enhancements. Alternative, independent, fully reproducible enhancements, using IDL Histogram Equalization and IDL Adaptive Histogram Equalization, can produce better-defined ridge structures, along with considerable background information. Applying a systematic slow-motion smoothing procedure to such IDL enhancements, based on the rapid FFT solution of a Lévy stable fractional diffusion equation, can attenuate background detail while preserving ridge information. The resulting smoothed latent print enhancements are comparable to, but distinct from, forensic Photoshop images suitable for input into automated fingerprint identification systems (AFIS). In addition, this progressive smoothing procedure can be reexamined by displaying the suite of progressively smoother IDL images. That suite can be stored, providing an audit trail that allows monitoring for possible loss of useful information in transit to the user-selected optimal image. Such independent and fully reproducible enhancements provide a valuable frame of reference that may be helpful in informing, complementing, and possibly validating the forensic Photoshop methodology.
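
    The two IDL enhancements have widely available analogues; the sketch below applies OpenCV's global and contrast-limited adaptive histogram equalization to a (hypothetical) latent print image, without attempting the paper's fractional-diffusion smoothing.

        import cv2

        img = cv2.imread("latent.png", cv2.IMREAD_GRAYSCALE)  # hypothetical print

        he = cv2.equalizeHist(img)            # global histogram equalization

        # Adaptive variant: CLAHE equalizes local tiles with a clip limit
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        ahe = clahe.apply(img)

        cv2.imwrite("latent_he.png", he)
        cv2.imwrite("latent_ahe.png", ahe)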

  14. An automated, high-throughput plant phenotyping system using machine learning-based plant segmentation and image analysis.

    Science.gov (United States)

    Lee, Unseok; Chang, Sungyul; Putra, Gian Anantrio; Kim, Hyoungseok; Kim, Dong Hwan

    2018-01-01

    A high-throughput plant phenotyping system automatically observes and grows many plant samples. Many plant sample images are acquired by the system to determine the characteristics of the plants (populations). Stable image acquisition and processing is very important to accurately determine the characteristics. However, hardware for acquiring plant images rapidly and stably, while minimizing plant stress, is lacking. Moreover, most software cannot adequately handle large-scale plant imaging. To address these problems, we developed a new, automated, high-throughput plant phenotyping system using simple and robust hardware, and an automated plant-imaging-analysis pipeline consisting of machine-learning-based plant segmentation. Our hardware acquires images reliably and quickly and minimizes plant stress. Furthermore, the images are processed automatically. In particular, large-scale plant-image datasets can be segmented precisely using a classifier developed using a superpixel-based machine-learning algorithm (Random Forest), and variations in plant parameters (such as area) over time can be assessed using the segmented images. We performed comparative evaluations to identify an appropriate learning algorithm for our proposed system, and tested three robust learning algorithms. We developed not only an automatic analysis pipeline but also a convenient means of plant-growth analysis that provides a learning data interface and visualization of plant growth trends. Thus, our system allows end-users such as plant biologists to analyze plant growth via large-scale plant image data easily.
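
    A skeletal version of the superpixel-plus-Random-Forest segmentation stage is sketched below; the image, annotations, and the mean-RGB descriptor are deliberately minimal stand-ins for the system's real data and features.

        import numpy as np
        from skimage.segmentation import slic
        from sklearn.ensemble import RandomForestClassifier

        def superpixel_features(img, segments):
            """Mean RGB per superpixel -- a deliberately minimal descriptor."""
            return np.array([img[segments == s].mean(axis=0)
                             for s in np.unique(segments)])

        img = np.random.rand(120, 160, 3)                 # stand-in RGB image
        segments = slic(img, n_segments=200, compactness=10.0)

        feats = superpixel_features(img, segments)
        labels = np.random.randint(0, 2, len(feats))      # stand-in annotations

        clf = RandomForestClassifier(n_estimators=100).fit(feats, labels)
        pred = clf.predict(feats)                         # plant vs background
        plant_area = sum(np.sum(segments == s)            # plant area in pixels
                         for s, p in zip(np.unique(segments), pred) if p == 1)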

  15. Mesh Processing in Medical Image Analysis

    DEFF Research Database (Denmark)

    The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation....

  16. Stochastic geometry for image analysis

    CERN Document Server

    Descombes, Xavier

    2013-01-01

    This book develops the stochastic geometry framework for image analysis purposes. Two main frameworks are described: marked point process and random closed set models. We derive the main issues in defining an appropriate model, and the algorithms for sampling and optimizing the models, as well as for estimating parameters, are reviewed. Numerous applications, covering remote sensing images and biological and medical imaging, are detailed. This book provides all the necessary tools for developing an image analysis application based on modern stochastic modeling.

  17. A survey of MRI-based medical image analysis for brain tumor studies

    Science.gov (United States)

    Bauer, Stefan; Wiest, Roland; Nolte, Lutz-P.; Reyes, Mauricio

    2013-07-01

    MRI-based medical image analysis for brain tumor studies is gaining attention in recent times due to an increased need for efficient and objective evaluation of large amounts of data. While the pioneering approaches applying automated methods for the analysis of brain tumor images date back almost two decades, the current methods are becoming more mature and coming closer to routine clinical application. This review aims to provide a comprehensive overview by giving a brief introduction to brain tumors and imaging of brain tumors first. Then, we review the state of the art in segmentation, registration and modeling related to tumor-bearing brain images with a focus on gliomas. The objective in the segmentation is outlining the tumor including its sub-compartments and surrounding tissues, while the main challenge in registration and modeling is the handling of morphological changes caused by the tumor. The qualities of different approaches are discussed with a focus on methods that can be applied on standard clinical imaging protocols. Finally, a critical assessment of the current state is performed and future developments and trends are addressed, giving special attention to recent developments in radiological tumor assessment guidelines.

  18. A survey of MRI-based medical image analysis for brain tumor studies

    International Nuclear Information System (INIS)

    Bauer, Stefan; Nolte, Lutz-P; Reyes, Mauricio; Wiest, Roland

    2013-01-01

    MRI-based medical image analysis for brain tumor studies is gaining attention in recent times due to an increased need for efficient and objective evaluation of large amounts of data. While the pioneering approaches applying automated methods for the analysis of brain tumor images date back almost two decades, the current methods are becoming more mature and coming closer to routine clinical application. This review aims to provide a comprehensive overview by giving a brief introduction to brain tumors and imaging of brain tumors first. Then, we review the state of the art in segmentation, registration and modeling related to tumor-bearing brain images with a focus on gliomas. The objective in the segmentation is outlining the tumor including its sub-compartments and surrounding tissues, while the main challenge in registration and modeling is the handling of morphological changes caused by the tumor. The qualities of different approaches are discussed with a focus on methods that can be applied on standard clinical imaging protocols. Finally, a critical assessment of the current state is performed and future developments and trends are addressed, giving special attention to recent developments in radiological tumor assessment guidelines. (topical review)

  19. Nephrus: expert system model in intelligent multilayers for evaluation of urinary system based on scintigraphic image analysis

    International Nuclear Information System (INIS)

    Silva, Jorge Wagner Esteves da; Schirru, Roberto; Boasquevisque, Edson Mendes

    1999-01-01

    Renal function can be measured noninvasively with radionuclides in an extremely safe way compared to other diagnostic techniques. Nevertheless, because radioactive materials are used in this procedure, it is necessary to maximize its benefits, so all efforts in the development of data analysis support tools for this diagnostic modality are justifiable. The objective of this work is to develop a prototype of a system model based on Artificial Intelligence devices able to perform functions related to the scintigraphic image analysis of the urinary system. Rules used by medical experts in the analysis of images obtained with 99mTc+DTPA and/or 99mTc+DMSA were modeled, and a Neural Network diagnosis technique was implemented. Special attention was given to the design of the program's user interface: Human Factors Engineering techniques were taken into account, allowing friendliness and robustness. The image segmentation adopts a model based on ideal ROIs, which represent the normal anatomic concept for the organs of the urinary system. The results obtained using Artificial Neural Networks for qualitative image analysis and the knowledge model constructed show the feasibility of an Artificial Intelligence implementation that uses the inherent abilities of each technique in medical diagnostic image analysis. (author)

  20. Information granules in image histogram analysis.

    Science.gov (United States)

    Wieclawek, Wojciech

    2018-04-01

    A concept of granular computing employed in intensity-based image enhancement is discussed. First, a weighted granular computing idea is introduced. Then, the implementation of this term in the image processing area is presented. Finally, multidimensional granular histogram analysis is introduced. The proposed approach is dedicated to digital images, especially to medical images acquired by Computed Tomography (CT). As the histogram equalization approach, this method is based on image histogram analysis. Yet, unlike the histogram equalization technique, it works on a selected range of the pixel intensity and is controlled by two parameters. Performance is tested on anonymous clinical CT series. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Quantitative Assessment of Pap Smear Cells by PC-Based Cytopathologic Image Analysis System and Support Vector Machine

    Science.gov (United States)

    Huang, Po-Chi; Chan, Yung-Kuan; Chan, Po-Chou; Chen, Yung-Fu; Chen, Rung-Ching; Huang, Yu-Ruei

    Cytologic screening has been widely used for controlling the prevalence of cervical cancer, but errors in sampling, screening and interpretation still conceal some abnormal results. This study aims at designing a cellular image analysis system, based on feasible and available software and hardware, for a routine cytologic laboratory. In total, 1814 cellular images from liquid-based cervical smears with Papanicolaou stain at 100x, 200x, and 400x magnification were captured by a digital camera. The cell images were reviewed by pathology experts with peer agreement, and only 503 images were selected for further study. The images were divided into 4 diagnostic categories. A PC-based cellular image analysis system (PCCIA) was developed for computing morphometric parameters, and a support vector machine (SVM) was then used to classify signature patterns. The results show that the selected 13 morphometric parameters can be used to correctly differentiate the dysplastic cells from the normal cells (p…) in gynecologic cytologic specimens.
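
    A hedged sketch of the classification stage follows: an RBF-kernel SVM trained on 13 morphometric parameters per cell, with random arrays standing in for the study's measurements and categories.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        X = np.random.rand(503, 13)           # 13 morphometric parameters per cell
        y = np.random.randint(0, 4, 503)      # 4 diagnostic categories (stand-in)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)
        scaler = StandardScaler().fit(X_tr)   # SVMs are sensitive to feature scale

        clf = SVC(kernel="rbf", C=1.0).fit(scaler.transform(X_tr), y_tr)
        print("accuracy:", clf.score(scaler.transform(X_te), y_te))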

  2. [Present status and trend of heart fluid mechanics research based on medical image analysis].

    Science.gov (United States)

    Gan, Jianhong; Yin, Lixue; Xie, Shenghua; Li, Wenhua; Lu, Jing; Luo, Anguo

    2014-06-01

    After introducing the current main methods of cardiac fluid mechanics research, we examine the characteristics and weaknesses of the three primary analysis methods, based on magnetic resonance imaging, color Doppler ultrasound and grayscale ultrasound images, respectively. It is pointed out that particle image velocimetry (PIV), speckle tracking and block matching share the same nature: all three algorithms adopt block correlation. Further analysis shows that, with the development of information technology and sensors, research on cardiac function and fluid mechanics will in future focus on the energy transfer process of cardiac blood flow, the characteristics of chamber walls in relation to blood flow, and fluid-structure interaction.

  3. Histogram analysis of T2*-based pharmacokinetic imaging in cerebral glioma grading.

    Science.gov (United States)

    Liu, Hua-Shan; Chiang, Shih-Wei; Chung, Hsiao-Wen; Tsai, Ping-Huei; Hsu, Fei-Ting; Cho, Nai-Yu; Wang, Chao-Ying; Chou, Ming-Chung; Chen, Cheng-Yu

    2018-03-01

    To investigate the feasibility of histogram analysis of the T2*-based permeability parameter volume transfer constant (Ktrans) for glioma grading and to explore the diagnostic performance of the histogram analysis of Ktrans and blood plasma volume (vp). We recruited 31 and 11 patients with high- and low-grade gliomas, respectively. The histogram parameters of Ktrans and vp, derived from first-pass pharmacokinetic modeling based on T2* dynamic susceptibility-weighted contrast-enhanced perfusion-weighted magnetic resonance imaging (T2* DSC-PW-MRI) of the entire tumor volume, were evaluated for differentiating glioma grades. Histogram parameters of Ktrans and vp showed significant differences between high- and low-grade gliomas and exhibited significant correlations with tumor grades. The mean Ktrans derived from T2* DSC-PW-MRI had the highest sensitivity and specificity for differentiating high-grade from low-grade gliomas compared with the other histogram parameters of Ktrans and vp. Histogram analysis of T2*-based pharmacokinetic imaging is useful for cerebral glioma grading. The histogram parameters of the entire-tumor Ktrans measurement can provide increased accuracy with additional information regarding microvascular permeability changes for identifying high-grade brain tumors. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Digital image analysis

    DEFF Research Database (Denmark)

    Riber-Hansen, Rikke; Vainer, Ben; Steiniche, Torben

    2012-01-01

    Digital image analysis (DIA) is increasingly implemented in histopathological research to facilitate truly quantitative measurements, decrease inter-observer variation and reduce hands-on time. Originally, efforts were made to enable DIA to reproduce manually obtained results on histological slides… reproducibility, application of stereology-based quantitative measurements, time consumption, optimization of histological slides, regions of interest selection and recent developments in staining and imaging techniques…

  5. [Perception of asymmetry smile: Attempt to evaluation through Photoshop].

    Science.gov (United States)

    Diakite, C; Diep, D; Labbe, D

    2016-04-01

    In labial palliative surgery for facial paralysis, an asymmetric smile can persist. Our goals were to evaluate the impact of an augmentation or reduction of the commissural excursion on the perception of a smile anomaly, and to determine the asymmetry threshold beyond which the smile is judged unsightly. We took a picture of two people with an unforced smile, one with a "cuspid smile" and the other with a "Mona Lisa" smile. The pictures obtained were modified with the Photoshop software to simulate labial smile asymmetry. The changes consisted of moving the left labial commissure, the left nasolabial fold, and the left cheek using under-correction and over-correction, in 4 mm steps. Three pictures with under-correction and four pictures with over-correction were obtained. These smiles were shown to three groups of five people: doctors in smile-related specialties, doctors in other specialties, and non-doctors. Participants were then asked to indicate on which of the pictures the smile seemed abnormal. Between -8 mm of under-correction and +8 mm of over-correction, the asymmetry of the commissural excursion does not hinder the perception of the smile. In labial palliative surgery for facial paralysis, in case of persistent asymmetry, there is a tolerance in the perception of a "normal" smile with respect to the amplitude of the commissural excursion, up to 8 mm of asymmetry in under-correction or over-correction. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  6. Computer-based quantitative computed tomography image analysis in idiopathic pulmonary fibrosis: A mini review.

    Science.gov (United States)

    Ohkubo, Hirotsugu; Nakagawa, Hiroaki; Niimi, Akio

    2018-01-01

    Idiopathic pulmonary fibrosis (IPF) is the most common type of progressive idiopathic interstitial pneumonia in adults. Computer-based image analysis methods for chest computed tomography (CT) used in patients with IPF include the mean CT value of the whole lungs, density histogram analysis, the density mask technique, and texture classification methods. Most of these methods offer good assessment of pulmonary function, disease progression, and mortality. Each method has merits that can be used in clinical practice. One of the texture classification methods is reported to be superior to visual CT scoring by radiologists in correlating with pulmonary function and predicting mortality. In this mini review, we summarize the current literature on computer-based CT image analysis in IPF and discuss its limitations and several future directions. Copyright © 2017 The Japanese Respiratory Society. Published by Elsevier B.V. All rights reserved.

  7. Rapid analysis and exploration of fluorescence microscopy images.

    Science.gov (United States)

    Pavie, Benjamin; Rajaram, Satwik; Ouyang, Austin; Altschuler, Jason M; Steininger, Robert J; Wu, Lani F; Altschuler, Steven J

    2014-03-19

    Despite rapid advances in high-throughput microscopy, quantitative image-based assays still pose significant challenges. While a variety of specialized image analysis tools are available, most traditional image-analysis-based workflows have steep learning curves (for fine tuning of analysis parameters) and result in long turnaround times between imaging and analysis. In particular, cell segmentation, the process of identifying individual cells in an image, is a major bottleneck in this regard. Here we present an alternate, cell-segmentation-free workflow based on PhenoRipper, an open-source software platform designed for the rapid analysis and exploration of microscopy images. The pipeline presented here is optimized for immunofluorescence microscopy images of cell cultures and requires minimal user intervention. Within half an hour, PhenoRipper can analyze data from a typical 96-well experiment and generate image profiles. Users can then visually explore their data, perform quality control on their experiment, ensure response to perturbations and check reproducibility of replicates. This facilitates a rapid feedback cycle between analysis and experiment, which is crucial during assay optimization. This protocol is useful not just as a first pass analysis for quality control, but also may be used as an end-to-end solution, especially for screening. The workflow described here scales to large data sets such as those generated by high-throughput screens, and has been shown to group experimental conditions by phenotype accurately over a wide range of biological systems. The PhenoBrowser interface provides an intuitive framework to explore the phenotypic space and relate image properties to biological annotations. Taken together, the protocol described here will lower the barriers to adopting quantitative analysis of image based screens.
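    PhenoRipper's segmentation-free strategy, describing each image by the kinds of small blocks it contains rather than by per-cell measurements, can be illustrated with a miniature sketch; block size and the number of block types below are arbitrary assumptions, not the published settings:

```python
import numpy as np
from sklearn.cluster import KMeans

def block_profiles(images, block=20, n_types=30, seed=0):
    """Represent each image by a histogram of recurring block 'phenotypes'."""
    # Cut every image into non-overlapping blocks and flatten them
    feats, owner = [], []
    for i, img in enumerate(images):
        h, w = (img.shape[0] // block) * block, (img.shape[1] // block) * block
        for y in range(0, h, block):
            for x in range(0, w, block):
                feats.append(img[y:y + block, x:x + block].ravel())
                owner.append(i)
    feats, owner = np.asarray(feats), np.asarray(owner)
    # Cluster blocks across all images into recurring block types
    labels = KMeans(n_clusters=n_types, random_state=seed, n_init=10).fit_predict(feats)
    # Each image's profile = normalized histogram of the block types it contains
    profiles = np.zeros((len(images), n_types))
    for i in range(len(images)):
        counts = np.bincount(labels[owner == i], minlength=n_types)
        profiles[i] = counts / max(counts.sum(), 1)
    return profiles
```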

  8. Computer-Based Image Analysis for Plus Disease Diagnosis in Retinopathy of Prematurity: Performance of the "i-ROP" System and Image Features Associated With Expert Diagnosis.

    Science.gov (United States)

    Ataer-Cansizoglu, Esra; Bolon-Canedo, Veronica; Campbell, J Peter; Bozkurt, Alican; Erdogmus, Deniz; Kalpathy-Cramer, Jayashree; Patel, Samir; Jonas, Karyn; Chan, R V Paul; Ostmo, Susan; Chiang, Michael F

    2015-11-01

    We developed and evaluated the performance of a novel computer-based image analysis system for grading plus disease in retinopathy of prematurity (ROP), and identified the image features, shapes, and sizes that best correlate with expert diagnosis. A dataset of 77 wide-angle retinal images from infants screened for ROP was collected. A reference standard diagnosis was determined for each image by combining image grading from 3 experts with the clinical diagnosis from ophthalmoscopic examination. Manually segmented images were cropped into a range of shapes and sizes, and a computer algorithm was developed to extract tortuosity and dilation features from arteries and veins. Each feature was fed into our system to identify the set of characteristics that yielded the highest-performing system compared to the reference standard, which we refer to as the "i-ROP" system. Among the tested crop shapes, sizes, and measured features, point-based measurements of arterial and venous tortuosity (combined), and a large circular cropped image (with radius 6 times the disc diameter), provided the highest diagnostic accuracy. The i-ROP system achieved 95% accuracy for classifying preplus and plus disease compared to the reference standard. This was comparable to the performance of the 3 individual experts (96%, 94%, 92%), and significantly higher than the mean performance of 31 nonexperts (81%). This comprehensive analysis of computer-based plus disease diagnosis suggests that it may be feasible to develop a fully automated system based on wide-angle retinal images that performs comparably to expert graders at three-level plus disease discrimination. Computer-based image analysis, using objective and quantitative retinal vascular features, has the potential to complement clinical ROP diagnosis by ophthalmologists.
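    The abstract does not define its point-based tortuosity feature; a common arc-over-chord definition, shown here purely for illustration (not necessarily the i-ROP formula), is:

```python
import numpy as np

def tortuosity(points):
    """Arc-to-chord tortuosity of a vessel centerline.

    points: (N, 2) array of ordered centerline coordinates.
    Returns 0 for a straight segment; grows as the vessel winds."""
    points = np.asarray(points, dtype=float)
    arc = np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))
    chord = np.linalg.norm(points[-1] - points[0])
    return arc / chord - 1.0
```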

  9. UAV-based urban structural damage assessment using object-based image analysis and semantic reasoning

    Science.gov (United States)

    Fernandez Galarreta, J.; Kerle, N.; Gerke, M.

    2015-06-01

    Structural damage assessment is critical after disasters but remains a challenge. Many studies have explored the potential of remote sensing data, but limitations of vertical data persist. Oblique imagery has been identified as more useful, though the multi-angle imagery also adds a new dimension of complexity. This paper addresses damage assessment based on multi-perspective, overlapping, very high resolution oblique images obtained with unmanned aerial vehicles (UAVs). 3-D point-cloud assessment for the entire building is combined with detailed object-based image analysis (OBIA) of façades and roofs. This research focuses not on automatic damage assessment, but on creating a methodology that supports the often ambiguous classification of intermediate damage levels, aiming at producing comprehensive per-building damage scores. We identify completely damaged structures in the 3-D point cloud, and for all other cases provide the OBIA-based damage indicators to be used as auxiliary information by damage analysts. The results demonstrate the usability of the 3-D point-cloud data to identify major damage features. Also the UAV-derived and OBIA-processed oblique images are shown to be a suitable basis for the identification of detailed damage features on façades and roofs. Finally, we also demonstrate the possibility of aggregating the multi-perspective damage information at building level.

  10. Principal component analysis-based imaging angle determination for 3D motion monitoring using single-slice on-board imaging.

    Science.gov (United States)

    Chen, Ting; Zhang, Miao; Jabbour, Salma; Wang, Hesheng; Barbee, David; Das, Indra J; Yue, Ning

    2018-04-10

    Through-plane motion introduces uncertainty in three-dimensional (3D) motion monitoring when using single-slice on-board imaging (OBI) modalities such as cine MRI. We propose a principal component analysis (PCA)-based framework to determine the optimal imaging plane to minimize the through-plane motion for single-slice imaging-based motion monitoring. Four-dimensional computed tomography (4DCT) images of eight thoracic cancer patients were retrospectively analyzed. The target volumes were manually delineated at different respiratory phases of 4DCT. We performed automated image registration to establish the 4D respiratory target motion trajectories for all patients. PCA was conducted using the motion information to define the three principal components of the respiratory motion trajectories. Two imaging planes were determined perpendicular to the second and third principal component, respectively, to avoid imaging with the primary principal component of the through-plane motion. Single-slice images were reconstructed from 4DCT in the PCA-derived orthogonal imaging planes and were compared against the traditional AP/Lateral image pairs on through-plane motion, residual error in motion monitoring, absolute motion amplitude error and the similarity between target segmentations at different phases. We evaluated the significance of the proposed motion monitoring improvement using paired t test analysis. The PCA-determined imaging planes had overall less through-plane motion compared against the AP/Lateral image pairs. For all patients, the average through-plane motion was 3.6 mm (range: 1.6-5.6 mm) for the AP view and 1.7 mm (range: 0.6-2.7 mm) for the Lateral view. With PCA optimization, the average through-plane motion was 2.5 mm (range: 1.3-3.9 mm) and 0.6 mm (range: 0.2-1.5 mm) for the two imaging planes, respectively. The absolute residual error of the reconstructed max-exhale-to-inhale motion averaged 0.7 mm (range: 0.4-1.3 mm, 95% CI: 0.4-1.1 mm) using
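    A minimal sketch of the plane-selection step, assuming the target trajectory is available as 3-D positions over the respiratory phases: PCA yields the dominant motion direction, and the two candidate imaging planes are taken perpendicular to the second and third principal components so that the largest component never appears as through-plane motion.

```python
import numpy as np

def pca_imaging_planes(traj):
    """traj: (n_phases, 3) target positions over the respiratory cycle.

    Returns the unit normals of the two optimized imaging planes:
    the 2nd and 3rd principal components of the motion."""
    centered = traj - traj.mean(axis=0)
    # Principal components = right singular vectors of the centered trajectory
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[1], vt[2]   # planes perpendicular to PC2 and PC3
```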

  11. Estimation of physiological parameters using knowledge-based factor analysis of dynamic nuclear medicine image sequences

    International Nuclear Information System (INIS)

    Yap, J.T.; Chen, C.T.; Cooper, M.

    1995-01-01

    The authors have previously developed a knowledge-based method of factor analysis to analyze dynamic nuclear medicine image sequences. In this paper, the authors analyze dynamic PET cerebral glucose metabolism and neuroreceptor binding studies. These methods have shown the ability to reduce the dimensionality of the data, enhance the image quality of the sequence, and generate meaningful functional images and their corresponding physiological time functions. The new information produced by the factor analysis has now been used to improve the estimation of various physiological parameters. A principal component analysis (PCA) is first performed to identify statistically significant temporal variations and remove the uncorrelated variations (noise) due to Poisson counting statistics. The statistically significant principal components are then used to reconstruct a noise-reduced image sequence as well as provide an initial solution for the factor analysis. Prior knowledge such as the compartmental models or the requirement of positivity and simple structure can be used to constrain the analysis. These constraints are used to rotate the factors to the most physically and physiologically realistic solution. The final result is a small number of time functions (factors) representing the underlying physiological processes and their associated weighting images representing the spatial localization of these functions. Estimation of physiological parameters can then be performed using the noise-reduced image sequence generated from the statistically significant PCs and/or the final factor images and time functions. These results are compared to the parameter estimation using standard methods and the original raw image sequences. Graphical analysis was performed at the pixel level to generate comparable parametric images of the slope and intercept (influx constant and distribution volume)
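    The PCA-based noise reduction described here can be sketched as follows, with the dynamic sequence arranged as a frames-by-pixels matrix; keeping a fixed number of components stands in for the paper's statistical significance test:

```python
import numpy as np

def pca_denoise_sequence(seq, n_keep):
    """seq: (n_frames, ny, nx) dynamic image sequence.

    Reconstructs the sequence from its n_keep leading principal components,
    discarding the uncorrelated (counting-noise) variation."""
    n_frames = seq.shape[0]
    X = seq.reshape(n_frames, -1)                 # frames x pixels
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    s[n_keep:] = 0.0                              # zero the insignificant components
    X_denoised = U @ np.diag(s) @ Vt + mean
    return X_denoised.reshape(seq.shape)
```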

  12. Quantitative analysis of receptor imaging

    International Nuclear Information System (INIS)

    Fu Zhanli; Wang Rongfu

    2004-01-01

    Model-based methods for quantitative analysis of receptor imaging, including kinetic, graphical and equilibrium methods, are introduced in detail. Some technical problems facing quantitative analysis of receptor imaging, such as the correction for in vivo metabolism of the tracer, the radioactivity contribution from blood volume within the ROI, and the estimation of the nondisplaceable ligand concentration, are also reviewed briefly

  13. Exploration of mineral resource deposits based on analysis of aerial and satellite image data employing artificial intelligence methods

    Science.gov (United States)

    Osipov, Gennady

    2013-04-01

    We propose a solution to the problem of exploration of various mineral resource deposits and the determination of their forms / classification of types (oil, gas, minerals, gold, etc.) with the help of satellite photography of the region of interest. Images received from the satellite are processed and analyzed to reveal the presence of specific signs of deposits of various minerals. The course of data processing and forecasting can be divided into several stages. Pre-processing of images: normalization of color and luminosity characteristics, determination of the necessary contrast level, and integration of a great number of separate photos into a single map of the region are performed. Construction of a semantic map image: recognition of the bitmapped image and allocation of objects and primitives known to the system are realized. Intelligent analysis: at this stage the acquired information is analyzed with the help of a knowledge base, which contains so-called "attention landscapes" of experts. The methods used for recognition and identification of images are: a) a combined method of image recognition, b) semantic analysis of posterized images, c) reconstruction of three-dimensional objects from bitmapped images, and d) cognitive technology for the processing and interpretation of images. This stage is fundamentally new and distinguishes the suggested technology from all others. Automatic registration of the allocation of experts' attention, i.e., registration of the so-called "attention landscape" of the expert, is the basis of the technology. Landscapes of attention are, essentially, highly effective filters that cut off unnecessary information and emphasize exactly the factors used by an expert in making a decision. The technology based on these principles involves the following stages, implemented in corresponding program agents: Training mode -> Creation of a base of ophthalmologic images (OI) -> Processing and making generalized OI (GOI) -> Mode of recognition and interpretation of unknown images.

  14. An Object-Based Image Analysis Approach for Detecting Penguin Guano in very High Spatial Resolution Satellite Images

    Directory of Open Access Journals (Sweden)

    Chandi Witharana

    2016-04-01

    Full Text Available The logistical challenges of Antarctic field work and the increasing availability of very high resolution commercial imagery have driven an interest in more efficient search and classification of remotely sensed imagery. This exploratory study employed geographic object-based image analysis (GEOBIA) methods to classify guano stains, indicative of chinstrap and Adélie penguin breeding areas, from very high spatial resolution (VHSR) satellite imagery and closely examined the transferability of knowledge-based GEOBIA rules across different study sites focusing on the same semantic class. We systematically gauged the segmentation quality, classification accuracy, and the reproducibility of fuzzy rules. A master ruleset was developed based on one study site and it was re-tasked "without adaptation" and "with adaptation" on candidate image scenes comprising guano stains. Our results suggest that object-based methods incorporating the spectral, textural, spatial, and contextual characteristics of guano are capable of successfully detecting guano stains. Reapplication of the master ruleset on candidate scenes without modifications produced inferior classification results, while adapted rules produced comparable or superior results compared to the reference image. This work provides a road map to an operational "image-to-assessment pipeline" that will enable Antarctic wildlife researchers to seamlessly integrate VHSR imagery into on-demand penguin population census.

  15. Image-Based 3d Reconstruction and Analysis for Orthodontia

    Science.gov (United States)

    Knyaz, V. A.

    2012-08-01

    Among the main tasks of orthodontia are the analysis of dental arches and treatment planning to provide the correct position for every tooth. The treatment plan is based on measurement of tooth parameters and on designing the ideal arch curve which the teeth are to form after treatment. The most common technique for moving teeth uses standard brackets placed on the teeth and a wire of given shape which is clamped by these brackets to produce the forces necessary to move each tooth in a given direction. The disadvantages of the standard bracket technique are the low accuracy of tooth dimension measurements and problems with applying the standard approach to a wide variety of complex orthodontic cases. An image-based technique for orthodontic planning, treatment and documentation aimed at overcoming these disadvantages is proposed. The proposed approach provides accurate measurements of the tooth parameters needed for adequate planning, designing the correct tooth positions and monitoring the treatment process. The developed technique applies photogrammetric means for 3D model generation of the dental arch, determination of bracket positions and analysis of tooth movement.

  16. Computer-assisted instruction; MR imaging of congenital heart disease

    International Nuclear Information System (INIS)

    Choi, Young Hi; Yu, Pil Mun; Lee, Sang Hoon; Choe, Yeon Hyeon; Kim, Yang Min

    1996-01-01

    To develop a software program for computer-assisted instruction on MR imaging of congenital heart disease, enabling repetitive and effective self-learning for medical students and residents. We used a film scanner (ScanMaker 35t) and an IBM PC (486 DX-2, 60 MHz) for acquisition and storage of image data. The accessories attached to the main processor were a CD-ROM drive (Sony), a sound card (Sound Blaster Pro), and speakers. We used Adobe Photoshop (v 3.0) and Paint Shop Pro (v 3.0) for preprocessing image data, and Paintbrush from Microsoft Windows 3.1 for labelling. The language used for programming was Visual Basic (v 3.0) from Microsoft Corporation. We developed a software program for computer-assisted instruction on MR imaging of congenital heart disease as an effective educational tool

  17. Fiber array based hyperspectral Raman imaging for chemical selective analysis of malaria-infected red blood cells

    Energy Technology Data Exchange (ETDEWEB)

    Brückner, Michael [Leibniz Institute of Photonic Technology, 07745 Jena (Germany); Becker, Katja [Justus Liebig University Giessen, Biochemistry and Molecular Biology, 35392 Giessen (Germany); Popp, Jürgen [Leibniz Institute of Photonic Technology, 07745 Jena (Germany); Friedrich Schiller University Jena, Institute for Physical Chemistry, 07745 Jena (Germany); Friedrich Schiller University Jena, Abbe Centre of Photonics, 07745 Jena (Germany); Frosch, Torsten, E-mail: torsten.frosch@uni-jena.de [Leibniz Institute of Photonic Technology, 07745 Jena (Germany); Friedrich Schiller University Jena, Institute for Physical Chemistry, 07745 Jena (Germany); Friedrich Schiller University Jena, Abbe Centre of Photonics, 07745 Jena (Germany)

    2015-09-24

    A new setup for Raman spectroscopic wide-field imaging is presented. It combines the advantages of a fiber array based spectral translator with a tailor-made laser illumination system for high-quality Raman chemical imaging of sensitive biological samples. The Gaussian-like intensity distribution of the illuminating laser beam is shaped by a square-core optical multimode fiber to a top-hat profile with very homogeneous intensity distribution to fulfill the conditions of Koehler. The 30 m long optical fiber and an additional vibrator efficiently destroy the polarization and coherence of the illuminating light. This homogeneous, incoherent illumination is an essential prerequisite for stable quantitative imaging of complex biological samples. The fiber array translates the two-dimensional lateral information of the Raman stray light into separated spectral channels with very high contrast. The Raman image can be correlated with a corresponding white light microscopic image of the sample. The new setup enables simultaneous quantification of all Raman spectra across the whole spatial area with very good spectral resolution and thus outperforms other Raman imaging approaches based on scanning and tunable filters. The unique capabilities of the setup for fast, gentle, sensitive, and selective chemical imaging of biological samples were applied for automated hemozoin analysis. A special algorithm was developed to generate Raman images based on the hemozoin distribution in red blood cells without any influence from other Raman scattering. The new imaging setup in combination with the robust algorithm provides a novel, elegant way for chemical selective analysis of the malaria pigment hemozoin in early ring stages of Plasmodium falciparum infected erythrocytes. - Highlights: • Raman hyperspectral imaging allows for chemical selective analysis of biological samples with spatial heterogeneity. • A homogeneous, incoherent illumination is essential for reliable
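    The hemozoin image-generation algorithm is not given in the abstract; as a generic illustration, a chemically selective Raman image can be formed by integrating a marker band above a local linear baseline. The band position below is an assumption based on published hemozoin lines, not the authors' choice:

```python
import numpy as np

def band_image(cube, wavenumbers, band=(1570.0, 1600.0)):
    """cube: (ny, nx, n_wn) hyperspectral Raman data cube.

    Integrates an assumed hemozoin marker band (~1589 cm^-1 region)
    above a linear baseline drawn between the band edges."""
    lo, hi = np.searchsorted(wavenumbers, band)
    seg = cube[:, :, lo:hi]
    # Linear baseline between the two band edges, per pixel
    ramp = np.linspace(0.0, 1.0, seg.shape[-1])
    baseline = seg[:, :, :1] * (1 - ramp) + seg[:, :, -1:] * ramp
    return np.trapz(seg - baseline, wavenumbers[lo:hi], axis=-1)
```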

  18. Fiber array based hyperspectral Raman imaging for chemical selective analysis of malaria-infected red blood cells

    International Nuclear Information System (INIS)

    Brückner, Michael; Becker, Katja; Popp, Jürgen; Frosch, Torsten

    2015-01-01

    A new setup for Raman spectroscopic wide-field imaging is presented. It combines the advantages of a fiber array based spectral translator with a tailor-made laser illumination system for high-quality Raman chemical imaging of sensitive biological samples. The Gaussian-like intensity distribution of the illuminating laser beam is shaped by a square-core optical multimode fiber to a top-hat profile with very homogeneous intensity distribution to fulfill the conditions of Koehler. The 30 m long optical fiber and an additional vibrator efficiently destroy the polarization and coherence of the illuminating light. This homogeneous, incoherent illumination is an essential prerequisite for stable quantitative imaging of complex biological samples. The fiber array translates the two-dimensional lateral information of the Raman stray light into separated spectral channels with very high contrast. The Raman image can be correlated with a corresponding white light microscopic image of the sample. The new setup enables simultaneous quantification of all Raman spectra across the whole spatial area with very good spectral resolution and thus outperforms other Raman imaging approaches based on scanning and tunable filters. The unique capabilities of the setup for fast, gentle, sensitive, and selective chemical imaging of biological samples were applied for automated hemozoin analysis. A special algorithm was developed to generate Raman images based on the hemozoin distribution in red blood cells without any influence from other Raman scattering. The new imaging setup in combination with the robust algorithm provides a novel, elegant way for chemical selective analysis of the malaria pigment hemozoin in early ring stages of Plasmodium falciparum infected erythrocytes. - Highlights: • Raman hyperspectral imaging allows for chemical selective analysis of biological samples with spatial heterogeneity. • A homogeneous, incoherent illumination is essential for reliable

  19. Fast Depiction Invariant Visual Similarity for Content Based Image Retrieval Based on Data-driven Visual Similarity using Linear Discriminant Analysis

    Science.gov (United States)

    Wihardi, Y.; Setiawan, W.; Nugraha, E.

    2018-01-01

    In this research we try to build a CBIR system based on learning a distance/similarity function using Linear Discriminant Analysis (LDA) and Histogram of Oriented Gradients (HoG) features. Our method is invariant to the depiction of an image, covering image-to-image, sketch-to-image, and painting-to-image similarity. LDA can decrease execution time compared to the state-of-the-art method, but it still needs improvement in terms of accuracy. Inaccuracy in our experiment happens because we did not perform a sliding-window search and because of the low number of negative samples of natural-world images.
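    A minimal sketch of this kind of pipeline, HoG features followed by an LDA projection in which gallery items are ranked by distance, using scikit-image and scikit-learn (an illustration of the general technique, not the authors' implementation; the cell size is arbitrary):

```python
import numpy as np
from skimage.feature import hog
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_retrieval_space(images, labels):
    """images: equal-sized grayscale arrays; labels: class per image.
    Learns an LDA projection over HoG features and projects the gallery."""
    X = np.array([hog(im, pixels_per_cell=(16, 16)) for im in images])
    lda = LinearDiscriminantAnalysis().fit(X, labels)
    return lda, lda.transform(X)

def query(lda, gallery, image):
    """Rank gallery items by Euclidean distance in the learned LDA space."""
    q = lda.transform(hog(image, pixels_per_cell=(16, 16)).reshape(1, -1))
    return np.argsort(np.linalg.norm(gallery - q, axis=1))
```

    For depiction invariance, the training labels would group different depictions (photo, sketch, painting) of the same object, so that the learned projection pulls them together.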

  20. Image registration based on virtual frame sequence analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chen, H.; Ng, W.S. [Nanyang Technological University, Computer Integrated Medical Intervention Laboratory, School of Mechanical and Aerospace Engineering, Singapore (Singapore); Shi, D. (Nanyang Technological University, School of Computer Engineering, Singapore, Singpore); Wee, S.B. [Tan Tock Seng Hospital, Department of General Surgery, Singapore (Singapore)

    2007-08-15

    This paper proposes a new framework for medical image registration with large nonrigid deformations, which remains one of the biggest challenges for image fusion and further analysis in many medical applications. The registration problem is formulated as recovering a deformation process with known initial and final states. To deal with large nonlinear deformations, virtual frames are inserted to model the deformation process. A time parameter is introduced and the deformation between consecutive frames is described with a linear affine transformation. Experiments were conducted with simple geometric deformations as well as complex deformations present in MRI and ultrasound images. All the deformations are characterized by nonlinearity. The positive results demonstrate the effectiveness of this algorithm. The framework proposed in this paper is feasible for registering medical images with large nonlinear deformations and is especially useful for sequential images. (orig.)

  1. Lattice and strain analysis of atomic resolution Z-contrast images based on template matching

    Energy Technology Data Exchange (ETDEWEB)

    Zuo, Jian-Min, E-mail: jianzuo@uiuc.edu [Department of Materials Science and Engineering, University of Illinois, Urbana, IL 61801 (United States); Seitz Materials Research Laboratory, University of Illinois, Urbana, IL 61801 (United States); Shah, Amish B. [Center for Microanalysis of Materials, Materials Research Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL 61801 (United States); Kim, Honggyu; Meng, Yifei; Gao, Wenpei [Department of Materials Science and Engineering, University of Illinois, Urbana, IL 61801 (United States); Seitz Materials Research Laboratory, University of Illinois, Urbana, IL 61801 (United States); Rouviére, Jean-Luc [CEA-INAC/UJF-Grenoble UMR-E, SP2M, LEMMA, Minatec, Grenoble 38054 (France)

    2014-01-15

    A real space approach is developed based on template matching for quantitative lattice analysis using atomic resolution Z-contrast images. The method, called TeMA, uses the template of an atomic column, or a group of atomic columns, to transform the image into a lattice of correlation peaks. This is helped by using a local intensity adjusted correlation and by the design of templates. Lattice analysis is performed on the correlation peaks. A reference lattice is used to correct for scan noise and scan distortions in the recorded images. Using these methods, we demonstrate that a precision of a few picometers is achievable in lattice measurement using aberration corrected Z-contrast images. For application, we apply the methods to strain analysis of a molecular beam epitaxy (MBE) grown LaMnO₃ and SrMnO₃ superlattice. The results show alternating epitaxial strain inside the superlattice and its variations across interfaces at the spatial resolution of a single perovskite unit cell. Our methods are general, model free and provide high spatial resolution for lattice analysis. - Highlights: • A real space approach is developed for strain analysis using atomic resolution Z-contrast images and template matching. • A precision of a few picometers is achievable in the measurement of lattice displacements. • The spatial resolution of a single perovskite unit cell is demonstrated for a LaMnO₃ and SrMnO₃ superlattice grown by MBE.
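    The heart of such a template-matching analysis, correlating an atomic-column template with the image and reading off correlation peaks, can be sketched with scikit-image; this bare-bones illustration omits TeMA's local intensity adjustment, reference lattice and distortion correction:

```python
import numpy as np
from skimage.feature import match_template, peak_local_max

def lattice_peaks(image, template, min_distance=5, threshold=0.5):
    """Correlate an atomic-column template with a Z-contrast image and
    return the peak coordinates of the resulting correlation map."""
    corr = match_template(image, template, pad_input=True)
    peaks = peak_local_max(corr, min_distance=min_distance,
                           threshold_abs=threshold)
    return peaks   # (n, 2) row/col positions of candidate atomic columns
```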

  2. A quality quantitative method of silicon direct bonding based on wavelet image analysis

    Science.gov (United States)

    Tan, Xiao; Tao, Zhi; Li, Haiwang; Xu, Tiantong; Yu, Mingxing

    2018-04-01

    The rapid development of MEMS (micro-electro-mechanical systems) has received significant attention from researchers in various fields and subjects. In particular, the MEMS fabrication process is elaborate and, as such, has been the focus of extensive research inquiries. However, in MEMS fabrication, component bonding is difficult to achieve and requires a complex approach. Thus, improvements in bonding quality are relatively important objectives. A higher quality bond can only be achieved with improved measurement and testing capabilities. In particular, the traditional testing methods mainly include infrared testing, tensile testing, and strength testing, despite the fact that using these methods to measure bond quality often results in low efficiency or destructive analysis. Therefore, this paper focuses on the development of a precise, nondestructive visual testing method based on wavelet image analysis that is shown to be highly effective in practice. The process of wavelet image analysis includes wavelet image denoising, wavelet image enhancement, and contrast enhancement, and as an end result, can display an image with low background noise. In addition, because the wavelet analysis software was developed with MATLAB, it can reveal the bonding boundaries and bonding rates to precisely indicate the bond quality at all locations on the wafer. This work also presents a set of orthogonal experiments that consist of three prebonding factors, the prebonding temperature, the positive pressure value and the prebonding time, which are used to analyze the prebonding quality. This method was used to quantify the quality of silicon-to-silicon wafer bonding, yielding standard treatment quantities that could be practical for large-scale use.
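    The wavelet denoising stage of such a pipeline can be approximated in a few lines with PyWavelets (a generic soft-thresholding sketch under assumed parameters, not the paper's MATLAB implementation):

```python
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="db4", level=3):
    """Soft-threshold the detail coefficients of a 2-D wavelet transform."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Universal threshold estimated from the finest diagonal detail band
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(img.size))
    out = [coeffs[0]]
    for cH, cV, cD in coeffs[1:]:
        out.append(tuple(pywt.threshold(c, thr, mode="soft")
                         for c in (cH, cV, cD)))
    return pywt.waverec2(out, wavelet)
```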

  3. An expert image analysis system for chromosome analysis application

    International Nuclear Information System (INIS)

    Wu, Q.; Suetens, P.; Oosterlinck, A.; Van den Berghe, H.

    1987-01-01

    This paper reports a recent study on applying a knowledge-based system approach as a new attempt to solve the problem of chromosome classification. A theoretical framework for an expert image analysis system is proposed, based on this study. In this scheme, chromosome classification is carried out under a hypothesize-and-verify paradigm by integrating a rule-based component, in which the expertise of chromosome karyotyping is formulated, with an existing image analysis system that uses conventional pattern recognition techniques. Results from the existing system can be used to generate hypotheses, and with the rule-based verification and modification procedures, improvement of the classification performance can be expected

  4. Analysis of rocket flight stability based on optical image measurement

    Science.gov (United States)

    Cui, Shuhua; Liu, Junhu; Shen, Si; Wang, Min; Liu, Jun

    2018-02-01

    Based on the abundant optical image measurement data available, this paper puts forward a method of evaluating rocket flight stability using measurements of the carrier rocket's imaged characteristics. The attitude parameters of the rocket body in the coordinate system are calculated from the measurement data of multiple high-speed television cameras, the parameters are then converted to the rocket body's angle of attack, and it is assessed whether the rocket has good flight stability, i.e., flies with a small angle of attack. The measurement method and the mathematical algorithm steps were verified through a data processing test, in which the rocket flight stability state can be observed intuitively, and the results can also be used for guidance system verification or failure analysis.

  5. Free and open source software for the manipulation of digital images.

    Science.gov (United States)

    Solomon, Robert W

    2009-06-01

    Free and open source software is a type of software that is nearly as powerful as commercial software but is freely downloadable. This software can do almost everything that the expensive programs can. GIMP (GNU Image Manipulation Program) is the free program that is comparable to Photoshop, and versions are available for the Windows, Macintosh, and Linux platforms. This article briefly describes how GIMP can be installed and used to manipulate radiology images. It is no longer necessary to budget large amounts of money for high-quality software to achieve the goals of image processing and document creation, because free and open source software is available for the user to download at will.

  6. Explicit area-based accuracy assessment for mangrove tree crown delineation using Geographic Object-Based Image Analysis (GEOBIA)

    Science.gov (United States)

    Kamal, Muhammad; Johansen, Kasper

    2017-10-01

    Effective mangrove management requires spatially explicit information in the form of a mangrove tree crown map as a basis for ecosystem diversity studies and health assessment. Accuracy assessment is an integral part of any mapping activity to measure the effectiveness of the classification approach. In geographic object-based image analysis (GEOBIA), assessment of the geometric accuracy (shape, symmetry and location) of the image objects created by image segmentation is required. In this study we used an explicit area-based accuracy assessment to measure the degree of similarity between the classification results and reference data from different aspects, including overall quality (OQ), user's accuracy (UA), producer's accuracy (PA) and overall accuracy (OA). We developed a rule set to delineate the mangrove tree crowns using a WorldView-2 pan-sharpened image. The reference map was obtained by visual delineation of the mangrove tree crown boundaries from a very high-spatial-resolution aerial photograph (7.5 cm pixel size). Ten random points, each with a 10 m radius circular buffer, were created for the area-based accuracy assessment. The resulting circular polygons were used to clip both the classified image objects and the reference map for area comparisons. In this case, the area-based accuracy assessment resulted in 64% and 68% for OQ and OA, respectively. The overall quality shows the class-related area accuracy: the area correctly classified as tree crowns was 64% of the total tree crown area. The overall accuracy of 68%, on the other hand, was calculated as the percentage of all correctly classified classes (tree crowns and canopy gaps) relative to the total class area (the entire image). Overall, the area-based accuracy assessment was simple to implement and easy to interpret. It also explicitly shows the variation of omission and commission errors in object boundary delineation with colour-coded polygons.
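    With the classification and reference maps rasterized to boolean masks, the area-based measures reduce to simple area ratios. A sketch using the common GEOBIA definitions, which may differ in detail from those used in the paper:

```python
import numpy as np

def area_accuracy(classified, reference):
    """classified, reference: boolean masks of the tree-crown class.

    Returns overall quality (OQ), user's accuracy (UA) and
    producer's accuracy (PA) as area ratios."""
    inter = np.logical_and(classified, reference).sum()
    union = np.logical_or(classified, reference).sum()
    oq = inter / union              # intersection over union
    ua = inter / classified.sum()   # correct share of what was mapped
    pa = inter / reference.sum()    # share of the reference that was found
    return oq, ua, pa
```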

  7. Image Analysis Based on Soft Computing and Applied on Space Shuttle During the Liftoff Process

    Science.gov (United States)

    Dominquez, Jesus A.; Klinko, Steve J.

    2007-01-01

    Imaging techniques based on Soft Computing (SC) and developed at Kennedy Space Center (KSC) have been implemented on a variety of prototype applications related to the safe operation of the Space Shuttle during the liftoff process. These SC-based prototype applications include detection and tracking of moving Foreign Object Debris (FOD) during the Space Shuttle liftoff, visual anomaly detection on slidewires used in the emergency egress system for the Space Shuttle at the launch pad, and visual detection of distant birds approaching the Space Shuttle launch pad. This SC-based image analysis capability developed at KSC was also used to analyze images acquired during the accident of the Space Shuttle Columbia and to estimate the trajectory and velocity of the foam that caused the accident.

  8. Container-Based Clinical Solutions for Portable and Reproducible Image Analysis.

    Science.gov (United States)

    Matelsky, Jordan; Kiar, Gregory; Johnson, Erik; Rivera, Corban; Toma, Michael; Gray-Roncal, William

    2018-05-08

    Medical imaging analysis depends on the reproducibility of complex computation. Linux containers enable the abstraction, installation, and configuration of environments so that software can be both distributed in self-contained images and used repeatably by tool consumers. While several initiatives in neuroimaging have adopted approaches for creating and sharing more reliable scientific methods and findings, Linux containers are not yet mainstream in clinical settings. We explore related technologies and their efficacy in this setting, highlight important shortcomings, demonstrate a simple use-case, and endorse the use of Linux containers for medical image analysis.
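    As an illustration of the container-based pattern, a host-side script can run a pinned analysis image against a mounted data directory through the Docker SDK for Python; the image tag and command here are placeholders, not tools from the paper:

```python
import docker

def run_containerized_analysis(image_tag, data_dir, cmd):
    """Run an image-analysis tool inside a Linux container so that the
    exact software environment travels with the method."""
    client = docker.from_env()
    logs = client.containers.run(
        image_tag,                      # e.g. a version-pinned analysis image
        cmd,                            # command executed inside the container
        volumes={data_dir: {"bind": "/data", "mode": "rw"}},
        remove=True,                    # clean up the container afterwards
    )
    return logs.decode()
```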

  9. Segmentation-based retrospective shading correction in fluorescence microscopy E. coli images for quantitative analysis

    Science.gov (United States)

    Mai, Fei; Chang, Chunqi; Liu, Wenqing; Xu, Weichao; Hung, Yeung S.

    2009-10-01

    Due to the inherent imperfections in the imaging process, fluorescence microscopy images often suffer from spurious intensity variations, usually referred to as intensity inhomogeneity, intensity non-uniformity, shading or bias field. In this paper, a retrospective shading correction method for fluorescence microscopy Escherichia coli (E. coli) images is proposed based on segmentation results. Segmentation and shading correction are coupled: we iteratively correct the shading effects based on the segmentation result and refine the segmentation by segmenting the shading-corrected image. A fluorescence microscopy E. coli image can be segmented (based on its intensity values) into two classes, the background and the cells, where the intensity variation within each class is close to zero if there is no shading. We therefore make use of this characteristic to correct the shading in each iteration. Shading is mathematically modeled as a multiplicative component and an additive noise component. The additive component is removed by a denoising process, and the multiplicative component is estimated using a fast algorithm that minimizes the intra-class intensity variation. We tested our method on synthetic images and real fluorescence E. coli images. It works well not only for visual inspection, but also in numerical evaluation. The proposed method should be useful for further quantitative analysis, especially for comparison of protein expression values.
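    A toy version of the coupled loop (segment, flatten each class, estimate a smooth multiplicative field, divide it out, repeat) might look like this; a Gaussian-smoothed ratio deliberately replaces the paper's fast minimization of intra-class variance:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.filters import threshold_otsu

def correct_shading(img, n_iter=5, sigma=50):
    """Iterative segmentation-based multiplicative shading correction."""
    corrected = img.astype(float)
    for _ in range(n_iter):
        t = threshold_otsu(corrected)
        cells = corrected > t
        # Ideal piecewise-flat image: each class replaced by its mean
        ideal = np.where(cells, corrected[cells].mean(),
                         corrected[~cells].mean())
        # Smoothed ratio of observed to ideal approximates the bias field
        field = gaussian_filter(corrected / np.maximum(ideal, 1e-9), sigma)
        corrected = corrected / np.maximum(field, 1e-9)
    return corrected
```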

  10. Heterogeneity, histological features and DNA ploidy in oral carcinoma by image-based analysis.

    Science.gov (United States)

    Diwakar, N; Sperandio, M; Sherriff, M; Brown, A; Odell, E W

    2005-04-01

    Oral squamous carcinomas appear heterogeneous on DNA ploidy analysis. However, this may be partly a result of sample dilution or the detection limit of techniques. The aim of this study was to determine whether oral squamous carcinomas are heterogeneous for ploidy status using image-based ploidy analysis and to determine whether ploidy status correlates with histological parameters. Multiple samples from 42 oral squamous carcinomas were analysed for DNA ploidy using an image-based system and scored for histological parameters. 22 were uniformly aneuploid, 1 uniformly tetraploid and 3 uniformly diploid. 16 appeared heterogeneous but only 8 appeared to be genuinely heterogeneous when minor ploidy histogram peaks were taken into account. Ploidy was closely related to nuclear pleomorphism but not differentiation. Sample variation, detection limits and diagnostic criteria account for much of the ploidy heterogeneity observed. Confident diagnosis of diploid status in an oral squamous cell carcinoma requires a minimum of 5 samples.

  11. Target Identification Using Harmonic Wavelet Based ISAR Imaging

    Science.gov (United States)

    Shreyamsha Kumar, B. K.; Prabhakar, B.; Suryanarayana, K.; Thilagavathi, V.; Rajagopal, R.

    2006-12-01

    A new approach is proposed to reduce the computations involved in ISAR imaging, using a harmonic wavelet (HW) based time-frequency representation (TFR). Since the HW-based TFR falls into the category of nonparametric time-frequency (T-F) analysis tools, it is computationally efficient compared to parametric T-F analysis tools such as the adaptive joint time-frequency transform (AJTFT), the adaptive wavelet transform (AWT), and the evolutionary AWT (EAWT). Further, the performance of the proposed method of ISAR imaging is compared with ISAR imaging by other nonparametric T-F analysis tools such as the short-time Fourier transform (STFT) and the Choi-Williams distribution (CWD). In ISAR imaging, the use of the HW-based TFR provides similar or better results with a significant (92%) computational advantage compared to that obtained by the CWD. The ISAR images thus obtained are identified using a neural-network-based classification scheme with a feature set invariant to translation, rotation, and scaling.
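    The HW-based TFR is cheap because Newland's harmonic wavelet coefficients come straight from FFT octave bands. A compact sketch of that decomposition (generic harmonic wavelet analysis for a power-of-two signal length, not the authors' full ISAR chain):

```python
import numpy as np

def harmonic_wavelet_tfr(x):
    """Harmonic wavelet magnitudes of a real signal (length a power of two).

    Band j holds 2**j coefficients spread evenly over the signal duration,
    giving the dyadic time-frequency grid used by HW-based TFRs."""
    N = len(x)
    X = np.fft.fft(x)
    levels = int(np.log2(N)) - 1
    tfr = []
    for j in range(levels):
        band = X[2 ** j: 2 ** (j + 1)]   # one octave of positive frequencies
        tfr.append(np.abs(np.fft.ifft(band)))
    return tfr
```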

  12. Space-based infrared sensors of space target imaging effect analysis

    Science.gov (United States)

    Dai, Huayu; Zhang, Yasheng; Zhou, Haijun; Zhao, Shuang

    2018-02-01

    The target identification problem is one of the core problems of a ballistic missile defense system, and infrared imaging simulation is an important means of target detection and recognition. This paper first establishes a space-based infrared sensor imaging model of a ballistic target as a point source above the planet's atmosphere; it then simulates the infrared imaging effect of the exo-atmospheric ballistic target from two aspects, the space-based sensor's camera parameters and the target characteristics, and analyzes the camera line-of-sight jitter, the camera system noise, and the different imaging effects of the waveband on the target.

  13. Content-based Image Hiding Method for Secure Network Biometric Verification

    Directory of Open Access Journals (Sweden)

    Xiangjiu Che

    2011-08-01

    Full Text Available For secure biometric verification, most existing methods embed biometric information directly into the cover image, but content correlation analysis between the biometric image and the cover image is often ignored. In this paper, we propose a novel biometric image hiding approach based on content correlation analysis to protect network-transmitted images. Using principal component analysis (PCA), the content correlation between the biometric image and the cover image is first analyzed. Then, based on a particle swarm optimization (PSO) algorithm, some regions of the cover image are selected to represent the biometric image, so that the cover image can carry partial content of the biometric image. As a result of the correlation analysis, the unrepresented part of the biometric image is embedded into the cover image using the discrete wavelet transform (DWT). Combined with a human visual system (HVS) model, this approach makes the hiding result perceptually invisible. Extensive experimental results demonstrate that the proposed hiding approach is robust against some common frequency and geometric attacks; it also provides effective protection for secure network-based biometric verification.
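    The DWT embedding step can be illustrated with PyWavelets by hiding payload bits in detail coefficients through quantization-index modulation; this is a textbook-style sketch with an assumed quantization step, omitting the paper's PCA correlation analysis and PSO region selection:

```python
import numpy as np
import pywt

def embed_bits(cover, bits, q=8.0):
    """Embed a bit sequence into the horizontal detail band of a 1-level DWT.

    Each payload bit is stored as the parity of the quantized coefficient;
    extraction recovers it as round(coefficient / q) % 2."""
    cA, (cH, cV, cD) = pywt.dwt2(cover.astype(float), "haar")
    flat = cH.ravel()
    for i, b in enumerate(bits):          # quantization-index modulation
        k = np.round(flat[i] / q)
        if int(k) % 2 != b:
            k += 1                        # shift to the matching parity
        flat[i] = k * q
    return pywt.idwt2((cA, (flat.reshape(cH.shape), cV, cD)), "haar")
```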

  14. Utilizing Photo-Analysis Software for the Content Identifying Method (CIM)

    Directory of Open Access Journals (Sweden)

    Nejad Nasim Sahraei

    2015-01-01

    Full Text Available Content Identifying Methodology (CIM) was developed to measure public preferences in order to reveal the common characteristics of landscapes and aspects of underlying perceptions, including individuals' reactions to content and spatial configuration; it can therefore assist in identifying the factors that influence preference. Regarding the analysis of landscape photographs through CIM, several studies utilize image analysis software, such as Adobe Photoshop, to identify the physical contents of the scenes. This study evaluates the public's preferences for aesthetic qualities of pedestrian bridges in urban areas through a photo-questionnaire survey, in which respondents evaluated images of pedestrian bridges in urban areas. Two groups of images were evaluated as the most and least preferred scenes, with the highest and lowest mean scores respectively. These two groups were analyzed by CIM and also evaluated based on the respondents' descriptions of each group, to reveal the pattern of preferences and the factors that may affect them. Digimizer software was employed to triangulate the two approaches and to determine the role of these factors in people's preferences. This study introduces useful software for image analysis that can measure the physical contents of scenes as well as their spatial organization. According to the findings, Digimizer could be a useful tool in CIM approaches to preference studies that utilize photographs in place of the actual landscape, in order to determine the most important factors in public preferences for pedestrian bridges in urban areas.

  15. Image analysis

    International Nuclear Information System (INIS)

    Berman, M.; Bischof, L.M.; Breen, E.J.; Peden, G.M.

    1994-01-01

    This paper provides an overview of modern image analysis techniques pertinent to materials science. The usual approach in image analysis contains two basic steps: first, the image is segmented into its constituent components (e.g. individual grains), and second, measurement and quantitative analysis are performed. Usually, the segmentation part of the process is the harder of the two. Consequently, much of the paper concentrates on this aspect, reviewing both fundamental segmentation tools (commonly found in commercial image analysis packages) and more advanced segmentation tools. There is also a review of the most widely used quantitative analysis methods for measuring the size, shape and spatial arrangements of objects. Many of the segmentation and analysis methods are demonstrated using complex real-world examples. Finally, there is a discussion of hardware and software issues. 42 refs., 17 figs

  16. Deep Learning MR Imaging-based Attenuation Correction for PET/MR Imaging.

    Science.gov (United States)

    Liu, Fang; Jang, Hyungseok; Kijowski, Richard; Bradshaw, Tyler; McMillan, Alan B

    2018-02-01

    Purpose To develop and evaluate the feasibility of deep learning approaches for magnetic resonance (MR) imaging-based attenuation correction (AC) (termed deep MRAC) in brain positron emission tomography (PET)/MR imaging. Materials and Methods A PET/MR imaging AC pipeline was built by using a deep learning approach to generate pseudo computed tomographic (CT) scans from MR images. A deep convolutional auto-encoder network was trained to identify air, bone, and soft tissue in volumetric head MR images coregistered to CT data for training. A set of 30 retrospective three-dimensional T1-weighted head images was used to train the model, which was then evaluated in 10 patients by comparing the generated pseudo CT scan to an acquired CT scan. A prospective study was carried out for utilizing simultaneous PET/MR imaging for five subjects by using the proposed approach. Analysis of covariance and paired-sample t tests were used for statistical analysis to compare PET reconstruction error with deep MRAC and two existing MR imaging-based AC approaches with CT-based AC. Results Deep MRAC provides an accurate pseudo CT scan with a mean Dice coefficient of 0.971 ± 0.005 for air, 0.936 ± 0.011 for soft tissue, and 0.803 ± 0.021 for bone. Furthermore, deep MRAC provides good PET results, with average errors of less than 1% in most brain regions. Significantly lower PET reconstruction errors were realized with deep MRAC (-0.7% ± 1.1) compared with Dixon-based soft-tissue and air segmentation (-5.8% ± 3.1) and anatomic CT-based template registration (-4.8% ± 2.2). Conclusion The authors developed an automated approach that allows generation of discrete-valued pseudo CT scans (soft tissue, bone, and air) from a single high-spatial-resolution diagnostic-quality three-dimensional MR image and evaluated it in brain PET/MR imaging. This deep learning approach for MR imaging-based AC provided reduced PET reconstruction error relative to a CT-based standard within the brain compared
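    A miniature PyTorch sketch of this kind of encoder-decoder, mapping an MR slice to air/soft-tissue/bone class logits, is given below; it is an illustrative toy, far shallower than the network evaluated in the paper:

```python
import torch
import torch.nn as nn

class TinyMRACNet(nn.Module):
    """Conv encoder-decoder mapping an MR slice to 3 tissue-class logits."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 2, stride=2),   # air / soft tissue / bone
        )

    def forward(self, x):                  # x: (B, 1, H, W) T1-weighted slice
        return self.dec(self.enc(x))       # (B, 3, H, W) class logits

# Training would minimize cross-entropy against CT-derived class labels:
# loss = nn.CrossEntropyLoss()(model(mr_slices), ct_class_labels)
```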

  17. Introduction to Medical Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Moeslund, Thomas B.

    This book is a result of a collaboration between DTU Informatics at the Technical University of Denmark and the Laboratory of Computer Vision and Media Technology at Aalborg University. It is partly based on the book “Image and Video Processing”, second edition by Thomas Moeslund. The aim… of the book is to present the fascinating world of medical image analysis in an easy and interesting way. Compared to many standard books on image analysis, the approach we have chosen is less mathematical and more casual. Some of the key algorithms are exemplified in C-code. Please note that the code...

  18. Image sequence analysis workstation for multipoint motion analysis

    Science.gov (United States)

    Mostafavi, Hassan

    1990-08-01

    This paper describes an application-specific engineering workstation designed and developed to analyze the motion of objects from video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing and display techniques. In addition to automation and increased throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters are used for location and tracking of more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, aircraft in flight, etc. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie loop playback, freeze-frame display, and digital image enhancement; 3) multiple leading-edge tracking in addition to object centroids at up to 60 fields per second from live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.

  19. High-content analysis of single cells directly assembled on CMOS sensor based on color imaging.

    Science.gov (United States)

    Tanaka, Tsuyoshi; Saeki, Tatsuya; Sunaga, Yoshihiko; Matsunaga, Tadashi

    2010-12-15

    A complementary metal oxide semiconductor (CMOS) image sensor was applied to high-content analysis of single cells which were assembled closely or directly onto the CMOS sensor surface. The direct assembling of cell groups on CMOS sensor surface allows large-field (6.66 mm×5.32 mm in entire active area of CMOS sensor) imaging within a second. Trypan blue-stained and non-stained cells in the same field area on the CMOS sensor were successfully distinguished as white- and blue-colored images under white LED light irradiation. Furthermore, the chemiluminescent signals of each cell were successfully visualized as blue-colored images on CMOS sensor only when HeLa cells were placed directly on the micro-lens array of the CMOS sensor. Our proposed approach will be a promising technique for real-time and high-content analysis of single cells in a large-field area based on color imaging. Copyright © 2010 Elsevier B.V. All rights reserved.

  20. SU-F-J-94: Development of a Plug-in Based Image Analysis Tool for Integration Into Treatment Planning

    Energy Technology Data Exchange (ETDEWEB)

    Owen, D; Anderson, C; Mayo, C; El Naqa, I; Ten Haken, R; Cao, Y; Balter, J; Matuszak, M [University of Michigan, Ann Arbor, MI (United States)

    2016-06-15

    Purpose: To extend the functionality of a commercial treatment planning system (TPS) to support (i) direct use of quantitative image-based metrics within treatment plan optimization and (ii) evaluation of dose-functional volume relationships to assist in functional image adaptive radiotherapy. Methods: A script was written that interfaces with a commercial TPS via an Application Programming Interface (API). The script executes a program that performs dose-functional volume analyses. Written in C#, the script reads the dose grid and correlates it with image data on a voxel-by-voxel basis through API extensions that can access registration transforms. A user interface was designed through WinForms to input parameters and display results. To test the performance of this program, image- and dose-based metrics computed from perfusion SPECT images aligned to the treatment planning CT were generated, validated, and compared. Results: The integration of image analysis information was successfully implemented as a plug-in to a commercial TPS. Perfusion SPECT images were used to validate the calculation and display of image-based metrics as well as dose-intensity metrics and histograms for defined structures on the treatment planning CT. Various biological dose correction models, custom image-based metrics, dose-intensity computations, and dose-intensity histograms were applied to analyze the image-dose profile. Conclusion: It is possible to add image analysis features to commercial TPSs through custom scripting applications. A tool was developed to enable the evaluation of image-intensity-based metrics in the context of functional targeting and avoidance. In addition to providing dose-intensity metrics and histograms that can be easily extracted from a plan database and correlated with outcomes, the system can also be extended to a plug-in optimization system, which can directly use the computed metrics for optimization of post-treatment tumor or normal tissue response
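    The core computation, correlating the dose grid with a registered functional image voxel by voxel and summarizing it as a dose-intensity histogram, can be sketched in numpy; this is a generic illustration, since the actual plug-in works through the TPS vendor's C# API, which is not reproduced here:

```python
import numpy as np

def dose_intensity_histogram(dose, functional, mask, dose_bins):
    """Sum functional-image intensity falling into each dose bin.

    dose, functional: co-registered 3-D arrays; mask: structure voxels;
    dose_bins: increasing bin edges in Gy."""
    d, f = dose[mask], functional[mask]
    idx = np.digitize(d, dose_bins)
    hist = np.array([f[idx == i].sum() for i in range(1, len(dose_bins))])
    # Cumulative form: functional burden receiving at least each dose level
    cumulative = hist[::-1].cumsum()[::-1]
    return hist, cumulative
```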

  1. Seismic zonation of Port-Au-Prince using pixel- and object-based imaging analysis methods on ASTER GDEM

    Science.gov (United States)

    Yong, Alan; Hough, Susan E.; Cox, Brady R.; Rathje, Ellen M.; Bachhuber, Jeff; Dulberg, Ranon; Hulslander, David; Christiansen, Lisa; Abrams, Michael J.

    2011-01-01

    We report on a preliminary study to evaluate the use of semi-automated imaging analysis of remotely sensed DEMs and field geophysical measurements to develop a seismic-zonation map of Port-au-Prince, Haiti. For in situ data, VS30 values are derived from the MASW technique deployed in and around the city. For satellite imagery, we use an ASTER GDEM of Hispaniola. We apply both pixel- and object-based imaging methods to the ASTER GDEM to explore local topography (absolute elevation values) and classify terrain types such as mountains, alluvial fans and basins/near-shore regions. We assign NEHRP seismic site class ranges based on available VS30 values. A comparison of results from imagery-based methods to results from traditional geologic-based approaches reveals good overall correspondence. We conclude that image analysis of RS data provides reliable first-order site characterization results in the absence of local data and can be useful to refine detailed site maps with sparse local data.
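
    The VS30-to-NEHRP assignment mentioned above follows fixed class boundaries; the sketch below simply encodes the standard NEHRP limits (in m/s) and is an illustration, not the authors' workflow.

```python
def nehrp_site_class(vs30):
    """Map a VS30 value (m/s) to its NEHRP site class."""
    if vs30 > 1500:
        return "A"   # hard rock
    if vs30 > 760:
        return "B"   # rock
    if vs30 > 360:
        return "C"   # very dense soil / soft rock
    if vs30 > 180:
        return "D"   # stiff soil
    return "E"       # soft soil

print([nehrp_site_class(v) for v in (200, 450, 900)])  # ['D', 'C', 'B']
```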

  2. The establishment of a Digital Image Capture System (DICS) using a conventional simulator

    International Nuclear Information System (INIS)

    Oh, Tae Sung; Park, Jong Il; Byun, Young Sik; Shin, Hyun Kyoh

    2004-01-01

    A simulator is used to define the patient field and to ensure that the treatment field encompasses the required anatomy during normal patient movement, such as breathing. The latest simulators provide real-time display of still, fluoroscopic and digitized images, but conventional simulators do not. The purpose of this study is to introduce a digital image capture system (DICS) built on a conventional simulator and to present clinical cases using digitally captured still and fluoroscopic images. We connected a video signal cable to the video terminal on the back of the simulator monitor, and connected the video jack to an A/D converter. After connecting the converter to a computer, we could acquire still images and record fluoroscopic sequences with an image capture program. The data created with this system can be used in patient treatment, and modified for verification by using image processing software (e.g. Photoshop, PaintShop). DICS was easy and economical to establish. DICS images were helpful for simulation, and DICS imaging was a powerful tool for evaluating department-specific patient positioning. Because commercial simulators with built-in digital capture are very expensive, it is not easy for most hospitals to establish a digital simulator. DICS on a conventional simulator makes practical use of images comparable to those of a high-cost digital simulator and, combined with other software, enables the study of many clinical cases.

  3. NEPHRUS: model of intelligent multilayers expert system for evaluation of the renal system based on scintigraphic images analysis

    International Nuclear Information System (INIS)

    Silva, Jose W.E. da; Schirru, Roberto; Boasquevisque, Edson M.

    1997-01-01

    This work develops a prototype of a system model based on Artificial Intelligence devices able to perform functions related to scintigraphic image analysis of the urinary system. Criteria used by medical experts for analyzing images obtained with 99m Tc+DTPA and/or 99m Tc+DMSA were modeled and a multi-resolution diagnosis technique was implemented. Special attention was given to the design of the program's user interface. Human Factors Engineering techniques were considered so as to combine friendliness and robustness. Results obtained using Artificial Neural Networks for the qualitative image analysis, together with the knowledge model constructed, show the feasibility of an Artificial Intelligence implementation that uses the 'inherent' abilities of each technique in solving diagnostic image analysis problems. (author). 12 refs., 2 figs., 2 tabs

  4. Theoretical analysis and experimental evaluation of a CsI(Tl) based electronic portal imaging system

    International Nuclear Information System (INIS)

    Sawant, Amit; Zeman, Herbert; Samant, Sanjiv; Lovhoiden, Gunnar; Weinberg, Brent; DiBianca, Frank

    2002-01-01

    This article discusses the design and analysis of a portal imaging system based on a thick transparent scintillator. A theoretical analysis using Monte Carlo simulation was performed to calculate the x-ray quantum detection efficiency (QDE), signal-to-noise ratio (SNR) and the zero-frequency detective quantum efficiency [DQE(0)] of the system. A prototype electronic portal imaging device (EPID) was built, using a 12.7 mm thick, 20.32 cm diameter, CsI(Tl) scintillator, coupled to a liquid-nitrogen-cooled CCD TV camera. The system geometry of the prototype EPID was optimized to achieve high spatial resolution. The experimental evaluation of the prototype EPID involved the determination of contrast resolution, depth of focus, light scatter and mirror glare. Images of humanoid and contrast-detail phantoms were acquired using the prototype EPID and were compared with those obtained using conventional and high-contrast portal film and a commercial EPID. A theoretical analysis was also carried out for a proposed full field-of-view system using a large-area, thinned CCD camera and a 12.7 mm thick CsI(Tl) crystal. Results indicate that this proposed design could achieve DQE(0) levels up to 11%, due to its order-of-magnitude higher QDE compared to phosphor screen-metal plate based EPID designs, as well as significantly higher light collection compared to conventional TV-camera-based systems.

  5. An Imaging And Graphics Workstation For Image Sequence Analysis

    Science.gov (United States)

    Mostafavi, Hassan

    1990-01-01

    This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of modern graphic-oriented workstations with digital image acquisition, processing and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missiles, stores and other flying objects in various flight regimes including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) Acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion and freeze frame display combined with digital image sharpening, noise reduction, contrast enhancement and interactive image magnification; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from both live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence data base generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.

  6. From Pixels to Geographic Objects in Remote Sensing Image Analysis

    NARCIS (Netherlands)

    Addink, E.A.; Van Coillie, Frieke M.B.; Jong, Steven M. de

    Traditional image analysis methods are mostly pixel-based and use the spectral differences of landscape elements at the Earth surface to classify these elements or to extract element properties from the Earth Observation image. Geographic object-based image analysis (GEOBIA) has received

  7. METHODS OF DISTANCE MEASUREMENT’S ACCURACY INCREASING BASED ON THE CORRELATION ANALYSIS OF STEREO IMAGES

    Directory of Open Access Journals (Sweden)

    V. L. Kozlov

    2018-01-01

    Full Text Available To solve the problem of increasing the accuracy of restoring a three-dimensional picture of space from two-dimensional digital images, it is necessary to use new, effective techniques and algorithms for the processing and correlation analysis of digital images. Tools that reduce the time required to process stereo images, improve the quality of depth-map construction and automate it are being actively developed. The aim of the work is to investigate the possibilities of using various digital image processing techniques to improve the measurement accuracy of a rangefinder based on correlation analysis of the stereo image. The results of studies of the influence of color-channel mixing techniques on distance measurement accuracy are presented for various functions realizing correlation processing of images. Studies of the possibility of using an integral representation of images to reduce the time cost of constructing a depth map are presented, as are studies of the possibility of pre-filtering images before correlation processing when measuring distance by stereo imaging. It is found that uniform mixing of channels minimizes the total number of measurement errors, while brightness extraction according to the sRGB standard increases the number of errors for all of the considered correlation processing techniques. An integral representation of the image makes it possible to accelerate the correlation processing, but this method is useful for depth-map calculation only in images of no more than 0.5 megapixels. Filtering images before correlation processing can provide, depending on the filter parameters, either an increase in the correlation function value, which is useful for analyzing noisy images, or compression of the correlation function.
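
    As a concrete anchor for the correlation-based ranging described above, here is a minimal sketch: the disparity of a patch is found by scanning the corresponding scanline of the second image with normalized cross-correlation, then converted to distance with the pinhole-stereo relation Z = f·B/d. The window size, search range, focal length and baseline are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def distance_at(left, right, row, col, win=5, max_disp=32,
                focal_px=800.0, baseline_m=0.1):
    """Estimate distance to the scene point imaged at (row, col) in the left
    view; assumes rectified images and col large enough for the search."""
    h = win // 2
    patch = left[row - h:row + h + 1, col - h:col + h + 1]
    scores = [ncc(patch, right[row - h:row + h + 1,
                               col - d - h:col - d + h + 1])
              for d in range(1, max_disp)]
    d = 1 + int(np.argmax(scores))          # best-matching disparity (pixels)
    return focal_px * baseline_m / d        # Z = f * B / d
```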

  8. Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis

    Science.gov (United States)

    Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.

    2016-01-01

    Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
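
    A minimal wavelet-domain fusion sketch, for orientation: it averages the low-frequency approximations and keeps the larger-magnitude detail coefficient — a classic baseline rule, deliberately simpler than the matrix-completion/RPCA machinery of the paper. It assumes two registered grayscale arrays and uses PyWavelets.

```python
import numpy as np
import pywt  # PyWavelets

def fuse_pair(img_a, img_b, wavelet="db2"):
    """Single-level wavelet fusion: mean of approximations, max-abs details."""
    cA_a, det_a = pywt.dwt2(img_a, wavelet)
    cA_b, det_b = pywt.dwt2(img_b, wavelet)
    cA = (cA_a + cA_b) / 2.0
    details = tuple(np.where(np.abs(da) >= np.abs(db), da, db)
                    for da, db in zip(det_a, det_b))
    return pywt.idwt2((cA, details), wavelet)
```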

  9. Evaluation and improvement of the quality of scanned maps (based on Adobe Photoshop)

    OpenAIRE

    Bautrėnas, Artūras; Konstantinova, Jana; Pileckas, Marijus

    2006-01-01

    The conversion of cartographic works to digital form by scanning is examined, along with the process of assessing and improving the quality of the resulting raster maps and removing various defects using Adobe Photoshop; an algorithm for preparing a raster map for its intended purpose is presented. Methodological guidelines for preparing raster maps using Adobe PhotoShop software are presented. The questions analysed: digitising maps through scanning, estimating and improving the quality of gathered raster maps, pr...

  10. FOAMED CEMENT COMPOSITES: DETECTION OF THE MODULUS OF ELASTICITY USING DIC ANALYSIS AND COMPARISON WITH OTHER METHODS

    Directory of Open Access Journals (Sweden)

    Jakub Ďureje

    2017-11-01

    Full Text Available A modulus of elasticity was determined for eight differently foamed cement paste samples. The samples were loaded in the laboratory by a hydraulic press, and the force acting on each sample was read directly from the press. Digital Image Correlation (DIC) analysis was used to extract the deformations. Before the loading test, a random contrast pattern was applied to the samples, which were then captured by a camera at one-second intervals during loading. The images were edited in Adobe Photoshop Lightroom and then evaluated using the Ncorr software. The result is a field of vertical and horizontal displacements. On the basis of these results, it was possible to calculate the modulus of elasticity of each sample.
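
    The final computation reduces to E = stress/strain, with the strain read off the DIC displacement field across a known gauge length. A worked toy example with made-up numbers, not the paper's data:

```python
def youngs_modulus(force_n, area_m2, shortening_m, gauge_len_m):
    """E = (F/A) / (dL/L) from a compression test and DIC displacements."""
    stress = force_n / area_m2          # Pa
    strain = shortening_m / gauge_len_m
    return stress / strain

# 5 kN on a 50 mm x 50 mm face, 0.02 mm shortening over a 40 mm gauge length.
E = youngs_modulus(5e3, 0.05 * 0.05, 0.02e-3, 0.04)
print(f"E = {E / 1e9:.1f} GPa")  # E = 4.0 GPa
```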

  11. Poka Yoke system based on image analysis and object recognition

    Science.gov (United States)

    Belu, N.; Ionescu, L. M.; Misztal, A.; Mazăre, A.

    2015-11-01

    Poka Yoke is a quality management method aimed at preventing faults from arising during production processes; it deals with "fail-safing" or "mistake-proofing". The Poka Yoke concept was generated and developed by Shigeo Shingo for the Toyota Production System, and it is used in many fields, especially in monitoring production processes. In many cases, identifying faults in a production process costs more than disposing of the defective item. Usually, Poka Yoke solutions are based on multiple sensors that identify nonconformities, which means placing additional mechanical and electronic equipment on the production line. Since the method itself is invasive and affects the production process, this increases the cost of diagnostics, and the machines by which a Poka Yoke system can be implemented become bulky and ever more sophisticated. In this paper we propose a Poka Yoke solution based on image analysis and identification of faults. The solution consists of a module for image acquisition, mid-level processing and an object recognition module using associative memory (a Hopfield network). All are integrated into an embedded system with an AD (Analog to Digital) converter and a Zynq-7000 (22 nm technology).
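
    The object recognition module is described as a Hopfield-type associative memory; a minimal sketch of that mechanism (Hebbian storage, sign-threshold recall) is given below, with a toy bipolar pattern standing in for a reference part silhouette.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix for bipolar (+1/-1) patterns, zero diagonal."""
    n = patterns.shape[1]
    w = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(w, 0.0)
    return w

def recall(w, probe, steps=10):
    """Synchronous sign updates until the network state settles."""
    s = probe.copy()
    for _ in range(steps):
        nxt = np.where(w @ s >= 0, 1, -1)
        if np.array_equal(nxt, s):
            break
        s = nxt
    return s

# Store one reference 'silhouette' and recall it from a corrupted probe.
ref = np.array([1, -1, 1, 1, -1, -1, 1, -1])
w = train_hopfield(ref[None, :])
probe = ref.copy()
probe[0] = -probe[0]                          # one flipped 'pixel'
print(np.array_equal(recall(w, probe), ref))  # True: pattern restored
```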

  12. Automated discrimination of lower and higher grade gliomas based on histopathological image analysis

    Directory of Open Access Journals (Sweden)

    Hojjat Seyed Mousavi

    2015-01-01

    Full Text Available Introduction: Histopathological images have rich structural information, are multi-channel in nature and contain meaningful pathological information at various scales. Sophisticated image analysis tools that can automatically extract discriminative information from histopathology slides for diagnosis remain an area of significant research activity. In this work, we focus on automated brain cancer grading, specifically glioma grading. Grading of a glioma is a highly important problem in pathology and is largely done manually by medical experts based on an examination of pathology slides (images). To complement the efforts of clinicians engaged in brain cancer diagnosis, we develop novel image processing algorithms and systems to automatically grade glioma tumors into two categories: low-grade glioma (LGG) and high-grade glioma (HGG), the latter representing a more advanced stage of the disease. Results: We propose novel image processing algorithms based on spatial domain analysis for glioma tumor grading that complement the clinical interpretation of the tissue. The image processing techniques are developed in close collaboration with medical experts to mimic the visual cues that a clinician looks for in judging the grade of the disease. Specifically, two algorithmic techniques are developed: (1) a cell segmentation and cell-count profile creation for identification of pseudopalisading necrosis, and (2) a customized operation of spatial and morphological filters to accurately identify microvascular proliferation (MVP). In both techniques, a hierarchical decision is made via a decision tree mechanism. If either pseudopalisading necrosis or MVP is found present in any part of the histopathology slide, the whole slide is identified as HGG, which is consistent with World Health Organization guidelines. Experimental results on the Cancer Genome Atlas database are presented in the form of: (1) successful detection rates of pseudopalisading necrosis

  13. Toward a universal, automated facial measurement tool in facial reanimation.

    Science.gov (United States)

    Hadlock, Tessa A; Urban, Luke S

    2012-01-01

    To describe a highly quantitative facial function-measuring tool that yields accurate, objective measures of facial position in significantly less time than existing methods. Facial Assessment by Computer Evaluation (FACE) software was designed for facial analysis. Outputs report the static facial landmark positions and dynamic facial movements relevant in facial reanimation. Fifty individuals underwent facial movement analysis using Photoshop-based measurements and the new software; comparisons of agreement and efficiency were made. Comparisons were made between individuals with normal facial animation and patients with paralysis to gauge sensitivity to abnormal movements. Facial measurements were matched using FACE software and Photoshop-based measures at rest and during expressions. The automated assessments required significantly less time than Photoshop-based assessments. FACE measurements easily revealed differences between individuals with normal facial animation and patients with facial paralysis. FACE software produces accurate measurements of facial landmarks and facial movements and is sensitive to paralysis. Given its efficiency, it serves as a useful tool in the clinical setting for zonal facial movement analysis in comprehensive facial nerve rehabilitation programs.

  14. A Python-Based Open Source System for Geographic Object-Based Image Analysis (GEOBIA Utilizing Raster Attribute Tables

    Directory of Open Access Journals (Sweden)

    Daniel Clewley

    2014-06-01

    Full Text Available A modular system for performing Geographic Object-Based Image Analysis (GEOBIA), using entirely open source (General Public License-compatible) software, is presented, based around representing objects as raster clumps and storing attributes as a raster attribute table (RAT). The system utilizes a number of libraries developed by the authors: the Remote Sensing and GIS Library (RSGISLib), the Raster I/O Simplification (RIOS) Python library, the KEA image format and the TuiView image viewer. All libraries are accessed through Python, providing a common interface on which to build processing chains. Three examples are presented to demonstrate the capabilities of the system: (1) classification of mangrove extent and change in French Guiana; (2) a generic scheme for classification under the UN-FAO land cover classification system (LCCS) and subsequent translation to habitat categories; and (3) a national-scale segmentation for Australia. The system presented provides similar functionality to existing GEOBIA packages, but is more flexible, due to its modular environment, capable of handling complex classification processes and applying them to larger datasets.
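
    The core data model — objects as labeled raster clumps with per-object attributes in a table — can be illustrated without the libraries named above. The sketch below is a generic stand-in using scipy (not the RSGISLib/RIOS API): connected regions become clumps, and per-clump statistics populate a minimal attribute table.

```python
import numpy as np
from scipy import ndimage

# Objects as raster clumps: label connected regions of a binary segmentation.
segmentation = np.random.rand(100, 100) > 0.7
clumps, n = ndimage.label(segmentation)

# A minimal raster attribute table: one row per clump, columns of statistics
# computed from an attribute layer (e.g., a spectral band).
band = np.random.rand(100, 100)
ids = np.arange(1, n + 1)
rat = {
    "clump_id": ids,
    "pixel_count": ndimage.sum_labels(np.ones_like(band), clumps, ids),
    "band_mean": ndimage.mean(band, clumps, ids),
}
print(rat["band_mean"][:5])
```

(`ndimage.sum_labels` requires SciPy 1.6 or later; older versions expose the same computation as `ndimage.sum`.)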

  15. Image processing and analysis software development

    International Nuclear Information System (INIS)

    Shahnaz, R.

    1999-01-01

    The work presented in this project is aimed at developing a software, 'IMAGE GALLERY', to investigate various image processing and analysis techniques. The work was divided into two parts, namely image processing techniques and pattern recognition, the latter comprising character and face recognition. Various image enhancement techniques including negative imaging, contrast stretching, dynamic range compression, neon, diffuse, emboss, etc. have been studied. Segmentation techniques including point detection, line detection and edge detection have been studied, and some of the smoothing and sharpening filters have been investigated. All these imaging techniques have been implemented in a window-based computer program written in Visual Basic. Neural network techniques based on the Perceptron model have been applied for face and character recognition. (author)
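
    Two of the enhancement techniques listed above are simple enough to state in a few lines each; the sketch below (in Python rather than Visual Basic, as an illustration only) shows negative imaging and percentile-based contrast stretching on an 8-bit image.

```python
import numpy as np

def negative(img):
    """Negative of an 8-bit grayscale image."""
    return 255 - img

def contrast_stretch(img, lo_pct=2, hi_pct=98):
    """Linear contrast stretch between two intensity percentiles."""
    lo, hi = np.percentile(img, (lo_pct, hi_pct))
    out = np.clip((img.astype(float) - lo) / max(hi - lo, 1e-9), 0.0, 1.0)
    return (out * 255).astype(np.uint8)
```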

  16. A fractal-based image encryption system

    KAUST Repository

    Abd-El-Hafiz, S. K.

    2014-12-01

    This study introduces a novel image encryption system based on diffusion and confusion processes in which the image information is hidden inside the complex details of fractal images. A simplified encryption technique is, first, presented using a single-fractal image and statistical analysis is performed. A general encryption system utilising multiple fractal images is, then, introduced to improve the performance and increase the encryption key up to hundreds of bits. This improvement is achieved through several parameters: feedback delay, multiplexing and independent horizontal or vertical shifts. The effect of each parameter is studied separately and, then, they are combined to illustrate their influence on the encryption quality. The encryption quality is evaluated using different analysis techniques such as correlation coefficients, differential attack measures, histogram distributions, key sensitivity analysis and the National Institute of Standards and Technology (NIST) statistical test suite. The obtained results show great potential compared to other techniques.
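
    A toy version of the core idea — hide image information behind the complex detail of a fractal — can be sketched as XOR confusion against a key image derived from Mandelbrot escape times. This illustrates the principle only; the parameters and key derivation are assumptions, not the paper's scheme.

```python
import numpy as np

def mandelbrot_key(h, w, iters=64):
    """Derive an 8-bit key image from Mandelbrot escape times."""
    y, x = np.mgrid[-1.2:1.2:h * 1j, -2.0:0.8:w * 1j]
    c = x + 1j * y
    z = np.zeros_like(c)
    escape = np.zeros(c.shape, dtype=np.uint16)
    for i in range(iters):
        mask = np.abs(z) <= 2.0
        z[mask] = z[mask] ** 2 + c[mask]
        escape[mask] = i
    return (escape * 37 % 256).astype(np.uint8)  # spread counts over a byte

def xor_cipher(img_u8, key_u8):
    """Bytewise XOR with the fractal key; applying it twice decrypts."""
    return np.bitwise_xor(img_u8, key_u8)
```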

  17. Seismic-zonation of Port-au-Prince using pixel- and object-based imaging analysis methods on ASTER GDEM

    Science.gov (United States)

    Yong, A.; Hough, S.E.; Cox, B.R.; Rathje, E.M.; Bachhuber, J.; Dulberg, R.; Hulslander, D.; Christiansen, L.; Abrams, M.J.

    2011-01-01

    We report on a preliminary study to evaluate the use of semi-automated imaging analysis of remotely sensed DEMs and field geophysical measurements to develop a seismic-zonation map of Port-au-Prince, Haiti. For in situ data, Vs30 values are derived from the MASW technique deployed in and around the city. For satellite imagery, we use an ASTER GDEM of Hispaniola. We apply both pixel- and object-based imaging methods to the ASTER GDEM to explore local topography (absolute elevation values) and classify terrain types such as mountains, alluvial fans and basins/near-shore regions. We assign NEHRP seismic site class ranges based on available Vs30 values. A comparison of results from imagery-based methods to results from traditional geologic-based approaches reveals good overall correspondence. We conclude that image analysis of RS data provides reliable first-order site characterization results in the absence of local data and can be useful to refine detailed site maps with sparse local data. © 2011 American Society for Photogrammetry and Remote Sensing.

  18. Image analysis for material characterisation

    Science.gov (United States)

    Livens, Stefan

    In this thesis, a number of image analysis methods are presented as solutions to two applications concerning the characterisation of materials. Firstly, we deal with the characterisation of corrosion images, which is handled using a multiscale texture analysis method based on wavelets. We propose a feature transformation that deals with the problem of rotation invariance. Classification is performed with a Learning Vector Quantisation neural network and with a combination of outputs. In an experiment, 86.2% of the images showing either pit formation or cracking are correctly classified. Secondly, we develop an automatic system for the characterisation of silver halide microcrystals. These are flat crystals with a triangular or hexagonal base and a thickness in the 100 to 200 nm range. A light microscope is used to image them. A novel segmentation method is proposed, which allows agglomerated crystals to be separated. For the measurement of shape, the ratio between the largest and the smallest radius yields the best results. The thickness measurement is based on the interference colours that appear for light reflected at the crystals. The mean colour of different thickness populations is determined, from which a calibration curve is derived. With this, the thickness of new populations can be determined accurately.

  19. Image-Based Reconstruction and Analysis of Dynamic Scenes in a Landslide Simulation Facility

    Science.gov (United States)

    Scaioni, M.; Crippa, J.; Longoni, L.; Papini, M.; Zanzi, L.

    2017-12-01

    The application of image processing and photogrammetric techniques to dynamic reconstruction of landslide simulations in a scaled-down facility is described. Simulations are also used here for active-learning purposes: students are helped to understand how physical processes happen and which kinds of observations may be obtained from a sensor network. In particular, the use of digital images to obtain multi-temporal information is presented. On one side, using a multi-view sensor setup based on four synchronized GoPro 4 Black® cameras, a 4D (3D spatial position and time) reconstruction of the dynamic scene is obtained through the composition of several 3D models obtained from dense image matching. The final textured 4D model allows one to revisit a completed experiment in dynamic and interactive mode at any time. On the other side, a digital image correlation (DIC) technique has been used to track surface point displacements from the image sequence obtained from the camera in front of the simulation facility. While the 4D model may provide a qualitative description and documentation of the running experiment, DIC analysis outputs quantitative information, such as local point displacements and velocities, to be related to physical processes and to other observations. All the hardware and software equipment adopted for the photogrammetric reconstruction is based on low-cost and open-source solutions.

  20. IMAGE-BASED RECONSTRUCTION AND ANALYSIS OF DYNAMIC SCENES IN A LANDSLIDE SIMULATION FACILITY

    Directory of Open Access Journals (Sweden)

    M. Scaioni

    2017-12-01

    Full Text Available The application of image processing and photogrammetric techniques to dynamic reconstruction of landslide simulations in a scaled-down facility is described. Simulations are also used here for active-learning purposes: students are helped to understand how physical processes happen and which kinds of observations may be obtained from a sensor network. In particular, the use of digital images to obtain multi-temporal information is presented. On one side, using a multi-view sensor setup based on four synchronized GoPro 4 Black® cameras, a 4D (3D spatial position and time) reconstruction of the dynamic scene is obtained through the composition of several 3D models obtained from dense image matching. The final textured 4D model allows one to revisit a completed experiment in dynamic and interactive mode at any time. On the other side, a digital image correlation (DIC) technique has been used to track surface point displacements from the image sequence obtained from the camera in front of the simulation facility. While the 4D model may provide a qualitative description and documentation of the running experiment, DIC analysis outputs quantitative information, such as local point displacements and velocities, to be related to physical processes and to other observations. All the hardware and software equipment adopted for the photogrammetric reconstruction is based on low-cost and open-source solutions.

  1. An Amateur's Guide to Observing and Imaging the Heavens

    Science.gov (United States)

    Morison, Ian

    2014-06-01

    Foreword; Acknowledgments; Prologue: a tale of two scopes; 1. Telescope and observing fundamentals; 2. Refractors; 3. Binoculars and spotting scopes; 4. The Newtonian telescope and its derivatives; 5. The Cassegrain telescope and its derivatives - Schmidt-Cassegrains and Maksutovs; 6. Telescope maintenance, collimation and star testing; 7. Telescope accessories: finders, eyepieces and bino-viewers; 8. Telescope mounts: alt/az and equatorial with their computerised variants; 9. The art of visual observing; 10. Visual observations of the Moon and planets; 11. Imaging the Moon and planets with DSLRs and web-cams; 12. Observing and imaging the Sun in white light and H-alpha; 13. Observing with an astro-video camera to 'see' faint objects; 14. Deep sky imaging with standard and H-alpha modified DSLR cameras; 15. Deep sky imaging with cooled CCD cameras; 16. Auto-guiding techniques and equipment; 17. Spectral studies of the Sun, stars and galaxies; 18. Improving and enhancing images in Photoshop; Index.

  2. Image Analysis of Eccentric Photorefraction

    Directory of Open Access Journals (Sweden)

    J. Dušek

    2004-01-01

    Full Text Available This article deals with image and data analysis of recorded video sequences of strabismic infants. It describes a unique noninvasive measuring system for infants based on two measuring methods (position of the first Purkynje image with relation to the centre of the lens, and eccentric photorefraction). The whole process is divided into three steps. The aim of the first step is to obtain video sequences on our special system (Eye Movement Analyser). Image analysis of the recorded sequences is performed in order to obtain curves of basic eye reactions (accommodation and convergence). The last step is to calibrate these curves to the corresponding units (diopters and degrees of movement).

  3. Dictionary-based image reconstruction for superresolution in integrated circuit imaging.

    Science.gov (United States)

    Cilingiroglu, T Berkin; Uyar, Aydan; Tuysuzoglu, Ahmet; Karl, W Clem; Konrad, Janusz; Goldberg, Bennett B; Ünlü, M Selim

    2015-06-01

    Resolution improvement through signal processing techniques for integrated circuit imaging is becoming more crucial as the rapid decrease in integrated circuit dimensions continues. Although there is a significant effort to push the limits of optical resolution for backside fault analysis through the use of solid immersion lenses, higher order laser beams, and beam apodization, signal processing techniques are required for additional improvement. In this work, we propose a sparse image reconstruction framework which couples overcomplete dictionary-based representation with a physics-based forward model to improve resolution and localization accuracy in high numerical aperture confocal microscopy systems for backside optical integrated circuit analysis. The effectiveness of the framework is demonstrated on experimental data.

  4. Dimensionality Reduction of Hyperspectral Image with Graph-Based Discriminant Analysis Considering Spectral Similarity

    Directory of Open Access Journals (Sweden)

    Fubiao Feng

    2017-03-01

    Full Text Available Recently, graph embedding has drawn great attention for dimensionality reduction in hyperspectral imagery. For example, locality preserving projection (LPP) utilizes a typical Euclidean distance in a heat kernel to create an affinity matrix and projects the high-dimensional data into a lower-dimensional space. However, the Euclidean distance is not sufficiently correlated with the intrinsic spectral variation of a material, which may result in an inappropriate graph representation. In this work, a graph-based discriminant analysis with spectral similarity measurement (denoted GDA-SS) is proposed, which fully considers the changing shape of the spectral curve across bands. Experimental results based on real hyperspectral images demonstrate that the proposed method is superior to traditional methods, such as supervised LPP, and the state-of-the-art sparse graph-based discriminant analysis (SGDA).

  5. Frequency domain analysis of knock images

    Science.gov (United States)

    Qi, Yunliang; He, Xin; Wang, Zhi; Wang, Jianxin

    2014-12-01

    High speed imaging-based knock analysis has mainly focused on time domain information, e.g. the spark triggered flame speed, the time when end gas auto-ignition occurs and the end gas flame speed after auto-ignition. This study presents a frequency domain analysis on the knock images recorded using a high speed camera with direct photography in a rapid compression machine (RCM). To clearly visualize the pressure wave oscillation in the combustion chamber, the images were high-pass-filtered to extract the luminosity oscillation. The luminosity spectrum was then obtained by applying fast Fourier transform (FFT) to three basic colour components (red, green and blue) of the high-pass-filtered images. Compared to the pressure spectrum, the luminosity spectra better identify the resonant modes of pressure wave oscillation. More importantly, the resonant mode shapes can be clearly visualized by reconstructing the images based on the amplitudes of luminosity spectra at the corresponding resonant frequencies, which agree well with the analytical solutions for mode shapes of gas vibration in a cylindrical cavity.
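
    The frequency-domain step itself is compact: average the (high-pass-filtered) luminosity over each frame, subtract the mean, and take an FFT. A minimal sketch with a synthetic oscillation; the frame rate and signal are illustrative, not the study's data.

```python
import numpy as np

def luminosity_spectrum(frames, fps):
    """Amplitude spectrum of the mean-luminosity oscillation of a (T, H, W)
    high-speed image sequence sampled at `fps` frames per second."""
    signal = frames.reshape(len(frames), -1).mean(axis=1)
    signal = signal - signal.mean()                 # drop the DC component
    amp = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs, amp

# A 5 kHz oscillation sampled at 40 kHz appears as the dominant peak.
t = np.arange(400) / 40000.0
frames = np.sin(2 * np.pi * 5000 * t)[:, None, None] * np.ones((1, 4, 4))
f, a = luminosity_spectrum(frames, fps=40000)
print(f[np.argmax(a)])  # ~5000.0 Hz
```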

  6. Mapping of crop calendar events by object-based analysis of MODIS and ASTER images

    Directory of Open Access Journals (Sweden)

    A.I. De Castro

    2014-06-01

    Full Text Available A method to generate crop calendar and phenology-related maps at the parcel level for four major irrigated crops (rice, maize, sunflower and tomato) is shown. The method combines images from the ASTER and MODIS sensors in an object-based image analysis framework, and tests three different fitting curves using the TIMESAT software. The averaged estimation accuracy of calendar dates was 85%, ranging from 92% for the emergence and harvest dates in rice to 69% for the harvest date in tomato.

  7. An Image Encryption Method Based on Bit Plane Hiding Technology

    Institute of Scientific and Technical Information of China (English)

    LIU Bin; LI Zhitang; TU Hao

    2006-01-01

    A novel image hiding method based on the correlation analysis of bit planes is described in this paper. Firstly, based on the correlation analysis, different bit planes of a secret image are hidden in different bit planes of several different open images. Then a new hiding image is acquired by a nested "Exclusive-OR" operation on the images obtained in the first step. Finally, by employing an image fusion technique, the final hiding result is achieved. The experimental results show that the method proposed in this paper is effective.
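
    Bit-plane manipulation, the mechanism underlying the method above, is a two-line operation per direction; the sketch below shows plane extraction and replacement on 8-bit images (a generic illustration, not the paper's full correlation-based scheme).

```python
import numpy as np

def get_bit_plane(img_u8, k):
    """Extract bit plane k (0 = least significant) as a 0/1 array."""
    return (img_u8 >> k) & 1

def hide_plane(cover_u8, secret_plane, k):
    """Replace bit plane k of a cover image with a 0/1 secret plane."""
    cleared = cover_u8 & ~np.uint8(1 << k)
    return cleared | (secret_plane.astype(np.uint8) << k)
```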

  8. The influence of software filtering in digital mammography image quality

    Science.gov (United States)

    Michail, C.; Spyropoulou, V.; Kalyvas, N.; Valais, I.; Dimitropoulos, N.; Fountos, G.; Kandarakis, I.; Panayiotakis, G.

    2009-05-01

    Breast cancer is one of the most frequently diagnosed cancers among women. Several techniques have been developed to help in the early detection of breast cancer such as conventional and digital x-ray mammography, positron and single-photon emission mammography, etc. A key advantage in digital mammography is that images can be manipulated as simple computer image files. Thus non-dedicated commercially available image manipulation software can be employed to process and store the images. The image processing tools of the Photoshop (CS 2) software usually incorporate digital filters which may be used to reduce image noise, enhance contrast and increase spatial resolution. However, improving an image quality parameter may result in degradation of another. The aim of this work was to investigate the influence of three sharpening filters, named hereafter 'sharpen', 'sharpen more' and 'sharpen edges', on image resolution and noise. Image resolution was assessed by means of the Modulation Transfer Function (MTF). In conclusion, it was found that the correct use of commercial non-dedicated software on digital mammograms may improve some aspects of image quality.

  9. The influence of software filtering in digital mammography image quality

    International Nuclear Information System (INIS)

    Michail, C; Spyropoulou, V; Valais, I; Panayiotakis, G; Kalyvas, N; Fountos, G; Kandarakis, I; Dimitropoulos, N

    2009-01-01

    Breast cancer is one of the most frequently diagnosed cancers among women. Several techniques have been developed to help in the early detection of breast cancer such as conventional and digital x-ray mammography, positron and single-photon emission mammography, etc. A key advantage in digital mammography is that images can be manipulated as simple computer image files. Thus non-dedicated commercially available image manipulation software can be employed to process and store the images. The image processing tools of the Photoshop (CS 2) software usually incorporate digital filters which may be used to reduce image noise, enhance contrast and increase spatial resolution. However, improving an image quality parameter may result in degradation of another. The aim of this work was to investigate the influence of three sharpening filters, named hereafter 'sharpen', 'sharpen more' and 'sharpen edges', on image resolution and noise. Image resolution was assessed by means of the Modulation Transfer Function (MTF). In conclusion, it was found that the correct use of commercial non-dedicated software on digital mammograms may improve some aspects of image quality.

  10. Computer-based image analysis in radiological diagnostics and image-guided therapy: 3D-Reconstruction, contrast medium dynamics, surface analysis, radiation therapy and multi-modal image fusion

    International Nuclear Information System (INIS)

    Beier, J.

    2001-01-01

    This book deals with substantial subjects of postprocessing and analysis of radiological image data, with particular emphasis on pulmonary topics. For a multitude of purposes, the developed methods and procedures can be directly transferred to other, non-pulmonary applications. The work presented here is structured in 14 chapters, each describing a selected complex of research. The chapter order reflects the sequence of the processing steps, starting from artefact reduction, segmentation, visualization, analysis, therapy planning and image fusion up to multimedia archiving. In particular, this includes virtual endoscopy with three different scene viewers (Chap. 6), visualizations of the lung disease bronchiectasis (Chap. 7), surface structure analysis of pulmonary tumors (Chap. 8), quantification of contrast medium dynamics from temporal 2D and 3D image sequences (Chap. 9), as well as multimodality image fusion of arbitrary tomographical data using several visualization techniques (Chap. 12). Thus, the software systems presented cover the majority of image processing applications necessary in radiology and were entirely developed, implemented and validated in the clinical routine of a university medical school. (orig.)

  11. Partial differential equation-based approach for empirical mode decomposition: application on image analysis.

    Science.gov (United States)

    Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques

    2012-09-01

    The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, which makes it difficult to characterize and evaluate the approach. In this paper we propose, in the 2-D case, the use of an alternative implementation to the algorithmic definition of the so-called "sifting process" used in the original Huang EMD method. This approach, based on partial differential equations (PDEs), was presented by Niang in previous works, in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean-envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method compared to the original EMD algorithmic version was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed; despite some effort, these 2-D versions perform poorly and are very time-consuming. In this paper, therefore, an extension of the PDE-based approach to 2-D space is extensively described. This approach has been applied to both signal and image decomposition, and the obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data. Some results are provided in the case of image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.

  12. Bone histomorphometry using free and commonly available software.

    Science.gov (United States)

    Egan, Kevin P; Brennan, Tracy A; Pignolo, Robert J

    2012-12-01

    Histomorphometric analysis is a widely used technique to assess changes in tissue structure and function. Commercially available programs that measure histomorphometric parameters can be cost-prohibitive. In this study, we compared an inexpensive method of histomorphometry to a current proprietary software program. ImageJ and Adobe Photoshop® were used to measure static and kinetic bone histomorphometric parameters. Photomicrographs of Goldner's trichrome-stained femurs were used to generate black-and-white image masks in Adobe Photoshop®, representing bone and non-bone tissue, respectively. The masks were used to quantify histomorphometric parameters (bone volume, tissue volume, osteoid volume, mineralizing surface and interlabel width) in ImageJ. The resultant values obtained using ImageJ and the proprietary software were compared, and the differences were found to be statistically non-significant. The wide-ranging use of histomorphometric analysis for assessing the basic morphology of tissue components makes it important to have affordable and accurate measurement options available for a diverse range of applications. Here we have developed and validated an approach to histomorphometry using commonly and freely available software that is comparable to a much more costly, commercially available software program.
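
    The static parameters reduce to pixel counting on the black-and-white masks; for instance, the bone volume fraction (BV/TV) is the bone-mask area over the tissue area. A minimal sketch, assuming the full field of view stands in for tissue volume:

```python
import numpy as np

def bone_volume_fraction(bone_mask):
    """BV/TV from a binary mask (True = bone); a 2-D area ratio here."""
    return np.count_nonzero(bone_mask) / bone_mask.size

mask = np.zeros((100, 100), dtype=bool)
mask[:30, :] = True                 # toy 'bone' region
print(bone_volume_fraction(mask))   # 0.3
```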

  13. Optimization of shearography image quality analysis

    International Nuclear Information System (INIS)

    Rafhayudi Jamro

    2005-01-01

    Shearography is an optical technique based on speckle patterns for measuring the deformation of an object surface, in which the fringe pattern is obtained through correlation analysis of the speckle pattern. Analysis of the fringe pattern for engineering applications is limited to qualitative measurement; further analysis leading to quantitative data therefore involves a series of image processing steps. In this paper, the fringe pattern for qualitative analysis is discussed. The principal field of application is qualitative non-destructive testing, such as detecting discontinuities and defects in the material structure, locating fatigue zones, etc., all of which require image processing. In order to perform image optimisation successfully, the noise in the fringe pattern must be minimised and the fringe pattern itself must be maximised. This can be achieved by applying a filtering method with kernel sizes ranging from 2 × 2 to 7 × 7 pixels and also applying an equaliser in the image processing. (Author)

  14. Operational Automatic Remote Sensing Image Understanding Systems: Beyond Geographic Object-Based and Object-Oriented Image Analysis (GEOBIA/GEOOIA. Part 2: Novel system Architecture, Information/Knowledge Representation, Algorithm Design and Implementation

    Directory of Open Access Journals (Sweden)

    Luigi Boschetti

    2012-09-01

    Full Text Available According to the literature, and despite their commercial success, state-of-the-art two-stage non-iterative geographic object-based image analysis (GEOBIA) systems and three-stage iterative geographic object-oriented image analysis (GEOOIA) systems, where GEOOIA ⊃ GEOBIA, remain affected by a lack of productivity, general consensus and research. To outperform the Quality Indexes of Operativeness (OQIs) of existing GEOBIA/GEOOIA systems in compliance with the Quality Assurance Framework for Earth Observation (QA4EO) guidelines, this methodological work is split into two parts. Based on an original multi-disciplinary Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis of the GEOBIA/GEOOIA approaches, the first part of this work promotes a shift of learning paradigm in the pre-attentive vision first stage of a remote sensing (RS) image understanding system (RS-IUS), from sub-symbolic statistical model-based (inductive) image segmentation to symbolic physical model-based (deductive) image preliminary classification, capable of accomplishing image sub-symbolic segmentation and image symbolic pre-classification simultaneously. In the present second part of this work, a novel hybrid (combined deductive and inductive) RS-IUS architecture featuring a symbolic deductive pre-attentive vision first stage is proposed and discussed in terms of: (a) computational theory (system design), (b) information/knowledge representation, (c) algorithm design and (d) implementation. As proof-of-concept of a symbolic physical model-based pre-attentive vision first stage, the spectral knowledge-based, operational, near real-time, multi-sensor, multi-resolution, application-independent Satellite Image Automatic Mapper™ (SIAM™) is selected from the existing literature. To the best of these authors' knowledge, this is the first time a symbolic syntactic inference system, like SIAM™, is made available to the RS community for operational use in a RS-IUS pre-attentive vision first stage

  15. Pluri-IQ: Quantification of Embryonic Stem Cell Pluripotency through an Image-Based Analysis Software

    Directory of Open Access Journals (Sweden)

    Tânia Perestrelo

    2017-08-01

    Full Text Available Image-based assays, such as alkaline phosphatase staining or immunocytochemistry for pluripotent markers, are common methods used in the stem cell field to assess pluripotency. Although an increased number of image-analysis approaches have been described, there is still a lack of software availability to automatically quantify pluripotency in large images after pluripotency staining. To address this need, we developed a robust and rapid image processing software, Pluri-IQ, which allows the automatic evaluation of pluripotency in large low-magnification images. Using mouse embryonic stem cells (mESC) as a model, we combined an automated segmentation algorithm with a supervised machine-learning platform to classify colonies as pluripotent, mixed, or differentiated. In addition, Pluri-IQ allows the automatic comparison between different culture conditions. This efficient user-friendly open-source software can be easily implemented in images derived from pluripotent cells or cells that express pluripotent markers (e.g., OCT4-GFP) and can be routinely used, decreasing image assessment bias.

  16. Automated quantification and sizing of unbranched filamentous cyanobacteria by model based object oriented image analysis

    OpenAIRE

    Zeder, M; Van den Wyngaert, S; Köster, O; Felder, K M; Pernthaler, J

    2010-01-01

    Quantification and sizing of filamentous cyanobacteria in environmental samples or cultures are time-consuming and are often performed by using manual or semiautomated microscopic analysis. Automation of conventional image analysis is difficult because filaments may exhibit great variations in length and patchy autofluorescence. Moreover, individual filaments frequently cross each other in microscopic preparations, as deduced by modeling. This paper describes a novel approach based on object-...

  17. Image analysis using reflected light: an underutilized tool for interpreting magnetic fabrics

    Science.gov (United States)

    Waters-Tormey, C. L.; Liner, T.; Miller, B.; Kelso, P. R.

    2010-12-01

    Grain shape fabric analysis is one of the most common tools used to compare magnetic fabric and handsample scale rock fabric. Usually, this image analysis uses photomicrographs taken under plane or polarized light, which may be problematic if there are several dominant magnetic carriers (e.g., magnetite and pyrrhotite). The method developed for this study uses reflected light photomicrographs, and is effective in assessing the relative contribution of different phases to the opaque mineral shape-preferred orientation (SPO). Mosaics of high-resolution photomicrographs are first assembled and processed in Adobe Photoshop®. The Adobe Illustrator® “Live Trace” tool, whose settings can be optimized for reflected light images, completes initial automatic grain tracing and phase separation. Checking and re-classification of phases using reflected light properties and trace editing occurs manually. Phase identification is confirmed by microprobe or quantitative EDS, after which grain traces are easily reclassified as needed. Traces are imported into SPO2003 (Launeau and Robin, 2005) for SPO analysis. The combination of image resolution and magnification used here includes grains down to 10 microns. This work is part of an ongoing study examining fabric development across strain gradients in the granulite facies Capricorn ridge shear zone exposed in the Mt. Hay block of central Australia (Waters-Tormey et al., 2009). Strain marker shape fabrics, mesoscale structures, and strain localization adjacent to major lithologic boundaries all indicate that the deformation involved flattening, but that components of the deformation have been partitioned into different lithological domains. Thin sections were taken from the two gabbroic map units which volumetrically dominate the shear zone (northern and southern) using samples with similar outcrop fabric intensity. Prior thermomagnetic analyses indicate these units contain magnetite ± titanomagnetite ± ilmenite ± pyrrhotite

  18. An explorative childhood pneumonia analysis based on ultrasonic imaging texture features

    Science.gov (United States)

    Zenteno, Omar; Diaz, Kristians; Lavarello, Roberto; Zimic, Mirko; Correa, Malena; Mayta, Holger; Anticona, Cynthia; Pajuelo, Monica; Oberhelman, Richard; Checkley, William; Gilman, Robert H.; Figueroa, Dante; Castañeda, Benjamín.

    2015-12-01

    According to the World Health Organization, pneumonia is the respiratory disease with the highest pediatric mortality rate, accounting for 15% of all deaths of children under 5 years old worldwide. The diagnosis of pneumonia is commonly made by clinical criteria with support from ancillary studies and laboratory findings. Chest imaging is commonly done with chest X-rays and occasionally with a chest CT scan. Lung ultrasound is a promising alternative for chest imaging; however, interpretation is subjective and requires adequate training. In the present work, a two-class classification algorithm is presented, based on four gray-level co-occurrence matrix texture features (contrast, correlation, energy and homogeneity) extracted from lung ultrasound images of children aged between six months and five years. Ultrasound data were collected using an L14-5/38 linear transducer. The data consisted of 22 positive- and 68 negative-diagnosed B-mode cine loops selected by a medical expert and captured at the facilities of the Instituto Nacional de Salud del Niño (Lima, Peru), for a total of 90 videos obtained from twelve children diagnosed with pneumonia. The classification capacity of each feature was explored independently and the optimal threshold was selected by receiver operating characteristic (ROC) curve analysis. In addition, a principal component analysis was performed to evaluate the combined performance of all the features. Contrast and correlation were the two most significant features, and their combined classification performance via principal components was evaluated. The results revealed 82% sensitivity, 76% specificity, 78% accuracy and an area under the ROC curve of 0.85.
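
    The four features named above are standard GLCM properties; a minimal sketch with scikit-image (one offset and one angle, with parameters assumed for illustration) is:

```python
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older scikit-image

def glcm_features(img_u8):
    """Contrast, correlation, energy and homogeneity from one GLCM."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return {p: float(graycoprops(glcm, p)[0, 0])
            for p in ("contrast", "correlation", "energy", "homogeneity")}
```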

  19. Evaluating the spatio-temporal performance of sky-imager-based solar irradiance analysis and forecasts

    Science.gov (United States)

    Schmidt, Thomas; Kalisch, John; Lorenz, Elke; Heinemann, Detlev

    2016-03-01

    Clouds are the dominant source of small-scale variability in surface solar radiation and uncertainty in its prediction. However, the increasing share of solar energy in the worldwide electric power supply increases the need for accurate solar radiation forecasts. In this work, we present results of a very short term global horizontal irradiance (GHI) forecast experiment based on hemispheric sky images. A 2-month data set with images from one sky imager and high-resolution GHI measurements from 99 pyranometers distributed over 10 km by 12 km is used for validation. We developed a multi-step model and processed GHI forecasts up to 25 min with an update interval of 15 s. A cloud type classification is used to separate the time series into different cloud scenarios. Overall, the sky-imager-based forecasts do not outperform the reference persistence forecasts. Nevertheless, we find that analysis and forecast performance depends strongly on the predominant cloud conditions. Especially convective type clouds lead to high temporal and spatial GHI variability. For cumulus cloud conditions, the analysis error is found to be lower than that introduced by a single pyranometer if it is used representatively for the whole area in distances from the camera larger than 1-2 km. Moreover, forecast skill is much higher for these conditions compared to overcast or clear sky situations causing low GHI variability, which is easier to predict by persistence. In order to generalize the cloud-induced forecast error, we identify a variability threshold indicating conditions with positive forecast skill.
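
    The forecast skill referred to here is conventionally measured against persistence; a sketch of the standard definition (assuming RMSE as the error metric, as is common in this literature):

```python
import numpy as np

def forecast_skill(obs, forecast, persistence):
    """Skill score s = 1 - RMSE_forecast / RMSE_persistence;
    positive values mean the forecast beats persistence."""
    def rmse(x, y):
        return np.sqrt(np.mean((np.asarray(x) - np.asarray(y)) ** 2))
    return 1.0 - rmse(obs, forecast) / rmse(obs, persistence)
```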

  20. Evaluating the spatio-temporal performance of sky-imager-based solar irradiance analysis and forecasts

    Directory of Open Access Journals (Sweden)

    T. Schmidt

    2016-03-01

    Full Text Available Clouds are the dominant source of small-scale variability in surface solar radiation and uncertainty in its prediction. However, the increasing share of solar energy in the worldwide electric power supply increases the need for accurate solar radiation forecasts. In this work, we present results of a very short term global horizontal irradiance (GHI) forecast experiment based on hemispheric sky images. A 2-month data set with images from one sky imager and high-resolution GHI measurements from 99 pyranometers distributed over 10 km by 12 km is used for validation. We developed a multi-step model and processed GHI forecasts up to 25 min with an update interval of 15 s. A cloud type classification is used to separate the time series into different cloud scenarios. Overall, the sky-imager-based forecasts do not outperform the reference persistence forecasts. Nevertheless, we find that analysis and forecast performance depends strongly on the predominant cloud conditions. Especially convective type clouds lead to high temporal and spatial GHI variability. For cumulus cloud conditions, the analysis error is found to be lower than that introduced by a single pyranometer if it is used representatively for the whole area in distances from the camera larger than 1–2 km. Moreover, forecast skill is much higher for these conditions compared to overcast or clear sky situations causing low GHI variability, which is easier to predict by persistence. In order to generalize the cloud-induced forecast error, we identify a variability threshold indicating conditions with positive forecast skill.

  1. Evaluating the spatio-temporal performance of sky imager based solar irradiance analysis and forecasts

    Science.gov (United States)

    Schmidt, T.; Kalisch, J.; Lorenz, E.; Heinemann, D.

    2015-10-01

    Clouds are the dominant source of variability in surface solar radiation and uncertainty in its prediction. However, the increasing share of solar energy in the worldwide electric power supply increases the need for accurate solar radiation forecasts. In this work, we present results of a very short-term global horizontal irradiance (GHI) forecast experiment based on hemispheric sky images. A two-month dataset with images from one sky imager and high-resolution GHI measurements from 99 pyranometers distributed over 10 km by 12 km is used for validation. We developed a multi-step model and processed GHI forecasts up to 25 min with an update interval of 15 s. A cloud type classification is used to separate the time series into different cloud scenarios. Overall, the sky-imager-based forecasts do not outperform the reference persistence forecasts. Nevertheless, we find that analysis and forecast performance depend strongly on the predominant cloud conditions. Especially convective type clouds lead to high temporal and spatial GHI variability. For cumulus cloud conditions, the analysis error is found to be lower than that introduced by a single pyranometer if it is used representatively for the whole area at distances from the camera larger than 1-2 km. Moreover, forecast skill is much higher for these conditions compared to overcast or clear sky situations causing low GHI variability, which is easier to predict by persistence. In order to generalize the cloud-induced forecast error, we identify a variability threshold indicating conditions with positive forecast skill.

  2. Ultrasonic image analysis and image-guided interventions.

    Science.gov (United States)

    Noble, J Alison; Navab, Nassir; Becher, H

    2011-08-06

The fields of medical image analysis and computer-aided interventions deal with reducing the large volume of digital images (X-ray, computed tomography, magnetic resonance imaging (MRI), positron emission tomography and ultrasound (US)) to more meaningful clinical information using software algorithms. US is a core imaging modality employed in these areas, both in its own right and in conjunction with the other imaging modalities. It is receiving increased interest owing to the recent introduction of three-dimensional US, significant improvements in US image quality, and better understanding of how to design algorithms which exploit the unique strengths and properties of this real-time imaging modality. This article reviews the current state of the art in US image analysis and its application in image-guided interventions. The article concludes by giving a perspective from clinical cardiology, which is one of the most advanced areas of clinical application of US image analysis, and by describing some probable future trends in this important area of ultrasonic imaging research.

  3. Hand-Based Biometric Analysis

    Science.gov (United States)

    Bebis, George (Inventor); Amayeh, Gholamreza (Inventor)

    2015-01-01

Hand-based biometric analysis systems and techniques are described which provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, or translation of an input image. Additionally, the analysis re-uses commonly seen terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.

  4. Kernel based subspace projection of hyperspectral images

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten

In hyperspectral image analysis an exploratory approach to analyse the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel-based subspace projections of PCA and Maximum Autocorrelation Factors (MAF)...
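
    The core move described here, swapping a linear subspace projection for a kernelized one, can be illustrated in a few lines. Below is a minimal Python sketch assuming a hypothetical hyperspectral cube, with scikit-learn's KernelPCA standing in for the kernel PCA/MAF projections the record mentions; it is not the authors' implementation.

```python
# Hedged sketch: kernel PCA projection of hyperspectral pixels.
# Cube shape and kernel parameters are illustrative assumptions.
import numpy as np
from sklearn.decomposition import KernelPCA

cube = np.random.rand(32, 32, 16)             # stand-in cube (rows, cols, bands)
pixels = cube.reshape(-1, cube.shape[-1])     # one spectrum per row

kpca = KernelPCA(n_components=3, kernel="rbf", gamma=1.0)
scores = kpca.fit_transform(pixels)           # nonlinear subspace projection
component_images = scores.reshape(32, 32, 3)  # back to image geometry for display
```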

  5. Radiosurgical treatment planning for intracranial AVM based on images generated by principal component analysis. A simulation study

    International Nuclear Information System (INIS)

    Kawaguchi, Osamu; Kunieda, Etsuo; Nyui, Yoshiyuki

    2009-01-01

    One of the most important factors in stereotactic radiosurgery (SRS) for intracranial arteriovenous malformation (AVM) is to determine accurate target delineation of the nidus. However, since intracranial AVMs are complicated in structure, it is often difficult to clearly determine the target delineation. The purpose of this study was to investigate the usefulness of principal component analysis (PCA) on intra-arterial contrast enhanced dynamic CT (IADCT) images as a tool for delineating accurate target volumes for stereotactic radiosurgery of AVMs. IADCT and intravenous contrast-enhanced CT (IVCT) were used to examine 4 randomly selected cases of AVM. PCA images were generated from the IADCT data. The first component images were considered feeding artery predominant, the second component images were considered draining vein predominant, and the third component images were considered background. Target delineations were first carried out from IVCT, and then again while referring to the first and second components of the PCA images. Dose calculation simulations for radiosurgical treatment plans with IVCT and PCA images were performed. Dose volume histograms of the vein areas as well as the target volumes were compared. In all cases, the calculated target volumes based on IVCT images were larger than those based on PCA images, and the irradiation doses for the vein areas were reduced. In this study, we simulated radiosurgical treatment planning for intracranial AVM based on PCA images. By using PCA images, the irradiation doses for the vein areas were substantially reduced. (author)

  6. Developing students’ ideas about lens imaging: teaching experiments with an image-based approach

    Science.gov (United States)

    Grusche, Sascha

    2017-07-01

    Lens imaging is a classic topic in physics education. To guide students from their holistic viewpoint to the scientists’ analytic viewpoint, an image-based approach to lens imaging has recently been proposed. To study the effect of the image-based approach on undergraduate students’ ideas, teaching experiments are performed and evaluated using qualitative content analysis. Some of the students’ ideas have not been reported before, namely those related to blurry lens images, and those developed by the proposed teaching approach. To describe learning pathways systematically, a conception-versus-time coordinate system is introduced, specifying how teaching actions help students advance toward a scientific understanding.

  7. Countering Stryker’s Punch: Algorithmically Filling the Black Hole

    OpenAIRE

    Michael J. Bennett

    2017-01-01

    Two current digital image editing programs are examined in the context of filling in missing visual image data from hole-punched United States Farm Security Administration (FSA) negatives. Specifically, Photoshop's Content-Aware Fill feature and GIMP's Resynthesizer plugin are evaluated and contrasted against comparable images. A possible automated workflow geared towards large scale editing of similarly hole-punched negatives is also explored. Finally, potential future research based upon th...

  8. Application of a digital technique in evaluating the reliability of shade guides.

    Science.gov (United States)

    Cal, E; Sonugelen, M; Guneri, P; Kesercioglu, A; Kose, T

    2004-05-01

There appears to be a need for a reliable method of quantifying tooth colour and analysing shade. Therefore, the primary objective of this study was to show the applicability of graphic software in colour analysis, and the secondary objective was to investigate the reliability of commercial shade guides produced by the same manufacturer, using this digital technique. After confirming the reliability and reproducibility of the digital method using self-assessed coloured images, three shade guides from the same manufacturer were photographed in daylight and in studio environments with a digital camera and saved in tagged image file format (TIFF). Colour analysis of each photograph was performed using the Adobe Photoshop 4.0 graphic program. Luminosity and red, green, blue (L and RGB) values of each shade tab of each shade guide were measured and the data were subjected to statistical analysis using the repeated-measures ANOVA test. The L and RGB values of the images taken in daylight differed significantly from those of the images taken in the studio environment (P < 0.05). In both environments, the luminosity and red values of the shade tabs were significantly different from each other (P < 0.05). It was concluded that, when the environmental conditions were kept constant, the Adobe Photoshop 4.0 colour analysis program could be used to analyse the colour of images. On the other hand, the results revealed that the accuracy of shade tabs widely used in colour matching should be readdressed.
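
    For readers who want to reproduce this kind of measurement outside Photoshop, the sketch below computes mean luminosity and RGB values over a rectangular shade-tab region with Pillow and NumPy. The file name, crop box, and luma weights are illustrative assumptions, not values taken from the study.

```python
# Hedged sketch: mean L and RGB measurement over an image region.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("shade_tab.tif").convert("RGB"), dtype=float)
region = img[100:200, 150:250]                    # hypothetical shade-tab crop

r = region[..., 0].mean()
g = region[..., 1].mean()
b = region[..., 2].mean()
luminosity = 0.299 * r + 0.587 * g + 0.114 * b    # common Rec. 601 luma approximation
print(f"L={luminosity:.1f}  R={r:.1f}  G={g:.1f}  B={b:.1f}")
```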

  9. A data grid for imaging-based clinical trials

    Science.gov (United States)

    Zhou, Zheng; Chao, Sander S.; Lee, Jasper; Liu, Brent; Documet, Jorge; Huang, H. K.

    2007-03-01

Clinical trials play a crucial role in testing new drugs or devices in modern medicine. Medical imaging has also become an important tool in clinical trials because images provide a unique and fast diagnosis with visual observation and quantitative assessment. A typical imaging-based clinical trial consists of: 1) a well-defined, rigorous clinical trial protocol; 2) a radiology core that has a quality control mechanism, a biostatistics component, and a server for storing and distributing data and analysis results; and 3) many field sites that generate and send image studies to the radiology core. As the number of clinical trials increases, it becomes a challenge for a radiology core servicing multiple trials to have a server robust enough to administer and quickly distribute information to participating radiologists/clinicians worldwide. The Data Grid can satisfy the aforementioned requirements of imaging-based clinical trials. In this paper, we present a Data Grid architecture for imaging-based clinical trials. A Data Grid prototype has been implemented in the Image Processing and Informatics (IPI) Laboratory at the University of Southern California to test and evaluate performance in storing trial images and analysis results for a clinical trial. The implementation methodology and evaluation protocol of the Data Grid are presented.

  10. A simple method for detecting tumor in T2-weighted MRI brain images. An image-based analysis

    International Nuclear Information System (INIS)

    Lau, Phooi-Yee; Ozawa, Shinji

    2006-01-01

The objective of this paper is to present a decision support system which uses a computer-based procedure to detect tumor blocks or lesions in digitized medical images. The authors developed a simple method with low computation effort to detect tumors on T2-weighted Magnetic Resonance Imaging (MRI) brain images, focusing on the connection between spatial pixel values and tumor properties from four different perspectives: cases having minuscule differences between two images using a fixed block-based method, tumor shape and size using the edge and binary images, tumor properties based on texture values using the spatial pixel intensity distribution controlled by a global discriminate value, and the occurrence of content-specific tumor pixels in threshold images. Measurements were performed on the following medical datasets: images taken at different time intervals, and images of different brain diseases on single and multiple slices. Experimental results have revealed that the proposed technique incurred an overall error smaller than that of other proposed methods. In particular, the proposed method reduced both false-alarm and missed-alarm errors, which demonstrates the effectiveness of the technique. In this paper, we also present a prototype system, known as PCB, to evaluate the performance of the proposed methods by actual experiments, comparing detection accuracy and system performance. (author)
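
    The fixed block-based comparison of two images can be sketched in a few lines. The following is a minimal illustration under assumed parameters (block size, global threshold); it mirrors the general idea only, not the authors' exact procedure.

```python
# Hedged sketch: flag blocks whose mean intensity differs between two
# registered T2-weighted slices by more than a global threshold.
import numpy as np

def flag_blocks(img_a, img_b, block=16, thresh=20.0):
    flagged = []
    for y in range(0, img_a.shape[0] - block + 1, block):
        for x in range(0, img_a.shape[1] - block + 1, block):
            mean_a = img_a[y:y + block, x:x + block].mean()
            mean_b = img_b[y:y + block, x:x + block].mean()
            if abs(mean_a - mean_b) > thresh:
                flagged.append((y, x))    # candidate lesion block
    return flagged
```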

  11. Content Based Retrieval System for Magnetic Resonance Images

    International Nuclear Information System (INIS)

    Trojachanets, Katarina

    2010-01-01

The amount of medical images is continuously increasing as a consequence of the constant growth and development of techniques for digital image acquisition. Manual annotation and description of each image is an impractical, expensive and time-consuming approach. Moreover, it is an imprecise and insufficient way of describing all the information stored in medical images. This creates the need for efficient image storage, annotation and retrieval systems. Content based image retrieval (CBIR) emerges as an efficient approach for digital image retrieval from large databases. It includes two phases. In the first phase, the visual content of the image is analyzed and the feature extraction process is performed. An appropriate descriptor, namely a feature vector, is then associated with each image. These descriptors are used in the second phase, i.e. the retrieval process. With the aim of improving the efficiency and precision of content based image retrieval systems, feature extraction and automatic image annotation techniques are the subject of continuous research and development. Including classification techniques in the retrieval process enables automatic image annotation in an existing CBIR system and contributes to more efficient and easier image organization in the system. Applying content based retrieval in the field of magnetic resonance is a big challenge. Magnetic resonance imaging is an image based diagnostic technique which is widely used in medical environments, and accordingly the number of magnetic resonance images is growing enormously. Magnetic resonance images provide plentiful medical information at high resolution and are of a specific nature. Thus, the capability of CBIR systems to retrieve images from a large database is of great importance for efficient analysis of this kind of images. The aim of this thesis is to propose a content based retrieval system architecture for magnetic resonance images. To provide the system efficiency, feature

  12. Silhouette-based approach of 3D image reconstruction for automated image acquisition using robotic arm

    Science.gov (United States)

    Azhar, N.; Saad, W. H. M.; Manap, N. A.; Saad, N. M.; Syafeeza, A. R.

    2017-06-01

This study presents an approach to 3D image reconstruction using an autonomous robotic arm for the image acquisition process. A low-cost automated imaging platform is created using a pair of G15 servo motors connected in series to an Arduino UNO as the main microcontroller. Two sets of sequential images were obtained using different projection angles of the camera. The silhouette-based approach is used in this study for 3D reconstruction from the sequential images captured from several different angles of the object. In addition, an analysis of the effect of different numbers of sequential images on the accuracy of the 3D model reconstruction was carried out with a fixed projection angle of the camera. The factors affecting the 3D reconstruction are discussed and the overall result of the analysis is concluded according to the prototype of the imaging platform.

  13. Interpretation of medical images by model guided analysis

    International Nuclear Information System (INIS)

    Karssemeijer, N.

    1989-01-01

Progress in the development of digital pictorial information systems stimulates a growing interest in the use of image analysis techniques in medicine. Especially when precise quantitative information is required, the use of fast and reproducible computer analysis may be more appropriate than relying on visual judgement only. Such quantitative information can be valuable, for instance, in diagnostics or in irradiation therapy planning. As medical images are mostly recorded in a prescribed way, human anatomy guarantees a common image structure for each particular type of exam. In this thesis it is investigated how to make use of this a priori knowledge to guide image analysis. For that purpose, models are developed which are suited to capture common image structure. The first part of this study is devoted to an analysis of nuclear medicine images of myocardial perfusion. In ch. 2 a model of these images is designed in order to represent characteristic image properties. It is shown that for these relatively simple images a compact symbolic description can be achieved without significant loss of the diagnostic importance of several image properties. Possibilities for automatic interpretation of more complex images are investigated in the following chapters. The central topic is segmentation of organs. Two methods are proposed and tested on a set of abdominal X-ray CT scans. Ch. 3 describes a serial approach based on a semantic network and the use of search areas. Relational constraints are used to guide the image processing and to classify detected image segments. In chs. 4 and 5 a more general parallel approach is utilized, based on a Markov random field image model. A stochastic model used to represent prior knowledge about the spatial arrangement of organs is implemented as an external field. (author). 66 refs.; 27 figs.; 6 tabs

  14. Page Layout Analysis of the Document Image Based on the Region Classification in a Decision Hierarchical Structure

    Directory of Open Access Journals (Sweden)

    Hossein Pourghassem

    2010-10-01

The conversion of a document image to its electronic version is a very important problem in saving, searching and retrieval applications in office automation systems. For this purpose, analysis of the document image is necessary. In this paper, a hierarchical classification structure based on a two-stage segmentation algorithm is proposed. In this structure, the image is segmented using the proposed two-stage segmentation algorithm. Then, the type of each image region, such as document or non-document, is determined using multiple classifiers in the hierarchical classification structure. The proposed segmentation algorithm uses two algorithms based on wavelet transform and thresholding. Texture features such as correlation, homogeneity and entropy extracted from the co-occurrence matrix, as well as two new features based on the wavelet transform, are used to classify and label the regions of the image. The hierarchical classifier consists of two Multilayer Perceptron (MLP) classifiers and a Support Vector Machine (SVM) classifier. The proposed algorithm is evaluated on a database consisting of document and non-document images collected from the Internet. The experimental results show the efficiency of the proposed approach in region segmentation and classification. The proposed algorithm achieves an accuracy rate of 97.5% in classifying the regions.
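
    The co-occurrence texture features named here (correlation, homogeneity, entropy) can be computed with scikit-image. A minimal sketch follows, assuming a recent scikit-image version and an arbitrary test patch; entropy is derived directly from the normalised GLCM, since graycoprops does not expose it.

```python
# Hedged sketch: GLCM texture features for one image region.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

patch = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in region
glcm = graycomatrix(patch, distances=[1], angles=[0],
                    levels=256, symmetric=True, normed=True)

correlation = graycoprops(glcm, "correlation")[0, 0]
homogeneity = graycoprops(glcm, "homogeneity")[0, 0]
p = glcm[:, :, 0, 0]                                       # normalised co-occurrence probabilities
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
```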

  15. Improvements in PIE-techniques at the IFE hot-laboratory. 'Neutron radiography, three dimensional profilometry and image compilation of PIE data for visualization in an image based user-interface'

    International Nuclear Information System (INIS)

    Jenssen, H.K.; Oberlaender, B.C.

    2002-01-01

The PIE techniques used at IFE are continuously improved through upgrading of equipment and methods, e.g. image handling techniques and components utilized in data acquisition and editing. To improve the quality and spatial resolution of neutron radiographs, the normal technique was complemented with another method: the dysprosium foil/X-ray film technique is supplemented with a track-etch recorder consisting of a cellulose nitrate film. For further examination of the neutron radiographs, the cellulose nitrate film can be digitized to allow electronic image treatment. Promising results were obtained with this technique, namely higher spatial resolution compared to the normal technique, high contrast and sharp neutron radiography images. The traditional uniaxial profilometry of fuel rods was modified so that diameter/bow measurements are possible at several angular orientations during one acquisition sequence. This extension is very useful in several ways; for instance, the built-in data symmetry of the method is used to check the correctness of the measurement results. Diameter and bow measurements give information on cladding irregularities and fuel rod profiles. Implementation of electronic image handling techniques is particularly useful in PIE when data are collected and compiled in an image file. Inspection and examination of the file contents (examination results) are possible through an ideal user-interface, i.e. Adobe Photoshop software with navigator possibilities. Examples incorporating PIE data acquired from neutron radiography, visual inspection and ceramography are utilized to illustrate the user-interface and some of its possibilities. (author)

  16. Measuring stone surface area from a radiographic image is accurate and reproducible with the help of an imaging program.

    Science.gov (United States)

    Kurien, Abraham; Ganpule, Arvind; Muthu, V; Sabnis, R B; Desai, Mahesh

    2009-01-01

The surface area of the stone on a radiographic image is one of the more suitable parameters defining stone bulk. The widely accepted method of measuring stone surface area is to count the number of square millimeters enclosed within a tracing of the stone outline on graph paper. This method is time consuming and cumbersome, with potential for human error, especially when multiple measurements are needed. The purpose of this study was to evaluate the accuracy, efficiency, and reproducibility of a commercially available imaging program, Adobe Photoshop 7.0, for the measurement of stone surface area. The instructions to calculate area using the software are simple and easy in a Windows-based format. The accuracy of the imaging software was estimated by measuring surface areas of shapes of known mathematical areas. The efficiency and reproducibility were then evaluated from radiographs of 20 persons with radiopaque upper-tract urinary stones. The surface areas of stone images were measured using both graph paper and the imaging software. Measurements were repeated after 10 days to assess the reproducibility of the techniques. The time taken to measure the area by the two methods was also assessed separately. The accuracy of the imaging software was estimated to be 98.7%. The correlation coefficient between the two methods was R2 = 0.97. The mean percentage variation using the imaging software was 0.68%, while it was 6.36% with the graph paper. The mean time taken to measure using the image analyzer and graph paper was 1.9 +/- 0.8 minutes and 4.5 +/- 1.08 minutes, respectively. The imaging software thus proved an accurate, efficient, and reproducible method of measuring stone surface area from radiographs compared with manual measurements using graph paper.
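
    The measurement itself reduces to counting the pixels inside the stone outline and converting with a calibration factor. A minimal sketch of that idea follows, with an assumed threshold-based segmentation standing in for Photoshop's manual selection tools.

```python
# Hedged sketch: surface area by pixel counting on a radiograph.
import numpy as np

def stone_area_mm2(image, threshold, mm_per_pixel):
    mask = image > threshold                        # crude stand-in for outlining the stone
    return float(mask.sum()) * mm_per_pixel ** 2    # pixel count -> square millimetres
```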

  17. Object-based image analysis and data mining for building ontology of informal urban settlements

    Science.gov (United States)

    Khelifa, Dejrriri; Mimoun, Malki

    2012-11-01

During recent decades, unplanned settlements have appeared around the big cities of most developing countries and, as a consequence, numerous problems have emerged. Thus the identification of different kinds of settlements is a major concern and challenge for the authorities of many countries. Very High Resolution (VHR) remotely sensed imagery has proved to be a very promising way to detect different kinds of settlements, especially through the use of new object-based image analysis (OBIA). The most important key is in understanding what characteristics make unplanned settlements differ from planned ones; most experts characterize unplanned urban areas by small building sizes at high densities, no orderly road arrangement, and a lack of green spaces. Knowledge about different kinds of settlements can be captured as a domain ontology that has the potential to organize knowledge in a formal, understandable and sharable way. In this work we focus on extracting knowledge from VHR images and experts' knowledge. We used an object-based strategy by segmenting a VHR image taken over an urban area into regions of homogeneous pixels at an adequate scale level and then computing spectral, spatial and textural attributes for each region to create objects. Genetic-based data mining was applied to generate highly predictive and comprehensible classification rules based on selected samples from the OBIA result. Optimized intervals of relevant attributes are found and linked with land use types to form classification rules. The unplanned areas were separated from the planned ones through analysis of the line segments detected from the input image. Finally, a simple ontology was built based on the previous processing steps. The approach has been tested on VHR images of one of the biggest Algerian cities, which has grown considerably in recent decades.

  18. Image analysis for gene expression based phenotype characterization in yeast cells

    NARCIS (Netherlands)

    Tleis, M.

    2016-01-01

    Image analysis of objects in the microscope scale requires accuracy so that measurements can be used to differentiate between groups of objects that are being studied. This thesis deals with measurements in yeast biology that are obtained through microscope images. We study the algorithms and

  19. Image formation and image analysis in electron microscopy

    International Nuclear Information System (INIS)

    Heel, M. van.

    1981-01-01

    This thesis covers various aspects of image formation and image analysis in electron microscopy. The imaging of relatively strong objects in partially coherent illumination, the coherence properties of thermionic emission sources and the detection of objects in quantum noise limited images are considered. IMAGIC, a fast, flexible and friendly image analysis software package is described. Intelligent averaging of molecular images is discussed. (C.F.)

  20. Big data in multiple sclerosis: development of a web-based longitudinal study viewer in an imaging informatics-based eFolder system for complex data analysis and management

    Science.gov (United States)

    Ma, Kevin; Wang, Ximing; Lerner, Alex; Shiroishi, Mark; Amezcua, Lilyana; Liu, Brent

    2015-03-01

In the past, we have developed and displayed a multiple sclerosis eFolder system for patient data storage, image viewing, and automatic lesion quantification results stored in DICOM-SR format. The web-based system aims to be integrated in DICOM-compliant clinical and research environments to aid clinicians in patient treatment and disease tracking. This year, we have further developed the eFolder system to handle big data analysis and data mining in today's medical imaging field. The database has been updated to allow data mining and data look-up from DICOM-SR lesion analysis contents. Longitudinal studies are tracked, and any changes in lesion volumes and brain parenchyma volumes are calculated and shown on the web-based user interface as graphical representations. Longitudinal lesion characteristic changes are compared with patients' disease history, including treatments, symptom progressions, and any other changes in the disease profile. The image viewer is updated such that imaging studies can be viewed side-by-side to allow visual comparisons. We aim to use the web-based medical imaging informatics eFolder system to demonstrate big data analysis in medical imaging, and to use the analysis results to predict MS disease trends and patterns in Hispanic and Caucasian populations in our pilot study. The discovery of disease patterns among the two ethnicities is a big data analysis result that will help lead to personalized patient care and treatment planning.

  1. GRAIN-SIZE MEASUREMENTS OF FLUVIAL GRAVEL BARS USING OBJECT-BASED IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Pedro Castro

    2018-01-01

Traditional techniques for classifying the average grain size in gravel bars require manual measurements of each grain's diameter. Aiming at productivity, more efficient methods have been developed by applying remote sensing techniques and digital image processing. This research proposes an Object-Based Image Analysis methodology to classify gravel bars in fluvial channels. First, the study evaluates the performance of the multiresolution segmentation algorithm (available in the eCognition Developer software) in shape recognition. A linear regression model was applied to assess the correlation between the gravels' reference delineation and the gravels recognized by the segmentation algorithm. Furthermore, the supervised classification was validated by comparing the results with field data using the t-statistic test and the kappa index. Afterwards, the grain size distribution in gravel bars along the upper Bananeiras River, Brazil was mapped. The multiresolution segmentation results did not prove to be consistent across all the samples. Nonetheless, the P01 sample showed R2 = 0.82 for the diameter estimation and R2 = 0.45 for the recognition of the elliptical fit. The t-statistic showed no significant difference in the efficiencies of the grain size classifications by the field survey data and the object-based supervised classification (t = 2.133 at a significance level of 0.05). However, the kappa index was 0.54. The analysis of both the segmentation and classification results did not prove to be replicable.

  2. Research on lossless compression of true color RGB image with low time and space complexity

    Science.gov (United States)

    Pan, ShuLin; Xie, ChengJun; Xu, Lin

    2008-12-01

This paper eliminates the correlated redundancy of space and energy by using a DWT lifting scheme and reduces the complexity of the image by using an algebraic transform among the RGB components. An improved Rice coding algorithm is proposed, which presents an enumerating DWT lifting scheme that fits images of any size through image renormalization. The algorithm has a coding and decoding process without backtracking when dealing with the pixels of an image. It supports LOCO-I and can also be applied to a coder/decoder. Simulation analysis indicates that the proposed method can achieve high image compression. Compared with Lossless-JPG, PNG (Microsoft), PNG (Rene), PNG (Photoshop), PNG (Anix PicViewer), PNG (ACDSee), PNG (Ulead Photo Explorer), JPEG2000, PNG (KoDa Inc), SPIHT and JPEG-LS, the lossless image compression ratio improved by 45%, 29%, 25%, 21%, 19%, 17%, 16%, 15%, 11%, 10.5% and 10%, respectively, on 24 RGB images provided by KoDa Inc. On a Pentium IV (2.20 GHz CPU, 256 MB RAM), the coding speed of the proposed coder is about 21 times that of SPIHT, with an efficiency gain of about 166%; the decoding speed is about 17 times that of SPIHT, with an efficiency gain of about 128%.
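
    As background for the DWT lifting scheme mentioned above, the sketch below shows a one-level integer Haar-style lifting step (predict/update), which is losslessly invertible. It is a generic textbook lifting step under assumed even-length input, not the paper's enumerating variant.

```python
# Hedged sketch: integer Haar lifting (the S-transform), losslessly invertible.
import numpy as np

def haar_lift_forward(x):
    even, odd = x[0::2].astype(int), x[1::2].astype(int)
    detail = odd - even            # predict odd samples from even neighbours
    approx = even + detail // 2    # update to preserve the running mean
    return approx, detail

def haar_lift_inverse(approx, detail):
    even = approx - detail // 2
    odd = even + detail
    out = np.empty(even.size + odd.size, dtype=int)
    out[0::2], out[1::2] = even, odd
    return out

signal = np.array([10, 12, 9, 7, 14, 15], dtype=int)
a, d = haar_lift_forward(signal)
assert np.array_equal(haar_lift_inverse(a, d), signal)   # perfect reconstruction
```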

  3. Wavelet-Based Bayesian Methods for Image Analysis and Automatic Target Recognition

    National Research Council Canada - National Science Library

    Nowak, Robert

    2001-01-01

.... We have developed two new techniques. First, we have developed a wavelet-based approach to image restoration and deconvolution problems using Bayesian image models and an alternating-maximization method...

  4. Reliability of the imaging software in the preoperative planning of the open-wedge high tibial osteotomy.

    Science.gov (United States)

    Lee, Yong Seuk; Kim, Min Kyu; Byun, Hae Won; Kim, Sang Bum; Kim, Jin Goo

    2015-03-01

The purpose of this study was to verify a recently developed picture-archiving and communications system (PACS)-Photoshop method by comparing reliabilities between the real-size paper template and PACS-Photoshop methods in preoperative planning of open-wedge high tibial osteotomy. A prospective case series was conducted, including patients with medial osteoarthritis undergoing open-wedge high tibial osteotomy. In the preoperative planning, the PACS-Photoshop method and the real-size paper template method were used simultaneously in all patients. The preoperative hip-knee-ankle angle, and the height and angle of the osteotomy, were evaluated. The reliability of the newly devised method was evaluated, and the consistency between the two methods was assessed using the intra-class correlation coefficient. Using the PACS-Photoshop method, the mean correction angle and height of the osteotomy gap for rater 1 were 11.7° ± 3.6° and 10.7 ± 3.6 mm, respectively; for rater 2 they were 12.0° ± 2.6° and 10.8 ± 3.6 mm, respectively. The inter- and intra-rater reliabilities of the correction angle were 0.956 ~ 0.979 and 0.980 ~ 0.992, respectively. The inter- and intra-rater reliabilities of the height of the osteotomy gap were 0.968 ~ 0.985 and 0.971 ~ 0.994, respectively. Using the real-size paper template method, the mean values of the correction angle and height of the osteotomy gap were 11.9° ± 3.6° and 10.8 ± 3.6 mm, respectively. The consistency between the two methods, assessed by comparing the means of the correction angle and the height of the osteotomy gap, was 0.985 and 0.985, respectively. The PACS-Photoshop method enables direct measurement of the height of the osteotomy gap with high reliability.

  5. Plant phenomics: an overview of image acquisition technologies and image data analysis algorithms.

    Science.gov (United States)

    Perez-Sanz, Fernando; Navarro, Pedro J; Egea-Cortines, Marcos

    2017-11-01

The study of phenomes, or phenomics, has been a central part of biology. The field of automatic phenotype acquisition technologies based on images has seen important advances in recent years. As with other high-throughput technologies, it addresses a common set of problems, including data acquisition and analysis. In this review, we give an overview of the main systems developed to acquire images. We give an in-depth analysis of image processing with its major issues and the algorithms that are being used or emerging as useful to obtain data out of images in an automatic fashion.

  6. Connecting imaging mass spectrometry and magnetic resonance imaging-based anatomical atlases for automated anatomical interpretation and differential analysis.

    Science.gov (United States)

    Verbeeck, Nico; Spraggins, Jeffrey M; Murphy, Monika J M; Wang, Hui-Dong; Deutch, Ariel Y; Caprioli, Richard M; Van de Plas, Raf

    2017-07-01

Imaging mass spectrometry (IMS) is a molecular imaging technology that can measure thousands of biomolecules concurrently without prior tagging, making it particularly suitable for exploratory research. However, the data size and dimensionality often make thorough extraction of relevant information impractical. To help guide and accelerate IMS data analysis, we recently developed a framework that integrates IMS measurements with anatomical atlases, opening up opportunities for anatomy-driven exploration of IMS data. One example is the automated anatomical interpretation of ion images, where empirically measured ion distributions are automatically decomposed into their underlying anatomical structures. While offering significant potential, IMS-atlas integration has thus far been restricted to the Allen Mouse Brain Atlas (AMBA) and mouse brain samples. Here, we expand the applicability of this framework by extending towards new animal species and a new set of anatomical atlases retrieved from the Scalable Brain Atlas (SBA). Furthermore, as many SBA atlases are based on magnetic resonance imaging (MRI) data, a new registration pipeline was developed that enables direct non-rigid IMS-to-MRI registration. These developments are demonstrated on protein-focused FTICR IMS measurements from coronal brain sections of a Parkinson's disease (PD) rat model. The measurements are integrated with an MRI-based rat brain atlas from the SBA. The new rat-focused IMS-atlas integration is used to perform automated anatomical interpretation and to find differential ions between healthy and diseased tissue. IMS-atlas integration can serve as an important accelerator in IMS data exploration, and with these new developments it can now be applied to a wider variety of animal species and modalities.

  7. Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.

    Science.gov (United States)

    Liu, Min; Wang, Xueping; Zhang, Hongzhong

    2018-03-01

In the biomedical field, digital multi-focal images are very important for the documentation and communication of specimen data, because the morphological information of a transparent specimen can be captured as a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We present a deep convolutional neural network (CNN) image fusion based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN based image fusion technique is used to combine the relevant information of the multi-focal images within a given stack into a single image, which is more informative and complete than any single image in the stack. In addition, the multi-focal images within a stack are fused along 3 orthogonal directions, and the multiple features extracted from the fused images along different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors - texture, shape, different instances within the same class and different classes of objects - we embed the deep CNN based image fusion method within a multilinear framework to propose an image fusion based multilinear classifier. The experimental results on nematode multi-focal image stacks demonstrated that the deep CNN image fusion based multilinear classifier reaches a higher classification rate (95.7%) than the previous multilinear based approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed deep CNN image fusion based multilinear approach shows great potential in building an automated nematode taxonomy system for nematologists and is effective for classifying multi-focal image stacks.
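
    The CCA fusion step can be sketched with scikit-learn's CCA acting as a stand-in for whatever implementation the authors used; the feature matrices below are random placeholders for CNN features extracted from two fusion directions.

```python
# Hedged sketch: combining two feature views with canonical correlation analysis.
import numpy as np
from sklearn.cross_decomposition import CCA

feats_xy = np.random.rand(200, 64)    # placeholder features, fusion direction 1
feats_xz = np.random.rand(200, 64)    # placeholder features, fusion direction 2

cca = CCA(n_components=16)
u, v = cca.fit_transform(feats_xy, feats_xz)   # maximally correlated projections
fused = np.hstack([u, v])                      # combined descriptor per image stack
```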

  8. Comparison of subset-based local and FE-based global digital image correlation: Theoretical error analysis and validation

    KAUST Repository

    Pan, B.; Wang, Bo; Lubineau, Gilles

    2016-01-01

    Subset-based local and finite-element-based (FE-based) global digital image correlation (DIC) approaches are the two primary image matching algorithms widely used for full-field displacement mapping. Very recently, the performances

  9. Pattern-based compression of multi-band image data for landscape analysis

    CERN Document Server

    Myers, Wayne L; Patil, Ganapati P

    2006-01-01

    This book describes an integrated approach to using remotely sensed data in conjunction with geographic information systems for landscape analysis. Remotely sensed data are compressed into an analytical image-map that is compatible with the most popular geographic information systems as well as freeware viewers. The approach is most effective for landscapes that exhibit a pronounced mosaic pattern of land cover. The image maps are much more compact than the original remotely sensed data, which enhances utility on the internet. As value-added products, distribution of image-maps is not affected by copyrights on original multi-band image data.

  10. Hessian-based quantitative image analysis of host-pathogen confrontation assays.

    Science.gov (United States)

    Cseresnyes, Zoltan; Kraibooj, Kaswara; Figge, Marc Thilo

    2018-03-01

Host-fungus interactions have gained a lot of interest in the past few decades, mainly due to an increasing number of fungal infections that are often associated with a high mortality rate in the absence of effective therapies. These interactions can be studied at the genetic level or at the functional level via imaging. Here, we introduce a new image processing method that quantifies the interaction between host cells and fungal invaders, for example, alveolar macrophages and the conidia of Aspergillus fumigatus. The new technique relies on the information content of transmitted light bright field microscopy images, utilizing the Hessian matrix eigenvalues to distinguish between unstained macrophages and the background, as well as between macrophages and fungal conidia. The performance of the new algorithm was measured by comparing the results of our method with that of an alternative approach that was based on fluorescence images from the same dataset. The comparison shows that the new algorithm performs very similarly to the fluorescence-based version. Consequently, the new algorithm is able to segment and characterize unlabeled cells, thus reducing the time and expense that would be spent on the fluorescent labeling in preparation for phagocytosis assays. By extending the proposed method to the label-free segmentation of fungal conidia, we will be able to reduce the need for fluorescence-based imaging even further. Our approach should thus help to minimize the possible side effects of fluorescence labeling on biological functions.
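
    The Hessian-eigenvalue idea can be sketched with scikit-image. The snippet below assumes a recent scikit-image version; the sigma and thresholds are illustrative assumptions, not the published algorithm's parameters.

```python
# Hedged sketch: per-pixel Hessian eigenvalues for blob-vs-background cues.
import numpy as np
from skimage.feature import hessian_matrix, hessian_matrix_eigvals

image = np.random.rand(128, 128)                 # stand-in bright-field frame
H = hessian_matrix(image, sigma=2.0, order="rc")
eig_hi, eig_lo = hessian_matrix_eigvals(H)       # eigenvalues, larger and smaller

# Strong curvature of the same sign in both principal directions suggests
# compact blob-like structures (e.g. conidia); weak responses map to background.
blob_like = (eig_hi < -0.01) & (eig_lo < -0.02)  # thresholds are illustrative
```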

  11. Finite-Element Software for Conceptual Design

    DEFF Research Database (Denmark)

    Lindemann, J.; Sandberg, G.; Damkilde, Lars

    2010-01-01

    and research. Forcepad is an effort to provide a conceptual design and teaching tool in a finite-element software package. Forcepad is a two-dimensional finite-element application based on the same conceptual model as image editing applications such as Adobe Photoshop or Microsoft Paint. Instead of using...

  12. New approach to gallbladder ultrasonic images analysis and lesions recognition.

    Science.gov (United States)

    Bodzioch, Sławomir; Ogiela, Marek R

    2009-03-01

This paper presents a new approach to gallbladder ultrasonic image processing and analysis for the detection of disease symptoms on processed images. First, a new method of filtering gallbladder contours from USG images is presented. A major stage in this filtration is to segment and section off the areas occupied by the organ. In most cases this procedure is based on filtration, which plays a key role in the process of diagnosing pathological changes. Unfortunately, ultrasound images are among the most troublesome to analyse owing to the echogenic inconsistency of the structures under observation. This paper provides an inventive algorithm for the holistic extraction of gallbladder image contours, based on rank filtration as well as on the analysis of histogram sections of the examined organs. The second part concerns detecting lesion symptoms of the gallbladder. Automating a process of diagnosis always comes down to developing algorithms that analyze the object of such diagnosis and verify the occurrence of symptoms related to a given affection. Usually the final stage is to make a diagnosis based on the detected symptoms. This last stage can be carried out either through dedicated expert systems or through a more classic pattern analysis approach, such as using rules to determine illness based on detected symptoms. This paper discusses the pattern analysis algorithms for gallbladder image interpretation towards classification of the most frequent illness symptoms of this organ.

  13. Nondestructive Analysis of Tumor-Associated Membrane Protein Integrating Imaging and Amplified Detection in situ Based on Dual-Labeled DNAzyme.

    Science.gov (United States)

    Chen, Xiaoxia; Zhao, Jing; Chen, Tianshu; Gao, Tao; Zhu, Xiaoli; Li, Genxi

    2018-01-01

Comprehensive analysis of the expression level and location of tumor-associated membrane proteins (TMPs) is of vital importance for the profiling of tumor cells. Currently, two kinds of independent techniques, i.e. ex situ detection and in situ imaging, are usually required for the quantification and localization of TMPs respectively, resulting in some inevitable problems. Methods: Herein, based on a well-designed and fluorophore-labeled DNAzyme, we develop an integrated and facile method in which imaging and quantification of TMPs in situ are achieved simultaneously in a single system. The labeled DNAzyme not only produces localized fluorescence for the visualization of TMPs but also catalyzes the cleavage of a substrate to produce quantitative fluorescent signals that can be collected from solution for the sensitive detection of TMPs. Results: Results from the DNAzyme-based in situ imaging and quantification of TMPs match well with traditional immunofluorescence and western blotting. In addition to the advantage of being two-in-one, the DNAzyme-based method is highly sensitive, allowing the detection of TMPs in only 100 cells. Moreover, the method is nondestructive: cells retain their physiological activity after analysis and can be cultured for other applications. Conclusion: The integrated system provides solid results for both imaging and quantification of TMPs, making it a competitive method over some traditional techniques for the analysis of TMPs, with potential application as a toolbox in the future.

  14. Fringe image analysis based on the amplitude modulation method.

    Science.gov (United States)

    Gai, Shaoyan; Da, Feipeng

    2010-05-10

A novel phase-analysis method is proposed. To get the fringe order of a fringe image, an amplitude-modulation fringe pattern is employed in combination with the phase-shift method. The primary phase value is obtained by a phase-shift algorithm, and the fringe-order information is encoded in the amplitude-modulation fringe pattern. Different from other methods, the amplitude-modulation fringe identifies the fringe order by the amplitude of the fringe pattern. In an amplitude-modulation fringe pattern, each fringe has its own amplitude; thus, the order information is integrated in one fringe pattern, and the absolute fringe phase can be calculated correctly and quickly from the amplitude-modulation fringe image. The detailed algorithm is given, and the error analysis of this method is also discussed. Experimental results are presented from a full-field shape measurement system in which the data have been processed using the proposed algorithm.
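
    For context, the "primary phase value" from a standard four-step phase-shift algorithm is the arctangent combination below. This is the textbook step the paper builds on; the amplitude-based fringe-order decoding itself is not reproduced here.

```python
# Hedged sketch: wrapped phase from four frames shifted by 0, pi/2, pi, 3*pi/2.
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    # Classic four-step formula; result is wrapped to (-pi, pi].
    return np.arctan2(i4 - i2, i1 - i3)
```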

  15. High-speed MRF-based segmentation algorithm using pixonal images

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Hassanpour, H.; Naimi, H. M.

    2013-01-01

Segmentation is one of the most complicated procedures in image processing and plays an important role in image analysis. In this paper, an improved pixon-based method for image segmentation is proposed. In the proposed algorithm, complex partial differential equations (PDEs) are used as a kernel function to produce the pixonal image. Using this kernel function reduces image noise and prevents over-segmentation when the pixon-based method is used. Utilising the PDE-based method leads to the elimination of some unnecessary details and results in fewer pixons, faster performance and more robustness against unwanted environmental noise. As the next step, the appropriate pixons are extracted and, eventually, the image is segmented with the use of a Markov random field. The experimental results indicate that the proposed pixon-based approach has a reduced computational load...

  16. Mapping urban impervious surface using object-based image analysis with WorldView-3 satellite imagery

    Science.gov (United States)

    Iabchoon, Sanwit; Wongsai, Sangdao; Chankon, Kanoksuk

    2017-10-01

Land use and land cover (LULC) data are important for monitoring and assessing environmental change. LULC classification using satellite images is a method widely used on global and local scales. In particular, urban areas that contain various LULC types are important components of the urban landscape and ecosystem. This study aims to classify urban LULC using WorldView-3 (WV-3) very high spatial resolution satellite imagery and the object-based image analysis method. A decision rule set was applied to classify the WV-3 images of Kathu subdistrict, Phuket province, Thailand. The main steps were as follows: (1) the image was ortho-rectified with ground control points and a digital elevation model; (2) multiscale image segmentation was applied to move from the pixel level to the image object level; (3) a decision rule set for LULC classification was developed using spectral bands, spectral indices, and spatial and contextual information; and (4) accuracy was assessed using testing data sampled by statistical random sampling. The results show that seven LULC classes (water, vegetation, open space, road, residential, building, and bare soil) were successfully classified, with an overall classification accuracy of 94.14% and a kappa coefficient of 92.91%.

  17. Leveraging Metadata to Create Interactive Images... Today!

    Science.gov (United States)

    Hurt, Robert L.; Squires, G. K.; Llamas, J.; Rosenthal, C.; Brinkworth, C.; Fay, J.

    2011-01-01

    The image gallery for NASA's Spitzer Space Telescope has been newly rebuilt to fully support the Astronomy Visualization Metadata (AVM) standard to create a new user experience both on the website and in other applications. We encapsulate all the key descriptive information for a public image, including color representations and astronomical and sky coordinates and make it accessible in a user-friendly form on the website, but also embed the same metadata within the image files themselves. Thus, images downloaded from the site will carry with them all their descriptive information. Real-world benefits include display of general metadata when such images are imported into image editing software (e.g. Photoshop) or image catalog software (e.g. iPhoto). More advanced support in Microsoft's WorldWide Telescope can open a tagged image after it has been downloaded and display it in its correct sky position, allowing comparison with observations from other observatories. An increasing number of software developers are implementing AVM support in applications and an online image archive for tagged images is under development at the Spitzer Science Center. Tagging images following the AVM offers ever-increasing benefits to public-friendly imagery in all its standard forms (JPEG, TIFF, PNG). The AVM standard is one part of the Virtual Astronomy Multimedia Project (VAMP); http://www.communicatingastronomy.org

  18. Computerised image analysis of biocrystallograms originating from agricultural products

    DEFF Research Database (Denmark)

    Andersen, Jens-Otto; Henriksen, Christian B.; Laursen, J.

    1999-01-01

Procedures are presented for computerised image analysis of biocrystallogram images, originating from biocrystallization investigations of agricultural products. The biocrystallization method is based on the crystallographic phenomenon that when biological substances, such as plant extracts, are added... on up to eight parameters indicated strong relationships, with R2 up to 0.98. It is concluded that the procedures were able to discriminate the seven groups of images and are applicable for biocrystallization investigations of agricultural products. Perspectives for the application of image analysis...

  19. LSB Based Quantum Image Steganography Algorithm

    Science.gov (United States)

    Jiang, Nan; Zhao, Na; Wang, Luo

    2016-01-01

Quantum steganography is the technique of hiding a secret message in quantum covers such as quantum images. In this paper, two blind LSB steganography algorithms in the form of quantum circuits are proposed based on the novel enhanced quantum representation (NEQR) of quantum images. One algorithm is plain LSB, which uses the message bits to substitute for the pixels' LSBs directly. The other is block LSB, which embeds a message bit into a number of pixels that belong to one image block. The extracting circuits can regain the secret message using only the stego cover. Analysis and simulation-based experimental results demonstrate that the invisibility is good, and that the balance between capacity and robustness can be adjusted according to the needs of applications.
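
    Although the record's algorithms are quantum circuits over NEQR images, the underlying plain-LSB idea is the classical one sketched below. This array-based version is an analogy for intuition, not the proposed quantum implementation.

```python
# Hedged sketch: classical plain-LSB embedding and extraction on a uint8 image.
import numpy as np

def embed_lsb(cover, bits):
    stego = cover.copy()
    flat = stego.ravel()
    payload = np.asarray(bits, dtype=np.uint8)
    flat[:payload.size] = (flat[:payload.size] & 0xFE) | payload   # replace LSBs
    return stego

def extract_lsb(stego, n_bits):
    return (stego.ravel()[:n_bits] & 1).tolist()

cover = (np.random.rand(8, 8) * 255).astype(np.uint8)
assert extract_lsb(embed_lsb(cover, [1, 0, 1, 1]), 4) == [1, 0, 1, 1]
```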

  20. An automated classification system for the differentiation of obstructive lung diseases based on the textural analysis of HRCT images

    International Nuclear Information System (INIS)

    Park, Seong Hoon; Seo, Joon Beom; Kim, Nam Kug; Lee, Young Kyung; Kim, Song Soo; Chae, Eun Jin; Lee, June Goo

    2007-01-01

To develop an automated classification system for the differentiation of obstructive lung diseases based on textural analysis of HRCT images, and to evaluate the accuracy and usefulness of the system. For textural analysis, histogram features, gradient features, run-length encoding, and a co-occurrence matrix were employed. A Bayesian classifier was used for automated classification. The images (n = 256) were selected from HRCT images obtained from 17 healthy subjects (n = 67), 26 patients with bronchiolitis obliterans (n = 70), 28 patients with mild centrilobular emphysema (n = 65), and 21 patients with panlobular emphysema or severe centrilobular emphysema (n = 63). A five-fold cross-validation method was used to assess the performance of the system. Class-specific sensitivities were analyzed and the overall accuracy of the system was assessed with kappa statistics. The sensitivity of the system for each class was as follows: normal lung 84.9%, bronchiolitis obliterans 83.8%, mild centrilobular emphysema 77.0%, and panlobular emphysema or severe centrilobular emphysema 95.8%. The overall performance in differentiating each disease from the normal lung was satisfactory, with a kappa value of 0.779. An automated classification system for the differentiation of obstructive lung diseases based on textural analysis of HRCT images was developed. The proposed system discriminates well between the various obstructive lung diseases and the normal lung.
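
    The pipeline's final stage, a Bayesian classifier over texture feature vectors with five-fold cross-validation, can be sketched with scikit-learn as a stand-in for the authors' implementation; the feature matrix below is a random placeholder.

```python
# Hedged sketch: Gaussian naive Bayes over texture features with 5-fold CV.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

X = np.random.rand(256, 20)              # placeholder histogram/gradient/GLCM features
y = np.random.randint(0, 4, size=256)    # four classes, incl. normal lung

scores = cross_val_score(GaussianNB(), X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```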

  1. Hierarchical Factoring Based On Image Analysis And Orthoblique Rotations.

    Science.gov (United States)

    Stankov, L

    1979-07-01

    The procedure for hierarchical factoring suggested by Schmid and Leiman (1957) is applied within the framework of image analysis and orthoblique rotational procedures. It is shown that this approach necessarily leads to correlated higher order factors. Also, one can obtain a smaller number of factors than produced by typical hierarchical procedures.

  2. Methods in quantitative image analysis.

    Science.gov (United States)

    Oberholzer, M; Ostreicher, M; Christen, H; Brühlmann, M

    1996-05-01

The main steps of image analysis are image capturing, image storage (compression), correcting imaging defects (e.g. non-uniform illumination, electronic noise, glare effect), image enhancement, segmentation of objects in the image, and image measurements. Digitisation is performed by a camera. The most modern types include a frame-grabber, converting the analog signal into digital (numerical) information. The numerical information consists of the grey values describing the brightness of every point within the image, named a pixel. The information is stored in bits; eight bits are summarised in one byte. Therefore, grey values can take one of 256 (2^8) levels. The human eye seems to be quite content with a display of 6-bit images (corresponding to 64 different grey values). In a digitised image, the pixel grey values can vary within regions that are uniform in the original scene: the image is noisy. The noise is mainly manifested in the background of the image. For optimal discrimination between different objects or features in an image, uniformity of illumination across the whole image is required. These defects can be minimised by shading correction [subtraction of a background (white) image from the original image, pixel per pixel, or division of the original image by the background image]. The brightness of an image represented by its grey values can be analysed for every single pixel or for a group of pixels. The most frequently used pixel-based image descriptors are optical density, integrated optical density, the histogram of the grey values, mean grey value and entropy. The distribution of the grey values within an image is one of the most important characteristics of the image. However, the histogram gives no information about the texture of the image. The simplest way to improve the contrast of an image is to expand the brightness scale by spreading the histogram out to the full available range. Rules for transforming the grey value
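
    The shading correction described above is a one-liner in practice. A minimal sketch of the division variant follows, with the subtraction variant noted in a comment; the rescaling choice is an illustrative assumption.

```python
# Hedged sketch: shading (flat-field) correction with a background reference.
import numpy as np

def shading_correct(image, background):
    img = image.astype(float)
    bg = np.clip(background.astype(float), 1e-6, None)   # avoid division by zero
    corrected = img / bg                                  # division variant
    # corrected = img - bg                                # subtraction variant
    corrected *= 255.0 / corrected.max()                  # rescale to 8-bit range
    return corrected.astype(np.uint8)
```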

  3. Spinal imaging and image analysis

    CERN Document Server

    Yao, Jianhua

    2015-01-01

    This book is instrumental in building a bridge between scientists and clinicians in the field of spine imaging by introducing state-of-the-art computational methods in the context of clinical applications. Spine imaging via computed tomography, magnetic resonance imaging, and other radiologic imaging modalities is essential for noninvasively visualizing and assessing spinal pathology. Computational methods support and enhance the physician’s ability to utilize these imaging techniques for diagnosis, non-invasive treatment, and intervention in clinical practice. Chapters cover a broad range of topics encompassing radiological imaging modalities, clinical imaging applications for common spine diseases, image processing, computer-aided diagnosis, quantitative analysis, data reconstruction and visualization, statistical modeling, image-guided spine intervention, and robotic surgery. This volume serves a broad audience, as contributions were written by both clinicians and researchers, which reflects the inte...

  4. Optimization of an Image-Based Talking Head System

    Directory of Open Access Journals (Sweden)

    Kang Liu

    2009-01-01

    This paper presents an image-based talking head system, which includes two parts: analysis and synthesis. The audiovisual analysis part creates a face model of a recorded human subject, which is composed of a personalized 3D mask as well as a large database of mouth images and their related information. The synthesis part generates natural-looking facial animations from phonetic transcripts of text. A critical issue of the synthesis is the unit selection, which selects and concatenates appropriate mouth images from the database such that they match the spoken words of the talking head. Selection is based on lip synchronization and the similarity of consecutive images. The unit selection is refined in this paper, and Pareto optimization is used to train the unit selection. Experimental results of subjective tests show that most people cannot distinguish our facial animations from real videos.

  5. Extraction of Terraces on the Loess Plateau from High-Resolution DEMs and Imagery Utilizing Object-Based Image Analysis

    Directory of Open Access Journals (Sweden)

    Hanqing Zhao

    2017-05-01

    Terraces are typical artificial landforms on the Loess Plateau, with ecological functions in water and soil conservation, agricultural production, and biodiversity. Recording the spatial distribution of terraces is the basis of monitoring their extent and understanding their ecological effects. The current terrace extraction method mainly relies on high-resolution imagery, but its accuracy is limited because vegetation coverage distorts the features of terraces in imagery. High-resolution topographic data reflecting the morphology of true terrace surfaces are needed. Terrace extraction on the Loess Plateau is challenging because of the complex terrain and diverse vegetation after the implementation of “vegetation recovery”. This study presents an automatic method of extracting terraces based on 1 m resolution digital elevation models (DEMs), with 0.3 m resolution WorldView-3 imagery as auxiliary information, used for object-based image analysis (OBIA). A multi-resolution segmentation method was used, where slope, positive and negative terrain index (PN), accumulative curvature slope (AC), and slope of slope (SOS) were determined as input layers for image segmentation by correlation analysis and the Sheffield entropy method. The main classification features based on DEMs were chosen from the terrain features derived from terrain factors and from texture features obtained by grey-level co-occurrence matrix (GLCM) analysis; these features were then selected by importance analysis with classification and regression tree (CART) analysis. Extraction rules based on DEMs were generated from the classification features, with a total classification accuracy of 89.96%. The red band and near-infrared band of the images were used to exclude construction land, which is easily confused with small-size terraces. As a result, the total classification accuracy was increased to 94%. The proposed method ensures comprehensive consideration of terrain, texture, shape, and

  6. IMAGE DESCRIPTIONS FOR SKETCH BASED IMAGE RETRIEVAL

    OpenAIRE

    SAAVEDRA RONDO, JOSE MANUEL

    2008-01-01

    Due to the massive use of the Internet together with the proliferation of media devices, content-based image retrieval has become an active discipline in computer science. A common content-based image retrieval approach requires that the user give a regular image (e.g., a photo) as a query. However, having a regular image as the query may be a serious problem. Indeed, people commonly use an image retrieval system precisely because they do not have the desired image. An easy alternative way t...

  7. Cardiovascular imaging environment: will the future be cloud-based?

    Science.gov (United States)

    Kawel-Boehm, Nadine; Bluemke, David A

    2017-07-01

    In cardiovascular CT and MR imaging, large datasets have to be stored, post-processed, analyzed and distributed. Beyond basic assessment of volume and function in cardiac magnetic resonance imaging, for example, more sophisticated quantitative analysis is requested, requiring specific software. Many institutions cannot afford various types of software or provide the expertise to perform sophisticated analysis. Areas covered: Various cloud services exist related to data storage and analysis specifically for cardiovascular CT and MR imaging. Instead of on-site data storage, cloud providers offer flexible storage services on a pay-per-use basis. To avoid the purchase and maintenance of specialized software for cardiovascular image analysis, e.g. to assess myocardial iron overload, MR 4D flow and fractional flow reserve, evaluation can be performed with cloud-based software by the consumer, or the complete analysis is performed by the cloud provider. However, challenges to widespread implementation of cloud services include regulatory issues regarding patient privacy and data security. Expert commentary: If patient privacy and data security are guaranteed, cloud imaging is a valuable option to cope with the storage of large image datasets and to offer sophisticated cardiovascular image analysis for institutions of all sizes.

  8. Real-time Image Processing for Microscopy-based Label-free Imaging Flow Cytometry in a Microfluidic Chip.

    Science.gov (United States)

    Heo, Young Jin; Lee, Donghyeon; Kang, Junsu; Lee, Keondo; Chung, Wan Kyun

    2017-09-14

    Imaging flow cytometry (IFC) is an emerging technology that acquires single-cell images at high throughput for analysis of a cell population. The rich information that comes from the high sensitivity and spatial resolution of a single-cell microscopic image is beneficial for single-cell analysis in various biological applications. In this paper, we present a fast image-processing pipeline (R-MOD: Real-time Moving Object Detector) based on deep learning for high-throughput microscopy-based label-free IFC in a microfluidic chip. The R-MOD pipeline acquires all single-cell images of cells in flow, and identifies the acquired images in real time with minimal hardware, consisting of a microscope and a high-speed camera. Experiments show that R-MOD is fast and accurate (500 fps and 93.3% mAP), and it is expected to be used as a powerful tool for biomedical and clinical applications.

  9. Class Energy Image Analysis for Video Sensor-Based Gait Recognition: A Review

    Directory of Open Access Journals (Sweden)

    Zhuowen Lv

    2015-01-01

    Gait is a unique biometric feature perceptible at larger distances, and the gait representation approach plays a key role in a video sensor-based gait recognition system. The Class Energy Image is one of the most important appearance-based gait representation methods and has received considerable attention. In this paper, we review the expressions and meanings of various Class Energy Image approaches and analyze the information contained in Class Energy Images. Furthermore, the effectiveness and robustness of these approaches are compared on benchmark gait databases. We outline the research challenges and provide promising future directions for the field. To the best of our knowledge, this is the first review that focuses on the Class Energy Image. It can provide a useful reference in the literature on video sensor-based gait representation approaches.
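
    For concreteness, the simplest member of this family, the Gait Energy Image (GEI), is just the pixel-wise average of aligned binary silhouettes over one gait cycle. A minimal sketch (my example; `silhouettes` is a hypothetical, already-aligned array):

    ```python
    import numpy as np

    def gait_energy_image(silhouettes):
        """GEI: mean of size-normalised, centred 0/1 silhouettes of shape (T, H, W)."""
        return silhouettes.astype(float).mean(axis=0)    # grey values in [0, 1]
    ```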

  10. Method for evaluation of human induced pluripotent stem cell quality using image analysis based on the biological morphology of cells.

    Science.gov (United States)

    Wakui, Takashi; Matsumoto, Tsuyoshi; Matsubara, Kenta; Kawasaki, Tomoyuki; Yamaguchi, Hiroshi; Akutsu, Hidenori

    2017-10-01

    We propose an image analysis method for quality evaluation of human pluripotent stem cells based on biologically interpretable features. It is important to maintain the undifferentiated state of induced pluripotent stem cells (iPSCs) while culturing the cells during propagation. Cell culture experts visually select good quality cells exhibiting the morphological features characteristic of undifferentiated cells. Experts have empirically determined that these features comprise prominent and abundant nucleoli, less intercellular spacing, and fewer differentiating cellular nuclei. We quantified these features based on experts' visual inspection of phase contrast images of iPSCs and found that these features are effective for evaluating iPSC quality. We then developed an iPSC quality evaluation method using an image analysis technique. The method allowed accurate classification, equivalent to visual inspection by experts, of three iPSC cell lines.

  11. Web-based spatial analysis with the ILWIS open source GIS software and satellite images from GEONETCast

    Science.gov (United States)

    Lemmens, R.; Maathuis, B.; Mannaerts, C.; Foerster, T.; Schaeffer, B.; Wytzisk, A.

    2009-12-01

    This paper describes easily accessible, integrated web-based analysis of satellite images with plug-in-based open source software. The paper is targeted at both users and developers of geospatial software. Guided by a use-case scenario, we describe the ILWIS software and its toolbox to access satellite images through the GEONETCast broadcasting system. The last two decades have shown a major shift from stand-alone software systems to networked ones, often client/server applications using distributed geo-(web-)services. This allows organisations to combine their own data with remotely available data and processing functionality without much effort. Key to this integrated spatial data analysis is low-cost access to data from within user-friendly and flexible software. Web-based open source software solutions are often a powerful option for developing countries. The Integrated Land and Water Information System (ILWIS) is a PC-based GIS & Remote Sensing software package, comprising a complete suite of image processing, spatial analysis and digital mapping tools, and was developed as commercial software from the early nineties onwards. Recent project efforts have migrated ILWIS into modular, plug-in-based open source software, and provide web-service support for OGC-based web mapping and processing. The core objective of the ILWIS Open source project is to provide a maintainable framework for researchers and software developers to implement training components, scientific toolboxes and (web-)services. The latest plug-ins have been developed for multi-criteria decision making, water resources analysis and spatial statistics analysis. The development of this framework has been carried out since 2007 in the context of 52°North, an open initiative that advances the development of cutting-edge open source geospatial software using the GPL license. GEONETCast, as part of the emerging Global Earth Observation System of Systems (GEOSS), puts essential environmental data at the

  12. A voxel-based morphometry and diffusion tensor imaging analysis of asymptomatic Parkinson's disease-related G2019S LRRK2 mutation carriers.

    Science.gov (United States)

    Thaler, Avner; Artzi, Moran; Mirelman, Anat; Jacob, Yael; Helmich, Rick C; van Nuenen, Bart F L; Gurevich, Tanya; Orr-Urtreger, Avi; Marder, Karen; Bressman, Susan; Bloem, Bastiaan R; Hendler, Talma; Giladi, Nir; Ben Bashat, Dafna

    2014-05-01

    Patients with Parkinson's disease have reduced gray matter volume and fractional anisotropy in both cortical and sub-cortical structures, yet changes in the pre-motor phase of the disease are unknown. A comprehensive imaging study using voxel-based morphometry and diffusion tensor imaging tract-based spatial statistics analysis was performed on 64 Ashkenazi Jewish asymptomatic first degree relatives of patients with Parkinson's disease (30 mutation carriers), who carry the G2019S mutation in the leucine-rich repeat kinase 2 (LRRK2) gene. No between-group differences in gray matter volume could be noted in either whole-brain or volume-of-interest analysis. Diffusion tensor imaging analysis did not identify group differences in white matter areas, and volume-of-interest analysis identified no differences in diffusivity parameters in Parkinson's disease-related structures. G2019S carriers do not manifest changes in gray matter volume or diffusivity parameters in Parkinson's disease-related structures prior to the appearance of motor symptoms. © 2014 International Parkinson and Movement Disorder Society.

  13. DTI analysis methods : Voxel-based analysis

    NARCIS (Netherlands)

    Van Hecke, Wim; Leemans, Alexander; Emsell, Louise

    2016-01-01

    Voxel-based analysis (VBA) of diffusion tensor imaging (DTI) data permits the investigation of voxel-wise differences or changes in DTI metrics in every voxel of a brain dataset. It is applied primarily in the exploratory analysis of hypothesized group-level alterations in DTI parameters, as it does

  14. Standardization of Image Quality Analysis – ISO 19264

    DEFF Research Database (Denmark)

    Wüller, Dietmar; Kejser, Ulla Bøgvad

    2016-01-01

    There are a variety of image quality analysis tools available for the archiving world, which are based on different test charts and analysis algorithms. ISO formed a working group in 2012 to harmonize these approaches and create a standard way of analyzing the image quality of archiving systems. This has resulted in three documents that have been or are going to be published soon. ISO 19262 defines the terms used in the area of image capture to unify the language. ISO 19263 describes the workflow issues and provides detailed information on how the measurements are done. Last but not least, ISO 19264 describes the measurements in detail and provides aims and tolerance levels for the different aspects. This paper will present the new ISO 19264 technical specification to analyze image quality based on a single capture of a multi-pattern test chart, and discuss the reasoning behind its...

  15. Secure thin client architecture for DICOM image analysis

    Science.gov (United States)

    Mogatala, Harsha V. R.; Gallet, Jacqueline

    2005-04-01

    This paper presents a concept of a Secure Thin Client (STC) architecture for Digital Imaging and Communications in Medicine (DICOM) image analysis over the Internet. The STC architecture provides in-depth analysis and design of customized reports for DICOM images using drag-and-drop and data warehouse technology. Using a personal computer and a common set of browsing software, STC can be used for analyzing and reporting detailed patient information, type of examinations, date, Computed Tomography (CT) dose index, and other relevant information stored within the image header files as well as in the hospital databases. The STC architecture is a three-tier architecture. The first tier consists of a drag-and-drop web-based interface and web server, which provides customized analysis and reporting ability to the users. The second tier consists of an online analytical processing (OLAP) server and database system, which serves fast, real-time, aggregated multi-dimensional data using OLAP technology. The third tier consists of a smart-algorithm-based software program which extracts DICOM tags from CT images in this particular application, irrespective of CT vendor, and transfers these tags into a secure database system. This architecture provides the Winnipeg Regional Health Authority (WRHA) with quality indicators for CT examinations in the hospitals. It also provides health care professionals with an analytical tool to optimize radiation dose and image quality parameters. The information is provided to the user by way of a secure socket layer (SSL) and role-based security criteria over the Internet. Although this particular application has been developed for the WRHA, this paper also discusses the effort to extend the architecture to other hospitals in the region. Any DICOM tag from any imaging modality could be tracked with this software.
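
    The record predates today's tooling, but the third-tier tag extraction step can be sketched with pydicom (my illustration only; the file name is hypothetical, and in the STC design the resulting values would be written to the secure database):

    ```python
    import pydicom

    # Read header only; pixel data are not needed for tag extraction.
    ds = pydicom.dcmread("ct_slice.dcm", stop_before_pixels=True)
    record = {
        "patient_id": ds.get("PatientID"),
        "modality": ds.get("Modality"),
        "study_date": ds.get("StudyDate"),
        "ctdi_vol": ds.get("CTDIvol"),  # CT dose index, vendor-independent keyword
    }
    print(record)
    ```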

  16. KALMAN FILTER BASED FEATURE ANALYSIS FOR TRACKING PEOPLE FROM AIRBORNE IMAGES

    Directory of Open Access Journals (Sweden)

    B. Sirmacek

    2012-09-01

    Recently, analysis of mass events in real time using computer vision techniques has become a very important research field. In particular, understanding the motion of people can help to prevent unpleasant conditions. Understanding the behavioral dynamics of people can also help to estimate future states of underground passages, public entrances such as shopping centers, or streets. In order to bring an automated solution to this problem, we propose a novel approach using airborne image sequences. Although airborne image resolutions are not sufficient to see each person in detail, we can still notice a change of color components in the place where a person exists. Therefore, we propose a color-feature-detection-based probabilistic framework in order to detect people automatically. Extracted local features behave as observations of the probability density function (pdf) of the people locations to be estimated. Using an adaptive kernel density estimation method, we estimate the corresponding pdf. First, we use the estimated pdf to detect boundaries of dense crowds. After that, using background information of dense crowds and previously extracted local features, we detect other people in non-crowd regions automatically for each image in the sequence. We benefit from Kalman filtering to track the motion of detected people. To test our algorithm, we use a stadium entrance image data set taken from an airborne camera system. Our experimental results indicate possible usage of the algorithm in real-life mass events. We believe that the proposed approach can also provide crucial information to police departments and crisis management teams to achieve more detailed observations of people in large open-area events and to prevent possible accidents or unpleasant conditions.
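
    A minimal constant-velocity Kalman filter of the kind used for such tracking is sketched below (my example, not the authors' code); the detections and noise settings are hypothetical image-plane values.

    ```python
    import numpy as np

    dt = 1.0                                            # one frame between images
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                  [0, 0, 1, 0], [0, 0, 0, 1]], float)   # constant-velocity model
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # we observe position only
    Q = np.eye(4) * 0.01                                # process noise (assumed)
    R = np.eye(2) * 1.0                                 # measurement noise (assumed)

    x, P = np.zeros(4), np.eye(4) * 10.0
    for z in [(10.0, 5.0), (11.2, 5.4), (12.1, 5.9)]:   # hypothetical detections
        x, P = F @ x, F @ P @ F.T + Q                   # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
        x = x + K @ (np.asarray(z) - H @ x)             # update with detection
        P = (np.eye(4) - K @ H) @ P
        print(x[:2], x[2:])                             # position, velocity estimates
    ```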

  17. Histological image classification using biologically interpretable shape-based features

    International Nuclear Information System (INIS)

    Kothari, Sonal; Phan, John H; Young, Andrew N; Wang, May D

    2013-01-01

    Automatic cancer diagnostic systems based on histological image classification are important for improving therapeutic decisions. Previous studies propose textural and morphological features for such systems. These features capture patterns in histological images that are useful for both cancer grading and subtyping. However, because many of these features lack a clear biological interpretation, pathologists may be reluctant to adopt them for clinical diagnosis. We examine the utility of biologically interpretable shape-based features for the classification of histological renal tumor images. Using Fourier shape descriptors, we extract shape-based features that capture the distribution of stain-enhanced cellular and tissue structures in each image and evaluate these features using a multi-class prediction model. We compare the predictive performance of the shape-based diagnostic model to that of traditional models, i.e., those using textural, morphological and topological features. The shape-based model, with an average accuracy of 77%, outperforms or complements the traditional models. We identify the most informative shapes for each renal tumor subtype from the top-selected features. Results suggest that these shapes are not only accurate diagnostic features, but also correlate with known biological characteristics of renal tumors. Shape-based analysis of histological renal tumor images accurately classifies disease subtypes and reveals biologically insightful discriminatory features. This method for shape-based analysis can be extended to other histological datasets to aid pathologists in diagnostic and therapeutic decisions.
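
    As an illustration of the general technique (not the authors' exact features), Fourier shape descriptors can be computed from a closed boundary as follows; `contour` is a hypothetical (N, 2) array of boundary points from a segmented structure.

    ```python
    import numpy as np

    def fourier_descriptors(contour, k=16):
        z = contour[:, 0] + 1j * contour[:, 1]   # boundary as a complex signal
        coeffs = np.fft.fft(z - z.mean())        # centring removes translation
        mags = np.abs(coeffs)                    # magnitudes drop rotation/start point
        return mags[1:k + 1] / mags[1]           # dividing by |c1| removes scale
    ```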

  18. Infrared and visible image fusion based on robust principal component analysis and compressed sensing

    Science.gov (United States)

    Li, Jun; Song, Minghui; Peng, Yuanxi

    2018-03-01

    Current infrared and visible image fusion methods do not achieve adequate information extraction, i.e., they cannot extract the target information from infrared images while retaining the background information from visible images. Moreover, most of them have high complexity and are time-consuming. This paper proposes an efficient image fusion framework for infrared and visible images on the basis of robust principal component analysis (RPCA) and compressed sensing (CS). The novel framework consists of three phases. First, RPCA decomposition is applied to the infrared and visible images to obtain their sparse and low-rank components, which represent the salient features and background information of the images, respectively. Second, the sparse and low-rank coefficients are fused by different strategies. On the one hand, measurements of the sparse coefficients are obtained with a random Gaussian matrix and fused by a standard deviation (SD)-based fusion rule; the fused sparse component is then obtained by reconstructing the fused measurement using the fast continuous linearized augmented Lagrangian algorithm (FCLALM). On the other hand, the low-rank coefficients are fused using the max-absolute rule. Finally, the fused image is formed by superposing the fused sparse and low-rank components. For comparison, several popular fusion algorithms are tested experimentally. By comparing the fused results subjectively and objectively, we find that the proposed framework can extract the infrared targets while retaining the background information in the visible images. Thus, it exhibits state-of-the-art performance in terms of both fusion effects and timeliness.

  19. Pixel extraction based integral imaging with controllable viewing direction

    International Nuclear Information System (INIS)

    Ji, Chao-Chao; Deng, Huan; Wang, Qiong-Hua

    2012-01-01

    We propose pixel extraction based integral imaging with a controllable viewing direction. The proposed integral imaging can provide viewers three-dimensional (3D) images in a very small viewing angle. The viewing angle and the viewing direction of the reconstructed 3D images are controlled by the pixels extracted from an elemental image array. Theoretical analysis and a 3D display experiment of the viewing direction controllable integral imaging are carried out. The experimental results verify the correctness of the theory. A 3D display based on the integral imaging can protect the viewer’s privacy and has huge potential for a television to show multiple 3D programs at the same time.

  20. Retinal imaging and image analysis

    NARCIS (Netherlands)

    Abramoff, M.D.; Garvin, Mona K.; Sonka, Milan

    2010-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of

  1. Decomposition of Polarimetric SAR Images Based on Second- and Third-order Statistics Analysis

    Science.gov (United States)

    Kojima, S.; Hensley, S.

    2012-12-01

    There are many papers concerning the decomposition of polarimetric SAR imagery. Most of them are based on the second-order statistics analysis that Freeman and Durden [1] suggested under the reflection symmetry condition, which implies that the co-polarization and cross-polarization correlations are close to zero. Since then, a number of improvements and enhancements have been proposed to better understand the underlying backscattering mechanisms present in polarimetric SAR images. For example, Yamaguchi et al. [2] added the helix component to Freeman's model and developed a four-component scattering model for the non-reflection-symmetry condition. In addition, Arii et al. [3] developed an adaptive model-based decomposition method that could estimate both the mean orientation angle and a degree of randomness for the canopy scattering for each pixel in a SAR image, without the reflection symmetry condition. The purpose of this research is to develop a new decomposition method based on second- and third-order statistics analysis to estimate the surface, dihedral, volume and helix scattering components from polarimetric SAR images without specific assumptions concerning the model for the volume scattering. In addition, we evaluate this method using both simulated and real UAVSAR data and compare it with other methods. We express the volume scattering component using the wire formula and formulate the relationship between the backscattered echo and each component, such as the surface, dihedral, volume and helix, via linearization based on second- and third-order statistics. In the third-order statistics, we calculate the correlation of the correlation coefficients for each polarimetric channel and obtain one new relationship equation to estimate each polarization component, such as HH, VV and VH, for the volume. As a result, the equation for the helix component in this method is the same formula as in Yamaguchi's method. However, the equation for the volume

  2. Gap Acceptance During Lane Changes by Large-Truck Drivers-An Image-Based Analysis.

    Science.gov (United States)

    Nobukawa, Kazutoshi; Bao, Shan; LeBlanc, David J; Zhao, Ding; Peng, Huei; Pan, Christopher S

    2016-03-01

    This paper presents an analysis of rearward gap acceptance characteristics of drivers of large trucks in highway lane change scenarios. The range between the vehicles was inferred from camera images, using the estimated lane width obtained from the lane tracking camera as the reference. Six hundred lane change events were acquired from a large-scale naturalistic driving data set. The kinematic variables from the image-based gap analysis were filtered by weighted linear least squares in order to extrapolate them at the lane change time. In addition, the time-to-collision and required deceleration were computed, and potential safety threshold values are provided. The resulting range and range rate distributions showed directional discrepancies, i.e., in left lane changes, large trucks are often slower than other vehicles in the target lane, whereas they are usually faster in right lane changes. Video observations confirmed that the major motivations for changing lanes differ depending on the direction of the move, i.e., moving to the left (faster) lane occurs due to a slower vehicle ahead or a merging vehicle on the right-hand side, whereas right lane changes are frequently made to return to the original lane after passing.

  3. Construction of finite element model and stress analysis of anterior cruciate ligament tibial insertion.

    Science.gov (United States)

    Dai, Can; Yang, Liu; Guo, Lin; Wang, Fuyou; Gou, Jingyue; Deng, Zhilong

    2015-01-01

    The aim of the present study was to develop a more realistic finite element (FE) model of the human anterior cruciate ligament (ACL) tibial insertion and to analyze the stress distribution in the ACL internal fibers under load. The ACL tibial insertions were processed histologically. With Photoshop software, digital images taken from the histological slides were collaged, contour lines were drawn, and different gray values were filled in based on the structure. The data were exported to Amira software and saved as a ".hmascii" file. This file was imported into HyperMesh software. The solid mesh model generated with HyperMesh was then imported into Abaqus software, where the material properties were introduced, boundary conditions were set, and load was applied to carry out the FE analysis. The stress distribution of the ACL internal fibers was uneven. The lowest stress was observed in the ACL lateral fibers under tensile and shear load. The establishment of the ACL tibial insertion FE model and the mechanical analysis revealed the stress distribution in the ACL internal fibers under load. There was greater load-carrying capacity in the ACL lateral fibers, which could sustain greater tensile and shear forces.

  4. Facilitating in vivo tumor localization by principal component analysis based on dynamic fluorescence molecular imaging

    Science.gov (United States)

    Gao, Yang; Chen, Maomao; Wu, Junyu; Zhou, Yuan; Cai, Chuangjian; Wang, Daliang; Luo, Jianwen

    2017-09-01

    Fluorescence molecular imaging has been used to target tumors in mice with xenograft tumors. However, tumor imaging is largely distorted by the aggregation of fluorescent probes in the liver. A principal component analysis (PCA)-based strategy was applied to the in vivo dynamic fluorescence imaging results of three mice with xenograft tumors to facilitate tumor imaging, with the help of a tumor-specific fluorescent probe. Tumor-relevant features were extracted from the original images by PCA and represented by the principal component (PC) maps. The second principal component (PC2) map represented the tumor-related features, whereas the first principal component (PC1) map retained the original pharmacokinetic profiles, especially of the liver. The distribution patterns of the PC2 maps of the tumor-bearing mice were in good agreement with the actual tumor locations. The tumor-to-liver ratio and contrast-to-noise ratio were significantly higher on the PC2 map than on the original images, thus distinguishing the tumor from the nearby fluorescence noise of the liver. The results suggest that the PC2 map could serve as a bioimaging marker to facilitate in vivo tumor localization, and dynamic fluorescence molecular imaging with PCA could be a valuable tool for future studies of in vivo tumor metabolism and progression.
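
    The PCA step itself is compact: each frame of the dynamic sequence is one sample, and the component loadings, reshaped to image size, give the PC maps (my sketch, not the authors' code; `frames` is a hypothetical (T, H, W) stack).

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    T, H, W = frames.shape
    X = frames.reshape(T, H * W)                # time points x pixels
    pca = PCA(n_components=2).fit(X)            # PCA centres the data itself
    pc1_map = pca.components_[0].reshape(H, W)  # pharmacokinetics-dominated map
    pc2_map = pca.components_[1].reshape(H, W)  # tumor-related map (per abstract)
    ```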

  5. Quantification of sterol-specific response in human macrophages using automated image-based analysis.

    Science.gov (United States)

    Gater, Deborah L; Widatalla, Namareq; Islam, Kinza; AlRaeesi, Maryam; Teo, Jeremy C M; Pearson, Yanthe E

    2017-12-13

    The transformation of normal macrophage cells into lipid-laden foam cells is an important step in the progression of atherosclerosis. One major contributor to foam cell formation in vivo is the intracellular accumulation of cholesterol. Here, we report the effects of various combinations of low-density lipoprotein, sterols, lipids and other factors on human macrophages, using an automated image analysis program to quantitatively compare single cell properties, such as cell size and lipid content, in different conditions. We observed that the addition of cholesterol caused an increase in average cell lipid content across a range of conditions. All of the sterol-lipid mixtures examined were capable of inducing increases in average cell lipid content, with variations in the distribution of the response, in cytotoxicity and in how the sterol-lipid combination interacted with other activating factors. For example, cholesterol and lipopolysaccharide acted synergistically to increase cell lipid content while also increasing cell survival compared with the addition of lipopolysaccharide alone. Additionally, ergosterol and cholesteryl hemisuccinate caused similar increases in lipid content but also exhibited considerably greater cytotoxicity than cholesterol. The use of automated image analysis enables us to assess not only changes in average cell size and content, but also to rapidly and automatically compare population distributions based on simple fluorescence images. Our observations add to increasing understanding of the complex and multifactorial nature of foam-cell formation and provide a novel approach to assessing the heterogeneity of macrophage response to a variety of factors.

  6. Canny edge-based deformable image registration.

    Science.gov (United States)

    Kearney, Vasant; Huang, Yihui; Mao, Weihua; Yuan, Baohong; Tang, Liping

    2017-02-07

    This work focuses on developing a 2D Canny edge-based deformable image registration (Canny DIR) algorithm to register in vivo white light images taken at various time points. The method uses a sparse interpolation deformation algorithm to sparsely register regions of the image with strong edge information. A stability criterion is enforced which removes regions of edges that do not deform in a smooth, uniform manner. Using a synthetic mouse surface ground truth model, the accuracy of the Canny DIR algorithm was evaluated under axial rotation in the presence of deformation. The accuracy was also tested using fluorescent dye injections, which were then used for gamma analysis to establish a second ground truth. The results indicate that the Canny DIR algorithm performs better than rigid registration, intensity-corrected Demons, and distinctive-feature-based methods for all evaluation metrics and ground truth scenarios. In conclusion, Canny DIR performs well in the presence of the unique lighting and shading variations associated with white-light-based image registration.
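
    The Canny step that seeds the sparse registration can be sketched with OpenCV (my example; the thresholds and file name are hypothetical):

    ```python
    import cv2

    img = cv2.imread("white_light_frame.png", cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, threshold1=50, threshold2=150)  # strong-edge mask
    ys, xs = edges.nonzero()        # candidate points for sparse registration
    ```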

  7. Individual Building Extraction from TerraSAR-X Images Based on Ontological Semantic Analysis

    Directory of Open Access Journals (Sweden)

    Rong Gui

    2016-08-01

    Accurate building information plays a crucial role in urban planning, human settlements and environmental management. Synthetic aperture radar (SAR) images, which deliver metric resolution, allow for analyzing and extracting detailed information on urban areas. In this paper, we consider the problem of extracting individual buildings from SAR images based on domain ontology. By analyzing a building scattering model with different orientations and structures, a building ontology model is set up to express the multiple characteristics of individual buildings. Under this semantic expression framework, an object-based SAR image segmentation method is adopted to provide homogeneous image objects, and three categories of image object features are extracted. Semantic rules are implemented by organizing the image object features, and an expression of individual building objects based on an ontological semantic description is formed. Finally, the building primitives are used to detect buildings among the available image objects. Experiments on TerraSAR-X images of Foshan city, China, with a spatial resolution of 1.25 m × 1.25 m, have shown total extraction rates above 84%. The results indicate that the ontological semantic method can accurately extract flat-roof and gable-roof buildings larger than 250 pixels with different orientations.

  8. Comparison of imaging-based gross tumor volume and pathological volume determined by whole-mount serial sections in primary cervical cancer

    Directory of Open Access Journals (Sweden)

    Zhang Y

    2013-07-01

    Ying Zhang,1,* Jing Hu,1,* Jianping Li,1 Ning Wang,1 Weiwei Li,1 Yongchun Zhou,1 Junyue Liu,1 Lichun Wei,1 Mei Shi,1 Shengjun Wang,2 Jing Wang,2 Xia Li,3 Wanling Ma4 1Department of Radiation Oncology, 2Department of Nuclear Medicine, 3Department of Pathology, 4Department of Radiology, Xijing Hospital, Xi'an, People's Republic of China; *These authors contributed equally to this work. Objective: To investigate the accuracy of imaging-based gross tumor volume (GTV) compared with pathological volume in cervical cancer. Methods: Ten patients with International Federation of Gynecology and Obstetrics stage I–II cervical cancer were eligible for investigation and underwent surgery in this study. Magnetic resonance imaging (MRI) and fluorine-18 fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) scans were taken the day before surgery. The GTVs under MRI and 18F-FDG PET/CT (GTV-MRI, GTV-PET, GTV-CT) were calculated automatically by the Eclipse treatment-planning system. Specimens of the excised uterine cervix and cervical cancer were consecutively sliced and divided into whole-mount serial sections. The tumor border of hematoxylin and eosin-stained sections was outlined under a microscope by an experienced pathologist. The GTV from the pathological images (GTV-path) was calculated with Adobe Photoshop. Results: The GTVs (average ± standard deviation) delineated and calculated under CT, MRI, PET, and histopathological sections were 19.41 ± 11.96 cm3, 12.66 ± 10.53 cm3, 11.07 ± 9.44 cm3, and 10.79 ± 8.71 cm3, respectively. The volume of GTV-CT or GTV-MRI was larger than GTV-path, and the difference was statistically significant (P < 0.05). Spearman correlation analysis showed that GTV-CT, GTV-MRI, and GTV-PET were significantly correlated with GTV-path (P < 0.01). There was no significant difference in the lesion coverage factor among the three modalities. Conclusion: The present study showed that GTV defined under 40% of maximum standardized

  9. A fast global fitting algorithm for fluorescence lifetime imaging microscopy based on image segmentation.

    Science.gov (United States)

    Pelet, S; Previte, M J R; Laiho, L H; So, P T C

    2004-10-01

    Global fitting algorithms have been shown to effectively improve the accuracy and precision of the analysis of fluorescence lifetime imaging microscopy data. Global analysis performs better than unconstrained data fitting when prior information exists, such as the spatial invariance of the lifetimes of individual fluorescent species. The highly coupled nature of global analysis often results in significantly slower convergence of the data fitting algorithm compared with unconstrained analysis. Convergence speed can be greatly accelerated by providing appropriate initial guesses. Realizing that image morphology often correlates with fluorophore distribution, a global fitting algorithm has been developed that assigns initial guesses throughout an image based on a segmentation analysis. This algorithm was tested on both simulated data sets and time-domain lifetime measurements. We have successfully measured fluorophore distribution in fibroblasts stained with Hoechst and calcein. The method further allows second harmonic generation from collagen and elastin autofluorescence to be differentiated in fluorescence lifetime imaging microscopy images of ex vivo human skin. In our experimental measurements, this algorithm increased convergence speed by over two orders of magnitude and achieved significantly better fits. Copyright 2004 Biophysical Society

  10. Comparison of subset-based local and FE-based global digital image correlation: Theoretical error analysis and validation

    KAUST Repository

    Pan, B.

    2016-03-22

    Subset-based local and finite-element-based (FE-based) global digital image correlation (DIC) approaches are the two primary image matching algorithms widely used for full-field displacement mapping. Very recently, the performances of these different DIC approaches have been experimentally investigated using numerical and real-world experimental tests. The results have shown that in typical cases, where the subset (element) size is no less than a few pixels and the local deformation within a subset (element) can be well approximated by the adopted shape functions, the subset-based local DIC outperforms FE-based global DIC approaches because the former provides slightly smaller root-mean-square errors and offers much higher computation efficiency. Here we investigate the theoretical origin and lay a solid theoretical basis for the previous comparison. We assume that systematic errors due to imperfect intensity interpolation and undermatched shape functions are negligibly small, and perform a theoretical analysis of the random errors or standard deviation (SD) errors in the displacements measured by two local DIC approaches (i.e., a subset-based local DIC and an element-based local DIC) and two FE-based global DIC approaches (i.e., Q4-DIC and Q8-DIC). The equations that govern the random errors in the displacements measured by these local and global DIC approaches are theoretically derived. The correctness of the theoretically predicted SD errors is validated through numerical translation tests under various noise levels. We demonstrate that the SD errors induced by the Q4-element-based local DIC, the global Q4-DIC and the global Q8-DIC are 4, 1.8-2.2 and 1.2-1.6 times greater, respectively, than that associated with the subset-based local DIC, which is consistent with our conclusions from previous work. © 2016 Elsevier Ltd. All rights reserved.

  11. Research on Copy-Move Image Forgery Detection Using Features of Discrete Polar Complex Exponential Transform

    Science.gov (United States)

    Gan, Yanfen; Zhong, Junliu

    2015-12-01

    With the aid of sophisticated photo-editing software, such as Photoshop, copy-move image forgery has become widespread and a major concern in the field of information security in modern society. Much work on detecting this kind of forgery has achieved good results, but detection of geometrically transformed copy-move regions remains unsatisfactory. In this paper, a new method based on the Polar Complex Exponential Transform (PCET) is proposed. The method addresses issues in image geometric moments, focusing on constructing rotation-invariant moments and extracting features from them. In order to reduce the rounding errors of the transform from the Polar coordinate system to the Cartesian coordinate system, a new transformation method is also presented and discussed in detail. The new method constructs a 9 × 9 shrunk template to transform the Cartesian coordinate system back to the Polar coordinate system, which reduces transform errors to a much greater degree. Copy-move forgery detection is a difficult procedure, but experiments show that our method is a substantial improvement in detecting and identifying forged images affected by rotation.
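
    For orientation, a deliberately simplified copy-move check based on duplicate-block hashing is sketched below (my example); the PCET moments in the paper add the rotation invariance that this exact-match version lacks. `img` is a hypothetical greyscale NumPy array.

    ```python
    from collections import defaultdict

    def duplicate_blocks(img, b=8):
        """Return corner pairs of bit-identical b x b blocks (half-block stride)."""
        seen, matches = defaultdict(list), []
        for y in range(0, img.shape[0] - b + 1, b // 2):
            for x in range(0, img.shape[1] - b + 1, b // 2):
                key = img[y:y + b, x:x + b].tobytes()   # exact-match signature
                for prev in seen[key]:
                    matches.append((prev, (y, x)))      # candidate copy-move pair
                seen[key].append((y, x))
        return matches
    ```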

  12. Hyperspectral image analysis. A tutorial

    International Nuclear Information System (INIS)

    Amigo, José Manuel; Babamoradi, Hamid; Elcoroaristizabal, Saioa

    2015-01-01

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics like hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing are presented, and some guidelines are given and discussed. Due to the broad character of current applications and the vast number of multivariate methods available, this paper focuses on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology to differentiate between several types of plastics using near-infrared hyperspectral imaging and Partial Least Squares – Discriminant Analysis (PLS-DA). Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case. - Highlights: • Comprehensive tutorial of hyperspectral image analysis. • Hierarchical discrimination of six classes of plastics containing flame retardant. • Step-by-step guidelines to perform class-modeling on hyperspectral images. • Fusion of multivariate data analysis and digital image processing methods. • Promising methodology for real-time detection of plastics containing flame retardant.
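
    A minimal PLS-DA sketch in the spirit of the tutorial (my example, not the authors' code): class labels are one-hot encoded, regressed with PLS, and predictions taken as the class with the largest response. `spectra` (a pixels × wavelengths matrix) and `y` (an integer label vector) are hypothetical.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    classes = np.unique(y)
    Y = (y[:, None] == classes[None, :]).astype(float)       # one-hot class matrix
    pls = PLSRegression(n_components=10).fit(spectra, Y)
    pred = classes[np.argmax(pls.predict(spectra), axis=1)]  # PLS-DA decision
    print((pred == y).mean())  # training accuracy; validate properly in practice
    ```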

  13. Distributed and hierarchical object-based image analysis for damage assessment: a case study of 2008 Wenchuan earthquake, China

    Directory of Open Access Journals (Sweden)

    Jing Sun

    2016-11-01

    Object-based image analysis (OBIA) is an emerging technique for analyzing remote sensing images based on object properties including spectral, geometric, contextual and texture information. To reduce the computational cost of comprehensive OBIA and make it more feasible in disaster response, we developed a distributed and hierarchical OBIA approach for damage assessment. This study demonstrates a complete classification of YingXiu town, heavily devastated by the 2008 Wenchuan earthquake, using QuickBird imagery. Two distinct areas, mountainous and urban, were analyzed separately. The approach does not require substantial processing power or large amounts of available memory because the image of the large disaster-affected area was split into smaller pieces. Two or more computers could be used in parallel to process and analyze these sub-images according to different requirements. The approach is applicable to other cases, and the established rule set can be adopted in similar study areas. More experiments will be carried out in future studies to prove its feasibility.

  14. Monte Carlo simulation of grating-based neutron phase contrast imaging at CPHS

    International Nuclear Information System (INIS)

    Zhang Ran; Chen Zhiqiang; Huang Zhifeng; Xiao Yongshun; Wang Xuewu; Wie Jie; Loong, C.-K.

    2011-01-01

    Since the launch of the Compact Pulsed Hadron Source (CPHS) project of Tsinghua University in 2009, work has begun on the design and engineering of an imaging/radiography instrument for the neutron source provided by CPHS. The instrument will perform basic tasks such as transmission imaging and computerized tomography. Additionally, we include in the design the utilization of coded-aperture and grating-based phase contrast methodology, as well as the options of prompt gamma-ray analysis and neutron-energy-selective imaging. Previously, we had implemented the hardware and data-analysis software for grating-based X-ray phase contrast imaging. Here, we investigate Geant4-based Monte Carlo simulations of neutron refraction phenomena and then model the grating-based neutron phase contrast imaging system according to the classic-optics-based method. The simulated results of retrieving the phase-shift gradient information by a five-step phase-stepping approach indicate the feasibility of grating-based neutron phase contrast imaging as an option for the cold neutron imaging instrument at the CPHS.
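
    The retrieval behind the five-step phase-stepping approach is standard: each pixel's intensity oscillates sinusoidally as the grating steps through one period, and the first Fourier coefficient along the stepping axis yields the phase (my sketch, not the authors' code; `steps` is a hypothetical (5, H, W) stack of stepping images).

    ```python
    import numpy as np

    coeffs = np.fft.fft(steps, axis=0)       # FFT along the stepping axis
    phase = np.angle(coeffs[1])              # differential phase per pixel
    visibility = 2 * np.abs(coeffs[1]) / np.abs(coeffs[0])  # fringe visibility
    ```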

  15. Single Photon Counting Performance and Noise Analysis of CMOS SPAD-Based Image Sensors

    Science.gov (United States)

    Dutton, Neale A. W.; Gyongy, Istvan; Parmesan, Luca; Henderson, Robert K.

    2016-01-01

    SPAD-based solid state CMOS image sensors utilising analogue integrators have attained deep sub-electron read noise (DSERN) permitting single photon counting (SPC) imaging. A new method is proposed to determine the read noise in DSERN image sensors by evaluating the peak separation and width (PSW) of single photon peaks in a photon counting histogram (PCH). The technique is used to identify and analyse cumulative noise in analogue integrating SPC SPAD-based pixels. The DSERN of our SPAD image sensor is exploited to confirm recent multi-photon threshold quanta image sensor (QIS) theory. Finally, various single and multiple photon spatio-temporal oversampling techniques are reviewed. PMID:27447643

  16. GANALYZER: A TOOL FOR AUTOMATIC GALAXY IMAGE ANALYSIS

    International Nuclear Information System (INIS)

    Shamir, Lior

    2011-01-01

    We describe Ganalyzer, a model-based tool that can automatically analyze and classify galaxy images. Ganalyzer works by separating the galaxy pixels from the background pixels, finding the center and radius of the galaxy, generating the radial intensity plot, and then computing the slopes of the peaks detected in the radial intensity plot to measure the spirality of the galaxy and determine its morphological class. Unlike algorithms that are based on machine learning, Ganalyzer is based on measuring the spirality of the galaxy, a task that is difficult to perform manually, and in many cases can provide a more accurate analysis compared to manual observation. Ganalyzer is simple to use, and can be easily embedded into other image analysis applications. Another advantage is its speed, which allows it to analyze ∼10,000,000 galaxy images in five days using a standard modern desktop computer. These capabilities can make Ganalyzer a useful tool in analyzing large data sets of galaxy images collected by autonomous sky surveys such as SDSS, LSST, or DES. The software is available for free download at http://vfacstaff.ltu.edu/lshamir/downloads/ganalyzer, and the data used in the experiment are available at http://vfacstaff.ltu.edu/lshamir/downloads/ganalyzer/GalaxyImages.zip.

  17. Power Transmission Tower Series Extraction in PolSAR Image Based on Time-Frequency Analysis and A-Contrario Theory

    Directory of Open Access Journals (Sweden)

    Dongqing Peng

    2016-11-01

    Based on time-frequency (TF) analysis and a-contrario theory, this paper presents a new approach for the extraction of linearly arranged power transmission tower series in Polarimetric Synthetic Aperture Radar (PolSAR) images. First, the PolSAR multidimensional information is analyzed using a linear TF decomposition approach, and the stationarity of each pixel is assessed by testing the maximum likelihood ratio statistics of the coherency matrix. Then, based on the maximum likelihood log-ratio image, a Cell-Averaging Constant False Alarm Rate (CA-CFAR) detector with a Weibull clutter background and a post-processing operator is used to detect point-like targets in the image. Finally, a search approach based on a-contrario theory is applied to extract the linearly arranged targets from the detected point-like targets. Experimental results on three sets of PolSAR data verify the effectiveness of this approach.
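
    The cell-averaging CFAR principle is easy to sketch in one dimension (my example; the paper's detector is two-dimensional with a Weibull clutter model, which this plain version omits):

    ```python
    import numpy as np

    def ca_cfar(x, guard=2, train=8, scale=3.0):
        """Flag samples exceeding `scale` times the mean of the training cells."""
        hits = []
        for i in range(train + guard, len(x) - train - guard):
            lead = x[i - guard - train:i - guard]          # training cells (left)
            lag = x[i + guard + 1:i + guard + 1 + train]   # training cells (right)
            noise = np.concatenate([lead, lag]).mean()     # clutter power estimate
            if x[i] > scale * noise:                       # adaptive threshold
                hits.append(i)
        return hits
    ```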

  18. Medical image registration for analysis

    International Nuclear Information System (INIS)

    Petrovic, V.

    2006-01-01

    Image registration techniques represent a rich family of image processing and analysis tools that aim to provide spatial correspondences across sets of medical images of similar and disparate anatomies and modalities. Image registration is a fundamental and usually the first step in medical image analysis, and this paper presents a number of advanced techniques as well as demonstrates some of the advanced medical image analysis they make possible. A number of both rigid and non-rigid medical image alignment algorithms, for equivalent and merely consistent anatomical structures respectively, are presented. The algorithms are compared in terms of their practical aims, inputs, computational complexity and level of operator (e.g. diagnostician) interaction. In particular, the focus of the discussion is placed on the applications and practical benefits of medical image registration. Results of medical image registration on a number of different imaging modalities and anatomies are presented, demonstrating the accuracy and robustness of their application. Medical image registration is quickly becoming ubiquitous in medical imaging departments, with the results of such algorithms increasingly used in complex medical image analysis and diagnostics. This paper aims to demonstrate at least part of the reason why

  19. Advances in Reasoning-Based Image Processing Intelligent Systems: Conventional and Intelligent Paradigms

    CERN Document Server

    Nakamatsu, Kazumi

    2012-01-01

    The book puts special stress on contemporary techniques for reasoning-based image processing and analysis: learning-based image representation and advanced video coding; intelligent image processing and analysis in medical vision systems; similarity learning models for image reconstruction; visual perception for mobile robot motion control; simulation of human brain activity in the analysis of video sequences; shape-based invariant feature extraction; essentials of paraconsistent neural networks; and creativity and intelligent representation in computational systems. The book comprises 14 chapters. Each chapter is a small monograph, representing recent investigations of the authors in the area. The topics of the chapters cover wide scientific and application areas and complement each other very well. The chapters’ content is based on fundamental theoretical presentations, followed by experimental results and comparisons with similar techniques. The size of the chapters is well balanced, which permits a thorough ...

  1. General Staining and Segmentation Procedures for High Content Imaging and Analysis.

    Science.gov (United States)

    Chambers, Kevin M; Mandavilli, Bhaskar S; Dolman, Nick J; Janes, Michael S

    2018-01-01

    Automated quantitative fluorescence microscopy, also known as high content imaging (HCI), is a rapidly growing analytical approach in cell biology. Because automated image analysis relies heavily on robust demarcation of cells and subcellular regions, reliable methods for labeling cells are a critical component of the HCI workflow. Labeling of cells for image segmentation is typically performed with fluorescent probes that bind DNA, for nuclear-based cell demarcation, or with probes that react with proteins, for image analysis based on whole-cell staining. These reagents, along with instrument and software settings, play an important role in the successful segmentation of cells in a population for automated and quantitative image analysis. In this chapter, we describe standard procedures for labeling and image segmentation in both live and fixed cell samples. The chapter also provides troubleshooting guidelines for some of the common problems associated with these aspects of HCI.
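
    A common nuclear segmentation recipe of the kind the chapter describes can be sketched with scikit-image (my example, not the authors' protocol): Otsu thresholding of the DNA-stain channel, followed by watershed to split touching nuclei; `dapi` is a hypothetical 2-D nuclear-stain image.

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import threshold_otsu
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    mask = dapi > threshold_otsu(dapi)             # nuclear-based demarcation
    dist = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(dist, labels=mask.astype(int), min_distance=5)
    markers = np.zeros(dist.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = watershed(-dist, markers, mask=mask)  # one label per nucleus
    ```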

  2. Reducing the negative effects of media exposure on body image: Testing the effectiveness of subvertising and disclaimer labels.

    Science.gov (United States)

    Frederick, David A; Sandhu, Gaganjyot; Scott, Terri; Akbari, Yasmin

    2016-06-01

    Body image activists have proposed adding disclaimer labels to digitally altered media as a way to promote positive body image. Another approach advocated by activists is to alter advertisements through subvertising (adding social commentary to the image to undermine the message of the advertisement). We examined if body image could be enhanced by attaching Photoshop disclaimers or subvertising to thin-ideal media images of swimsuit models. In Study 1 (N=1268), adult women exposed to disclaimers or subvertising did not report higher body state satisfaction or lower drive for thinness than women exposed to unaltered images. In Study 2 (N=820), adult women who were exposed to disclaimers or subvertising did not report higher state body satisfaction or lower state social appearance comparisons than women exposed to unaltered images or to no images. These results raise questions about the effectiveness of disclaimers and subvertising for promoting body satisfaction. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Agreement between digital image analysis and clinical spectrophotometer in CIEL*C*h° coordinate differences and total color difference (ΔE) measurements of dental ceramic shade tabs.

    Science.gov (United States)

    Farah, Ra'fat I

    2016-01-01

    The objectives of this in vitro study were: 1) to test the agreement between the color coordinate differences and total color difference (ΔL*, ΔC*, Δh°, and ΔE) measurements obtained by digital image analysis (DIA) and by spectrophotometer, and 2) to test the reliability of each method for obtaining color differences. A digital camera was used to record standardized images of each of the 15 shade tabs from the IPS e.max shade guide, placed edge-to-edge in a phantom head with a reference shade tab. The images were analyzed using image-editing software (Adobe Photoshop) to obtain the color differences between the middle area of each test shade tab and the corresponding area of the reference tab. The color differences for the same shade tab areas were also measured using a spectrophotometer. To assess reliability, the measurements for the 15 shade tabs were repeated twice using the two methods. The intraclass correlation coefficient (ICC) and the Dahlberg index were used to calculate agreement and reliability. The total agreement of the two methods for measuring ΔL*, ΔC*, Δh°, and ΔE, according to the ICC, exceeded 0.82. The Dahlberg indices for ΔL* and ΔE were 2.18 and 2.98, respectively. For the reliability calculation, the ICCs for the DIA and spectrophotometer ΔE were 0.91 and 0.94, respectively. High agreement was obtained between the DIA and spectrophotometer results for the ΔL*, ΔC*, Δh°, and ΔE measurements. Further, the reliability of the spectrophotometer measurements was slightly higher than that of all DIA measurements.
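    For readers who want to reproduce the colour arithmetic, the sketch below shows one standard way to obtain ΔE from two CIEL*C*h° readings: convert each to CIELAB and apply the CIE76 distance. This is a generic illustration of the formula, not the study's actual Photoshop-based pipeline, and the sample values are invented.

```python
import math

def lch_to_lab(L, C, h_deg):
    """Convert CIE L*C*h (hue in degrees) to CIELAB L*, a*, b*."""
    h = math.radians(h_deg)
    return L, C * math.cos(h), C * math.sin(h)

def delta_e(lch1, lch2):
    """CIE76 total colour difference between two L*C*h readings."""
    L1, a1, b1 = lch_to_lab(*lch1)
    L2, a2, b2 = lch_to_lab(*lch2)
    return math.sqrt((L1 - L2) ** 2 + (a1 - a2) ** 2 + (b1 - b2) ** 2)

# Invented example: test shade tab vs. reference tab
print(delta_e((72.3, 18.5, 85.0), (70.1, 20.2, 82.4)))
```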

  4. The Digital Image Processing And Quantitative Analysis In Microscopic Image Characterization

    International Nuclear Information System (INIS)

    Ardisasmita, M. Syamsa

    2000-01-01

    Although many electron microscopes produce digital images, not all of them are equipped with a supporting unit to process and analyse the image data quantitatively. Generally, the analysis of an image has to be made visually and measurements are performed manually. The development of mathematical methods for geometric analysis and pattern recognition allows automatic microscopic image analysis by computer. Image processing programs can be used to analyse periodic image textures and structures through the application of the Fourier transform. With the development of composite materials, Fourier analysis in the frequency domain has become important for measuring crystallographic orientation. Periodic structure analysis and crystal orientation are the key to understanding many material properties, such as mechanical strength, stress, heat conductivity, resistance, capacitance, and other electric and magnetic properties. This paper presents the application of digital image processing to the characterization and analysis of microscopic images.
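    As a concrete illustration of the Fourier-based periodicity analysis described above, the following sketch locates the strongest non-DC peak in the 2-D power spectrum of a grayscale micrograph and converts it to a spatial period and orientation. The input array is a stand-in; this is a minimal sketch, not the paper's own implementation.

```python
import numpy as np

def dominant_period(img):
    """Find the strongest non-DC peak in the 2-D power spectrum and
    return the corresponding spatial period (pixels) and orientation."""
    spec = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(spec) ** 2
    cy, cx = np.array(power.shape) // 2
    power[cy, cx] = 0                        # suppress the DC term
    py, px = np.unravel_index(np.argmax(power), power.shape)
    fy = (py - cy) / img.shape[0]            # cycles per pixel, vertical
    fx = (px - cx) / img.shape[1]            # cycles per pixel, horizontal
    freq = np.hypot(fy, fx)
    period = 1.0 / freq if freq > 0 else np.inf
    angle = np.degrees(np.arctan2(fy, fx))   # orientation of the peak
    return period, angle

img = np.random.rand(256, 256)  # stand-in for a grayscale micrograph
print(dominant_period(img))
```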

  5. Pathological diagnosis of bladder cancer by image analysis of hypericin induced fluorescence cystoscopic images

    Science.gov (United States)

    Kah, James C. Y.; Olivo, Malini C.; Lau, Weber K. O.; Sheppard, Colin J. R.

    2005-08-01

    Photodynamic diagnosis of bladder carcinoma based on hypericin fluorescence cystoscopy has been shown to have a higher degree of sensitivity for the detection of flat bladder carcinoma than white light cystoscopy. This study investigates the potential of the photosensitizer hypericin-induced fluorescence for performing non-invasive optical biopsy, grading bladder cancer in vivo through fluorescence cystoscopic image analysis without surgical resection for tissue biopsy. The correlation between tissue fluorescence and the histopathology of diseased tissue was explored, and a diagnostic algorithm based on fluorescence image analysis was developed to classify bladder cancer without surgical resection for tissue biopsy. Preliminary results suggest a correlation between tissue fluorescence and bladder cancer grade. Combining the red-to-blue and red-to-green intensity ratios into a 2D scatter plot yields an average sensitivity and specificity of around 70% and 85%, respectively, for pathological grading of the three different grades of bladder cancer. Therefore, the diagnostic algorithm based on colorimetric intensity ratio analysis of hypericin fluorescence cystoscopic images developed in this preliminary study shows promising potential to optically diagnose and grade bladder cancer in vivo.
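    The abstract specifies the diagnostic features only as red-to-blue and red-to-green intensity ratios. A minimal sketch of how such ratios might be pulled from an RGB cystoscopic image follows; the region-of-interest mask and the 2-D classification thresholds are left open because the abstract does not state them.

```python
import numpy as np

def intensity_ratios(rgb, mask=None):
    """Mean red-to-blue and red-to-green intensity ratios over a
    region of interest of an RGB image (H x W x 3, float values)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    if mask is None:
        mask = np.ones(r.shape, dtype=bool)
    eps = 1e-6  # avoid division by zero in dark pixels
    rb = np.mean(r[mask] / (b[mask] + eps))
    rg = np.mean(r[mask] / (g[mask] + eps))
    return rb, rg  # one point in the 2-D scatter plot
```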

  6. Land Cover/Land Use Classification and Change Detection Analysis with Astronaut Photography and Geographic Object-Based Image Analysis

    Science.gov (United States)

    Hollier, Andi B.; Jagge, Amy M.; Stefanov, William L.; Vanderbloemen, Lisa A.

    2017-01-01

    For over fifty years, NASA astronauts have taken exceptional photographs of the Earth from the unique vantage point of low Earth orbit (as well as from lunar orbit and the surface of the Moon). The Crew Earth Observations (CEO) Facility is the NASA ISS payload supporting astronaut photography of the Earth's surface and atmosphere. From aurora to mountain ranges, deltas, and cities, there are over two million images of the Earth's surface dating back to the Mercury missions in the early 1960s. The Gateway to Astronaut Photography of Earth website (eol.jsc.nasa.gov) provides a publicly accessible platform to query and download these images at a variety of spatial resolutions and perform scientific research at no cost to the end user. As a demonstration for the science, application, and education user communities, we examine astronaut photography of the Washington D.C. metropolitan area for three time steps between 1998 and 2016, using Geographic Object-Based Image Analysis (GEOBIA) to classify and quantify land cover/land use and provide a template for future change detection studies with astronaut photography.

  7. Some selected quantitative methods of thermal image analysis in Matlab.

    Science.gov (United States)

    Koprowski, Robert

    2016-05-01

    The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images and shows the practical implementation of these image analysis methods in Matlab. It enables fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods, for areas of the skin of a human foot and face. The full source code of the developed application is provided as an attachment. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Bridge Crack Detection Using Multi-Rotary UAV and Object-Based Image Analysis

    Science.gov (United States)

    Rau, J. Y.; Hsiao, K. W.; Jhan, J. P.; Wang, S. H.; Fang, W. C.; Wang, J. L.

    2017-08-01

    Bridges are important infrastructure for human life, so bridge safety monitoring and maintenance is an important issue for the government. Conventionally, bridge inspection is conducted by human in-situ visual examination. This procedure sometimes requires an under-bridge inspection vehicle or climbing under the bridge in person; its cost and risk are therefore high, and it is labor intensive and time consuming. In particular, its documentation procedure is subjective and lacks 3D spatial information. To cope with these challenges, this paper proposes the use of a multi-rotary UAV equipped with a SONY A7r2 high-resolution digital camera, a 50 mm fixed-focal-length lens, and a 135-degree up-down rotating gimbal. The target bridge contains three spans and is in total 60 meters long, 20 meters wide, and 8 meters high above the water level. In the end, about 10,000 images were taken, some of them acquired hand-held from the ground using a pole 2-8 meters long. The images were processed with Agisoft PhotoscanPro to obtain exterior and interior orientation parameters. A local coordinate system was defined using 12 ground control points measured by a total station. After triangulation and camera self-calibration, the RMS error of the control points is less than 3 cm. A 3D CAD model describing the bridge surface geometry was manually measured in PhotoscanPro; it is composed of planar polygons and is used for searching related UAV images. Additionally, a photorealistic 3D model can be produced for 3D visualization. To detect cracks on the bridge surface, we utilize the object-based image analysis (OBIA) technique to segment the image into objects. We then derive several object features, such as density, area/bounding-box ratio, length/width ratio, and length, and set up a classification rule set to distinguish cracks. Further, we apply semi-global matching (SGM) to obtain 3D crack information and, based on the image scale, we ...
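    The object features listed above map naturally onto connected-component statistics. The sketch below is a rough, hypothetical rule set in that spirit using scikit-image: dark objects are labeled, and only long, thin, sparse ones are kept as crack candidates. All threshold values are invented for illustration and are not those of the paper.

```python
import numpy as np
from skimage import measure

def crack_candidates(gray, thresh=0.35):
    """Label dark objects in a grayscale image (values in [0, 1]) and
    keep those with crack-like shape statistics. Thresholds are
    illustrative only."""
    binary = gray < thresh                     # cracks appear dark
    labels = measure.label(binary)
    cracks = []
    for region in measure.regionprops(labels):
        if region.area < 50:                   # too small: treat as noise
            continue
        elongation = region.major_axis_length / max(region.minor_axis_length, 1e-6)
        fill = region.extent                   # area / bounding-box area
        if elongation > 5 and fill < 0.3:      # long, thin, sparse objects
            cracks.append(region.label)
    return labels, cracks
```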

  9. BRIDGE CRACK DETECTION USING MULTI-ROTARY UAV AND OBJECT-BASED IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    J. Y. Rau

    2017-08-01

    Full Text Available Bridges are important infrastructure for human life, so bridge safety monitoring and maintenance is an important issue for the government. Conventionally, bridge inspection is conducted by human in-situ visual examination. This procedure sometimes requires an under-bridge inspection vehicle or climbing under the bridge in person; its cost and risk are therefore high, and it is labor intensive and time consuming. In particular, its documentation procedure is subjective and lacks 3D spatial information. To cope with these challenges, this paper proposes the use of a multi-rotary UAV equipped with a SONY A7r2 high-resolution digital camera, a 50 mm fixed-focal-length lens, and a 135-degree up-down rotating gimbal. The target bridge contains three spans and is in total 60 meters long, 20 meters wide, and 8 meters high above the water level. In the end, about 10,000 images were taken, some of them acquired hand-held from the ground using a pole 2–8 meters long. The images were processed with Agisoft PhotoscanPro to obtain exterior and interior orientation parameters. A local coordinate system was defined using 12 ground control points measured by a total station. After triangulation and camera self-calibration, the RMS error of the control points is less than 3 cm. A 3D CAD model describing the bridge surface geometry was manually measured in PhotoscanPro; it is composed of planar polygons and is used for searching related UAV images. Additionally, a photorealistic 3D model can be produced for 3D visualization. To detect cracks on the bridge surface, we utilize the object-based image analysis (OBIA) technique to segment the image into objects. We then derive several object features, such as density, area/bounding-box ratio, length/width ratio, and length, and set up a classification rule set to distinguish cracks. Further, we apply semi-global matching (SGM) to obtain 3D crack information and, based ...

  10. MO-F-BRA-04: Voxel-Based Statistical Analysis of Deformable Image Registration Error via a Finite Element Method.

    Science.gov (United States)

    Li, S; Lu, M; Kim, J; Glide-Hurst, C; Chetty, I; Zhong, H

    2012-06-01

    Purpose: Clinical implementation of adaptive treatment planning is limited by the lack of quantitative tools to assess deformable image registration errors (R-ERR). The purpose of this study was to develop a method, using finite element modeling (FEM), to estimate registration errors based on the mechanical changes resulting from them. Methods: An experimental platform to quantify the correlation between registration errors and their mechanical consequences was developed as follows: diaphragm deformation was simulated on CT images of patients with lung cancer using FEM. The simulated displacement vector fields (F-DVF) were used to warp each CT image to generate a FEM image. B-spline-based (Elastix) registrations were performed from the reference to the FEM images to generate a registration DVF (R-DVF). The F-DVF was subtracted from the R-DVF, and the magnitude of the difference vector was defined as the registration error, which is a consequence of mechanically unbalanced energy (UE), computed using in-house-developed FEM software. A nonlinear regression model based on imaging voxel data was used, and the analysis considered clustered voxel data within images. Results: The regression analysis showed that UE was significantly correlated with the registration error, the DVF, and the product of registration error and DVF, with R²=0.73 (R=0.854). The association was verified independently using 40 tracked landmarks. A linear function between the means of the UE values and R-DVF·R-ERR was established. The mean registration error (N=8) was 0.9 mm, and 85.4% of voxels fit this model within one standard deviation. Conclusions: An encouraging relationship between UE and registration error has been found. These experimental results suggest the feasibility of UE as a valuable tool for evaluating registration errors, thus supporting 4D and adaptive radiotherapy. The research was supported by NIH/NCI R01CA140341. © 2012 American Association of Physicists in Medicine.

  11. Single-Image Super-Resolution Based on Rational Fractal Interpolation.

    Science.gov (United States)

    Zhang, Yunfeng; Fan, Qinglan; Bao, Fangxun; Liu, Yifang; Zhang, Caiming

    2018-08-01

    This paper presents a novel single-image super-resolution (SR) procedure, which upscales a given low-resolution (LR) input image to a high-resolution image while preserving the textural and structural information. First, we construct a new type of bivariate rational fractal interpolation model and investigate its analytical properties. This model has different forms of expression with various values of the scaling factors and shape parameters; thus, it can be employed to better describe image features than current interpolation schemes. Furthermore, this model combines the advantages of rational interpolation and fractal interpolation, and its effectiveness is validated through theoretical analysis. Second, we develop a single-image SR algorithm based on the proposed model. The LR input image is divided into texture and non-texture regions, and then, the image is interpolated according to the characteristics of the local structure. Specifically, in the texture region, the scaling factor calculation is the critical step. We present a method to accurately calculate scaling factors based on local fractal analysis. Extensive experiments and comparisons with the other state-of-the-art methods show that our algorithm achieves competitive performance, with finer details and sharper edges.

  12. Content-Based Image Retrieval Based on Hadoop

    Directory of Open Access Journals (Sweden)

    DongSheng Yin

    2013-01-01

    Full Text Available Generally, the time complexity of algorithms for content-based image retrieval is extremely high. In order to retrieve images from large-scale databases efficiently, a new retrieval method based on the Hadoop distributed framework is proposed. First, a database of image features is built using the Speeded-Up Robust Features (SURF) algorithm and Locality-Sensitive Hashing (LSH); the search is then performed on the Hadoop platform in a specially designed parallel way. Experimental results show that the method is able to retrieve images by content effectively on large-scale clusters and image sets.
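    The pipeline pairs SURF descriptors with Locality-Sensitive Hashing. The sketch below shows the random-projection flavour of LSH on generic feature vectors, omitting the Hadoop parallelisation entirely; the vector dimension and number of hash bits are illustrative assumptions.

```python
import numpy as np

class RandomProjectionLSH:
    """Hash feature vectors to short binary codes; vectors sharing a
    code land in the same bucket and become search candidates."""
    def __init__(self, dim, n_bits=16, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.normal(size=(n_bits, dim))  # random hyperplanes
        self.buckets = {}

    def key(self, v):
        return ((self.planes @ v) > 0).tobytes()

    def add(self, v, ident):
        self.buckets.setdefault(self.key(v), []).append(ident)

    def query(self, v):
        return self.buckets.get(self.key(v), [])

lsh = RandomProjectionLSH(dim=64)
db = np.random.rand(1000, 64)       # stand-in for SURF descriptors
for i, vec in enumerate(db):
    lsh.add(vec, i)
print(lsh.query(db[42]))            # candidate set containing index 42
```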

  13. Multi-Modality Medical Image Fusion Based on Wavelet Analysis and Quality Evaluation

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Multi-modality medical image fusion has more and more important applications in medical image analysis and understanding. In this paper, we develop and apply a multi-resolution method based on a wavelet pyramid to fuse medical images from different modalities such as PET-MRI and CT-MRI. In particular, we evaluate the fusion results obtained when applying different selection rules and obtain the optimum combination of fusion parameters.
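    One common selection rule for wavelet-pyramid fusion is to average the coarse approximation band and keep the larger-magnitude detail coefficient elsewhere. The sketch below implements that rule with PyWavelets; since the paper evaluates several selection rules, this is one plausible instance rather than the reported optimum. Both inputs must be registered grayscale images of the same shape.

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", level=3):
    """Fuse two registered grayscale images: average the approximation
    band, keep the larger-magnitude detail coefficient elsewhere."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]          # approximation: average
    for da, db_ in zip(ca[1:], cb[1:]):      # details: max-abs rule
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db_)))
    return pywt.waverec2(fused, wavelet)
```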

  14. GEOPOSITIONING PRECISION ANALYSIS OF MULTIPLE IMAGE TRIANGULATION USING LRO NAC LUNAR IMAGES

    Directory of Open Access Journals (Sweden)

    K. Di

    2016-06-01

    Full Text Available This paper presents an empirical analysis of the geopositioning precision of multiple image triangulation using Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) images at the Chang’e-3 (CE-3) landing site. Nine LROC NAC images are selected for comparative analysis of geopositioning precision. Rigorous sensor models of the images are established based on collinearity equations, with interior and exterior orientation elements retrieved from the corresponding SPICE kernels. Rational polynomial coefficients (RPCs) of each image are derived by least-squares fitting using a vast number of virtual control points generated according to the rigorous sensor models. Experiments with different combinations of images are performed for comparison. The results demonstrate that the plane coordinates can achieve a precision of 0.54 m to 2.54 m, with a height precision of 0.71 m to 8.16 m, when only two images are used for three-dimensional triangulation. There is a general trend that the geopositioning precision, especially the height precision, improves as the convergence angle of the two images increases from several degrees to about 50°. However, the image matching precision should also be taken into consideration when choosing image pairs for triangulation. The precisions obtained using all 9 images are 0.60 m, 0.50 m, and 1.23 m in the along-track, cross-track, and height directions, which are better than those of most combinations of two or more images. However, triangulation with fewer, selected images can produce better precision than using all the images.

  15. An analysis of line-drawings based upon automatically inferred grammar and its application to chest x-ray images

    International Nuclear Information System (INIS)

    Nakayama, Akira; Yoshida, Yuuji; Fukumura, Teruo

    1984-01-01

    One technique for analysing image structure is based on grammatical inference. This technique poses several problems when applied to naturally obtained images, as no practical grammatical technique for two-dimensional images has been established. The authors developed a technique that solves these problems, mainly for the automated structure analysis of naturally obtained images. The first half of this paper describes the automatic inference of a line-drawing generation grammar and line-drawing analysis based on that inference. The second half reports on an actual analysis. The proposed technique extracts object line drawings from line drawings containing noise. Its effectiveness was evaluated on the example of extracting rib center lines from thin-line chest X-ray images of practical scale and complexity. In this example, the total number of characteristic points (ends, branch points, and intersections) composing the line drawings was 377 per image, and the total number of line segments composing the line drawings was 566 on average per sheet. The extraction ratio was 86.6%, which seems reasonable considering the complexity of the input line drawings. Further, the result was compared with rib center lines identified by the automatic screening system AISCR-V3 for comparison with a conventional processing technique, and it was satisfactory considering the versatility of this method. (Wakatsuki, Y.)

  16. Oncological image analysis.

    Science.gov (United States)

    Brady, Sir Michael; Highnam, Ralph; Irving, Benjamin; Schnabel, Julia A

    2016-10-01

    Cancer is one of the world's major healthcare challenges and, as such, an important application of medical image analysis. After a brief introduction to cancer, we summarise some of the major developments in oncological image analysis over the past 20 years, concentrating on those from the authors' laboratories, and then outline opportunities and challenges for the next decade. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. OCML-based colour image encryption

    International Nuclear Information System (INIS)

    Rhouma, Rhouma; Meherzi, Soumaya; Belghith, Safya

    2009-01-01

    Chaos-based cryptographic algorithms have suggested some new ways to develop efficient image-encryption schemes. While most of these schemes are based on low-dimensional chaotic maps, it has recently been proposed to use high-dimensional chaos, namely spatiotemporal chaos, which is modelled by one-way coupled-map lattices (OCML). Owing to their hyperchaotic behaviour, such systems are assumed to enhance cryptosystem security. In this paper, we propose an OCML-based colour image encryption scheme with a stream cipher structure. We use a 192-bit-long external key to generate the initial conditions and the parameters of the OCML. We have made several tests to check the security of the proposed cryptosystem, namely statistical tests including histogram analysis and calculation of the correlation coefficients of adjacent pixels, tests against differential attack including calculation of the number of pixel change rate (NPCR) and unified average changing intensity (UACI), and entropy calculation. The cryptosystem speed is analyzed and tested as well.
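    NPCR and UACI are simple pixel-level statistics, so they are easy to reproduce. A minimal implementation for two 8-bit cipher images of equal shape (e.g., ciphertexts of plaintexts differing in a single pixel) could look like this:

```python
import numpy as np

def npcr_uaci(c1, c2):
    """Number of Pixel Change Rate and Unified Average Changing
    Intensity between two 8-bit cipher images of equal shape."""
    c1 = c1.astype(np.int16)                       # avoid uint8 wrap-around
    c2 = c2.astype(np.int16)
    npcr = np.mean(c1 != c2) * 100.0               # % of differing pixels
    uaci = np.mean(np.abs(c1 - c2) / 255.0) * 100.0  # mean intensity change
    return npcr, uaci
```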

  18. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation

    International Nuclear Information System (INIS)

    Zhao, Zhanqi; Möller, Knut; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich

    2014-01-01

    Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton–Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images respectively: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in all examined indices among the four reconstruction algorithms (p > 0.2, Kruskal–Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices validated for images from one reconstruction algorithm are also valid for the other reconstruction algorithms. (paper)

  19. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.

    Science.gov (United States)

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2014-06-01

    Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images respectively: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in all examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices validated for images from one reconstruction algorithm are also valid for the other reconstruction algorithms.

  20. Gabor Analysis for Imaging

    DEFF Research Database (Denmark)

    Christensen, Ole; Feichtinger, Hans G.; Paukner, Stephan

    2015-01-01

    Gabor analysis characterizes a function by its transform over phase space, which is the time–frequency plane (TF-plane) in a musical context or the location–wave-number domain in the context of image processing. Since the transition from the signal domain to the phase-space domain introduces an enormous amount of data … [the chapter presents] the generalities relevant for an understanding of Gabor analysis of functions on Rd. We pay special attention to the case d = 2, which is the most important case for image processing and image analysis applications. The chapter is organized as follows. Section 2 presents central tools from functional analysis …; the application of Gabor expansions to image representation is considered in Sect. 6.
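    For the d = 2 case highlighted above, the practical face of Gabor analysis is filtering an image with a bank of Gabor kernels. A minimal, generic example using scikit-image (one frequency, several orientations) is sketched below; it illustrates the idea rather than anything specific to the chapter.

```python
import numpy as np
from skimage.filters import gabor

def gabor_responses(image, frequency=0.2, n_orientations=4):
    """Magnitude responses of a small Gabor filter bank; each map
    highlights image structure at one orientation."""
    responses = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        real, imag = gabor(image, frequency=frequency, theta=theta)
        responses.append(np.hypot(real, imag))  # complex magnitude
    return responses
```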

  1. Artificial intelligence and medical imaging. Expert systems and image analysis

    International Nuclear Information System (INIS)

    Wackenheim, A.; Zoellner, G.; Horviller, S.; Jacqmain, T.

    1987-01-01

    This paper gives an overview of existing systems for automated image analysis and interpretation in medical imaging, especially in radiology. The example of ORFEVRE, a system for the analysis of CAT-scan images of the cervical triplet (C3-C5) by image analysis and a subsequent expert system, is given and discussed in detail. Possible extensions are described [fr]

  2. Integrated optical 3D digital imaging based on DSP scheme

    Science.gov (United States)

    Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.

    2008-03-01

    We present a scheme of integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently, without PC support. The scheme is built on a parallel hardware structure around the DSP and a field-programmable gate array (FPGA) to realize 3-D imaging, and it adopts phase measurement profilometry. To realize pipeline processing of fringe projection, image acquisition, and fringe-pattern analysis, we present a multi-threaded application program developed under the DSP/BIOS RTOS (real-time operating system); its preemptive kernel and powerful configuration tool allow us to achieve real-time scheduling and synchronization. To accelerate automatic fringe analysis and phase unwrapping, we make use of software optimization. The proposed scheme reaches a performance of 39.5 f/s (frames per second), so it is well suited to real-time fringe-pattern analysis and fast 3-D imaging. Experimental results are presented to show the validity of the proposed scheme.
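    The fringe analysis at the heart of phase measurement profilometry reduces, for the common four-step algorithm with pi/2 phase shifts, to a per-pixel arctangent. The sketch below shows that core computation independently of the DSP/FPGA implementation; that the authors used exactly four steps is an assumption.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase map from four fringe images captured with phase
    shifts of 0, pi/2, pi and 3*pi/2 (I_n = A + B*cos(phi + n*pi/2))."""
    return np.arctan2(i4 - i2, i1 - i3)   # wrapped to (-pi, pi]

def unwrap_rows(phase):
    """Simplest (and least robust) spatial unwrapping: row by row.
    The unwrapped phase still needs a phase-to-height conversion."""
    return np.unwrap(phase, axis=1)
```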

  3. Security Analysis of A Chaos-based Image Encryption Algorithm

    OpenAIRE

    Lian, Shiguo; Sun, Jinsheng; Wang, Zhiquan

    2006-01-01

    The security of Fridrich's image encryption algorithm against brute-force, statistical, known-plaintext, and chosen-plaintext attacks is analyzed by investigating the properties of the involved chaotic maps and diffusion functions. Based on the given analyses, some means are proposed to strengthen the overall performance of the cryptosystem under study.

  4. A Blind Adaptive Color Image Watermarking Scheme Based on Principal Component Analysis, Singular Value Decomposition and Human Visual System

    Directory of Open Access Journals (Sweden)

    M. Imran

    2017-09-01

    Full Text Available A blind adaptive color image watermarking scheme based on principal component analysis, singular value decomposition, and the human visual system is proposed. The use of principal component analysis to decorrelate the three color channels of the host image improves the perceptual quality of the watermarked image, while the human visual system model and a fuzzy inference system improve both imperceptibility and robustness by selecting an adaptive scaling factor, so that areas more prone to noise can carry more information than less prone areas. To achieve security, the location of watermark embedding is kept secret and used as a key at watermark extraction time; for capacity, both singular values and vectors are involved in the embedding process. As a result, the four contradictory requirements of imperceptibility, robustness, security, and capacity are achieved, as the results suggest. Both subjective and objective methods are employed to examine the performance of the proposed scheme. For subjective analysis, the watermarked images and the watermarks extracted from attacked watermarked images are shown. For objective analysis in terms of imperceptibility, peak signal-to-noise ratio, the structural similarity index, visual information fidelity, and normalized color difference are used; for robustness, normalized correlation, bit error rate, normalized Hamming distance, and global authentication rate are used. Security is checked by using different keys to extract the watermark. The proposed scheme is compared with state-of-the-art watermarking techniques and shows better performance.
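    The first stage of the scheme, decorrelating the three colour channels with PCA, can be sketched as below; the embedding itself (block SVD and fuzzy scaling-factor selection) is omitted. This is a generic PCA-via-SVD illustration, not the authors' code.

```python
import numpy as np

def pca_decorrelate(rgb):
    """Project an H x W x 3 image onto the principal components of its
    colour distribution; also return what is needed to invert it."""
    h, w, _ = rgb.shape
    X = rgb.reshape(-1, 3).astype(np.float64)
    mean = X.mean(axis=0)
    # Right singular vectors = eigenvectors of the 3x3 channel covariance
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    Y = (X - mean) @ Vt.T                 # decorrelated channels
    return Y.reshape(h, w, 3), mean, Vt

def pca_restore(Y, mean, Vt):
    """Invert the decorrelation, e.g. after embedding a watermark."""
    h, w, _ = Y.shape
    X = Y.reshape(-1, 3) @ Vt + mean
    return X.reshape(h, w, 3)
```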

  5. Gap Acceptance During Lane Changes by Large-Truck Drivers—An Image-Based Analysis

    Science.gov (United States)

    Nobukawa, Kazutoshi; Bao, Shan; LeBlanc, David J.; Zhao, Ding; Peng, Huei; Pan, Christopher S.

    2016-01-01

    This paper presents an analysis of the rearward gap acceptance characteristics of drivers of large trucks in highway lane change scenarios. The range between the vehicles was inferred from camera images, using the estimated lane width obtained from the lane-tracking camera as the reference. Six hundred lane change events were acquired from a large-scale naturalistic driving data set. The kinematic variables from the image-based gap analysis were filtered by weighted linear least squares in order to extrapolate them to the lane change time. In addition, the time-to-collision and required deceleration were computed, and potential safety threshold values are provided. The resulting range and range rate distributions showed directional discrepancies: in left lane changes, large trucks are often slower than other vehicles in the target lane, whereas they are usually faster in right lane changes. Video observations confirmed that the major motivations for changing lanes differ with the direction of the move: moving to the left (faster) lane occurs due to a slower vehicle ahead or a merging vehicle on the right-hand side, whereas right lane changes are frequently made to return to the original lane after passing. PMID:26924947
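    Both safety measures named above follow directly from range and range rate. Under the usual convention that a negative range rate means a closing gap, a minimal sketch is:

```python
def time_to_collision(range_m, range_rate_mps):
    """TTC in seconds; defined only while the gap is closing."""
    if range_rate_mps >= 0:
        return float("inf")          # gap is opening or constant
    return -range_m / range_rate_mps

def required_deceleration(range_m, range_rate_mps):
    """Constant deceleration (m/s^2) needed to null the closing speed
    exactly when the gap reaches zero: a = v_rel^2 / (2 * range)."""
    if range_rate_mps >= 0:
        return 0.0
    return range_rate_mps ** 2 / (2.0 * range_m)
```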

  6. Performance of a gaseous detector based energy dispersive X-ray fluorescence imaging system: Analysis of human teeth treated with dental amalgam

    International Nuclear Information System (INIS)

    Silva, A.L.M.; Figueroa, R.; Jaramillo, A.; Carvalho, M.L.; Veloso, J.F.C.A.

    2013-01-01

    Energy dispersive X-ray fluorescence (EDXRF) imaging systems are of great interest in many application areas, since they allow imaging of the spatial elemental distribution in samples. The detector system used in this study is based on a micro-patterned gas detector named the Micro-Hole and Strip Plate. The full-field-of-view system, with an active area of 28 × 28 mm², presents some important features for EDXRF imaging applications, such as a position resolution below 125 μm, an intrinsic energy resolution of about 14% full width at half maximum for 5.9 keV X-rays, and a counting rate capability of 0.5 MHz. In this work, analysis of human teeth treated with dental amalgam was performed using the EDXRF imaging system mentioned above. The goal of the analysis is to evaluate the system's capabilities in the biomedical field by measuring the drift of the major constituents of dental amalgam, Zn and Hg, throughout the tooth structures. The elemental distribution patterns obtained during the analysis suggest diffusion of these elements from the amalgam into the tooth tissues. - Highlights: • Demonstration of an EDXRF imaging system based on a 2D-MHSP detector for biological analysis • Evaluation of the drift of the dental amalgam constituents throughout the teeth • Observation of Hg diffusion, due to hydroxyapatite crystal defects that compose the tooth tissues

  7. Performance of a gaseous detector based energy dispersive X-ray fluorescence imaging system: Analysis of human teeth treated with dental amalgam

    Energy Technology Data Exchange (ETDEWEB)

    Silva, A.L.M. [I3N, Physics Dept, University of Aveiro, 3810-193 Aveiro (Portugal); Figueroa, R.; Jaramillo, A. [Physics Department, Universidad de La Frontera, Temuco (Chile); Carvalho, M.L. [Atomic Physics Centre, University of Lisbon, 1649-03 Lisboa (Portugal); Veloso, J.F.C.A., E-mail: joao.veloso@ua.pt [I3N, Physics Dept, University of Aveiro, 3810-193 Aveiro (Portugal)

    2013-08-01

    Energy dispersive X-ray fluorescence (EDXRF) imaging systems are of great interest in many application areas, since they allow imaging of the spatial elemental distribution in samples. The detector system used in this study is based on a micro-patterned gas detector named the Micro-Hole and Strip Plate. The full-field-of-view system, with an active area of 28 × 28 mm², presents some important features for EDXRF imaging applications, such as a position resolution below 125 μm, an intrinsic energy resolution of about 14% full width at half maximum for 5.9 keV X-rays, and a counting rate capability of 0.5 MHz. In this work, analysis of human teeth treated with dental amalgam was performed using the EDXRF imaging system mentioned above. The goal of the analysis is to evaluate the system's capabilities in the biomedical field by measuring the drift of the major constituents of dental amalgam, Zn and Hg, throughout the tooth structures. The elemental distribution patterns obtained during the analysis suggest diffusion of these elements from the amalgam into the tooth tissues. - Highlights: • Demonstration of an EDXRF imaging system based on a 2D-MHSP detector for biological analysis • Evaluation of the drift of the dental amalgam constituents throughout the teeth • Observation of Hg diffusion, due to hydroxyapatite crystal defects that compose the tooth tissues.

  8. A comparative analysis of pixel- and object-based detection of landslides from very high-resolution images

    Science.gov (United States)

    Keyport, Ren N.; Oommen, Thomas; Martha, Tapas R.; Sajinkumar, K. S.; Gierke, John S.

    2018-02-01

    A comparative analysis of landslides detected by pixel-based and object-oriented analysis (OOA) methods was performed using very high-resolution (VHR) remotely sensed aerial images for San Juan La Laguna, Guatemala, which witnessed widespread devastation during the 2005 Hurricane Stan. A 3-band orthophoto of 0.5 m spatial resolution, together with a field-based inventory of 115 landslides, was used for the analysis. A binary reference was assigned, with a value of zero for landslide and unity for non-landslide pixels. The pixel-based analysis was performed using unsupervised classification, which resulted in 11 different trial classes. Detection of landslides using OOA included 2-step K-means clustering to eliminate regions based on brightness, and elimination of false positives using object properties such as rectangular fit, compactness, length/width ratio, mean difference of objects, and slope angle. Both the overall accuracy and the F-score of the OOA methods outperformed pixel-based unsupervised classification in both the landslide and non-landslide classes. The overall accuracies for OOA and pixel-based unsupervised classification were 96.5% and 94.3%, respectively, whereas the best F-scores for landslide identification were 84.3% and 77.9%, respectively. The results indicate that OOA is able to identify the majority of landslides with few false positives when compared to pixel-based unsupervised classification.

  9. Analysis of high-throughput plant image data with the information system IAP

    Directory of Open Access Journals (Sweden)

    Klukas Christian

    2012-06-01

    Full Text Available This work presents a sophisticated information system, the Integrated Analysis Platform (IAP), an approach supporting large-scale image analysis for different species and imaging systems. In its current form, IAP supports the investigation of maize, barley, and Arabidopsis plants based on images obtained in different spectra.

  10. Mass distribution of fission fragments using SSNTDs based image analysis system

    International Nuclear Information System (INIS)

    Kolekar, R.V.; Sharma, D.N.

    2006-01-01

    A Lexan polycarbonate track detector was used to obtain the mass distribution of fission fragments from a ²⁵²Cf planchette source. Normally, if fission fragments are incident perpendicular to the Lexan surface, the track diameter of a heavy fragment is greater than that of a light fragment. In practical problems, fission fragments are incident on the detector at all angles. In the present experiment, the Lexan detector was therefore exposed to the ²⁵²Cf planchette source in 2π geometry, with fission fragments incident at various angles, so the projected track length for fragments of the same energy differs with the angle of incidence. Image analysis software was used to measure the projected track length. A complication is that, for fragments with a large angle of incidence, the entire track is not in focus at the surface, so a reduced track length is measured. This problem was solved by taking two images, one focused at the surface and one at the tip of the track, and overlapping them using the image analysis software. The projected track length and the depth of the track were used to obtain the angle of incidence, and track lengths were then compared at the same angle of incidence. In all, 500 track lengths were measured and a plot of the mass distribution of the fission fragments was obtained. (author)
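    Recovering the dip angle and the true track length from the projected length and the focus-depth difference is plane geometry, assuming straight tracks. A minimal sketch (units in micrometres, values invented):

```python
import math

def track_geometry(projected_um, depth_um):
    """True track length and dip angle (relative to the detector
    surface) from the projected length and the focus-depth difference,
    assuming a straight track."""
    true_length = math.hypot(projected_um, depth_um)
    dip_deg = math.degrees(math.atan2(depth_um, projected_um))
    return true_length, dip_deg

print(track_geometry(12.0, 5.0))  # -> (13.0, ~22.6 degrees)
```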

  11. Chromatic Image Analysis For Quantitative Thermal Mapping

    Science.gov (United States)

    Buck, Gregory M.

    1995-01-01

    A chromatic image analysis system (CIAS) was developed for use in noncontact measurements of temperatures on aerothermodynamic models in hypersonic wind tunnels. It is based on the concept of temperature coupled to a shift in the color spectrum for optical measurement. A video camera images the fluorescence emitted by a phosphor-coated model at two wavelengths; a temperature map of the model is then computed from the relative brightnesses in the video images of the model at those wavelengths. This eliminates the need for intrusive, time-consuming contact temperature measurements by gauges, making it possible to map temperatures on complex surfaces in a timely manner and at reduced cost.
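    Per pixel, the two-wavelength principle amounts to converting a brightness ratio to temperature through a previously measured calibration curve. The sketch below assumes a hypothetical calibration table; the actual phosphor calibration data are not given in the abstract.

```python
import numpy as np

# Hypothetical calibration: brightness ratio I(l1)/I(l2) vs. temperature (K),
# measured beforehand on a phosphor sample held at known temperatures.
CAL_RATIO = np.array([0.4, 0.7, 1.1, 1.6, 2.2])
CAL_TEMP = np.array([300.0, 340.0, 380.0, 420.0, 460.0])

def temperature_map(img_l1, img_l2):
    """Per-pixel temperature from the ratio of the two wavelength
    images, via linear interpolation of the calibration curve."""
    ratio = img_l1 / np.maximum(img_l2, 1e-6)   # guard dark pixels
    return np.interp(ratio, CAL_RATIO, CAL_TEMP)
```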

  12. A New Images Hiding Scheme Based on Chaotic Sequences

    Institute of Scientific and Technical Information of China (English)

    LIU Nian-sheng; GUO Dong-hui; WU Bo-xi; Parr G

    2005-01-01

    We propose a data hiding technique for still images. The technique is based on chaotic sequences in the transform domain of the cover image. Different chaotic random sequences are multiplied by multiple sensitive images, respectively, to spread the spectrum of the sensitive images, which are then hidden in the cover image as a form of noise. The results of theoretical analysis and computer simulation show that the new hiding technique has better properties, with higher security, imperceptibility, and capacity for hidden information, than conventional schemes such as LSB (Least Significant Bit).

  13. Analysis of PET hypoxia imaging in the quantitative imaging for personalized cancer medicine program

    International Nuclear Information System (INIS)

    Yeung, Ivan; Driscoll, Brandon; Keller, Harald; Shek, Tina; Jaffray, David; Hedley, David

    2014-01-01

    Quantitative imaging is an important tool in clinical trials testing novel agents and strategies for cancer treatment. The Quantitative Imaging for Personalized Cancer Medicine Program (QIPCM) provides clinicians and researchers participating in multi-center clinical trials with a central repository for their imaging data. In addition, a set of tools provides standards of practice (SOPs) for end-to-end quality assurance of scanners and image analysis. The four components for data archiving and analysis are the Clinical Trials Patient Database, the Clinical Trials PACS, the data analysis engine(s), and the high-speed networks that connect them. The program provides a suite of software able to perform RECIST, dynamic MRI, CT, and PET analysis. The imaging data can be accessed securely from remote sites and analyzed by researchers with these software tools, or with tools provided by the users and installed at the server. Alternatively, QIPCM provides a data analysis service following the developed SOPs. As an example, a clinical study in which patients with unresectable pancreatic adenocarcinoma were studied with dynamic PET-FAZA for hypoxia measurement is discussed. We successfully quantified the degree of hypoxia as well as tumor perfusion in a group of 20 patients in terms of SUV and hypoxic fraction, and found no correlation between bulk tumor perfusion and hypoxia status in this cohort. QIPCM also provides end-to-end QA testing of scanners used in multi-center clinical trials. Based on quality assurance data from multiple CT-PET scanners, we conclude that quality control of imaging is vital to the success of multi-center trials, as different imaging and reconstruction parameters in PET imaging can lead to very different results in hypoxia imaging. (author)

  14. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    Full Text Available The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method, named the average gait differential image (AGDI), is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it preserves both the kinetic and the static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
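    From the description, the AGDI is the accumulation of frame-to-frame silhouette differences over a gait cycle. A minimal sketch of that accumulation, assuming binary silhouettes that are already size-normalised and aligned, might be:

```python
import numpy as np

def average_gait_differential_image(silhouettes):
    """silhouettes: sequence of aligned binary masks (H x W) covering
    one gait cycle. Returns the normalised accumulation of
    frame-to-frame silhouette differences."""
    frames = np.asarray(silhouettes, dtype=np.float32)
    diffs = np.abs(np.diff(frames, axis=0))   # per-pair change maps
    return diffs.mean(axis=0)                 # average differential image
```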

  15. Dental Videographic Analysis using Digital Age Media.

    Science.gov (United States)

    Agarwal, Anirudh; Seth, Karan; Parmar, Siddharaj; Jhawar, Rahul

    2016-01-01

    The aim of this study was to evaluate a new method of smile analysis using videographic and photographic software (in this study, Photoshop Elements X and Windows Movie Maker 2012) as primary assessment tools, and to develop an index for malocclusion and treatment planning that could be used in assessing the severity of malocclusion. Agarwal A, Seth K, Parmar S, Jhawar R. Dental Videographic Analysis using Digital Age Media. Int J Clin Pediatr Dent 2016;9(4):355-363.

  16. Linear-fitting-based similarity coefficient map for tissue dissimilarity analysis in T2*-w magnetic resonance imaging

    International Nuclear Information System (INIS)

    Yu Shao-De; Wu Shi-Bin; Xie Yao-Qin; Wang Hao-Yu; Wei Xin-Hua; Chen Xin; Pan Wan-Long; Hu Jiani

    2015-01-01

    Similarity coefficient mapping (SCM) aims to improve the morphological evaluation of T2*-weighted magnetic resonance imaging. However, how to interpret the generated SCM map is still an open question. Moreover, is it possible to extract tissue dissimilarity information based on the theory behind SCM? The primary purpose of this paper is to address these two questions. First, the theory of SCM is interpreted from the perspective of linear fitting. Then, a term is embedded for tissue dissimilarity information. Finally, the method is validated with sixteen human brain image series from multi-echo T2*-weighted imaging. The generated maps are investigated in terms of signal-to-noise ratio (SNR) and perceived visual quality, and then interpreted in terms of intra- and inter-tissue intensity. Experimental results show that both the perceptibility of anatomical structures and the tissue contrast are improved. More importantly, tissue similarity or dissimilarity can be quantified and cross-validated from pixel intensity analysis. This method benefits image enhancement, tissue classification, malformation detection, and morphological evaluation. (paper)

  17. ANALYSIS OF SST IMAGES BY WEIGHTED ENSEMBLE TRANSFORM KALMAN FILTER

    OpenAIRE

    Gorthi, Sai; Beyou, Sébastien; Memin, Etienne

    2011-01-01

    This paper presents a novel, efficient scheme for the analysis of sea surface temperature (SST) ocean images. We consider the estimation of velocity fields and vorticity values from a sequence of oceanic images. The contribution of this paper lies in proposing a novel, robust, and simple approach based on the Weighted Ensemble Transform Kalman Filter (WETKF) data assimilation technique for the analysis of real SST images, which may contain coastal regions or large areas of ...

  18. Object-Based Image Analysis in Wetland Research: A Review

    Directory of Open Access Journals (Sweden)

    Iryna Dronova

    2015-05-01

    Full Text Available The applications of object-based image analysis (OBIA) in remote sensing studies of wetlands have been growing over recent decades, addressing tasks from detection and delineation of wetland bodies to comprehensive analyses of within-wetland cover types and their change. Compared to pixel-based approaches, OBIA offers several important benefits for wetland analysis, related to smoothing of local noise, incorporation of meaningful non-spectral features for class separation, and accounting for the landscape hierarchy of wetland ecosystem organization and structure. However, there has been little discussion on whether the unique challenges of wetland environments can be uniformly addressed by OBIA across different types of data, spatial scales, and research objectives, and to what extent technical and conceptual aspects of this framework may themselves present challenges in a complex wetland setting. This review presents a synthesis of 73 studies that applied OBIA to different types of remote sensing data, spatial scales, and research objectives. It summarizes the progress and scope of OBIA uses in wetlands, key benefits of this approach, factors related to accuracy and uncertainty in its applications, and the main research needs and directions to expand the OBIA capacity in future wetland studies. Growing demands for higher-accuracy wetland characterization at both regional and local scales, together with advances in very high resolution remote sensing and novel tasks in wetland restoration monitoring, will likely continue to drive active exploration of the OBIA potential in these diverse and complex environments.

  19. Microscopy image segmentation tool: Robust image data analysis

    Energy Technology Data Exchange (ETDEWEB)

    Valmianski, Ilya, E-mail: ivalmian@ucsd.edu; Monton, Carlos; Schuller, Ivan K. [Department of Physics and Center for Advanced Nanoscience, University of California San Diego, 9500 Gilman Drive, La Jolla, California 92093 (United States)

    2014-03-15

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  20. Microscopy image segmentation tool: Robust image data analysis

    Science.gov (United States)

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-03-01

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  1. Microscopy image segmentation tool: Robust image data analysis

    International Nuclear Information System (INIS)

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-01-01

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy

  2. Application of three-class ROC analysis to task-based image quality assessment of simultaneous dual-isotope myocardial perfusion SPECT (MPS).

    Science.gov (United States)

    He, Xin; Song, Xiyun; Frey, Eric C

    2008-11-01

    The diagnosis of cardiac disease using dual-isotope myocardial perfusion SPECT (MPS) is based on the defect status in both stress and rest images, and can be modeled as a three-class task of classifying patients as having no, reversible, or fixed perfusion defects. Simultaneous acquisition protocols for dual-isotope MPS imaging have gained much interest due to their advantages including perfect registration of the (201)Tl and (99m)Tc images in space and time, increased patient comfort, and higher clinical throughput. As a result of simultaneous acquisition, however, crosstalk contamination, where photons emitted by one isotope contribute to the image of the other isotope, degrades image quality. Minimizing the crosstalk is important in obtaining the best possible image quality. One way to minimize the crosstalk is to optimize the injected activity of the two isotopes by considering the three-class nature of the diagnostic problem. To effectively do so, we have previously developed a three-class receiver operating characteristic (ROC) analysis methodology that extends and unifies the decision theoretic, linear discriminant analysis, and psychophysical foundations of binary ROC analysis in a three-class paradigm. In this work, we applied the proposed three-class ROC methodology to the assessment of the image quality of simultaneous dual-isotope MPS imaging techniques and the determination of the optimal injected activity combination. In addition to this application, the rapid development of diagnostic imaging techniques has produced an increasing number of clinical diagnostic tasks that involve not only disease detection, but also disease characterization and are thus multiclass tasks. This paper provides a practical example of the application of the proposed three-class ROC analysis methodology to medical problems.

  3. A vegetation height classification approach based on texture analysis of a single VHR image

    International Nuclear Information System (INIS)

    Petrou, Z I; Manakos, I; Stathaki, T; Tarantino, C; Adamo, M; Blonda, P

    2014-01-01

    Vegetation height is a crucial feature in various applications related to ecological mapping, enhancing the discrimination among different land cover or habitat categories and facilitating a series of environmental tasks, ranging from biodiversity monitoring and assessment to landscape characterization, disaster management, and conservation planning. Primary sources of information on vegetation height include in situ measurements and data from active satellite or airborne sensors, which, however, may often be unaffordable or unavailable for certain regions. Alternative approaches for extracting height information from very high resolution (VHR) satellite imagery based on texture analysis have recently been presented, with promising results. Following the notion that multispectral image bands may often be highly correlated, data transformation and dimensionality reduction techniques are expected to reduce redundant information, and thus the computational cost of the approaches, without significantly compromising their accuracy. In this paper, dimensionality reduction is performed on a VHR image and textural characteristics are calculated on its reconstructed approximations, showing that their discriminatory capabilities are maintained to a large degree. Texture analysis is also performed on the projected data to investigate whether the different height categories can be distinguished in a similar way.

  4. Nanoplatform-based molecular imaging

    National Research Council Canada - National Science Library

    Chen, Xiaoyuan

    2011-01-01

    "Nanoplathform-Based Molecular Imaging provides rationale for using nanoparticle-based probes for molecular imaging, then discusses general strategies for this underutilized, yet promising, technology...

  5. Issues in Quantitative Analysis of Ultraviolet Imager (UVI) Data: Airglow

    Science.gov (United States)

    Germany, G. A.; Richards, P. G.; Spann, J. F.; Brittnacher, M. J.; Parks, G. K.

    1999-01-01

    The GGS Ultraviolet Imager (UVI) has proven to be especially valuable in correlative substorm, auroral morphology, and extended statistical studies of the auroral regions. Such studies are based on knowledge of the location and the spatial and temporal behavior of auroral emissions. More quantitative studies, based on absolute radiometric intensities from UVI images, require a more intimate knowledge of the instrument behavior and data processing requirements and are inherently more difficult than studies based on relative knowledge of the oval location. In this study, UVI airglow observations are analyzed and compared with model predictions to illustrate issues that arise in the quantitative analysis of UVI images. These issues include instrument calibration, long-term changes in sensitivity, and imager flat-field response, as well as proper background correction. Airglow emissions are chosen for this study because of their relatively straightforward modeling requirements and because of their implications for thermospheric compositional studies. The analysis issues discussed here, however, are identical to those faced in quantitative auroral studies.

  6. Design of an image encryption scheme based on a multiple chaotic map

    Science.gov (United States)

    Tong, Xiao-Jun

    2013-07-01

    In order to solve the problems that chaotic dynamics degenerate under limited computer precision and that the Cat map has a small key space, this paper presents a chaotic map based on topological conjugacy, whose chaotic characteristics are proved by the Devaney definition. In order to produce a large key space, a Cat map named the block Cat map is also designed for the permutation process, based on multiple-dimensional chaotic maps. The image encryption algorithm is based on permutation-substitution, and each key is controlled by a different chaotic map. Entropy analysis, differential analysis, weak-key analysis, statistical analysis, cipher randomness analysis, and cipher sensitivity analysis with respect to key and plaintext are introduced to test the security of the new image encryption scheme. Through comparison of the proposed scheme with the AES, DES and Logistic encryption methods, we conclude that the image encryption method solves the problem of the low precision of one-dimensional chaotic functions and offers higher speed and higher security.
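
    As a rough illustration of the permutation-substitution architecture, the sketch below permutes pixel positions with a plain Arnold cat map and substitutes grey values with a logistic-map keystream. It deliberately does not reproduce the paper's block Cat map or its topologically conjugate chaotic map; the map parameters, initial condition and image size are illustrative only.

      # Illustrative permutation-substitution cipher for a square grey image.
      import numpy as np

      def cat_map_permute(img, a=1, b=1, rounds=3):
          """Arnold cat map; the matrix [[1, a], [b, a*b + 1]] has unit
          determinant, so the pixel shuffle is a bijection."""
          n = img.shape[0]
          out = img.copy()
          for _ in range(rounds):
              y, x = np.indices((n, n))
              nx = (x + a * y) % n
              ny = (b * x + (a * b + 1) * y) % n
              shuffled = np.empty_like(out)
              shuffled[ny, nx] = out[y, x]
              out = shuffled
          return out

      def logistic_keystream(length, x0=0.3141, mu=3.9999):
          """Byte stream from iterating x -> mu * x * (1 - x)."""
          x, ks = x0, np.empty(length, dtype=np.uint8)
          for i in range(length):
              x = mu * x * (1.0 - x)
              ks[i] = int(x * 256) % 256
          return ks

      plain = np.arange(64 * 64, dtype=np.uint8).reshape(64, 64)
      permuted = cat_map_permute(plain)                                      # permutation stage
      cipher = permuted ^ logistic_keystream(permuted.size).reshape(64, 64)  # substitution stage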

  7. A novel secret image sharing scheme based on chaotic system

    Science.gov (United States)

    Li, Li; Abd El-Latif, Ahmed A.; Wang, Chuanjun; Li, Qiong; Niu, Xiamu

    2012-04-01

    In this paper, we propose a new secret image sharing scheme based on a chaotic system and Shamir's method. The new scheme protects the shadow images with confidentiality and loss-tolerance simultaneously. In the new scheme, we generate the key sequence from the chaotic system and then encrypt the original image during the sharing phase. Experimental results and analysis of the proposed scheme demonstrate better performance than other schemes and confirm strong resistance to brute-force attacks.
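
    For orientation, the sketch below shows the Shamir half of such a scheme: (k, n) threshold sharing applied byte-wise over GF(257), so that any k of the n shadows recover the data exactly. The chaotic-system key generation and the encryption of the original image, which the paper performs before sharing, are omitted here.

      # (k, n) Shamir secret sharing over GF(257), applied byte-wise.
      import random

      P = 257  # smallest prime above 255; byte values never reach 256

      def make_shares(secret_bytes, k, n):
          """One random degree-(k-1) polynomial per byte; share i holds
          the polynomial evaluations at x = i."""
          shares = [(x, []) for x in range(1, n + 1)]
          for s in secret_bytes:
              coeffs = [s] + [random.randrange(P) for _ in range(k - 1)]
              for x, ys in shares:
                  ys.append(sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
          return shares

      def reconstruct(subset):
          """Lagrange interpolation at x = 0 from any k or more shares."""
          xs = [x for x, _ in subset]
          out = []
          for j in range(len(subset[0][1])):
              acc = 0
              for x_i, ys in subset:
                  num = den = 1
                  for x_m in xs:
                      if x_m != x_i:
                          num = num * (-x_m) % P
                          den = den * (x_i - x_m) % P
                  acc = (acc + ys[j] * num * pow(den, P - 2, P)) % P
              out.append(acc)
          return out

      pixels = [12, 200, 255, 0]                  # a tiny stand-in "image"
      shares = make_shares(pixels, k=3, n=5)
      assert reconstruct(shares[:3]) == pixels    # any 3 of 5 shadows suffice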

  8. Attenuation correction for brain PET imaging using deep neural network based on dixon and ZTE MR images.

    Science.gov (United States)

    Gong, Kuang; Yang, Jaewon; Kim, Kyungsang; El Fakhri, Georges; Seo, Youngho; Li, Quanzheng

    2018-05-23

    Positron Emission Tomography (PET) is a functional imaging modality widely used in neuroscience studies. To obtain meaningful quantitative results from PET images, attenuation correction is necessary during image reconstruction. For PET/MR hybrid systems, attenuation correction is challenging because Magnetic Resonance (MR) images do not reflect attenuation coefficients directly. To address this issue, we present deep neural network methods to derive continuous attenuation coefficients for brain PET imaging from MR images. With only Dixon MR images as the network input, the existing U-net structure was adopted, and analysis using forty patient data sets shows that it is superior to other Dixon-based methods. When both Dixon and zero echo time (ZTE) images are available, we propose a modified U-net structure, named GroupU-net, to make efficient use of both Dixon and ZTE information through group convolution modules as the network goes deeper. Quantitative analysis based on fourteen real patient data sets demonstrates that both network approaches perform better than the standard methods, and that the proposed network structure further reduces the PET quantification error compared to the U-net structure. © 2018 Institute of Physics and Engineering in Medicine.

  9. Entropy-Based Block Processing for Satellite Image Registration

    Directory of Open Access Journals (Sweden)

    Ikhyun Lee

    2012-11-01

    Full Text Available Image registration is an important task in many computer vision applications such as fusion systems, 3D shape recovery and earth observation. Particularly, registering satellite images is challenging and time-consuming due to limited resources and large image sizes. In such scenarios, state-of-the-art image registration methods such as the scale-invariant feature transform (SIFT) may not be suitable due to their high processing time. In this paper, we propose an algorithm based on block processing via entropy to register satellite images. The performance of the proposed method is evaluated using different real images. The comparative analysis shows that it not only reduces the processing time but also enhances the accuracy.
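
    A minimal reading of the block-processing idea is sketched below: Shannon entropy is computed for each block, and only the most informative blocks are handed to the (not shown) feature-matching stage. The block size and the fraction of blocks kept are assumptions, not the authors' parameters.

      # Rank image blocks by Shannon entropy to focus registration effort.
      import numpy as np

      def block_entropy(block):
          hist = np.bincount(block.ravel(), minlength=256).astype(float)
          p = hist / hist.sum()
          p = p[p > 0]
          return float(-(p * np.log2(p)).sum())

      def top_entropy_blocks(img, size=64, keep=0.25):
          h, w = img.shape
          scored = []
          for r in range(0, h - size + 1, size):
              for c in range(0, w - size + 1, size):
                  scored.append((block_entropy(img[r:r + size, c:c + size]), r, c))
          scored.sort(reverse=True)
          return scored[:max(1, int(len(scored) * keep))]

      rng = np.random.default_rng(1)
      scene = rng.integers(0, 255, size=(512, 512), dtype=np.uint8)  # stand-in image
      print(top_entropy_blocks(scene)[:3])  # (entropy, row, col) of best blocks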

  10. Hyperspectral image analysis. A tutorial

    DEFF Research Database (Denmark)

    Amigo Rubio, Jose Manuel; Babamoradi, Hamid; Elcoroaristizabal Martin, Saioa

    2015-01-01

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics like hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing will be exposed, and some guidelines given and discussed. Due to the broad character of current applications and the vast number of multivariate methods available, this paper has focused on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology to differentiate between several types of plastics by using Near infrared hyperspectral imaging and Partial Least Squares - Discriminant Analysis. Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case.

  11. A framework of region-based dynamic image fusion

    Institute of Scientific and Technical Information of China (English)

    WANG Zhong-hua; QIN Zheng; LIU Yu

    2007-01-01

    A new framework of region-based dynamic image fusion is proposed. First, the technique of target detection is applied to dynamic images (image sequences) to segment the images into different target and background regions. Then different fusion rules are employed in different regions so that the target information is preserved as much as possible. In addition, a steerable non-separable wavelet frame transform is used in the process of multi-resolution analysis, so the system achieves favorable characteristics of orientation selectivity and shift invariance. Compared with other image fusion methods, experimental results showed that the proposed method has better capabilities of target recognition and preserves clear background information.

  12. Texture Based Quality Analysis of Simulated Synthetic Ultrasound Images Using Local Binary Patterns †

    Directory of Open Access Journals (Sweden)

    Prerna Singh

    2017-12-01

    Full Text Available Speckle noise reduction is an important area of research in the field of ultrasound image processing. Several algorithms for speckle noise characterization and analysis have been recently proposed in the area. Synthetic ultrasound images can play a key role in noise evaluation methods as they can be used to generate a variety of speckle noise models under different interpolation and sampling schemes, and can also provide valuable ground truth data for estimating the accuracy of the chosen methods. However, not much work has been done in the area of modeling synthetic ultrasound images, and in simulating speckle noise generation to get images that are as close as possible to real ultrasound images. An important aspect of simulated synthetic ultrasound images is the requirement for extensive quality assessment for ensuring that they have the texture characteristics and gray-tone features of real images. This paper presents texture feature analysis of synthetic ultrasound images using local binary patterns (LBP and demonstrates the usefulness of a set of LBP features for image quality assessment. Experimental results presented in the paper clearly show how these features could provide an accurate quality metric that correlates very well with subjective evaluations performed by clinical experts.
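
    The sketch below computes the kind of uniform-LBP histogram the paper builds its features from, and compares a simulated and a real image with a chi-square distance as one plausible way of turning the histograms into a quality score. The neighborhood settings and the distance measure are assumptions, not the paper's exact metric.

      # Uniform LBP histograms as texture signatures for quality comparison.
      import numpy as np
      from skimage.feature import local_binary_pattern

      def lbp_histogram(gray, points=8, radius=1):
          lbp = local_binary_pattern(gray, points, radius, method='uniform')
          n_bins = points + 2  # uniform codes plus one non-uniform bin
          hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
          return hist

      def chi_square_distance(h1, h2, eps=1e-10):
          return 0.5 * float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

      rng = np.random.default_rng(2)
      real = rng.integers(0, 255, size=(128, 128), dtype=np.uint8)       # placeholder
      simulated = rng.integers(0, 255, size=(128, 128), dtype=np.uint8)  # placeholder
      print(chi_square_distance(lbp_histogram(real), lbp_histogram(simulated)))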

  13. SU-C-201-04: Quantification of Perfusion Heterogeneity Based On Texture Analysis for Fully Automatic Detection of Ischemic Deficits From Myocardial Perfusion Imaging

    International Nuclear Information System (INIS)

    Fang, Y; Huang, H; Su, T

    2015-01-01

    Purpose: Texture-based quantification of image heterogeneity has been a popular topic for imaging studies in recent years. As previous studies mainly focused on oncological applications, we report our recent efforts to apply such techniques to cardiac perfusion imaging. A fully automated procedure has been developed to perform texture analysis for measuring image heterogeneity. Clinical data were used to evaluate the preliminary performance of such methods. Methods: Myocardial perfusion images of Thallium-201 scans were collected from 293 patients with suspected coronary artery disease. Each subject underwent a Tl-201 scan and a percutaneous coronary intervention (PCI) within three months. The PCI result was used as the gold standard for coronary ischemia, defined as more than 70% stenosis. Each Tl-201 scan was spatially normalized to an image template for fully automatic segmentation of the LV. The segmented voxel intensities were then carried into the texture analysis with our open-source software, the Chang Gung Image Texture Analysis toolbox (CGITA). To evaluate the clinical performance of image heterogeneity for detecting coronary stenosis, receiver operating characteristic (ROC) analysis was used to compute the overall accuracy, sensitivity and specificity as well as the area under the curve (AUC). These indices were compared to those obtained from the commercially available semi-automatic software QPS. Results: With the fully automatic procedure to quantify heterogeneity from Tl-201 scans, we were able to achieve good discrimination with good accuracy (74%), sensitivity (73%), specificity (77%) and an AUC of 0.82. Such performance is similar to that obtained from the semi-automatic QPS software, which gives a sensitivity of 71% and specificity of 77%. Conclusion: Based on fully automatic procedures of data processing, our preliminary data indicate that the image heterogeneity of myocardial perfusion imaging can provide useful information for automatic determination
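
    The reported ROC evaluation is easy to reproduce in outline. The sketch below uses synthetic stand-ins for the per-patient heterogeneity scores and PCI labels, and picks the operating point by the Youden index, an assumption since the abstract does not state how sensitivity and specificity were chosen.

      # ROC analysis of a heterogeneity score against a binary PCI standard.
      import numpy as np
      from sklearn.metrics import roc_auc_score, roc_curve

      rng = np.random.default_rng(3)
      labels = rng.integers(0, 2, size=293)                     # 1 = >70% stenosis
      scores = rng.normal(loc=labels.astype(float), scale=1.0)  # heterogeneity index

      auc = roc_auc_score(labels, scores)
      fpr, tpr, _ = roc_curve(labels, scores)
      best = int(np.argmax(tpr - fpr))  # Youden index operating point
      print(f"AUC={auc:.2f}, sensitivity={tpr[best]:.2f}, "
            f"specificity={1 - fpr[best]:.2f}")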

  14. Imaging mass spectrometry statistical analysis.

    Science.gov (United States)

    Jones, Emrys A; Deininger, Sören-Oliver; Hogendoorn, Pancras C W; Deelder, André M; McDonnell, Liam A

    2012-08-30

    Imaging mass spectrometry is increasingly used to identify new candidate biomarkers. This clinical application of imaging mass spectrometry is highly multidisciplinary: expertise in mass spectrometry is necessary to acquire high quality data, histology is required to accurately label the origin of each pixel's mass spectrum, disease biology is necessary to understand the potential meaning of the imaging mass spectrometry results, and statistics to assess the confidence of any findings. Imaging mass spectrometry data analysis is further complicated because of the unique nature of the data (within the mass spectrometry field); several of the assumptions implicit in the analysis of LC-MS/profiling datasets are not applicable to imaging. The very large size of imaging datasets and the many data analysis routines reported, combined with inadequate training and a lack of accessible reviews, have exacerbated this problem. In this paper we provide an accessible review of the nature of imaging data and the different strategies by which the data may be analyzed. Particular attention is paid to the assumptions of the data analysis routines to ensure that the reader is apprised of their correct usage in imaging mass spectrometry research. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Determination of Particle Size and Distribution through Image-Based Macroscopic Analysis of the Structure of Biomass Briquettes

    Directory of Open Access Journals (Sweden)

    Veronika Chaloupková

    2018-02-01

    Full Text Available Via image-based macroscopic analysis of briquettes' surface structure, particle size and distribution were determined to better understand the behavioural pattern of the input material during agglomeration in the pressing chamber of a briquetting machine. The briquettes, made of miscanthus, industrial hemp and pine sawdust, were produced by a hydraulic piston press. Their structure was visualized by a stereomicroscope equipped with a digital camera and software for image analysis and data measurements. In total, 90 images of surface structure were obtained and quantitatively analysed. Using Nikon Instruments Software (NIS-Elements software), the length and area of 900 particles were measured and statistically tested to compare the size of the particles at different surface locations. Results showed statistically significant differences in particle size distribution: larger particles were generally on the front side of the briquettes and, vice versa, smaller particles were on the rear side. Likewise, larger particles were concentrated in the middle of cross sections and smaller particles at the bottom of the briquette.

  16. Rhizoslides: paper-based growth system for non-destructive, high throughput phenotyping of root development by means of image analysis.

    Science.gov (United States)

    Le Marié, Chantal; Kirchgessner, Norbert; Marschall, Daniela; Walter, Achim; Hund, Andreas

    2014-01-01

    A quantitative characterization of root system architecture is currently being attempted for various reasons. Non-destructive, rapid analyses of root system architecture are difficult to perform due to the hidden nature of the root. Hence, improved methods to measure root architecture are necessary to support knowledge-based plant breeding and to analyse root growth responses to environmental changes. Here, we report on the development of a novel method to reveal the growth and architecture of maize root systems. The method is based on the cultivation of different root types within several layers of two-dimensional, large (50 × 60 cm) plates (rhizoslides). A central Plexiglas screen stabilizes the system and is covered on both sides with germination paper providing water and nutrients for the developing root, followed by a transparent cover foil to prevent the roots from drying out and to stabilize the system. The embryonic roots grow hidden between the Plexiglas surface and the paper, whereas crown roots grow visibly between the paper and the transparent cover. Cultivation for up to 20 days (four fully developed leaves) with good image quality was made possible by suppressing fungi with a fungicide. Based on hyperspectral microscopy imaging, the quality of different germination papers was tested, and three provided sufficient contrast to distinguish between roots and background (segmentation). Illumination, image acquisition and segmentation were optimised to facilitate efficient root image analysis. Several software packages were evaluated with regard to their precision and the time investment needed to measure root system architecture. The software 'Smart Root' allowed precise evaluation of root development but needed substantial user interference. 'GiaRoots' provided the best segmentation method for batch processing in combination with a good analysis of global root characteristics but overestimated root length due to thinning artefacts. 'WhinRhizo' offered the most rapid

  17. Semiautomated Segmentation and Measurement of Cytoplasmic Vacuoles in a Neutrophil With General-Purpose Image Analysis Software.

    Science.gov (United States)

    Mizukami, Maki; Yamada, Misaki; Fukui, Sayaka; Fujimoto, Nao; Yoshida, Shigeru; Kaga, Sanae; Obata, Keiko; Jin, Shigeki; Miwa, Keiko; Masauzi, Nobuo

    2016-11-01

    Morphological observation of blood or marrow films is still described nonquantitatively. We developed a semiautomatic method for segmenting vacuoles from the cytoplasm using Photoshop (PS) and ImageJ (IJ), called PS-IJ, and measured the relative entire cell area (rECA) and relative area of vacuoles (rAV) in the cytoplasm of neutrophils with PS-IJ. Whole-blood samples were stored at 4°C with ethylenediaminetetraacetate and in two different preserving manners (P1 and P2). Color-tone intensity levels of the neutrophil images were semiautomatically compensated using PS, and vacuole portions were then automatically segmented by IJ. The rAV and rECA were measured by counting pixels with IJ. To evaluate the accuracy of the segmentation of vacuoles with PS-IJ, the rAV/rECA ratios calculated from the PS-IJ results were compared with those calculated by the human eye and IJ (HE-IJ). The rECA and rAV in P1 were significantly enlarged and increased, respectively (P < 0.05 for both), but did not change significantly in P2 (P = 0.46, P = 0.21). The rAV/rECA ratios by PS-IJ were significantly correlated (r = 0.90, P < 0.01) with those by HE-IJ. The PS-IJ method can successfully segment vacuoles and measure the rAV and rECA, making it a useful tool for the quantitative description of morphological observations of blood and marrow films. © 2016 Wiley Periodicals, Inc.

  18. OpenComet: An automated tool for comet assay image analysis

    Directory of Open Access Journals (Sweden)

    Benjamin M. Gyori

    2014-01-01

    Full Text Available Reactive species such as free radicals are constantly generated in vivo and DNA is the most important target of oxidative stress. Oxidative DNA damage is used as a predictive biomarker to monitor the risk of development of many diseases. The comet assay is widely used for measuring oxidative DNA damage at a single cell level. The analysis of comet assay output images, however, poses considerable challenges. Commercial software is costly and restrictive, while free software generally requires laborious manual tagging of cells. This paper presents OpenComet, an open-source software tool providing automated analysis of comet assay images. It uses a novel and robust method for finding comets based on geometric shape attributes and segmenting the comet heads through image intensity profile analysis. Due to automation, OpenComet is more accurate, less prone to human bias, and faster than manual analysis. A live analysis functionality also allows users to analyze images captured directly from a microscope. We have validated OpenComet on both alkaline and neutral comet assay images as well as sample images from existing software packages. Our results show that OpenComet achieves high accuracy with significantly reduced analysis time.

  19. Multi-Resolution Wavelet-Transformed Image Analysis of Histological Sections of Breast Carcinomas

    Directory of Open Access Journals (Sweden)

    Hae-Gil Hwang

    2005-01-01

    Full Text Available Multi-resolution images of histological sections of breast cancer tissue were analyzed using texture features of Haar- and Daubechies transform wavelets. Tissue samples analyzed were from ductal regions of the breast and included benign ductal hyperplasia, ductal carcinoma in situ (DCIS), and invasive ductal carcinoma (CA). To assess the correlation between computerized image analysis and visual analysis by a pathologist, we created a two-step classification system based on feature extraction and classification. In the feature extraction step, we extracted texture features from wavelet-transformed images at 10× magnification. In the classification step, we applied two types of classifiers to the extracted features, namely a statistics-based multivariate (discriminant) analysis and a neural network. Using features from second-level Haar transform wavelet images in combination with discriminant analysis, we obtained classification accuracies of 96.67 and 87.78% for the training and testing sets (90 images each), respectively. We conclude that the best classification of carcinomas in histological sections of breast tissue is achieved using texture features from second-level Haar transform wavelet images in a discriminant function.
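
    One plausible reading of the feature-extraction and classification steps is sketched below: energy-type statistics from a second-level Haar decomposition feed a linear discriminant classifier. The images and labels are random placeholders, and the study's exact texture statistics may differ.

      # Second-level Haar wavelet texture features with discriminant analysis.
      import numpy as np
      import pywt
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def haar_level2_features(gray):
          coeffs = pywt.wavedec2(gray.astype(float), 'haar', level=2)
          feats = []
          for detail in coeffs[1:]:          # (cH, cV, cD) for each level
              for band in detail:
                  feats += [float(np.mean(np.abs(band))), float(np.std(band))]
          return np.array(feats)

      rng = np.random.default_rng(4)
      images = rng.random((90, 64, 64))      # stand-ins for tissue images
      y = rng.integers(0, 3, size=90)        # hyperplasia / DCIS / CA labels
      X = np.stack([haar_level2_features(im) for im in images])
      clf = LinearDiscriminantAnalysis().fit(X, y)
      print(clf.score(X, y))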

  20. Morphological observation and analysis using automated image cytometry for the comparison of trypan blue and fluorescence-based viability detection method.

    Science.gov (United States)

    Chan, Leo Li-Ying; Kuksin, Dmitry; Laverty, Daniel J; Saldi, Stephanie; Qiu, Jean

    2015-05-01

    The ability to accurately determine cell viability is essential to performing a well-controlled biological experiment. Typical experiments range from standard cell culturing to advanced cell-based assays that may require cell viability measurement for downstream experiments. The traditional cell viability measurement method has been the trypan blue (TB) exclusion assay. However, since the introduction of fluorescence-based dyes for cell viability measurement using flow or image-based cytometry systems, there have been numerous publications comparing the two detection methods. Although previous studies have shown discrepancies between TB exclusion and fluorescence-based viability measurements, image-based morphological analysis was not performed to examine the viability discrepancies. In this work, we compared TB exclusion and fluorescence-based viability detection methods using image cytometry to observe morphological changes due to the effect of TB on dead cells. Imaging results showed that as the viability of a naturally dying Jurkat cell sample decreased below 70%, many TB-stained cells began to exhibit non-uniform morphological characteristics. Dead cells with these characteristics may be difficult to count under light microscopy, thus generating an artificially higher viability measurement compared to the fluorescence-based method. These morphological observations can potentially explain the differences in viability measurement between the two methods.

  1. Quantification and recognition of parkinsonian gait from monocular video imaging using kernel-based principal component analysis

    Directory of Open Access Journals (Sweden)

    Chen Shih-Wei

    2011-11-01

    Full Text Available Abstract Background The computer-aided identification of specific gait patterns is an important issue in the assessment of Parkinson's disease (PD). In this study, a computer vision-based gait analysis approach is developed to assist the clinical assessments of PD with kernel-based principal component analysis (KPCA). Method Twelve PD patients and twelve healthy adults with no neurological history or motor disorders within the past six months were recruited and separated according to their "Non-PD", "Drug-On", and "Drug-Off" states. The participants were asked to wear light-colored clothing and perform three walking trials through a corridor decorated with a navy curtain at their natural pace. The participants' gait performance during the steady-state walking period was captured by a digital camera for gait analysis. The collected walking image frames were then transformed into binary silhouettes for noise reduction and compression. Using the developed KPCA-based method, the features within the binary silhouettes can be extracted to quantitatively determine the gait cycle time, stride length, walking velocity, and cadence. Results and Discussion The KPCA-based method uses a feature-extraction approach, which was verified to be more effective than traditional image area and principal component analysis (PCA) approaches in classifying "Non-PD" controls and "Drug-Off/On" PD patients. Encouragingly, this method has a high accuracy rate, 80.51%, for recognizing different gaits. Quantitative gait parameters are obtained, and the power spectrums of the patients' gaits are analyzed. We show that the slow and irregular actions of PD patients during walking tend to transfer some of the power from the main lobe frequency to a lower frequency band. Our results indicate the feasibility of using gait performance to evaluate the motor function of patients with PD. Conclusion This KPCA-based method requires only a digital camera and a decorated corridor setup
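
    A minimal sketch of the dimensionality-reduction step, assuming flattened binary silhouettes as input and an RBF kernel (the study's kernel choice and parameters are not reproduced here):

      # Kernel PCA on flattened binary silhouettes as gait features.
      import numpy as np
      from sklearn.decomposition import KernelPCA

      rng = np.random.default_rng(5)
      silhouettes = (rng.random((200, 40, 30)) > 0.5).astype(float)  # stand-in frames
      X = silhouettes.reshape(len(silhouettes), -1)

      kpca = KernelPCA(n_components=5, kernel='rbf', gamma=1e-3)
      features = kpca.fit_transform(X)   # one low-dimensional vector per frame
      print(features.shape)              # (200, 5)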

  2. Intelligent image retrieval based on radiology reports

    Energy Technology Data Exchange (ETDEWEB)

    Gerstmair, Axel; Langer, Mathias; Kotter, Elmar [University Medical Center Freiburg, Department of Diagnostic Radiology, Freiburg (Germany); Daumke, Philipp; Simon, Kai [Averbis GmbH, Freiburg (Germany)

    2012-12-15

    To create an advanced image retrieval and data-mining system based on in-house radiology reports. Radiology reports are semantically analysed using natural language processing (NLP) techniques and stored in a state-of-the-art search engine. Images referenced by sequence and image number in the reports are retrieved from the picture archiving and communication system (PACS) and stored for later viewing. A web-based front end is used as an interface to query for images and show the results with the retrieved images and report text. Using a comprehensive radiological lexicon for the underlying terminology, the search algorithm also finds results for synonyms, abbreviations and related topics. The test set was 108 manually annotated reports analysed by different system configurations. Best results were achieved using full syntactic and semantic analysis with a precision of 0.929 and recall of 0.952. Operating successfully since October 2010, 258,824 reports have been indexed and a total of 405,146 preview images are stored in the database. Data-mining and NLP techniques provide quick access to a vast repository of images and radiology reports with both high precision and recall values. Consequently, the system has become a valuable tool in daily clinical routine, education and research. (orig.)

  3. Intelligent image retrieval based on radiology reports

    International Nuclear Information System (INIS)

    Gerstmair, Axel; Langer, Mathias; Kotter, Elmar; Daumke, Philipp; Simon, Kai

    2012-01-01

    To create an advanced image retrieval and data-mining system based on in-house radiology reports. Radiology reports are semantically analysed using natural language processing (NLP) techniques and stored in a state-of-the-art search engine. Images referenced by sequence and image number in the reports are retrieved from the picture archiving and communication system (PACS) and stored for later viewing. A web-based front end is used as an interface to query for images and show the results with the retrieved images and report text. Using a comprehensive radiological lexicon for the underlying terminology, the search algorithm also finds results for synonyms, abbreviations and related topics. The test set was 108 manually annotated reports analysed by different system configurations. Best results were achieved using full syntactic and semantic analysis with a precision of 0.929 and recall of 0.952. Operating successfully since October 2010, 258,824 reports have been indexed and a total of 405,146 preview images are stored in the database. Data-mining and NLP techniques provide quick access to a vast repository of images and radiology reports with both high precision and recall values. Consequently, the system has become a valuable tool in daily clinical routine, education and research. (orig.)

  4. Validation of a Smartphone Image-Based Dietary Assessment Method for Pregnant Women

    Directory of Open Access Journals (Sweden)

    Amy M. Ashman

    2017-01-01

    Full Text Available Image-based dietary records could lower participant burden associated with traditional prospective methods of dietary assessment. They have been used in children, adolescents and adults, but have not been evaluated in pregnant women. The current study evaluated the relative validity of the DietBytes image-based dietary assessment method for assessing energy and nutrient intakes. Pregnant women collected image-based dietary records (via a smartphone application) of all food, drinks and supplements consumed over three non-consecutive days. Intakes from the image-based method were compared to intakes collected from three 24-h recalls, taken on random days, once per week, in the weeks following the image-based record. Data were analyzed using nutrient analysis software. Agreement between methods was ascertained using Pearson correlations and Bland-Altman plots. Twenty-five women (27 recruited, one withdrew, one incomplete; median age 29 years, 15 primiparas, eight Aboriginal Australians) completed image-based records for analysis. Significant correlations between the two methods were observed for energy, macronutrients and fiber (r = 0.58–0.84, all p < 0.05), and for micronutrients both including (r = 0.47–0.94, all p < 0.05) and excluding (r = 0.40–0.85, all p < 0.05) supplements in the analysis. Bland-Altman plots confirmed acceptable agreement with no systematic bias. The DietBytes method demonstrated acceptable relative validity for assessment of nutrient intakes of pregnant women.
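
    The agreement statistics used above are straightforward to reproduce. The sketch below computes the Pearson correlation and the Bland-Altman bias and limits of agreement on synthetic energy intakes; all numbers are placeholders, not study data.

      # Pearson correlation and Bland-Altman agreement between two methods.
      import numpy as np
      from scipy.stats import pearsonr

      rng = np.random.default_rng(6)
      recall = rng.normal(8500, 1500, size=25)             # kJ/day, 24-h recalls
      image_based = recall + rng.normal(0, 800, size=25)   # kJ/day, image records

      r, p = pearsonr(image_based, recall)
      diff = image_based - recall
      bias = diff.mean()
      loa = 1.96 * diff.std(ddof=1)        # 95% limits of agreement
      print(f"r={r:.2f} (p={p:.3f}); bias={bias:.0f} kJ, "
            f"LoA=({bias - loa:.0f}, {bias + loa:.0f})")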

  5. Evaluating fuzzy operators of an object-based image analysis for detecting landslides and their changes

    Science.gov (United States)

    Feizizadeh, Bakhtiar; Blaschke, Thomas; Tiede, Dirk; Moghaddam, Mohammad Hossein Rezaei

    2017-09-01

    This article presents a method of object-based image analysis (OBIA) for landslide delineation and landslide-related change detection from multi-temporal satellite images. It uses both spatial and spectral information on landslides, through spectral analysis, shape analysis, textural measurements using a gray-level co-occurrence matrix (GLCM), and fuzzy logic membership functionality. Following an initial segmentation step, particular combinations of various information layers were investigated to generate objects. This was achieved by applying multi-resolution segmentation to IRS-1D, SPOT-5, and ALOS satellite imagery in sequential steps of feature selection and object classification, and using slope and flow direction derivatives from a digital elevation model together with topographically-oriented gray level co-occurrence matrices. Fuzzy membership values were calculated for 11 different membership functions using 20 landslide objects from landslide training data. Six fuzzy operators were used for the final classification and the accuracies of the resulting landslide maps were compared. A Fuzzy Synthetic Evaluation (FSE) approach was adapted for validation of the results and for an accuracy assessment using the landslide inventory database. The FSE approach revealed that the AND operator performed best, with an accuracy of 93.87% for 2005 and 94.74% for 2011, closely followed by the MEAN Arithmetic operator, while the OR and AND (*) operators yielded relatively low accuracies. An object-based change detection was then applied to monitor landslide-related changes that occurred in northern Iran between 2005 and 2011. Knowledge rules to detect possible landslide-related changes were developed by evaluating all possible landslide-related objects for both time steps.

  6. Textural Analysis of Fatigue Crack Surfaces: Image Pre-processing

    Directory of Open Access Journals (Sweden)

    H. Lauschmann

    2000-01-01

    Full Text Available For the fatigue crack history reconstitution, new methods of quantitative microfractography are being developed based on image processing and textural analysis. SEM magnifications between micro- and macrofractography are used. Two image pre-processing operations were suggested and shown to prepare the crack surface images for analytical treatment: 1. Normalization is used to transform the image to a stationary form. Compared to the generally used equalization, it conserves the shape of the brightness distribution and preserves the character of the texture. 2. Binarization is used to transform the grayscale image to a system of thick fibres. An objective criterion for the threshold brightness value was found: the value resulting in the maximum number of objects. Both methods were successfully applied together with the following textural analysis.
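
    Both pre-processing operations translate naturally into code. The sketch below takes zero-mean, unit-variance scaling as one reading of the normalization step (an assumption; the authors' exact transform is not specified here) and implements the stated binarization criterion by choosing the threshold that maximizes the number of connected objects.

      # Normalization plus object-count-maximizing binarization.
      import numpy as np
      from scipy import ndimage

      def normalize(gray):
          """Zero mean, unit variance; preserves distribution shape."""
          g = gray.astype(float)
          return (g - g.mean()) / (g.std() + 1e-12)

      def binarize_max_objects(gray, n_steps=64):
          norm = normalize(gray)
          best_thr, best_count = None, -1
          for thr in np.linspace(norm.min(), norm.max(), n_steps)[1:-1]:
              _, count = ndimage.label(norm > thr)   # count connected objects
              if count > best_count:
                  best_thr, best_count = thr, count
          return norm > best_thr, best_thr

      rng = np.random.default_rng(7)
      crack_surface = rng.integers(0, 255, size=(128, 128), dtype=np.uint8)
      mask, thr = binarize_max_objects(crack_surface)
      print(thr, int(mask.sum()))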

  7. Image Processing Tools for Improved Visualization and Analysis of Remotely Sensed Images for Agriculture and Forest Classifications

    OpenAIRE

    SINHA G. R.

    2017-01-01

    This paper suggests Image Processing tools for improved visualization and better analysis of remotely sensed images. There are methods already available in the literature for this purpose, but the most important limitation among them is a lack of robustness. We propose an optimal method for image enhancement using fuzzy-based approaches and a few optimization tools. The segmented images subsequently obtained after de-noising will be classified into distinct information and th...

  8. A Geometric Dictionary Learning Based Approach for Fluorescence Spectroscopy Image Fusion

    Directory of Open Access Journals (Sweden)

    Zhiqin Zhu

    2017-02-01

    Full Text Available In recent years, sparse representation approaches have been integrated into multi-focus image fusion methods, and the fused images of sparse-representation-based methods show great performance. Constructing an informative dictionary is a key step for sparsity-based image fusion. In order to ensure a sufficient number of useful bases for sparse representation during dictionary construction, image patches from all source images are classified into different groups based on geometric similarities. The key information of each image-patch group is extracted by principal component analysis (PCA) to build the dictionary. According to the constructed dictionary, image patches are converted to sparse coefficients by the simultaneous orthogonal matching pursuit (SOMP) algorithm to represent the source multi-focus images. Finally, the sparse coefficients are fused by the Max-L1 fusion rule and inverse-transformed to obtain the fused image. Due to the limitations of the microscope, fluorescence images cannot be fully focused. The proposed multi-focus image fusion solution is applied to the fluorescence imaging area for generating all-in-focus images. The comparison experiment results confirm the feasibility and effectiveness of the proposed multi-focus image fusion solution.

  9. An approach for quantitative image quality analysis for CT

    Science.gov (United States)

    Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe

    2016-03-01

    An objective and standardized approach to assess the image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines, and to identify strengths and weaknesses in different CT imaging technologies in transportation security. To that end we have designed, developed and constructed phantoms that allow for systematic and repeatable measurements of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB-based image analysis tool kit to analyze CT-generated images of phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method that generates a set of components with sparse loadings, in contrast to standard principal component analysis (PCA), and use it in conjunction with the Hotelling T2 statistical analysis method to compare, qualify, and detect faults in the tested systems.

  10. Atlas-based analysis of cardiac shape and function: correction of regional shape bias due to imaging protocol for population studies.

    Science.gov (United States)

    Medrano-Gracia, Pau; Cowan, Brett R; Bluemke, David A; Finn, J Paul; Kadish, Alan H; Lee, Daniel C; Lima, Joao A C; Suinesiaputra, Avan; Young, Alistair A

    2013-09-13

    Cardiovascular imaging studies generate a wealth of data which is typically used only for individual study endpoints. By pooling data from multiple sources, quantitative comparisons can be made of regional wall motion abnormalities between different cohorts, enabling reuse of valuable data. Atlas-based analysis provides precise quantification of shape and motion differences between disease groups and normal subjects. However, subtle shape differences may arise due to differences in imaging protocol between studies. A mathematical model describing regional wall motion and shape was used to establish a coordinate system registered to the cardiac anatomy. The atlas was applied to data contributed to the Cardiac Atlas Project from two independent studies which used different imaging protocols: steady state free precession (SSFP) and gradient recalled echo (GRE) cardiovascular magnetic resonance (CMR). Shape bias due to imaging protocol was corrected using an atlas-based transformation which was generated from a set of 46 volunteers who were imaged with both protocols. Shape bias between GRE and SSFP was regionally variable, and was effectively removed using the atlas-based transformation. Global mass and volume bias was also corrected by this method. Regional shape differences between cohorts were more statistically significant after removing regional artifacts due to imaging protocol bias. Bias arising from imaging protocol can be both global and regional in nature, and is effectively corrected using an atlas-based transformation, enabling direct comparison of regional wall motion abnormalities between cohorts acquired in separate studies.

  11. Improved Sectional Image Analysis Technique for Evaluating Fiber Orientations in Fiber-Reinforced Cement-Based Materials.

    Science.gov (United States)

    Lee, Bang Yeon; Kang, Su-Tae; Yun, Hae-Bum; Kim, Yun Yong

    2016-01-12

    The distribution of fiber orientation is an important factor in determining the mechanical properties of fiber-reinforced concrete. This study proposes a new image analysis technique for improving the evaluation accuracy of fiber orientation distribution in the sectional image of fiber-reinforced concrete. A series of tests on the accuracy of fiber detection and the estimation performance of fiber orientation was performed on artificial fiber images to assess the validity of the proposed technique. The validation test results showed that the proposed technique estimates the distribution of fiber orientation more accurately than the direct measurement of fiber orientation by image analysis.

  12. Acquisition and Post-Processing of Immunohistochemical Images.

    Science.gov (United States)

    Sedgewick, Jerry

    2017-01-01

    Augmentation of digital images is almost always a necessity in order to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly and not within reasonable limits. When procedures are in place for ensuring that originals are archived, and image manipulation steps reported, scientists not only follow good laboratory practices, but avoid ethical issues associated with post-processing, and protect their labs from any future allegations of scientific misconduct. Also, when procedures are in place for correct acquisition of images, the extent of post-processing is minimized or eliminated. These procedures include white balancing (for brightfield images), keeping tonal values within the dynamic range of the detector, frame averaging to eliminate noise (typically in fluorescence imaging), use of the highest bit depth when a choice is available, flatfield correction, and archiving of the image in a non-lossy format (not JPEG). When post-processing is necessary, the commonly used applications for correction include Photoshop and ImageJ, but a free program (GIMP) can also be used. Corrections to images include scaling the bit depth to higher and lower ranges, removing color casts from brightfield images, setting brightness and contrast, reducing color noise, reducing "grainy" noise, conversion of pure colors to grayscale, conversion of grayscale to colors typically used in fluorescence imaging, correction of uneven illumination (flatfield correction), merging color images (fluorescence), and extending the depth of focus. These corrections are explained in step-by-step procedures in the chapter that follows.
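
    Flatfield correction, one of the acquisition-side procedures listed above, can be written compactly using the common dark- and flat-reference formulation; the frames and rescaling convention below are illustrative, not the chapter's exact recipe.

      # Classic flatfield correction with dark- and flat-reference frames.
      import numpy as np

      def flatfield_correct(raw, flat, dark):
          raw, flat, dark = (a.astype(float) for a in (raw, flat, dark))
          gain = flat - dark                       # illumination pattern
          corrected = (raw - dark) / np.clip(gain, 1e-6, None)
          return corrected * gain.mean()           # restore intensity scale

      rng = np.random.default_rng(8)
      dark = rng.normal(5, 1, (256, 256))
      vignette = np.exp(-((np.indices((256, 256)) - 128) ** 2).sum(0) / 2e4)
      flat = 200 * vignette + dark                 # uneven illumination
      raw = 0.5 * (flat - dark) + dark             # uniform scene, uneven light
      print(flatfield_correct(raw, flat, dark).std())  # ~0 after correction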

  13. Noninvasive spectral imaging of skin chromophores based on multiple regression analysis aided by Monte Carlo simulation

    Science.gov (United States)

    Nishidate, Izumi; Wiswadarma, Aditya; Hase, Yota; Tanaka, Noriyuki; Maeda, Takaaki; Niizeki, Kyuichi; Aizu, Yoshihisa

    2011-08-01

    In order to visualize melanin and blood concentrations and oxygen saturation in human skin tissue, a simple imaging technique based on multispectral diffuse reflectance images acquired at six wavelengths (500, 520, 540, 560, 580 and 600nm) was developed. The technique utilizes multiple regression analysis aided by Monte Carlo simulation for diffuse reflectance spectra. Using the absorbance spectrum as a response variable and the extinction coefficients of melanin, oxygenated hemoglobin, and deoxygenated hemoglobin as predictor variables, multiple regression analysis provides regression coefficients. Concentrations of melanin and total blood are then determined from the regression coefficients using conversion vectors that are deduced numerically in advance, while oxygen saturation is obtained directly from the regression coefficients. Experiments with a tissue-like agar gel phantom validated the method. In vivo experiments with human skin of the human hand during upper limb occlusion and of the inner forearm exposed to UV irradiation demonstrated the ability of the method to evaluate physiological reactions of human skin tissue.
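
    In the spirit of the regression step described above, the sketch below fits a measured absorbance spectrum to chromophore extinction spectra by ordinary least squares and derives oxygen saturation from the two hemoglobin coefficients. All spectra and concentrations are synthetic stand-ins, and the Monte Carlo-derived conversion vectors are not modeled.

      # Least-squares decomposition of absorbance into chromophore terms.
      import numpy as np

      wavelengths = np.array([500, 520, 540, 560, 580, 600])
      E = np.column_stack([                              # illustrative spectra
          np.linspace(1.0, 0.5, 6),                      # melanin
          np.array([0.3, 0.5, 0.9, 0.7, 0.9, 0.2]),      # oxygenated hemoglobin
          np.array([0.4, 0.6, 0.8, 0.9, 0.6, 0.3]),      # deoxygenated hemoglobin
      ])
      X = np.column_stack([E, np.ones(len(wavelengths))])  # constant offset term

      true_coef = np.array([0.8, 1.2, 0.4, 0.1])
      absorbance = X @ true_coef + np.random.default_rng(9).normal(0, 0.01, 6)

      coef, *_ = np.linalg.lstsq(X, absorbance, rcond=None)
      oxygen_saturation = coef[1] / (coef[1] + coef[2])  # HbO2 / (HbO2 + Hb)
      print(coef, oxygen_saturation)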

  14. A kernel-based multi-feature image representation for histopathology image classification

    International Nuclear Information System (INIS)

    Moreno, J C; Caicedo, J; Gonzalez, F

    2010-01-01

    This paper presents a novel strategy for building a high-dimensional feature space to represent histopathology image contents. Histogram features, related to colors, textures and edges, are combined in a unique image representation space using kernel functions. This feature space is further enhanced by the application of latent semantic analysis, to model hidden relationships among visual patterns. All that information is included in the new image representation space. Then, support vector machine classifiers are used to assign semantic labels to images. Processing and classification algorithms operate on top of kernel functions, so that the structure of the feature space is completely controlled using similarity measures and a dual representation. The proposed approach has shown a successful performance in a classification task using a dataset with 1,502 real histopathology images in 18 different classes. The results show that our approach for histological image classification obtains an improved average performance of 20.6% when compared to a conventional classification approach based on SVM directly applied to the original kernel.

  15. A KERNEL-BASED MULTI-FEATURE IMAGE REPRESENTATION FOR HISTOPATHOLOGY IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    J Carlos Moreno

    2010-09-01

    Full Text Available This paper presents a novel strategy for building a high-dimensional feature space to represent histopathology image contents. Histogram features, related to colors, textures and edges, are combined in a unique image representation space using kernel functions. This feature space is further enhanced by the application of Latent Semantic Analysis, to model hidden relationships among visual patterns. All that information is included in the new image representation space. Then, Support Vector Machine classifiers are used to assign semantic labels to images. Processing and classification algorithms operate on top of kernel functions, so that the structure of the feature space is completely controlled using similarity measures and a dual representation. The proposed approach has shown a successful performance in a classification task using a dataset with 1,502 real histopathology images in 18 different classes. The results show that our approach for histological image classification obtains an improved average performance of 20.6% when compared to a conventional classification approach based on SVM directly applied to the original kernel.

  16. In vivo quantitative whole-brain diffusion tensor imaging analysis of APP/PS1 transgenic mice using voxel-based and atlas-based methods

    International Nuclear Information System (INIS)

    Qin, Yuan-Yuan; Li, Mu-Wei; Oishi, Kenichi; Zhang, Shun; Zhang, Yan; Zhao, Ling-Yun; Zhu, Wen-Zhen; Lei, Hao

    2013-01-01

    Diffusion tensor imaging (DTI) has been applied to characterize the pathological features of Alzheimer's disease (AD) in a mouse model, although little is known about whether these features are structure specific. Voxel-based analysis (VBA) and atlas-based analysis (ABA) are good complementary tools for whole-brain DTI analysis. The purpose of this study was to identify the spatial localization of disease-related pathology in an AD mouse model. VBA and ABA quantification were used for the whole-brain DTI analysis of nine APP/PS1 mice and wild-type (WT) controls. Multiple scalar measurements, including fractional anisotropy (FA), trace, axial diffusivity (DA), and radial diffusivity (DR), were investigated to capture the various types of pathology. The accuracy of the image transformation applied for VBA and ABA was evaluated by comparing manual and atlas-based structure delineation using kappa statistics. Following the MR examination, the brains of the animals were analyzed by microscopy. Extensive anatomical alterations were identified in APP/PS1 mice, in both the gray matter areas (neocortex, hippocampus, caudate putamen, thalamus, hypothalamus, claustrum, amygdala, and piriform cortex) and the white matter areas (corpus callosum/external capsule, cingulum, septum, internal capsule, fimbria, and optic tract), evidenced by an increase in FA or DA, or both, compared to WT mice (p < 0.05). The histopathological changes in the gray matter areas were confirmed by microscopy studies. DTI did, however, demonstrate significant changes in white matter areas, where the difference was not apparent by qualitative observation of a single-slice histological specimen. This study demonstrated the structure-specific nature of pathological changes in the APP/PS1 mouse, and also showed the feasibility of applying whole-brain analysis methods to the investigation of an AD mouse model. (orig.)

  17. Image analysis to evaluate the browning degree of banana (Musa spp.) peel.

    Science.gov (United States)

    Cho, Jeong-Seok; Lee, Hyeon-Jeong; Park, Jung-Hoon; Sung, Jun-Hyung; Choi, Ji-Young; Moon, Kwang-Deog

    2016-03-01

    Image analysis was applied to examine banana peel browning. The banana samples were divided into 3 treatment groups: no treatment and normal packaging (Cont); CO2 gas exchange packaging (CO); and normal packaging with an ethylene generator (ET). We confirmed that browning of the banana peels developed more quickly in the CO group than in the other groups, based on a sensory test and enzyme assay. The G (green) and CIE L(∗), a(∗), and b(∗) values obtained from the image analysis sharply increased or decreased in the CO group, and these colour values showed high correlation coefficients (>0.9) with the sensory test results. CIE L(∗)a(∗)b(∗) values measured with a colorimeter also showed high correlation coefficients, but comparatively lower than those of the image analysis. Based on this analysis, browning of the banana occurred more quickly under CO2 gas exchange packaging, and image analysis can be used to evaluate the browning of banana peels. Copyright © 2015 Elsevier Ltd. All rights reserved.
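
    Extracting mean CIE L*a*b* values from peel images and correlating them with sensory scores can be sketched as follows; the images and panel scores are random placeholders, not the study's data.

      # Mean L*a*b* colour values per image, correlated with sensory scores.
      import numpy as np
      from skimage.color import rgb2lab
      from scipy.stats import pearsonr

      def mean_lab(rgb_image):
          lab = rgb2lab(rgb_image)                 # float RGB in [0, 1] expected
          return lab.reshape(-1, 3).mean(axis=0)   # mean L*, a*, b*

      rng = np.random.default_rng(10)
      peels = rng.random((9, 32, 32, 3))           # stand-ins for peel photos
      L_star = np.array([mean_lab(p)[0] for p in peels])
      sensory = rng.random(9) * 10                 # panel browning scores
      print(pearsonr(L_star, sensory))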

  18. Classification of Error-Diffused Halftone Images Based on Spectral Regression Kernel Discriminant Analysis

    Directory of Open Access Journals (Sweden)

    Zhigao Zeng

    2016-01-01

    Full Text Available This paper proposes a novel algorithm to solve the challenging problem of classifying error-diffused halftone images. We firstly design the class feature matrices, after extracting the image patches according to their statistics characteristics, to classify the error-diffused halftone images. Then, the spectral regression kernel discriminant analysis is used for feature dimension reduction. The error-diffused halftone images are finally classified using an idea similar to the nearest centroids classifier. As demonstrated by the experimental results, our method is fast and can achieve a high classification accuracy rate with an added benefit of robustness in tackling noise.

  19. Image/video understanding systems based on network-symbolic models

    Science.gov (United States)

    Kuvich, Gary

    2004-03-01

    Vision is a part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, which is an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks. The human brain has been found to emulate similar graph/network models. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. Spatial logic and topology are naturally present in such structures. Mid-level vision processes, like perceptual grouping and separation of figure from ground, are special kinds of network transformations. They convert the primary image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze by higher-level knowledge structures. Higher-level vision phenomena are results of such analysis. Composition of network-symbolic models combines learning, classification, and analogy together with higher-level model-based reasoning into a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into a model-based knowledge representation. Based on such principles, an Image/Video Understanding system can convert images into knowledge models and resolve uncertainty and ambiguity. This allows the creation of intelligent computer vision systems for design and manufacturing.

  20. Morphometric image analysis of giant vesicles

    DEFF Research Database (Denmark)

    Husen, Peter Rasmussen; Arriaga, Laura; Monroy, Francisco

    2012-01-01

    We have developed a strategy to determine lengths and orientations of tie lines in the coexistence region of liquid-ordered and liquid-disordered phases of cholesterol-containing ternary lipid mixtures. The method combines confocal-fluorescence-microscopy image stacks of giant unilamellar vesicles (GUVs), a dedicated 3D-image analysis, and a quantitative analysis based on equilibrium thermodynamic considerations. This approach was tested in GUVs composed of 1,2-dioleoyl-sn-glycero-3-phosphocholine/1,2-dipalmitoyl-sn-glycero-3-phosphocholine/cholesterol. In general, our results show a reasonable agreement with previously reported data obtained by other methods. For example, our computed tie lines were found to be nonhorizontal, indicating a difference in cholesterol content in the coexisting phases. This new, to our knowledge, analytical strategy offers a way to further exploit fluorescence...

  1. Quantitative image analysis of synovial tissue

    NARCIS (Netherlands)

    van der Hall, Pascal O.; Kraan, Maarten C.; Tak, Paul Peter

    2007-01-01

    Quantitative image analysis is a form of imaging that includes microscopic histological quantification, video microscopy, image analysis, and image processing. Hallmarks are the generation of reliable, reproducible, and efficient measurements via strict calibration and step-by-step control of the

  2. Utilizing Minkowski functionals for image analysis: a marching square algorithm

    International Nuclear Information System (INIS)

    Mantz, Hubert; Jacobs, Karin; Mecke, Klaus

    2008-01-01

    Comparing noisy experimental image data with statistical models requires a quantitative analysis of grey-scale images beyond mean values and two-point correlations. A real-space image analysis technique is introduced for digitized grey-scale images, based on Minkowski functionals of thresholded patterns. A novel feature of this marching square algorithm is the use of weighted side lengths for pixels, so that boundary lengths are captured accurately. As examples to illustrate the technique we study surface topologies emerging during the dewetting process of thin films and analyse spinodal decomposition as well as turbulent patterns in chemical reaction–diffusion systems. The grey-scale value corresponds to the height of the film or to the concentration of chemicals, respectively. Comparison with analytic calculations in stochastic geometry models reveals a remarkable agreement of the examples with a Gaussian random field. Thus, a statistical test for non-Gaussian features in experimental data becomes possible with this image analysis technique—even for small image sizes. Implementations of the software used for the analysis are offered for download
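
    For orientation, the three 2D Minkowski functionals of a thresholded pattern (area, boundary length, Euler characteristic) can be approximated with standard scikit-image measures, as below; this stands in for, but does not implement, the weighted marching-square boundary estimate introduced in the paper.

      # Minkowski functionals of thresholded grey-scale patterns.
      import numpy as np
      from skimage.measure import perimeter, euler_number

      def minkowski_functionals(gray, threshold):
          pattern = gray >= threshold
          return (int(pattern.sum()),           # area (pixel count)
                  float(perimeter(pattern)),    # boundary length estimate
                  int(euler_number(pattern)))   # Euler characteristic

      rng = np.random.default_rng(11)
      field = rng.normal(size=(256, 256))       # stand-in grey-scale image
      for thr in (-1.0, 0.0, 1.0):
          print(thr, minkowski_functionals(field, thr))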

  3. Brain medical image diagnosis based on corners with importance-values.

    Science.gov (United States)

    Gao, Linlin; Pan, Haiwei; Li, Qing; Xie, Xiaoqin; Zhang, Zhiqiang; Han, Jinming; Zhai, Xiao

    2017-11-21

    Brain disorders are one of the top causes of human death. Generally, neurologists analyze brain medical images for diagnosis. In the image analysis field, corners are one of the most important features, which makes corner detection and matching studies essential. However, existing corner detection studies do not consider the domain information of the brain. This leads to many useless corners and the loss of significant information. Regarding corner matching, the uncertainty and structure of the brain are not employed in existing methods. Moreover, most corner matching studies are used for 3D image registration. They are inapplicable for 2D brain image diagnosis because of the different mechanisms. To address these problems, we propose a novel corner-based brain medical image classification method. Specifically, we automatically extract multilayer texture images (MTIs) which embody diagnostic information from neurologists. Moreover, we present a corner matching method utilizing the uncertainty and structure of brain medical images and a bipartite graph model. Finally, we propose a similarity calculation method for diagnosis. Brain CT and MRI image sets are utilized to evaluate the proposed method. First, classifiers are trained in N-fold cross-validation analysis to produce the best θ and K. Then independent brain image sets are tested to evaluate the classifiers. Moreover, the classifiers are also compared with advanced brain image classification studies. For the brain CT image set, the proposed classifier outperforms the comparison methods by at least 8% on accuracy and 2.4% on F1-score. Regarding the brain MRI image set, the proposed classifier is superior to the comparison methods by more than 7.3% on accuracy and 4.9% on F1-score. Results also demonstrate that the proposed method is robust to different intensity ranges of brain medical images. In this study, we develop a robust corner-based brain medical image classifier. Specifically, we propose a corner detection

  4. Image encryption based on a delayed fractional-order chaotic logistic system

    Science.gov (United States)

    Wang, Zhen; Huang, Xia; Li, Ning; Song, Xiao-Na

    2012-05-01

    A new image encryption scheme is proposed based on a delayed fractional-order chaotic logistic system. In the process of generating a key stream, the time-varying delay and fractional derivative are embedded in the proposed scheme to improve the security. Such a scheme is described in detail with security analyses including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. Experimental results show that the newly proposed image encryption scheme possesses high security.

  5. Image encryption based on a delayed fractional-order chaotic logistic system

    International Nuclear Information System (INIS)

    Wang Zhen; Li Ning; Huang Xia; Song Xiao-Na

    2012-01-01

    A new image encryption scheme is proposed based on a delayed fractional-order chaotic logistic system. In the process of generating a key stream, the time-varying delay and fractional derivative are embedded in the proposed scheme to improve the security. Such a scheme is described in detail with security analyses including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. Experimental results show that the newly proposed image encryption scheme possesses high security. (general)

  6. Wave-equation Migration Velocity Analysis Using Plane-wave Common Image Gathers

    KAUST Repository

    Guo, Bowen; Schuster, Gerard T.

    2017-01-01

    Wave-equation migration velocity analysis (WEMVA) based on subsurface-offset, angle domain or time-lag common image gathers (CIGs) requires significant computational and memory resources because it computes higher dimensional migration images

  7. Weed mapping in early-season maize fields using object-based analysis of unmanned aerial vehicle (UAV) images.

    Science.gov (United States)

    Peña, José Manuel; Torres-Sánchez, Jorge; de Castro, Ana Isabel; Kelly, Maggi; López-Granados, Francisca

    2013-01-01

    The use of remote imagery captured by unmanned aerial vehicles (UAV) has tremendous potential for designing detailed site-specific weed control treatments in early post-emergence, which was not possible previously with conventional airborne or satellite images. A robust and entirely automatic object-based image analysis (OBIA) procedure was developed on a series of UAV images using a six-band multispectral camera (visible and near-infrared range) with the ultimate objective of generating a weed map in an experimental maize field in Spain. The OBIA procedure combines several contextual, hierarchical and object-based features and consists of three consecutive phases: 1) classification of crop rows by application of a dynamic and auto-adaptive classification approach, 2) discrimination of crops and weeds on the basis of their relative positions with reference to the crop rows, and 3) generation of a weed infestation map in a grid structure. The estimation of weed coverage from the image analysis yielded satisfactory results: estimated versus observed weed densities were related with a coefficient of determination of r² = 0.89 and a root mean square error of 0.02. A map of three categories of weed coverage was produced with 86% overall accuracy. In the experimental field, the area free of weeds was 23% and the area with low weed coverage was 47%, which indicated a high potential for reducing herbicide applications or other weed control operations. The OBIA procedure computes multiple data and statistics derived from the classification outputs, which permits calculation of herbicide requirements and estimation of the overall cost of weed management operations in advance.

  8. In vivo quantitative whole-brain diffusion tensor imaging analysis of APP/PS1 transgenic mice using voxel-based and atlas-based methods

    Energy Technology Data Exchange (ETDEWEB)

    Qin, Yuan-Yuan [Huazhong University of Science and Technology, Department of Radiology, Tongji Hospital, Tongji Medical College, Wuhan (China); The Johns Hopkins University School of Medicine, The Russell H. Morgan Department of Radiology and Radiological Science, Baltimore, MD (United States); Li, Mu-Wei; Oishi, Kenichi [The Johns Hopkins University School of Medicine, The Russell H. Morgan Department of Radiology and Radiological Science, Baltimore, MD (United States); Zhang, Shun; Zhang, Yan; Zhao, Ling-Yun; Zhu, Wen-Zhen [Huazhong University of Science and Technology, Department of Radiology, Tongji Hospital, Tongji Medical College, Wuhan (China); Lei, Hao [Chinese Academy of Sciences, Wuhan Center for Magnetic Resonance, State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Wuhan Institute of Physics and Mathematics, Wuhan (China)

    2013-08-15

    Diffusion tensor imaging (DTI) has been applied to characterize the pathological features of Alzheimer's disease (AD) in mouse models, although little is known about whether these features are structure specific. Voxel-based analysis (VBA) and atlas-based analysis (ABA) are good complementary tools for whole-brain DTI analysis. The purpose of this study was to identify the spatial localization of disease-related pathology in an AD mouse model. VBA and ABA quantification were used for the whole-brain DTI analysis of nine APP/PS1 mice and wild-type (WT) controls. Multiple scalar measurements, including fractional anisotropy (FA), trace, axial diffusivity (DA), and radial diffusivity (DR), were investigated to capture the various types of pathology. The accuracy of the image transformation applied for VBA and ABA was evaluated by comparing manual and atlas-based structure delineation using kappa statistics. Following the MR examination, the brains of the animals were processed for microscopy. Extensive anatomical alterations were identified in APP/PS1 mice, in both the gray matter areas (neocortex, hippocampus, caudate putamen, thalamus, hypothalamus, claustrum, amygdala, and piriform cortex) and the white matter areas (corpus callosum/external capsule, cingulum, septum, internal capsule, fimbria, and optic tract), evidenced by an increase in FA or DA, or both, compared to WT mice (p < 0.05, corrected). The average kappa value between manual and atlas-based structure delineation was approximately 0.8, and there was no significant difference between APP/PS1 and WT mice (p > 0.05). The histopathological changes in the gray matter areas were confirmed by microscopy studies. DTI did, however, demonstrate significant changes in white matter areas, where the difference was not apparent by qualitative observation of a single-slice histological specimen. This study demonstrated the structure-specific nature of pathological changes in APP/PS1 mice, and also showed the

  9. Reliability of an analysis method for measuring diaphragm excursion by means of direct visualization with videofluoroscopy.

    Science.gov (United States)

    Yi, Liu C; Nascimento, Oliver A; Jardim, José R

    2011-06-01

    The purpose of this study was to verify the reproducibility between two different observers of an analysis method for diaphragmatic displacement measurements using direct visualization with videofluoroscopy. 29 mouth-breathing children aged 5 to 12 years, of both genders, were analyzed. The diaphragmatic displacement evaluation was divided into three parts: videofluoroscopy with VHS recording in the standing, sitting, and dorsal positions; digitalization of the images; and measurement of the distance between the diaphragmatic domes during a breathing cycle using Adobe Photoshop 5.5 and Adobe Premiere PRO 6.5 software. The intraclass correlation coefficients showed excellent reproducibility in all positions, with coefficients always above 0.94. The mean measurements of diaphragmatic dome displacement made by the two observers were similar (P > 0.05), suggesting that the method can be applied reliably by different healthcare professionals.

  10. Automated Glacier Mapping using Object Based Image Analysis. Case Studies from Nepal, the European Alps and Norway

    Science.gov (United States)

    Vatle, S. S.

    2015-12-01

    Frequent and up-to-date glacier outlines are needed for many applications of glaciology, not only glacier area change analysis, but also for masks in volume or velocity analysis, for the estimation of water resources and as model input data. Remote sensing offers a good option for creating glacier outlines over large areas, but manual correction is frequently necessary, especially in areas containing supraglacial debris. We show three different workflows for mapping clean ice and debris-covered ice within Object Based Image Analysis (OBIA). By working at the object level as opposed to the pixel level, OBIA facilitates using contextual, spatial and hierarchical information when assigning classes, and additionally permits the handling of multiple data sources. Our first example shows mapping debris-covered ice in the Manaslu Himalaya, Nepal. SAR Coherence data is used in combination with optical and topographic data to classify debris-covered ice, obtaining an accuracy of 91%. Our second example shows using a high-resolution LiDAR derived DEM over the Hohe Tauern National Park in Austria. Breaks in surface morphology are used in creating image objects; debris-covered ice is then classified using a combination of spectral, thermal and topographic properties. Lastly, we show a completely automated workflow for mapping glacier ice in Norway. The NDSI and NIR/SWIR band ratio are used to map clean ice over the entire country but the thresholds are calculated automatically based on a histogram of each image subset. This means that in theory any Landsat scene can be inputted and the clean ice can be automatically extracted. Debris-covered ice can be included semi-automatically using contextual and morphological information.

  11. Development of a Support Vector Machine - Based Image Analysis System for Focal Liver Lesions Classification in Magnetic Resonance Images

    International Nuclear Information System (INIS)

    Gatos, I; Tsantis, S; Kagadis, G; Karamesini, M; Skouroliakou, A

    2015-01-01

    Purpose: The design and implementation of a computer-based image analysis system employing the support vector machine (SVM) classifier for the classification of Focal Liver Lesions (FLLs) on routine non-enhanced, T2-weighted Magnetic Resonance (MR) images. Materials and Methods: The study comprised 92 patients, each of whom underwent MRI on a Magnetom Concerto (Siemens). Typical signs on dynamic contrast-enhanced MRI and biopsies were employed towards a three-class categorization of the 92 cases: 40 benign FLLs, 25 hepatocellular carcinomas (HCC) within cirrhotic liver parenchyma, and 27 liver metastases in non-cirrhotic liver. Prior to FLL classification, an automated lesion segmentation algorithm based on Markov Random Fields was employed to acquire each FLL region of interest. 42 texture features derived from the gray-level histogram, co-occurrence and run-length matrices and 12 morphological features were obtained from each lesion. Stepwise multi-linear regression analysis was utilized to avoid feature redundancy, leading to a feature subset that fed the multiclass SVM classifier designed for lesion classification. SVM system evaluation was performed by means of the leave-one-out method and ROC analysis. Results: Maximum accuracy for all three classes (90.0%) was obtained by means of the radial basis kernel function and three textural features (Inverse-Difference-Moment, Sum-Variance and Long-Run-Emphasis) that describe the lesion's contrast, variability and shape complexity. Sensitivity values for the three classes were 92.5%, 81.5% and 96.2% respectively, whereas specificity values were 94.2%, 95.3% and 95.5%. The AUC value achieved for the selected subset was 0.89 with a 0.81–0.94 confidence interval. Conclusion: The proposed SVM system exhibits promising results that could be utilized as a second opinion tool to the radiologist in order to decrease the time/cost of diagnosis and the need for patients to undergo invasive
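
    A hedged sketch of the classification stage only, using scikit-learn: an RBF-kernel SVM evaluated with leave-one-out, as the record describes. The feature matrix and labels here are random placeholders standing in for the extracted texture/morphology features; variable names are illustrative, not from the paper.

    ```python
    # Sketch: RBF-SVM with leave-one-out evaluation (placeholder features/labels).
    import numpy as np
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Stand-ins: 92 lesions x 3 selected texture features, 3 diagnostic classes
    X = np.random.rand(92, 3)            # e.g. IDM, sum-variance, long-run-emphasis
    y = np.random.randint(0, 3, 92)      # 0=benign, 1=HCC, 2=metastasis (placeholder)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    print(f"leave-one-out accuracy: {acc:.3f}")
    ```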

  12. Evidence-based cancer imaging

    Energy Technology Data Exchange (ETDEWEB)

    Shinagare, Atul B.; Khorasani, Ramin [Dept. of Radiology, Brigham and Women's Hospital, Boston, MA (United States)

    2017-01-15

    With the advances in the field of oncology, imaging is increasingly used in the follow-up of cancer patients, leading to concerns about over-utilization. Therefore, it has become imperative to make imaging more evidence-based, efficient, cost-effective and equitable. This review explores the strategies and tools to make diagnostic imaging more evidence-based, mainly in the context of follow-up of cancer patients.

  13. Analysis of Pregerminated Barley Using Hyperspectral Image Analysis

    DEFF Research Database (Denmark)

    Arngren, Morten; Hansen, Per Waaben; Eriksen, Birger

    2011-01-01

    imaging system in a mathematical modeling framework to identify pregerminated barley at an early stage of approximately 12 h of pregermination. Our model only assigns pregermination as the cause for a single kernel’s lack of germination and is unable to identify dormancy, kernel damage etc. The analysis...... is based on more than 750 Rosalina barley kernels being pregerminated at 8 different durations between 0 and 60 h based on the BRF method. Regerminating the kernels reveals a grouping of the pregerminated kernels into three categories: normal, delayed and limited germination. Our model employs a supervised...

  14. Digital image sequence processing, compression, and analysis

    CERN Document Server

    Reed, Todd R

    2004-01-01

    Introduction (Todd R. Reed); Content-Based Image Sequence Representation (Pedro M. Q. Aguiar, Radu S. Jasinschi, José M. F. Moura, and Charnchai Pluempitiwiriyawej); The Computation of Motion (Christoph Stiller, Sören Kammel, Jan Horn, and Thao Dang); Motion Analysis and Displacement Estimation in the Frequency Domain (Luca Lucchese and Guido Maria Cortelazzo); Quality of Service Assessment in New Generation Wireless Video Communications (Gaetano Giunta); Error Concealment in Digital Video (Francesco G.B. De Natale); Image Sequence Restoration: A Wider Perspective (Anil Kokaram); Video Summarization (Cuneyt M. Taskiran and Edward

  15. Evaluation of Yogurt Microstructure Using Confocal Laser Scanning Microscopy and Image Analysis.

    Science.gov (United States)

    Skytte, Jacob L; Ghita, Ovidiu; Whelan, Paul F; Andersen, Ulf; Møller, Flemming; Dahl, Anders B; Larsen, Rasmus

    2015-06-01

    The microstructure of protein networks in yogurts defines important physical properties of the yogurt and thereby partly its quality. Imaging this protein network using confocal scanning laser microscopy (CSLM) has shown good results, and CSLM has become a standard measuring technique for fermented dairy products. When studying such networks, hundreds of images can be obtained, and here image analysis methods are essential for using the images in statistical analysis. Previously, methods including gray-level co-occurrence matrix analysis and fractal analysis have been used with success. However, a range of other image texture characterization methods exists. These methods describe an image by a frequency distribution of predefined image features (denoted textons). Our contribution is an investigation of the choice of image analysis methods by performing a comparative study of 7 major approaches to image texture description. Here, CSLM images from a yogurt fermentation study are investigated, where production factors including fat content, protein content, heat treatment, and incubation temperature are varied. The descriptors are evaluated through nearest neighbor classification, variance analysis, and cluster analysis. Our investigation suggests that the texton-based descriptors provide a fuller description of the images compared to gray-level co-occurrence matrix descriptors and fractal analysis, while still being as applicable and in some cases as easy to tune.
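
    One of the descriptor families compared in the record, grey-level co-occurrence matrix (GLCM) features, can be sketched with scikit-image. Function names follow scikit-image >= 0.19 (earlier versions spell them greycomatrix/greycoprops); the input image here is a random placeholder for a CSLM micrograph.

    ```python
    # Sketch: GLCM texture features averaged over distances and angles.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(image_8bit):
        glcm = graycomatrix(image_8bit, distances=[1, 2],
                            angles=[0, np.pi / 2], levels=256,
                            symmetric=True, normed=True)
        return {prop: graycoprops(glcm, prop).mean()
                for prop in ("contrast", "homogeneity", "energy", "correlation")}

    img = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in micrograph
    print(glcm_features(img))
    ```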

  16. Fourier analysis: from cloaking to imaging

    Science.gov (United States)

    Wu, Kedi; Cheng, Qiluan; Wang, Guo Ping

    2016-04-01

    Regarding invisibility cloaks as an optical imaging system, we present a Fourier approach to analytically unify both Pendry cloaks and complementary media-based invisibility cloaks into one kind of cloak. By synthesizing different transfer functions, we can construct different devices to realize a series of interesting functions such as hiding objects (events), creating illusions, and performing perfect imaging. In this article, we give a brief review of recent work applying the Fourier approach to the analysis of invisibility cloaks and optical imaging through scattering layers. We show that, to construct devices to conceal an object, no constructive materials with extreme properties are required, making most, if not all, of the above functions realizable by using naturally occurring materials. As examples, we experimentally verify a method of directionally hiding distant objects and creating illusions by using all-dielectric materials, and further demonstrate a non-invasive method of imaging objects completely hidden by scattering layers.

  17. Image preprocessing study on KPCA-based face recognition

    Science.gov (United States)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition is an important biometric identification method; being friendly, natural and convenient, it has attracted more and more attention. This paper investigates a face recognition system comprising face detection, feature extraction and recognition, focusing on how different preprocessing methods in the face detection stage affect the recognition results obtained with kernel principal component analysis (KPCA). We choose the YCbCr color space for skin segmentation and integral projection for face location. Face images are preprocessed using erosion and dilation (the opening and closing operations) and illumination compensation, and then analyzed with the KPCA-based recognition method; experiments were carried out on a typical face database, with the algorithms implemented on the MATLAB platform. Experimental results show that, under certain conditions, the kernel method based on the PCA algorithm extracts nonlinear features that represent the original image information better, and can therefore achieve a higher recognition rate. In the preprocessing stage, different operations on the images lead to different results, and hence to different recognition rates in the recognition stage. At the same time, in kernel principal component analysis, the degree of the polynomial kernel function can affect the recognition result.
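
    A minimal sketch of the recognition stage described above, assuming scikit-learn: KPCA feature extraction with a polynomial kernel (whose degree is the influential parameter the record mentions) followed by nearest-neighbour matching. The data arrays are placeholders for preprocessed, flattened face images.

    ```python
    # Sketch: KPCA (polynomial kernel) + 1-NN face matching on placeholder data.
    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    X_train = np.random.rand(40, 64 * 64)    # flattened, preprocessed face images
    y_train = np.repeat(np.arange(10), 4)    # 10 subjects, 4 images each (placeholder)
    X_test = np.random.rand(10, 64 * 64)

    model = make_pipeline(
        KernelPCA(n_components=30, kernel="poly", degree=3),  # degree matters here
        KNeighborsClassifier(n_neighbors=1),
    )
    model.fit(X_train, y_train)
    print(model.predict(X_test))
    ```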

  18. Machine learning based analysis of cardiovascular images

    NARCIS (Netherlands)

    Wolterink, JM

    2017-01-01

    Cardiovascular diseases (CVDs), including coronary artery disease (CAD) and congenital heart disease (CHD) are the global leading cause of death. Computed tomography (CT) and magnetic resonance imaging (MRI) allow non-invasive imaging of cardiovascular structures. This thesis presents machine

  19. Does thorax EIT image analysis depend on the image reconstruction method?

    Science.gov (United States)

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2013-04-01

    Different methods have been proposed to analyze the images resulting from electrical impedance tomography (EIT) measurements during ventilation. The aim of our study was to examine whether analysis methods developed for back-projection deliver the same results when applied to images reconstructed with other algorithms. Seven mechanically ventilated patients with ARDS were examined by EIT. The thorax contours were determined from routine CT images. EIT raw data were reconstructed offline with (1) filtered back-projection with a circular forward model (BPC); (2) the GREIT reconstruction method with a circular forward model (GREITC) and (3) GREIT with individual thorax geometry (GREITT). Three parameters were calculated on the resulting images: linearity, global ventilation distribution and regional ventilation distribution. The results of the linearity test are 5.03±2.45, 4.66±2.25 and 5.32±2.30 for BPC, GREITC and GREITT, respectively (median±interquartile range). The differences among the three methods are not significant (p = 0.93, Kruskal-Wallis test). The proportions of ventilation in the right lung are 0.58±0.17, 0.59±0.20 and 0.59±0.25 for BPC, GREITC and GREITT, respectively (p = 0.98). The differences in the GI index based on the different reconstruction methods (0.53±0.16, 0.51±0.25 and 0.54±0.16 for BPC, GREITC and GREITT, respectively) are also not significant (p = 0.93). We conclude that the parameters developed for images generated with GREITT are comparable with those for filtered back-projection and GREITC.
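
    For reference, the global inhomogeneity (GI) index used in the record is commonly defined (following Zhao et al.) as the summed absolute deviation of lung-pixel tidal impedance values from their median, normalised by the total; a minimal NumPy sketch, with a placeholder lung mask:

    ```python
    # Sketch: GI index of an EIT tidal image (lung mask assumed given).
    import numpy as np

    def gi_index(tidal_image, lung_mask):
        vals = tidal_image[lung_mask]
        return np.abs(vals - np.median(vals)).sum() / vals.sum()

    tidal = np.random.rand(32, 32)       # stand-in tidal EIT image
    mask = tidal > 0.2                   # placeholder lung region
    print(f"GI = {gi_index(tidal, mask):.3f}")
    ```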

  20. Edge-based correlation image registration for multispectral imaging

    Science.gov (United States)

    Nandy, Prabal [Albuquerque, NM

    2009-11-17

    Registration information for images of a common target obtained from a plurality of different spectral bands can be obtained by combining edge detection and phase correlation. The images are edge-filtered, and pairs of the edge-filtered images are then phase correlated to produce phase correlation images. The registration information can be determined based on these phase correlation images.
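
    The two steps the record combines, edge filtering followed by phase correlation, can be sketched with NumPy/SciPy; this is an illustration of the idea, not the patent's exact algorithm.

    ```python
    # Sketch: edge-filter two bands, phase-correlate them, recover the shift.
    import numpy as np
    from scipy import ndimage

    def edge_phase_correlate(a, b):
        # squared Sobel gradient magnitude as a simple edge filter
        ea = ndimage.sobel(a.astype(float), axis=0)**2 + ndimage.sobel(a.astype(float), axis=1)**2
        eb = ndimage.sobel(b.astype(float), axis=0)**2 + ndimage.sobel(b.astype(float), axis=1)**2
        Fa, Fb = np.fft.fft2(ea), np.fft.fft2(eb)
        cross = Fa * np.conj(Fb)
        corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real  # phase correlation
        dy, dx = np.unravel_index(corr.argmax(), corr.shape)
        # wrap shifts larger than half the image size to negative offsets
        dy = dy - a.shape[0] if dy > a.shape[0] // 2 else dy
        dx = dx - a.shape[1] if dx > a.shape[1] // 2 else dx
        return dy, dx

    band1 = np.random.rand(128, 128)
    band2 = np.roll(band1, (3, -5), axis=(0, 1))   # simulated mis-registration
    print(edge_phase_correlate(band2, band1))      # expected: (3, -5)
    ```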

  1. An index of beam hardening artifact for two-dimensional cone-beam CT tomographic images: establishment and preliminary evaluation

    Science.gov (United States)

    Yuan, Fusong; Lv, Peijun; Yang, Huifang; Wang, Yong; Sun, Yuchun

    2015-07-01

    Objectives: Based on pixel gray-value measurements, to establish a beam-hardening artifact index for cone-beam CT tomographic images and preliminarily evaluate its applicability. Methods: A metal ball and a resin ball, each 5 mm in diameter, were fixed on a light-cured resin base plate, with four extracted molars fixed above, below, and to the left and right of the ball, each at a distance of 10 mm. Cone-beam CT was then used to scan the fixed base plate twice. The same tomographic slice was selected from the two data sets and imported into the Photoshop software. The circle boundary was constructed by determining the center and radius of the circle from the artifact-free image. Grayscale measurement tools were used to measure the gray value G0 at the inner boundary, the gray values G1 and G2 of the artifacts at 1 mm and 20 mm outside the circular boundary, the length L1 of the arc with artifacts on the circular boundary, and the circumference L2. The hardening-artifact index was defined as A = (G1/G0)*0.5 + (G2/G1)*0.4 + (L2/L1)*0.1. The A values of the metal and resin materials were then calculated. Results: The A value of the cobalt-chromium alloy material is 1, and that of the resin material is 0. Conclusion: The A value comprehensively reflects the three factors by which hardening artifacts degrade the sharpness of normal oral tissue in cone-beam CT images: the relative gray value, the decay rate, and the range of the artifacts.
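
    The index transcribes directly into code; the weights and measured quantities are exactly those named in the abstract (any normalisation that maps cobalt-chromium to 1 and resin to 0 is not spelled out in the text and is not assumed here).

    ```python
    def beam_hardening_index(g0, g1, g2, l1, l2):
        """A = (G1/G0)*0.5 + (G2/G1)*0.4 + (L2/L1)*0.1, per the record's definition."""
        return (g1 / g0) * 0.5 + (g2 / g1) * 0.4 + (l2 / l1) * 0.1
    ```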

  2. Hybrid statistics-simulations based method for atom-counting from ADF STEM images

    Energy Technology Data Exchange (ETDEWEB)

    De wael, Annelies, E-mail: annelies.dewael@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); De Backer, Annick [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Jones, Lewys; Nellist, Peter D. [Department of Materials, University of Oxford, Parks Road, OX1 3PH Oxford (United Kingdom); Van Aert, Sandra, E-mail: sandra.vanaert@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium)

    2017-06-15

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. - Highlights: • A hybrid method for atom-counting from ADF STEM images is introduced. • Image simulations are incorporated into a statistical framework in a reliable manner. • Limits of the existing methods for atom-counting are far exceeded. • Reliable counting results from an experimental low dose image are obtained. • Progress towards reliable quantitative analysis of beam-sensitive materials is made.

  3. Secure image encryption algorithm design using a novel chaos based S-Box

    International Nuclear Information System (INIS)

    Çavuşoğlu, Ünal; Kaçar, Sezgin; Pehlivan, Ihsan; Zengin, Ahmet

    2017-01-01

    Highlights: • A new chaotic system is developed for creating the S-Box and the image encryption algorithm. • A chaos-based random number generator is designed with the help of the new chaotic system, and NIST tests are run on the generated random numbers to verify randomness. • A new S-Box design algorithm is developed to create the chaos-based S-Box to be utilized in the encryption algorithm, and performance tests are made. • The newly developed S-Box-based image encryption algorithm is introduced and an image encryption application is carried out. • To show the quality and strength of the encryption process, security analyses are performed and compared with the AES and chaos algorithms. - Abstract: In this study, an encryption algorithm that uses a chaos-based S-Box is developed for secure and fast image encryption. First of all, a new chaotic system is developed for creating the S-Box and the image encryption algorithm. A chaos-based random number generator is designed with the help of the new chaotic system. Then, NIST tests are run on the generated random numbers to verify randomness. A new S-Box design algorithm is developed to create the chaos-based S-Box to be utilized in the encryption algorithm, and performance tests are made. As the next step, the newly developed S-Box-based image encryption algorithm is introduced in detail. Finally, an image encryption application is carried out. To show the quality and strength of the encryption process, security analyses are performed. The proposed algorithm is compared with the AES and chaos algorithms. According to the test results, the proposed image encryption algorithm is secure and fast for image encryption applications.
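
    A hedged sketch of the general chaos-to-S-Box idea: ranking a chaotic orbit yields a bijective substitution table over 0..255. The plain logistic map below is a stand-in; the paper uses its own novel chaotic system and NIST-tested generator, and this sketch is for illustration only.

    ```python
    # Sketch: derive a bijective 8-bit S-Box by ranking a logistic-map orbit.
    import numpy as np

    def chaotic_sbox(x0=0.7131, mu=3.9999, burn_in=1000):
        x = x0
        for _ in range(burn_in):          # discard the transient
            x = mu * x * (1 - x)
        orbit = np.empty(256)
        for i in range(256):
            x = mu * x * (1 - x)
            orbit[i] = x
        return np.argsort(orbit).astype(np.uint8)   # ranking gives a permutation

    sbox = chaotic_sbox()
    assert np.array_equal(np.sort(sbox), np.arange(256))  # bijective, invertible
    inverse_sbox = np.argsort(sbox)                       # for decryption
    ```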

  4. Transfer function analysis of radiographic imaging systems

    International Nuclear Information System (INIS)

    Metz, C.E.; Doi, K.

    1979-01-01

    The theoretical and experimental aspects of the techniques of transfer function analysis used in radiographic imaging systems are reviewed. The mathematical principles of transfer function analysis are developed for linear, shift-invariant imaging systems, for the relation between object and image and for the image due to a sinusoidal plane wave object. The other basic mathematical principle discussed is 'Fourier analysis' and its application to an input function. Other aspects of transfer function analysis included are alternative expressions for the 'optical transfer function' of imaging systems and expressions are derived for both serial and parallel transfer image sub-systems. The applications of transfer function analysis to radiographic imaging systems are discussed in relation to the linearisation of the radiographic imaging system, the object, the geometrical unsharpness, the screen-film system unsharpness, other unsharpness effects and finally noise analysis. It is concluded that extensive theoretical, computer simulation and experimental studies have demonstrated that the techniques of transfer function analysis provide an accurate and reliable means for predicting and understanding the effects of various radiographic imaging system components in most practical diagnostic medical imaging situations. (U.K.)
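
    As a worked example of the transfer-function formalism the review develops, the sketch below computes a modulation transfer function (MTF) as the normalised Fourier magnitude of a sampled line spread function (LSF); the Gaussian LSF is an assumed test input, not data from the review.

    ```python
    # Sketch: MTF from a sampled line spread function.
    import numpy as np

    def mtf_from_lsf(lsf, dx):
        """Return spatial frequencies (cycles/mm) and MTF from a sampled LSF."""
        lsf = lsf / lsf.sum()                 # unit area, so that MTF(0) = 1
        mtf = np.abs(np.fft.rfft(lsf))
        freqs = np.fft.rfftfreq(lsf.size, d=dx)
        return freqs, mtf

    # Assumed test input: Gaussian LSF, sigma = 0.1 mm, sampled every 0.01 mm
    x = np.arange(-2, 2, 0.01)
    lsf = np.exp(-x**2 / (2 * 0.1**2))
    freqs, mtf = mtf_from_lsf(lsf, dx=0.01)
    print(mtf[:5])   # decreases monotonically from 1.0 for a Gaussian LSF
    ```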

  5. Cloud-based processing of multi-spectral imaging data

    Science.gov (United States)

    Bernat, Amir S.; Bolton, Frank J.; Weiser, Reuven; Levitz, David

    2017-03-01

    Multispectral imaging holds great promise as a non-contact tool for the assessment of tissue composition. Performing multispectral imaging on a hand-held mobile device would bring this technology, and with it knowledge, to low-resource settings to provide state-of-the-art classification of tissue health. This modality, however, produces considerably larger data sets than white-light imaging and requires preliminary image analysis before it can be used. The data then need to be analyzed and logged without requiring too many system resources, long computation times, or excessive battery use by the end-point device. Cloud environments were designed to address these problems by allowing end-point devices (smartphones) to offload computationally hard tasks. To this end, we present a method in which a hand-held device based around a smartphone captures a multispectral dataset in a movie file format (mp4), and we compare it to other image formats in size, noise and correctness. We present the cloud configuration used for segmenting images into frames so that they can later be used for further analysis.

  6. Repeated intravenous administration of gadobutrol does not lead to increased signal intensity on unenhanced T1-weighted images - a voxel-based whole brain analysis

    Energy Technology Data Exchange (ETDEWEB)

    Langner, Soenke; Kromrey, Marie-Luise [University Medicine Greifswald, Institute of Diagnostic Radiology and Neuroradiology, Greifswald (Germany); Kuehn, Jens-Peter [University Medicine Greifswald, Institute of Diagnostic Radiology and Neuroradiology, Greifswald (Germany); University Hospital, Carl Gustav Carus University Dresden, Institute for Radiology, Dresden (Germany); Grothe, Matthias [University Medicine Greifswald, Department of Neurology, Greifswald (Germany); Domin, Martin [University Medicine Greifswald, Functional Imaging Unit, Institute of Diagnostic Radiology and Neuroradiology, Greifswald (Germany)

    2017-09-15

    To identify a possible association between repeated intravenous administration of gadobutrol and increased signal intensity in the grey and white matter using voxel-based whole-brain analysis. In this retrospective single-centre study, 217 patients with a clinically isolated syndrome underwent baseline brain magnetic resonance imaging and at least one annual follow-up examination with intravenous administration of 0.1 mmol/kg body weight of gadobutrol. Using the 'Diffeomorphic Anatomical Registration using Exponentiated Lie algebra' (DARTEL) normalisation process, tissue templates for grey matter (GM), white matter (WM), and cerebrospinal fluid (CSF) were calculated, as were GM-CSF and WM-CSF ratios. Voxel-based whole-brain analysis was used to calculate the signal intensity for each voxel in each data set. A paired t-test was applied to test differences from baseline MRI for significance. Voxel-based whole-brain analysis demonstrated no significant changes in the signal intensity of grey and white matter after up to five gadobutrol administrations. There was no significant change in the GM-CSF and WM-CSF ratios. Voxel-based whole-brain analysis did not demonstrate increased signal intensity of GM and WM on unenhanced T1-weighted images after repeated gadobutrol administration. The molecular structure of gadolinium-based contrast agent preparations may be an essential factor in causing SI increase on unenhanced T1-weighted images. (orig.)

  7. Multiple-image encryption via lifting wavelet transform and XOR operation based on compressive ghost imaging scheme

    Science.gov (United States)

    Li, Xianye; Meng, Xiangfeng; Yang, Xiulun; Wang, Yurong; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-03-01

    A multiple-image encryption method via lifting wavelet transform (LWT) and XOR operation is proposed, which is based on a row scanning compressive ghost imaging scheme. In the encryption process, the scrambling operation is implemented for the sparse images transformed by LWT, then the XOR operation is performed on the scrambled images, and the resulting XOR images are compressed in the row scanning compressive ghost imaging, through which the ciphertext images can be detected by bucket detector arrays. During decryption, the participant who possesses his/her correct key-group, can successfully reconstruct the corresponding plaintext image by measurement key regeneration, compression algorithm reconstruction, XOR operation, sparse images recovery, and inverse LWT (iLWT). Theoretical analysis and numerical simulations validate the feasibility of the proposed method.
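
    A loose sketch of the scramble-and-XOR stage only (the compressive ghost-imaging measurement and recovery are beyond a short example). PyWavelets' discrete 2D transform stands in for the paper's lifting wavelet transform, and the seeds play the role of keys; all names here are illustrative assumptions.

    ```python
    # Sketch: sparsify with a wavelet transform, key-scramble, then XOR two images.
    import numpy as np
    import pywt

    def sparsify(img, wavelet="haar"):
        cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)
        return np.block([[cA, cH], [cV, cD]])      # stack subbands into one plane

    def scramble(arr, seed):
        rng = np.random.default_rng(seed)           # key-controlled permutation
        perm = rng.permutation(arr.size)
        return arr.ravel()[perm].reshape(arr.shape)

    img1 = np.random.randint(0, 256, (64, 64)).astype(float)   # stand-in plaintexts
    img2 = np.random.randint(0, 256, (64, 64)).astype(float)
    s1 = scramble(sparsify(img1), seed=1)
    s2 = scramble(sparsify(img2), seed=2)
    # XOR requires integers, so the coefficients are quantised first
    xor_plane = np.bitwise_xor(s1.astype(np.int64), s2.astype(np.int64))
    ```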

  8. Digital Image Encryption Algorithm Design Based on Genetic Hyperchaos

    Directory of Open Access Journals (Sweden)

    Jian Wang

    2016-01-01

    Given that current chaotic image encryption algorithms based on scrambling and diffusion are vulnerable to chosen-plaintext (chosen-ciphertext) attacks in the pixel-position scrambling process, we put forward an image encryption algorithm based on a genetic hyperchaotic system. By introducing plaintext feedback into the scrambling process, the algorithm makes the scrambling effect depend on both the initial chaos sequence and the plaintext itself, achieving an organic fusion of the image features and the encryption algorithm. By introducing a plaintext feedback mechanism into the diffusion process, it improves the plaintext sensitivity of the algorithm and its resistance to chosen-plaintext and chosen-ciphertext attacks, while also making full use of the characteristics of the image information. Finally, experimental simulation and theoretical analysis show that our proposed algorithm can not only effectively resist chosen-plaintext (chosen-ciphertext) attacks, statistical attacks, and information entropy attacks but also effectively improve the efficiency of image encryption, making it a relatively secure and effective approach to image communication.

  9. VOLUME STUDY WITH HIGH DENSITY OF PARTICLES BASED ON CONTOUR AND CORRELATION IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Tatyana Yu. Nikolaeva

    2014-11-01

    The subject of study is techniques for evaluating particle statistics, in particular, methods for processing particle images obtained under coherent illumination. This paper considers the problem of recognizing and statistically accounting for individual images of small scattering particles in an arbitrary section of the volume at high concentrations. For automatic recognition of focused particle images, a special statistical analysis algorithm based on contouring and thresholding was used. By means of the mathematical formalism of scalar diffraction theory, coherent images of the particles formed by an optical system with high numerical aperture were simulated. Numerical testing of the proposed method for different concentrations and distributions of particles in the volume was performed. As a result, distributions of density and mass fraction of the particles were obtained, and the efficiency of the method at different particle concentrations was evaluated. At high concentrations, the effect of coherent superposition of particles from adjacent planes strengthens, which makes it difficult to recognize particle images using the algorithm considered in the paper. In this case, we propose to supplement the method by calculating the cross-correlation function of particle images from adjacent segments of the volume and evaluating the ratio between the height of the correlation peak and the height of the function pedestal for different distribution characters. The method of statistical accounting of particles considered in this paper is of practical importance in the study of volumes with particles of different nature, for example, in problems of biology and oceanography. Effective work in the regime of high concentrations expands the limits of applicability of these methods for practically important cases and helps to optimize determination time of the distribution character and

  10. A hyperspectral image analysis workbench for environmental science applications

    Energy Technology Data Exchange (ETDEWEB)

    Christiansen, J.H.; Zawada, D.G.; Simunich, K.L.; Slater, J.C.

    1992-10-01

    A significant challenge to the information sciences is to provide more powerful and accessible means to exploit the enormous wealth of data available from high-resolution imaging spectrometry, or "hyperspectral" imagery, for analysis, for mapping purposes, and for input to environmental modeling applications. As an initial response to this challenge, Argonne's Advanced Computer Applications Center has developed a workstation-based prototype software workbench which employs AI techniques and other advanced approaches to deduce surface characteristics and extract features from the hyperspectral images. Among its current capabilities, the prototype system can classify pixels by abstract surface type. The classification process employs neural network analysis of inputs which include pixel spectra and a variety of processed image metrics, including image "texture spectra" derived from fractal signatures computed for subimage tiles at each wavelength.

  11. Wavelet-based de-noising algorithm for images acquired with parallel magnetic resonance imaging (MRI)

    International Nuclear Information System (INIS)

    Delakis, Ioannis; Hammad, Omer; Kitney, Richard I

    2007-01-01

    Wavelet-based de-noising has been shown to improve image signal-to-noise ratio in magnetic resonance imaging (MRI) while maintaining spatial resolution. Wavelet-based de-noising techniques typically implemented in MRI require that noise displays uniform spatial distribution. However, images acquired with parallel MRI have spatially varying noise levels. In this work, a new algorithm for filtering images with parallel MRI is presented. The proposed algorithm extracts the edges from the original image and then generates a noise map from the wavelet coefficients at finer scales. The noise map is zeroed at locations where edges have been detected and directional analysis is also used to calculate noise in regions of low-contrast edges that may not have been detected. The new methodology was applied on phantom and brain images and compared with other applicable de-noising techniques. The performance of the proposed algorithm was shown to be comparable with other techniques in central areas of the images, where noise levels are high. In addition, finer details and edges were maintained in peripheral areas, where noise levels are low. The proposed methodology is fully automated and can be applied on final reconstructed images without requiring sensitivity profiles or noise matrices of the receiver coils, therefore making it suitable for implementation in a clinical MRI setting
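
    A minimal sketch of the baseline the record builds on, assuming PyWavelets (pywt): global soft-threshold wavelet de-noising with a universal threshold. The record's actual contribution, an edge-protected, spatially varying noise map for parallel MRI, is simplified away here to a single global threshold.

    ```python
    # Sketch: global soft-threshold wavelet de-noising (uniform-noise baseline).
    import numpy as np
    import pywt

    def wavelet_denoise(image, wavelet="db4", level=3):
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        # robust noise estimate from the finest diagonal subband (Donoho's rule)
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        thr = sigma * np.sqrt(2 * np.log(image.size))     # universal threshold
        denoised = [coeffs[0]] + [
            tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
            for detail in coeffs[1:]
        ]
        return pywt.waverec2(denoised, wavelet)

    noisy = np.random.rand(128, 128) + 0.1 * np.random.randn(128, 128)
    clean = wavelet_denoise(noisy)
    ```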

  12. Image processing and analysis using neural networks for optometry area

    Science.gov (United States)

    Netto, Antonio V.; Ferreira de Oliveira, Maria C.

    2002-11-01

    In this work we describe the framework of a functional system for processing and analyzing images of the human eye acquired by the Hartmann-Shack (HS) technique, in order to extract information for formulating a diagnosis of eye refractive errors (astigmatism, hypermetropia and myopia). The analysis is carried out by an artificial intelligence system based on neural nets, fuzzy logic and classifier combination. The major goal is to establish the basis of a new technology to effectively measure ocular refractive errors, based on methods alternative to those adopted in current patented systems. Moreover, analysis of images acquired with the Hartmann-Shack technique may enable the extraction of additional information on the health of an eye under exam from the same image used to detect refraction errors.

  13. A Reliable Image Watermarking Scheme Based on Redistributed Image Normalization and SVD

    Directory of Open Access Journals (Sweden)

    Musrrat Ali

    2016-01-01

    Digital image watermarking is the process of concealing secret information in a digital image to protect its rightful ownership. Most existing block-based singular value decomposition (SVD) digital watermarking schemes are not robust to geometric distortions, such as rotation by an integer multiple of ninety degrees and image flipping, which change the locations of the pixels but not their intensities. Also, these schemes have used a constant scaling factor that gives the same weight to coefficients of different magnitudes, which results in visible distortion in some regions of the watermarked image. Therefore, to overcome these problems, this paper proposes a novel image watermarking scheme incorporating redistributed image normalization and a variable scaling factor that depends on the magnitude of the coefficient to be embedded. Furthermore, to enhance security and robustness, the watermark is shuffled using a piecewise linear chaotic map before embedding. To investigate the robustness of the scheme, several attacks are applied to seriously distort the watermarked image. Empirical analysis of the results demonstrates the efficiency of the proposed scheme.
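
    A hedged sketch of generic block-SVD embedding: each watermark bit perturbs the largest singular value of an image block. The record's distinguishing steps (redistributed image normalisation, the chaotic shuffle, and the magnitude-dependent scaling factor) are omitted; the fixed alpha below is an illustrative stand-in for that variable factor.

    ```python
    # Sketch: embed one watermark bit per 8x8 block via its largest singular value.
    import numpy as np

    def embed_block(block, bit, alpha=8.0):
        U, S, Vt = np.linalg.svd(block.astype(float), full_matrices=False)
        S[0] += alpha * bit                  # the variable scaling factor would go here
        return U @ np.diag(S) @ Vt

    img = np.random.randint(0, 256, (64, 64)).astype(float)   # stand-in host image
    bits = np.random.randint(0, 2, (8, 8))                    # 8x8 watermark
    marked = img.copy()
    for i in range(8):
        for j in range(8):
            blk = img[8*i:8*(i+1), 8*j:8*(j+1)]
            marked[8*i:8*(i+1), 8*j:8*(j+1)] = embed_block(blk, bits[i, j])
    ```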

  14. Stacker’s Crane Position Fixing Based on Real Time Image Processing and Analysis

    Directory of Open Access Journals (Sweden)

    Kmeid Saad

    2015-06-01

    This study illustrates the usage of stacker cranes and image processing in automated warehouse systems. The aim is to use real-time image processing and analysis for stacker-crane position fixing, in order to use the crane as a pick-up and delivery (P/D) system controlled by a programmable logic controller (PLC) unit.

  15. Telemetry Timing Analysis for Image Reconstruction of Kompsat Spacecraft

    Directory of Open Access Journals (Sweden)

    Jin-Ho Lee

    2000-06-01

    The KOMPSAT (KOrea Multi-Purpose SATellite) has two optical imaging instruments called EOC (Electro-Optical Camera) and OSMI (Ocean Scanning Multispectral Imager). The image data of these instruments are transmitted to the ground station and restored correctly after post-processing with the telemetry data transferred from the KOMPSAT spacecraft. The major timing information of the KOMPSAT is the OBT (On-Board Time), which is formatted by the on-board computer of the spacecraft based on the 1 Hz sync pulse from the GPS receiver. The OBT is transmitted to the ground station with the housekeeping telemetry data of the spacecraft, and is distributed to the instruments via the 1553B data bus for synchronization during imaging and formatting. The timing information contained in the spacecraft telemetry data is directly related to the image data of the instruments, and must be well understood to restore a more accurate image. This paper addresses the timing analysis of the KOMPSAT spacecraft and instruments, including the gyro data timing analysis, for the correct restoration of the EOC and OSMI image data at the ground station.

  16. 3-D Image Analysis of Fluorescent Drug Binding

    Directory of Open Access Journals (Sweden)

    M. Raquel Miquel

    2005-01-01

    Fluorescent ligands provide the means of studying receptors in whole tissues using confocal laser scanning microscopy and have advantages over antibody- or non-fluorescence-based methods. Confocal microscopy provides large volumes of images to be measured. Histogram analysis of 3-D image volumes is proposed as a method of graphically displaying large amounts of volumetric image data to be quickly analyzed and compared. The fluorescent ligand BODIPY FL-prazosin (QAPB) was used in mouse aorta. Histogram analysis reports the amount of ligand-receptor binding under different conditions, and the technique is sensitive enough to detect changes in receptor availability after antagonist incubation or genetic manipulations. QAPB binding was concentration dependent, causing concentration-related rightward shifts in the histogram. In the presence of 10 μM phenoxybenzamine (blocking agent), the QAPB (50 nM) histogram overlaps the autofluorescence curve. The histogram obtained for the α1D-knockout aorta lay to the left of those of the control and α1B-knockout aortas, indicating a reduction in α1D receptors. We have shown, for the first time, that it is possible to graphically display binding of a fluorescent drug to a biological tissue. Although our application is specific to adrenergic receptors, the general method could be applied to any volumetric, fluorescence-image-based assay.

  17. SU-E-J-237: Image Feature Based DRR and Portal Image Registration

    Energy Technology Data Exchange (ETDEWEB)

    Wang, X; Chang, J [NY Weill Cornell Medical Ctr, NY (United States)

    2014-06-01

    Purpose: Two-dimensional (2D) matching of the kV X-ray and digitally reconstructed radiography (DRR) images is an important setup technique for image-guided radiotherapy (IGRT). In our clinics, mutual-information-based methods are used for this purpose on commercial linear accelerators, but they often need manual corrections. This work demonstrates the feasibility of feature-based image transforms for registering kV and DRR images. Methods: The scale invariant feature transform (SIFT) method was implemented to detect matching image details (key points) between the kV and DRR images. These key points represent high image-intensity gradients, and thus scale-invariant features. Due to the poor contrast of our kV images, direct application of the SIFT method yielded many detection errors. To assist the finding of key points, the center coordinates of the kV and DRR images were read from the DICOM header, and the two groups of key points with similar relative positions to their corresponding centers were paired up. Using these points, a rigid transform (with scaling, horizontal and vertical shifts) was estimated. We also artificially introduced vertical and horizontal shifts to test the accuracy of our registration method on anterior-posterior (AP) and lateral pelvic images. Results: The results provided a satisfactory overlay of the transformed kV image onto the DRR image. The introduced vs. detected shifts were fit with a linear regression. In the AP image experiments, linear regression analysis showed a slope of 1.15 and 0.98 with an R2 of 0.89 and 0.99 for the horizontal and vertical shifts, respectively. The results are 1.2 and 1.3 with R2 of 0.72 and 0.82 for the lateral image shifts. Conclusion: This work provided an alternative technique for kV to DRR alignment. Further improvements in the estimation accuracy and image contrast tolerance are underway.
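
    A sketch of the general SIFT-matching pipeline with OpenCV (>= 4.4, where SIFT is in the main module): keypoints on both images, brute-force matching with a ratio test, then a similarity transform estimated from the matched points. The file names are hypothetical, and the paper's DICOM-centre pairing heuristic is not reproduced here.

    ```python
    # Sketch: SIFT keypoint matching + similarity-transform estimation (OpenCV).
    import cv2
    import numpy as np

    kv = cv2.imread("kv.png", cv2.IMREAD_GRAYSCALE)     # hypothetical file names
    drr = cv2.imread("drr.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(kv, None)
    kp2, des2 = sift.detectAndCompute(drr, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]           # Lowe's ratio test

    src = np.float32([kp1[m.queryIdx].pt for m in good])
    dst = np.float32([kp2[m.trainIdx].pt for m in good])
    M, inliers = cv2.estimateAffinePartial2D(src, dst)   # rotation + scale + shifts
    print(M)   # 2x3 matrix; the translation sits in the last column
    ```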

  18. A virtual laboratory for medical image analysis

    NARCIS (Netherlands)

    Olabarriaga, Sílvia D.; Glatard, Tristan; de Boer, Piter T.

    2010-01-01

    This paper presents the design, implementation, and usage of a virtual laboratory for medical image analysis. It is fully based on the Dutch grid, which is part of the Enabling Grids for E-sciencE (EGEE) production infrastructure and driven by the gLite middleware. The adopted service-oriented

  19. Sub-pattern based multi-manifold discriminant analysis for face recognition

    Science.gov (United States)

    Dai, Jiangyan; Guo, Changlu; Zhou, Wei; Shi, Yanjiao; Cong, Lin; Yi, Yugen

    2018-04-01

    In this paper, we present a Sub-pattern based Multi-manifold Discriminant Analysis (SpMMDA) algorithm for face recognition. Unlike the existing Multi-manifold Discriminant Analysis (MMDA) approach, which is based on holistic information of the face image, SpMMDA operates on sub-images partitioned from the original face image and extracts discriminative local features from the sub-images separately. Moreover, the structure information of different sub-images from the same face image is considered in the proposed method with the aim of further improving the recognition performance. Extensive experiments on three standard face databases (Extended YaleB, CMU PIE and AR) demonstrate that the proposed method is effective and outperforms some other sub-pattern based face recognition methods.

  20. Video retrieval by still-image analysis with ImageMiner

    Science.gov (United States)

    Kreyss, Jutta; Roeper, M.; Alshuth, Peter; Hermes, Thorsten; Herzog, Otthein

    1997-01-01

    The large amount of available multimedia information (e.g. videos, audio, images) requires efficient and effective annotation and retrieval methods. As videos start playing a more important role in multimedia, we want to make them available for content-based retrieval. The ImageMiner system, which was developed at the University of Bremen in the AI group, is designed for content-based retrieval of single images by a new combination of techniques and methods from computer vision and artificial intelligence. In our approach to making videos available for retrieval in a large database of videos and images, two steps are necessary: first, the detection and extraction of shots from a video, which is done by a histogram-based method, and second, the composition of the separate frames of a shot into one still image. This is performed by a mosaicing technique. The resulting mosaiced image gives a one-image visualization of the shot and can be analyzed by the ImageMiner system. ImageMiner has been tested on several domains (e.g. landscape images, technical drawings), which cover a wide range of applications.