WorldWideScience

Sample records for imaging experiments based

  1. Developing students’ ideas about lens imaging: teaching experiments with an image-based approach

    Science.gov (United States)

    Grusche, Sascha

    2017-07-01

    Lens imaging is a classic topic in physics education. To guide students from their holistic viewpoint to the scientists’ analytic viewpoint, an image-based approach to lens imaging has recently been proposed. To study the effect of the image-based approach on undergraduate students’ ideas, teaching experiments are performed and evaluated using qualitative content analysis. Some of the students’ ideas have not been reported before, namely those related to blurry lens images, and those developed by the proposed teaching approach. To describe learning pathways systematically, a conception-versus-time coordinate system is introduced, specifying how teaching actions help students advance toward a scientific understanding.

  2. Model-based microwave image reconstruction: simulations and experiments

    International Nuclear Information System (INIS)

    Ciocan, Razvan; Jiang Huabei

    2004-01-01

    We describe an integrated microwave imaging system that can provide spatial maps of the dielectric properties of heterogeneous media from tomographically collected data. The hardware system (800-1200 MHz) was built around a lock-in amplifier with 16 fixed antennas. The reconstruction algorithm was implemented using a Newton iterative method with combined Marquardt-Tikhonov regularizations. System performance was evaluated using heterogeneous media mimicking human breast tissue. A finite element method coupled with the Bayliss and Turkel radiation boundary conditions was applied to compute the electric field distribution in the heterogeneous media of interest. The results show that inclusions embedded in a 76 mm-diameter background medium can be quantitatively reconstructed from both simulated and experimental data. Quantitative analysis of the microwave images obtained suggests that an inclusion of 14 mm in diameter is presently the smallest object that can be fully characterized using experimental data, while objects as small as 10 mm in diameter can be quantitatively resolved with simulated data.
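The damped normal-equation update at the heart of such a Newton iteration with Marquardt-Tikhonov regularization, δ = (JᵀJ + λI)⁻¹Jᵀr, can be sketched on a toy two-parameter problem. This is only an illustration of the regularized step, not the authors' dielectric-property reconstruction; the exponential model, function names, and all values below are hypothetical:

```python
import math

def marquardt_step(J, r, lam):
    """One damped Gauss-Newton update: solve (J^T J + lam*I) d = J^T r
    for a two-parameter model, via Cramer's rule on the 2x2 normal matrix."""
    A = sum(j[0] * j[0] for j in J) + lam
    B = sum(j[0] * j[1] for j in J)
    C = sum(j[1] * j[1] for j in J) + lam
    p = sum(j[0] * ri for j, ri in zip(J, r))
    q = sum(j[1] * ri for j, ri in zip(J, r))
    det = A * C - B * B
    return (C * p - B * q) / det, (A * q - B * p) / det

def fit_exponential(ts, ys, a=1.8, b=0.4, lam=0.1, iters=100):
    """Fit y = a*exp(b*t) by iterated Tikhonov-damped Gauss-Newton updates."""
    for _ in range(iters):
        f = [a * math.exp(b * t) for t in ts]
        r = [y - fi for y, fi in zip(ys, f)]                # residuals
        J = [[math.exp(b * t), a * t * math.exp(b * t)] for t in ts]  # Jacobian
        da, db = marquardt_step(J, r, lam)
        a, b = a + da, b + db
    return a, b
```

With noiseless data the damped iteration still converges to the exact least-squares solution, since the fixed point requires Jᵀr = 0; the damping λ only slows (and stabilizes) the approach.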

  3. Effect of Reading Ability and Internet Experience on Keyword-Based Image Search

    Science.gov (United States)

    Lei, Pei-Lan; Lin, Sunny S. J.; Sun, Chuen-Tsai

    2013-01-01

    Image searches are now crucial for obtaining information, constructing knowledge, and building successful educational outcomes. We investigated how reading ability and Internet experience influence keyword-based image search behaviors and performance. We categorized 58 junior-high-school students into four groups of high/low reading ability and…

  4. Voxel-based Monte Carlo simulation of X-ray imaging and spectroscopy experiments

    International Nuclear Information System (INIS)

    Bottigli, U.; Brunetti, A.; Golosio, B.; Oliva, P.; Stumbo, S.; Vincze, L.; Randaccio, P.; Bleuet, P.; Simionovici, A.; Somogyi, A.

    2004-01-01

    A Monte Carlo code for the simulation of X-ray imaging and spectroscopy experiments in heterogeneous samples is presented. The energy spectrum, polarization and profile of the incident beam can be defined so that X-ray tube systems as well as synchrotron sources can be simulated. The sample is modeled as a 3D regular grid. The chemical composition and density are given at each point of the grid. Photoelectric absorption, fluorescent emission, elastic and inelastic scattering are included in the simulation. The core of the simulation is a fast routine for the calculation of the path lengths of the photon trajectory intersections with the grid voxels. The voxel representation is particularly useful for samples that cannot be well described by a small set of polyhedra. This is the case for most naturally occurring samples. In such cases, voxel-based simulations are much less expensive in terms of computational cost than simulations on a polygonal representation. The efficient scheme used for calculating the path lengths in the voxels and the use of variance reduction techniques make the code suitable for the detailed simulation of complex experiments on generic samples in a relatively short time. Examples of applications to X-ray imaging and spectroscopy experiments are discussed.
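The fast voxel path-length routine the abstract describes is in the spirit of Siddon-style ray tracing: collect the ray's parametric crossings with the grid planes, then assign each inter-crossing segment to the voxel containing its midpoint. A minimal 2D sketch (the function name and the unit grid are assumptions, not the published code):

```python
import math

def ray_voxel_path_lengths(p0, p1, nx, ny, d=1.0):
    """Siddon-style traversal: length of the segment p0->p1 inside each
    voxel of an nx-by-ny grid of spacing d with origin at (0, 0)."""
    (x0, y0), (x1, y1) = p0, p1
    L = math.hypot(x1 - x0, y1 - y0)
    ts = {0.0, 1.0}
    if x1 != x0:                           # crossings with vertical gridlines x = k*d
        for k in range(nx + 1):
            t = (k * d - x0) / (x1 - x0)
            if 0.0 < t < 1.0:
                ts.add(t)
    if y1 != y0:                           # crossings with horizontal gridlines y = k*d
        for k in range(ny + 1):
            t = (k * d - y0) / (y1 - y0)
            if 0.0 < t < 1.0:
                ts.add(t)
    ts = sorted(ts)
    lengths = {}
    for ta, tb in zip(ts, ts[1:]):
        tm = 0.5 * (ta + tb)               # segment midpoint identifies the voxel
        xm, ym = x0 + tm * (x1 - x0), y0 + tm * (y1 - y0)
        i, j = int(xm // d), int(ym // d)
        if 0 <= i < nx and 0 <= j < ny:
            lengths[(i, j)] = lengths.get((i, j), 0.0) + (tb - ta) * L
    return lengths
```

The same idea extends directly to 3D by adding the z-plane crossings; the cost per ray is linear in the number of planes crossed, which is what makes voxel models cheap to trace.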

  5. Voxel-based Monte Carlo simulation of X-ray imaging and spectroscopy experiments

    Energy Technology Data Exchange (ETDEWEB)

    Bottigli, U. [Istituto di Matematica e Fisica dell' Universita di Sassari, via Vienna 2, 07100, Sassari (Italy); Sezione INFN di Cagliari (Italy); Brunetti, A. [Istituto di Matematica e Fisica dell' Universita di Sassari, via Vienna 2, 07100, Sassari (Italy); Golosio, B. [Istituto di Matematica e Fisica dell' Universita di Sassari, via Vienna 2, 07100, Sassari (Italy) and Sezione INFN di Cagliari (Italy)]. E-mail: golosio@uniss.it; Oliva, P. [Istituto di Matematica e Fisica dell' Universita di Sassari, via Vienna 2, 07100, Sassari (Italy); Stumbo, S. [Istituto di Matematica e Fisica dell' Universita di Sassari, via Vienna 2, 07100, Sassari (Italy); Vincze, L. [Department of Chemistry, University of Antwerp (Belgium); Randaccio, P. [Dipartimento di Fisica dell' Universita di Cagliari and Sezione INFN di Cagliari (Italy); Bleuet, P. [European Synchrotron Radiation Facility, Grenoble (France); Simionovici, A. [European Synchrotron Radiation Facility, Grenoble (France); Somogyi, A. [European Synchrotron Radiation Facility, Grenoble (France)

    2004-10-08

    A Monte Carlo code for the simulation of X-ray imaging and spectroscopy experiments in heterogeneous samples is presented. The energy spectrum, polarization and profile of the incident beam can be defined so that X-ray tube systems as well as synchrotron sources can be simulated. The sample is modeled as a 3D regular grid. The chemical composition and density are given at each point of the grid. Photoelectric absorption, fluorescent emission, elastic and inelastic scattering are included in the simulation. The core of the simulation is a fast routine for the calculation of the path lengths of the photon trajectory intersections with the grid voxels. The voxel representation is particularly useful for samples that cannot be well described by a small set of polyhedra. This is the case for most naturally occurring samples. In such cases, voxel-based simulations are much less expensive in terms of computational cost than simulations on a polygonal representation. The efficient scheme used for calculating the path lengths in the voxels and the use of variance reduction techniques make the code suitable for the detailed simulation of complex experiments on generic samples in a relatively short time. Examples of applications to X-ray imaging and spectroscopy experiments are discussed.

  6. Patient-directed Internet-based Medical Image Exchange: Experience from an Initial Multicenter Implementation.

    Science.gov (United States)

    Greco, Giampaolo; Patel, Anand S; Lewis, Sara C; Shi, Wei; Rasul, Rehana; Torosyan, Mary; Erickson, Bradley J; Hiremath, Atheeth; Moskowitz, Alan J; Tellis, Wyatt M; Siegel, Eliot L; Arenson, Ronald L; Mendelson, David S

    2016-02-01

    Inefficient transfer of personal health records among providers negatively impacts quality of health care and increases costs. This multicenter study evaluates the implementation of the first Internet-based image-sharing system that gives patients ownership and control of their imaging exams, including assessment of patient satisfaction. Patients receiving any medical imaging exams in four academic centers were eligible to have images uploaded into an online, Internet-based personal health record. Satisfaction surveys were provided during recruitment with questions on ease of use, privacy and security, and timeliness of access to images. Responses were rated on a five-point scale and compared using logistic regression and McNemar's test. A total of 2562 patients enrolled from July 2012 to August 2013. The median number of imaging exams uploaded per patient was 5. Most commonly, exams were plain X-rays (34.7%), computed tomography (25.7%), and magnetic resonance imaging (16.1%). Of the 502 (19.6%) patient surveys returned, 448 indicated the method of image sharing (Internet, compact discs [CDs], both, other). Nearly all patients (96.5%) responded favorably to having direct access to images, and 78% reported viewing their medical images independently. There was no difference between Internet and CD users in satisfaction with privacy and security or timeliness of access to medical images. A greater percentage of Internet users than CD users reported access without difficulty (88.3% vs. 77.5%). A patient-directed, Internet-based image-sharing system is thus feasible and surpasses the use of CDs with respect to accessibility of imaging exams while generating similar satisfaction with respect to privacy. Copyright © 2015 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  7. Radiometric modeling and calibration of the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) ground based measurement experiment

    Science.gov (United States)

    Tian, Jialin; Smith, William L.; Gazarik, Michael J.

    2008-12-01

    The ultimate remote sensing benefits of high-resolution infrared radiance spectrometers will be realized with their geostationary satellite implementation in the form of imaging spectrometers. This will enable dynamic features of the atmosphere's thermodynamic fields and pollutant and greenhouse gas constituents to be observed for revolutionary improvements in weather forecasts and more accurate air quality and climate predictions. As an important step toward realizing this application objective, the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) Engineering Demonstration Unit (EDU) was successfully developed under the NASA New Millennium Program, 2000-2006. The GIFTS-EDU instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The GIFTS calibration is achieved using internal blackbody calibration references at ambient (260 K) and hot (286 K) temperatures. In this paper, we introduce a refined calibration technique that utilizes Principal Component (PC) analysis to compensate for instrument distortions and artifacts, thereby enhancing the absolute calibration accuracy. This method is applied to data collected during the GIFTS Ground Based Measurement (GBM) experiment, together with simultaneous observations by the accurately calibrated AERI (Atmospheric Emitted Radiance Interferometer), both instruments zenith viewing the sky through the same external scene mirror at ten-minute intervals throughout a cloudless day at Logan, Utah, on September 13, 2006. The accurately calibrated GIFTS radiances are produced using the first four PC scores in the GIFTS-AERI regression model. Temperature and moisture profiles retrieved from the PC-calibrated GIFTS radiances are verified against radiosonde measurements collected throughout the GIFTS sky measurement period. Using the GIFTS GBM calibration model, we compute the calibrated radiances from data
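Independent of the PC-based refinement, the two-point blackbody calibration mentioned above (ambient 260 K and hot 286 K references) reduces, per spectral channel, to solving for a gain and offset that map detector counts to Planck radiances. A minimal sketch with hypothetical count values; only the physical constants and the Planck function are standard:

```python
import math

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck_radiance(nu_cm, T):
    """Blackbody spectral radiance [W m^-2 Hz^-1 sr^-1] at wavenumber
    nu_cm (cm^-1) and temperature T (K)."""
    nu = nu_cm * 100.0 * C                 # convert cm^-1 to Hz
    return 2.0 * H * nu**3 / C**2 / (math.exp(H * nu / (KB * T)) - 1.0)

def two_point_calibration(counts_cold, counts_hot, T_cold, T_hot, nu_cm):
    """Return (gain, offset) such that radiance = gain * counts + offset,
    anchored on the two blackbody reference views."""
    b_cold = planck_radiance(nu_cm, T_cold)
    b_hot = planck_radiance(nu_cm, T_hot)
    gain = (b_hot - b_cold) / (counts_hot - counts_cold)
    offset = b_cold - gain * counts_cold
    return gain, offset
```

A scene view is then calibrated as `gain * scene_counts + offset`; by construction the two reference views map back exactly onto their Planck radiances.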

  8. Clinical Experiences With Onboard Imager KV Images for Linear Accelerator-Based Stereotactic Radiosurgery and Radiotherapy Setup

    International Nuclear Information System (INIS)

    Hong, Linda X.; Chen, Chin C.; Garg, Madhur; Yaparpalvi, Ravindra; Mah, Dennis

    2009-01-01

    Purpose: To report our clinical experiences with on-board imager (OBI) kV image verification for cranial stereotactic radiosurgery (SRS) and radiotherapy (SRT) treatments. Methods and Materials: Between January 2007 and May 2008, 42 patients (57 lesions) were treated with SRS with head frame immobilization and 13 patients (14 lesions) were treated with SRT with face mask immobilization at our institution. No margin was added to the gross tumor for SRS patients, and a 3-mm three-dimensional margin was added to the gross tumor to create the planning target volume for SRT patients. After localizing the patient with stereotactic target positioner (TaPo), orthogonal kV images using OBI were taken and fused to planning digital reconstructed radiographs. Suggested couch shifts in vertical, longitudinal, and lateral directions were recorded. kV images were also taken immediately after treatment for 21 SRS patients and on a weekly basis for 6 SRT patients to assess any intrafraction changes. Results: For SRS patients, 57 pretreatment kV images were evaluated and the suggested shifts were all within 1 mm in any direction (i.e., within the accuracy of image fusion). For SRT patients, the suggested shifts were out of the 3-mm tolerance for 31 of 309 setups. Intrafraction motions were detected in 3 SRT patients. Conclusions: kV imaging provided a useful tool for SRS or SRT setups. For SRS setup with head frame, it provides radiographic confirmation of localization using the stereotactic target positioner. For SRT with mask, a 3-mm margin is adequate and feasible for routine setup when TaPo is combined with kV imaging

  9. First experience with THE AUTOLAP™ SYSTEM: an image-based robotic camera steering device.

    Science.gov (United States)

    Wijsman, Paul J M; Broeders, Ivo A M J; Brenkman, Hylke J; Szold, Amir; Forgione, Antonello; Schreuder, Henk W R; Consten, Esther C J; Draaisma, Werner A; Verheijen, Paul M; Ruurda, Jelle P; Kaufman, Yuval

    2018-05-01

    Robotic camera holders for endoscopic surgery have been available for 20 years, but market penetration is low. The current camera holders are controlled by voice, joystick, eyeball tracking, or head movements, and this type of steering has proven successful, but excessive disturbance of the surgical workflow has blocked widespread introduction. The AutoLap™ system (MST, Israel) uses a radically different steering concept based on image analysis. This may improve acceptance through smooth, interactive, and fast steering. These two studies were conducted to prove safe and efficient performance of the core technology. A total of 66 laparoscopic procedures were performed with the AutoLap™ by nine experienced surgeons in two multi-center studies: 41 cholecystectomies, 13 fundoplications including hiatal hernia repair, 4 endometriosis surgeries, 2 inguinal hernia repairs, and 6 (bilateral) salpingo-oophorectomies. The use of the AutoLap™ system was evaluated in terms of safety, image stability, setup and procedural time, accuracy of image-based movements, and user satisfaction. Surgical procedures were completed with the AutoLap™ system in 64 cases (97%). The mean overall setup time of the AutoLap™ system was 4 min (04:08 ± 0.10). Procedure times were not prolonged by the use of the system when compared to literature averages. The reported user satisfaction was 3.85 and 3.96 on a scale of 1 to 5 in the two studies. More than 90% of the image-based movements were accurate. No system-related adverse events were recorded while using the system. Safe and efficient use of the core technology of the AutoLap™ system was demonstrated, with high image stability and good surgeon satisfaction. The results support further clinical studies that will focus on usability, improved ergonomics, and additional image-based features.

  10. Hepatic trauma: CT findings and considerations based on our experience in emergency diagnostic imaging

    International Nuclear Information System (INIS)

    Romano, Luigia; Giovine, Sabrina; Guidi, Guido; Tortora, Giovanni; Cinque, Teresa; Romano, Stefania

    2004-01-01

    and peritoneal fluid evaluation may be used to make a first differentiation of severity of lesions, but haemodynamic parameters may help the clinician to prefer a conservative treatment. In emergency based hospitals and also in our experience, positive benefits spring from diagnostic accuracy and consequent correct therapeutic management

  11. Hepatic trauma: CT findings and considerations based on our experience in emergency diagnostic imaging

    Energy Technology Data Exchange (ETDEWEB)

    Romano, Luigia; Giovine, Sabrina; Guidi, Guido; Tortora, Giovanni; Cinque, Teresa; Romano, Stefania E-mail: stefromano@libero.it

    2004-04-01

    findings and peritoneal fluid evaluation may be used to make a first differentiation of severity of lesions, but haemodynamic parameters may help the clinician to prefer a conservative treatment. In emergency based hospitals and also in our experience, positive benefits spring from diagnostic accuracy and consequent correct therapeutic management.

  12. GPR Imaging for Deeply Buried Objects: A Comparative Study Based on FDTD Models and Field Experiments

    Science.gov (United States)

    Tilley, Roger; Dowla, Farid; Nekoogar, Faranak; Sadjadpour, Hamid

    2012-01-01

    Conventional use of Ground Penetrating Radar (GPR) is hampered by variations in background environmental conditions, such as water content in soil, resulting in poor repeatability of results over long periods of time when the radar pulse characteristics are kept the same. Target object types might include voids, tunnels, unexploded ordnance, etc. The long-term objective of this work is to develop methods that would extend the use of GPR under various environmental and soil conditions, provided an optimal set of radar parameters (such as frequency, bandwidth, and sensor configuration) is adaptively employed based on the ground conditions. Toward that objective, developing Finite Difference Time Domain (FDTD) GPR models, verified by experimental results, would allow us to develop analytical and experimental techniques to control radar parameters to obtain consistent GPR images with changing ground conditions. Reported here is an attempt at developing 2D and 3D FDTD models of buried targets, verified by two different radar systems capable of operating over different soil conditions. Experimental radar data employed were from a custom-designed high-frequency (200 MHz) multi-static sensor platform capable of producing 3-D images, and a longer-wavelength (25 MHz) COTS radar (Pulse EKKO 100) capable of producing 2-D images. Our results indicate that different types of radar can produce consistent images.
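The FDTD method behind such models can be illustrated with the textbook 1-D Yee update in free space; at a Courant number of 1 the 1-D scheme propagates a pulse exactly one cell per step, which makes causality easy to check. This is generic FDTD, not the authors' 2D/3D GPR models; the grid sizes and Gaussian source below are arbitrary choices:

```python
import math

def fdtd_1d(nz=300, src=100, probe=200, nsteps=200):
    """Minimal 1-D free-space FDTD (staggered Yee grid, normalized units,
    Courant number 1). Returns the Ez time series at a probe cell."""
    ez = [0.0] * nz
    hy = [0.0] * nz
    trace = []
    for n in range(nsteps):
        for k in range(1, nz):             # E-update from curl of H
            ez[k] += hy[k - 1] - hy[k]
        ez[src] += math.exp(-((n - 30) / 8.0) ** 2)   # soft Gaussian source
        for k in range(nz - 1):            # H-update from curl of E
            hy[k] += ez[k] - ez[k + 1]
        trace.append(ez[probe])
    return trace
```

With the source 100 cells from the probe and the source peaking at step 30, the probe sees nothing before step 100 and a pulse of roughly half the source amplitude centered near step 130.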

  13. MR-based full-body preventative cardiovascular and tumor imaging: technique and preliminary experience

    International Nuclear Information System (INIS)

    Goyen, Mathias; Goehde, Susanne C.; Herborn, Christoph U.; Hunold, Peter; Vogt, Florian M.; Gizewski, Elke R.; Lauenstein, Thomas C.; Ajaj, Waleed; Forsting, Michael; Debatin, Joerg F.; Ruehm, Stefan G.

    2004-01-01

    Recent improvements in hardware and software, lack of side effects, as well as diagnostic accuracy make magnetic resonance imaging a natural candidate for preventative imaging. Thus, the purpose of the study was to evaluate the feasibility of a comprehensive 60-min MR-based screening examination in healthy volunteers and a limited number of patients with known target disease. In ten healthy volunteers (7 men, 3 women; mean age, 32.4 years) and five patients (4 men, 1 woman; mean age, 56.2 years) with proven target disease we evaluated the performance of a comprehensive MR screening strategy by combining well-established organ-based MR examination components encompassing the brain, the arterial system, the heart, the lungs, and the colon. All ten volunteers and five patients tolerated the comprehensive MR examination well. The mean in-room time was 63 min. In one volunteer, insufficient colonic cleansing on the part of the volunteer diminished the diagnostic reliability of MR colonography. All remaining components of the comprehensive MR examination were considered diagnostic in all volunteers and patients. In the five patients, the examination revealed the known pathologies [aneurysm of the anterior communicating artery (n=1), renal artery stenosis (n=1), myocardial infarct (n=1), and colonic polyp (n=2)]. The outlined MR screening strategy encompassing the brain, the arterial system, the heart, the lung, and the colon is feasible. Further studies have to show that MR-based screening programs are cost-effective in terms of the life-years saved. (orig.)

  14. Holography Experiments on Optical Imaging.

    Science.gov (United States)

    Bonczak, B.; Dabrowski, J.

    1979-01-01

    Describes experiments intended to produce a better understanding of the holographic method of producing images and optical imaging by other optical systems. Application of holography to teaching physics courses is considered. (Author/SA)

  15. Experiment Design Regularization-Based Hardware/Software Codesign for Real-Time Enhanced Imaging in Uncertain Remote Sensing Environment

    Directory of Open Access Journals (Sweden)

    Castillo Atoche A

    2010-01-01

    A new aggregated Hardware/Software (HW/SW) codesign approach to optimization of the digital signal processing techniques for enhanced imaging with real-world uncertain remote sensing (RS) data based on the concept of descriptive experiment design regularization (DEDR) is addressed. We consider the applications of the developed approach to typical single-look synthetic aperture radar (SAR) imaging systems operating in the real-world uncertain RS scenarios. The software design is aimed at the algorithmic-level decrease of the computational load of the large-scale SAR image enhancement tasks. The innovative algorithmic idea is to incorporate into the DEDR-optimized fixed-point iterative reconstruction/enhancement procedure the convex convergence enforcement regularization via constructing the proper multilevel projections onto convex sets (POCS) in the solution domain. The hardware design is performed via systolic array computing based on a Xilinx Field Programmable Gate Array (FPGA) XC4VSX35-10ff668 and is aimed at implementing the unified DEDR-POCS image enhancement/reconstruction procedures in a computationally efficient multi-level parallel fashion that meets the (near) real-time image processing requirements. Finally, we comment on the simulation results indicative of the significantly increased performance efficiency both in resolution enhancement and in computational complexity reduction metrics gained with the proposed aggregated HW/SW co-design approach.
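The POCS regularization invoked above rests on a simple mechanism: alternately projecting onto convex constraint sets converges to a point in their intersection. A two-set toy example in the plane (a disk and a half-plane, hypothetical stand-ins for the paper's solution-domain constraints):

```python
import math

def project_disk(p, r=1.0):
    """Project p onto the disk |p| <= r."""
    n = math.hypot(p[0], p[1])
    return p if n <= r else (p[0] * r / n, p[1] * r / n)

def project_halfplane(p, a=(1.0, 1.0), b=1.0):
    """Project p onto the half-plane a.p >= b."""
    s = a[0] * p[0] + a[1] * p[1]
    if s >= b:
        return p
    t = (b - s) / (a[0] ** 2 + a[1] ** 2)
    return (p[0] + t * a[0], p[1] + t * a[1])

def pocs(p, iters=500):
    """Alternating projections; converges to a point in the intersection
    of the two convex sets (which here is nonempty)."""
    for _ in range(iters):
        p = project_disk(project_halfplane(p))
    return p
```

Each projection is cheap and local, which is also why POCS steps map well onto the parallel systolic-array hardware the paper targets.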

  16. Image-Based Single Cell Profiling: High-Throughput Processing of Mother Machine Experiments.

    Directory of Open Access Journals (Sweden)

    Christian Carsten Sachs

    Microfluidic lab-on-chip technology combined with live-cell imaging has enabled the observation of single cells in their spatio-temporal context. The mother machine (MM) cultivation system is particularly attractive for the long-term investigation of rod-shaped bacteria since it facilitates continuous cultivation and observation of individual cells over many generations in a highly parallelized manner. To date, the lack of fully automated image analysis software limits the practical applicability of the MM as a phenotypic screening tool. We present an image analysis pipeline for the automated processing of MM time-lapse image stacks. The pipeline supports all analysis steps, i.e., image registration, orientation correction, channel/cell detection, cell tracking, and result visualization. Tailored algorithms account for the specialized MM layout to enable a robust automated analysis. Image data generated in a two-day growth study (≈ 90 GB) is analyzed in ≈ 30 min, with negligible differences in growth rate between automated and manual evaluation. The proposed methods are implemented in the software molyso (MOther machine AnaLYsis SOftware), which provides a new profiling tool for the unbiased analysis of hitherto inaccessible large-scale MM image stacks. molyso is ready-to-use, open-source (BSD-licensed) software; its source code and user manual are available at https://github.com/modsim/molyso.
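A typical downstream use of such single-cell time series is estimating an exponential growth rate; a least-squares fit of log-length against time is one common approach. This sketch is generic and not part of molyso:

```python
import math

def growth_rate(times, lengths):
    """Least-squares slope of ln(length) vs. time, i.e. the exponential
    growth rate mu in L(t) = L0 * exp(mu * t)."""
    logs = [math.log(l) for l in lengths]
    n = len(times)
    mt = sum(times) / n
    ml = sum(logs) / n
    cov = sum((t - mt) * (l - ml) for t, l in zip(times, logs))
    var = sum((t - mt) ** 2 for t in times)
    return cov / var
```

On noiseless synthetic data the fitted slope recovers the true rate exactly; on tracked cells it averages out frame-to-frame segmentation jitter.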

  17. Pictures, images, and recollective experience.

    Science.gov (United States)

    Dewhurst, S A; Conway, M A

    1994-09-01

    Five experiments investigated the influence of picture processing on recollective experience in recognition memory. Subjects studied items that differed in visual or imaginal detail, such as pictures versus words and high-imageability versus low-imageability words, and performed orienting tasks that directed processing either toward a stimulus as a word or toward a stimulus as a picture or image. Standard effects of imageability (e.g., the picture superiority effect and memory advantages following imagery) were obtained only in recognition judgments that featured recollective experience and were eliminated or reversed when recognition was not accompanied by recollective experience. It is proposed that conscious recollective experience in recognition memory is cued by attributes of retrieved memories such as sensory-perceptual attributes and records of cognitive operations performed at encoding.

  18. Experience of modeling relief of impact lunar crater Aitken based on high-resolution orbital images

    Science.gov (United States)

    Mukhametshin, Ch R.; Semenov, A. A.; Shpekin, M. I.

    2018-05-01

    The paper presents the authors' results of modeling the relief of the lunar crater Aitken on the basis of high-resolution orbital images. The images were taken as part of the “Apollo” program in 1971-1972 and delivered to Earth by the crews of “Apollo-15” and “Apollo-17”. The authors used images obtained by the metric and panoramic cameras. The main result is a careful study of the unusual features of Aitken crater on models created by the authors with the Agisoft PhotoScan program. The paper shows what possibilities 3D models open up in the study of the structure of impact craters on the Moon. In particular, for the first time, the authors managed to show the structure of the glacier-like tongue in Aitken crater, which is regarded as one of the promising areas of the Moon for forthcoming expeditions.

  19. Impact of point spread function correction in standardized uptake value quantitation for positron emission tomography images. A study based on phantom experiments and clinical images

    International Nuclear Information System (INIS)

    Nakamura, Akihiro; Tanizaki, Yasuo; Takeuchi, Miho

    2014-01-01

    While point spread function (PSF)-based positron emission tomography (PET) reconstruction effectively improves the spatial resolution and image quality of PET, it may damage its quantitative properties by producing edge artifacts, or Gibbs artifacts, which appear to cause overestimation of regional radioactivity concentration. In this report, we investigated how edge artifacts produce negative effects on the quantitative properties of PET. Experiments with a National Electrical Manufacturers Association (NEMA) phantom, containing radioactive spheres of a variety of sizes and background filled with cold air or water, or radioactive solutions, showed that profiles modified by edge artifacts were reproducible regardless of background μ values, and the effects of edge artifacts increased with increasing sphere-to-background radioactivity concentration ratio (S/B ratio). Profiles were also affected by edge artifacts in complex fashion in response to variable combinations of sphere sizes and S/B ratios; and central single-peak overestimation up to 50% was occasionally noted in relatively small spheres with high S/B ratios. Effects of edge artifacts were obscured in spheres with low S/B ratios. In patient images with a variety of focal lesions, areas of higher radioactivity accumulation were generally more enhanced by edge artifacts, but the effects were variable depending on the size of and accumulation in the lesion. PET images generated using PSF-based reconstruction are therefore not appropriate for the evaluation of SUV. (author)
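The overshoot mechanism here is the classical Gibbs phenomenon: reconstructing a sharp boundary from a truncated frequency representation produces an edge overshoot that does not vanish as more terms are added; for a unit square wave the peak converges to about 1.179 (≈ 9% of the jump). A 1-D illustration of that effect, not the authors' PET reconstruction:

```python
import math

def square_wave_partial_sum(x, n_terms):
    """Truncated Fourier series of a unit square wave (jump -1 -> +1 at x=0)."""
    return (4.0 / math.pi) * sum(
        math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n_terms)
    )

def peak_overshoot(n_terms, n_samples=4000):
    """Maximum of the partial sum near the edge; the Gibbs limit is ~1.179."""
    return max(
        square_wave_partial_sum(0.5 * math.pi * j / n_samples, n_terms)
        for j in range(1, n_samples + 1)
    )
```

The overshoot sits within roughly one resolution element of the edge, which mirrors the report's finding that small high-contrast spheres (edges close together) are the worst case for SUV overestimation.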

  20. [Impact of point spread function correction in standardized uptake value quantitation for positron emission tomography images: a study based on phantom experiments and clinical images].

    Science.gov (United States)

    Nakamura, Akihiro; Tanizaki, Yasuo; Takeuchi, Miho; Ito, Shigeru; Sano, Yoshitaka; Sato, Mayumi; Kanno, Toshihiko; Okada, Hiroyuki; Torizuka, Tatsuo; Nishizawa, Sadahiko

    2014-06-01

    While point spread function (PSF)-based positron emission tomography (PET) reconstruction effectively improves the spatial resolution and image quality of PET, it may damage its quantitative properties by producing edge artifacts, or Gibbs artifacts, which appear to cause overestimation of regional radioactivity concentration. In this report, we investigated how edge artifacts produce negative effects on the quantitative properties of PET. Experiments with a National Electrical Manufacturers Association (NEMA) phantom, containing radioactive spheres of a variety of sizes and background filled with cold air or water, or radioactive solutions, showed that profiles modified by edge artifacts were reproducible regardless of background μ values, and the effects of edge artifacts increased with increasing sphere-to-background radioactivity concentration ratio (S/B ratio). Profiles were also affected by edge artifacts in complex fashion in response to variable combinations of sphere sizes and S/B ratios; and central single-peak overestimation up to 50% was occasionally noted in relatively small spheres with high S/B ratios. Effects of edge artifacts were obscured in spheres with low S/B ratios. In patient images with a variety of focal lesions, areas of higher radioactivity accumulation were generally more enhanced by edge artifacts, but the effects were variable depending on the size of and accumulation in the lesion. PET images generated using PSF-based reconstruction are therefore not appropriate for the evaluation of SUV.

  1. Portable Imaging Polarimeter and Imaging Experiments

    International Nuclear Information System (INIS)

    PHIPPS, GARY S.; KEMME, SHANALYN A.; SWEATT, WILLIAM C.; DESCOUR, M.R.; GARCIA, J.P.; DERENIAK, E.L.

    1999-01-01

    Polarimetry is the method of recording the state of polarization of light. Imaging polarimetry extends this method to recording the spatially resolved state of polarization within a scene. Imaging-polarimetry data have the potential to improve the detection of man-made objects in natural backgrounds. We have constructed a midwave infrared complete imaging polarimeter consisting of a fixed wire-grid polarizer and a rotating form-birefringent retarder. The retardance and the orientation angles of the retarder were optimized to minimize the sensitivity of the instrument to noise in the measurements. The optimal retardance was found to be 132° rather than the typical 90°. The complete imaging polarimeter utilized a liquid-nitrogen-cooled PtSi camera. The fixed wire-grid polarizer was located at the cold stop inside the camera dewar. The complete imaging polarimeter was operated in the 4.42-5 µm spectral range. A series of imaging experiments was performed using as targets a surface of water, an automobile, and an aircraft. Further analysis of the polarization measurements revealed that in all three cases the magnitude of circular polarization was comparable to the noise in the calculated Stokes-vector components
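For a rotating-retarder, fixed-polarizer design like this, each retarder angle θ yields one linear equation in the Stokes components, I(θ) = ½[S0 + S1(cos²2θ + sin²2θ cos δ) + S2 sin2θ cos2θ(1 − cos δ) − S3 sin2θ sin δ], so four well-chosen angles let the full Stokes vector be solved for. A sketch with δ = 132° and angles near the published optimum for that retardance (±15.1°, ±51.7°); the sign convention and function names are assumptions:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def analyzer_row(theta_deg, delta_deg=132.0):
    """Intensity weights on (S0, S1, S2, S3) for a retarder of retardance
    delta at angle theta followed by a fixed horizontal polarizer."""
    d = math.radians(delta_deg)
    t = math.radians(2.0 * theta_deg)
    c, s = math.cos(t), math.sin(t)
    return [0.5,
            0.5 * (c * c + s * s * math.cos(d)),
            0.5 * (s * c * (1.0 - math.cos(d))),
            -0.5 * (s * math.sin(d))]

def recover_stokes(thetas, intensities, delta_deg=132.0):
    """Invert the 4x4 measurement matrix to get the Stokes vector."""
    A = [analyzer_row(t, delta_deg) for t in thetas]
    return solve(A, intensities)
```

The test below is self-consistent: intensities are simulated from a known Stokes vector with the same forward model, then inverted, so it verifies the linear-system machinery rather than any particular sign convention.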

  2. Constructing Image-Based Culture Definitions Using Metaphors: Impact of a Cross-Cultural Immersive Experience

    Science.gov (United States)

    Tuleja, Elizabeth A.

    2017-01-01

    This study provides an approach to teaching and learning in the international business (IB) classroom about cultural values, beliefs, attitudes, and norms through the study of cultural metaphor. The methodology is based on established qualitative methods by using participants' visual pictures and written explanations--representative of their…

  3. Experiments with a novel content-based image retrieval software: can we eliminate classification systems in adolescent idiopathic scoliosis?

    Science.gov (United States)

    Menon, K Venugopal; Kumar, Dinesh; Thomas, Tessamma

    2014-02-01

    Study Design Preliminary evaluation of a new tool. Objective To ascertain whether the newly developed content-based image retrieval (CBIR) software can be used successfully to retrieve images of similar cases of adolescent idiopathic scoliosis (AIS) from a database to help plan treatment without adhering to a classification scheme. Methods Sixty-two operated cases of AIS were entered into the newly developed CBIR database. Five new cases with different curve patterns were used as query images. The images were fed into the CBIR database, which retrieved similar images from the existing cases. These were analyzed by a senior surgeon for conformity to the query image. Results Within the limits of variability set for the query system, all the resultant images conformed to the query image. One case had no similar match in the series. The other four queries retrieved several images that matched the query. No matching case was left out of the series. The postoperative images were then analyzed to check for surgical strategies. Broad guidelines for treatment could be derived from the results. More precise query settings, inclusion of bending films, and a larger database would enhance retrieval accuracy and decision making. Conclusion The CBIR system is an effective tool for accurate documentation and retrieval of scoliosis images. Broad guidelines for surgical strategies can be derived from the postoperative images of the existing cases without adhering to any classification scheme.

  4. Dictionary Based Image Segmentation

    DEFF Research Database (Denmark)

    Dahl, Anders Bjorholm; Dahl, Vedrana Andersen

    2015-01-01

    We propose a method for weakly supervised segmentation of natural images, which may contain both textured and non-textured regions. Our texture representation is based on a dictionary of image patches. To divide an image into separated regions with similar texture we use an implicit level sets...

  5. Overview of 3-year experience with large-scale electronic portal imaging device-based 3-dimensional transit dosimetry

    NARCIS (Netherlands)

    Mijnheer, Ben J.; González, Patrick; Olaciregui-Ruiz, Igor; Rozendaal, Roel A.; van Herk, Marcel; Mans, Anton

    2015-01-01

    To assess the usefulness of electronic portal imaging device (EPID)-based 3-dimensional (3D) transit dosimetry in a radiation therapy department by analyzing a large set of dose verification results. In our institution, routine in vivo dose verification of all treatments is performed by means of 3D

  6. The Track Imaging Cerenkov Experiment

    Science.gov (United States)

    Wissel, S. A.; Byrum, K.; Cunningham, J. D.; Drake, G.; Hays, E.; Horan, D.; Kieda, D.; Kovacs, E.; Macgill, S.; Nodulman, L.

    2012-01-01

    We describe a dedicated cosmic-ray telescope that explores a new method for detecting Cerenkov radiation from high-energy primary cosmic rays and the large particle air showers they induce upon entering the atmosphere. Using a camera comprising 16 multi-anode photomultiplier tubes for a total of 256 pixels, the Track Imaging Cerenkov Experiment (TrICE) resolves substructures in particle air showers with 0.086° resolution. Cerenkov radiation is imaged using a novel two-part optical system in which a Fresnel lens provides a wide-field optical trigger and a mirror system collects delayed light with four times the magnification. TrICE records well-resolved cosmic-ray air showers at rates between 0.01 and 0.1 Hz.

  7. IMAGE DESCRIPTIONS FOR SKETCH BASED IMAGE RETRIEVAL

    OpenAIRE

    SAAVEDRA RONDO, JOSE MANUEL

    2008-01-01

    Due to the massive use of the Internet together with the proliferation of media devices, content-based image retrieval has become an active discipline in computer science. A common content-based image retrieval approach requires that the user give a regular image (e.g., a photo) as a query. However, having a regular image as the query may be a serious problem. Indeed, people commonly use an image retrieval system because they do not have the desired image. An easy alternative way t...

  8. REMOTE SENSING IMAGE QUALITY ASSESSMENT EXPERIMENT WITH POST-PROCESSING

    Directory of Open Access Journals (Sweden)

    W. Jiang

    2018-04-01

    Full Text Available This paper briefly describes a post-processing influence assessment experiment comprising three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are measured, and the digital images serving as image-processing input are produced by this imaging system with those same parameters. The gathered optically sampled images are processed by three digital image processes: calibration pre-processing, lossy compression at different compression ratios, and image post-processing with different kernels. The image quality assessment method used is just-noticeable-difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of the different imaging parameters and of post-processing on image quality can be found. The six JND subjective assessment data sets validate each other. The main conclusions are: image post-processing can improve image quality; it can do so even with lossy compression, although image quality improves less at higher compression ratios than at lower ones; and with our post-processing method, image quality is better when the camera MTF lies within a small range.

  9. Nanoplatform-based molecular imaging

    National Research Council Canada - National Science Library

    Chen, Xiaoyuan

    2011-01-01

    "Nanoplatform-Based Molecular Imaging provides rationale for using nanoparticle-based probes for molecular imaging, then discusses general strategies for this underutilized, yet promising, technology...

  10. Particle Identification with the Cherenkov imaging technique using MPGD based Photon Detectors for Physics at COMPASS Experiment at CERN

    CERN Document Server

    AUTHOR|(CDS)2070220; Martin, Anna

    A novel technology for the detection of single photons has been developed and implemented in 2016 in the Ring Imaging Cherenkov (RICH) detector of the COMPASS Experiment at the CERN SPS. Some basic knowledge in the field of particle identification and RICH counters, Micro Pattern Gaseous Detectors (MPGDs) in general, and their development for photon detection applications is provided. The characteristics of the COMPASS setup are summarized, and the COMPASS RICH-1 detector is described and shown to provide hadron identification in the momentum range between 3 and 55 GeV/c. The THGEM technology is discussed, illustrating their characterization as gas multipliers and as reflective photocathodes: large gains and efficient photodetection are achieved when using optimized parameters and conditions (hole diameter = THGEM thickness = 0.4 mm; hole pitch = 0.8 mm and no rim; CH4-rich gas mixtures and electric field values > 1 kV/cm at the CsI surface). The intense R&D program leading to the choice of a hybrid...

  11. Destination visual image and expectation of experiences

    DEFF Research Database (Denmark)

    Ye, H.; Tussyadiah, Iis

    2011-01-01

    A unique experience is the essence of tourism sought by tourists. The most effective way to communicate the notion of a tourism experience at a destination is to provide visual cues that stimulate the imagination and connect with potential tourists in a personal way. This study aims ... at understanding how a visual image is relevant to the expectation of experiences by deconstructing images of a destination and interpreting visitors' perceptions of these images and the experiences associated with them. The results suggest that tourists with different understandings of desirable experiences found...

  12. Automation in Cytomics: A Modern RDBMS Based Platform for Image Analysis and Management in High-Throughput Screening Experiments

    NARCIS (Netherlands)

    E. Larios (Enrique); Y. Zhang (Ying); K. Yan (Kuan); Z. Di; S. LeDévédec (Sylvia); F.E. Groffen (Fabian); F.J. Verbeek

    2012-01-01

    In cytomics, bookkeeping of the data generated during lab experiments is crucial. The current approach in cytomics is to conduct High-Throughput Screening (HTS) experiments so that cells can be tested under many different experimental conditions. Given the large amount of different

  13. A MEMS-based heating holder for the direct imaging of simultaneous in-situ heating and biasing experiments in scanning/transmission electron microscopes.

    Science.gov (United States)

    Mele, Luigi; Konings, Stan; Dona, Pleun; Evertz, Francis; Mitterbauer, Christoph; Faber, Pybe; Schampers, Ruud; Jinschek, Joerg R

    2016-04-01

    The introduction of scanning/transmission electron microscopes (S/TEM) with sub-Angstrom resolution, together with fast and sensitive detection solutions, supports direct observation of dynamic phenomena in-situ at the atomic scale. In-situ specimen holders play a crucial role here: accurate control of the applied in-situ stimulus on the nanostructure, combined with the overall system stability needed to assure atomic resolution, is paramount for a successful in-situ S/TEM experiment. For these reasons, MEMS-based TEM sample holders are becoming one of the preferred choices, also enabling high-precision measurement of the in-situ parameter for more reproducible data. A newly developed MEMS-based microheater is presented in combination with the new NanoEx™-i/v TEM sample holder. The concept is built on a four-point-probe temperature measurement approach allowing active, accurate local temperature control as well as calorimetry. It is shown to provide high temperature stability up to 1,300°C with a peak temperature of 1,500°C (also working accurately in gaseous environments) and high temperature measurement accuracy, enabling not only in-situ S/TEM imaging experiments but also elemental mapping at elevated temperatures using energy-dispersive X-ray spectroscopy (EDS). Moreover, it has the unique capability of enabling simultaneous heating and biasing experiments. © 2016 Wiley Periodicals, Inc.

  14. Spectroscopic Needs for Imaging Dark Energy Experiments

    International Nuclear Information System (INIS)

    Newman, Jeffrey A.; Abate, Alexandra; Abdalla, Filipe B.; Allam, Sahar; Allen, Steven W.; Ansari, Reza; Bailey, Stephen; Barkhouse, Wayne A.; Beers, Timothy C.; Blanton, Michael R.; Brodwin, Mark; Brownstein, Joel R.; Brunner, Robert J.; Carrasco-Kind, Matias; Cervantes-Cota, Jorge; Chisari, Nora Elisa; Colless, Matthew; Coupon, Jean; Cunha, Carlos E.; Frye, Brenda L.; Gawiser, Eric J.; Gehrels, Neil; Grady, Kevin; Hagen, Alex; Hall, Patrick B.; Hearin, Andrew P.; Hildebrandt, Hendrik; Hirata, Christopher M.; Ho, Shirley; Huterer, Dragan; Ivezic, Zeljko; Kneib, Jean-Paul; Kruk, Jeffrey W.; Lahav, Ofer; Mandelbaum, Rachel; Matthews, Daniel J.; Miquel, Ramon; Moniez, Marc; Moos, H. W.; Moustakas, John; Papovich, Casey; Peacock, John A.; Rhodes, Jason; Ricol, Jean-Stepane; Sadeh, Iftach; Schmidt, Samuel J.; Stern, Daniel K.; Tyson, J. Anthony; Von der Linden, Anja; Wechsler, Risa H.; Wood-Vasey, W. M.; Zentner, A.

    2015-01-01

    Ongoing and near-future imaging-based dark energy experiments are critically dependent upon photometric redshifts (a.k.a. photo-z's): i.e., estimates of the redshifts of objects based only on flux information obtained through broad filters. Higher-quality, lower-scatter photo-z's will result in smaller random errors on cosmological parameters; while systematic errors in photometric redshift estimates, if not constrained, may dominate all other uncertainties from these experiments. The desired optimization and calibration is dependent upon spectroscopic measurements for secure redshift information; this is the key application of galaxy spectroscopy for imaging-based dark energy experiments. Hence, to achieve their full potential, imaging-based experiments will require large sets of objects with spectroscopically-determined redshifts, for two purposes: Training: Objects with known redshift are needed to map out the relationship between object color and z (or, equivalently, to determine empirically-calibrated templates describing the rest-frame spectra of the full range of galaxies, which may be used to predict the color-z relation). The ultimate goal of training is to minimize each moment of the distribution of differences between photometric redshift estimates and the true redshifts of objects, making the relationship between them as tight as possible. The larger and more complete our ''training set'' of spectroscopic redshifts is, the smaller the RMS photo-z errors should be, increasing the constraining power of imaging experiments; Requirements: Spectroscopic redshift measurements for ∼30,000 objects over >∼15 widely-separated regions, each at least ∼20 arcmin in diameter, and reaching the faintest objects used in a given experiment, will likely be necessary if photometric redshifts are to be trained and calibrated with conventional techniques. Larger, more complete samples (i.e., with longer exposure times) can improve photo

  15. ROV Based Underwater Blurred Image Restoration

    Institute of Scientific and Technical Information of China (English)

    LIU Zhishen; DING Tianfu; WANG Gang

    2003-01-01

    In this paper, we present a method of ROV based image processing to restore underwater blurry images from the theory of light and image transmission in the sea. Computer is used to simulate the maximum detection range of the ROV under different water body conditions. The receiving irradiance of the video camera at different detection ranges is also calculated. The ROV's detection performance under different water body conditions is given by simulation. We restore the underwater blurry images using the Wiener filter based on the simulation. The Wiener filter is shown to be a simple useful method for underwater image restoration in the ROV underwater experiments. We also present examples of restored images of an underwater standard target taken by the video camera in these experiments.
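
The Wiener filter step can be sketched as a frequency-domain deconvolution. The point-spread function and the constant k (a stand-in for the noise-to-signal power ratio) below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-3):
    # Frequency-domain Wiener filter: W = H* / (|H|^2 + k), where H is the
    # transfer function of the blur and k approximates the noise-to-signal
    # power ratio of the imaging conditions.
    H = np.fft.fft2(psf, s=blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(W * np.fft.fft2(blurred)))
```

In the paper the filter parameters would follow from the simulated water-body conditions rather than the placeholder values used here.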

  16. Highly efficient router-based readout algorithm for single-photon-avalanche-diode imagers for time-correlated experiments

    Science.gov (United States)

    Cominelli, A.; Acconcia, G.; Caldi, F.; Peronio, P.; Ghioni, M.; Rech, I.

    2018-02-01

    Time-Correlated Single Photon Counting (TCSPC) is a powerful tool that permits recording extremely fast optical signals with a precision down to a few picoseconds. On the other hand, it is recognized as a relatively slow technique, especially when a large time-resolved image is acquired exploiting a single acquisition channel and a scanning system. During the last years, much effort has been made towards the parallelization of many acquisition and conversion chains. In particular, the exploitation of Single-Photon Avalanche Diodes in standard CMOS technology has paved the way to the integration of thousands of independent channels on the same chip. Unfortunately, the presence of a large number of detectors can give rise to a huge rate of events, which can easily lead to the saturation of the transfer rate toward the elaboration unit. As a result, a smart readout approach is needed to guarantee efficient exploitation of the limited transfer bandwidth. We recently introduced a novel readout architecture aimed at maximizing the counting efficiency of the system in typical TCSPC measurements. It features a limited number of high-performance converters, which are shared with a much larger array, while a smart routing logic provides dynamic multiplexing between the two parts. Here we propose a novel routing algorithm, which exploits standard digital gates distributed among a large 32x32 array to ensure a dynamic connection between detectors and external time-measurement circuits.

  17. Point cloud-based survey for cultural heritage – An experience of integrated use of range-based and image-based technology for the San Francesco convent in Monterubbiano

    Directory of Open Access Journals (Sweden)

    A. Meschini

    2014-06-01

    Full Text Available The paper presents some results of a point cloud-based survey carried out through integrated methodologies based on active and passive 3D acquisition techniques for processing 3D models. This experiment is part of a research project, still in progress, conducted by an interdisciplinary team from the School of Architecture and Design of Ascoli Piceno and funded by the University of Camerino. We describe an experiment conducted on the convent of San Francesco located in the town center of Monterubbiano (Marche, Italy). The whole complex has undergone a number of substantial changes since its foundation in 1247. The survey blends range-based 3D data acquired with a TOF laser scanner and image-based 3D data acquired with a UAV equipped with a digital camera, used to survey external parts that are difficult to reach with TLS. The integration of the two acquisition methods aimed to define a workflow suitable for processing dense 3D models from which to generate high-poly and low-poly 3D models useful for describing complex architectures for different purposes, such as photorealistic representations, historical documentation, and risk assessment analyses based on Finite Element Methods (FEM).

  18. An Image-Based Modeling Experience about Social Facilities, Built during the Fascist Period in Middle Italy

    Science.gov (United States)

    Rossi, D.

    2011-09-01

    The main focus of this article is to describe a teaching activity. This experience follows research aimed at testing innovative systems for the formal and digital analysis of architectural buildings. In particular, the field of investigation is analytical drawing. An analytical drawing makes it possible to develop interpretative models of reality; these models are built using photomodeling techniques and are designed to re-write modern and contemporary architecture. The buildings surveyed belong to a cultural period, the Modern Movement, historically placed between the two world wars. The Modern Movement aimed to renew existing architectural principles and to redefine their function. In Italy these principles arrived during the Fascist period. The heritage of public social buildings (case del Balilla, G.I.L., recreation centers...) built during the Fascist period in middle Italy is remarkable for its quantity and, in many cases, for its architectural quality. These buildings are composed of pure shapes: large cubes (gyms) alternate with long rectangular blocks containing offices, creating compositions made of big volumes and high towers. These features are perfectly suited to a surveying process based on photomodeling, where the role of photography is central and where certain, easily distinguishable points must be identified in all pictures, lying on the edges of the volumes or on texture discontinuities. The goal is documentation to preserve and develop buildings and urban complexes of modern architecture, intended to encourage their artistic preservation.

  19. AN IMAGE-BASED MODELING EXPERIENCE ABOUT SOCIAL FACILITIES, BUILT DURING THE FASCIST PERIOD IN MIDDLE ITALY

    Directory of Open Access Journals (Sweden)

    D. Rossi

    2012-09-01

    Full Text Available The main focus of this article is to describe a teaching activity. This experience follows research aimed at testing innovative systems for the formal and digital analysis of architectural buildings. In particular, the field of investigation is analytical drawing. An analytical drawing makes it possible to develop interpretative models of reality; these models are built using photomodeling techniques and are designed to re-write modern and contemporary architecture. The buildings surveyed belong to a cultural period, the Modern Movement, historically placed between the two world wars. The Modern Movement aimed to renew existing architectural principles and to redefine their function. In Italy these principles arrived during the Fascist period. The heritage of public social buildings (case del Balilla, G.I.L., recreation centers...) built during the Fascist period in middle Italy is remarkable for its quantity and, in many cases, for its architectural quality. These buildings are composed of pure shapes: large cubes (gyms) alternate with long rectangular blocks containing offices, creating compositions made of big volumes and high towers. These features are perfectly suited to a surveying process based on photomodeling, where the role of photography is central and where certain, easily distinguishable points must be identified in all pictures, lying on the edges of the volumes or on texture discontinuities. The goal is documentation to preserve and develop buildings and urban complexes of modern architecture, intended to encourage their artistic preservation.

  20. Image based SAR product simulation for analysis

    Science.gov (United States)

    Domik, G.; Leberl, F.

    1987-01-01

    SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new product simulation method is described that also employs a real SAR input image; this can be denoted 'image-based simulation'. Different methods to perform this SAR prediction are presented, and their advantages and disadvantages discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used for verification of the concept: input images from ascending orbits were converted into images from a descending orbit, and the results are compared to the available real imagery to verify that the prediction technique produces meaningful image data.

  1. Evidence-based cancer imaging

    Energy Technology Data Exchange (ETDEWEB)

    Shinagare, Atul B.; Khorasani, Ramin [Dept. of Radiology, Brigham and Women's Hospital, Boston (Korea, Republic of)]

    2017-01-15

    With the advances in the field of oncology, imaging is increasingly used in the follow-up of cancer patients, leading to concerns about over-utilization. Therefore, it has become imperative to make imaging more evidence-based, efficient, cost-effective and equitable. This review explores the strategies and tools to make diagnostic imaging more evidence-based, mainly in the context of follow-up of cancer patients.

  2. Image denoising based on noise detection

    Science.gov (United States)

    Jiang, Yuanxiang; Yuan, Rui; Sun, Yuqiu; Tian, Jinwen

    2018-03-01

    Because any denoising operation applied to the whole image would alter the original information of noise-free pixels, a noise detection algorithm based on fractional calculus is proposed in this paper for denoising. First, the image is convolved to obtain directional gradient masks. Then, the mean gray level is calculated to obtain gradient detection maps, and their logical product yields the noise-position image. Comparisons of visual effect and evaluation parameters after processing show that the detection-based denoising algorithm outperforms traditional methods in both subjective and objective terms.
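
The two-stage detect-then-denoise idea can be sketched as follows. Plain first-order directional differences stand in for the paper's fractional-calculus masks, and the threshold is an illustrative assumption:

```python
import numpy as np

def detect_then_denoise(img, thresh=60):
    # Stage 1: build four directional difference maps and AND them into a
    # noise-position image (a pixel is noise only if it differs strongly
    # from its neighbors in every direction).
    # Stage 2: replace only the flagged pixels with the local 3x3 median,
    # leaving noise-free pixels untouched.
    f = img.astype(float)
    p = np.pad(f, 1, mode='edge')
    h, w = f.shape
    windows = np.stack([p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)])
    noise = np.ones((h, w), dtype=bool)
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nb = p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        noise &= np.abs(f - nb) > thresh   # logical product of direction maps
    out = f.copy()
    out[noise] = np.median(windows, axis=0)[noise]
    return out
```

An isolated impulse is flagged by all four directional maps and replaced, while pixels on smooth regions or clean edges fail the logical product and pass through unchanged.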

  3. Liposomes - experiment of magnetic resonance imaging application

    International Nuclear Information System (INIS)

    Mathieu, S.

    1987-01-01

    Most pharmaceutical research effort with liposomes has involved investigating their use as drug carriers to particular target organs. Recently there has been growing interest in liposomes not only as carriers of drugs but as a tool for the introduction of various substances into the human body. In this study, liposome delivery of nitroxyl radicals as an NMR contrast agent for improved tissue imaging is tested in rats [fr]

  4. PET CT imaging: the Philippine experience

    International Nuclear Information System (INIS)

    Santiago, Jonas Y.

    2011-01-01

    Currently, the most discussed fusion imaging is PET CT. Fusion technology has tremendous potential in diagnostic imaging to detect numerous conditions such as tumors, Alzheimer's disease, dementia and neural disorders. The fusion of PET with CT helps in the localization of molecular abnormalities, thereby increasing diagnostic accuracy and differentiating benign or artefactual lesions from malignant disease. It uses a radiotracer called fluorodeoxyglucose that gives a clear distinction between pathological and physiological uptake. Interest in this technology is increasing, and additional clinical validation is likely to induce more health care providers to invest in combined scanners. It is hoped that, in time, a better appreciation of its advantages over conventional and traditional imaging modalities will be realized. The first PET CT facility in the country was established at the St. Luke's Medical Center in Quezon City in 2008 and has since provided a state-of-the-art imaging modality to its patients here and those from other countries. The paper will present the experience so far gained from its operation, including the measures and steps currently taken by the facility to ensure optimum worker and patient safety. Plans and programs to further enhance the awareness of the Filipino public of this advanced imaging modality for an improved health care delivery system are also discussed briefly. (author)

  5. Image-based occupancy sensor

    Science.gov (United States)

    Polese, Luigi Gentile; Brackney, Larry

    2015-05-19

    An image-based occupancy sensor includes a motion detection module, a people detection module, and a face detection module, each of which receives and processes an image signal to generate, respectively, a motion detection signal, a people detection signal, and a face detection signal. A sensor integration module receives these three detection signals and generates an occupancy signal from them, the occupancy signal indicating vacancy or occupancy, with an occupancy indication specifying that one or more people are detected within the monitored volume.
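
The sensor-integration step can be sketched in a few lines; the patent abstract leaves the exact fusion rule open, so a simple OR of the three detection signals is assumed here:

```python
from dataclasses import dataclass

@dataclass
class Detections:
    # One boolean per detection module described in the record above
    motion: bool
    people: bool
    faces: bool

def occupancy_signal(d: Detections) -> bool:
    # Sensor-integration module (assumed rule): any positive detection
    # marks the monitored volume as occupied; all-negative means vacant.
    return d.motion or d.people or d.faces
```

A real implementation would weight or debounce the three signals; the OR rule is only the simplest plausible fusion.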

  6. The Galileo Solid-State Imaging experiment

    Science.gov (United States)

    Belton, M.J.S.; Klaasen, K.P.; Clary, M.C.; Anderson, J.L.; Anger, C.D.; Carr, M.H.; Chapman, C.R.; Davies, M.E.; Greeley, R.; Anderson, D.; Bolef, L.K.; Townsend, T.E.; Greenberg, R.; Head, J. W.; Neukum, G.; Pilcher, C.B.; Veverka, J.; Gierasch, P.J.; Fanale, F.P.; Ingersoll, A.P.; Masursky, H.; Morrison, D.; Pollack, James B.

    1992-01-01

    The Solid State Imaging (SSI) experiment on the Galileo Orbiter spacecraft utilizes a high-resolution (1500 mm focal length) television camera with an 800 × 800 pixel virtual-phase, charge-coupled detector. It is designed to return images of Jupiter and its satellites that are characterized by a combination of sensitivity levels, spatial resolution, geometric fidelity, and spectral range unmatched by imaging data obtained previously. The spectral range extends from approximately 375 to 1100 nm, and only in the near-ultraviolet region (below about 350 nm) is the spectral coverage reduced from previous missions. The camera is approximately 100 times more sensitive than those used in the Voyager mission and, because of the nature of the satellite encounters, will produce images with approximately 100 times the ground resolution (i.e., about 50 m lp-1) on the Galilean satellites. We describe aspects of the detector including its sensitivity to energetic particle radiation and how the requirements for a large full-well capacity and long-term stability in operating voltages led to the choice of the virtual-phase chip. The F/8.5 camera system can reach point sources of V(mag) ≈ 11 with S/N ≈ 10, and extended sources with surface brightness as low as 20 kR, in its highest gain state and longest exposure mode. We describe the performance of the system as determined by ground calibration and the improvements that have been made to the telescope (the same basic catadioptric design used in Mariner 10 and the Voyager high-resolution cameras) to reduce the scattered light reaching the detector. The images are linearly digitized 8 bits deep and, after flat-fielding, are cosmetically clean. Information-'preserving' and 'non-preserving' on-board data compression capabilities are outlined. A special 'summation' mode, designed for use deep in the Jovian radiation belts, near Io, is also described. The detector is 'preflashed' before each exposure to ensure photometric linearity.

  7. Microprocessor based image processing system

    International Nuclear Information System (INIS)

    Mirza, M.I.; Siddiqui, M.N.; Rangoonwala, A.

    1987-01-01

    Rapid developments in the production of integrated circuits and the introduction of sophisticated 8-, 16- and now 32-bit microprocessor-based computers have set new trends in computer applications. Nowadays, by investing much less money, users can make optimal use of smaller systems custom-tailored to their requirements. During the past decade there have been great advances in the field of computer graphics, and consequently 'Image Processing' has emerged as a separate, independent field. Image processing is used in a number of disciplines. In the medical sciences, it is used to construct pseudo-colour images from computer-aided tomography (CAT) or positron emission tomography (PET) scanners. Art, advertising and publishing people use pseudo-colours in pursuit of more effective graphics. Structural engineers use image processing to examine weld X-rays in search of imperfections. Photographers use image processing for various enhancements that are difficult to achieve in a conventional darkroom. (author)

  8. Evidence based medical imaging (EBMI)

    International Nuclear Information System (INIS)

    Smith, Tony

    2008-01-01

    Background: The evidence based paradigm was first described about a decade ago. Previous authors have described a framework for the application of evidence based medicine which can be readily adapted to medical imaging practice. Purpose: This paper promotes the application of the evidence based framework in both the justification of the choice of examination type and the optimisation of the imaging technique used. Methods: The framework includes five integrated steps: framing a concise clinical question; searching for evidence to answer that question; critically appraising the evidence; applying the evidence in clinical practice; and, evaluating the use of revised practices. Results: This paper illustrates the use of the evidence based framework in medical imaging (that is, evidence based medical imaging) using the examples of two clinically relevant case studies. In doing so, a range of information technology and other resources available to medical imaging practitioners are identified with the intention of encouraging the application of the evidence based paradigm in radiography and radiology. Conclusion: There is a perceived need for radiographers and radiologists to make greater use of valid research evidence from the literature to inform their clinical practice and thus provide better quality services

  9. HDR Pathological Image Enhancement Based on Improved Bias Field Correction and Guided Image Filter

    Directory of Open Access Journals (Sweden)

    Qingjiao Sun

    2016-01-01

    Pathological image enhancement is a significant topic in the field of pathological image processing. This paper proposes a high dynamic range (HDR) pathological image enhancement method based on improved bias field correction and a guided image filter (GIF). First, preprocessing consisting of stain normalization and wavelet denoising is performed on the Haematoxylin and Eosin (H and E) stained pathological image. Then, an improved bias field correction model is developed to enhance the influence of light on the high-frequency part of the image and to correct its intensity inhomogeneity and detail discontinuity. Next, the HDR pathological image is generated by a least squares method from the low dynamic range (LDR) image and the H and E channel images. Finally, the fine enhanced image is acquired after a detail enhancement process. Experiments with 140 pathological images demonstrate the performance advantages of the proposed method compared with related work.

  10. Edge-based correlation image registration for multispectral imaging

    Science.gov (United States)

    Nandy, Prabal [Albuquerque, NM]

    2009-11-17

    Registration information for images of a common target obtained from a plurality of different spectral bands can be obtained by combining edge detection and phase correlation. The images are edge-filtered, and pairs of the edge-filtered images are then phase correlated to produce phase correlation images. The registration information can be determined based on these phase correlation images.
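    The two-step idea in this record (edge filtering followed by phase correlation) can be sketched with numpy; the circular-gradient edge filter and the test shift below are illustrative choices, not the patent's specific operators:

```python
import numpy as np

def edge_filter(img):
    # Circular gradient magnitude as a simple edge filter (a stand-in
    # for whatever edge operator the patented method actually uses).
    gx = img - np.roll(img, 1, axis=1)
    gy = img - np.roll(img, 1, axis=0)
    return np.hypot(gx, gy)

def phase_correlate(ref, moved):
    # The inverse FFT of the normalized cross-power spectrum peaks at
    # the translation of `moved` relative to `ref`.
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(moved)
    cross = F2 * np.conj(F1)
    r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
    peak = np.unravel_index(np.argmax(np.abs(r)), r.shape)
    return int(peak[0]), int(peak[1])

rng = np.random.default_rng(0)
band_a = rng.random((64, 64))                    # one spectral band
band_b = np.roll(band_a, (3, 5), axis=(0, 1))    # misregistered second band
shift = phase_correlate(edge_filter(band_a), edge_filter(band_b))
print(shift)  # -> (3, 5)
```

For a circular shift the recovered offset is exact; edge filtering first makes the correlation depend on structure shared across spectral bands rather than on band-specific intensities.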

  11. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    The difference between adjacent frames of a human walking sequence contains useful information for gait identification. Based on this idea, a silhouette-difference-based human gait recognition method, named the average gait differential image (AGDI), is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method is that, as a feature image, it preserves both the kinetic and the static information of walking. Compared with the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared with PCA, 2DPCA is a more efficient feature extraction method for gait-based recognition and consumes less memory.
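    A minimal numpy sketch of the AGDI construction described above (the toy silhouette sequence is illustrative):

```python
import numpy as np

def average_gait_differential_image(silhouettes):
    # AGDI: accumulate the absolute silhouette differences between
    # adjacent frames, then average over the sequence.
    frames = np.asarray(silhouettes, dtype=float)
    return np.abs(np.diff(frames, axis=0)).mean(axis=0)

# Toy sequence: a "leg" (lower half of an 8x8 canvas) that swings
# one column per frame; the rest of the canvas stays static.
seq = np.zeros((4, 8, 8))
for t in range(4):
    seq[t, 4:, 2 + t] = 1.0
agdi = average_gait_differential_image(seq)
print(agdi.shape)  # -> (8, 8)
```

Static pixels contribute zero to the AGDI while moving silhouette pixels accumulate nonzero values, which is how the feature keeps both kinds of information.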

  12. First Human Experience with Directly Image-able Iodinated Embolization Microbeads

    Energy Technology Data Exchange (ETDEWEB)

    Levy, Elliot B., E-mail: levyeb@cc.nih.gov; Krishnasamy, Venkatesh P. [National Institutes of Health, Center for Interventional Oncology (United States); Lewis, Andrew L.; Willis, Sean; Macfarlane, Chelsea [Biocompatibles, UK Ltd, A BTG International Group Company (United Kingdom); Anderson, Victoria [National Institutes of Health, Center for Interventional Oncology (United States); Bom, Imramsjah MJ van der [Clinical Science IGT Systems North & Latin America, Philips, Philips, Image Guided Interventions (United States); Radaelli, Alessandro [Image-Guided Therapy Systems, Philips, Philips, Image Guided Interventions (Netherlands); Dreher, Matthew R. [Biocompatibles, UK Ltd, A BTG International Group Company (United Kingdom); Sharma, Karun V. [Children’s National Medical Center (United States); Negussie, Ayele; Mikhail, Andrew S. [National Institutes of Health, Center for Interventional Oncology (United States); Geschwind, Jean-Francois H. [Department of Radiology and Biomedical Imaging (United States); Wood, Bradford J. [National Institutes of Health, Center for Interventional Oncology (United States)

    2016-08-15

    Purpose: To describe the first clinical experience with a directly image-able, inherently radio-opaque microspherical embolic agent for transarterial embolization of liver tumors. Methodology: LC Bead LUMI™ is a new product based upon sulfonate-modified polyvinyl alcohol hydrogel microbeads with covalently bound iodine (~260 mg I/ml). 70–150 μm LC Bead LUMI™ iodinated microbeads were injected selectively via a 2.8 Fr microcatheter to near complete flow stasis into hepatic arteries in three patients with hepatocellular carcinoma, carcinoid, or neuroendocrine tumor. A custom imaging platform tuned for LC LUMI™ microbead conspicuity using a cone beam CT (CBCT)/angiographic C-arm system (Allura Clarity FD20, Philips) was used along with CBCT embolization treatment planning software (EmboGuide, Philips). Results: LC Bead LUMI™ image-able microbeads were easily delivered and monitored during the procedure using fluoroscopy, single-shot radiography (SSD), digital subtraction angiography (DSA), dual-phase enhanced and unenhanced CBCT, and unenhanced conventional CT obtained 48 h after the procedure. Intra-procedural imaging demonstrated tumor at risk of potential under-treatment, defined as a paucity of image-able microbeads within a portion of the tumor, which was confirmed at 48 h CT imaging. Fusion of pre- and post-embolization CBCT identified vessels without beads that corresponded to enhancing tumor tissue in the same location on follow-up imaging (48 h post-procedure). Conclusion: LC Bead LUMI™ image-able microbeads provide real-time feedback and geographic localization of treatment during the procedure. The distribution and density of image-able beads within a tumor need further evaluation as an additional endpoint for embolization.

  13. Cluster Ensemble-Based Image Segmentation

    Directory of Open Access Journals (Sweden)

    Xiaoru Wang

    2013-07-01

    Image segmentation is the foundation of computer vision applications. In this paper, we propose a new cluster ensemble-based image segmentation algorithm, which overcomes several problems of traditional methods. We make two main contributions. First, we introduce the cluster ensemble concept to effectively fuse the segmentation results obtained from different types of visual features, which delivers a better final result and achieves much more stable performance across broad categories of images. Second, we borrow the PageRank idea from Internet applications and apply it to the image segmentation task, improving the final segmentation results by combining the spatial information of the image with the semantic similarity of regions. Our experiments on four public image databases validate the superiority of our algorithm over conventional algorithms based on a single type of feature or on multiple types of features, since our algorithm can fuse multiple types of features effectively for better segmentation results. Moreover, our method proves very competitive in comparison with other state-of-the-art segmentation algorithms.

  14. HVS-based medical image compression

    Energy Technology Data Exchange (ETDEWEB)

    Kai Xie [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)]. E-mail: xie_kai2001@sjtu.edu.cn; Jie Yang [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China); Min Zhuyue [CREATIS-CNRS Research Unit 5515 and INSERM Unit 630, 69621 Villeurbanne (France); Liang Lixiao [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)

    2005-07-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the number of medical images has grown rapidly. However, the commonly used compression methods cannot deliver satisfying results. Methods: In this paper, building on existing experimental results and conclusions, the lifting scheme is used for wavelet decomposition. Drawing on the physical and anatomical structure of human vision, the contrast sensitivity function (CSF) is introduced as the main aspect of the human visual system (HVS) under study, and the main design points of the HVS model are presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies the HVS, including its CSF characteristics, to the correlation-removing transform and quantization of the image, and proposes a new HVS-based medical image compression model. Results: Experiments are performed on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT with respect to the PSNR metric is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and its coding/decoding time is less than that of SPIHT. Conclusions: The results show that, under common objective conditions, our compression algorithm achieves better subjective visual quality and outperforms SPIHT in terms of compression ratio and coding/decoding time.

  15. HVS-based medical image compression

    International Nuclear Information System (INIS)

    Kai Xie; Jie Yang; Min Zhuyue; Liang Lixiao

    2005-01-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the number of medical images has grown rapidly. However, the commonly used compression methods cannot deliver satisfying results. Methods: In this paper, building on existing experimental results and conclusions, the lifting scheme is used for wavelet decomposition. Drawing on the physical and anatomical structure of human vision, the contrast sensitivity function (CSF) is introduced as the main aspect of the human visual system (HVS) under study, and the main design points of the HVS model are presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies the HVS, including its CSF characteristics, to the correlation-removing transform and quantization of the image, and proposes a new HVS-based medical image compression model. Results: Experiments are performed on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT with respect to the PSNR metric is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and its coding/decoding time is less than that of SPIHT. Conclusions: The results show that, under common objective conditions, our compression algorithm achieves better subjective visual quality and outperforms SPIHT in terms of compression ratio and coding/decoding time.
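    The CSF-weighting idea shared by the two records above can be illustrated with the classic Mannos-Sakrison CSF model; the paper's exact HVS model and the subband frequencies assumed below are not given in the abstract:

```python
import numpy as np

def csf_mannos_sakrison(f):
    # Contrast sensitivity vs. spatial frequency f (cycles/degree);
    # a classic CSF model, used here as a stand-in for the paper's
    # own HVS model.
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

# Weight wavelet subbands by the CSF value at their nominal frequency:
# perceptually important subbands keep more precision at quantization.
subband_freqs = np.array([2.0, 4.0, 8.0, 16.0])  # assumed cycles/degree
weights = csf_mannos_sakrison(subband_freqs)
print(weights.argmax())  # peak sensitivity near 8 cycles/degree
```

In a compressor, such weights would scale quantization step sizes per subband so that bits go where the HVS is most sensitive.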

  16. Bayer image parallel decoding based on GPU

    Science.gov (United States)

    Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua

    2012-11-01

    In photoelectrical tracking systems, Bayer images are traditionally decompressed on the CPU. However, this is too slow when the images become large, for example 2K×2K×16 bit. In order to accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA Graphics Processing Units (GPUs) supporting the CUDA architecture. The decoding procedure can be divided into three parts: a serial part, a task-parallel part, and a data-parallel part comprising inverse quantization, the inverse discrete wavelet transform (IDWT), and image post-processing. To reduce execution time, the task-parallel part is optimized with OpenMP techniques, while the data-parallel part gains efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared memory access optimization, coalesced global memory access, and texture memory optimization. In particular, the IDWT is significantly accelerated by rewriting the two-dimensional serial IDWT as one-dimensional parallel IDWTs. In experiments with a 1K×1K×16 bit Bayer image, the data-parallel part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed. The experimental results show that it achieves a 3 to 5 times speedup compared to the serial CPU method.
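    The key speedup claim (rewriting the 2D IDWT as independent 1D transforms) rests on separability, sketched here with a Haar lifting scheme in numpy rather than the paper's actual wavelet or CUDA code:

```python
import numpy as np

def haar_fwd_1d(x):
    # Lifting-scheme Haar: predict step then update step.
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even          # predict
    s = even + d / 2        # update
    return np.concatenate([s, d])

def haar_inv_1d(c):
    n = c.size // 2
    s, d = c[:n], c[n:]
    even = s - d / 2        # undo update
    odd = d + even          # undo predict
    out = np.empty(c.size)
    out[0::2], out[1::2] = even, odd
    return out

def idwt2_separable(coeffs):
    # 2D inverse as independent 1D inverses: every column, then every
    # row, can run in parallel (e.g. one GPU thread per line).
    tmp = np.apply_along_axis(haar_inv_1d, 0, coeffs)
    return np.apply_along_axis(haar_inv_1d, 1, tmp)

rng = np.random.default_rng(1)
img = rng.random((8, 8))
fwd = np.apply_along_axis(haar_fwd_1d, 1, np.apply_along_axis(haar_fwd_1d, 0, img))
rec = idwt2_separable(fwd)
print(np.allclose(rec, img))  # -> True
```

Because the row and column transforms act on separate axes, each 1D line is independent, which is exactly what makes the IDWT map well onto a data-parallel CUDA kernel.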

  17. Image inpainting based on stacked autoencoders

    International Nuclear Information System (INIS)

    Shcherbakov, O; Batishcheva, V

    2014-01-01

    Recently we proposed an algorithm for the problem of image inpainting (filling in occluded or damaged parts of images). That algorithm was based on a spectrum entropy criterion and showed promising results despite using a hand-crafted representation of images. In this paper, we present a method for solving the image inpainting task based on learning an image representation. Some results are shown to illustrate the quality of image reconstruction.

  18. Image encryption based on permutation-substitution using chaotic map and Latin Square Image Cipher

    Science.gov (United States)

    Panduranga, H. T.; Naveen Kumar, S. K.; Kiran

    2014-06-01

    In this paper we present an image encryption method based on permutation-substitution using a chaotic map and a Latin square image cipher. The proposed method consists of a permutation process and a substitution process. In the permutation process, the plain image is permuted according to a chaotic sequence generated by a chaotic map. In the substitution process, a Latin Square Image Cipher (LSIC) is generated from a 256-bit secret key; this LSIC is used as a key image, and an XOR operation is performed between the permuted image and the key image. The proposed method can be applied to any plain image, including those with unequal width and height, and it resists statistical and differential attacks. Experiments were carried out on images of different sizes. The proposed method possesses a large key space and therefore resists brute force attacks.
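    A minimal sketch of the permutation-substitution scheme; the logistic map, its seed, and a random key image stand in for the paper's specific chaotic map and LSIC construction:

```python
import numpy as np

def logistic_permutation(n, x0=0.3567, r=3.99):
    # Chaotic (logistic-map) sequence; sorting it yields a
    # key-dependent permutation of the n pixel positions.
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        xs[i] = x
    return np.argsort(xs)

def encrypt(plain, key_img, x0=0.3567):
    perm = logistic_permutation(plain.size, x0)
    permuted = plain.flatten()[perm]                # permutation stage
    return (permuted ^ key_img.flatten()).reshape(plain.shape)  # substitution

def decrypt(cipher, key_img, x0=0.3567):
    perm = logistic_permutation(cipher.size, x0)
    permuted = cipher.flatten() ^ key_img.flatten() # undo substitution
    plain = np.empty_like(permuted)
    plain[perm] = permuted                          # invert the permutation
    return plain.reshape(cipher.shape)

rng = np.random.default_rng(2)
plain = rng.integers(0, 256, (16, 16), dtype=np.uint8)
key_img = rng.integers(0, 256, (16, 16), dtype=np.uint8)  # stand-in for the LSIC
cipher = encrypt(plain, key_img)
print(np.array_equal(decrypt(cipher, key_img), plain))  # -> True
```

The XOR substitution is its own inverse, so decryption only needs to undo the XOR and then scatter pixels back through the same key-derived permutation.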

  19. Three-dimensional computed tomographic imaging in the diagnosis of vertebral column trauma: experience based on 21 patients and review of the literature

    Energy Technology Data Exchange (ETDEWEB)

    Domenicucci, M.; Preite, R.; Ramieri, A.; Osti, M.; Ciappetta, P.; Delfini, R. [Universita degli studi di Roma 'La Sapienza', Piazzale Aldo Moro (Italy)

    1997-11-01

    3-D images produced by recently available software provide a 3-D understanding much more readily than do multiple two-dimensional images. Because it would be very difficult to standardize this method of imaging, it seems best that the specialist (orthopedic surgeon, neurosurgeon, neuro-radiologist) be present during the investigation to decide the viewing angles. An important limitation of this method is the presence of degenerative disease or osteoporosis, mainly in elderly patients. (authors)

  20. The Native American Experience. American Historical Images on File.

    Science.gov (United States)

    Wardwell, Lelia, Ed.

    This photo-documentation reference body presents more than 275 images chronicling the experiences of the American Indian from their prehistoric migrations to the present. The volume includes information and images illustrating the life ways of various tribes. The images are accompanied by historical information providing cultural context. The book…

  1. Image matching navigation based on fuzzy information

    Institute of Scientific and Technical Information of China (English)

    田玉龙; 吴伟仁; 田金文; 柳健

    2003-01-01

    In conventional image matching methods, the matching process is mostly based on statistical information about the images. One aspect neglected by all these methods is the large amount of fuzzy information contained in the images. A new fuzzy matching algorithm for navigation, based on fuzzy similarity, is presented in this paper. Because fuzzy theory can describe well the fuzzy information contained in images, an image matching method based on fuzzy similarity can be expected to perform well. Experimental results with the fuzzy-information-based matching algorithm demonstrate its reliability and practicability.

  2. ISPA (imaging silicon pixel array) experiment

    CERN Document Server

    Patrice Loïez

    2002-01-01

    Application components of ISPA tubes are shown: the CERN-developed anode chip, special windows for gamma and x-ray detection, scintillating crystal and fibre arrays for imaging and tracking of ionizing particles.

  3. Remote sensing image segmentation based on Hadoop cloud platform

    Science.gov (United States)

    Li, Jie; Zhu, Lingling; Cao, Fubin

    2018-01-01

    To address the slow speed and poor real-time performance of remote sensing image segmentation, this paper studies remote sensing image segmentation based on the Hadoop platform. After analyzing the structural characteristics of the Hadoop cloud platform and its MapReduce programming component, this paper proposes an image segmentation method combining OpenCV with the Hadoop cloud platform. First, the MapReduce image processing model for the Hadoop cloud platform is designed: the image input and output are customized and the segmentation method for the data file is rewritten. Then the Mean Shift image segmentation algorithm is implemented. Finally, a segmentation experiment is performed on a remote sensing image, with the same Mean Shift segmentation implemented in MATLAB on the same image for comparison. The experimental results show that, while maintaining good segmentation quality, remote sensing image segmentation on the Hadoop cloud platform is greatly improved in segmentation speed compared with standalone MATLAB segmentation, and the effectiveness of the segmentation also improves considerably.

  4. Image processing analysis of traditional Gestalt vision experiments

    Science.gov (United States)

    McCann, John J.

    2002-06-01

    In the late 19th century, Gestalt psychology rebelled against the popular new science of psychophysics. The Gestalt revolution used many fascinating visual examples to illustrate that the whole is greater than the sum of its parts. Color constancy was an important example. The physical interpretation of sensations and their quantification by JNDs and Weber fractions were met with innumerable examples in which two 'identical' physical stimuli did not look the same. The fact that large changes in the color of the illumination failed to change color appearance in real scenes demanded something more than quantifying the psychophysical response of a single pixel. The debate continues today between proponents of physical, pixel-based colorimetry and perceptual, image-based cognitive interpretations. Modern instrumentation has made colorimetric pixel measurement universal. As well, new examples of unconscious inference continue to be reported in the literature. Image processing provides a new way of analyzing familiar Gestalt displays. Since the pioneering experiments by Fergus Campbell and Land, we know that human vision has independent spatial channels and independent color channels. Color matching data from color constancy experiments agree with spatial comparison analysis. In this analysis, simple spatial processes can explain the different appearances of 'identical' stimuli by analyzing the multiresolution spatial properties of their surrounds. Benary's Cross, White's Effect, the Checkerboard Illusion and the Dungeon Illusion can all be understood by analyzing their low-spatial-frequency components. Just as with color constancy, these Gestalt images are most simply described by the analysis of spatial components. Simple spatial mechanisms account for the appearance of 'identical' stimuli in complex scenes; calculating appearances in familiar Gestalt experiments does not require complex cognitive processes.

  5. FUSION SEGMENTATION METHOD BASED ON FUZZY THEORY FOR COLOR IMAGES

    Directory of Open Access Journals (Sweden)

    J. Zhao

    2017-09-01

    The image segmentation method based on a two-dimensional histogram segments the image according to thresholds on the intensity of the target pixel and the average intensity of its neighborhood. This method is essentially a hard-decision method. Because of the uncertainty in labeling pixels near the threshold, the hard-decision method can easily produce wrong segmentation results. Therefore, a fusion segmentation method based on fuzzy theory is proposed in this paper. We use membership functions to model the uncertainty on each color channel of the color image, and then segment the color image by fuzzy reasoning. The experimental results show that the proposed method achieves better segmentation results than the traditional thresholding method on both natural scene images and optical remote sensing images. The fusion method in this paper can provide new ideas for information extraction from optical remote sensing images and polarimetric SAR images.
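    The move from a hard threshold to fuzzy membership plus fuzzy reasoning can be sketched as follows; the sigmoid membership function and the fuzzy-AND (minimum) fusion are illustrative assumptions, since the abstract does not give the exact membership model:

```python
import numpy as np

def membership(channel, threshold, softness=10.0):
    # Soft (fuzzy) degree that each pixel is "target" on one channel,
    # replacing the hard threshold decision near the boundary.
    return 1.0 / (1.0 + np.exp(-(channel - threshold) / softness))

def fuzzy_fuse_segment(rgb, thresholds):
    # Fuse the three channel memberships with a fuzzy AND (minimum)
    # and defuzzify with a 0.5 cut.
    mu = np.minimum.reduce([membership(rgb[..., c], thresholds[c])
                            for c in range(3)])
    return mu > 0.5

img = np.zeros((4, 4, 3))
img[:2, :2] = [200, 180, 190]   # bright "target" block on dark background
seg = fuzzy_fuse_segment(img, thresholds=(128, 128, 128))
print(seg[0, 0], seg[3, 3])  # -> True False
```

Pixels far from the threshold behave as in hard thresholding, while pixels near it get graded memberships that the fusion step can weigh across channels.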

  6. Quantum Image Encryption Algorithm Based on Image Correlation Decomposition

    Science.gov (United States)

    Hua, Tianxiang; Chen, Jiamin; Pei, Dongju; Zhang, Wenquan; Zhou, Nanrun

    2015-02-01

    A novel quantum gray-level image encryption and decryption algorithm based on image correlation decomposition is proposed. The correlation among image pixels is established by utilizing the superposition and measurement principles of quantum states, and a whole quantum image is divided into a series of sub-images. These sub-images are stored in a complete binary tree array constructed beforehand, and one of the operations of the quantum random-phase gate, the quantum rotation gate, or the Hadamard transform is then applied to each at random. The encrypted image is obtained by superimposing the resulting sub-images using the superposition principle of quantum states. For the encryption algorithm, the keys are the parameters of the random-phase gate, the rotation angle, the binary sequence, and the orthonormal basis states. The security and the computational complexity of the proposed algorithm are analyzed. The proposed encryption algorithm can resist brute force attacks owing to its very large key space, and it has lower computational complexity than its classical counterparts.

  7. [Development of RF coil of permanent magnet mini-magnetic resonance imager and mouse imaging experiments].

    Science.gov (United States)

    Hou, Shulian; Xie, Huantong; Chen, Wei; Wang, Guangxin; Zhao, Qiang; Li, Shiyu

    2014-10-01

    To develop radio frequency (RF) coils that improve the image quality of a mini permanent-magnet magnetic resonance imager for small-animal imaging, various types of RF coil were analysed; the solenoid RF coil has a particular advantage for a permanent-magnet system. However, imaging is not satisfactory if such coils are used directly. By theoretically analysing the magnetic field produced by a solenoid coil, we determined the direction for further research: improving the field uniformity of the coil, the signal-receiving sensitivity, and the signal-to-noise ratio (SNR). By using alloy materials (from our own patent), the method offered certain advantages and avoided some of the shortcomings of other coil types, such as the birdcage coil, the saddle-shaped coil and the phased-array coil. RF coils suitable for a permanent-magnet magnetic resonance imager were designed, developed and fabricated, including a multi-coil combination type and a single-channel overall RF receiving coil, and a patent was applied for. Mounted on three instruments (25 mm aperture with a main magnetic field strength of 0.5 T or 1.5 T, and 50 mm aperture with a main magnetic field strength of 0.48 T), experiments were performed with mice, rats, and tumor-bearing nude mice. The experimental results indicated that the RF receiving coil is fully applicable to the permanent-magnet imaging system.

  8. Experience with CANDID: Comparison algorithm for navigating digital image databases

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, P.; Cannon, M.

    1994-10-01

    This paper presents results from the authors' experience with CANDID (Comparison Algorithm for Navigating Digital Image Databases), which was designed to facilitate image retrieval by content using a query-by-example methodology. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized similarity measure between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to a user-provided example image. Results for three test applications are included.
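    The signature-matching idea can be sketched with a gray-level histogram as a stand-in for CANDID's texture/shape/color signatures, and a normalized inner product as the similarity measure:

```python
import numpy as np

def global_signature(img, bins=32):
    # Crude global signature: a normalized gray-level histogram
    # stands in for CANDID's texture/shape/color feature densities.
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    return hist / hist.sum()

def similarity(sig_a, sig_b):
    # Normalized inner product between the two density estimates;
    # 1.0 for identical signatures.
    return float(np.dot(sig_a, sig_b) /
                 (np.linalg.norm(sig_a) * np.linalg.norm(sig_b)))

rng = np.random.default_rng(3)
query = rng.integers(0, 256, (64, 64))
bright = np.clip(query + 100, 0, 255)   # a distribution-shifted "non-match"
sig_q = global_signature(query)
sim_self = similarity(sig_q, sig_q)
sim_diff = similarity(sig_q, global_signature(bright))
print(sim_self > sim_diff)  # -> True
```

Query-by-example retrieval then amounts to ranking all database signatures by this similarity against the example image's signature.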

  9. SALIENCY BASED SEGMENTATION OF SATELLITE IMAGES

    Directory of Open Access Journals (Sweden)

    A. Sharma

    2015-03-01

    Saliency captures the way humans look at an image, and saliency-based segmentation can therefore be helpful in psychovisual image interpretation. With this in mind, several saliency models are used together with a segmentation algorithm, and only the salient segments are extracted from the image. The work is carried out for terrestrial images as well as for satellite images. The methodology extracts from the segmented image those segments whose saliency value is greater than or equal to a threshold. Salient and non-salient regions of the image become foreground and background, respectively, and the image is thus separated. A dataset of terrestrial images and WorldView-2 satellite images (sample data) is used for this work. The results show that saliency models that work well for terrestrial images are not good enough for satellite images in terms of foreground and background separation: in terrestrial images the separation is based on the salient objects visible in the image, whereas in satellite images it is based on salient areas rather than salient objects.
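    The segment-selection rule described above (keep segments whose saliency meets a threshold) can be sketched directly; the label map and saliency values are toy data:

```python
import numpy as np

def salient_segments(labels, saliency, thresh):
    # A segment becomes foreground when its mean saliency reaches
    # the threshold; everything else becomes background.
    fg = np.zeros_like(labels, dtype=bool)
    for seg_id in np.unique(labels):
        mask = labels == seg_id
        if saliency[mask].mean() >= thresh:
            fg |= mask
    return fg

labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [2, 2, 3, 3],
                   [2, 2, 3, 3]])
saliency = np.where(labels == 3, 0.9, 0.1)  # only segment 3 is salient
fg = salient_segments(labels, saliency, thresh=0.5)
print(fg.sum())  # -> 4
```

In practice `labels` would come from the segmentation algorithm and `saliency` from one of the saliency models; only the thresholding step differs between terrestrial and satellite use.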

  10. Medical Image Tamper Detection Based on Passive Image Authentication.

    Science.gov (United States)

    Ulutas, Guzin; Ustubioglu, Arda; Ustubioglu, Beste; V Nabiyev, Vasif; Ulutas, Mustafa

    2017-12-01

    Telemedicine has gained popularity in recent years. Medical images can be transferred over the Internet to enable telediagnosis among medical staff and to make a patient's history accessible from anywhere. Integrity protection of medical images is therefore a serious concern, given the broadcast nature of the Internet. Some watermarking techniques have been proposed to control the integrity of medical images, but they require embedding extra information (a watermark) into the image before transmission, which decreases the visual quality of the medical image and can cause false diagnosis. The proposed method instead uses a passive image authentication mechanism to detect tampered regions in medical images. Structural texture information is obtained from the medical image using the rotation-invariant local binary pattern (LBPROT) to make keypoint extraction more successful. Keypoints are then obtained from the texture image with the scale invariant feature transform (SIFT), and tampered regions are detected by matching the keypoints. The method improves on keypoint-based passive image authentication mechanisms (which fail to detect tampering when a smooth region is used to cover an object) by applying LBPROT before keypoint extraction, because smooth regions also carry texture information. Experimental results show that the method detects tampered regions in medical images even if the forged image has undergone attacks (Gaussian blurring/additive white Gaussian noise) or the forged regions are scaled or rotated before pasting.

  11. A SVD Based Image Complexity Measure

    DEFF Research Database (Denmark)

    Gustafsson, David Karl John; Pedersen, Kim Steenstrup; Nielsen, Mads

    2009-01-01

    Images are composed of geometric structures and texture, and different image processing tools - such as denoising, segmentation and registration - are suitable for different types of image content. Characterization of the image content in terms of geometric structure and texture is an important problem that one is often faced with. We propose a patch-based complexity measure, based on how well a patch can be approximated using singular value decomposition. As such, the image complexity is determined by the complexity of the patches. The concept is demonstrated on sequences from the newly collected DIKU Multi-Scale image database.
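    One plausible reading of the SVD-based measure, assuming complexity is taken as the rank needed to capture most of the patch energy (the abstract does not give the paper's exact definition):

```python
import numpy as np

def svd_complexity(patch, energy=0.95):
    # Complexity = smallest rank whose singular values capture the
    # requested share of the patch energy; texture spreads energy over
    # many ranks, while smooth geometric structure needs few.
    s = np.linalg.svd(patch, compute_uv=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(cum, energy) + 1)

rng = np.random.default_rng(4)
flat = np.ones((16, 16))         # pure structure: rank 1
texture = rng.random((16, 16))   # texture: energy spread across ranks
print(svd_complexity(flat), svd_complexity(texture))
```

Used per patch over an image, such a score could route smooth regions and textured regions to the tools each is suited to.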

  12. Pixel extraction based integral imaging with controllable viewing direction

    International Nuclear Information System (INIS)

    Ji, Chao-Chao; Deng, Huan; Wang, Qiong-Hua

    2012-01-01

    We propose pixel extraction based integral imaging with a controllable viewing direction. The proposed integral imaging can provide viewers three-dimensional (3D) images in a very small viewing angle. The viewing angle and the viewing direction of the reconstructed 3D images are controlled by the pixels extracted from an elemental image array. Theoretical analysis and a 3D display experiment of the viewing direction controllable integral imaging are carried out. The experimental results verify the correctness of the theory. A 3D display based on the integral imaging can protect the viewer’s privacy and has huge potential for a television to show multiple 3D programs at the same time. (paper)
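    The pixel-extraction idea can be sketched as follows: selecting one fixed pixel position from every elemental image yields one directional view, so the chosen offset controls the viewing direction (array shapes are illustrative):

```python
import numpy as np

def extract_view(elemental, u, v):
    # Build one directional view by taking pixel (u, v) out of every
    # elemental image; (u, v) selects the viewing direction.
    # elemental: (rows, cols, h, w) array of elemental images.
    return elemental[:, :, u, v]

rng = np.random.default_rng(5)
ea = rng.random((10, 10, 5, 5))   # 10x10 lenslet array, 5x5 pixels each
view = extract_view(ea, 2, 3)
print(view.shape)  # -> (10, 10)
```

Restricting which (u, v) offsets are displayed is what narrows the viewing angle, which is how such a display could keep the 3D content private to one direction.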

  13. High dynamic range image acquisition based on multiplex cameras

    Science.gov (United States)

    Zeng, Hairui; Sun, Huayan; Zhang, Tinghua

    2018-03-01

    High dynamic range imaging is an important technology for photoelectric information acquisition: it provides a higher dynamic range and more image detail, and can better reflect the real environment and its light and color information. Current methods that synthesize a high dynamic range image from a sequence of images at different exposures cannot adapt to dynamic scenes; they fail to handle moving targets, resulting in ghosting artifacts. Therefore, a new high dynamic range image acquisition method based on a multiplex camera system is proposed. First, image sequences at different exposures are captured with the camera array, the deviation between images is obtained by a derivative optical flow method based on color gradients, and the images are aligned. Then, a weighting function for high dynamic range image fusion is established by combining the inverse camera response function with the deviation between images, and is applied to generate a high dynamic range image. The experiments show that the proposed method can effectively obtain high dynamic range images in dynamic scenes and achieves good results.
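    The fusion step can be sketched with a standard hat-weighted radiance average; the linear camera response and triangular weight below are assumptions, since the abstract does not specify the exact weighting function:

```python
import numpy as np

def fuse_hdr(exposures, times):
    # Weighted radiance average: mid-gray pixels are trusted most
    # (hat weight); radiance = pixel value / exposure time under an
    # assumed linear camera response.
    acc = np.zeros_like(exposures[0], dtype=float)
    wsum = np.zeros_like(acc)
    for img, t in zip(exposures, times):
        w = 1.0 - np.abs(img / 255.0 - 0.5) * 2.0   # hat weighting
        acc += w * (img / t)
        wsum += w
    return acc / np.maximum(wsum, 1e-8)

scene = np.full((4, 4), 100.0)                       # true radiance
times = [0.5, 1.0, 2.0]
shots = [np.clip(scene * t, 0, 255) for t in times]  # simulated exposures
hdr = fuse_hdr(shots, times)
print(np.allclose(hdr, 100.0))  # -> True
```

The paper's alignment-aware weighting would additionally down-weight pixels with a large inter-image deviation, which is what suppresses ghosting from moving targets.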

  14. ISPA (imaging silicon pixel array) experiment

    CERN Multimedia

    Patrice Loïez

    2002-01-01

    The ISPA tube is a position-sensitive photon detector. It belongs to the family of hybrid photon detectors (HPDs), recently developed by CERN and INFN with leading photodetector firms. An HPD combines, in a vacuum envelope, a photocathode and a silicon detector, which can be a single diode or a pixelized detector. The electrons generated by the photocathode are efficiently detected by the silicon anode by applying a high-voltage difference between them. The ISPA tube can be used in high-energy physics as well as in bio-medical and other imaging applications.

  15. Color image definition evaluation method based on deep learning method

    Science.gov (United States)

    Liu, Di; Li, YingChun

    2018-01-01

    In order to evaluate different blurring levels of color images and improve image definition evaluation, this paper proposes a no-reference color image clarity evaluation method based on a deep learning framework and a BP neural network classification model. Firstly, VGG16 is used as the feature extractor to obtain 4,096-dimensional features of the images; the extracted features and image labels are then used to train the BP neural network, which performs the final definition evaluation. The method was tested on images from the CSIQ database, blurred at different levels to produce 4,000 images divided into three categories, each representing one blur level. Of every 400 high-dimensional feature vectors, 300 were used for training the VGG16 and BP neural network pipeline and the remaining 100 for testing. The experimental results show that the method takes full advantage of the learning and characterization capability of deep learning. Unlike most existing image clarity evaluation methods, which rely on manually designed and extracted features, the proposed method extracts image features automatically and achieves excellent quality classification accuracy of 96% on the test set. Moreover, the predicted quality levels of the original color images agree well with the perception of the human visual system.
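
    The train/predict structure of the pipeline can be sketched as follows. A deep network such as VGG16 would supply the 4,096-D feature vectors; here, tiny hand-made vectors and a nearest-centroid classifier stand in for the feature extractor and the BP neural network (both stand-ins are assumptions for illustration only):

```python
# Toy classify-by-features pipeline: features pretending to come from a
# deep feature extractor, classified by nearest centroid per blur level.

def centroid(vectors):
    return [sum(c) / len(vectors) for c in zip(*vectors)]

def train(features, labels):
    """Compute one centroid per blur-level class."""
    classes = {}
    for f, y in zip(features, labels):
        classes.setdefault(y, []).append(f)
    return {y: centroid(fs) for y, fs in classes.items()}

def predict(model, f):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda y: dist(model[y], f))

# Features pretending to describe sharp / mildly / heavily blurred images.
feats  = [[0.9, 0.8], [0.85, 0.9], [0.5, 0.4], [0.45, 0.5], [0.1, 0.2], [0.15, 0.1]]
labels = ["sharp", "sharp", "mild", "mild", "heavy", "heavy"]
model = train(feats, labels)
print(predict(model, [0.88, 0.85]))  # prints "sharp"
```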

  16. Satisfaction of search experiments in advanced imaging

    Science.gov (United States)

    Berbaum, Kevin S.

    2012-03-01

    The objective of our research is to understand the perception of multiple abnormalities in an imaging examination and to develop strategies for improved diagnostic performance. We are one of the few laboratories in the world pursuing the goal of reducing detection errors through a better understanding of the underlying perceptual processes involved. Failure to detect an abnormality is the most common class of error in diagnostic imaging and is generally considered the most serious by the medical community. Many of these errors have been attributed to "satisfaction of search" (SOS), which occurs when a lesion is not reported because discovery of another abnormality has "satisfied" the goal of the search. We have gained some understanding of the mechanisms of SOS in traditional radiographic modalities, yet currently there are few interventions to remedy SOS error. For example, a patient history that prompts for specific abnormalities protects the radiologist from missing them even when other abnormalities are present. The knowledge gained from this programmatic research will lead to a reduction of observer error.

  17. SISCOM imaging : an initial South African experience

    International Nuclear Information System (INIS)

    Warwick, J.; Rubow, S.; Van Heerden, B.; Ghoorun, S.; Butler, J.

    2004-01-01

    Full text: Subtraction ictal SPECT co-registered with MRI (SISCOM) is a new technique for the detection and localization of epileptogenic foci in patients with refractory focal epilepsy who are candidates for surgical resection. The technique presents many challenges, in particular the administration of the radiopharmaceutical, the acquisition of brain SPECT, and the conversion, co-registration and fusion of brain SPECT and MRI studies. Furthermore, the interpretation of the studies is complex and is ideally performed in a multidisciplinary context, in cooperation with disciplines such as neurology, radiology, psychiatry and neurosurgery. Materials and methods: Two brain SPECT studies are performed using 99m Tc-ethyl cysteinate dimer (ECD). An ictal study is performed after administration of the 99m Tc-ECD during a seizure. An interictal SPECT, performed between seizures, is then subtracted from the ictal SPECT, and the difference image is fused with an MRI study to optimise localization of the epileptogenic focus. Image conversion, co-registration and fusion were performed using MRIcro and SPM software. Results: To date the Departments of Neurology and Nuclear Medicine have completed over 10 SISCOM studies. Conclusion: This presentation covers this initial work. The methodology, as well as the challenges involved in performing and interpreting these studies, will be discussed, and individual cases will be used to illustrate the impact of this powerful technique on future patient management. (author)
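
    The core subtraction step can be sketched on toy voxel data. The global mean-ratio normalization and the fixed threshold below are assumptions for illustration; real SISCOM pipelines normalize and threshold the co-registered volumes more carefully before fusing the result with MRI:

```python
# Minimal sketch of the SISCOM subtraction: intensity-normalize the
# interictal SPECT to the ictal study, subtract, and keep only voxels
# with a clearly increased (hyperperfused) signal.

def siscom_difference(ictal, interictal, threshold=10.0):
    scale = sum(ictal) / sum(interictal)            # global normalization
    diff = [a - scale * b for a, b in zip(ictal, interictal)]
    return [d if d > threshold else 0.0 for d in diff]

ictal      = [100, 105, 180, 98]   # the focus lights up in voxel 2
interictal = [100, 104, 101, 99]
print(siscom_difference(ictal, interictal))
```

Only the seizure-related voxel survives the subtraction and threshold; that difference map is what would be overlaid on the MRI.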

  18. CONTEXT BASED FOOD IMAGE ANALYSIS

    OpenAIRE

    He, Ye; Xu, Chang; Khanna, Nitin; Boushey, Carol J.; Delp, Edward J.

    2013-01-01

    We are developing a dietary assessment system that records daily food intake through the use of food images. Recognizing food in an image is difficult due to large visual variance with respect to eating or preparation conditions. This task becomes even more challenging when different foods have similar visual appearance. In this paper we propose to incorporate two types of contextual dietary information, food co-occurrence patterns and personalized learning models, in food image analysis to r...

  19. ADVANCED CLUSTER BASED IMAGE SEGMENTATION

    Directory of Open Access Journals (Sweden)

    D. Kesavaraja

    2011-11-01

    Full Text Available This paper presents an efficient and portable implementation of a useful image segmentation technique based on a faster variant of the conventional connected-components algorithm, which we call parallel components. Many medical users need image segmentation as a service and expect it to run fast and securely, yet conventional segmentation algorithms, despite several ongoing research efforts, often do not. We therefore propose a cluster computing environment for parallel image segmentation that delivers faster results, and present a real-time implementation of distributed image segmentation on a cluster of nodes. We demonstrate the effectiveness and feasibility of our method on a set of medical CT scan images. Our general framework is a single-address-space, distributed-memory programming model. We use efficient techniques for distributing and coalescing data as well as efficient combinations of task and data parallelism. The segmentation algorithm uses an efficient cluster process with a novel approach to parallel merging. Our experimental results are consistent with the theoretical analysis: segmentation executes faster than with the conventional method on our test data, CT scan images from a medical database. More efficient implementations of image segmentation will likely yield even faster execution times.
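
    The serial building block being parallelized is connected-component labeling. Below is a minimal single-node sketch (4-connectivity, breadth-first flood fill); in the distributed version each node would label one tile of the image and labels would then be merged at tile boundaries:

```python
# Serial connected-components labeler for a binary image (4-connectivity).

from collections import deque

def label_components(img):
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] and not labels[sy][sx]:
                current += 1                      # start a new component
                queue = deque([(sy, sx)])
                labels[sy][sx] = current
                while queue:
                    y, x = queue.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                           and img[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

img = [[1, 1, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 0, 1],
       [1, 0, 0, 0]]
labels, n = label_components(img)
print(n)  # three separate components
```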

  20. Content Based Medical Image Retrieval for Histopathological, CT and MRI Images

    Directory of Open Access Journals (Sweden)

    Swarnambiga AYYACHAMY

    2013-09-01

    Full Text Available A content-based approach is followed for medical images. The purpose of this study is to assess the stability of these methods for medical image retrieval. For color-based retrieval of histopathological images, the color co-occurrence matrix (CCM) and a histogram with meta-features are used. For texture-based retrieval, the gray-level co-occurrence matrix (GLCM) and local binary patterns (LBP) are used. For shape-based retrieval, Canny edge detection and Otsu's method with multivariable thresholding are used. Texture- and shape-based retrieval were implemented using MRI (magnetic resonance) images. The most remarkable characteristic of the article is its content-based approach for each medical imaging modality. Our efforts focused on the initial visual search. In our experiments, the histogram with meta-features in color-based retrieval of histopathological images shows a precision of 60% and a recall of 30%, whereas GLCM in texture-based retrieval of MRI images shows a precision of 70% and a recall of 20%; shape-based retrieval of MRI images shows a precision of 50% and a recall of 25%. The retrieval results show that this simple approach is successful.
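
    Of the texture descriptors above, LBP is the simplest to show concretely. A minimal sketch of the basic 3x3 local binary pattern: each of the 8 neighbors contributes one bit, set when the neighbor is greater than or equal to the center pixel, read clockwise from the top-left (the neighbor ordering is a convention; implementations vary):

```python
# Basic 3x3 LBP code for one interior pixel.

def lbp_code(img, y, x):
    c = img[y][x]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= c:
            code |= 1 << bit
    return code

img = [[6, 5, 2],
       [7, 6, 1],
       [9, 8, 7]]
print(lbp_code(img, 1, 1))  # prints 241
```

For retrieval, the histogram of these codes over all interior pixels serves as the texture feature vector.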

  1. Biometric image enhancement using decision rule based image fusion techniques

    Science.gov (United States)

    Sagayee, G. Mary Amirtha; Arumugam, S.

    2010-02-01

    Introducing biometrics into information systems may yield considerable benefits. Most researchers confirm that fingerprints are more widely used than the iris or face, and that they are the primary choice for most privacy-sensitive applications. For fingerprint applications, choosing the proper sensor is critical. The proposed work deals with how image quality can be improved by introducing image fusion at the sensor level. The results after applying the decision-rule-based image fusion technique are evaluated and analyzed in terms of their entropy levels and root mean square error.

  2. ON THE PATH OF THE FILM: Image as Experience and Experience as Image

    Directory of Open Access Journals (Sweden)

    André Reyes Novaes

    2013-12-01

    Full Text Available This article aims to show the relationship between the description of images and the conduct of fieldwork, through a practical pedagogical experience carried out with students from the Colégio de Aplicação da UFRJ. Taking as a starting point the research conducted to shoot a documentary called Vulgo Sacopã, which portrayed the transformation of the landscape of a hill situated at Lagoa Rodrigo de Freitas in the city of Rio de Janeiro, the pedagogical practice was divided into two stages. First, students were presented with a series of historical images, seeking to stimulate the reading of the landscape as a "text" and to identify variations in its interpretation. Later, fieldwork was carried out in order to walk through the studied landscape and meet the main character of the film. By walking the paths of the film, it was possible to identify a series of exchanges that can problematize simplistic divisions between the landscape "in visu", shown in the classroom, and the landscape "in situ", observed in the field.

  3. Pc-Based Floating Point Imaging Workstation

    Science.gov (United States)

    Guzak, Chris J.; Pier, Richard M.; Chinn, Patty; Kim, Yongmin

    1989-07-01

    The medical, military, scientific and industrial communities have come to rely on imaging and computer graphics for solutions to many types of problems. Systems based on imaging technology are used to acquire and process images, and to analyze and extract data from images that would otherwise be of little use. Images can be transformed and enhanced to reveal detail and meaning that would go undetected without imaging techniques. The success of imaging has increased the demand for faster and less expensive imaging systems, and as these systems become available, more and more applications are discovered and more demands are made. Meeting these demands forces the designer to attack the problem of imaging from a different perspective. The computing demands of imaging algorithms must be balanced against the desire for affordability and flexibility. Systems must be flexible and easy to use, ready for current applications but at the same time anticipating new, unthought-of uses. Here at the University of Washington Image Processing Systems Lab (IPSL) we are focusing our attention on imaging and graphics systems that implement imaging algorithms for use in an interactive environment. We have developed a PC-based imaging workstation with the goal of providing powerful and flexible floating-point processing capabilities, along with graphics functions, in an affordable package suitable for diverse environments and many applications.

  4. An Ibm PC/AT-Based Image Acquisition And Processing System For Quantitative Image Analysis

    Science.gov (United States)

    Kim, Yongmin; Alexander, Thomas

    1986-06-01

    In recent years, a large number of applications have been developed for image processing systems in the area of biological imaging. We have already finished the development of a dedicated microcomputer-based image processing and analysis system for quantitative microscopy. The system's primary function has been to facilitate and ultimately automate quantitative image analysis tasks such as the measurement of cellular DNA contents. We have recognized from this development experience, and interaction with system users, biologists and technicians, that the increasingly widespread use of image processing systems, and the development and application of new techniques for utilizing the capabilities of such systems, would generate a need for some kind of inexpensive general purpose image acquisition and processing system specially tailored for the needs of the medical community. We are currently engaged in the development and testing of hardware and software for a fairly high-performance image processing computer system based on a popular personal computer. In this paper, we describe the design and development of this system. Biological image processing computer systems have now reached a level of hardware and software refinement where they could become convenient image analysis tools for biologists. The development of a general purpose image processing system for quantitative image analysis that is inexpensive, flexible, and easy-to-use represents a significant step towards making the microscopic digital image processing techniques more widely applicable not only in a research environment as a biologist's workstation, but also in clinical environments as a diagnostic tool.

  5. Developing stereo image based robot control system

    Energy Technology Data Exchange (ETDEWEB)

    Suprijadi; Pambudi, I. R.; Woran, M.; Naa, C. F.; Srigutomo, W. [Department of Physics, FMIPA, Institut Teknologi Bandung, Jl. Ganesha No. 10, Bandung 40132, Indonesia; supri@fi.itb.ac.id (Indonesia)

    2015-04-16

    Applications of image processing have been developed in various fields and for various purposes. In the last decade, image-based systems have advanced rapidly with increasing hardware and microprocessor performance. Many fields of science and technology have used these methods, especially medicine and instrumentation. New stereovision techniques that produce 3-dimensional images or movies are very interesting, but there are few applications in control systems. A stereo image carries pixel disparity information that does not exist in a single image. In this research, we propose a new method for a wheeled-robot control system using stereovision. The result shows that the robot moves automatically based on stereovision captures.
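
    The disparity information mentioned above can be sketched on a single rectified scanline pair. This is a toy block-matching search (sum of absolute differences as the cost, an assumed choice, not necessarily the authors' matcher); disparity is then inversely proportional to depth, which is what a robot controller would act on:

```python
# Toy disparity search on one rectified scanline pair: for a patch around
# position x in the left line, find the horizontal offset d into the right
# line with the smallest sum of absolute differences (SAD).

def disparity(left, right, x, win=1, max_d=4):
    """Best disparity for position x on a single scanline pair."""
    best_d, best_cost = 0, float("inf")
    for d in range(max_d + 1):
        if x - d - win < 0:
            break                                  # patch would leave the image
        cost = sum(abs(left[x + k] - right[x - d + k]) for k in range(-win, win + 1))
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

left  = [0, 0, 10, 80, 90, 80, 10, 0]
right = [10, 80, 90, 80, 10, 0, 0, 0]   # same pattern, shifted 2 px left
print(disparity(left, right, 4))        # prints 2
```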

  6. Reflectance conversion methods for the VIS/NIR imaging spectrometer aboard the Chang'E-3 lunar rover: based on ground validation experiment data

    International Nuclear Information System (INIS)

    Liu Bin; Liu Jian-Zhong; Zhang Guang-Liang; Zou Yong-Liao; Ling Zong-Cheng; Zhang Jiang; He Zhi-Ping; Yang Ben-Yong

    2013-01-01

    The second phase of the Chang'E Program (also named Chang'E-3) aims to land and perform in-situ detection on the lunar surface. A VIS/NIR imaging spectrometer (VNIS) will be carried on the Chang'E-3 lunar rover to detect the distribution of lunar minerals and resources. VNIS is the first mission in history to perform in-situ spectral measurement on the surface of the Moon; its reflectance data are fundamental to the interpretation of lunar composition, and their quality greatly affects the accuracy of lunar element and mineral determination. Until now, in-situ detection by imaging spectrometers has only been performed by rovers on Mars. We first review reflectance conversion methods for rovers on Mars (Viking landers, Pathfinder and Mars Exploration rovers, etc). Secondly, we discuss whether these conversion methods used on Mars can be applied to lunar in-situ detection; we also applied data from a laboratory bidirectional reflectance distribution function (BRDF) experiment using simulated lunar soil to test the applicability of this method. Finally, we modify the reflectance conversion methods used on Mars by considering the differences between the environments on the Moon and Mars, and apply them to experimental data obtained from the ground validation of VNIS. These results were obtained by comparing reflectance data from the VNIS measured in the laboratory with those from a standard spectrometer obtained at the same time and under the same observing conditions. The shape and amplitude of the spectra fit well, and the spectral uncertainty parameters for most samples are within 8%, except for the ilmenite sample, which has a low albedo. In conclusion, our reflectance conversion method is suitable for lunar in-situ detection.

  7. Detail Enhancement for Infrared Images Based on Propagated Image Filter

    Directory of Open Access Journals (Sweden)

    Yishu Peng

    2016-01-01

    Full Text Available For displaying high-dynamic-range images acquired by thermal camera systems, 14-bit raw infrared data must be mapped to 8-bit gray values. This paper presents a new detail enhancement method for infrared images that displays the image with satisfactory contrast and brightness and rich detail information, without artifacts caused by the processing. We first adopt a propagated image filter to smooth the input image and separate it into a base layer and a detail layer. Then, we refine the base layer by using modified histogram projection for compression, while the adaptive weights derived from the layer decomposition serve as a strict gain control for the detail layer. The final display result is obtained by recombining the two modified layers. Experimental results on both cooled and uncooled infrared data verify that the proposed method outperforms methods based on log-power histogram modification and bilateral-filter-based detail enhancement in both detail enhancement and visual effect.
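
    The layer-decomposition idea can be sketched on a 1-D signal. A box filter stands in for the propagated image filter (an assumption for brevity), the detail layer is the residual, and a fixed gain stands in for the paper's adaptive weights; real pipelines also compress the base layer's range before recombining:

```python
# Base/detail decomposition with detail amplification on a 1-D signal.

def box_smooth(signal, radius=1):
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def enhance_detail(signal, gain=2.0):
    base = box_smooth(signal)                       # smooth base layer
    detail = [s - b for s, b in zip(signal, base)]  # residual detail layer
    return [b + gain * d for b, d in zip(base, detail)]

raw = [100, 102, 140, 104, 100]   # a small edge riding on a flat region
print(enhance_detail(raw))
```

The local peak is pushed further above its surroundings while the flat region stays roughly flat, which is exactly the visual effect detail enhancement aims for.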

  8. Compartmental analysis, imaging techniques and population pharmacokinetic. Experiences at CENTIS

    International Nuclear Information System (INIS)

    Hernández, Ignacio; León, Mariela; Leyva, Rene; Castro, Yusniel; Ayra, Fernando E.

    2016-01-01

    Introduction: Small rodents are used to a large extent in pharmacokinetic evaluation. Traditional two-step pharmacokinetic evaluations can be replaced by sparse-data designs, which may, however, be complicated to evaluate satisfactorily from the statistical point of view. In this presentation, different sparse-data sampling situations are analyzed based on practical considerations. A nonlinear mixed-effects model was selected to estimate pharmacokinetic parameters in data simulated from real experimental results using blood sampling and imaging procedures. Materials and methods: Different scenarios representing several experimental designs with incomplete individual profiles were evaluated. Data sets were simulated based on real data from previous experiments; in all cases three to five blood samples were considered per time point. A combination of compartmental analysis with tumor uptake obtained by gammagraphy of radiolabeled drugs is also evaluated. All pharmacokinetic profiles were analyzed with the MONOLIX software, version 4.2.3. Results: All sampling schedules yield the same results when computed using MONOLIX and the SAEM algorithm. Population and individual pharmacokinetic parameters were accurately estimated with three or five determinations per sampling point. Given the methodology and software tool used this can be expected, but demonstrating the method's performance in such situations allows a more flexible design using a very small number of animals in preclinical research. The combination with imaging procedures also allows us to construct a completely structured compartmental analysis. Results of real experiments are presented, demonstrating the versatility of the methodology in different evaluations. The same sampling approach can be considered in phase I or II clinical trials. (author)
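
    A sparse-data design of the kind discussed can be sketched with the simplest compartmental model. The one-compartment IV-bolus model and all parameter values below are illustrative assumptions; in the real analysis the parameters would be estimated from such sparse observations with a nonlinear mixed-effects tool like MONOLIX:

```python
# One-compartment pharmacokinetic model with a sparse sampling design:
# each simulated animal contributes only a few distinct time points.

import math

def concentration(t, dose=10.0, volume=2.0, k_elim=0.3):
    """C(t) = (D / V) * exp(-k * t) after an intravenous bolus."""
    return dose / volume * math.exp(-k_elim * t)

# Sparse design: three animals, three time points each, staggered so the
# population covers the whole curve.
schedule = {"rat1": [0.5, 4.0, 12.0],
            "rat2": [1.0, 6.0, 16.0],
            "rat3": [2.0, 8.0, 24.0]}
samples = {animal: [(t, concentration(t)) for t in times]
           for animal, times in schedule.items()}
for animal, obs in samples.items():
    print(animal, [(t, round(c, 3)) for t, c in obs])
```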

  9. Novel welding image processing method based on fractal theory

    Institute of Scientific and Technical Information of China (English)

    陈强; 孙振国; 肖勇; 路井荣

    2002-01-01

    Computer vision has come into use in welding process control and automation. In order to improve the precision and speed of welding image processing, a novel method based on fractal theory is put forward in this paper. In contrast to traditional methods, the image is first processed coarsely in macroscopic regions and then analyzed thoroughly in microscopic regions. The image is divided into regions according to the different fractal characteristics of the image edges, and the fuzzy regions containing edges are detected; the edges are then identified with the Sobel operator and fitted by the least-squares method (LSM). Since the amount of data to be processed is decreased and image noise is reduced, experiments confirm that the edges of the weld seam or weld pool can be recognized correctly and quickly.
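
    The edge-identification step can be shown concretely. A minimal Sobel sketch: the two 3x3 kernels estimate the horizontal and vertical gradients at a pixel, and edge pixels are those whose gradient magnitude is large (least-squares fitting of the edge curve would follow):

```python
# Sobel gradient magnitude at one interior pixel.

GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]    # horizontal gradient kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]    # vertical gradient kernel

def sobel_magnitude(img, y, x):
    gx = gy = 0
    for dy in range(-1, 2):
        for dx in range(-1, 2):
            p = img[y + dy][x + dx]
            gx += GX[dy + 1][dx + 1] * p
            gy += GY[dy + 1][dx + 1] * p
    return (gx * gx + gy * gy) ** 0.5

img = [[0, 0, 255],      # vertical edge between columns 1 and 2
       [0, 0, 255],
       [0, 0, 255]]
print(sobel_magnitude(img, 1, 1))
```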

  10. An FPGA-based heterogeneous image fusion system design method

    Science.gov (United States)

    Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong

    2011-08-01

    Taking advantage of the low cost and compact structure of FPGAs, an FPGA-based heterogeneous image fusion platform is established in this study. An Altera Cyclone IV series FPGA is adopted as the core processor of the platform, with a visible-light CCD camera and an infrared thermal imager as the image-capturing devices providing dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as gray-scale weighted averaging, maximum selection and minimum selection are analyzed and compared. VHDL and a synchronous design method are utilized to produce a reliable RTL-level description, and Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Contrast experiments with the various fusion algorithms show that preferable heterogeneous image fusion quality can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
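
    The three fusion rules being compared are simple pixel-wise operations, sketched below in software for clarity (on the FPGA each rule would be one small pipelined arithmetic stage per pixel clock):

```python
# Pixel-level image fusion rules for aligned visible/infrared frames:
# weighted gray-scale averaging, maximum selection, minimum selection.

def fuse(visible, infrared, rule="average", w=0.5):
    ops = {
        "average": lambda a, b: w * a + (1 - w) * b,
        "max":     lambda a, b: max(a, b),
        "min":     lambda a, b: min(a, b),
    }
    op = ops[rule]
    return [[op(a, b) for a, b in zip(ra, rb)]
            for ra, rb in zip(visible, infrared)]

vis = [[10, 200], [50, 100]]
ir  = [[90, 100], [30, 180]]
print(fuse(vis, ir, "average"))
print(fuse(vis, ir, "max"))
```

Maximum selection keeps the brighter (often hotter) pixel from either channel, while averaging trades detail retention for noise suppression; which rule is preferable depends on the scene, as the paper discusses.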

  11. Multi-band Image Registration Method Based on Fourier Transform

    Institute of Scientific and Technical Information of China (English)

    庹红娅; 刘允才

    2004-01-01

    This paper presents a registration method based on the Fourier transform for multi-band images involving translation and small rotation. Although images from different bands differ considerably in intensity and features, they contain common information that can be exploited. A model is given in which the multi-band images have linear correlations in the least-squares sense, and it is proved that the coefficients have no effect on the registration process if the two images are linearly correlated. Finally, the steps of the registration method are presented. Experiments show that the model is reasonable and the results are satisfactory.
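
    The Fourier-based translation estimate at the heart of such methods can be sketched in 1-D with phase correlation (an assumption about the specific variant; the paper's method additionally handles small rotation and multi-band correlation). The normalized cross-power spectrum of two signals differing by a circular shift inverse-transforms to a spike at the shift:

```python
# 1-D phase correlation with a naive DFT (2-D translation works the same
# way on rows and columns; rotation needs extra handling).

import cmath

def dft(x, inverse=False):
    n, sign = len(x), (1 if inverse else -1)
    out = [sum(v * cmath.exp(sign * 2j * cmath.pi * k * m / n)
               for m, v in enumerate(x)) for k in range(n)]
    return [v / n for v in out] if inverse else out

def estimate_shift(a, b):
    """Circular shift d such that b[m] == a[m - d] (mod len)."""
    fa, fb = dft(a), dft(b)
    cross = []
    for u, v in zip(fa, fb):
        p = u.conjugate() * v                 # cross-power spectrum
        cross.append(p / abs(p) if abs(p) > 1e-9 else 0j)
    corr = dft(cross, inverse=True)           # spike at the shift
    return max(range(len(corr)), key=lambda i: corr[i].real)

a = [0, 1, 4, 9, 4, 1, 0, 0]
b = a[-3:] + a[:-3]                           # a circularly shifted right by 3
print(estimate_shift(a, b))                   # prints 3
```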

  12. MR imaging of prostate. Preliminary experience with calculated imaging in 28 cases

    International Nuclear Information System (INIS)

    Gevenois, P.A.; Van Regemorter, G.; Ghysels, M.; Delepaut, A.; Van Gansbeke, D.; Struyven, J.

    1988-01-01

    The majority of MR imaging studies of prostate disease are based on semiology obtained using T1- and T2-weighted images. A study was carried out to evaluate calculated T1 and T2 images obtained at 0.5 T. This preliminary study concerns 28 prostate examinations acquired with spin-echo and inversion-recovery parameters, providing calculated T1 images together with T2-weighted and calculated T2 images. The images allowed detection and characterization of prostate lesions. However, although calculated images increase the discriminating power of the method, the weighted images retain their place because of their better spatial resolution. [fr]

  13. Image-based RSA: Roentgen stereophotogrammetric analysis based on 2D-3D image registration.

    Science.gov (United States)

    de Bruin, P W; Kaptein, B L; Stoel, B C; Reiber, J H C; Rozing, P M; Valstar, E R

    2008-01-01

    Image-based Roentgen stereophotogrammetric analysis (IBRSA) integrates 2D-3D image registration and conventional RSA. Instead of radiopaque RSA bone markers, IBRSA uses 3D CT data, from which digitally reconstructed radiographs (DRRs) are generated. Using 2D-3D image registration, the 3D pose of the CT is iteratively adjusted such that the generated DRRs resemble the 2D RSA images as closely as possible, according to an image matching metric. Effectively, by registering all 2D follow-up moments to the same 3D CT, the CT volume functions as common ground. In two experiments, using RSA and a micromanipulator as gold standards, IBRSA has been validated on cadaveric and sawbone scapula radiographs, and good matching results have been achieved. The accuracy was lower than that of marker-based RSA but higher than that of standard RSA in vivo. Because IBRSA does not require radiopaque markers, it adds functionality to the RSA method by opening new directions and possibilities for research, such as dynamic analyses using fluoroscopy on subjects without markers and computer navigation applications.

  14. Image Based Rendering and Virtual Reality

    DEFF Research Database (Denmark)

    Livatino, Salvatore

    The presentation gives an overview of Image Based Rendering approaches and their use in Virtual Reality, including Virtual Photography and Cinematography, and Mobile Robot Navigation.

  15. Multi-viewpoint Image Array Virtual Viewpoint Rapid Generation Algorithm Based on Image Layering

    Science.gov (United States)

    Jiang, Lu; Piao, Yan

    2018-04-01

    The use of multi-view image arrays combined with virtual viewpoint generation technology to record 3D scene information in large scenes has become one of the key technologies in the development of integral imaging. This paper presents a virtual viewpoint rendering method based on an image layering algorithm. Firstly, the depth information of the reference viewpoint image is quickly obtained, with SAD chosen as the similarity measure. The reference image is then layered and the parallax calculated from the depth information. Based on the relative distance between the virtual viewpoint and the reference viewpoint, the image layers are weighted and panned. Finally, the virtual viewpoint image is rendered layer by layer according to the distance between the image layers and the viewer. This method avoids the disadvantages of the DIBR algorithm, such as its high-precision requirements on the depth map and its complex mapping operations. Experiments show that this algorithm can synthesize virtual viewpoints at any position within a 2×2 viewpoint range, with very impressive rendering speed and satisfactory image quality: relative to real viewpoint images, the average SSIM value reaches 0.9525, the PSNR reaches 38.353 and the image histogram similarity reaches 93.77%.
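
    The layered rendering step can be sketched in 1-D. Each layer is panned by its parallax scaled by the virtual viewpoint's offset from the reference viewpoint, then composited near-over-far (the linear parallax scaling and the None-as-transparent convention are simplifying assumptions):

```python
# Render a virtual viewpoint by panning and compositing image layers.

def shift_layer(layer, offset):
    n = len(layer)
    out = [None] * n
    for i, p in enumerate(layer):
        if p is not None and 0 <= i + offset < n:
            out[i + offset] = p
    return out

def render(layers, parallaxes, viewpoint_offset):
    """layers ordered far -> near; nearer layers overwrite farther ones."""
    width = len(layers[0])
    canvas = [0] * width
    for layer, par in zip(layers, parallaxes):
        shifted = shift_layer(layer, round(par * viewpoint_offset))
        for i, p in enumerate(shifted):
            if p is not None:            # None marks transparent pixels
                canvas[i] = p
    return canvas

far  = [1, 1, 1, 1, 1, 1]               # background layer, parallax 0
near = [None, None, 9, 9, None, None]   # foreground object, parallax 2
print(render([far, near], [0, 2], viewpoint_offset=1))
```

Moving the virtual viewpoint slides the foreground across the static background, which is the parallax cue the synthesized views must reproduce.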

  16. Content Based Image Matching for Planetary Science

    Science.gov (United States)

    Deans, M. C.; Meyer, C.

    2006-12-01

    Planetary missions generate large volumes of data. With the MER rovers still functioning on Mars, PDS contains over 7200 released images from the Microscopic Imagers alone. These data products are only searchable by keys such as the Sol, spacecraft clock, or rover motion counter index, with little connection to the semantic content of the images. We have developed a method for matching images based on the visual textures in images. For every image in a database, a series of filters compute the image response to localized frequencies and orientations. Filter responses are turned into a low dimensional descriptor vector, generating a 37 dimensional fingerprint. For images such as the MER MI, this represents a compression ratio of 99.9965% (the fingerprint is approximately 0.0035% the size of the original image). At query time, fingerprints are quickly matched to find images with similar appearance. Image databases containing several thousand images are preprocessed offline in a matter of hours. Image matches from the database are found in a matter of seconds. We have demonstrated this image matching technique using three sources of data. The first database consists of 7200 images from the MER Microscopic Imager. The second database consists of 3500 images from the Narrow Angle Mars Orbital Camera (MOC-NA), which were cropped into 1024×1024 sub-images for consistency. The third database consists of 7500 scanned archival photos from the Apollo Metric Camera. Example query results from all three data sources are shown. We have also carried out user tests to evaluate matching performance by hand labeling results. User tests verify approximately 20% false positive rate for the top 14 results for MOC NA and MER MI data. This means typically 10 to 12 results out of 14 match the query image sufficiently. This represents a powerful search tool for databases of thousands of images where the a priori match probability for an image might be less than 1%. 
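
    The fingerprint-and-match scheme can be sketched in miniature. The real descriptor comes from a bank of localized frequency/orientation filters; here a few crude response statistics (mean intensity and mean horizontal/vertical change, stand-ins chosen for brevity) play that role, and queries are answered by nearest-neighbor search over the short descriptors, never touching the full pixels again:

```python
# Reduce each tiny "image" to a short fingerprint, then match by
# nearest neighbor over fingerprints.

def fingerprint(img):
    flat = [p for row in img for p in row]
    mean = sum(flat) / len(flat)
    # crude "texture" responses: total horizontal / vertical variation
    dx = sum(abs(r[i + 1] - r[i]) for r in img for i in range(len(r) - 1))
    dy = sum(abs(img[j + 1][i] - img[j][i])
             for j in range(len(img) - 1) for i in range(len(img[0])))
    return (mean, dx, dy)

def nearest(db, query):
    qf = fingerprint(query)
    def dist(name):
        return sum((a - b) ** 2 for a, b in zip(db[name], qf))
    return min(db, key=dist)

smooth = [[5, 5], [5, 5]]
stripy = [[0, 9], [0, 9]]
db = {"smooth": fingerprint(smooth), "stripy": fingerprint(stripy)}
print(nearest(db, [[1, 8], [1, 8]]))   # prints "stripy"
```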

  17. Superpixel-Based Feature for Aerial Image Scene Recognition

    Directory of Open Access Journals (Sweden)

    Hongguang Li

    2018-01-01

    Full Text Available Image scene recognition is a core technology for many aerial remote sensing applications. Different landforms appear as different scenes in aerial imaging, and all landform information is valuable for aerial image scene recognition. However, the conventional features of the Bag-of-Words model are designed from local points or other related information and are thus unable to fully describe landform areas, a limitation that cannot be ignored if accurate aerial scene recognition is to be ensured. A novel superpixel-based feature is proposed in this study to characterize aerial image scenes, and based on this feature, a Bag-of-Words scene recognition method for aerial imaging is designed. The proposed superpixel-based feature exploits landform information, spanning from the top-level task of superpixel extraction of landforms to the bottom-level task of expressing feature vectors. The characterization comprises the following steps: simple linear iterative clustering based superpixel segmentation, adaptive filter bank construction, Lie group-based feature quantification, and visual saliency model-based feature weighting. Image scene recognition experiments are carried out on real image data captured by an unmanned aerial vehicle (UAV). The recognition accuracy of the proposed superpixel-based feature is 95.1%, which is higher than those of scene recognition algorithms based on other local features.

  18. Image magnification based on similarity analogy

    International Nuclear Information System (INIS)

    Chen Zuoping; Ye Zhenglin; Wang Shuxun; Peng Guohua

    2009-01-01

    Aiming at the high time complexity of the decoding phase in traditional image enlargement methods based on fractal coding, a novel image magnification algorithm is proposed in this paper, which has the advantage of iteration-free decoding, by using the similarity analogy between an image and its zoom-out and zoom-in. A new pixel selection technique is also presented to further improve the performance of the proposed method. Furthermore, by combining some existing fractal zooming techniques, an efficient image magnification algorithm is obtained, which provides image quality as good as the state of the art while greatly decreasing the time complexity of the decoding phase.

  19. Vision communications based on LED array and imaging sensor

    Science.gov (United States)

    Yoo, Jong-Ho; Jung, Sung-Yoon

    2012-11-01

    In this paper, we propose a brand new communication concept, called "vision communication", based on an LED array and an image sensor. This system consists of an LED array as a transmitter and a digital device that includes an image sensor, such as a CCD or CMOS sensor, as a receiver. In order to transmit data, the proposed communication scheme simultaneously uses digital image processing and optical wireless communication techniques. Therefore, a cognitive communication scheme is possible with the help of recognition techniques used in vision systems. To increase the data rate, our scheme can use an LED array consisting of several multi-spectral LEDs. Because each LED in the array can emit a multi-spectral optical signal, such as visible, infrared or ultraviolet light, the data rate can be increased in a manner similar to the WDM and MIMO techniques used in traditional optical and wireless communications. In addition, this multi-spectral capability also makes it possible to avoid optical noise in the communication environment. In our vision communication scheme, the data packet is composed of Sync. data and information data. Sync. data is used to detect the transmitter area and calibrate the distorted image snapshots obtained by the image sensor. By making the optical rate of the LED array equal to the frame rate (frames per second) of the image sensor, we can decode the information data included in each image snapshot based on image processing and optical wireless communication techniques. Through experiments on a practical test bed system, we confirm the feasibility of the proposed vision communication based on an LED array and an image sensor.

  20. TESIS experiment on EUV imaging spectroscopy of the Sun

    Science.gov (United States)

    Kuzin, S. V.; Bogachev, S. A.; Zhitnik, I. A.; Pertsov, A. A.; Ignatiev, A. P.; Mitrofanov, A. M.; Slemzin, V. A.; Shestov, S. V.; Sukhodrev, N. K.; Bugaenko, O. I.

    2009-03-01

    TESIS is a set of solar imaging instruments in development by the Lebedev Physical Institute of the Russian Academy of Sciences, to be launched aboard the Russian spacecraft CORONAS-PHOTON in December 2008. The main goal of TESIS is to provide complex observations of solar active phenomena from the transition region to the inner and outer solar corona with high spatial, spectral and temporal resolution in the EUV and soft X-ray spectral bands. TESIS includes five unique space instruments: the MgXII Imaging Spectroheliometer (MISH) with a spherical bent crystal mirror, for observations of the Sun in the monochromatic MgXII 8.42 Å line; the EUV Spectroheliometer (EUSH) with a grazing-incidence diffraction grating, for registration of the full solar disc in monochromatic lines of the spectral band 280-330 Å; two Full-disk EUV Telescopes (FET) with multilayer mirrors covering the bands 130-136 and 290-320 Å; and the Solar EUV Coronagraph (SEC), based on the Ritchey-Chretien scheme, to observe the inner and outer solar corona from 0.2 to 4 solar radii in the spectral band 290-320 Å. The TESIS experiment will start at the rising phase of the 24th cycle of solar activity. With the advanced capabilities of its instruments, TESIS will help better understand the physics of solar flares and high-energy phenomena and provide new data on parameters of solar plasma in the temperature range 10-10K. This paper gives a brief description of the experiment, its equipment, and its scientific objectives.

  1. Content-Based Image Retrieval Based on Hadoop

    Directory of Open Access Journals (Sweden)

    DongSheng Yin

    2013-01-01

    Full Text Available Generally, the time complexity of algorithms for content-based image retrieval is extremely high. In order to retrieve images from large-scale databases efficiently, a new retrieval method based on the Hadoop distributed framework is proposed. Firstly, a database of image features is built by using the Speeded Up Robust Features algorithm and Locality-Sensitive Hashing; the search is then performed on the Hadoop platform in a specially designed parallel way. Considerable experimental results show that the method is able to retrieve images by content effectively on large-scale clusters and image sets.
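
The Locality-Sensitive Hashing step can be illustrated with the random-hyperplane variant (an assumption: the paper does not state which LSH family it uses). Descriptors pointing in similar directions receive the same bit key and land in the same bucket, so only a small candidate set needs exact comparison:

```python
import random

def make_hyperplanes(dim, nbits, seed=0):
    # Random Gaussian hyperplanes; each contributes one bit of the key.
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(nbits)]

def lsh_key(vec, planes):
    # Sign of the projection onto each hyperplane -> one hash bit.
    return tuple(int(sum(p * v for p, v in zip(plane, vec)) >= 0)
                 for plane in planes)

planes = make_hyperplanes(dim=4, nbits=8)
v = [0.3, -0.1, 0.7, 0.2]            # stand-in for a SURF descriptor
buckets = {}
buckets.setdefault(lsh_key(v, planes), []).append("img_042")  # hypothetical id
```

Scaling a descriptor by a positive factor changes no sign, so the key is invariant to rescaling, which suits matching normalized descriptors; bucket construction is also trivially parallelizable across a cluster.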

  2. Retinal image quality assessment based on image clarity and content

    Science.gov (United States)

    Abdel-Hamid, Lamiaa; El-Rafei, Ahmed; El-Ramly, Salwa; Michelson, Georg; Hornegger, Joachim

    2016-09-01

    Retinal image quality assessment (RIQA) is an essential step in automated screening systems to avoid misdiagnosis caused by processing poor quality retinal images. A no-reference transform-based RIQA algorithm is introduced that assesses images based on five clarity and content quality issues: sharpness, illumination, homogeneity, field definition, and content. Transform-based RIQA algorithms have the advantage of considering retinal structures while being computationally inexpensive. Wavelet-based features are proposed to evaluate the sharpness and overall illumination of the images. A retinal saturation channel is designed and used along with wavelet-based features for homogeneity assessment. The presented sharpness and illumination features are utilized to assure adequate field definition, whereas color information is used to exclude nonretinal images. Several publicly available datasets of varying quality grades are utilized to evaluate the feature sets resulting in area under the receiver operating characteristic curve above 0.99 for each of the individual feature sets. The overall quality is assessed by a classifier that uses the collective features as an input vector. The classification results show superior performance of the algorithm in comparison to other methods from literature. Moreover, the algorithm addresses efficiently and comprehensively various quality issues and is suitable for automatic screening systems.
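
As a rough illustration of why wavelet detail coefficients measure sharpness, consider a one-level 1-D Haar transform: the pairwise differences (the detail band) carry more energy for abrupt intensity profiles than for smooth ones. This scalar sketch is only illustrative; the algorithm itself works on 2-D retinal images:

```python
def haar_detail_energy(signal):
    # One-level Haar detail coefficients are scaled pairwise differences;
    # their total energy acts as a simple sharpness measure.
    details = [(signal[i] - signal[i + 1]) / 2
               for i in range(0, len(signal) - 1, 2)]
    return sum(d * d for d in details)

edge_profile = [0.0, 1.0, 0.0, 1.0]      # abrupt intensity changes
smooth_profile = [0.0, 0.33, 0.66, 1.0]  # gentle ramp, as after blurring
```

A blurred (smoothed) image shifts energy out of the detail band, which is exactly the cue a no-reference sharpness feature exploits.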

  3. Image content authentication based on channel coding

    Science.gov (United States)

    Zhang, Fan; Xu, Lei

    2008-03-01

    Content authentication determines whether an image has been tampered with and, if necessary, locates the malicious alterations made to the image. Authentication of a still image or a video is motivated by the recipient's interest, and its principle is that a receiver must be able to identify the source of the document reliably. Several techniques and concepts based on data hiding or steganography have been designed as a means of image authentication. All communication channels are subject to errors introduced by additive Gaussian noise in their environment; such data perturbations cannot be eliminated, but their effect can be minimized by the use of Forward Error Correction (FEC) techniques in the transmitted data stream and by decoders in the receiving system that detect and correct bits in error. This paper presents a color image authentication algorithm based on convolution coding. The high bits of the color digital image are coded with convolution codes for tamper detection and localization, and the authentication messages are hidden in the low bits of the image in order to keep the authentication invisible. The message of each pixel is convolution encoded; after parity check and block interleaving, the redundant bits are embedded in the image offset. Tampering can thus be detected and restored without accessing the original image.
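
The convolution-coding step can be sketched with a textbook rate-1/2, constraint-length-3 encoder (generator polynomials 7 and 5 in octal, a conventional choice, not necessarily the code used in the paper):

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    # Rate-1/2 convolutional encoder: each input bit shifts into a
    # 3-bit register, and two parity bits are emitted per input bit.
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        out.append(bin(state & g1).count("1") % 2)  # parity from generator g1
        out.append(bin(state & g2).count("1") % 2)  # parity from generator g2
    return out
```

Per input bit two parity bits are emitted; after interleaving, redundant bits of this kind are what the scheme embeds in the low bits of the image, letting the decoder both detect and correct tampered bits.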

  4. Optical image hiding based on interference

    Science.gov (United States)

    Zhang, Yan; Wang, Bo

    2009-11-01

    Optical image processing has attracted a lot of attention recently due to its large capacity and high speed. Many image encryption and hiding technologies have been proposed based on optical technology. In conventional image encryption methods, random phase masks are usually used as encryption keys to encode images into random white-noise distributions. However, this kind of method requires interference technology such as holography to record complex amplitude; furthermore, it is vulnerable to attack techniques. Image hiding methods employ phase retrieval algorithms to encode images into two or more phase masks. The hiding process is carried out within a computer and the images are reconstructed optically, but the iterative algorithms need a lot of time to hide an image in the masks. All the methods mentioned above are based on optical diffraction from the phase masks. In this presentation, we propose a new optical image hiding method based on interference. Coherent light passes through two phase masks and is combined by a beam splitter; the two beams interfere with each other and the desired image appears at the pre-designed plane. The two phase distribution masks are designed analytically; therefore, the hiding speed can be greatly improved. Simulation results demonstrate the validity of the proposed method.
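
The analytic mask design rests on a small identity: superposing two unit-amplitude waves gives e^{iφ1} + e^{iφ2} = 2 cos((φ1−φ2)/2) · e^{i(φ1+φ2)/2}, so any target complex value O with |O| ≤ 2 is produced by φ1,2 = arg(O) ± arccos(|O|/2). A per-pixel sketch of that closed-form solution (illustrative, not the authors' exact formulation):

```python
import cmath
import math

def phase_masks(target):
    # Solve exp(i*phi1) + exp(i*phi2) == target for the two mask phases.
    # Requires |target| <= 2, i.e. the field must be reachable with two
    # unit-amplitude beams.
    mag = abs(target)
    assert mag <= 2.0
    mean = cmath.phase(target)        # common phase of the two beams
    half = math.acos(mag / 2.0)       # half the phase difference
    return mean + half, mean - half

p1, p2 = phase_masks(1.2 + 0.5j)
reconstructed = cmath.exp(1j * p1) + cmath.exp(1j * p2)
```

Because the solution is closed-form per pixel, no iterative phase retrieval is needed, which is the claimed speed advantage.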

  5. An overview of medical image data base

    International Nuclear Information System (INIS)

    Nishihara, Eitaro

    1992-01-01

    Recently, computerization in medical institutions has advanced, and the introduction of hospital information systems has been almost completed in large hospitals with more than 500 beds. However, the objects managed by hospital information systems are text information; they do not include the management of the enormous quantity of images. With the progress of image diagnostic equipment, the digitization of medical images has advanced, but image management in hospitals does not yet exploit the merits of digital images. To solve these problems, the picture archiving and communication system (PACS) was proposed about ten years ago; it turns medical images into a database and enables on-line access to images from various places in hospitals, and studies to realize it have continued. The features of medical image data, the present status of utilizing medical image data, the outline of the PACS, the image database for the PACS, the problems in realizing the database, technical trends, and the state of actual construction of PACS installations are reported. (K.I.)

  6. Results from neutron imaging of ICF experiments at NIF

    Science.gov (United States)

    Merrill, F. E.; Danly, C. R.; Fittinghoff, D. N.; Grim, G. P.; Guler, N.; Volegov, P. L.; Wilde, C. H.

    2016-03-01

    In 2011 a neutron imaging diagnostic was commissioned at the National Ignition Facility (NIF). This new system has been used to collect neutron images to measure the size and shape of the burning DT plasma and the surrounding fuel assembly. The imaging technique uses a pinhole neutron aperture placed between the neutron source and a neutron detector. The detection system measures the two-dimensional distribution of neutrons passing through the pinhole. This diagnostic collects two images at two different times. The long flight path for this diagnostic, 28 m, results in a chromatic separation of the neutrons, allowing the independently timed images to measure the source distribution for two neutron energies. Typically one image measures the distribution of the 14 MeV neutrons, and the other image measures the distribution of the 6-12 MeV neutrons. The combination of these two images has provided data on the size and shape of the burning plasma within the compressed capsule, as well as a measure of the quantity and spatial distribution of the cold fuel surrounding this core. Images have been collected for the majority of the experiments performed as part of the ignition campaign. Results from these data have been used to estimate a burn-averaged fuel assembly as well as to provide performance metrics to gauge progress towards ignition. This data set and our interpretation are presented.
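
The chromatic separation over the 28 m flight path is plain time of flight: lower-energy neutrons arrive later, so gating the detector in time selects an energy band. A back-of-the-envelope check with relativistic kinematics (standard neutron rest mass assumed):

```python
import math

M_N = 939.565          # neutron rest mass [MeV]
C = 299_792_458.0      # speed of light [m/s]

def arrival_time(energy_mev, path_m=28.0):
    # Relativistic time of flight for a neutron of given kinetic energy.
    gamma = 1.0 + energy_mev / M_N
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
    return path_m / (beta * C)

# Gap between 14 MeV and 6 MeV arrivals at 28 m, in nanoseconds.
dt_ns = (arrival_time(6.0) - arrival_time(14.0)) * 1e9
```

The few-hundred-nanosecond spread is what lets two independently timed exposures of the same detector image two different neutron energy bands.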

  7. A factorial experiment on image quality and radiation dose

    International Nuclear Information System (INIS)

    Norrman, E.; Persliden, J.

    2005-01-01

    To determine whether factorial experiments can be used in the optimisation of diagnostic imaging, a factorial experiment was performed to investigate some of the factors that influence image quality, kerma-area product (KAP) and effective dose (E). In a factorial experiment the factors are varied together instead of one at a time, making it possible to discover interactions between the factors as well as major effects. The factors studied were tube potential, tube loading, focus size and filtration. Each factor was set to two levels (low and high). The influence of the factors on the response variables (image quality, KAP and E) was studied using a direct digital detector. The major effects of each factor on the response variables were estimated, as well as the interaction effects between factors. Image quality, KAP and E were mainly influenced by tube loading, tube potential and filtration. There were some active interactions, for example between tube potential and filtration and between tube loading and filtration. The study shows that factorial experiments can be used to predict the influence of various parameters on image quality and radiation dose. (authors)
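
Effect estimation in a two-level factorial design reduces to signed averages over the runs: a factor's main effect is the mean response at its high level minus the mean at its low level, and an interaction uses the product of the factor signs. A sketch on synthetic responses for a hypothetical two-factor design (the study itself varied four factors):

```python
def effect(runs, signs_fn):
    # runs: {(s1, s2, ...): response} with each factor setting +1 or -1.
    # The estimate is the signed sum divided by half the number of runs.
    n = len(runs)
    return sum(signs_fn(s) * y for s, y in runs.items()) / (n / 2)

# Synthetic responses following y = 10 + 3*A + 2*B + 1*A*B,
# so the effects (high-minus-low differences) are 6, 4 and 2.
runs = {(-1, -1): 6, (+1, -1): 10, (-1, +1): 8, (+1, +1): 16}
main_a = effect(runs, lambda s: s[0])
main_b = effect(runs, lambda s: s[1])
inter_ab = effect(runs, lambda s: s[0] * s[1])
```

Note that each effect equals twice the model coefficient, since it spans the jump from −1 to +1; a nonzero interaction term is exactly what one-factor-at-a-time experimentation cannot reveal.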

  8. Characterization of lens based photoacoustic imaging system

    Directory of Open Access Journals (Sweden)

    Kalloor Joseph Francis

    2017-12-01

    Full Text Available Some of the challenges in translating photoacoustic (PA) imaging to clinical applications include the limited view of the target tissue, low signal-to-noise ratio and the high cost of developing real-time systems. Acoustic lens based PA imaging systems, also known as PA cameras, are a potential alternative to conventional imaging systems in these scenarios. The 3D focusing action of the lens enables real-time C-scan imaging with a 2D transducer array. In this paper, we model the underlying physics of a PA camera in the mathematical framework of an imaging system and derive a closed-form expression for the point spread function (PSF). Experimental verification follows, including details on how to design and fabricate the lens inexpensively. The system PSF is evaluated over the 3D volume that can be imaged by this PA camera. Its utility is demonstrated by imaging a phantom and an ex vivo human prostate tissue sample.

  10. Image Mosaic Method Based on SIFT Features of Line Segment

    Directory of Open Access Journals (Sweden)

    Jun Zhu

    2014-01-01

    Full Text Available This paper proposes a novel image mosaic method based on the SIFT (Scale Invariant Feature Transform) features of line segments, aiming to handle scaling, rotation, changes in lighting conditions, and so on between two images in the panoramic image mosaic process. The method first uses the Harris corner detection operator to detect key points. Second, it constructs directed line segments, describes them with SIFT features, and matches those directed segments to acquire rough point matching. Finally, the RANSAC method is used to eliminate wrong pairs in order to accomplish the image mosaic. Experiments based on four pairs of images show that our method is strongly robust to resolution, lighting, rotation, and scaling.
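
The final RANSAC step can be illustrated with the simplest possible motion model, a pure 2-D translation (a real mosaic needs at least a similarity or homography model, and the correspondences below are made up): repeatedly fit from one random correspondence and keep the hypothesis with the most inliers.

```python
import random

def ransac_translation(pairs, iters=100, tol=1.0, seed=0):
    # pairs: ((x1, y1), (x2, y2)) point correspondences between two images.
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(pairs)     # minimal sample: 1 pair
        dx, dy = x2 - x1, y2 - y1
        inliers = [((a, b), (c, d)) for (a, b), (c, d) in pairs
                   if abs(c - a - dx) < tol and abs(d - b - dy) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = (dx, dy), inliers
    return best, best_inliers

pairs = [((0, 0), (5, 3)), ((1, 2), (6, 5)), ((3, 1), (8, 4)),
         ((2, 2), (7, 5)), ((0, 1), (9, 9))]      # last pair is a bad match
```

Any sample drawn from the four consistent pairs recovers the true shift (5, 3) and outvotes the outlier; with richer models the same loop simply needs more pairs per sample.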

  11. Understanding images using knowledge based approach

    International Nuclear Information System (INIS)

    Tascini, G.

    1985-01-01

    This paper presents an approach to image understanding focusing on low-level image processing and proposes a rule-based approach as part of a larger knowledge-based system. The overall system has a hierarchical structure that comprises several knowledge-based layers. The main idea is to confine the domain-independent knowledge to the lower levels and to reserve the higher levels for the domain-dependent knowledge, that is, for the interpretation.

  12. Image-based petrophysical parameters

    DEFF Research Database (Denmark)

    Noe-Nygaard, Jakob; Engstrøm, Finn; Sølling, Theis Ivan

    2017-01-01

    …-computed-tomography (nano-CT) images of trim sections and cuttings. Moreover, the trim-section results are upscaled to trim size to form the basis of an additional comparison. The results are also benchmarked against conventional core analysis (CCAL) results on trim-size samples. The comparison shows that petrophysical parameters from CT imaging agree reasonably well with those determined experimentally. The upscaled results show some discrepancy with the nano-CT results, particularly in the case of the low-permeability plug. This is probably because of the challenge in finding a representative subvolume. For the cuttings, … run directly from the micro-CT results on a cutting measured on an in-house instrument; the results clearly show that micro-CT measurements on chalk do not capture the pore space with sufficient detail to be predictive. Overall, with the appropriate resolution, the present study shows…

  13. Image based rendering of iterated function systems

    NARCIS (Netherlands)

    Wijk, van J.J.; Saupe, D.

    2004-01-01

    A fast method to generate fractal imagery is presented. Iterated function systems (IFS) are based on repeatedly copying transformed images. We show that this can be directly translated into standard graphics operations: each image is generated by texture mapping and blending copies of the previous image.

  14. Infrared Imaging for Inquiry-Based Learning

    Science.gov (United States)

    Xie, Charles; Hazzard, Edmund

    2011-01-01

    Based on detecting long-wavelength infrared (IR) radiation emitted by the subject, IR imaging shows temperature distribution instantaneously and heat flow dynamically. As a picture is worth a thousand words, an IR camera has great potential in teaching heat transfer, which is otherwise invisible. The idea of using IR imaging in teaching was first…

  15. Kernel based subspace projection of hyperspectral images

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten

    In hyperspectral image analysis an exploratory approach to analyse the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel based subspace projections of PCA and Maximum Autocorrelation Factors (MAF...

  16. Young adult women's experiences of body image after bariatric surgery

    DEFF Research Database (Denmark)

    Jensen, Janet F; Hoegh-Petersen, Mette; Larsen, Tine B

    2014-01-01

    AIM: To understand the lived experience of body image in young women after obesity surgery. BACKGROUND: Quantitative studies have documented that health-related quality of life and body image are improved after bariatric surgery, probably due to significant weight loss. Female obesity surgery candidates are likely to be motivated by dissatisfaction regarding physical appearance. However, little is known about the experience of the individual woman, leaving little understanding of the association between bariatric surgery and changes in health-related quality of life and body image. DESIGN: … analysed by systematic text condensation influenced by Giorgi's phenomenological method and supplemented by elements from narrative analysis. FINDINGS: The analysis revealed three concepts: solution to an unbearable problem, learning new boundaries and hopes of normalization. These revelatory concepts were…

  17. Studying a free fall experiment using short sequences of images

    International Nuclear Information System (INIS)

    Vera, Francisco; Romanque, Cristian

    2008-01-01

    We discuss a new alternative for obtaining position and time coordinates from a video of a free-fall experiment. In our approach, after converting the video to a short sequence of images, the images are analyzed using a web page application developed by the author. The main advantages of the setup explained in this work are that it is simple to use, no software license fees are necessary, and it can be scaled up for use by a large number of students in introductory physics courses. The steps involved in the full analysis of a falling object are as follows: we grab a short digital video of the experiment and convert it to a sequence of images; then, using a web page that includes all the necessary JavaScript, the student can easily click on the object of interest to obtain the (x, y, t) coordinates; finally, the student analyzes the motion using a spreadsheet.
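
The spreadsheet step can be sketched numerically: for constant acceleration, the second difference of equally spaced positions equals a·Δt², so averaging it over the clicked coordinates recovers g. A sketch on ideal synthetic data (a real track would be noisy and need a least-squares fit):

```python
def gravity_from_trajectory(ys, dt):
    # Constant acceleration => y[i+1] - 2*y[i] + y[i-1] == a * dt**2
    # for equally spaced frames; average the second differences.
    second = [ys[i + 1] - 2 * ys[i] + ys[i - 1] for i in range(1, len(ys) - 1)]
    return sum(second) / len(second) / dt ** 2

dt = 1 / 30                                            # 30 frames per second
ys = [0.5 * 9.81 * (i * dt) ** 2 for i in range(10)]   # ideal free fall [m]
```

The same arithmetic is easy to reproduce in a spreadsheet column, which is the point of the exercise for introductory students.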

  18. Experience based reliability centered maintenance

    International Nuclear Information System (INIS)

    Haenninen, S.; Laakso, K.

    1993-03-01

    The systematic analysis and documentation of operating experiences should be included in a living NPP life management program. Failure mode and effects and maintenance effects analyses are suitable methods for analysis of the failure and corrective maintenance experiences of equipment. Combined use of the information on functional failures that have occurred and the decision tree logic of reliability centered maintenance identifies applicable and effective preventive maintenance tasks for equipment in an old plant. In this study the electrical motor drives of closing and isolation valves (MOV) of the TVO and Loviisa nuclear power plants were selected to serve as pilot study objects. The study was limited to valve drives having actuators manufactured by AUMA in Germany. The fault and maintenance history of MOVs from 1981 up to and including October 1991 in different safety and process systems at the TVO 1 and 2 nuclear power units was first analyzed in a systematic way. The scope of the components studied was 81 MOVs in safety-related systems and 127 other MOVs per TVO unit. In the case of the Loviisa plant, the observation period was limited to three years, i.e. from February 1989 up to February 1992. The scope of the Loviisa 1 and 2 components studied was 44 and 95 MOVs, respectively. (25 refs., 22 figs., 8 tabs.)

  19. Image based Monument Recognition using Graph based Visual Saliency

    DEFF Research Database (Denmark)

    Kalliatakis, Grigorios; Triantafyllidis, Georgios

    2013-01-01

    This article presents an image-based application aiming at simple image classification of well-known monuments in the area of Heraklion, Crete, Greece. This classification takes place by utilizing Graph Based Visual Saliency (GBVS) and employing Scale Invariant Feature Transform (SIFT) or Speeded Up Robust Features (SURF)…, the images have been previously processed according to the Graph Based Visual Saliency model in order to keep either SIFT or SURF features corresponding to the actual monuments while the background "noise" is minimized. The application is then able to classify these images, helping the user to better…

  20. Psychoanalytic Bases for One's Image of God: Fact or Artifact?

    Science.gov (United States)

    Buri, John R.

    As a result of Freud's seminal postulations of the psychoanalytic bases for one's God-concept, it is a frequently accepted hypothesis that an individual's image of God is largely a reflection of experiences with and feelings toward one's own father. While such speculations as to an individual's phenomenological conceptions of God have an…

  1. A Subdivision-Based Representation for Vector Image Editing.

    Science.gov (United States)

    Liao, Zicheng; Hoppe, Hugues; Forsyth, David; Yu, Yizhou

    2012-11-01

    Vector graphics has been employed in a wide variety of applications due to its scalability and editability. Editability is a high priority for artists and designers who wish to produce vector-based graphical content with user interaction. In this paper, we introduce a new vector image representation based on piecewise smooth subdivision surfaces, which is a simple, unified and flexible framework that supports a variety of operations, including shape editing, color editing, image stylization, and vector image processing. These operations effectively create novel vector graphics by reusing and altering existing image vectorization results. Because image vectorization yields an abstraction of the original raster image, controlling the level of detail of this abstraction is highly desirable. To this end, we design a feature-oriented vector image pyramid that offers multiple levels of abstraction simultaneously. Our new vector image representation can be rasterized efficiently using GPU-accelerated subdivision. Experiments indicate that our vector image representation achieves high visual quality and better supports editing operations than existing representations.

  2. Comic image understanding based on polygon detection

    Science.gov (United States)

    Li, Luyuan; Wang, Yongtao; Tang, Zhi; Liu, Dong

    2013-01-01

    Comic image understanding aims to automatically decompose scanned comic page images into storyboards and then identify their reading order, which is the key technique to produce digital comic documents that are suitable for reading on mobile devices. In this paper, we propose a novel comic image understanding method based on polygon detection. First, we segment a comic page image into storyboards by finding the polygonal enclosing box of each storyboard. Then, each storyboard can be represented by a polygon, and the reading order is determined by analyzing the relative geometric relationship between each pair of polygons. The proposed method is tested on 2000 comic images from ten printed comic series, and the experimental results demonstrate that it works well on different types of comic images.
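
The reading-order analysis can be sketched for the common case where storyboards reduce to axis-aligned boxes: group them into rows by similar top edges, then read each row left to right. This is a simplification of the paper's pairwise polygon analysis, and many comics read right to left, which flipping the within-row sort would handle:

```python
def reading_order(boxes, row_tol=10):
    # boxes: (x, y, w, h) storyboard bounding boxes; y grows downward.
    rows = []
    for box in sorted(boxes, key=lambda b: b[1]):
        # Same row if the top edge is close to the row's first box.
        if rows and abs(rows[-1][0][1] - box[1]) <= row_tol:
            rows[-1].append(box)
        else:
            rows.append([box])
    ordered = []
    for row in rows:
        ordered.extend(sorted(row, key=lambda b: b[0]))  # left to right
    return ordered

boxes = [(200, 0, 90, 90), (100, 0, 90, 90), (0, 5, 90, 90), (0, 120, 290, 90)]
```

General polygons need the pairwise geometric tests the paper describes, but the row-then-column ordering above is the target output in both cases.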

  3. Monotonicity-based electrical impedance tomography for lung imaging

    Science.gov (United States)

    Zhou, Liangdong; Harrach, Bastian; Seo, Jin Keun

    2018-04-01

    This paper presents a monotonicity-based spatiotemporal conductivity imaging method for continuous regional lung monitoring using electrical impedance tomography (EIT). The EIT data (i.e. the boundary current-voltage data) can be decomposed into pulmonary, cardiac and other parts using their different periodic natures. The time-differential current-voltage operator corresponding to lung ventilation can be viewed as either positive or negative semidefinite owing to the monotonic conductivity changes within the lung regions. We used these monotonicity constraints to improve the quality of lung EIT imaging. We tested the proposed methods in numerical simulations, phantom experiments and human experiments.

  4. Fast single image dehazing based on image fusion

    Science.gov (United States)

    Liu, Haibo; Yang, Jie; Wu, Zhengping; Zhang, Qingnian

    2015-01-01

    Images captured in foggy weather conditions often fade the colors and reduce the contrast of the observed objects. An efficient image fusion method is proposed to remove haze from a single input image. First, the initial medium transmission is estimated based on the dark channel prior. Second, the method adopts an assumption that the degradation level affected by haze of each region is the same, which is similar to the Retinex theory, and uses a simple Gaussian filter to get the coarse medium transmission. Then, pixel-level fusion is achieved between the initial medium transmission and coarse medium transmission. The proposed method can recover a high-quality haze-free image based on the physical model, and the complexity of the proposed method is only a linear function of the number of input image pixels. Experimental results demonstrate that the proposed method can allow a very fast implementation and achieve better restoration for visibility and color fidelity compared to some state-of-the-art methods.
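
The transmission estimate and recovery step can be sketched per pixel from the atmospheric scattering model I = J·t + A·(1 − t): the dark channel (minimum colour component) approximates haze density, yields a transmission t, and inverting the model recovers the scene radiance J. The ω and t0 values below are conventional choices, and the real method estimates t over regions rather than single pixels:

```python
def dehaze_pixel(rgb, atmosphere=1.0, omega=0.95, t0=0.1):
    # Dark channel of a single pixel: the minimum colour component.
    dark = min(rgb)
    # Transmission from the dark channel prior, clamped to avoid blow-up.
    t = max(1.0 - omega * dark / atmosphere, t0)
    # Invert I = J*t + A*(1 - t) to recover the haze-free radiance J.
    j = tuple((c - atmosphere) / t + atmosphere for c in rgb)
    return j, t
```

A pixel with a zero dark channel is treated as haze-free (t = 1) and passes through unchanged, while bright greyish pixels get low transmission and strong correction; the paper's Gaussian filtering then smooths the coarse transmission before the fusion step.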

  5. Destroying False Images of God: The Experiences of LGBT Catholics.

    Science.gov (United States)

    Deguara, Angele

    2018-01-01

    This article is about how lesbian, gay, bisexual, and trans (LGBT) Catholics imagine God and how images of God change in parallel with their self-image. The study is based on qualitative research with LGBT Catholics, most of whom are members of Drachma LGBTI in Malta or Ali d'Aquila in Palermo, Sicily. LGBT Catholics' image of God changes as they struggle to reconcile their religious and sexual identities and as they go through a process of "conversion" from deviants and sinners to loved children of God. One study participant compares his faith in God to peeling an onion: "With every layer one peels off, one destroys false images of God." Most study participants have moved away from the image of God as a bearded old man and father of creation and moved more toward a conception of God as love once identity conflicts are resolved.

  6. Chaotic Image Encryption Algorithm Based on Circulant Operation

    Directory of Open Access Journals (Sweden)

    Xiaoling Huang

    2013-01-01

    Full Text Available A novel chaotic image encryption scheme based on the time-delay Lorenz system is presented in this paper, formulated in terms of circulant matrices. Making use of the chaotic sequence generated by the time-delay Lorenz system, pixel permutation is carried out in the diagonal and antidiagonal directions according to the first and second components. Then, a pseudorandom chaotic sequence is generated again from the time-delay Lorenz system using all components. Modular operation is further employed for block-wise diffusion, in which the control parameter is generated depending on the plain image. Numerical experiments show that the proposed scheme possesses a large key space to resist brute-force attack, sensitive dependence on the secret keys, a uniform distribution of gray values in the cipher image, and zero correlation between two adjacent cipher-image pixels. Therefore, it can be adopted as an effective and fast image encryption algorithm.
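
The chaos-driven permutation stage can be sketched with the plain logistic map standing in for the time-delay Lorenz system (an illustrative simplification): ranking the chaotic sequence yields a key-dependent pixel permutation that the receiver can invert exactly.

```python
def logistic_sequence(n, x0=0.3579, r=3.99):
    # Iterate the logistic map x -> r*x*(1-x); chaotic for r near 4.
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def permutation_from_key(n, x0=0.3579, r=3.99):
    # Ranking the chaotic values gives a key-dependent permutation.
    seq = logistic_sequence(n, x0, r)
    return sorted(range(n), key=seq.__getitem__)

def scramble(pixels, perm):
    return [pixels[i] for i in perm]

def unscramble(pixels, perm):
    out = [None] * len(pixels)
    for j, i in enumerate(perm):
        out[i] = pixels[j]
    return out

perm = permutation_from_key(16)
cipher = scramble(list(range(16)), perm)
```

The key (x0, r) fully determines the permutation, and tiny changes to it produce a completely different ordering, which is the sensitivity property the abstract relies on; diffusion would then follow as a separate modular step.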

  7. Chaotic Image Scrambling Algorithm Based on S-DES

    International Nuclear Information System (INIS)

    Yu, X Y; Zhang, J; Ren, H E; Xu, G S; Luo, X Y

    2006-01-01

    With the rising security requirements for images on networks, some typical image encryption methods, such as the Arnold cat map and the Hilbert transformation, can no longer meet the demands of encryption. The S-DES system can encrypt the binary stream of an image, but its fixed structure and small key space still pose risks. The Logistic chaotic map's sensitivity to initial values, however, makes it well suited to S-DES, giving S-DES far greater randomness and many more keys. A dual image encryption algorithm based on S-DES and the Logistic map is therefore proposed. Matlab simulation experiments show that the key space reaches 10^17 and that encrypting one image takes less than one second. Compared with traditional methods, the algorithm is easy to understand, fast, rich in keys and sensitive to initial values
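The core idea, feeding S-DES with a stream of chaos-derived keys instead of one fixed key, can be sketched as follows. S-DES itself is omitted; any implementation that accepts a 10-bit key could be plugged in, and the map parameters here are illustrative, not the paper's.

```python
# Sketch: deriving a stream of 10-bit S-DES keys from a logistic map,
# so the block cipher no longer reuses a single fixed key.

def logistic_keys(x0, r, n_keys):
    """Yield n_keys 10-bit integers from logistic-map iterates."""
    x, keys = x0, []
    # Discard transient iterates so the orbit settles onto the attractor.
    for _ in range(100):
        x = r * x * (1 - x)
    for _ in range(n_keys):
        x = r * x * (1 - x)
        keys.append(int(x * 1024) & 0x3FF)  # map (0, 1) onto 10 bits
    return keys
```

Sensitivity to the initial value is the point: a perturbation of 1e-7 in x0 yields an entirely different key stream after the transient.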

  8. Web based tools for visualizing imaging data and development of XNATView, a zero footprint image viewer.

    Science.gov (United States)

    Gutman, David A; Dunn, William D; Cobb, Jake; Stoner, Richard M; Kalpathy-Cramer, Jayashree; Erickson, Bradley

    2014-01-01

    Advances in web technologies now allow direct visualization of imaging data sets without necessitating the download of large file sets or the installation of software. This allows centralization of file storage and facilitates image review and analysis. XNATView is a light framework recently developed in our lab to visualize DICOM images stored in The Extensible Neuroimaging Archive Toolkit (XNAT). It consists of a PyXNAT-based framework to wrap around the REST application programming interface (API) and query the data in XNAT. XNATView was developed to simplify quality assurance, help organize imaging data, and facilitate data sharing for intra- and inter-laboratory collaborations. Its zero-footprint design allows the user to connect to XNAT from a web browser, navigate through projects, experiments, and subjects, and view DICOM images with accompanying metadata all within a single viewing instance.

  9. A multicore based parallel image registration method.

    Science.gov (United States)

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L; Foran, David J

    2009-01-01

    Image registration is a crucial step in many image-assisted clinical applications such as surgery planning and treatment evaluation. In this paper we propose a landmark-based nonlinear image registration algorithm for matching 2D image pairs. The algorithm is shown to be effective and robust under large deformations. In landmark-based registration, the most important step is establishing correspondence among the selected landmark points, which usually requires an extensive and computationally expensive search. We introduce a nonregular data partition scheme that uses K-means clustering to group the landmarks according to the number of available processing cores; this step optimizes memory usage and data transfer. We have tested our method on the IBM Cell Broadband Engine (Cell/B.E.) platform.
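The K-means landmark partition the abstract mentions can be sketched as plain Lloyd iterations: one group per core, so each core matches a spatially coherent cluster and cross-core data transfer stays small. The evenly spaced initialization is an illustrative simplification to keep the sketch deterministic.

```python
# Sketch: partition 2D landmark points into one group per processing core
# using k-means (Lloyd's algorithm).

def kmeans_partition(points, n_cores, iters=20):
    # Evenly spaced initial centers keep the sketch deterministic.
    centers = [points[i * len(points) // n_cores] for i in range(n_cores)]
    assign = [0] * len(points)
    for _ in range(iters):
        for i, (x, y) in enumerate(points):
            assign[i] = min(range(n_cores),
                            key=lambda c: (x - centers[c][0]) ** 2 +
                                          (y - centers[c][1]) ** 2)
        for c in range(n_cores):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:  # keep the old center if a cluster empties out
                centers[c] = (sum(p[0] for p in members) / len(members),
                              sum(p[1] for p in members) / len(members))
    groups = [[] for _ in range(n_cores)]
    for i, c in enumerate(assign):
        groups[c].append(points[i])
    return groups
```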

  10. Image compression software for the SOHO LASCO and EIT experiments

    Science.gov (United States)

    Grunes, Mitchell R.; Howard, Russell A.; Hoppel, Karl; Mango, Stephen A.; Wang, Dennis

    1994-01-01

    This paper describes the lossless and lossy image compression algorithms to be used on board the Solar Heliospheric Observatory (SOHO) in conjunction with the Large Angle Spectrometric Coronagraph (LASCO) and Extreme Ultraviolet Imaging Telescope (EIT) experiments. It also shows preliminary results obtained on similar prior imagery and discusses the lossy compression artifacts that will result. The paper is in part intended for SOHO investigators who need to understand the results of SOHO compression in order to make the best use of the transmission bits they have been allocated.

  11. Models for Patch-Based Image Restoration

    Directory of Open Access Journals (Sweden)

    Petrovic Nemanja

    2009-01-01

    Full Text Available Abstract We present a supervised learning approach for object-category specific restoration, recognition, and segmentation of images which are blurred using an unknown kernel. The novelty of this work is a multilayer graphical model which unifies the low-level vision task of restoration and the high-level vision task of recognition in a cooperative framework. The graphical model is an interconnected two-layer Markov random field. The restoration layer accounts for the compatibility between sharp and blurred images and models the association between adjacent patches in the sharp image. The recognition layer encodes the entity class and its location in the underlying scene. The potentials are represented using nonparametric kernel densities and are learnt from training data. Inference is performed using nonparametric belief propagation. Experiments demonstrate the effectiveness of our model for the restoration and recognition of blurred license plates as well as face images.

  12. Models for Patch-Based Image Restoration

    Directory of Open Access Journals (Sweden)

    Mithun Das Gupta

    2009-01-01

    Full Text Available We present a supervised learning approach for object-category specific restoration, recognition, and segmentation of images which are blurred using an unknown kernel. The novelty of this work is a multilayer graphical model which unifies the low-level vision task of restoration and the high-level vision task of recognition in a cooperative framework. The graphical model is an interconnected two-layer Markov random field. The restoration layer accounts for the compatibility between sharp and blurred images and models the association between adjacent patches in the sharp image. The recognition layer encodes the entity class and its location in the underlying scene. The potentials are represented using nonparametric kernel densities and are learnt from training data. Inference is performed using nonparametric belief propagation. Experiments demonstrate the effectiveness of our model for the restoration and recognition of blurred license plates as well as face images.

  13. An image adaptive, wavelet-based watermarking of digital images

    Science.gov (United States)

    Agreste, Santa; Andaloro, Guido; Prestipino, Daniela; Puccio, Luigia

    2007-12-01

    In digital content management, multimedia content and data can easily be used illegally: copied, modified and redistributed. Copyright protection, the intellectual and material rights of authors, owners, buyers and distributors, and the authenticity of content are crucial factors in solving an urgent and real problem. In this scenario, digital watermarking techniques are emerging as a valid solution. In this paper we describe an algorithm, called WM2.0, for an invisible watermark: private, strong, wavelet-based, and developed for protecting and authenticating digital images. The choice of the discrete wavelet transform (DWT) is motivated by its good time-frequency localization and its good match with the human visual system; both properties are important for building an invisible and robust watermark. WM2.0 works in a dual scheme: watermark embedding and watermark detection. The watermark is embedded into the high-frequency DWT components of a specific sub-image and is computed in correlation with the image features and statistical properties. Watermark detection applies a re-synchronization between the original and watermarked images. The correlation between the watermarked DWT coefficients and the watermark signal is evaluated according to the Neyman-Pearson criterion. Experiments on a large set of images have shown the watermark to be resistant against geometric, filtering and StirMark attacks, with a low false-alarm rate.
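The embed/detect pipeline can be illustrated in 1-D: a one-level Haar DWT, additive watermarking of the high-frequency (detail) band, and correlation-based detection. This is a didactic reduction, the paper works on 2-D sub-images with a statistical detection threshold; the 1-D signal simply keeps the idea visible.

```python
# Sketch: additive watermark in the detail band of a one-level Haar DWT,
# with correlation-based detection. Even-length signals assumed.

def haar(signal):
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def inverse_haar(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def embed(signal, watermark, alpha=2.0):
    """Add alpha-scaled watermark to the detail (high-frequency) band."""
    approx, detail = haar(signal)
    detail = [d + alpha * w for d, w in zip(detail, watermark)]
    return inverse_haar(approx, detail)

def detect(signal, watermark):
    """Correlation between detail coefficients and the watermark."""
    _, detail = haar(signal)
    return sum(d * w for d, w in zip(detail, watermark))
```

Embedding raises the correlation by exactly alpha times the watermark's energy, which is what a Neyman-Pearson threshold would be set against.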

  14. Image annotation based on positive-negative instances learning

    Science.gov (United States)

    Zhang, Kai; Hu, Jiwei; Liu, Quan; Lou, Ping

    2017-07-01

    Automatic image annotation remains a difficult task in computer vision; its main purpose is to manage the massive number of images on the Internet and to assist intelligent retrieval. This paper presents a new image annotation model based on a visual bag of words, using low-level features such as color and texture as well as mid-level features such as SIFT, and mixing the pic2pic, label2pic and label2label correlations to measure the degree of correlation between labels and images. We aim to prune the specific features for each label and formalize annotation as a learning process based on positive-negative instances learning. Experiments on the Corel5K dataset show quite promising results compared with existing methods.

  15. A fractal-based image encryption system

    KAUST Repository

    Abd-El-Hafiz, S. K.

    2014-12-01

    This study introduces a novel image encryption system based on diffusion and confusion processes in which the image information is hidden inside the complex details of fractal images. A simplified encryption technique is, first, presented using a single-fractal image and statistical analysis is performed. A general encryption system utilising multiple fractal images is, then, introduced to improve the performance and increase the encryption key up to hundreds of bits. This improvement is achieved through several parameters: feedback delay, multiplexing and independent horizontal or vertical shifts. The effect of each parameter is studied separately and, then, they are combined to illustrate their influence on the encryption quality. The encryption quality is evaluated using different analysis techniques such as correlation coefficients, differential attack measures, histogram distributions, key sensitivity analysis and the National Institute of Standards and Technology (NIST) statistical test suite. The obtained results show great potential compared to other techniques.
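The single-fractal variant described first in the abstract can be sketched as follows: a keystream drawn from Mandelbrot escape-time values is XORed with the image bytes, hiding the data inside the fractal's complex detail. The sampled window of the complex plane acts as the key; the shifts, feedback delay and multiplexing of the full system are omitted, and the sampling parameters here are illustrative.

```python
# Sketch: XOR image bytes against an escape-time keystream sampled
# from the Mandelbrot set. The sampled region is the (illustrative) key.

def mandelbrot_bytes(n, x0=-0.75, y0=0.3, step=1e-3, max_iter=255):
    """Escape-time values sampled along a line in the complex plane."""
    stream = []
    for k in range(n):
        c = complex(x0 + k * step, y0)
        z, it = 0j, 0
        while abs(z) <= 2 and it < max_iter:
            z = z * z + c
            it += 1
        stream.append(it)  # always in 1..255, so every byte is changed
    return stream

def fractal_xor(data, **key):
    """XOR is an involution: the same call encrypts and decrypts."""
    ks = mandelbrot_bytes(len(data), **key)
    return [b ^ k for b, k in zip(data, ks)]
```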

  16. PIXEL PATTERN BASED STEGANOGRAPHY ON IMAGES

    Directory of Open Access Journals (Sweden)

    R. Rejani

    2015-02-01

    Full Text Available A drawback of most existing steganography methods is that they alter the bits used for storing color information; LSB- and MSB-based steganography are examples. Various existing techniques, such as the Dynamic RGB Intensity Based Steganography Scheme and Secure RGB Image Steganography from Pixel Indicator to Triple Algorithm, can be used to identify the steganography method employed and break it. Another drawback of existing methods is that they add noise to the image, making it look dull or grainy and raising suspicion that a hidden message exists within it. To overcome these shortcomings we propose a pixel-pattern-based steganography that hides the message in an image by using the existing RGB values at pixel level wherever possible, or with minimal changes. Along with the image, a key is used to decrypt the message stored at pixel level. For further protection, both the stored message and the key file are encrypted, with the same or different keys for decryption. We therefore call it an RGB pixel-pattern-based steganography.
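The central idea, reusing pixel values that already encode the message instead of overwriting them, can be sketched as a search. This is a simplified illustration, not the paper's scheme: here the "pattern" is just the parity of the green channel, and the key is the list of matching positions; the paper additionally encrypts both the message and the key file.

```python
# Sketch: hide bits by locating pixels whose existing green-channel
# parity already matches each message bit; the positions form the key
# and the cover image is left untouched.

def hide(pixels, bits):
    """pixels: list of (r, g, b) tuples; returns positions acting as the key."""
    key, start = [], 0
    for bit in bits:
        for i in range(start, len(pixels)):
            if pixels[i][1] % 2 == bit:  # existing value already encodes the bit
                key.append(i)
                start = i + 1
                break
        else:
            raise ValueError("cover image too small for message")
    return key

def reveal(pixels, key):
    return [pixels[i][1] % 2 for i in key]
```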

  17. Image standards in Tissue-Based Diagnosis (Diagnostic Surgical Pathology

    Directory of Open Access Journals (Sweden)

    Vollmer Ekkehard

    2008-04-01

    Full Text Available Abstract Background Progress in automated image analysis, virtual microscopy, hospital information systems, and interdisciplinary data exchange requires image standards to be applied in tissue-based diagnosis. Aims To describe the theoretical background, practical experiences and comparable solutions in other medical fields in order to promote image standards applicable to diagnostic pathology. Theory and experiences Images used in tissue-based diagnosis present pathology-specific characteristics. It seems appropriate to discuss their characteristics and potential standardization in relation to the levels of hierarchy in which they appear. All levels can be divided into legal, medical, and technological properties. Standards applied to the first level include regulations or aims to be fulfilled. In legal properties, they have to regulate features of privacy, image documentation, transmission, and presentation; in medical properties, features of disease-image combination, human diagnostics, automated information extraction, and archive retrieval and access; and in technological properties, features of image acquisition, display, formats, transfer speed, safety, and system dynamics. The next lower, second level has to implement the prescriptions of the upper one, i.e. describe how they are implemented. Legal aspects should demand secure encryption for the privacy of all patient-related data; image archives that include all images used for diagnostics for a period of at least 10 years; accurate annotations of dates and viewing; and precise hardware and software information. Medical aspects should demand standardized patient files such as DICOM 3 or HL7, including history and previous examinations; information on image display hardware and software, on image resolution and fields of view, and on the relation between the sizes of biological objects and image sizes; and access to archives and retrieval. Technological aspects should deal with image

  18. Greenhouse gas Laser Imaging Tomography Experiment (GreenLITE)

    Energy Technology Data Exchange (ETDEWEB)

    Dobler, Jeremy [Exelis Inc., Fort Wayne, IN (United States); Zaccheo, T. Scott [Exelis Inc., Fort Wayne, IN (United States); Blume, Nathan [Exelis Inc., Fort Wayne, IN (United States); Pernini, Timothy [Exelis Inc., Fort Wayne, IN (United States); Braun, Michael [Exelis Inc., Fort Wayne, IN (United States); Botos, Christopher [Exelis Inc., Fort Wayne, IN (United States)

    2016-03-31

    This report describes the development and testing of a novel system, the Greenhouse gas Laser Imaging Tomography Experiment (GreenLITE), for Monitoring, Reporting and Verification (MRV) of CO2 at Geological Carbon Storage (GCS) sites. The system consists of a pair of laser-based transceivers, a number of retroreflectors, and a set of cloud-based data processing, storage and dissemination tools, which together enable 2-D mapping of CO2 in near real time. A system was built, tested locally in New Haven, Indiana, and then deployed to the Zero Emissions Research and Technology (ZERT) facility in Bozeman, MT. Testing at ZERT demonstrated the ability of the GreenLITE system to identify and map small underground leaks in the presence of other biological sources and widely varying background concentrations. The system was then ruggedized and tested at the Harris test site in New Haven, IN, during winter while exposed to temperatures as low as -15 °C. Additional testing was conducted using simulated concentration enhancements to validate the 2-D retrieval accuracy; this test established high confidence in the ability of the reconstruction to locate sources with a resolution of tens of meters in this configuration. Finally, the system was deployed for approximately 6 months to an active industrial site, the Illinois Basin – Decatur Project (IBDP), where >1M metric tons of CO2 had been injected into an underground sandstone formation. The main objective of this final deployment was to demonstrate autonomous operation over a wide range of environmental conditions with very little human interaction, and to demonstrate the feasibility of the system for long-term deployment in a GCS environment.

  19. Model-Based Referenceless Quality Metric of 3D Synthesized Images Using Local Image Description.

    Science.gov (United States)

    Gu, Ke; Jakhetiya, Vinit; Qiao, Jun-Fei; Li, Xiaoli; Lin, Weisi; Thalmann, Daniel

    2017-07-28

    New challenges have arisen with emerging 3D-related technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR). Free viewpoint video (FVV), which allows flexible selection of viewing direction and viewpoint and has applications in remote surveillance, remote education, and related areas, has been perceived as the development direction of next-generation video technologies and has drawn the attention of a wide range of researchers. Since FVV images are synthesized via a depth image-based rendering (DIBR) procedure in a "blind" environment (without reference images), a reliable real-time blind quality evaluation and monitoring system is urgently required. Existing assessment metrics, however, do not render human judgments faithfully, mainly because of the geometric distortions generated by DIBR. To this end, this paper proposes a novel referenceless quality metric for DIBR-synthesized images using autoregression (AR)-based local image description. It was found that, after AR prediction, the reconstruction error between a DIBR-synthesized image and its AR-predicted image accurately captures the geometric distortion. Visual saliency is then leveraged to improve the proposed blind quality metric by a sizable margin. Experiments validate the superiority of our no-reference quality method over prevailing full-, reduced- and no-reference models.
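The AR-based local description can be illustrated in 1-D: fit a low-order autoregressive model to a signal by least squares, predict each sample from its predecessors, and use the prediction error as the distortion indicator (geometric distortions raise it). The paper fits AR models over 2-D local neighborhoods; the order-2 causal model below is a didactic reduction.

```python
# Sketch: order-2 AR fit by least squares and the resulting
# prediction-error map used as a distortion indicator.

def fit_ar2(x):
    """Least-squares AR(2) coefficients for x[i] ~ a*x[i-1] + b*x[i-2]."""
    s11 = s12 = s22 = t1 = t2 = 0.0
    for i in range(2, len(x)):
        s11 += x[i - 1] * x[i - 1]
        s12 += x[i - 1] * x[i - 2]
        s22 += x[i - 2] * x[i - 2]
        t1 += x[i] * x[i - 1]
        t2 += x[i] * x[i - 2]
    det = s11 * s22 - s12 * s12  # solve the 2x2 normal equations
    return (t1 * s22 - t2 * s12) / det, (s11 * t2 - s12 * t1) / det

def ar_error(x):
    """Absolute prediction error at each sample (from index 2 on)."""
    a, b = fit_ar2(x)
    return [abs(x[i] - (a * x[i - 1] + b * x[i - 2])) for i in range(2, len(x))]
```

A signal consistent with an AR(2) model yields near-zero error everywhere, while a local disruption (the 1-D analogue of a DIBR geometry artifact) produces a clear error spike.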

  20. Parallel CT image reconstruction based on GPUs

    International Nuclear Information System (INIS)

    Flores, Liubov A.; Vidal, Vicent; Mayo, Patricia; Rodenas, Francisco; Verdú, Gumersindo

    2014-01-01

    In X-ray computed tomography (CT), iterative methods are more suitable than analytical ones for reconstructing images with high contrast and precision from a small number of noisy projections. In practice, however, these methods are not widely used because of the high computational cost of their implementation, a drawback that current technology can now reduce effectively. The goal of this work is to develop a fast GPU-based algorithm to reconstruct high-quality images from undersampled and noisy projection data. - Highlights: • We developed a GPU-based iterative algorithm to reconstruct images. • Iterative algorithms can reconstruct images from an undersampled set of projections. • The computational cost of the implementation of the developed algorithm is low. • The efficiency of the algorithm increases for large-scale problems

  1. DENOTATIVE ORIGINS OF ABSTRACT IMAGES IN LINGUISTIC EXPERIMENT

    Directory of Open Access Journals (Sweden)

    Elina, E.

    2017-03-01

    Full Text Available The article discusses the refusal of denotation (the subject) as the basic principle of abstract images, and the semiotic problems arising from this principle: how can the contradiction between the non-objectivity and the iconic nature of the image be resolved? Is it correct, in the absence of denotation, to treat abstract representation as a single-level entity? It is proposed to address these questions with a psycholinguistic experiment in which verbal interpretations of abstract images by both experienced and "naive" audiences demonstrate the objectivity of the perception of denotative "traces" and the presence of a denotative invariant in abstract form.

  2. Intelligent image retrieval based on radiology reports

    Energy Technology Data Exchange (ETDEWEB)

    Gerstmair, Axel; Langer, Mathias; Kotter, Elmar [University Medical Center Freiburg, Department of Diagnostic Radiology, Freiburg (Germany); Daumke, Philipp; Simon, Kai [Averbis GmbH, Freiburg (Germany)

    2012-12-15

    To create an advanced image retrieval and data-mining system based on in-house radiology reports. Radiology reports are semantically analysed using natural language processing (NLP) techniques and stored in a state-of-the-art search engine. Images referenced by sequence and image number in the reports are retrieved from the picture archiving and communication system (PACS) and stored for later viewing. A web-based front end is used as an interface to query for images and show the results with the retrieved images and report text. Using a comprehensive radiological lexicon for the underlying terminology, the search algorithm also finds results for synonyms, abbreviations and related topics. The test set was 108 manually annotated reports analysed by different system configurations. Best results were achieved using full syntactic and semantic analysis with a precision of 0.929 and recall of 0.952. Operating successfully since October 2010, 258,824 reports have been indexed and a total of 405,146 preview images are stored in the database. Data-mining and NLP techniques provide quick access to a vast repository of images and radiology reports with both high precision and recall values. Consequently, the system has become a valuable tool in daily clinical routine, education and research. (orig.)
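The retrieval idea, indexing report text and expanding queries through a lexicon of synonyms, can be sketched with a toy inverted index. The synonym entries below are illustrative stand-ins, not the system's radiological lexicon, and the real system also performs syntactic and semantic analysis before indexing.

```python
# Sketch: inverted index over report text with a small synonym table,
# so a query also matches related terms.

def build_index(reports):
    """reports: {report_id: text}; returns {term: set of report_ids}."""
    index = {}
    for rid, text in reports.items():
        for term in text.lower().split():
            index.setdefault(term, set()).add(rid)
    return index

SYNONYMS = {"tumour": ["tumor", "neoplasm"]}  # illustrative lexicon entry

def search(index, query):
    terms = [query.lower()] + SYNONYMS.get(query.lower(), [])
    hits = set()
    for t in terms:
        hits |= index.get(t, set())
    return hits
```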

  3. IMAGE ANALYSIS BASED ON EDGE DETECTION TECHNIQUES

    Institute of Scientific and Technical Information of China (English)

    纳瑟; 刘重庆

    2002-01-01

    A method incorporating edge detection, Markov random fields (MRF), watershed segmentation and merging techniques is presented for image segmentation and edge detection. It first applies an edge detection technique to obtain a Difference In Strength (DIS) map. An initial segmentation is obtained using K-means clustering and the minimum-distance rule. The region process is then modeled by an MRF to obtain an image containing regions of different intensity; gradient values are calculated and the watershed technique is applied. The DIS value computed for each pixel identifies all edges (weak or strong) in the image, and the resulting DIS map serves as prior knowledge about likely region boundaries for the MRF step, which yields an image containing all edge and region information. In the MRF model, the gray level l at pixel location i in an image X depends on the gray levels of the neighboring pixels. The segmentation results are improved by the watershed algorithm. After all pixels of the segmented regions are processed, a map of primitive regions with edges is generated; the edge map is obtained by a merging process based on averaged intensity mean values. Common edge detectors are then run on the MRF-segmented image and the results are compared. The final result is one closed boundary per actual region in the image.
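The first step of the pipeline can be sketched as follows. The DIS map is approximated here simply as the maximum absolute gray-level difference between a pixel and its 4-neighbours; the paper's actual edge-strength computation may differ.

```python
# Sketch: per-pixel edge-strength (DIS-style) map, approximated as the
# maximum absolute difference to any 4-neighbour.

def dis_map(img):
    """img: 2D list of gray levels; returns a same-sized strength map."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    out[y][x] = max(out[y][x], abs(img[y][x] - img[ny][nx]))
    return out
```

High values mark candidate edges (weak or strong); a threshold on this map would feed the subsequent segmentation steps.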

  4. Composite Techniques Based Color Image Compression

    Directory of Open Access Journals (Sweden)

    Zainab Ibrahim Abood

    2017-03-01

    Full Text Available Compression of color images is now necessary for transmission and storage in databases, since color gives any object a pleasing and natural appearance. Three composite-technique-based color image compression schemes are therefore implemented to achieve high compression, no loss of the original image, better performance and good image quality: the composite stationary wavelet technique (S), the composite wavelet technique (W) and the composite multi-wavelet technique (M). For the high-energy sub-band of the 3rd level of each composite transform in each technique, the compression parameters are calculated. The best composite transform among the 27 types is the three-level multi-wavelet transform (MMM) in the M technique, which has the highest energy (En) and compression ratio (CR) and the lowest bits per pixel (bpp), time (T) and rate distortion R(D). The compression parameters of the color image are also nearly the same as the averages over the three bands of the same image.
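Two of the figures of merit used in the abstract, compression ratio (CR) and bits per pixel (bpp), follow directly from the compressed size; a minimal sketch for an 8-bit grayscale band:

```python
# Sketch: compression ratio and bits-per-pixel for an 8-bit image band.

def compression_metrics(width, height, compressed_bits):
    original_bits = width * height * 8          # 8 bits per pixel originally
    cr = original_bits / compressed_bits        # higher is better
    bpp = compressed_bits / (width * height)    # lower is better
    return cr, bpp
```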

  5. Intelligent image retrieval based on radiology reports

    International Nuclear Information System (INIS)

    Gerstmair, Axel; Langer, Mathias; Kotter, Elmar; Daumke, Philipp; Simon, Kai

    2012-01-01

    To create an advanced image retrieval and data-mining system based on in-house radiology reports. Radiology reports are semantically analysed using natural language processing (NLP) techniques and stored in a state-of-the-art search engine. Images referenced by sequence and image number in the reports are retrieved from the picture archiving and communication system (PACS) and stored for later viewing. A web-based front end is used as an interface to query for images and show the results with the retrieved images and report text. Using a comprehensive radiological lexicon for the underlying terminology, the search algorithm also finds results for synonyms, abbreviations and related topics. The test set was 108 manually annotated reports analysed by different system configurations. Best results were achieved using full syntactic and semantic analysis with a precision of 0.929 and recall of 0.952. Operating successfully since October 2010, 258,824 reports have been indexed and a total of 405,146 preview images are stored in the database. Data-mining and NLP techniques provide quick access to a vast repository of images and radiology reports with both high precision and recall values. Consequently, the system has become a valuable tool in daily clinical routine, education and research. (orig.)

  6. [PACS-based endoscope image acquisition workstation].

    Science.gov (United States)

    Liu, J B; Zhuang, T G

    2001-01-01

    A practical PACS-based endoscope image acquisition workstation is introduced here. Using a multimedia video card, the endoscope video is digitized and captured, dynamically or statically, into the computer. The workstation implements a variety of functions, including acquisition and display of the endoscope video as well as editing, processing, managing, storing, printing and communicating the related information. Together with other medical image workstations, it can serve as an image source for a hospital PACS. It can also act as an independent endoscopy diagnostic system.

  7. Jet-Based Local Image Descriptors

    DEFF Research Database (Denmark)

    Larsen, Anders Boesen Lindbo; Darkner, Sune; Dahl, Anders Lindbjerg

    2012-01-01

    We present a general novel image descriptor based on higherorder differential geometry and investigate the effect of common descriptor choices. Our investigation is twofold in that we develop a jet-based descriptor and perform a comparative evaluation with current state-of-the-art descriptors on ...

  8. Dialog-based Interactive Image Retrieval

    OpenAIRE

    Guo, Xiaoxiao; Wu, Hui; Cheng, Yu; Rennie, Steven; Feris, Rogerio Schmidt

    2018-01-01

    Existing methods for interactive image retrieval have demonstrated the merit of integrating user feedback, improving retrieval results. However, most current systems rely on restricted forms of user feedback, such as binary relevance responses, or feedback based on a fixed set of relative attributes, which limits their impact. In this paper, we introduce a new approach to interactive image search that enables users to provide feedback via natural language, allowing for more natural and effect...

  9. CLASSIFICATION OF CROP-SHELTER COVERAGE BY RGB AERIAL IMAGES: A COMPENDIUM OF EXPERIENCES AND FINDINGS

    Directory of Open Access Journals (Sweden)

    Claudia Arcidiacono

    2010-09-01

    Full Text Available Image processing is a powerful tool for selective data extraction from high-content images. In agricultural studies, image processing has been applied to different purposes; among them, the classification of crop shelters has recently received attention, especially in areas where public control of building activity is lacking. Applying image processing to crop-shelter feature recognition makes it possible to automatically produce thematic maps that give local authorities basic knowledge for coping with environmental problems and that technicians can use in their planning activity. This paper reviews the authors' experience in defining methodologies, based on the main image processing methods, for crop-shelter feature extraction from aerial digital images. Experiences with pixel-based and object-oriented methods are described and discussed. The results show that the object-oriented methodology improves crop-shelter classification and reduces computational time compared with pixel-based methodologies.

  10. A data base for reactor physics experiments at KUCA, 1

    International Nuclear Information System (INIS)

    Ichihara, Chihiro; Hayashi, Masatoshi; Fujine, Shigenori; Wakamatsu, Susumu.

    1986-01-01

    A data base of the experiments performed at the Kyoto University Critical Assembly (KUCA) was constructed on both personal computers and a mainframe. A retrieval data base covering each experiment serves as the key data base; the critical experiment data, the geometries of the core configurations and fuel elements, and the various numeric data are consulted from the retrieval results. The personal computer program for this data base is written in BASIC, and the whole system consists of the retrieval data base and the graphic data. Entry of the critical experiment data is still in progress. The data base system can be supplied to KUCA users on floppy disks. A universal information retrieval system, FAIRS, is available at the Data Processing Center of Kyoto University, and the retrieval data base of the experiments was constructed with it. Image information such as core configurations and fuel elements is stored using the ELF system, which can be linked to FAIRS. The data base on FAIRS can be accessed from each university through an online network; ELF, however, is at present a service closed within Kyoto University. (author)

  11. Nonlaser-based 3D surface imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of the objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that sometimes make alternative methods more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
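The stereo-vision principle the entry refers to reduces, for a calibrated rectified camera pair, to triangulation from disparity: depth Z = f * B / d, with focal length f in pixels, baseline B, and disparity d in pixels. A minimal sketch:

```python
# Sketch: depth from disparity for a calibrated, rectified stereo pair.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Z = f * B / d; raises on zero/negative disparity (no valid match)."""
    if disparity_px <= 0:
        raise ValueError("point at infinity or invalid match")
    return focal_px * baseline_m / disparity_px
```

For example, with an 800-pixel focal length and a 0.25 m baseline, a 100-pixel disparity corresponds to a depth of 2 m.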

  12. Measurable realistic image-based 3D mapping

    Science.gov (United States)

    Liu, W.; Wang, J.; Wang, J. J.; Ding, W.; Almagbile, A.

    2011-12-01

    Maps with 3D visual models are becoming a remarkable feature of 3D map services. High-resolution image data is obtained for the construction of 3D visualized models.The3D map not only provides the capabilities of 3D measurements and knowledge mining, but also provides the virtual experienceof places of interest, such as demonstrated in the Google Earth. Applications of 3D maps are expanding into the areas of architecture, property management, and urban environment monitoring. However, the reconstruction of high quality 3D models is time consuming, and requires robust hardware and powerful software to handle the enormous amount of data. This is especially for automatic implementation of 3D models and the representation of complicated surfacesthat still need improvements with in the visualisation techniques. The shortcoming of 3D model-based maps is the limitation of detailed coverage since a user can only view and measure objects that are already modelled in the virtual environment. This paper proposes and demonstrates a 3D map concept that is realistic and image-based, that enables geometric measurements and geo-location services. Additionally, image-based 3D maps provide more detailed information of the real world than 3D model-based maps. The image-based 3D maps use geo-referenced stereo images or panoramic images. The geometric relationships between objects in the images can be resolved from the geometric model of stereo images. The panoramic function makes 3D maps more interactive with users but also creates an interesting immersive circumstance. Actually, unmeasurable image-based 3D maps already exist, such as Google street view, but only provide virtual experiences in terms of photos. The topographic and terrain attributes, such as shapes and heights though are omitted. 
This paper also discusses the potential for using a low-cost land Mobile Mapping System (MMS) to implement realistic image 3D mapping, and evaluates the positioning accuracy that a measurable

  13. Neural network based multiscale image restoration approach

    Science.gov (United States)

    de Castro, Ana Paula A.; da Silva, José D. S.

    2007-02-01

    This paper describes a neural network based multiscale image restoration approach. Multilayer perceptrons are trained with artificial images of degraded gray level circles, in an attempt to make the neural network learn the inherent spatial relations of the degraded pixels. The present approach simulates the degradation by a low-pass Gaussian filter blurring operation and the addition of noise to the pixels at pre-established rates. The training process takes the degraded image as input and the non-degraded image as output for the supervised learning process. The neural network thus performs an inverse operation by recovering a quasi non-degraded image in the least-squares sense. The main difference of this approach from existing ones is that the spatial relations are taken from different scales, thus providing relational spatial data to the neural network. The approach is an attempt to come up with a simple method that leads to an optimum solution to the problem. The multiscale operation is simulated by considering different window sizes around a pixel. In the generalization phase the neural network is exposed to indoor, outdoor, and satellite degraded images following the same steps used for the artificial circle images.
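
The multiscale ingredient described above can be sketched in a few lines: the neighbourhood of a pixel is sampled with windows of several sizes and concatenated into one input vector for the perceptron. The window sizes and border-clamping rule below are our illustrative assumptions, not taken from the paper.

```python
def window(img, r, c, size):
    """size x size neighbourhood around (r, c), clamped at the image borders."""
    half = size // 2
    rows, cols = len(img), len(img[0])
    return [img[min(max(r + dr, 0), rows - 1)][min(max(c + dc, 0), cols - 1)]
            for dr in range(-half, half + 1) for dc in range(-half, half + 1)]

def multiscale_features(img, r, c, sizes=(3, 5)):
    """Concatenate windows at several scales into one network input vector."""
    feats = []
    for s in sizes:
        feats.extend(window(img, r, c, s))
    return feats

img = [[i * 4 + j for j in range(4)] for i in range(4)]  # toy 4x4 gray image
vec = multiscale_features(img, 1, 1)                     # 3x3 + 5x5 = 34 values
```

Feeding such vectors (degraded image in, clean pixel out) is the supervised set-up the abstract describes.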

  14. Discriminative Projection Selection Based Face Image Hashing

    Science.gov (United States)

    Karabat, Cagatay; Erdogan, Hakan

    Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.

  15. Whole slide images and digital media in pathology education, testing, and practice: the Oklahoma experience.

    Science.gov (United States)

    Fung, Kar-Ming; Hassell, Lewis A; Talbert, Michael L; Wiechmann, Allan F; Chaser, Brad E; Ramey, Joel

    2012-01-01

    Examination of glass slides is of paramount importance in pathology training. Until the introduction of digitized whole slide images that could be accessed through computer networks, the sharing of pathology slides was a major logistic issue in pathology education and practice. With the help of whole slide images, our department has developed several online pathology education websites. Based on a modular architecture, this program provides online access to whole slide images, still images, case studies, quizzes and didactic text at different levels. Together with traditional lectures and hands-on experiences, it forms the backbone of our histology and pathology education system for residents and medical students. The use of digitized whole slide images has also greatly improved the communication between clinicians and pathologists in our institute.

  16. Improvement of Secret Image Invisibility in Circulation Image with Dyadic Wavelet Based Data Hiding with Run-Length Coded Secret Images of Which Location of Codes are Determined with Random Number

    OpenAIRE

    Kohei Arai; Yuji Yamada

    2011-01-01

    An attempt is made for improvement of secret image invisibility in circulation images with dyadic wavelet based data hiding with run-length coded secret images of which location of codes are determined by random number. Through experiments, it is confirmed that secret images are almost invisible in circulation images. Also robustness of the proposed data hiding method against data compression of circulation images is discussed. Data hiding performance in terms of invisibility of secret images...

  17. Optical colour image watermarking based on phase-truncated linear canonical transform and image decomposition

    Science.gov (United States)

    Su, Yonggang; Tang, Chen; Li, Biyuan; Lei, Zhenkun

    2018-05-01

    This paper presents a novel optical colour image watermarking scheme based on phase-truncated linear canonical transform (PT-LCT) and image decomposition (ID). In this proposed scheme, a PT-LCT-based asymmetric cryptography is designed to encode the colour watermark into a noise-like pattern, and an ID-based multilevel embedding method is constructed to embed the encoded colour watermark into a colour host image. The PT-LCT-based asymmetric cryptography, which can be optically implemented by double random phase encoding with a quadratic phase system, can provide a higher security to resist various common cryptographic attacks. And the ID-based multilevel embedding method, which can be digitally implemented by a computer, can make the information of the colour watermark disperse better in the colour host image. The proposed colour image watermarking scheme possesses high security and can achieve a higher robustness while preserving the watermark’s invisibility. The good performance of the proposed scheme has been demonstrated by extensive experiments and comparison with other relevant schemes.

  18. SU-E-J-237: Image Feature Based DRR and Portal Image Registration

    Energy Technology Data Exchange (ETDEWEB)

    Wang, X; Chang, J [NY Weill Cornell Medical Ctr, NY (United States)

    2014-06-01

    Purpose: Two-dimensional (2D) matching of the kV X-ray and digitally reconstructed radiography (DRR) images is an important setup technique for image-guided radiotherapy (IGRT). In our clinics, mutual information based methods are used for this purpose on commercial linear accelerators, but manual corrections are often needed. This work demonstrates the feasibility of feature-based image transforms for registering kV and DRR images. Methods: The scale invariant feature transform (SIFT) method was implemented to detect matching image details (or key points) between the kV and DRR images. These key points represent high image intensity gradients and thus scale invariant features. Due to the poor contrast of our kV images, direct application of the SIFT method yielded many detection errors. To assist the finding of key points, the center coordinates of the kV and DRR images were read from the DICOM headers, and the two groups of key points with similar relative positions to their corresponding centers were paired up. Using these points, a rigid transform (with scaling, horizontal and vertical shifts) was estimated. We also artificially introduced vertical and horizontal shifts to test the accuracy of our registration method on anterior-posterior (AP) and lateral pelvic images. Results: The results provided a satisfactory overlay of the transformed kV image onto the DRR image. The introduced versus detected shifts were fit with a linear regression. In the AP image experiments, linear regression analysis showed slopes of 1.15 and 0.98 with R2 values of 0.89 and 0.99 for the horizontal and vertical shifts, respectively. The corresponding values for the lateral image shifts were 1.2 and 1.3, with R2 values of 0.72 and 0.82. Conclusion: This work provides an alternative technique for kV to DRR alignment. Further improvements in the estimation accuracy and image contrast tolerance are underway.
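
The transform-estimation step (scaling plus horizontal and vertical shifts) admits a closed-form least-squares fit once key points are paired. The sketch below assumes pairing is already done (the SIFT detection itself is not reproduced), and the point coordinates are invented for illustration.

```python
def estimate_transform(src, dst):
    """Least-squares fit of x' = s*x + tx, y' = s*y + ty from paired points:
    s = cov(src, dst) / var(src), (tx, ty) = mean(dst) - s * mean(src)."""
    n = len(src)
    mx = sum(x for x, _ in src) / n
    my = sum(y for _, y in src) / n
    ux = sum(x for x, _ in dst) / n
    uy = sum(y for _, y in dst) / n
    num = sum((x - mx) * (u - ux) + (y - my) * (v - uy)
              for (x, y), (u, v) in zip(src, dst))
    den = sum((x - mx) ** 2 + (y - my) ** 2 for x, y in src)
    s = num / den
    return s, ux - s * mx, uy - s * my

kv  = [(10, 10), (10, 50), (60, 10), (60, 50)]   # hypothetical kV key points
drr = [(2 * x + 5, 2 * y - 3) for x, y in kv]    # same points scaled 2x, shifted (5, -3)
s, tx, ty = estimate_transform(kv, drr)
```

With noise-free pairs the fit recovers the scale and shifts exactly, which is how the introduced-versus-detected-shift test in the abstract can be evaluated.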

  19. LSB Based Quantum Image Steganography Algorithm

    Science.gov (United States)

    Jiang, Nan; Zhao, Na; Wang, Luo

    2016-01-01

    Quantum steganography is the technique which hides a secret message into quantum covers such as quantum images. In this paper, two blind LSB steganography algorithms in the form of quantum circuits are proposed based on the novel enhanced quantum representation (NEQR) for quantum images. One algorithm is plain LSB which uses the message bits to substitute for the pixels' LSB directly. The other is block LSB which embeds a message bit into a number of pixels that belong to one image block. The extracting circuits can regain the secret message only according to the stego cover. Analysis and simulation-based experimental results demonstrate that the invisibility is good, and the balance between the capacity and the robustness can be adjusted according to the needs of applications.
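
A classical (non-quantum) analogue of the plain-LSB scheme is easy to sketch: each message bit replaces the least significant bit of one cover pixel, and blind extraction simply reads those LSBs back. This is our simplified illustration; the paper's NEQR-based quantum circuits are not reproduced here.

```python
def embed_lsb(pixels, bits):
    """Replace the LSB of pixels[i] with bits[i]; remaining pixels are untouched."""
    if len(bits) > len(pixels):
        raise ValueError("message longer than cover")
    stego = list(pixels)
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & ~1) | b   # clear the LSB, then set the message bit
    return stego

def extract_lsb(stego, n_bits):
    """Blind extraction: read the LSBs of the first n_bits pixels."""
    return [p & 1 for p in stego[:n_bits]]

cover = [200, 13, 57, 128, 99, 42]   # toy 8-bit cover pixels
message = [1, 0, 1, 1]
stego = embed_lsb(cover, message)
recovered = extract_lsb(stego, len(message))
```

Because each pixel changes by at most 1 gray level, the embedding stays visually invisible, which is the property the abstract emphasises.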

  20. Video-based noncooperative iris image segmentation.

    Science.gov (United States)

    Du, Yingzi; Arslanturk, Emrah; Zhou, Zhi; Belcher, Craig

    2011-02-01

    In this paper, we propose a video-based noncooperative iris image segmentation scheme that incorporates a quality filter to quickly eliminate images without an eye, employs a coarse-to-fine segmentation scheme to improve the overall efficiency, uses a direct least squares fitting of ellipses method to model the deformed pupil and limbic boundaries, and develops a window gradient-based method to remove noise in the iris region. A remote iris acquisition system is set up to collect noncooperative iris video images. An objective method is used to quantitatively evaluate the accuracy of the segmentation results. The experimental results demonstrate the effectiveness of this method. The proposed method would make noncooperative iris recognition or iris surveillance possible.

  1. Image Recommendation Algorithm Using Feature-Based Collaborative Filtering

    Science.gov (United States)

    Kim, Deok-Hwan

    As the multimedia contents market continues its rapid expansion, the amount of image content used in mobile phone services, digital libraries, and catalog services is increasing remarkably. In spite of this rapid growth, users experience high levels of frustration when searching for a desired image. Even though new images are profitable to the service providers, traditional collaborative filtering methods cannot recommend them. To solve this problem, in this paper, we propose a feature-based collaborative filtering (FBCF) method that reflects the user's most recent preference by representing his purchase sequence in the visual feature space. The proposed approach represents the images that have been purchased in the past as feature clusters in the multi-dimensional feature space and then selects neighbors by using an inter-cluster distance function between their feature clusters. Various experiments using real image data demonstrate that the proposed approach provides higher quality recommendations and better performance than typical collaborative filtering and content-based filtering techniques.
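
The neighbour-selection idea can be sketched as follows: each user's purchases form a cluster of image feature vectors, and neighbours are the users whose clusters lie closest under an inter-cluster distance. The user names, feature values, and the single-linkage distance are our assumptions for illustration, not the paper's exact function.

```python
import math

def inter_cluster_distance(a, b):
    # single linkage: distance between the closest pair of members
    return min(math.dist(p, q) for p in a for q in b)

def nearest_neighbours(target, others, k=1):
    """Rank other users by inter-cluster distance to the target's cluster."""
    ranked = sorted(others, key=lambda name: inter_cluster_distance(target, others[name]))
    return ranked[:k]

me = [(0.1, 0.2), (0.15, 0.25)]          # my recent purchases in feature space
users = {
    "anna": [(0.12, 0.22), (0.2, 0.3)],  # similar taste
    "bob":  [(0.9, 0.8), (0.95, 0.7)],   # dissimilar taste
}
neighbours = nearest_neighbours(me, users, k=1)
```

Recommendations would then be drawn from the neighbours' purchases, which is how new images (never rated, but with extractable features) become recommendable.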

  2. Sodium MR imaging of human brain neoplasms. A preliminary experience

    Energy Technology Data Exchange (ETDEWEB)

    Kobayashi, Shu; Yoshikawa, Kohki; Takakura, Kintomo; Iio, Masahiro

    1988-08-01

    We report our experience with sodium magnetic resonance imaging of 5 patients with brain tumors (4 astrocytomas and 1 craniopharyngioma), using a Siemens 1.5 Tesla superconductive magnet. We used two-dimensional Fourier imaging with a spin-echo scanning sequence (with a repetition time of 140 msec and an echo time of 11 - 14 msec). The radiofrequency was maintained at 17 MHz. Sodium MR imaging was achieved with a 64 x 64 data acquisition (30 mm slice thickness) in 19.1 min. On the sodium MRI, all four astrocytomas, along with the eye balls and the cerebrospinal fluid spaces, appeared as high-intensity areas. Peritumoral edema was also visualized as highly intense, so that it was difficult to discriminate tumor extent from the surrounding edema. Comparative studies of malignant glioma cases using the same equipment are needed to clarify the relationship between sodium signal intensities and the malignancy of gliomas, and to evaluate the potential clinical utility of sodium MRI. A craniopharyngioma that contained a yellowish cystic fluid with a sodium concentration as high as that of CSF was shown on sodium MRI as a mass with highly intense signals. The ability to differentiate extracellular from intracellular sodium, which has been studied by several investigators, would greatly augment the clinical specificity of MR imaging.

  3. Hyperspectral Imaging of Forest Resources: The Malaysian Experience

    Science.gov (United States)

    Mohd Hasmadi, I.; Kamaruzaman, J.

    2008-08-01

    Remote sensing using satellite and aircraft images is a well established technology. The application of hyperspectral imaging, however, is relatively new to Malaysian forestry. Through a wide range of wavelengths, hyperspectral data can precisely capture narrow bands of spectra. Airborne sensors typically offer greatly enhanced spatial and spectral resolution over their satellite counterparts, and allow the experimental design to be controlled closely during image acquisition. The first study using hyperspectral imaging for forest inventory in Malaysia was conducted by Professor Hj. Kamaruzaman of the Faculty of Forestry, Universiti Putra Malaysia in 2002 using the AISA sensor manufactured by Specim Ltd, Finland. The main objective has been to develop methods that are directly suited for practical tropical forestry applications at a high level of accuracy. Forest inventory and tree classification, including the development of single spectral signatures, have been the most important interests in current practice. Experience from these studies showed that retrieval of timber volume and tree discrimination using this system performed well, and in some cases better than other remote sensing methods. This article reviews the research and application of airborne hyperspectral remote sensing for forest survey and assessment in Malaysia.

  4. AUTOMATIC MULTILEVEL IMAGE SEGMENTATION BASED ON FUZZY REASONING

    Directory of Open Access Journals (Sweden)

    Liang Tang

    2011-05-01

    Full Text Available An automatic multilevel image segmentation method based on sup-star fuzzy reasoning (SSFR) is presented. Using the well-known sup-star fuzzy reasoning technique, the proposed algorithm combines the global statistical information implied in the histogram with the local information represented by the fuzzy sets of gray-levels, and aggregates all the gray-levels into several classes characterized by the local maximum values of the histogram. The presented method has the merits of determining the number of segmentation classes automatically and avoiding the calculation of segmentation thresholds. Simulated and real-image segmentation experiments demonstrate that the SSFR is effective.
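
The crisp skeleton of the idea, sketched under our own assumptions (the sup-star fuzzy reasoning itself is not reproduced): classes are anchored at local maxima of the gray-level histogram, and each pixel joins the class of the nearest peak, so no thresholds are ever computed.

```python
def histogram(img, levels=256):
    h = [0] * levels
    for row in img:
        for g in row:
            h[g] += 1
    return h

def local_maxima(h):
    """Gray levels that are strict local peaks of the histogram."""
    peaks = [g for g in range(1, len(h) - 1) if h[g - 1] < h[g] > h[g + 1]]
    return peaks or [max(range(len(h)), key=h.__getitem__)]

def segment(img):
    """Label each pixel with the index of its nearest histogram peak."""
    peaks = local_maxima(histogram(img))
    return [[min(range(len(peaks)), key=lambda k: abs(g - peaks[k])) for g in row]
            for row in img]

img = [[10, 12, 11], [200, 198, 12], [201, 200, 199]]  # two gray-level populations
labels = segment(img)
```

The number of classes falls out of the peak count automatically, which mirrors the merit claimed in the abstract.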

  5. EVOLUTION OF SOUTHERN AFRICAN CRATONS BASED ON SEISMIC IMAGING

    DEFF Research Database (Denmark)

    Thybo, Hans; Soliman, Mohammad Youssof Ahmad; Artemieva, Irina

    2014-01-01

    present a new seismic model for the structure of the crust and lithospheric mantle of the Kalahari Craton, constrained by seismic receiver functions and finite-frequency tomography based on the seismological data from the South Africa Seismic Experiment (SASE). The combination of these two methods...... since formation of the craton, and (3) seismically fast lithospheric keels are imaged in the Kaapvaal and Zimbabwe cratons to depths of 300-350 km. Relatively low velocity anomalies are imaged beneath both the paleo-orogenic Limpopo Belt and the Bushveld Complex down to depths of ~250 km and ~150 km...

  6. Preliminary Experience with Small Animal SPECT Imaging on Clinical Gamma Cameras

    Directory of Open Access Journals (Sweden)

    P. Aguiar

    2014-01-01

    Full Text Available The traditional lack of techniques suitable for in vivo imaging has induced a great interest in molecular imaging for preclinical research. Nevertheless, its use spreads slowly due to the difficulty of justifying the high cost of current dedicated preclinical scanners. An alternative for lowering the costs is to repurpose old clinical gamma cameras for preclinical imaging. In this paper we assess the performance of a portable device working coupled to a single-head clinical gamma camera, and we present our preliminary experience in several small animal applications. Our findings, based on phantom experiments and animal studies, provided an image quality, in terms of contrast-noise trade-off, comparable to dedicated preclinical pinhole-based scanners. We feel that our portable device offers an opportunity to recycle the widespread availability of clinical gamma cameras in nuclear medicine departments for small animal SPECT imaging, and we hope that it can contribute to spreading the use of preclinical imaging within institutions on tight budgets.

  7. Comparisons of three alternative breast modalities in a common phantom imaging experiment

    International Nuclear Information System (INIS)

    Li Dun; Meaney, Paul M.; Tosteson, Tor D.; Jiang Shudong; Kerner, Todd E.; McBride, Troy O.; Pogue, Brian W.; Hartov, Alexander; Paulsen, Keith D.

    2003-01-01

    Four model-based imaging systems are currently being developed for breast cancer detection at Dartmouth College. A potential advantage of multimodality imaging is the prospect of combining information collected from each system to provide a more complete diagnostic tool that covers the full range of the patient and pathology spectra. In this paper it is shown through common phantom experiments on three of these imaging systems that it was possible to correlate different types of image information to potentially improve the reliability of tumor detection. Imaging experiments were conducted with common phantoms which mimic both dielectric and optical properties of the human breast. Cross modality comparison was investigated through a statistical study based on the repeated data sets of reconstructed parameters for each modality. The system standard error between all methods was generally less than 10% and the correlation coefficient across modalities ranged from 0.68 to 0.91. Future work includes the minimization of bias (artifacts) on the periphery of electrical impedance spectroscopy images to improve cross modality correlation and implementation of the multimodality diagnosis for breast cancer detection

  8. Deformation Measurements of Gabion Walls Using Image Based Modeling

    Directory of Open Access Journals (Sweden)

    Marek Fraštia

    2014-06-01

    Full Text Available The image based modeling finds use in applications where it is necessary to reconstruct the 3D surface of the observed object with a high level of detail. Previous experiments show relatively high variability of the results depending on the camera type used, the processing software, or the process evaluation. The authors tested the method of SFM (Structure from Motion) to determine the stability of gabion walls. The results of photogrammetric measurements were compared to precise geodetic point measurements.

  9. Integrated optical 3D digital imaging based on DSP scheme

    Science.gov (United States)

    Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.

    2008-03-01

    We present a scheme of integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently without PC support. The scheme is built on a parallel hardware structure with the aid of a DSP and a field programmable gate array (FPGA) to realize 3-D imaging. In this integrated scheme of 3-D imaging, phase measurement profilometry is adopted. To realize pipeline processing of the fringe projection, image acquisition and fringe pattern analysis, we present a multi-threaded application program developed under the DSP/BIOS RTOS (real-time operating system), whose preemptive kernel and powerful configuration tool allow us to achieve real-time scheduling and synchronization. To accelerate automatic fringe analysis and phase unwrapping, we make use of software optimization techniques. The proposed scheme can reach a performance of 39.5 f/s (frames per second), so it may well fit into real-time fringe-pattern analysis and can implement fast 3-D imaging. Experimental results are also presented to show the validity of the proposed scheme.

  10. Homotopy Based Reconstruction from Acoustic Images

    DEFF Research Database (Denmark)

    Sharma, Ojaswa

    of the inherent arrangement. The problem of reconstruction from arbitrary cross sections is a generic problem and is also shown to be solved here using the mathematical tool of continuous deformations. As part of a complete processing, segmentation using level set methods is explored for acoustic images and fast...... GPU (Graphics Processing Unit) based methods are suggested for a streaming computation on large volumes of data. Validation of results for acoustic images is not straightforward due to unavailability of ground truth. Accuracy figures for the suggested methods are provided using phantom object...

  11. The Microgravity Research Experiments (MICREX) Data Base

    Science.gov (United States)

    Winter, C. A.; Jones, J. C.

    1996-01-01

    An electronic data base identifying over 800 fluids and materials processing experiments performed in a low-gravity environment has been created at NASA Marshall Space Flight Center. The compilation, called MICREX (MICrogravity Research Experiments), was designed to document all such experimental efforts performed (1) on U.S. manned space vehicles, (2) on payloads deployed from U.S. manned space vehicles, and (3) on all domestic and international sounding rockets (excluding those of China and the former U.S.S.R.). Data available on most experiments include (1) principal and co-investigators, (2) low-gravity mission, (3) processing facility, (4) experimental objectives and results, (5) identifying key words, (6) sample materials, (7) applications of the processed materials/research area, (8) experiment descriptive publications, and (9) contacts for more information concerning the experiment. This technical memorandum (1) summarizes the historical interest in reduced-gravity fluid dynamics, (2) describes the importance of a low-gravity fluids and materials processing data base, (3) describes the MICREX data base format and computational World Wide Web access procedures, and (4) documents (in hard-copy form) the descriptions of the first 600 fluids and materials processing experiments entered into MICREX.

  12. Wind Statistics Offshore based on Satellite Images

    DEFF Research Database (Denmark)

    Hasager, Charlotte Bay; Mouche, Alexis; Badger, Merete

    2009-01-01

    Ocean wind maps from satellites are routinely processed both at Risø DTU and CLS based on the European Space Agency Envisat ASAR data. At Risø the a priori wind direction is taken from the atmospheric model NOGAPS (Naval Operational Global Atmospheric Prediction System) provided by the U.S. Navy ... -based observations become available. At present preliminary results are obtained using the routine methods. The first step in the process is to retrieve raw SAR data, calibrate the images and use a priori wind direction as input to the geophysical model function. From this process the wind speed maps are produced. ... The wind maps are geo-referenced. The second process is the analysis of a series of geo-referenced SAR-based wind maps. Previous research has shown that a relatively large number of images are needed for achieving certain accuracies on mean wind speed, Weibull A and k (scale and shape parameters ...
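
The Weibull A and k statistics mentioned above can be sketched with the common moment-based approximation k ≈ (σ/μ)^(-1.086) and A = μ / Γ(1 + 1/k). This is a textbook shortcut, our assumption rather than the routine actually used at Risø, and the wind speed sample below is synthetic, not an Envisat ASAR retrieval.

```python
import math

def weibull_fit(speeds):
    """Moment-based Weibull fit: returns (A, k) = (scale, shape)."""
    n = len(speeds)
    mu = sum(speeds) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in speeds) / n)
    k = (sigma / mu) ** -1.086          # empirical shape approximation
    A = mu / math.gamma(1 + 1 / k)      # scale from the Weibull mean formula
    return A, k

speeds = [4.2, 6.1, 7.8, 5.5, 9.0, 8.3, 6.7, 7.2, 5.9, 6.4]  # synthetic m/s values
A, k = weibull_fit(speeds)
```

With many SAR-derived samples per grid cell, such fits give the mean wind and Weibull parameters whose accuracy the abstract discusses.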

  13. The impact of brand image on customer experience – Company X

    OpenAIRE

    Siitonen, Hannes

    2017-01-01

    The aim of this thesis was to find out what kind of relationship there is between brand image and customer experience, and how the brand image affects customer experience. The aim was also to define the company’s brand image and customer experience among the target groups, and what factors affect them. In addition, this thesis aimed to produce valuable information for the company about their brand image, customer experience, customer behaviour and customer satisfaction, followed by i...

  14. A REGION-BASED MULTI-SCALE APPROACH FOR OBJECT-BASED IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    T. Kavzoglu

    2016-06-01

    Full Text Available Within the last two decades, object-based image analysis (OBIA), which considers objects (i.e. groups of pixels) instead of pixels, has gained popularity and attracted increasing interest. The most important stage of OBIA is image segmentation, which groups spectrally similar adjacent pixels considering not only spectral features but also spatial and textural features. Although there are several parameters (scale, shape, compactness and band weights) to be set by the analyst, the scale parameter stands out as the most important parameter in the segmentation process. Estimating the optimal scale parameter is crucially important to increase the classification accuracy, and it depends on image resolution, image object size and the characteristics of the study area. In this study, two scale-selection strategies were implemented in the image segmentation process using a pan-sharpened QuickBird-2 image. The first strategy estimates optimal scale parameters for eight sub-regions. For this purpose, the local variance/rate of change (LV-RoC) graphs produced by the ESP-2 tool were analysed to determine fine, moderate and coarse scales for each region. In the second strategy, the image was segmented using the three candidate scale values (fine, moderate, coarse) determined from the LV-RoC graph calculated for the whole image. The nearest neighbour classifier was applied in all segmentation experiments and an equal number of pixels was randomly selected to calculate accuracy metrics (overall accuracy and kappa coefficient). Comparison of region-based and image-based segmentation was carried out on the classified images, and it was found that region-based multi-scale OBIA produced significantly more accurate results than image-based single-scale OBIA. The difference in classification accuracy reached 10% in terms of overall accuracy.
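
The LV-RoC reading can be sketched as follows: given the mean local variance (LV) recorded at each segmentation scale, the rate-of-change curve is computed and its local peaks are read off as candidate (fine/moderate/coarse) scales. The LV values below are invented illustration data, not ESP-2 output.

```python
def rate_of_change(lv):
    """RoC_l = (LV_l - LV_{l-1}) / LV_{l-1} * 100, following the usual ESP definition."""
    return [(lv[i] - lv[i - 1]) / lv[i - 1] * 100.0 for i in range(1, len(lv))]

def candidate_scales(scales, lv):
    """Scales at which the RoC curve has a local peak (index i of roc maps to scales[i+1])."""
    roc = rate_of_change(lv)
    return [scales[i + 1] for i in range(1, len(roc) - 1)
            if roc[i] > roc[i - 1] and roc[i] > roc[i + 1]]

scales = [10, 20, 30, 40, 50, 60]           # segmentation scale parameters tried
lv     = [5.0, 5.2, 6.5, 6.6, 7.9, 8.0]     # made-up mean local variance per scale
peaks = candidate_scales(scales, lv)
```

Each returned scale marks a level where object heterogeneity jumps, which is how fine, moderate and coarse candidates are picked off the graph.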

  15. Image Coding Based on Address Vector Quantization.

    Science.gov (United States)

    Feng, Yushu

    Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; the index is sent to the channel. Reconstruction of the image is done by using a table lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of the Kohonen neural network to design the codebook. During the encoding process, the correlation of the addresses is considered, and Address Vector Quantization is developed for color and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme but at a bit rate about 1/2 to 1/3 that of the normal VQ method. In chapter 5, a Dynamic Finite State VQ is developed, based on a probability transition matrix, to select the best subcodebook to encode the image.
In chapter 6, a new adaptive vector quantization scheme, suitable for color video coding, called "A Self-Organizing
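
The basic VQ encode/decode loop described above is compact enough to sketch directly: an image vector is replaced by the index of its best-matching codeword, and decoding is a table lookup. The tiny codebook below is illustrative, not a trained (K-means/Lloyd) one.

```python
def squared_error(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def vq_encode(vectors, codebook):
    """Each vector -> index of the nearest codeword (the symbol sent to the channel)."""
    return [min(range(len(codebook)), key=lambda i: squared_error(v, codebook[i]))
            for v in vectors]

def vq_decode(indices, codebook):
    """Table lookup: the label is simply an address into the codeword table."""
    return [codebook[i] for i in indices]

codebook = [(0, 0, 0, 0), (128, 128, 128, 128), (255, 255, 255, 255)]  # toy codewords
blocks   = [(10, 5, 0, 8), (250, 255, 240, 255), (120, 130, 140, 125)] # 2x2 image blocks
labels   = vq_encode(blocks, codebook)
reconstructed = vq_decode(labels, codebook)
```

Compression comes from sending the short index instead of the full vector; the Address VQ extensions in the thesis then exploit correlation between successive indices.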

  16. Software for medical image based phantom modelling

    International Nuclear Information System (INIS)

    Possani, R.G.; Massicano, F.; Coelho, T.S.; Yoriyaz, H.

    2011-01-01

    The latest treatment planning systems depend strongly on CT images, so the tendency is that dosimetry procedures in nuclear medicine therapy will also be based on images, such as magnetic resonance imaging (MRI) or computed tomography (CT), to extract anatomical and histological information, as well as functional imaging or activity maps such as PET or SPECT. This information, associated with radiation transport simulation software, is used to estimate the internal dose in patients undergoing treatment in nuclear medicine. This work aims to re-engineer the software SCMS, which is an interface between the Monte Carlo code MCNP and the medical images that carry information from the patient under treatment. In other words, the necessary information contained in the images is interpreted and presented in a specific format to the Monte Carlo MCNP code to perform the simulation of radiation transport. Therefore, the user does not need to understand the complex process of inputting data to MCNP, as the SCMS is responsible for automatically constructing the anatomical data of the patient, as well as the radioactive source data. The SCMS was originally developed in Fortran-77. In this work it was rewritten in an object-oriented language (JAVA). New features and data options have also been incorporated into the software. Thus, the new software has a number of improvements, such as an intuitive GUI and a menu for the selection of the energy spectra corresponding to a specific radioisotope stored in an XML data bank. The new version also supports new materials, and the user can specify an image region of interest for the calculation of absorbed dose. (author)

  17. Fourier transform based scalable image quality measure.

    Science.gov (United States)

    Narwaria, Manish; Lin, Weisi; McLoughlin, Ian; Emmanuel, Sabu; Chia, Liang-Tien

    2012-08-01

    We present a new image quality assessment (IQA) algorithm based on the phase and magnitude of the two-dimensional (2D) Discrete Fourier Transform (DFT). The basic idea is to compare the phase and magnitude of the reference and distorted images to compute the quality score. However, it is well known that the Human Visual System's (HVS's) sensitivity to different frequency components is not the same. We accommodate this fact via a simple yet effective strategy of nonuniform binning of the frequency components. This process also leads to a reduced-space representation of the image, thereby enabling the reduced-reference (RR) prospects of the proposed scheme. We employ linear regression to integrate the effects of the changes in phase and magnitude; in this way, the required weights are determined via proper training and hence are more convincing and effective. Lastly, using the fact that phase usually conveys more information than magnitude, we use only the phase for RR quality assessment. This provides the crucial advantage of a further reduction in the required amount of reference image information. The proposed method is therefore further scalable for RR scenarios. We report extensive experimental results using a total of 9 publicly available databases: 7 image databases (with a total of 3832 distorted images with diverse distortions) and 2 video databases (totally 228 distorted videos). These show that the proposed method is overall better than several of the existing full-reference (FR) algorithms and two RR algorithms. Additionally, there is a graceful degradation in prediction performance as the amount of reference image information is reduced, thereby confirming its scalability prospects. To enable comparisons and future study, a Matlab implementation of the proposed algorithm is available at http://www.ntu.edu.sg/home/wslin/reduced_phase.rar.
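
The core comparison can be sketched on a toy image: take the 2D DFT of reference and distorted images and compare the phase of each coefficient. The naive O(N^4) DFT and the mean-cosine score below are our simplifications; the paper's HVS-motivated nonuniform binning and regression-trained weights are omitted.

```python
import cmath
import math

def dft2(img):
    """Naive 2D DFT of a small gray-level image (list of rows)."""
    M, N = len(img), len(img[0])
    return [[sum(img[m][n] * cmath.exp(-2j * math.pi * (u * m / M + v * n / N))
                 for m in range(M) for n in range(N))
             for v in range(N)] for u in range(M)]

def phase_similarity(ref, dis):
    """Mean cosine of the per-coefficient phase difference (1.0 = identical phases)."""
    F, G = dft2(ref), dft2(dis)
    diffs = [math.cos(cmath.phase(f) - cmath.phase(g))
             for rf, rg in zip(F, G) for f, g in zip(rf, rg)]
    return sum(diffs) / len(diffs)

ref = [[0, 50], [100, 150]]                              # toy 2x2 reference
identical = phase_similarity(ref, ref)                   # = 1.0
distorted = phase_similarity(ref, [[150, 100], [50, 0]]) # flipped image scores lower
```

In the full method such phase differences would be pooled into nonuniform frequency bins and combined by learned regression weights.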

  18. Image-based spectroscopy for environmental monitoring

    Science.gov (United States)

    Bachmakov, Eduard; Molina, Carolyn; Wynne, Rosalind

    2014-03-01

    An image-processing algorithm for use with a nano-featured spectrometer chemical agent detection configuration is presented. The spectrometer chip, acquired from Nano-Optic Devices™, can reduce the size of the spectrometer to that of a coin. The nanospectrometer chip was aligned with a 635 nm laser source, objective lenses, and a CCD camera. The images from the nanospectrometer chip were collected and compared to reference spectra. Random background noise contributions were isolated and removed from the diffraction pattern image analysis via a threshold filter. Results are provided for the image-based detection of the diffraction pattern produced by the nanospectrometer. The featured PCF spectrometer has the potential to measure optical absorption spectra in order to detect trace amounts of contaminants. MATLAB tools allow for the implementation of intelligent, automatic detection of the relevant sub-patterns in the diffraction patterns and subsequent extraction of the parameters using region-detection algorithms such as the generalized Hough transform, which detects specific shapes within an image. This transform is a method for detecting curves by exploiting the duality between points on a curve and the parameters of that curve. By employing this image-processing technique, future sensor systems will benefit from new applications such as unsupervised environmental monitoring of air or water quality.
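
    The point-versus-parameter duality can be illustrated with the simplest circular case of the Hough transform: every edge pixel votes for all candidate centres one radius away, and the accumulator peak marks the detected circle. This is a minimal fixed-radius sketch, not the generalized transform used in the paper:

```python
import numpy as np

def hough_circle_fixed_radius(edges, radius, n_angles=90):
    """Accumulate votes for circle centres at a known radius: each edge
    pixel votes along a circle of that radius around itself."""
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    dy = np.round(radius * np.sin(thetas)).astype(int)
    dx = np.round(radius * np.cos(thetas)).astype(int)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        cy, cx = y + dy, x + dx
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)   # handles duplicate votes
    return acc

# synthetic edge map: a ring of radius 10 centred at (25, 25)
yy, xx = np.mgrid[0:50, 0:50]
ring = np.abs(np.hypot(yy - 25, xx - 25) - 10) < 0.7
acc = hough_circle_fixed_radius(ring, 10)
peak = np.unravel_index(np.argmax(acc), acc.shape)
```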

  19. Fluorescence based molecular in vivo imaging

    International Nuclear Information System (INIS)

    Ebert, Bernd

    2008-01-01

    Molecular imaging is a modern research area that allows the in vivo study of the kinetics of molecular biological processes using appropriate probes and visualization methods. Apart from the injection of contrast media, this methodology may be described as non-invasive. To image molecular processes in vivo as accurately as possible, the probes used should perturb the biological system as little as possible. Contrast media, as an important part of molecular imaging, can contribute significantly to the understanding of molecular processes and to the development of tailored diagnostics and therapy. For more than 15 years, PTB has been developing optical imaging systems that may be used for fluorescence-based visualization of tissue phantoms and small animal models, for the localization of tumors and their precursors, and for the early recognition of inflammatory processes in clinical trials. Cellular changes occur in many diseases, so molecular imaging may be of importance for the early diagnosis of chronic inflammatory diseases. Fluorescent dyes can be used as unspecific or as specific contrast media, which allows enhanced detection sensitivity

  20. Image Quality Assessment of High-Resolution Satellite Images with Mtf-Based Fuzzy Comprehensive Evaluation Method

    Science.gov (United States)

    Wu, Z.; Luo, Z.; Zhang, Y.; Guo, F.; He, L.

    2018-04-01

    A Modulation Transfer Function (MTF)-based fuzzy comprehensive evaluation method is proposed in this paper for evaluating high-resolution satellite image quality. To establish the factor set, two MTF features and seven radiometric features were extracted from the knife-edge region of each image patch: Nyquist frequency, MTF0.5, entropy, peak signal-to-noise ratio (PSNR), average difference, edge intensity, average gradient, contrast, and ground spatial distance (GSD). After analyzing the statistical distribution of these features, a fuzzy evaluation threshold table and fuzzy evaluation membership functions were established. Experiments on comprehensive quality assessment of different natural and artificial objects were conducted with GF2 image patches. The results showed that the calibration field image has the highest quality score. The water image has the closest image quality to the calibration field; the quality of the building image is slightly lower than that of the water image, but much higher than that of the farmland image. To test the influence of different features on quality evaluation, experiments with different weights were performed on GF2 and SPOT7 images. The results showed that different weights yield different evaluation outcomes. When the weights emphasize edge features and GSD, the image quality of GF2 is better than that of SPOT7; however, when MTF and PSNR are set as the main factors, the image quality of SPOT7 is better than that of GF2.
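
    The structure of a fuzzy comprehensive evaluation (fuzzify each normalised feature against a set of quality grades, aggregate the membership matrix with a weight vector, then defuzzify) can be sketched as below. The triangular membership functions, grade centres, and weights are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def tri_membership(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_score(features, grade_centres, weights):
    """Fuzzify each normalised feature against the quality grades,
    aggregate the membership matrix R with the weight vector W,
    and defuzzify W @ R by its centroid."""
    centres = np.asarray(grade_centres, dtype=float)
    n_grades = len(centres)
    R = np.zeros((len(features), n_grades))
    for i, x in enumerate(features):
        for j, c in enumerate(centres):
            a = centres[j - 1] if j > 0 else c - (centres[1] - centres[0])
            cc = centres[j + 1] if j < n_grades - 1 else c + (centres[-1] - centres[-2])
            R[i, j] = tri_membership(x, a, c, cc)
    B = np.asarray(weights) @ R          # aggregated membership over grades
    B = B / B.sum()
    return float(B @ centres)            # centroid defuzzification

# hypothetical normalised features (e.g. MTF at Nyquist, PSNR, entropy)
features = [0.8, 0.7, 0.9]
weights = [0.5, 0.3, 0.2]                # MTF weighted most heavily
score = fuzzy_score(features, grade_centres=[0.25, 0.5, 0.75], weights=weights)
```

    Changing the weight vector changes the ranking, which is exactly the weight-sensitivity effect reported in the abstract.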

  1. Chemistry Graduate Teaching Assistants' Experiences in Academic Laboratories and Development of a Teaching Self-image

    Science.gov (United States)

    Gatlin, Todd Adam

    Graduate teaching assistants (GTAs) play a prominent role in chemistry laboratory instruction at research universities. They teach almost all undergraduate chemistry laboratory courses. However, their role in laboratory instruction has often been overlooked in educational research. Research interest in chemistry GTAs has focused on training and perceived expectations, but less attention has been paid to their experiences or their potential benefits from teaching. This work was designed to investigate GTAs' experiences in, and benefits from, laboratory instructional environments. This dissertation includes three related studies on GTAs' experiences teaching in general chemistry laboratories. Qualitative methods were used for each study. First, phenomenological analysis was used to explore GTAs' experiences in an expository laboratory program. Post-teaching interviews were the primary data source. GTAs' experiences were described in three dimensions: doing, knowing, and transferring. Gains available to GTAs revolved around general teaching skills. However, no gains specifically related to scientific development were found in this laboratory format. Case-study methods were used to explore and illustrate ways GTAs develop a GTA self-image, that is, the way they see themselves as instructors. Two general chemistry laboratory programs representing two very different instructional frameworks were chosen as the context of this study. The first program used a cooperative project-based approach. The second program used weekly verification-type activities. End-of-semester interviews were collected and served as the primary data source. A follow-up case study of a new cohort of GTAs in the cooperative problem-based laboratory was undertaken to investigate changes in GTAs' self-images over the course of one semester. Pre-semester and post-semester interviews served as the primary data source. 
Findings suggest that GTAs' construction of their self-image is shaped through the

  2. Web Based Distributed Coastal Image Analysis System, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This project develops a Web-based distributed image analysis system processing Moderate Resolution Imaging Spectroradiometer (MODIS) data to provide decision...

  3. Optical image reconstruction using DC data: simulations and experiments

    International Nuclear Information System (INIS)

    Huabei Jiang; Paulsen, K.D.; Oesterberg, U.L.

    1996-01-01

    In this paper, we explore optical image formation using a diffusion approximation of light propagation in tissue which is modelled with a finite-element method for optically heterogeneous media. We demonstrate successful image reconstruction based on absolute experimental DC data obtained with a continuous wave 633 nm He-Ne laser system and a 751 nm diode laser system in laboratory phantoms having two optically distinct regions. The experimental systems used exploit a tomographic type of data collection scheme that provides information from which a spatially variable optical property map is deduced. Reconstruction of scattering coefficient only and simultaneous reconstruction of both scattering and absorption profiles in tissue-like phantoms are obtained from measured and simulated data. Images with different contrast levels between the heterogeneity and the background are also reported and the results show that although it is possible to obtain qualitative visual information on the location and size of a heterogeneity, it may not be possible to quantitatively resolve contrast levels or optical properties using reconstructions from DC data only. Sensitivity of image reconstruction to noise in the measurement data is investigated through simulations. The application of boundary constraints has also been addressed. (author)

  4. Pleasant/Unpleasant Filtering for Affective Image Retrieval Based on Cross-Correlation of EEG Features

    Directory of Open Access Journals (Sweden)

    Keranmu Xielifuguli

    2014-01-01

    Full Text Available People often make decisions based on sensitivity rather than rationality. In the field of biological information processing, methods are available for analyzing biological information directly, based on electroencephalography (EEG), to determine the pleasant/unpleasant reactions of users. In this study, we propose a sensitivity filtering technique for discriminating preferences (pleasant/unpleasant) for images using a sensitivity image filtering system based on EEG. Using a set of images retrieved by similarity retrieval, we perform the sensitivity-based pleasant/unpleasant classification of images based on the affective features extracted from images with the maximum entropy method (MEM). In the present study, the affective features comprised cross-correlation features obtained from EEGs produced when an individual observed an image. However, it is difficult to measure the EEG when a subject visualizes an unknown image. Thus, we propose a solution where a linear regression method based on canonical correlation is used to estimate the cross-correlation features from image features. Experiments were conducted to evaluate the validity of sensitivity filtering compared with image similarity retrieval methods based on image features. We found that sensitivity filtering using color correlograms was suitable for the classification of preferred images, while sensitivity filtering using local binary patterns was suitable for the classification of unpleasant images. Moreover, sensitivity filtering using local binary patterns for unpleasant images had a 90% success rate. Thus, we conclude that the proposed method is efficient for filtering unpleasant images.

  5. Canny edge-based deformable image registration.

    Science.gov (United States)

    Kearney, Vasant; Huang, Yihui; Mao, Weihua; Yuan, Baohong; Tang, Liping

    2017-02-07

    This work focuses on developing a 2D Canny edge-based deformable image registration (Canny DIR) algorithm to register in vivo white light images taken at various time points. This method uses a sparse interpolation deformation algorithm to sparsely register regions of the image with strong edge information. A stability criterion is enforced which removes regions of edges that do not deform in a smooth, uniform manner. Using a synthetic mouse surface ground truth model, the accuracy of the Canny DIR algorithm was evaluated under axial rotation in the presence of deformation. The accuracy was also tested using fluorescent dye injections, which were then used for gamma analysis to establish a second ground truth. The results indicate that the Canny DIR algorithm performs better than rigid registration, intensity-corrected Demons, and distinctive features for all evaluation metrics and ground truth scenarios. In conclusion, Canny DIR performs well in the presence of the unique lighting and shading variations associated with white-light-based image registration.
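
    A minimal sketch of the edge-detection front end such a method relies on is shown below: Sobel gradients followed by double thresholding with one-pass hysteresis. Non-maximum suppression and the sparse interpolation step of Canny DIR are omitted for brevity, and the thresholds are illustrative:

```python
import numpy as np

def sobel_edges(img, lo=0.1, hi=0.3):
    """Simplified Canny-style detector: Sobel gradient magnitude, then
    double thresholding; weak pixels survive only if an 8-neighbour is
    strong (single-pass hysteresis)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for dy in range(3):                     # correlate with the 3x3 kernels
        for dx in range(3):
            win = pad[dy:dy + h, dx:dx + w]
            gx += kx[dy, dx] * win
            gy += ky[dy, dx] * win
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12
    strong = mag >= hi
    weak = (mag >= lo) & ~strong
    sp = np.pad(strong, 1)
    touch = np.zeros((h, w), dtype=bool)    # any strong 8-neighbour?
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            touch |= sp[dy:dy + h, dx:dx + w]
    return strong | (weak & touch)

# synthetic image: dark left half, bright right half -> one vertical edge
img = np.zeros((20, 20))
img[:, 10:] = 1.0
edges = sobel_edges(img)
```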

  6. Illumination compensation in ground based hyperspectral imaging

    Science.gov (United States)

    Wendel, Alexander; Underwood, James

    2017-07-01

    Hyperspectral imaging has emerged as an important tool for analysing vegetation data in agricultural applications. Recently, low-altitude and ground-based hyperspectral imaging solutions have come to the fore, providing very high resolution data for mapping and studying large areas of crops in detail. However, these platforms introduce a unique set of challenges that need to be overcome to ensure consistent, accurate and timely acquisition of data. One particular problem is dealing with changes in environmental illumination while operating with natural light under cloud cover, which can have considerable effects on spectral shape. In the past this has commonly been addressed by imaging known reference targets at the time of data acquisition, by direct measurement of irradiance, or by atmospheric modelling. While capturing a reference panel continuously or very frequently allows accurate compensation for illumination changes, this is often not practical with ground-based platforms, and impossible in aerial applications. This paper examines the use of an autonomous unmanned ground vehicle (UGV) to gather high resolution hyperspectral imaging data of crops under natural illumination. A process of illumination compensation is performed to extract the inherent reflectance properties of the crops, despite variable illumination. This work adapts a previously developed subspace model approach to reflectance and illumination recovery. Though tested on a ground vehicle in this paper, it is also applicable to low-altitude unmanned aerial hyperspectral imagery. The method uses occasional observations of reference panel training data from within the same or other datasets, which enables a practical field protocol that minimises in-field manual labour. This paper tests the new approach, comparing it against traditional methods. Several illumination compensation protocols for high-volume ground-based data collection are presented based on the results. The findings in this paper are
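
    The traditional reference-panel correction mentioned above reduces to a per-band division: because the panel's reflectance is known, dividing a measured spectrum by the panel's measured spectrum cancels the unknown illumination term. A minimal sketch with synthetic spectra (all values simulated, not from the paper):

```python
import numpy as np

def reflectance_from_panel(radiance, panel_radiance, panel_reflectance=0.99):
    """Empirical-line style correction: divide each measured spectrum by
    the spectrum of a reference panel of known reflectance, cancelling
    the per-band illumination."""
    return radiance / panel_radiance * panel_reflectance

# simulate: measured radiance = true reflectance * unknown illumination
rng = np.random.default_rng(1)
illumination = rng.uniform(0.5, 1.5, size=50)    # per-band irradiance
true_refl = rng.uniform(0.1, 0.9, size=50)       # crop reflectance
panel_refl = 0.99
measured = true_refl * illumination
panel_measured = panel_refl * illumination
recovered = reflectance_from_panel(measured, panel_measured, panel_refl)
```

    The subspace approach in the paper replaces the continuous panel observations required here with occasional ones, which is what makes the field protocol practical.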

  7. Imaging of skull base: Pictorial essay

    International Nuclear Information System (INIS)

    Raut, Abhijit A; Naphade, Prashant S; Chawla, Ashish

    2012-01-01

    The skull base anatomy is complex. Numerous vital neurovascular structures pass through multiple channels and foramina located in the skull base. With the advent of computerized tomography (CT) and magnetic resonance imaging (MRI), accurate preoperative lesion localization and evaluation of its relationship with adjacent neurovascular structures are possible. It is imperative that radiologists and skull base surgeons be familiar with this complex anatomy for localizing skull base lesions, reaching an appropriate differential diagnosis, and deciding the optimal surgical approach. CT and MRI are complementary to each other and are often used together to demonstrate the full disease extent. This article focuses on the radiological anatomy of the skull base and discusses a few of the common pathologies affecting it

  8. ImageSURF: An ImageJ Plugin for Batch Pixel-Based Image Segmentation Using Random Forests

    Directory of Open Access Journals (Sweden)

    Aidan O'Mara

    2017-11-01

    Full Text Available Image segmentation is a necessary step in automated quantitative imaging. ImageSURF is a macro-compatible ImageJ2/FIJI plugin for pixel-based image segmentation that considers a range of image derivatives to train pixel classifiers which are then applied to image sets of any size to produce segmentations without bias in a consistent, transparent and reproducible manner. The plugin is available from ImageJ update site http://sites.imagej.net/ImageSURF/ and source code from https://github.com/omaraa/ImageSURF. Funding statement: This research was supported by an Australian Government Research Training Program Scholarship.

  9. Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance

    Directory of Open Access Journals (Sweden)

    Liyun Zhuang

    2017-01-01

    Full Text Available This paper puts forward a novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE, which effectively increases the contrast of the input image with brightness and details well preserved compared with some other methods based on histogram equalization (HE. Firstly, the histogram of input image is divided into four segments based on the mean and variance of luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via the concatenation of the processed subhistograms. Lastly, the normalization method is deployed on intensity levels, and the integration of the processed image with the input image is performed. 100 benchmark images from a public image database named CVG-UGR-Database are used for comparison with other state-of-the-art methods. The experiment results show that the algorithm can not only enhance image information effectively but also well preserve brightness and details of the original image.

  10. Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance

    Science.gov (United States)

    2017-01-01

    This paper puts forward a novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image with brightness and details well preserved compared with some other methods based on histogram equalization (HE). Firstly, the histogram of input image is divided into four segments based on the mean and variance of luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via the concatenation of the processed subhistograms. Lastly, the normalization method is deployed on intensity levels, and the integration of the processed image with the input image is performed. 100 benchmark images from a public image database named CVG-UGR-Database are used for comparison with other state-of-the-art methods. The experiment results show that the algorithm can not only enhance image information effectively but also well preserve brightness and details of the original image. PMID:29403529

  11. Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance.

    Science.gov (United States)

    Zhuang, Liyun; Guan, Yepeng

    2017-01-01

    This paper puts forward a novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image with brightness and details well preserved compared with some other methods based on histogram equalization (HE). Firstly, the histogram of input image is divided into four segments based on the mean and variance of luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via the concatenation of the processed subhistograms. Lastly, the normalization method is deployed on intensity levels, and the integration of the processed image with the input image is performed. 100 benchmark images from a public image database named CVG-UGR-Database are used for comparison with other state-of-the-art methods. The experiment results show that the algorithm can not only enhance image information effectively but also well preserve brightness and details of the original image.
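
    One plausible reading of the segmentation rule described above (cutting the grey range at m - s, m, and m + s, where m and s are the luminance mean and standard deviation, before equalizing each segment within its own bounds) can be sketched as follows. The exact boundaries and the final normalization/integration steps of MVSIHE may differ; this only illustrates the structure of the method:

```python
import numpy as np

def subimage_hist_eq(img):
    """Mean/variance-based sub-histogram equalization sketch: cut the
    grey range at m - s, m, m + s, equalize each segment within its own
    bounds, and concatenate the results."""
    img = img.astype(np.int64)
    m, s = img.mean(), img.std()
    cuts = np.clip(np.array([0, m - s, m, m + s, 256]), 0, 256).astype(int)
    out = np.zeros_like(img)
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        if hi <= lo:
            continue
        mask = (img >= lo) & (img < hi)
        if not mask.any():
            continue
        hist, _ = np.histogram(img[mask], bins=hi - lo, range=(lo, hi))
        cdf = hist.cumsum() / hist.sum()
        # map each level into [lo, hi - 1] according to its segment CDF
        out[mask] = lo + np.round(cdf[img[mask] - lo] * (hi - 1 - lo)).astype(int)
    return out.astype(np.uint8)

rng = np.random.default_rng(2)
img = rng.normal(128, 20, size=(64, 64)).clip(0, 255).astype(np.uint8)
eq = subimage_hist_eq(img)
```

    Because each segment is equalized only within its own bounds, the global brightness ordering across segments is preserved, which is how such methods retain brightness while boosting contrast.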

  12. Image preprocessing study on KPCA-based face recognition

    Science.gov (United States)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural, and convenient advantages, has attracted more and more attention. This paper investigates a face recognition system comprising face detection, feature extraction, and recognition, mainly by studying the related theory and key technology of various preprocessing methods in the face detection process; using the KPCA method, it focuses on the recognition results obtained with different preprocessing methods. We choose the YCbCr color space for skin segmentation and integral projection for face location. We use morphological opening and closing (built from erosion and dilation) and an illumination compensation method to preprocess the face images, and then apply the face recognition method based on kernel principal component analysis, with experiments carried out on a typical face database. The algorithms were implemented on the MATLAB platform. Experimental results show that, under certain conditions, the kernel extension of the PCA algorithm, being a nonlinear feature extraction method, lets the extracted features represent the original image information better and can obtain a higher recognition rate. In the image preprocessing stage, we found that different operations on the images can produce different results, and hence different recognition rates in the recognition stage. At the same time, in kernel principal component analysis, the degree of the polynomial kernel function can affect the recognition result.
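
    The kernel PCA projection underlying such a system can be sketched in a few lines: build a kernel matrix, double-centre it in feature space, and project onto its leading eigenvectors. The kernels, parameters, and toy data below are illustrative, not the paper's face-image pipeline:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.5, degree=None):
    """KPCA sketch: RBF kernel by default, or a polynomial kernel whose
    degree (as the abstract notes) affects the result."""
    if degree is not None:
        K = (X @ X.T + 1.0) ** degree                  # polynomial kernel
    else:
        sq = np.sum(X ** 2, axis=1)
        K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one         # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, order] / np.sqrt(np.maximum(vals[order], 1e-12))
    return Kc @ alphas                                 # projected samples

rng = np.random.default_rng(3)
# two Gaussian blobs standing in for two face classes
X = np.vstack([rng.normal(0, 0.3, (20, 10)), rng.normal(2, 0.3, (20, 10))])
Z = kernel_pca(X, n_components=2, gamma=0.1)
```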

  13. Microcontroller-based Feedback Control Laboratory Experiments

    Directory of Open Access Journals (Sweden)

    Chiu Choi

    2014-06-01

    Full Text Available This paper is a result of implementing the recommendations on enhancing the hands-on experience in control engineering education using single-chip, small-scale computers such as microcontrollers. A set of microcontroller-based feedback control experiments was developed for the Electrical Engineering curriculum at the University of North Florida. These experiments provided hands-on techniques that students can utilize in the development of complete solutions for a number of servo control problems. Significant effort was devoted to software development of feedback controllers and the associated signal conditioning circuits interfacing between the microcontroller and the physical plant. These experiments have stimulated the interest of our students in control engineering.
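
    A discrete PID loop of the kind implemented in such servo experiments can be sketched as below (written in Python for illustration rather than microcontroller C; the plant model, gains, and saturation limits are hypothetical, not from the paper):

```python
class PID:
    """Textbook discrete PID controller with output saturation, the core
    of a microcontroller feedback loop."""
    def __init__(self, kp, ki, kd, dt, out_min=-1.0, out_max=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(self.out_min, min(self.out_max, u))  # actuator limits

# drive a first-order plant y' = (u - y) / tau toward a 0.5 setpoint
pid = PID(kp=2.0, ki=1.0, kd=0.05, dt=0.01)
y, tau = 0.0, 0.1
for _ in range(2000):                 # 20 s of simulated time
    u = pid.update(0.5, y)
    y += (u - y) / tau * 0.01         # Euler step of the plant
```

    On a real microcontroller the same `update` logic would run in a timer interrupt, with the ADC reading replacing `y` and a PWM duty cycle replacing `u`.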

  14. Image based 3D city modeling : Comparative study

    Directory of Open Access Journals (Sweden)

    S. P. Singh

    2014-06-01

    Full Text Available A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used to generate virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling, close-range-photogrammetry-based modeling, and modeling based mainly on computer vision techniques. SketchUp, CityEngine, Photomodeler, and Agisoft Photoscan are the main software packages representing these approaches, respectively. These packages take different approaches and methods to image-based 3D city modeling. A literature review shows that, to date, no comprehensive comparative study is available on creating complete 3D city models from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques, and output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India). This 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors, and work experiences, gives a brief introduction to the strengths and weaknesses of the four image-based techniques, and offers comments on what can and cannot be done with each package. It concludes that every package has advantages and limitations, and the choice of software depends on the user requirements of the 3D project. For a typical visualization project, SketchUp is a good option. For 3D documentation records, Photomodeler gives good

  15. GENDER-BASED VIOLENCE: COMPARING THE EXPERIENCES ...

    African Journals Online (AJOL)

    User

    One of the biggest challenges Sudanese women encounter in their host ... as well as embracing the new culture, which presents them with opportunities for furthering ... and attained higher education are less likely to secure jobs due to gender-based ... However, the women continue to experience, within the Sudanese.

  16. Remote Sensing Image Classification Based on Stacked Denoising Autoencoder

    Directory of Open Access Journals (Sweden)

    Peng Liang

    2017-12-01

    Full Text Available To address the accuracy bottleneck that conventional remote sensing image classification methods have run into, a new remote sensing image classification method inspired by deep learning is proposed, based on the Stacked Denoising Autoencoder. First, the deep network model is built by stacking layers of Denoising Autoencoders. Then, with noise-corrupted input, an unsupervised greedy layer-wise training algorithm is used to train each layer in turn for more robust feature expression; characteristics are then obtained by supervised learning with a Back Propagation (BP) neural network, and the whole network is optimized by error back propagation. Finally, Gaofen-1 satellite (GF-1) remote sensing data are used for evaluation: the total accuracy and kappa accuracy reach 95.7% and 0.955, respectively, higher than those of the Support Vector Machine and the Back Propagation neural network. The experimental results show that the proposed method can effectively improve the accuracy of remote sensing image classification.
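
    A single denoising autoencoder layer, the building block that is stacked in such a network, can be sketched with NumPy: corrupt the input, then train the layer to reconstruct the *clean* input. The toy data, tied weights, and hyperparameters below are illustrative, not the GF-1 configuration:

```python
import numpy as np

rng = np.random.default_rng(4)

# toy "pixels": 8-dimensional samples drawn from two prototypes
protos = rng.random((2, 8))
X = protos[rng.integers(0, 2, 200)] + 0.05 * rng.normal(size=(200, 8))

n_in, n_hid, lr = 8, 4, 0.5
W = rng.normal(0, 0.1, (n_in, n_hid))
b, c = np.zeros(n_hid), np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for epoch in range(300):
    noisy = X + 0.2 * rng.normal(size=X.shape)   # corrupt the input
    H = sigmoid(noisy @ W + b)                   # encode
    Y = sigmoid(H @ W.T + c)                     # decode (tied weights)
    err = Y - X                                  # target is the clean input
    losses.append((err ** 2).mean())
    # backpropagation for the tied-weight denoising autoencoder
    dY = 2 * err * Y * (1 - Y) / X.shape[0]
    dH = (dY @ W) * H * (1 - H)
    gW = noisy.T @ dH + dY.T @ H                 # both uses of W contribute
    W -= lr * gW
    b -= lr * dH.sum(axis=0)
    c -= lr * dY.sum(axis=0)
```

    Stacking means feeding `H` of one trained layer as the input of the next, before the supervised BP fine-tuning stage described in the abstract.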

  17. Self-training-based spectral image reconstruction for art paintings with multispectral imaging.

    Science.gov (United States)

    Xu, Peng; Xu, Haisong; Diao, Changyu; Ye, Zhengnan

    2017-10-20

    A self-training-based spectral reflectance recovery method was developed to accurately reconstruct the spectral images of art paintings with multispectral imaging. By partitioning the multispectral images with the k-means clustering algorithm, the training samples are directly extracted from the art painting itself to restrain the deterioration of spectral estimation caused by the material inconsistency between the training samples and the art painting. Coordinate paper is used to locate the extracted training samples. The spectral reflectances of the extracted training samples are acquired indirectly with a spectroradiometer, and the circle Hough transform is adopted to detect the circle measuring area of the spectroradiometer. Through simulation and a practical experiment, the implementation of the proposed method is explained in detail, and it is verified to have better reflectance recovery performance than that using the commercial target and is comparable to the approach using a painted color target.
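
    The linear recovery step at the core of such spectral estimation can be sketched as a least-squares fit from multispectral responses to reflectances over the extracted training samples. The spectral sensitivities, smooth-reflectance basis, and sample counts below are synthetic assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(5)
n_bands, n_channels = 31, 8

# smooth synthetic reflectances built from 5 Gaussian basis functions,
# standing in for training samples extracted from the painting itself
wl = np.linspace(0.0, 1.0, n_bands)
centres = np.linspace(0.1, 0.9, 5)
basis = np.exp(-((wl[None, :] - centres[:, None]) ** 2) / 0.02)

def random_refl(n):
    return rng.random((n, 5)) @ basis / 3.0

S = rng.random((n_channels, n_bands))   # assumed camera spectral sensitivities
train_refl = random_refl(60)            # "self-training" samples
train_resp = train_refl @ S.T           # their simulated camera responses

# recovery matrix fitted by least squares to the extracted training set
M = np.linalg.lstsq(train_resp, train_refl, rcond=None)[0]

test_refl = random_refl(10)
recovered = (test_refl @ S.T) @ M
rmse = np.sqrt(((recovered - test_refl) ** 2).mean())
```

    The paper's point is that drawing `train_refl` from the painting itself (rather than a commercial chart) keeps the training set and the target materials consistent, which is what this fit depends on.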

  18. Tie Points Extraction for SAR Images Based on Differential Constraints

    Science.gov (United States)

    Xiong, X.; Jin, G.; Xu, Q.; Zhang, H.

    2018-04-01

    Automatically extracting tie points (TPs) on large-size synthetic aperture radar (SAR) images is still challenging because the efficiency and correct ratio of the image matching need to be improved. This paper proposes an automatic TPs extraction method based on differential constraints for large-size SAR images obtained from approximately parallel tracks, between which the relative geometric distortions are small in azimuth direction and large in range direction. Image pyramids are built firstly, and then corresponding layers of pyramids are matched from the top to the bottom. In the process, the similarity is measured by the normalized cross correlation (NCC) algorithm, which is calculated from a rectangular window with the long side parallel to the azimuth direction. False matches are removed by the differential constrained random sample consensus (DC-RANSAC) algorithm, which appends strong constraints in azimuth direction and weak constraints in range direction. Matching points in the lower pyramid images are predicted with the local bilinear transformation model in range direction. Experiments performed on ENVISAT ASAR and Chinese airborne SAR images validated the efficiency, correct ratio and accuracy of the proposed method.

  19. TIE POINTS EXTRACTION FOR SAR IMAGES BASED ON DIFFERENTIAL CONSTRAINTS

    Directory of Open Access Journals (Sweden)

    X. Xiong

    2018-04-01

    Full Text Available Automatically extracting tie points (TPs) on large-size synthetic aperture radar (SAR) images is still challenging because the efficiency and correct ratio of the image matching need to be improved. This paper proposes an automatic TPs extraction method based on differential constraints for large-size SAR images obtained from approximately parallel tracks, between which the relative geometric distortions are small in azimuth direction and large in range direction. Image pyramids are built firstly, and then corresponding layers of pyramids are matched from the top to the bottom. In the process, the similarity is measured by the normalized cross correlation (NCC) algorithm, which is calculated from a rectangular window with the long side parallel to the azimuth direction. False matches are removed by the differential constrained random sample consensus (DC-RANSAC) algorithm, which appends strong constraints in azimuth direction and weak constraints in range direction. Matching points in the lower pyramid images are predicted with the local bilinear transformation model in range direction. Experiments performed on ENVISAT ASAR and Chinese airborne SAR images validated the efficiency, correct ratio and accuracy of the proposed method.
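
    The NCC similarity measure used above can be sketched as follows; the rectangular window with its long side in one direction is emulated here by a template that is longer horizontally. The sizes and data are illustrative:

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross correlation between two equally sized windows."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def match_ncc(image, template):
    """Exhaustively slide the template over the image and return the
    best-scoring top-left position and its NCC score."""
    th, tw = template.shape
    h, w = image.shape
    best, best_pos = -2.0, (0, 0)
    for y in range(h - th + 1):
        for x in range(w - tw + 1):
            s = ncc(image[y:y + th, x:x + tw], template)
            if s > best:
                best, best_pos = s, (y, x)
    return best_pos, best

rng = np.random.default_rng(6)
image = rng.random((40, 40))
template = image[12:20, 25:37]     # 8 x 12 window, long side horizontal
pos, score = match_ncc(image, template)
```

    In the pyramid scheme of the paper, such matches found at a coarse level constrain the search windows at the next finer level.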

  20. Characterization of porcine eyes based on autofluorescence lifetime imaging

    Science.gov (United States)

    Batista, Ana; Breunig, Hans Georg; Uchugonova, Aisada; Morgado, António Miguel; König, Karsten

    2015-03-01

    Multiphoton microscopy is a non-invasive imaging technique with ideal characteristics for biological applications. In this study, we characterize three major structures of the porcine eye, the cornea, crystalline lens, and retina, using two-photon excitation fluorescence lifetime imaging microscopy (2PE-FLIM). Samples were imaged using a laser-scanning microscope equipped with a broadband sub-15 femtosecond (fs) near-infrared laser. Signal detection was performed using a 16-channel photomultiplier tube (PMT) detector (PML-16PMT), making spectral analysis of the fluorescence lifetime data possible. To ensure a correct spectral analysis of the autofluorescence lifetime data, the spectra of the individual endogenous fluorophores were acquired both with the 16-channel PMT and with a spectrometer. All experiments were performed within 12 h of porcine eye enucleation. We were able to image the cornea, crystalline lens, and retina at multiple depths. Discrimination of each structure based on its autofluorescence intensity and lifetimes was possible, as was discrimination between different layers of the same structure. To the best of our knowledge, this is the first time that 2PE-FLIM has been used for porcine lens imaging and layer discrimination. With this study we further demonstrate the feasibility of 2PE-FLIM for imaging and differentiating three of the main components of the eye, and its potential as an ophthalmologic technique.

  1. Fast method of constructing image correlations to build a free network based on image multivocabulary trees

    Science.gov (United States)

    Zhan, Zongqian; Wang, Xin; Wei, Minglu

    2015-05-01

    In image-based three-dimensional (3-D) reconstruction, one topic of growing importance is how to quickly obtain a 3-D model from a large number of images. The retrieval of the correct and relevant images for the model poses a considerable technological challenge. The "image vocabulary tree" has been proposed as a method to search for similar images. However, significant drawbacks of this approach are its low time efficiency and barely satisfactory classification results. The method proposed here is inspired by, and improves upon, some recent methods. Specifically, vocabulary quality is considered and multivocabulary trees are designed to improve the classification result. A marked improvement was, indeed, observed in our evaluation of the proposed method. To improve time efficiency, graphics processing unit (GPU) Compute Unified Device Architecture parallel computation is applied in the multivocabulary trees. The results of the experiments showed that the GPU was three to four times more efficient than the enumeration matching and CPU methods when the number of images is large. This paper presents a reliable reference method for the rapid construction of a free network to be used for the computing of 3-D information.

  2. Relative Panoramic Camera Position Estimation for Image-Based Virtual Reality Networks in Indoor Environments

    Science.gov (United States)

    Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.

    2017-09-01

    Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated at lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite Systems (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.

  3. NSCT BASED LOCAL ENHANCEMENT FOR ACTIVE CONTOUR BASED IMAGE SEGMENTATION APPLICATION

    Directory of Open Access Journals (Sweden)

    Hiren Mewada

    2010-08-01

    Full Text Available Because of their cross-disciplinary nature, active contour modeling techniques have been utilized extensively for image segmentation. In traditional active contour segmentation techniques based on level set methods, the energy functions are defined in terms of the intensity gradient. This makes them highly sensitive to situations where the underlying image content is characterized by intensity non-homogeneities due to illumination and contrast conditions, which is the main obstacle to making them fully automatic image segmentation techniques. This paper introduces an approach to this problem based on image enhancement. The enhanced image is obtained using the nonsubsampled contourlet transform (NSCT), which strengthens the edges in directions where the illumination is poor; an active contour model based on the level set technique is then used to segment the object. Experimental results demonstrate that the proposed method can be used together with existing active contour segmentation methods in situations characterized by intensity non-homogeneity, making them fully automatic.

  4. Sociocultural experiences, body image, and indoor tanning among young adult women.

    Science.gov (United States)

    Stapleton, Jerod L; Manne, Sharon L; Greene, Kathryn; Darabos, Katie; Carpenter, Amanda; Hudson, Shawna V; Coups, Elliot J

    2017-10-01

    The purpose of this survey study was to evaluate a model of body image influences on indoor tanning behavior. Participants were 823 young adult women recruited from a probability-based web panel in the United States. Consistent with our hypothesized model, tanning-related sociocultural experiences were indirectly associated with lifetime indoor tanning use and intentions to tan as mediated through tan surveillance and tan dissatisfaction. Findings suggest the need for targeting body image constructs as mechanisms of behavior change in indoor tanning behavioral interventions.

  5. Interactive classification and content-based retrieval of tissue images

    Science.gov (United States)

    Aksoy, Selim; Marchisio, Giovanni B.; Tusk, Carsten; Koperski, Krzysztof

    2002-11-01

    We describe a system for interactive classification and retrieval of microscopic tissue images. Our system models tissues at the pixel, region, and image levels. Pixel-level features are generated using unsupervised clustering of color and texture values. Region-level features include shape information and statistics of pixel-level feature values. Image-level features include statistics and spatial relationships of regions. To reduce the gap between low-level features and high-level expert knowledge, we define the concept of prototype regions. The system learns the prototype regions in an image collection using model-based clustering and density estimation. Different tissue types are modeled using spatial relationships of these regions. Spatial relationships are represented by fuzzy membership functions. The system automatically selects significant relationships from training data and builds models which can also be updated using user relevance feedback. A Bayesian framework is used to classify tissues based on these models. Preliminary experiments show that the spatial relationship models we developed provide a flexible and powerful framework for classification and retrieval of tissue images.

  6. Regularized image denoising based on spectral gradient optimization

    International Nuclear Information System (INIS)

    Lukić, Tibor; Lindblad, Joakim; Sladoje, Nataša

    2011-01-01

    Image restoration methods, such as denoising, deblurring, inpainting, etc., are often based on the minimization of an appropriately defined energy function. We consider energy functions for image denoising which combine a quadratic data-fidelity term and a regularization term, where the properties of the latter are determined by the chosen potential function. Many potential functions have been suggested for different purposes in the literature. We compare the denoising performance achieved by ten different potential functions. Several methods for efficient minimization of regularized energy functions exist; most, however, are only applicable to particular choices of potential functions. To enable a comparison of all the observed potential functions, we propose to minimize the objective function using a spectral gradient approach, since spectral gradient methods put very weak restrictions on the potential function used. We present and evaluate the performance of one spectral conjugate gradient and one cyclic spectral gradient algorithm, and conclude from experiments that both are well suited for the task. We compare the performance with three total variation-based state-of-the-art methods for image denoising. From the empirical evaluation, we conclude that denoising using the Huber potential (for images degraded by higher levels of noise; signal-to-noise ratio below 10 dB) and the Geman and McClure potential (for less noisy images), in combination with the spectral conjugate gradient minimization algorithm, shows the overall best performance
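    As a small illustration of the general approach, the sketch below denoises a 1-D signal by minimizing a quadratic data-fidelity term plus a Huber-regularized difference term with a Barzilai-Borwein (spectral) gradient iteration. The signal, λ, δ, and the step clamp are illustrative choices, not the paper's settings:

```python
import numpy as np

def huber_grad(t, delta):
    # derivative of the Huber potential
    return np.where(np.abs(t) <= delta, t, delta * np.sign(t))

def denoise(y, lam=1.0, delta=0.5, iters=200):
    x = y.copy()
    x_prev = g_prev = None
    for _ in range(iters):
        d = np.diff(x)                      # finite differences of the signal
        reg = np.zeros_like(x)
        reg[:-1] -= huber_grad(d, delta)    # adjoint of the difference operator
        reg[1:] += huber_grad(d, delta)
        g = (x - y) + lam * reg             # gradient of the energy function
        if g_prev is None:
            step = 0.1
        else:
            s, z = x - x_prev, g - g_prev
            # BB1 spectral step, clamped for guaranteed descent
            step = min((s @ s) / max(s @ z, 1e-12), 0.2)
        x_prev, g_prev = x, g
        x = x - step * g
    return x

rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + rng.normal(0.0, 0.3, 100)
restored = denoise(noisy)
```

    The spectral step adapts to the local curvature of the energy without needing the potential's second derivative, which is why the method places so few restrictions on the potential function.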

  7. Novel spirometry based on optical surface imaging

    Energy Technology Data Exchange (ETDEWEB)

    Li, Guang, E-mail: lig2@mskcc.org; Huang, Hailiang; Li, Diana G.; Chen, Qing; Gaebler, Carl P.; Mechalakos, James [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States); Wei, Jie [Department of Computer Science, City College of New York, New York, New York 10031 (United States); Sullivan, James [Pulmonary Laboratories, Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States); Zatcky, Joan; Rimner, Andreas [Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States)

    2015-04-15

    Purpose: To evaluate the feasibility of using optical surface imaging (OSI) to measure the dynamic tidal volume (TV) of the human torso during free breathing. Methods: We performed experiments to measure volume or volume change in geometric and deformable phantoms as well as human subjects using OSI. To assess the accuracy of OSI in volume determination, we performed experiments using five geometric phantoms and two deformable body phantoms and compared the values with those derived from geometric calculations and computed tomography (CT) measurements, respectively. To apply this technique to human subjects, an institutional review board protocol was established and three healthy volunteers were studied. In the human experiment, a high-speed image capture mode of OSI was applied to acquire torso images at 4–5 frames per second, which was synchronized with conventional spirometric measurements at 5 Hz. An in-house MATLAB program was developed to interactively define the volume of interest (VOI), separate the thorax and abdomen, and automatically calculate the thoracic and abdominal volumes within the VOIs. The torso volume change (TVC = ΔV_torso = ΔV_thorax + ΔV_abdomen) was automatically calculated using full-exhalation phase as the reference. The volumetric breathing pattern (BP_v = ΔV_thorax/ΔV_torso) quantifying thoracic and abdominal volume variations was also calculated. Under quiet breathing, TVC should equal the tidal volume measured concurrently by a spirometer with a conversion factor (1.08) accounting for internal and external differences of temperature and moisture. Another MATLAB program was implemented to control the conventional spirometer that was used as the standard. Results: The volumes measured from the OSI imaging of geometric phantoms agreed with the calculated volumes with a discrepancy of 0.0% ± 1.6% (range −1.9% to 2.5%). In measurements from the deformable torso/thorax phantoms, the volume
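    The volume bookkeeping described above reduces to simple arithmetic. A numeric sketch with illustrative values in litres (the values are fabricated, and the direction of the 1.08 conversion factor is assumed here to map torso volume change to spirometer volume):

```python
# Torso volume change and volumetric breathing pattern, per the definitions
# TVC = dV_torso = dV_thorax + dV_abdomen and BP_v = dV_thorax / dV_torso.
dV_thorax = 0.18    # thoracic volume change relative to full exhalation (L)
dV_abdomen = 0.32   # abdominal volume change relative to full exhalation (L)

dV_torso = dV_thorax + dV_abdomen   # TVC
BP_v = dV_thorax / dV_torso         # fraction of breathing done by the thorax
spirometer_TV = dV_torso / 1.08     # assumed spirometer-equivalent tidal volume
```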

  8. Novel spirometry based on optical surface imaging

    International Nuclear Information System (INIS)

    Li, Guang; Huang, Hailiang; Li, Diana G.; Chen, Qing; Gaebler, Carl P.; Mechalakos, James; Wei, Jie; Sullivan, James; Zatcky, Joan; Rimner, Andreas

    2015-01-01

    Purpose: To evaluate the feasibility of using optical surface imaging (OSI) to measure the dynamic tidal volume (TV) of the human torso during free breathing. Methods: We performed experiments to measure volume or volume change in geometric and deformable phantoms as well as human subjects using OSI. To assess the accuracy of OSI in volume determination, we performed experiments using five geometric phantoms and two deformable body phantoms and compared the values with those derived from geometric calculations and computed tomography (CT) measurements, respectively. To apply this technique to human subjects, an institutional review board protocol was established and three healthy volunteers were studied. In the human experiment, a high-speed image capture mode of OSI was applied to acquire torso images at 4–5 frames per second, which was synchronized with conventional spirometric measurements at 5 Hz. An in-house MATLAB program was developed to interactively define the volume of interest (VOI), separate the thorax and abdomen, and automatically calculate the thoracic and abdominal volumes within the VOIs. The torso volume change (TVC = ΔV_torso = ΔV_thorax + ΔV_abdomen) was automatically calculated using full-exhalation phase as the reference. The volumetric breathing pattern (BP_v = ΔV_thorax/ΔV_torso) quantifying thoracic and abdominal volume variations was also calculated. Under quiet breathing, TVC should equal the tidal volume measured concurrently by a spirometer with a conversion factor (1.08) accounting for internal and external differences of temperature and moisture. Another MATLAB program was implemented to control the conventional spirometer that was used as the standard. Results: The volumes measured from the OSI imaging of geometric phantoms agreed with the calculated volumes with a discrepancy of 0.0% ± 1.6% (range −1.9% to 2.5%). In measurements from the deformable torso/thorax phantoms, the volume differences measured using OSI

  9. A hash-based image encryption algorithm

    Science.gov (United States)

    Cheddad, Abbas; Condell, Joan; Curran, Kevin; McKevitt, Paul

    2010-03-01

    There exist several algorithms that deal with text encryption. However, there has been little research carried out to date on encrypting digital images or video files. This paper describes a novel way of encrypting digital images with password protection using a 1D SHA-2 algorithm coupled with a compound forward transform. A spatial mask is generated from the frequency domain by taking advantage of the conjugate symmetry of the complex imaginary part of the Fourier transform. This mask is then XORed with the bit stream of the original image; exclusive OR (XOR) is a symmetric logical operation that yields 0 if both binary pixels are equal and 1 otherwise, which can be verified simply as (pixel1 + pixel2) mod 2. Finally, confusion is applied based on the displacement of the cipher's pixels in accordance with a reference mask. Both security and performance aspects of the proposed method are analyzed, and the analysis shows that the method is efficient and secure from a cryptographic point of view. One of the merits of such an algorithm is that it forces a continuous tone payload, a steganographic term, to map onto a balanced bit distribution sequence. This bit balance is needed in certain applications, such as steganography and watermarking, since it is likely to have a balanced perceptibility effect on the cover image when embedding.
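    A toy sketch of the core mechanism: derive a mask from a password hash via a frequency-domain transform, threshold it into balanced bits, and XOR it with the image. The mask construction below (random phases seeded by SHA-256, median threshold) is a simplified stand-in for the paper's scheme, not its exact algorithm:

```python
import hashlib
import numpy as np

def make_mask(password, shape):
    # seed a generator from the password hash, so the mask is reproducible
    seed = int.from_bytes(hashlib.sha256(password.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, shape)
    field = np.fft.ifft2(np.exp(1j * phase)).imag   # imaginary part of the transform
    # threshold into a balanced 0/255 bit mask (roughly half ones by median)
    return (field > np.median(field)).astype(np.uint8) * 255

def xor_image(img, password):
    return img ^ make_mask(password, img.shape)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (16, 16), dtype=np.uint8)
enc = xor_image(img, "secret")
dec = xor_image(enc, "secret")    # XOR is its own inverse
```

    Because XOR is an involution, encryption and decryption are the same operation with the same password-derived mask.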

  10. Unsupervised image matching based on manifold alignment.

    Science.gov (United States)

    Pei, Yuru; Huang, Fengchun; Shi, Fuhao; Zha, Hongbin

    2012-08-01

    This paper challenges the issue of automatic matching between two image sets with similar intrinsic structures and different appearances, especially when there is no prior correspondence. An unsupervised manifold alignment framework is proposed to establish correspondence between data sets by a mapping function in the mutual embedding space. We introduce a local similarity metric based on parameterized distance curves to represent the connection of one point with the rest of the manifold. A small set of valid feature pairs can be found without manual interactions by matching the distance curve of one manifold with the curve cluster of the other manifold. To avoid potential confusions in image matching, we propose an extended affine transformation to solve the nonrigid alignment in the embedding space. The comparatively tight alignments and the structure preservation can be obtained simultaneously. The point pairs with the minimum distance after alignment are viewed as the matchings. We apply manifold alignment to image set matching problems. The correspondence between image sets of different poses, illuminations, and identities can be established effectively by our approach.

  11. Toward CMOS image sensor based glucose monitoring.

    Science.gov (United States)

    Devadhasan, Jasmine Pramila; Kim, Sanghyo

    2012-09-07

    Complementary metal oxide semiconductor (CMOS) image sensors are a powerful tool for biosensing applications. In the present study, a CMOS image sensor has been exploited for detecting glucose levels with high sensitivity through simple photon count variation. Various concentrations of glucose (100 mg dL⁻¹ to 1000 mg dL⁻¹) were added onto a simple poly-dimethylsiloxane (PDMS) chip and the oxidation of glucose was catalyzed by an enzymatic reaction. Oxidized glucose produces a brown color with the help of a chromogen during the enzymatic reaction, and the color density varies with the glucose concentration. Photons pass through the PDMS chip with varying color density and hit the sensor surface. The photon count, which depends on the color density and hence on the glucose concentration, was recognized by the CMOS image sensor and converted into digital form. By correlating the obtained digital results with glucose concentration, it is possible to measure a wide range of blood glucose levels with good linearity using a CMOS image sensor; this technique could therefore promote convenient point-of-care diagnosis.
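    The read-out step amounts to calibration: fit a line relating photon counts to known glucose standards, then invert it for unknown samples. A sketch with fabricated numbers (a real sensor's counts and linearity will differ):

```python
import numpy as np

# Calibration standards: darker reaction color -> fewer photons reach the sensor.
conc = np.array([100, 250, 500, 750, 1000])        # glucose, mg/dL (known standards)
counts = np.array([9100, 8300, 6900, 5600, 4200])  # photon counts (made-up values)

# least-squares line: counts ~= slope * conc + intercept
slope, intercept = np.polyfit(conc, counts, 1)

def estimate_concentration(photon_count):
    # invert the calibration line for an unknown sample
    return (photon_count - intercept) / slope
```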

  12. BEE FORAGE MAPPING BASED ON MULTISPECTRAL IMAGES LANDSAT

    Directory of Open Access Journals (Sweden)

    A. Moskalenko

    2016-10-01

    Full Text Available Possibilities of bee forage identification and mapping based on multispectral images have been shown in the research. Spectral brightness of bee forage has been determined with the use of satellite images. The effectiveness of some methods of image classification for mapping of bee forage is shown. Keywords: bee forage, mapping, multispectral images, image classification.

  13. Microcontroller-based locking in optics experiments

    International Nuclear Information System (INIS)

    Huang, K.; Le Jeannic, H.; Ruaudel, J.; Morin, O.; Laurat, J.

    2014-01-01

    Optics experiments critically require the stable and accurate locking of relative phases between light beams or the stabilization of Fabry-Perot cavity lengths. Here, we present a simple and inexpensive technique based on a stand-alone microcontroller unit to perform such tasks. Easily programmed in C language, this reconfigurable digital locking system also enables automatic relocking and sequential functioning. Different algorithms are detailed and applied to fringe locking and to low- and high-finesse optical cavity stabilization, without the need of external modulations or error signals. This technique can readily replace a number of analog locking systems advantageously in a variety of optical experiments
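    The modulation-free locking idea can be illustrated with a simple hill-climb loop of the kind a microcontroller might run: step the actuator, keep the direction while the monitored signal improves, and reverse when it degrades. The fringe model and step size below are invented for illustration; the paper's actual algorithms differ in detail:

```python
import math

def signal(pos):
    # stand-in for a fringe/cavity transmission signal peaked at pos = 0.3
    return math.cos(2.0 * math.pi * (pos - 0.3))

def lock(pos=0.0, step=0.01, iters=500):
    direction = 1
    last = signal(pos)
    for _ in range(iters):
        pos += direction * step          # nudge the actuator (e.g. a piezo)
        now = signal(pos)
        if now < last:                   # signal got worse: reverse the walk
            direction = -direction
        last = now
    return pos

locked = lock()   # ends up dithering within one step of the peak
```

    No external modulation or error signal is needed; the controller infers the slope from successive samples of the signal itself.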

  14. Microcontroller-based locking in optics experiments.

    Science.gov (United States)

    Huang, K; Le Jeannic, H; Ruaudel, J; Morin, O; Laurat, J

    2014-12-01

    Optics experiments critically require the stable and accurate locking of relative phases between light beams or the stabilization of Fabry-Perot cavity lengths. Here, we present a simple and inexpensive technique based on a stand-alone microcontroller unit to perform such tasks. Easily programmed in C language, this reconfigurable digital locking system also enables automatic relocking and sequential functioning. Different algorithms are detailed and applied to fringe locking and to low- and high-finesse optical cavity stabilization, without the need of external modulations or error signals. This technique can readily replace a number of analog locking systems advantageously in a variety of optical experiments.

  15. Microcontroller-based locking in optics experiments

    Energy Technology Data Exchange (ETDEWEB)

    Huang, K. [Laboratoire Kastler Brossel, UPMC-Sorbonne Universités, CNRS, ENS-PSL Research University, Collège de France, 4 place Jussieu, 75005 Paris (France); State Key Laboratory of Precision Spectroscopy, East China Normal University, Shanghai 200062 (China); Le Jeannic, H.; Ruaudel, J.; Morin, O.; Laurat, J., E-mail: julien.laurat@upmc.fr [Laboratoire Kastler Brossel, UPMC-Sorbonne Universités, CNRS, ENS-PSL Research University, Collège de France, 4 place Jussieu, 75005 Paris (France)

    2014-12-15

    Optics experiments critically require the stable and accurate locking of relative phases between light beams or the stabilization of Fabry-Perot cavity lengths. Here, we present a simple and inexpensive technique based on a stand-alone microcontroller unit to perform such tasks. Easily programmed in C language, this reconfigurable digital locking system also enables automatic relocking and sequential functioning. Different algorithms are detailed and applied to fringe locking and to low- and high-finesse optical cavity stabilization, without the need of external modulations or error signals. This technique can readily replace a number of analog locking systems advantageously in a variety of optical experiments.

  16. Fully automated rodent brain MR image processing pipeline on a Midas server: from acquired images to region-based statistics.

    Science.gov (United States)

    Budin, Francois; Hoogstoel, Marion; Reynolds, Patrick; Grauer, Michael; O'Leary-Moore, Shonagh K; Oguz, Ipek

    2013-01-01

    Magnetic resonance imaging (MRI) of rodent brains enables study of the development and the integrity of the brain under certain conditions (alcohol, drugs, etc.). However, these images are difficult to analyze for biomedical researchers with limited image processing experience. In this paper we present an image processing pipeline running on a Midas server, a web-based data storage system. It is composed of the following steps: rigid registration, skull-stripping, average computation, average parcellation, parcellation propagation to individual subjects, and computation of region-based statistics on each image. The pipeline is easy to configure and requires very little image processing knowledge. We present results obtained by processing a data set using this pipeline and demonstrate how this pipeline can be used to find differences between populations.

  17. Image superresolution of cytology images using wavelet based patch search

    Science.gov (United States)

    Vargas, Carlos; García-Arteaga, Juan D.; Romero, Eduardo

    2015-01-01

    Telecytology is a new research area that holds the potential of significantly reducing the number of deaths due to cervical cancer in developing countries. This work presents a novel super-resolution technique that couples high and low frequency information in order to reduce the bandwidth consumption of cervical image transmission. The proposed approach starts by decomposing the high resolution images into wavelets and transmitting only the lower frequency coefficients. The transmitted coefficients are used to reconstruct an image of the original size. Additional details are added by iteratively replacing patches of the wavelet-reconstructed image with equivalent high resolution patches from a previously acquired image database. Finally, the original transmitted low frequency coefficients are used to correct the final image. Results show a higher signal-to-noise ratio for the proposed method than for simply discarding high frequency wavelet coefficients or directly replacing down-sampled patches from the image database.
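    To make the bandwidth trade-off concrete, the sketch below computes a one-level Haar low-pass band (what would be transmitted), reconstructs at the original size, and measures the residual detail that the patch-search stage would have to restore. The toy image and plain-numpy Haar step are simplifications of the paper's pipeline:

```python
import numpy as np

def haar_ll(img):
    # one-level Haar low-low band: each coefficient is a 2x2 block mean
    a = (img[0::2] + img[1::2]) / 2
    return (a[:, 0::2] + a[:, 1::2]) / 2

def upsample_ll(ll):
    # naive receiver-side reconstruction: replicate each LL coefficient
    # over its 2x2 block (no patch search, no detail bands)
    return np.kron(ll, np.ones((2, 2)))

rng = np.random.default_rng(0)
img = rng.uniform(0, 1, (8, 8)).cumsum(axis=0).cumsum(axis=1)  # smooth test image
ll = haar_ll(img)                        # a quarter of the data is transmitted
approx = upsample_ll(ll)                 # full-size reconstruction
detail = np.abs(img - approx).mean()     # what the patch search must recover
```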

  18. The Role of Consumer Experiences in Building the image of brands: A Study in Airlines

    Directory of Open Access Journals (Sweden)

    Ana Iris Tomás Vasconcelos

    2015-04-01

    Full Text Available Studies on brands and consumer experience have gained emphasis since the twentieth century; however, the relationship between these themes still has gaps. This study therefore examines the role of consumer experiences in building brand image by identifying the thoughts, feelings, and actions arising from consumers' experiences with airlines, and the types of associations that consumers make with such brands. A variation of the qualitative critical incident technique was used, considering those remembered experiences that stood out in consumers' perception. Ten users of air services were interviewed using a two-part semi-structured form: descriptions of experiences with airlines, and information about the image of airline brands. The analyzed data revealed that thoughts, feelings, and actions arising from consumer experiences become important elements in shaping the perception of airline brands. Through the consumption experience, consumers mainly use service attributes to build their perception of airline brands; these attributes are used either directly or to support other types of associations, such as those related to company size.

  19. Tag-Based Social Image Search: Toward Relevant and Diverse Results

    Science.gov (United States)

    Yang, Kuiyuan; Wang, Meng; Hua, Xian-Sheng; Zhang, Hong-Jiang

    Recent years have witnessed the great success of social media websites. Tag-based image search is an important approach to accessing image content of interest on these websites. However, the existing ranking methods for tag-based image search frequently return results that are irrelevant or lack diversity. This chapter presents a diverse relevance ranking scheme which simultaneously takes relevance and diversity into account by exploring the content of images and their associated tags. First, it estimates the relevance scores of images with respect to the query term based on both the visual information of images and the semantic information of associated tags. Then semantic similarities of social images are estimated based on their tags. Based on the relevance scores and the similarities, the ranking list is generated by a greedy ordering algorithm which optimizes Average Diverse Precision (ADP), a novel measure that is extended from the conventional Average Precision (AP). Comprehensive experiments and user studies demonstrate the effectiveness of the approach.
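    A greedy diverse-ranking step in the spirit of the chapter can be sketched as follows: at each position, pick the item that maximizes relevance minus a penalty for similarity to the items already ranked. The trade-off weight `alpha` and the max-similarity penalty are assumptions, not the chapter's exact ADP optimization:

```python
def diverse_rank(relevance, similarity, alpha=0.5):
    # greedily order items, trading relevance against redundancy
    n = len(relevance)
    chosen, remaining = [], set(range(n))
    while remaining:
        def score(i):
            # penalty: similarity to the most similar already-chosen item
            penalty = max((similarity[i][j] for j in chosen), default=0.0)
            return relevance[i] - alpha * penalty
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen

relevance = [0.9, 0.88, 0.5]
# items 0 and 1 are near-duplicates; item 2 is different content
similarity = [[1.0, 0.95, 0.1],
              [0.95, 1.0, 0.1],
              [0.1, 0.1, 1.0]]
order = diverse_rank(relevance, similarity)
```

    A pure relevance ranking would return the two near-duplicates first; the diversity penalty promotes the dissimilar item ahead of the duplicate.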

  20. Magnetic resonance imaging of the female pelvis: initial experience

    International Nuclear Information System (INIS)

    Hricak, H.; Alpers, C.; Crooks, L.E.; Sheldon, P.E.

    1983-01-01

    The potential of magnetic resonance imaging (MRI) was evaluated in 21 female subjects: seven volunteers, 12 patients scanned for reasons unrelated to the lower genitourinary tract, and two patients referred with gynecologic disease. The uterus was examined at several stages: the premenarcheal uterus (one patient), the uterus of reproductive age (12 patients), the postmenopausal uterus (two patients), and the uterus at 8 weeks of pregnancy (one patient). The myometrium and cyclic endometrium of the reproductive-age uterus are separated by a low-intensity line (probably the stratum basale), which allows recognition of changes in thickness of the cyclic endometrium during the menstrual cycle. The corpus uteri can be distinguished from the cervix by the transitional zone of the isthmus. The anatomic relation of the uterus to bladder and rectum is easily outlined. The vagina can be distinguished from the cervix, and the anatomic display of the closely apposed bladder, vagina, and rectum is clear on axial and coronal images. The ovary is identified; the signal intensity from the ovary depends on the acquisition parameters used. Uterine leiomyoma, endometriosis, and dermoid cyst were depicted, but further experience is needed to ascertain the specificity of the findings

  1. Visual wetness perception based on image color statistics.

    Science.gov (United States)

    Sawayama, Masataka; Adelson, Edward H; Nishida, Shin'ya

    2017-05-01

    Color vision provides humans and animals with the abilities to discriminate colors based on the wavelength composition of light and to determine the location and identity of objects of interest in cluttered scenes (e.g., ripe fruit among foliage). However, we argue that color vision can inform us about much more than color alone. Since a trichromatic image carries more information about the optical properties of a scene than a monochromatic image does, color can help us recognize complex material qualities. Here we show that human vision uses color statistics of an image for the perception of an ecologically important surface condition (i.e., wetness). Psychophysical experiments showed that overall enhancement of chromatic saturation, combined with a luminance tone change that increases the darkness and glossiness of the image, tended to make dry scenes look wetter. Theoretical analysis along with image analysis of real objects indicated that our image transformation, which we call the wetness enhancing transformation, is consistent with actual optical changes produced by surface wetting. Furthermore, we found that the wetness enhancing transformation operator was more effective for the images with many colors (large hue entropy) than for those with few colors (small hue entropy). The hue entropy may be used to separate surface wetness from other surface states having similar optical properties. While surface wetness and surface color might seem to be independent, there are higher order color statistics that can influence wetness judgments, in accord with the ecological statistics. The present findings indicate that the visual system uses color image statistics in an elegant way to help estimate the complex physical status of a scene.
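    A minimal sketch of a wetness-enhancing-style transformation, assuming it can be approximated by a saturation boost plus a darkening tone curve in HLS space; the gain and gamma values are illustrative, not the paper's operator:

```python
import colorsys

def wet_transform(rgb_pixels, sat_gain=1.6, tone_gamma=1.8):
    # boost chromatic saturation and darken the tone curve, the two image
    # changes the paper associates with perceived surface wetness
    out = []
    for r, g, b in rgb_pixels:
        h, l, s = colorsys.rgb_to_hls(r, g, b)
        l = l ** tone_gamma               # darker, glossier tone curve
        s = min(1.0, s * sat_gain)        # overall saturation enhancement
        out.append(colorsys.hls_to_rgb(h, l, s))
    return out

dry = [(0.6, 0.5, 0.3), (0.4, 0.45, 0.3)]   # RGB in [0, 1], "dry" colors
wet = wet_transform(dry)                     # same hues, darker and more saturated
```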

  2. Compressive sensing based ptychography image encryption

    Science.gov (United States)

    Rawat, Nitin

    2015-09-01

    A compressive sensing (CS) based ptychography scheme combined with optical image encryption is proposed. The diffraction pattern recorded through the ptychography technique is further compressed by non-uniform sampling via the CS framework. The system requires much less encrypted data and provides high security. The diffraction pattern as well as the small number of measurements of the encrypted samples serve as a secret key, which makes intruder attacks more difficult. Furthermore, CS shows that a few linearly projected random samples carry adequate information for decryption with a dramatic volume reduction. Experimental results validate the feasibility and effectiveness of our proposed technique compared with existing techniques. Images retrieved without the correct key reveal no information about the original image. In addition, the proposed system remains robust even with partial encryption and under brute-force attacks.

  3. SQL based cardiovascular ultrasound image classification.

    Science.gov (United States)

    Nandagopalan, S; Suryanarayana, Adiga B; Sudarshan, T S B; Chandrashekar, Dhanalakshmi; Manjunath, C N

    2013-01-01

    This paper proposes a novel method to analyze and classify cardiovascular ultrasound echocardiographic images using a Naïve-Bayesian model via OLAP-SQL on a database. Efficient data mining algorithms based on a tightly-coupled model are used to extract features. Three algorithms are proposed for classification, namely the Naïve-Bayesian Classifier for Discrete variables (NBCD) with SQL, NBCD with OLAP-SQL, and the Naïve-Bayesian Classifier for Continuous variables (NBCC) using OLAP-SQL. The proposed model is trained with 207 patient images containing normal and abnormal categories. Of the three proposed algorithms, the highest classification accuracy, 96.59%, was achieved with NBCC, which is better than earlier methods.
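    The NBCC idea, naive Bayes over continuous features with per-class Gaussian likelihoods, can be sketched in plain Python (the paper implements it in OLAP-SQL; the data here is fabricated):

```python
import math

def fit(samples):
    # samples: {label: list of feature vectors}; store per-class priors and
    # per-feature mean/variance (variance floored to avoid division by zero)
    model = {}
    for label, rows in samples.items():
        stats = []
        for col in zip(*rows):
            mu = sum(col) / len(col)
            var = max(1e-6, sum((x - mu) ** 2 for x in col) / len(col))
            stats.append((mu, var))
        model[label] = (len(rows), stats)
    return model

def predict(model, x):
    # pick the class maximizing log prior + sum of Gaussian log-likelihoods
    total = sum(n for n, _ in model.values())
    best, best_lp = None, -math.inf
    for label, (n, stats) in model.items():
        lp = math.log(n / total)
        for xi, (mu, var) in zip(x, stats):
            lp += -0.5 * math.log(2 * math.pi * var) - (xi - mu) ** 2 / (2 * var)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

train = {"normal":   [[60, 1.0], [65, 1.1], [62, 0.9]],
         "abnormal": [[30, 2.0], [28, 2.2], [33, 1.9]]}
model = fit(train)
```

    The "naive" independence assumption is what lets the per-feature statistics be computed by simple aggregation queries, which is why the classifier maps naturally onto SQL/OLAP.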

  4. Experiment on the Classification of Remote Sensing Images Based on an Improved BP Algorithm

    Institute of Scientific and Technical Information of China (English)

    石丽

    2014-01-01

    Classification based on a BP neural network is a new pattern recognition method with promising applications in the field of remote sensing image processing. Building on a description of the standard BP algorithm and its improvement, the Levenberg-Marquardt algorithm, this paper describes the process of remote sensing image classification with a BP neural network and presents a classification algorithm implemented in MATLAB. The experimental results demonstrate that the classification method based on a BP neural network is an effective approach to image classification.

  5. Accelerated Compressed Sensing Based CT Image Reconstruction.

    Science.gov (United States)

    Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R; Paul, Narinder S; Cobbold, Richard S C

    2015-01-01

    In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum-a-posterior approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization.

  7. Graph-cut based discrete-valued image reconstruction.

    Science.gov (United States)

    Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañón, David; Ünlü, M Selim

    2015-05-01

    Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.
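
The surrogate-energy construction in the paper is beyond a short sketch, but the underlying graph-cut machinery can be illustrated on the simplest case it generalizes: exact minimization of a binary 1D denoising energy (absolute-difference data terms plus a Potts smoothness term) via an s-t minimum cut. The Edmonds-Karp max-flow and the toy signal below are illustrative, not the paper's algorithm:

```python
from collections import deque

def max_flow_cut(cap, s, t):
    """Edmonds-Karp max-flow; returns the source side of a minimum s-t cut."""
    res = {u: dict(nbrs) for u, nbrs in cap.items()}     # residual capacities
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:                     # BFS for an augmenting path
            u = q.popleft()
            for v, c in res[u].items():
                if c > 1e-12 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        path, v = [], t                                  # trace path, find bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bott = min(res[u][v] for u, v in path)
        for u, v in path:                                # push flow
            res[u][v] -= bott
            res[v][u] = res[v].get(u, 0.0) + bott
    reach, q = {s}, deque([s])                           # residual-reachable nodes
    while q:
        u = q.popleft()
        for v, c in res[u].items():
            if c > 1e-12 and v not in reach:
                reach.add(v)
                q.append(v)
    return reach

def denoise_binary(y, lam=1.0, beta=2.0):
    """Exactly minimize sum_i lam*|x_i - y_i| + sum_i beta*[x_i != x_{i+1}], x binary."""
    n = len(y)
    cap = {i: {} for i in range(n)}
    cap['s'], cap['t'] = {}, {}
    for i, yi in enumerate(y):
        cap['s'][i] = lam * abs(0 - yi)    # cost paid if x_i = 0 (i ends on sink side)
        cap[i]['t'] = lam * abs(1 - yi)    # cost paid if x_i = 1 (i ends on source side)
    for i in range(n - 1):                 # Potts terms: cut iff neighbors disagree
        cap[i][i + 1] = beta
        cap[i + 1][i] = beta
    src = max_flow_cut(cap, 's', 't')
    return [1 if i in src else 0 for i in range(n)]

noisy = [0, 1, 0, 0, 0, 1, 1, 1, 0, 1]
clean = denoise_binary(noisy)
```

The obstacle the paper addresses is that a linear sensing operator mixes the unknowns inside the data term, so the energy no longer splits into per-node and per-edge capacities as it does here; the surrogate functional restores that structure.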

  8. Research of image retrieval technology based on color feature

    Science.gov (United States)

    Fu, Yanjun; Jiang, Guangyu; Chen, Fengying

    2009-10-01

    so that rotation and translation do not change the feature. The HSV color space is used to represent the color characteristics of images, as it matches the visual characteristics of humans. Taking advantage of human color perception, the color sectors are quantized with unequal intervals to obtain the characteristic vector. Finally, image similarity is matched with the histogram intersection algorithm and the partition-overall histogram. Users can choose a demonstration image to express the visual query, and can also adjust several weight values through relevance feedback to obtain the best search result. An image retrieval system based on these approaches is presented. The experimental results show that image retrieval based on the partition-overall histogram keeps the spatial distribution information while extracting the color feature efficiently, and that it is superior to normal color histograms in precision rate. The query precision rate is more than 95%. In addition, the efficient block representation lowers the complexity of the images to be searched, and thus increases the search efficiency. The image retrieval algorithm based on the partition-overall histogram proposed in the paper is efficient and effective.
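
A minimal sketch of the partition-overall histogram matching described above, assuming images already quantized to a small number of color indices (the unequal HSV quantization step is omitted); the grid size and bin count are illustrative:

```python
import numpy as np

def color_histogram(img, bins=8):
    """Normalized histogram of an image of quantized color indices in [0, bins)."""
    h = np.bincount(img.ravel(), minlength=bins).astype(float)
    return h / h.sum()

def partition_overall_hist(img, bins=8, grid=2):
    """Concatenate the overall histogram with per-block histograms (grid x grid)."""
    feats = [color_histogram(img, bins)]                      # overall histogram
    for rows in np.array_split(img, grid, axis=0):
        for block in np.array_split(rows, grid, axis=1):
            feats.append(color_histogram(block, bins))        # partition histograms
    return np.concatenate(feats)

def hist_intersection(f1, f2):
    """Histogram-intersection similarity: larger means more similar."""
    return float(np.minimum(f1, f2).sum())
```

Because each of the five normalized histograms sums to one, a query image's similarity to itself is exactly 5.0, and the per-block histograms are what preserve the spatial distribution information the abstract refers to.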

  9. Regularized Fractional Power Parameters for Image Denoising Based on Convex Solution of Fractional Heat Equation

    Directory of Open Access Journals (Sweden)

    Hamid A. Jalab

    2014-01-01

    Full Text Available The interest in using fractional mask operators based on fractional calculus operators has grown for image denoising. Denoising is one of the most fundamental image restoration problems in computer vision and image processing. This paper proposes an image denoising algorithm based on a convex solution of the fractional heat equation with regularized fractional power parameters. The performance of the proposed algorithm was evaluated by computing the PSNR on different types of images. Experiments based on visual perception and on peak signal-to-noise ratio values show that the denoising results are competitive with the standard Gaussian filter and Wiener filter.
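
The PSNR figure of merit used above is simple to reproduce; a minimal sketch for 8-bit images:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Higher is better; identical images give infinite PSNR, and typical "good" denoising results for 8-bit images land in the 25-40 dB range.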

  10. The impact of brand experience on attitudes and brand image : A quantitative study

    OpenAIRE

    Isotalo, Anni; Watanen, Samu

    2015-01-01

    Research questions: How to create an engaging brand experience in marketing context? How does an engaging brand experience affect consumer attitudes and brand image? Purpose of the study: The authors propose that the relationship between brand experience and formation of brand loyalty can be mediated by brand affect: positive attitude and brand image. The study discovers the components of an engaging brand experience and indicates their effect on consumer attitudes and brand image. Conclusion...

  11. Beyond where it started: a look at the "Healing Images" experience.

    Science.gov (United States)

    Goodsmith, Lauren

    2007-01-01

    In March 2004, the Baltimore-based nonprofit organization Advocates for Survivors of Torture and Trauma (ASTT) initiated a photography-based therapeutic programme for clients. Developed by a professional photographer/teacher in collaboration with a psychologist, the programme has the goal of enabling clients to engage in creative self-exploration within a supportive, group setting. Since its inception, thirty survivors of conflict-related trauma and torture from five different countries have taken part in the programme, known as "Healing Images", using digital cameras to gather individually-chosen images that are subsequently shared and discussed within the group. These images include depictions of the natural and manmade environments in which clients find themselves; people, places and objects that offer comfort; and self-portraits that reflect the reality of the life of a refugee in the United States. This description of the "Healing Images" programme is based on comments gathered through discussion with participants and through interviews. Additional information was gathered from observation of early workshop sessions, review of numerous client photographs and captions, and pertinent organizational materials. A fundamental benefit of the programme was that it offered a mutually supportive group environment that diminished clients' feelings of psychological and physical isolation. Participants gained deep satisfaction from learning the technical skills related to use of the cameras, from the empowering experience of framing and creating specific images, and from exploring the personal significance of these images. Programme activities sparked a process of self-expression that participants valued on the level of personal discovery and growth. Some clients also welcomed opportunities to share their work publicly, as a means of raising awareness of the experience of survivors.

  12. Eye gazing direction inspection based on image processing technique

    Science.gov (United States)

    Hao, Qun; Song, Yong

    2005-02-01

    According to research results in neural biology, human eyes obtain high resolution only at the center of the field of view. In the research of a Virtual Reality helmet, we designed a system to detect the gazing direction of human eyes in real time and feed it back to the control system to improve the resolution of the graphics at the center of the field of view. Given current display instruments, this method can balance the field of view of the virtual scene against resolution, and greatly improve the immersion of the virtual system. Therefore, detecting the gazing direction of human eyes rapidly and exactly is the basis of realizing the design scheme of this novel VR helmet. In this paper, the conventional method of gazing direction detection based on the Purkinje spot is introduced first. To overcome the disadvantages of the Purkinje-spot method, this paper proposes a method based on image processing to detect and determine the gazing direction. The locations of the pupils and the shapes of the eye sockets change with the gazing direction. By analyzing these changes in the eye images captured by the cameras, the gazing direction can be determined. Experiments have been done to validate the efficiency of this method by analyzing the images. The algorithm detects the gazing direction directly from normal eye images, eliminating the need for special hardware. Experimental results show that the method is easy to implement and has high precision.
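
A minimal sketch of the image-processing idea described above: locate the pupil as the centroid of the darkest pixels in the eye image. The threshold fraction and the synthetic image are illustrative assumptions; a real system would also track eye-socket shape and map pupil position to a gaze angle:

```python
import numpy as np

def pupil_center(gray, dark_fraction=0.01):
    """Estimate the pupil center as the centroid of the darkest pixels."""
    thresh = np.quantile(gray, dark_fraction)    # darkest `dark_fraction` of pixels
    ys, xs = np.nonzero(gray <= thresh)
    return xs.mean(), ys.mean()

# Synthetic "eye" image: bright background with a dark square standing in for the pupil
eye = np.full((50, 50), 200.0)
eye[18:24, 28:34] = 10.0
cx, cy = pupil_center(eye)
```

Tracking `(cx, cy)` across frames gives the change in pupil location from which the gazing direction can be inferred.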

  13. Imaging of the central skull base.

    Science.gov (United States)

    Borges, Alexandra

    2009-11-01

    The central skull base (CSB) constitutes a frontier between the extracranial head and neck and the middle cranial fossa. The anatomy of this region is complex, containing most of the bony foramina and canals of the skull base traversed by several neurovascular structures that can act as routes of spread for pathologic processes. Lesions affecting the CSB can be intrinsic to its bony-cartilaginous components; can arise from above, within the intracranial compartment; or can arise from below, within the extracranial head and neck. Cross-sectional imaging is indispensable in the diagnosis, treatment planning, and follow-up of patients with CSB lesions. This review focuses on a systematic approach to this region based on an anatomic division that takes into account the major tissue constituents of the CSB.

  14. General filtering method for electronic speckle pattern interferometry fringe images with various densities based on variational image decomposition.

    Science.gov (United States)

    Li, Biyuan; Tang, Chen; Gao, Guannan; Chen, Mingming; Tang, Shuwei; Lei, Zhenkun

    2017-06-01

    Filtering off speckle noise from a fringe image is one of the key tasks in electronic speckle pattern interferometry (ESPI). In general, ESPI fringe images can be divided into three categories: low-density fringe images, high-density fringe images, and variable-density fringe images. In this paper, we first present a general filtering method based on variational image decomposition that can filter speckle noise for ESPI fringe images with various densities. In our method, a variable-density ESPI fringe image is decomposed into low-density fringes, high-density fringes, and noise. A low-density fringe image is decomposed into low-density fringes and noise. A high-density fringe image is decomposed into high-density fringes and noise. We give some suitable function spaces to describe low-density fringes, high-density fringes, and noise, respectively. Then we construct several models and numerical algorithms for ESPI fringe images with various densities. And we investigate the performance of these models via our extensive experiments. Finally, we compare our proposed models with the windowed Fourier transform method and coherence enhancing diffusion partial differential equation filter. These two methods may be the most effective filtering methods at present. Furthermore, we use the proposed method to filter a collection of the experimentally obtained ESPI fringe images with poor quality. The experimental results demonstrate the performance of our proposed method.

  15. elastix: a toolbox for intensity-based medical image registration.

    Science.gov (United States)

    Klein, Stefan; Staring, Marius; Murphy, Keelin; Viergever, Max A; Pluim, Josien P W

    2010-01-01

    Medical image registration is an important task in medical image processing. It refers to the process of aligning data sets, possibly from different modalities (e.g., magnetic resonance and computed tomography), different time points (e.g., follow-up scans), and/or different subjects (in case of population studies). A large number of methods for image registration are described in the literature. Unfortunately, there is not one method that works for all applications. We have therefore developed elastix, a publicly available computer program for intensity-based medical image registration. The software consists of a collection of algorithms that are commonly used to solve medical image registration problems. The modular design of elastix allows the user to quickly configure, test, and compare different registration methods for a specific application. The command-line interface enables automated processing of large numbers of data sets, by means of scripting. The usage of elastix for comparing different registration methods is illustrated with three example experiments, in which individual components of the registration method are varied.

  16. OCML-based colour image encryption

    International Nuclear Information System (INIS)

    Rhouma, Rhouma; Meherzi, Soumaya; Belghith, Safya

    2009-01-01

    The chaos-based cryptographic algorithms have suggested some new ways to develop efficient image-encryption schemes. While most of these schemes are based on low-dimensional chaotic maps, it has been proposed recently to use high-dimensional chaos, namely spatiotemporal chaos, which is modelled by one-way coupled-map lattices (OCML). Owing to their hyperchaotic behaviour, such systems are assumed to enhance the cryptosystem security. In this paper, we propose an OCML-based colour image encryption scheme with a stream cipher structure. We use a 192-bit-long external key to generate the initial conditions and the parameters of the OCML. We have made several tests to check the security of the proposed cryptosystem, namely statistical tests including histogram analysis and calculation of the correlation coefficients of adjacent pixels, security tests against differential attack including calculation of the number of pixel change rate (NPCR) and unified average changing intensity (UACI), and entropy calculation. The cryptosystem speed is analyzed and tested as well.
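
The NPCR and UACI differential-attack metrics mentioned above have closed-form definitions and can be sketched directly (for 8-bit cipher images of equal shape):

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR: percentage of differing pixels between two cipher images.
    UACI: mean absolute intensity difference as a percentage of the 8-bit range."""
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    npcr = 100.0 * np.mean(c1 != c2)
    uaci = 100.0 * np.mean(np.abs(c1 - c2) / 255.0)
    return npcr, uaci
```

For a secure cipher, encrypting two plaintexts that differ in a single pixel should yield NPCR close to 100% and UACI close to the ideal value of about 33.46% for uniformly random 8-bit ciphertexts.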

  17. Image-Based Models Using Crowdsourcing Strategy

    Directory of Open Access Journals (Sweden)

    Antonia Spanò

    2016-12-01

    Full Text Available The conservation and valorization of Cultural Heritage require extensive documentation, both in properly historic-artistic terms and regarding the physical characteristics of position, shape, color, and geometry. With the use of digital photogrammetry, which acquires overlapping images for 3D photo modeling, and with the development of dense and accurate 3D point models, it is possible to obtain high-resolution orthoprojections of surfaces. Recent years have seen a growing interest in crowdsourcing in the field of the protection and dissemination of cultural heritage; in parallel, there is an increasing awareness of contributing to the generation of digital models with the immense wealth of images available on the web, which are useful for heritage documentation. In this way, the availability and ease of automation of the SfM (Structure from Motion) algorithm enable the generation of digital models of the built heritage, which can be inserted positively in crowdsourcing processes. In fact, non-expert users can handle the technology in the acquisition process, which today is one of the fundamental points for involving the wider public in cultural heritage protection. To present the image-based models and their derivatives that can be made from this great digital resource, an emblematic case study of little-known or not easily accessible heritage was selected: the Vank Cathedral in Isfahan, Iran. The availability of accurate point clouds and reliable orthophotos is very convenient, since the building of the Safavid epoch (cent. XVII-XVIII) is completely frescoed on its internal surfaces, in which the architecture and especially the architectural decoration reach their peak. The experimental part of the paper also explores some aspects of the usability of the digital output from the image-based modeling methods.
    The availability of orthophotos allows and facilitates the iconographic

  18. Image quality assessment based on multiscale geometric analysis.

    Science.gov (United States)

    Gao, Xinbo; Lu, Wen; Tao, Dacheng; Li, Xuelong

    2009-07-01

    Reduced-reference (RR) image quality assessment (IQA) has been recognized as an effective and efficient way to predict the visual quality of distorted images. The current standard is the wavelet-domain natural image statistics model (WNISM), which applies the Kullback-Leibler divergence between the marginal distributions of wavelet coefficients of the reference and distorted images to measure the image distortion. However, WNISM fails to consider the statistical correlations of wavelet coefficients in different subbands and the visual response characteristics of the mammalian cortical simple cells. In addition, wavelet transforms are optimal greedy approximations to extract singularity structures, so they fail to explicitly extract the image geometric information, e.g., lines and curves. Finally, wavelet coefficients are dense for smooth image edge contours. In this paper, to target the aforementioned problems in IQA, we develop a novel framework for IQA to mimic the human visual system (HVS) by incorporating the merits from multiscale geometric analysis (MGA), the contrast sensitivity function (CSF), and Weber's law of just noticeable difference (JND). In the proposed framework, MGA is utilized to decompose images and then extract features to mimic the multichannel structure of HVS. Additionally, MGA offers a series of transforms including wavelet, curvelet, bandelet, contourlet, wavelet-based contourlet transform (WBCT), and hybrid wavelets and directional filter banks (HWD), and different transforms capture different types of image geometric information. CSF is applied to weight coefficients obtained by MGA to simulate the appearance of images to observers by taking into account many of the nonlinearities inherent in HVS. JND is finally introduced to produce a noticeable variation in sensory experience.
Thorough empirical studies are carried out upon the LIVE database against subjective mean opinion score (MOS) and demonstrate that 1) the proposed framework has
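
The proposed MGA/CSF/JND framework is beyond a short example, but the WNISM baseline it improves on, the KL divergence between subband-coefficient histograms, can be sketched with a hand-rolled one-level Haar transform; the bin count and the Haar choice are illustrative assumptions:

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform (assumes even height and width)."""
    lo = (img[0::2] + img[1::2]) / 2.0          # vertical low-pass
    hi = (img[0::2] - img[1::2]) / 2.0          # vertical high-pass
    ll = (lo[:, 0::2] + lo[:, 1::2]) / 2.0
    lh = (lo[:, 0::2] - lo[:, 1::2]) / 2.0
    hl = (hi[:, 0::2] + hi[:, 1::2]) / 2.0
    hh = (hi[:, 0::2] - hi[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def kld(p, q, eps=1e-10):
    """KL divergence between two (unnormalized) histograms."""
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def wnism_distance(ref, dist, bins=16):
    """Sum of KL divergences between detail-subband histograms."""
    d = 0.0
    for sr, sd in zip(haar2d(ref)[1:], haar2d(dist)[1:]):   # LH, HL, HH subbands
        a, b = min(sr.min(), sd.min()), max(sr.max(), sd.max()) + 1e-9
        hr, _ = np.histogram(sr, bins=bins, range=(a, b))
        hd, _ = np.histogram(sd, bins=bins, range=(a, b))
        d += kld(hr.astype(float), hd.astype(float))
    return d
```

The paper's framework replaces the Haar decomposition with the richer MGA transforms and weights the coefficients by the CSF before comparing distributions.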

  19. Pedestrian detection from thermal images: A sparse representation based approach

    Science.gov (United States)

    Qi, Bin; John, Vijay; Liu, Zheng; Mita, Seiichi

    2016-05-01

    Pedestrian detection, a key technology in computer vision, plays a paramount role in the applications of advanced driver assistance systems (ADASs) and autonomous vehicles. The objective of pedestrian detection is to identify and locate people in a dynamic environment so that accidents can be avoided. With significant variations introduced by illumination, occlusion, articulated pose, and complex background, pedestrian detection is a challenging task for visual perception. Different from visible images, thermal images are captured and presented as intensity maps based on objects' emissivity, and thus have an enhanced spectral range that makes human beings perceptible against the cool background. In this study, a sparse representation based approach is proposed for pedestrian detection from thermal images. We first adopt the histogram of sparse codes to represent image features and then detect pedestrians with the extracted features in an unimodal and a multimodal framework, respectively. In the unimodal framework, two types of dictionaries, i.e. a joint dictionary and an individual dictionary, are built by learning from prepared training samples. In the multimodal framework, a weighted fusion scheme is proposed to further highlight the contributions from features with higher separability. To validate the proposed approach, experiments were conducted to compare with three widely used features: Haar wavelets (HWs), histogram of oriented gradients (HOG), and histogram of phase congruency (HPC), as well as two classification methods, i.e. AdaBoost and support vector machine (SVM). Experimental results on a publicly available data set demonstrate the superiority of the proposed approach.

  20. Coffee Bean Grade Determination Based on Image Parameter

    Directory of Open Access Journals (Sweden)

    F. Ferdiansjah

    2011-12-01

    Full Text Available The quality standard for coffee as an agricultural commodity in Indonesia uses a defect system which is regulated in Standar Nasional Indonesia (SNI) for coffee bean, No: 01-2907-1999. In the defect system standard, coffee beans are classified into six grades, from grade I to grade VI, depending on the number of defects found in the coffee beans. The accuracy of this method heavily depends on the experience and the expertise of the human operators. The objective of the research is to develop a system to determine the coffee bean grade based on SNI No: 01-2907-1999. A visual sensor, a webcam connected to a computer, was used for image acquisition of coffee bean image samples, which were placed under uniform illumination of 414.5±2.9 lux. The computer performs feature extraction from parameters of the coffee bean image samples in terms of texture (energy, entropy, contrast, homogeneity) and color (R mean, G mean, and B mean) and determines the grade of the coffee beans based on the image parameters by implementing a neural network algorithm. The accuracy of system testing for the coffee beans of grade I, II, III, IVA, IVB, V, and VI had values of 100, 80, 60, 40, 100, 40, and 100%, respectively.
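
The texture parameters listed above (energy, entropy, contrast, homogeneity) are classical gray-level co-occurrence matrix (GLCM) features; a minimal sketch for a single horizontal offset, where the level count and offset are illustrative assumptions:

```python
import numpy as np

def glcm_features(img, levels=8):
    """Energy, entropy, contrast, homogeneity from a symmetric, normalized
    co-occurrence matrix of horizontally adjacent gray levels."""
    g = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        g[i, j] += 1.0
        g[j, i] += 1.0          # symmetric counting
    g /= g.sum()
    i_idx, j_idx = np.indices(g.shape)
    energy = float(np.sum(g ** 2))
    entropy = float(-np.sum(g[g > 0] * np.log2(g[g > 0])))
    contrast = float(np.sum(g * (i_idx - j_idx) ** 2))
    homogeneity = float(np.sum(g / (1.0 + np.abs(i_idx - j_idx))))
    return energy, entropy, contrast, homogeneity
```

These four numbers, together with the R/G/B channel means, would form the feature vector fed to the grading network.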

  1. Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.

    Science.gov (United States)

    Liu, Min; Wang, Xueping; Zhang, Hongzhong

    2018-03-01

    In the biomedical field, digital multi-focal images are very important for documentation and communication of specimen data, because the morphological information for a transparent specimen can be captured in the form of a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We present a deep convolutional neural network (CNN) image fusion based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN based image fusion technique is used to combine relevant information of multi-focal images within a given image stack into a single image, which is more informative and complete than any single image in the given stack. Besides, multi-focal images within a stack are fused along 3 orthogonal directions, and multiple features extracted from the fused images along different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors - texture, shape, different instances within the same class, and different classes of objects - we embed the deep CNN based image fusion method within a multilinear framework to propose an image fusion based multilinear classifier. The experimental results on nematode multi-focal image stacks demonstrated that the deep CNN image fusion based multilinear classifier can reach a higher classification rate (95.7%) than the previous multilinear based approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed deep CNN image fusion based multilinear approach shows great potential in building an automated nematode taxonomy system for nematologists. It is effective in classifying multi-focal image stacks. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Transmyocardial laser revascularization - first experiences of imaging in MRT

    International Nuclear Information System (INIS)

    Weber, C.; Maas, R.; Steiner, P.; Beese, M.; Hvalic, M.; Buecheler, E.; Stubbe, M.

    1998-01-01

    Purpose: Imaging of myocardial signal alteration and perfusion differences after transmyocardial laser revascularization (TMLR). Methods and Material: 5 patients suffering from coronary vessel disease underwent MRI (0.5 T) pre- and 4-7 d post-TMLR. T1-weighted spin echo sequences were acquired ECG-triggered, native and after injection of gadolinium. Qualitative analysis was performed on both native and contrast-enhanced images. Myocardial signal alterations and wall changes were evaluated. Qualitative and quantitative analyses of contrast-enhanced images were performed with regard to posttherapeutic perfusion differences. Analysis was based on contrast-to-noise (C/N) data obtained from operator-defined 'regions of interest'. Results: Visualization of laser-induced channels was not possible. Native scans obtained before and after TMLR revealed no significant change with regard to the qualitative analysis. Both qualitative and quantitative analyses demonstrated a posttherapeutic increase of C/N in both the left ventricular myocardium (64.4 pre-TMLR; 89.1 post-TMLR; p=0.06) and the septum in the majority of cases. No significant difference between laser-treated left myocardium and untreated septum was observed (p>0.05). Discussion: Single myocardial laser channels could not be visualized with a 0.5-T MRI. However, visualization of increased myocardial contrast enhancement in laser-treated left ventricular myocardium was evident in the majority of cases on the basis of qualitative and quantitative analyses. Conclusions: The MRI technique used enabled a first, limited depiction of TMLR-induced myocardial changes. The clinical value and impact still have to be defined. (orig.)

  3. Images from the Mind: BCI image reconstruction based on Rapid Serial Visual Presentations of polygon primitives

    Directory of Open Access Journals (Sweden)

    Luís F Seoane

    2015-04-01

    Full Text Available We provide a proof of concept for an EEG-based reconstruction of a visual image which is on a user's mind. Our approach is based on the Rapid Serial Visual Presentation (RSVP) of polygon primitives and Brain-Computer Interface (BCI) technology. In an experimental setup, subjects were presented bursts of polygons: some of them contributed to building a target image (because they matched the shape and/or color of the target) while some of them did not. The presentation of the contributing polygons triggered attention-related EEG patterns. These Event Related Potentials (ERPs) could be determined using BCI classification and could be matched to the stimuli that elicited them. These stimuli (i.e. the ERP-correlated polygons) were accumulated in the display until a satisfactory reconstruction of the target image was reached. As more polygons were accumulated, finer visual details were attained, resulting in more challenging classification tasks. In our experiments, we observe an average classification accuracy of around 75%. An in-depth investigation suggests that many of the misclassifications were not misinterpretations of the BCI concerning the users' intent, but rather caused by ambiguous polygons that could contribute to reconstructing several different images. When we put our BCI-image reconstruction in perspective with other RSVP BCI paradigms, there is large room for improvement both in speed and accuracy. These results invite us to be optimistic. They open a plethora of possibilities to explore non-invasive BCIs for image reconstruction both in healthy and impaired subjects and, accordingly, suggest interesting recreational and clinical applications.

  4. Understanding God images and God concepts : Towards a pastoral hermeneutics of the God attachment experience

    NARCIS (Netherlands)

    Counted, Agina Victor

    2015-01-01

    The author looks at the God image experience as an attachment relationship experience with God. Hence, arguing that the God image experience is borne originally out of a parent–child attachment contagion, in such a way that God is often represented in either secure or insecure attachment patterns.

  5. Multi-Label Classification Based on Low Rank Representation for Image Annotation

    Directory of Open Access Journals (Sweden)

    Qiaoyu Tan

    2017-01-01

    Full Text Available Annotating remote sensing images is a challenging task for its labor-demanding annotation process and requirement of expert knowledge, especially when images can be annotated with multiple semantic concepts (or labels). To automatically annotate these multi-label images, we introduce an approach called Multi-Label Classification based on Low Rank Representation (MLC-LRR). MLC-LRR first utilizes low rank representation in the feature space of images to compute the low rank constrained coefficient matrix, then it adapts the coefficient matrix to define a feature-based graph and to capture the global relationships between images. Next, it utilizes low rank representation in the label space of labeled images to construct a semantic graph. Finally, these two graphs are exploited to train a graph-based multi-label classifier. To validate the performance of MLC-LRR against other related graph-based multi-label methods in annotating images, we conduct experiments on a publicly available multi-label remote sensing image data set (Land Cover). We perform additional experiments on five real-world multi-label image datasets to further investigate the performance of MLC-LRR. The empirical study demonstrates that MLC-LRR achieves better performance in annotating images than the competing methods across various evaluation criteria; it also can effectively exploit the global structure and label correlations of multi-label images.

  6. A Novel Image Tag Completion Method Based on Convolutional Neural Transformation

    KAUST Repository

    Geng, Yanyan; Zhang, Guohui; Li, Weizhi; Gu, Yi; Liang, Ru-Ze; Liang, Gaoyuan; Wang, Jingbin; Wu, Yanbin; Patil, Nitin; Wang, Jing-Yan

    2017-01-01

    In the problems of image retrieval and annotation, complete textual tag lists of images play critical roles. However, in real-world applications, the image tags are usually incomplete, thus it is important to learn the complete tags for images. In this paper, we study the problem of image tag completion and propose a novel method for this problem based on a popular image representation method, the convolutional neural network (CNN). The method estimates the complete tags from the convolutional filtering outputs of images based on a linear predictor. The CNN parameters, the linear predictor, and the complete tags are learned jointly by our method. We build a minimization problem to encourage consistency between the complete tags and the available incomplete tags, reduce the estimation error, and reduce the model complexity. An iterative algorithm is developed to solve the minimization problem. Experiments over benchmark image data sets show its effectiveness.

  7. A Novel Image Tag Completion Method Based on Convolutional Neural Transformation

    KAUST Repository

    Geng, Yanyan

    2017-10-24

In the problems of image retrieval and annotation, complete textual tag lists of images play critical roles. However, in real-world applications, the image tags are usually incomplete, thus it is important to learn the complete tags for images. In this paper, we study the problem of image tag completion and propose a novel method for this problem based on a popular image representation method, the convolutional neural network (CNN). The method estimates the complete tags from the convolutional filtering outputs of images based on a linear predictor. The CNN parameters, linear predictor, and the complete tags are learned jointly by our method. We build a minimization problem to encourage the consistency between the complete tags and the available incomplete tags, reduce the estimation error, and reduce the model complexity. An iterative algorithm is developed to solve the minimization problem. Experiments over benchmark image data sets show its effectiveness.

  8. Pilot study in the treatment of endometrial carcinoma with 3D image-based high-dose-rate brachytherapy using modified Heyman packing: Clinical experience and dose-volume histogram analysis

    International Nuclear Information System (INIS)

    Weitmann, Hajo Dirk; Poetter, Richard; Waldhaeusl, Claudia; Nechvile, Elisabeth; Kirisits, Christian; Knocke, Tomas Hendrik

    2005-01-01

Purpose: The aim of this study was to evaluate dose distribution within the uterus (clinical target volume [CTV]) and tumor (gross tumor volume [GTV]) and the resulting clinical outcome based on systematic three-dimensional treatment planning with dose-volume adaptation. Dose-volume assessment and adaptation in organs at risk and its impact on side effects were investigated in parallel. Methods and Materials: Sixteen patients with either locally confined endometrial carcinoma (n = 15) or adenocarcinoma of uterus and ovaries after bilateral salpingo-oophorectomy (n = 1) were included. Heyman packing was performed with a mean of 11 Norman-Simon applicators (range, 3-18). Three-dimensional treatment planning based on computed tomography (n = 29) or magnetic resonance imaging (n = 18) was done in all patients with contouring of CTV, GTV, and organs at risk. Dose-volume adaptation was achieved by dwell location and time variation (intensity modulation). Twelve patients treated with curative intent received five to seven fractions of high-dose-rate brachytherapy (7 Gy per fraction) corresponding to a total dose of 60 Gy (2 Gy per fraction and α/β of 10 Gy) to the CTV. Four patients had additional external beam radiotherapy (range, 10-40 Gy). One patient had salvage brachytherapy and 3 patients were treated with palliative intent. A dose-volume histogram analysis was performed in all patients. On average, 68% of the CTV and 92% of the GTV were encompassed by the 60 Gy reference volume. Median minimum dose to 90% of CTV and GTV (D90) was 35.3 Gy and 74 Gy, respectively. Results: All patients treated with curative intent had complete remission (12/12). After a median follow-up of 47 months, 5 patients are alive without tumor. Seven patients died without tumor from intercurrent disease after a median of 22 months. The patient with salvage treatment had a second local recurrence after 27 months and died of endometrial carcinoma after 57 months. In patients treated with palliative intent
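The dose conversion quoted in this abstract, 7 Gy fractions expressed as a total dose at 2 Gy per fraction with α/β = 10 Gy, follows the standard linear-quadratic EQD2 formula; a minimal sketch of that arithmetic:

```python
def eqd2(n_fractions, dose_per_fraction, alpha_beta=10.0):
    """Equieffective dose in 2 Gy fractions (linear-quadratic model):
    EQD2 = n * d * (d + alpha/beta) / (2 + alpha/beta)."""
    d = dose_per_fraction
    return n_fractions * d * (d + alpha_beta) / (2.0 + alpha_beta)

# Six fractions of 7 Gy with alpha/beta = 10 Gy:
# EQD2 = 42 * 17 / 12 = 59.5 Gy, i.e. roughly the 60 Gy quoted
```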

  9. A simple method for detecting tumor in T2-weighted MRI brain images. An image-based analysis

    International Nuclear Information System (INIS)

    Lau, Phooi-Yee; Ozawa, Shinji

    2006-01-01

The objective of this paper is to present a decision support system that uses a computer-based procedure to detect tumor blocks or lesions in digitized medical images. The authors developed a simple method with low computational effort to detect tumors in T2-weighted Magnetic Resonance Imaging (MRI) brain images, focusing on the connection between spatial pixel values and tumor properties from four different perspectives: cases having minuscule differences between two images using a fixed block-based method, tumor shape and size using the edge and binary images, tumor properties based on texture values using the spatial pixel intensity distribution controlled by a global discriminate value, and the occurrence of content-specific tumor pixels for threshold images. Measurements were performed on the following medical datasets: different time-interval images, and different brain disease images on single and multiple slice images. Experimental results have revealed that our proposed technique incurred an overall error smaller than those of other proposed methods. In particular, the proposed method reduced both false-alarm and missed-alarm errors, which demonstrates the effectiveness of our proposed technique. In this paper, we also present a prototype system, known as PCB, to evaluate the performance of the proposed methods by actual experiments, comparing detection accuracy and system performance. (author)

  10. GPU-based relative fuzzy connectedness image segmentation

    International Nuclear Information System (INIS)

    Zhuge Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.; Miller, Robert W.

    2013-01-01

Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA’s Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.
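The RFC rule that P-ORFC parallelizes can be sketched sequentially: a path's strength is the weakest affinity along it, a pixel's connectedness is the best such strength over all paths from a seed, and the pixel is assigned to whichever seed wins. The affinity below is a simple intensity-homogeneity term chosen for illustration; actual FC frameworks combine several components:

```python
import heapq

def fuzzy_connectedness(image, seed):
    """Fuzzy connectedness map on a 2D image: kappa(c) is the maximum,
    over all paths from the seed to c, of the minimum link affinity
    along the path (computed with a max-min variant of Dijkstra)."""
    h, w = len(image), len(image[0])
    kappa = [[0.0] * w for _ in range(h)]
    sr, sc = seed
    kappa[sr][sc] = 1.0
    heap = [(-1.0, sr, sc)]                     # max-heap via negation
    while heap:
        neg, r, c = heapq.heappop(heap)
        if -neg < kappa[r][c]:
            continue                            # stale entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                aff = 1.0 - abs(image[r][c] - image[nr][nc]) / 255.0
                strength = min(kappa[r][c], aff)
                if strength > kappa[nr][nc]:
                    kappa[nr][nc] = strength
                    heapq.heappush(heap, (-strength, nr, nc))
    return kappa

def relative_fc_label(image, seed_obj, seed_bg):
    """RFC rule: a pixel belongs to the object iff its connectedness to
    the object seed strictly exceeds that to the background seed."""
    ko = fuzzy_connectedness(image, seed_obj)
    kb = fuzzy_connectedness(image, seed_bg)
    return [[1 if ko[r][c] > kb[r][c] else 0
             for c in range(len(image[0]))] for r in range(len(image))]
```

The GPU version in the paper replaces this priority-queue traversal with massively parallel relaxation sweeps, trading exactness for throughput while provably staying between the RFC and IRFC objects.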

  11. GPU-based relative fuzzy connectedness image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhuge Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.; Miller, Robert W. [Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland 20892 (United States); Department of Mathematics, West Virginia University, Morgantown, West Virginia 26506 (United States) and Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States); Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States); Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland 20892 (United States)

    2013-01-15

Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA's Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.

  12. GPU-based relative fuzzy connectedness image segmentation.

    Science.gov (United States)

    Zhuge, Ying; Ciesielski, Krzysztof C; Udupa, Jayaram K; Miller, Robert W

    2013-01-01

Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA's Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.

  13. GPU-based relative fuzzy connectedness image segmentation

    Science.gov (United States)

    Zhuge, Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.; Miller, Robert W.

    2013-01-01

    Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA’s Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology. PMID:23298094

  14. Color Image Quality Assessment Based on CIEDE2000

    Directory of Open Access Journals (Sweden)

    Yang Yang

    2012-01-01

Full Text Available Combining the color difference formula of CIEDE2000 and the printing industry standard for visual verification, we present an objective color image quality assessment method correlated with subjective vision perception. An objective score conformed to subjective perception (OSCSP) Q was proposed to directly reflect the subjective visual perception. In addition, we present a general method to calibrate correction factors of the color difference formula under real experimental conditions. Our experimental results show that the present DE2000-based metric can be consistent with the human visual system in general application environments.
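For orientation, color-difference formulas measure the distance between two colors in CIELAB space. The full CIEDE2000 formula used in the paper adds lightness, chroma, and hue weighting functions plus a rotation term; the sketch below shows only the much simpler CIE76 baseline that CIEDE2000 refines, not the paper's metric:

```python
import math

def delta_e_cie76(lab1, lab2):
    """Euclidean color difference in CIELAB (the older CIE76 formula).
    CIEDE2000 corrects its known perceptual non-uniformities with
    weighting and rotation terms; CIE76 is shown only as the simplest
    baseline."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# A delta-E around 2.3 is often cited as a just-noticeable difference.
```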

  15. Fuzzy Matching Based on Gray-scale Difference for Quantum Images

    Science.gov (United States)

    Luo, GaoFeng; Zhou, Ri-Gui; Liu, XingAo; Hu, WenWen; Luo, Jia

    2018-05-01

Quantum image processing has recently emerged as an essential problem in practical tasks, e.g. real-time image matching. Previous studies have shown that quantum superposition and entanglement can greatly improve the efficiency of complex image processing. In this paper, a fuzzy quantum image matching scheme based on gray-scale difference is proposed to find the target region in a reference image that is very similar to the template image. Firstly, we employ the novel enhanced quantum representation (NEQR) to store digital images. Then certain quantum operations are used to evaluate the gray-scale difference between two quantum images by thresholding. If all of the obtained gray-scale differences are not greater than the threshold value, it indicates a successful fuzzy matching of quantum images. Theoretical analysis and experiments show that the proposed scheme performs fuzzy matching at a low cost and also enables exponentially significant speedup via quantum parallel computation.
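A classical analogue of the matching criterion makes the scheme concrete: slide the template over the reference image and accept a position only if every pixelwise gray-scale difference stays within the threshold. The quantum scheme evaluates this same test over candidate positions in superposition; the sketch below is the sequential version only:

```python
def fuzzy_match(reference, template, threshold):
    """Return all (row, col) offsets at which every pixelwise
    gray-scale difference between template and reference patch is at
    most `threshold` (the fuzzy-matching criterion of the abstract,
    evaluated classically)."""
    H, W = len(reference), len(reference[0])
    h, w = len(template), len(template[0])
    hits = []
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            if all(abs(reference[r + i][c + j] - template[i][j]) <= threshold
                   for i in range(h) for j in range(w)):
                hits.append((r, c))
    return hits
```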

  16. Remote diagnosis via a telecommunication satellite--ultrasonic tomographic image transmission experiments.

    Science.gov (United States)

    Nakajima, I; Inokuchi, S; Tajima, T; Takahashi, T

    1985-04-01

An experiment to transmit the ultrasonic tomographic images required for remote medical diagnosis and care was conducted using the mobile telecommunication satellite OSCAR-10. The images received showed the intestinal condition of a patient incapable of verbal communication; however, the images had a fairly coarse particle structure. On the basis of these experiments, the transmission of ultrasonic tomographic images was considered extremely effective for remote diagnosis.

  17. VLSI-based video event triggering for image data compression

    Science.gov (United States)

    Williams, Glenn L.

    1994-02-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
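The triggering idea, monitoring a video stream and firing a trigger when an image change occurs, can be sketched in software with simple frame differencing. The thresholds and the change measure below are illustrative only, not the NASA VLSI design:

```python
def video_trigger(frames, pixel_thresh=30, count_thresh=4):
    """Minimal software analogue of the hardware event trigger: flag
    the first frame whose pixelwise change from the previous frame
    exceeds pixel_thresh in at least count_thresh pixels. Returns the
    frame index, or None if no event is seen."""
    for t in range(1, len(frames)):
        changed = sum(
            1
            for row_prev, row_cur in zip(frames[t - 1], frames[t])
            for a, b in zip(row_prev, row_cur)
            if abs(a - b) > pixel_thresh
        )
        if changed >= count_thresh:
            return t
    return None
```

In the described system this decision selects which pre-trigger and post-trigger frames are worth archiving from the high-rate stream.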

  18. An atlas-based multimodal registration method for 2D images with discrepancy structures.

    Science.gov (United States)

    Lv, Wenchao; Chen, Houjin; Peng, Yahui; Li, Yanfeng; Li, Jupeng

    2018-06-04

An atlas-based multimodal registration method for two-dimensional images with discrepancy structures was proposed in this paper. An atlas was utilized to complement the discrepancy structure information in multimodal medical images. The scheme includes three steps: floating image to atlas registration, atlas to reference image registration, and field-based deformation. To evaluate the performance, a frame model, a brain model, and clinical images were employed in registration experiments. We measured the registration performance by the squared sum of intensity differences. Results indicate that this method is robust and performs better than direct registration for multimodal images with discrepancy structures. We conclude that the proposed method is suitable for multimodal images with discrepancy structures. Graphical Abstract: An atlas-based multimodal registration method schematic diagram.
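The squared sum of intensity differences used here as the registration quality measure is straightforward; lower values indicate better alignment:

```python
def ssd(img_a, img_b):
    """Squared sum of intensity differences between two equally sized
    images (the registration quality measure named in the abstract)."""
    return sum((a - b) ** 2
               for row_a, row_b in zip(img_a, img_b)
               for a, b in zip(row_a, row_b))
```

In an evaluation like the one described, this would be computed between the deformed floating image and the reference image before and after registration.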

  19. Prototype study of the Cherenkov imager of the AMS experiment

    International Nuclear Information System (INIS)

    Aguayo, P.; Aguilar-Benitez, M.; Arruda, L.; Barao, F.; Barreira, G.; Barrau, A.; Baret, B.; Belmont, E.; Berdugo, J.; Boudoul, G.; Borges, J.; Buenerd, M.; Casadei, D.; Casaus, J.; Delgado, C.; Diaz, C.; Derome, L.; Eraud, L.; Gallin-Martel, L.; Giovacchini, F.; Goncalves, P.; Lanciotti, E.; Laurenti, G.; Malinine, A.; Mana, C.; Marin, J.; Martinez, G.; Menchaca-Rocha, A.; Palomares, C.; Pereira, R.; Pimenta, M.; Protasov, K.; Sanchez, E.; Seo, E.-S.; Sevilla, I.; Torrento, A.; Vargas-Trevino, M.; Veziant, O.

    2006-01-01

The AMS experiment includes a Cherenkov imager for mass and charge identification of charged cosmic rays. A second generation prototype has been constructed and its performance evaluated both with cosmic ray particles and with beam ions. In-beam tests have been performed using secondary nuclei from the fragmentation of 20 GeV/c per nucleon Pb ions and 158 GeV/c per nucleon In ions from the CERN SPS in 2002 and 2003. Partial results are reported. The performance of the prototype for the velocity and charge measurements has been studied over the range of ion charge Z ≤ 30. A sample of candidate silica aerogel radiators for the flight model of the detector has been tested. The measured velocity resolution of the detector was found to scale with Z⁻¹ as expected, with a value σ(β)/β ∼ 0.7-1×10⁻³ for singly charged particles and an asymptotic limit in Z of 0.4-0.6×10⁻⁴. The measured charge resolution obtained for the n=1.05 aerogel radiator material selected for the flight model of the detector is σ(Z) = 0.18 (statistical) ± 0.015 (systematic), ensuring a good charge separation up to the iron element, for the prototype in the reported experimental conditions

  20. Automated image based prominent nucleoli detection.

    Science.gov (United States)

    Yap, Choon K; Kalaw, Emarene M; Singh, Malay; Chong, Kian T; Giron, Danilo M; Huang, Chao-Hui; Cheng, Li; Law, Yan N; Lee, Hwee Kuan

    2015-01-01

Nucleolar changes in cancer cells are one of the cytologic features important to the tumor pathologist in cancer assessments of tissue biopsies. However, inter-observer variability and the manual approach to this work hamper the accuracy of the assessment by pathologists. In this paper, we propose a computational method for prominent nucleoli pattern detection. Thirty-five hematoxylin and eosin stained images were acquired from prostate cancer, breast cancer, renal clear cell cancer and renal papillary cell cancer tissues. Prostate cancer images were used for the development of a computer-based automated prominent nucleoli pattern detector built on a cascade farm. An ensemble of approximately 1000 cascades was constructed by permuting different combinations of classifiers such as support vector machines, eXclusive component analysis, boosting, and logistic regression. The output of cascades was then combined using the RankBoost algorithm. The output of our prominent nucleoli pattern detector is a ranked set of detected image patches of patterns of prominent nucleoli. The mean number of detected prominent nucleoli patterns in the top 100 ranked detected objects was 58 in the prostate cancer dataset, 68 in the breast cancer dataset, 86 in the renal clear cell cancer dataset, and 76 in the renal papillary cell cancer dataset. The proposed cascade farm performs twice as well as the single cascade proposed in the seminal paper by Viola and Jones. For comparison, a naive algorithm that randomly chooses a pixel as a nucleoli pattern would detect five correct patterns in the first 100 ranked objects. Detection of sparse nucleoli patterns in a large background of highly variable tissue patterns is a difficult challenge our method has overcome. This study developed an accurate prominent nucleoli pattern detector with the potential to be used in clinical settings.

  1. Automated image based prominent nucleoli detection

    Directory of Open Access Journals (Sweden)

    Choon K Yap

    2015-01-01

Full Text Available Introduction: Nucleolar changes in cancer cells are one of the cytologic features important to the tumor pathologist in cancer assessments of tissue biopsies. However, inter-observer variability and the manual approach to this work hamper the accuracy of the assessment by pathologists. In this paper, we propose a computational method for prominent nucleoli pattern detection. Materials and Methods: Thirty-five hematoxylin and eosin stained images were acquired from prostate cancer, breast cancer, renal clear cell cancer and renal papillary cell cancer tissues. Prostate cancer images were used for the development of a computer-based automated prominent nucleoli pattern detector built on a cascade farm. An ensemble of approximately 1000 cascades was constructed by permuting different combinations of classifiers such as support vector machines, eXclusive component analysis, boosting, and logistic regression. The output of cascades was then combined using the RankBoost algorithm. The output of our prominent nucleoli pattern detector is a ranked set of detected image patches of patterns of prominent nucleoli. Results: The mean number of detected prominent nucleoli patterns in the top 100 ranked detected objects was 58 in the prostate cancer dataset, 68 in the breast cancer dataset, 86 in the renal clear cell cancer dataset, and 76 in the renal papillary cell cancer dataset. The proposed cascade farm performs twice as well as the single cascade proposed in the seminal paper by Viola and Jones. For comparison, a naive algorithm that randomly chooses a pixel as a nucleoli pattern would detect five correct patterns in the first 100 ranked objects. Conclusions: Detection of sparse nucleoli patterns in a large background of highly variable tissue patterns is a difficult challenge our method has overcome. This study developed an accurate prominent nucleoli pattern detector with the potential to be used in clinical settings.

  2. Experiences in fragment-based drug discovery.

    Science.gov (United States)

    Murray, Christopher W; Verdonk, Marcel L; Rees, David C

    2012-05-01

    Fragment-based drug discovery (FBDD) has become established in both industry and academia as an alternative approach to high-throughput screening for the generation of chemical leads for drug targets. In FBDD, specialised detection methods are used to identify small chemical compounds (fragments) that bind to the drug target, and structural biology is usually employed to establish their binding mode and to facilitate their optimisation. In this article, we present three recent and successful case histories in FBDD. We then re-examine the key concepts and challenges of FBDD with particular emphasis on recent literature and our own experience from a substantial number of FBDD applications. Our opinion is that careful application of FBDD is living up to its promise of delivering high quality leads with good physical properties and that in future many drug molecules will be derived from fragment-based approaches. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. The Calibration Home Base for Imaging Spectrometers

    Directory of Open Access Journals (Sweden)

    Johannes Felix Simon Brachmann

    2016-08-01

Full Text Available The Calibration Home Base (CHB) is an optical laboratory designed for the calibration of imaging spectrometers for the VNIR/SWIR wavelength range. Radiometric, spectral and geometric calibration as well as the characterization of sensor signal dependency on polarization are realized in a precise and highly automated fashion. This allows a wide range of time-consuming measurements to be carried out in an efficient way. The implementation of ISO 9001 standards in all procedures ensures a traceable quality of results. Spectral measurements in the wavelength range 380-1000 nm are performed to a wavelength uncertainty of ±0.1 nm, while an uncertainty of ±0.2 nm is reached in the wavelength range 1000-2500 nm. Geometric measurements are performed at increments of 1.7 µrad across track and 7.6 µrad along track. Radiometric measurements reach an absolute uncertainty of ±3% (k=1). Sensor artifacts, such as those caused by stray light, will be characterizable and correctable in the near future. For now, the CHB is suitable for the characterization of pushbroom sensors, spectrometers and cameras. However, it is planned to extend the CHB's capabilities in the near future such that snapshot hyperspectral imagers can be characterized as well. The calibration services of the CHB are open to third-party customers from research institutes as well as industry.

  4. Augmented reality based real-time subcutaneous vein imaging system.

    Science.gov (United States)

    Ai, Danni; Yang, Jian; Fan, Jingfan; Zhao, Yitian; Song, Xianzheng; Shen, Jianbing; Shao, Ling; Wang, Yongtian

    2016-07-01

    A novel 3D reconstruction and fast imaging system for subcutaneous veins by augmented reality is presented. The study was performed to reduce the failure rate and time required in intravenous injection by providing augmented vein structures that back-project superimposed veins on the skin surface of the hand. Images of the subcutaneous vein are captured by two industrial cameras with extra reflective near-infrared lights. The veins are then segmented by a multiple-feature clustering method. Vein structures captured by the two cameras are matched and reconstructed based on the epipolar constraint and homographic property. The skin surface is reconstructed by active structured light with spatial encoding values and fusion displayed with the reconstructed vein. The vein and skin surface are both reconstructed in the 3D space. Results show that the structures can be precisely back-projected to the back of the hand for further augmented display and visualization. The overall system performance is evaluated in terms of vein segmentation, accuracy of vein matching, feature points distance error, duration times, accuracy of skin reconstruction, and augmented display. All experiments are validated with sets of real vein data. The imaging and augmented system produces good imaging and augmented reality results with high speed.

  5. Fractal Image Coding Based on a Fitting Surface

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2014-01-01

Full Text Available A no-search fractal image coding method based on a fitting surface is proposed. In our research, an improved gray-level transform with a fitting surface is introduced. One advantage of this method is that the fitting surface is used for both the range and domain blocks, so one set of parameters can be saved. Another advantage is that the fitting surface can approximate the range and domain blocks better than the previous fitting planes; this can result in smaller block matching errors and better decoded image quality. Since the no-search and quadtree techniques are adopted, smaller matching errors also imply fewer block matches, which results in a faster encoding process. Moreover, by combining all the fitting surfaces, a fitting surface image (FSI) is also proposed to speed up the fractal decoding. Experiments show that our proposed method can yield superior performance over the other three methods. Relative to the range-averaged image, the FSI can provide a faster fractal decoding process. Finally, by combining the proposed fractal coding method with JPEG, a hybrid coding method is designed that can provide higher PSNR than JPEG while maintaining the same Bpp.
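The fitting surfaces here generalize the fitting planes of earlier fractal coders. As a sketch of that simpler baseline, the code below fits a first-order surface z = ax + by + c to a pixel block by least squares; the paper's surface is a better, higher-order approximation, so this is illustrative only:

```python
def fit_plane(block):
    """Least-squares fit of z = a*x + b*y + c to a square block of
    pixel intensities. Returns (a, b, c)."""
    n = len(block)
    pts = [(x, y, block[y][x]) for y in range(n) for x in range(n)]
    m = len(pts)
    # Normal equations for the three unknowns, as an augmented matrix
    sxx = sum(x * x for x, y, z in pts)
    sxy = sum(x * y for x, y, z in pts)
    syy = sum(y * y for x, y, z in pts)
    sx = sum(x for x, y, z in pts)
    sy = sum(y for x, y, z in pts)
    sz = sum(z for x, y, z in pts)
    sxz = sum(x * z for x, y, z in pts)
    syz = sum(y * z for x, y, z in pts)
    A = [[sxx, sxy, sx, sxz],
         [sxy, syy, sy, syz],
         [sx, sy, m, sz]]
    # Gauss-Jordan elimination with partial pivoting
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(3):
            if r != i:
                f = A[r][i] / A[i][i]
                A[r] = [v - f * w for v, w in zip(A[r], A[i])]
    return tuple(A[i][3] / A[i][i] for i in range(3))
```

In a fractal coder the fitted surface is subtracted from (or scaled against) both the range and domain blocks before matching, which is why sharing one surface saves a parameter set.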

  6. Strong reflector-based beamforming in ultrasound medical imaging.

    Science.gov (United States)

    Szasz, Teodora; Basarab, Adrian; Kouamé, Denis

    2016-03-01

    This paper investigates the use of sparse priors in creating original two-dimensional beamforming methods for ultrasound imaging. The proposed approaches detect the strong reflectors from the scanned medium based on the well known Bayesian Information Criteria used in statistical modeling. Moreover, they allow a parametric selection of the level of speckle in the final beamformed image. These methods are applied on simulated data and on recorded experimental data. Their performance is evaluated considering the standard image quality metrics: contrast ratio (CR), contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR). A comparison is made with the classical delay-and-sum and minimum variance beamforming methods to confirm the ability of the proposed methods to precisely detect the number and the position of the strong reflectors in a sparse medium and to accurately reduce the speckle and highly enhance the contrast in a non-sparse medium. We confirm that our methods improve the contrast of the final image for both simulated and experimental data. In all experiments, the proposed approaches tend to preserve the speckle, which can be of major interest in clinical examinations, as it can contain useful information. In sparse mediums we achieve a highly improvement in contrast compared with the classical methods. Copyright © 2015 Elsevier B.V. All rights reserved.
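The delay-and-sum baseline that the proposed beamformers are compared against can be sketched as follows. The plane-wave transmit geometry, the 1540 m/s sound speed, and the 50 MHz sampling rate are assumptions made for the sketch, not parameters from the paper:

```python
import math

def delay_and_sum(rf, element_x, point, c=1540.0, fs=50e6):
    """Textbook delay-and-sum beamforming for one image point: from
    each element's RF line, pick the sample at the two-way travel time
    (plane-wave transmit along z plus element-to-point return path) and
    sum. rf[k] is the signal recorded by element k, element_x[k] its
    lateral position in meters, point an (x, z) position in meters."""
    x, z = point
    out = 0.0
    for k, line in enumerate(rf):
        t = (z + math.hypot(x - element_x[k], z)) / c   # transmit + receive
        idx = int(round(t * fs))
        if 0 <= idx < len(line):
            out += line[idx]
    return out
```

Minimum-variance and the sparse-prior methods of the paper replace this uniform sum with data-dependent weights; the delay computation stays the same.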

  7. Incident Light Frequency-Based Image Defogging Algorithm

    Directory of Open Access Journals (Sweden)

    Wenbo Zhang

    2017-01-01

    Full Text Available To solve the color distortion problem produced by the dark channel prior algorithm, an improved method that calculates the transmittance of each channel separately is proposed in this paper. Based on the Beer-Lambert law, the influence of the incident light's frequency on the transmittance is analyzed, and the ratios between the channels' transmittances are derived. Then, to increase efficiency, the input image is resized to a smaller size before acquiring the refined transmittance, which is then resized back to the size of the original image. Finally, all the transmittances are obtained with the help of the proportions between the color channels, and they are used to restore the defogged image. Experiments suggest that the improved algorithm produces a much more natural result image than the original algorithm, which means the problem of high color saturation is eliminated. What is more, the improved algorithm is four to nine times faster than the original algorithm.
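The per-channel idea can be sketched as follows. Under Beer-Lambert, t_c = exp(-β_c·d), so the channels relate by exponents: t_c = t_r^(β_c/β_r). The sketch estimates a reference transmittance from the dark channel and derives the others from assumed exponent ratios; the exponent values, the ω weight, and all names are illustrative, not the paper's derived quantities.

```python
def dark_channel(img, patch=3):
    """Min over RGB and a local window; img is H x W x 3 nested lists in [0, 1]."""
    h, w = len(img), len(img[0])
    r = patch // 2
    dc = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            m = 1.0
            for yy in range(max(0, y - r), min(h, y + r + 1)):
                for xx in range(max(0, x - r), min(w, x + r + 1)):
                    m = min(m, min(img[yy][xx]))
            dc[y][x] = m
    return dc

def transmittances(img, A=1.0, omega=0.95, exps=(1.0, 0.95, 0.9)):
    """Per-pixel transmittance for each channel. Beer-Lambert gives
    t_c = t_r ** (beta_c / beta_r); the exponents here are assumed, not derived."""
    dc = dark_channel(img)
    t_r = [[max(0.1, 1.0 - omega * v / A) for v in row] for row in dc]
    return [[[t ** k for t_ in (t,) for k in exps] for t in row] for row in t_r]
```

Restoration would then apply J_c = (I_c - A) / t_c + A per channel, which is the standard dark-channel recovery step.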

  8. Automatic medical image annotation and keyword-based image retrieval using relevance feedback.

    Science.gov (United States)

    Ko, Byoung Chul; Lee, JiHyeon; Nam, Jae-Yeal

    2012-08-01

    This paper presents a novel multiple-keyword annotation method for medical images, keyword-based medical image retrieval, and a relevance feedback method for enhancing image retrieval performance. For semantic keyword annotation, this study proposes a novel medical image classification method combining local wavelet-based center-symmetric local binary patterns with random forests. For keyword-based image retrieval, our retrieval system uses the confidence score that is assigned to each annotated keyword by combining the probabilities of the random forests with a predefined body relation graph. To overcome the limitation of keyword-based image retrieval, we combine our image retrieval system with a relevance feedback mechanism based on visual features and a pattern classifier. Compared with other annotation and relevance feedback algorithms, the proposed method shows both improved annotation performance and accurate retrieval results.

  9. Mobile object retrieval in server-based image databases

    Science.gov (United States)

    Manger, D.; Pagel, F.; Widak, H.

    2013-05-01

    The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information of the images on site, image retrieval systems are becoming more and more popular for searching for similar objects in one's own image database. As the computational performance and memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database, or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, scalability to large image databases is addressed with the popular bag-of-words model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface that presents the most similar images in the database, highlighting the visual information common with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.

  10. The Sun Radio Imaging Space Experiment (SunRISE) Mission

    Science.gov (United States)

    Kasper, J. C.; Lazio, J.; Alibay, F.; Amiri, N.; Bastian, T.; Cohen, C.; Landi, E.; Hegedus, A. M.; Maksimovic, M.; Manchester, W.; Reinard, A.; Schwadron, N.; Cecconi, B.; Hallinan, G.; Krupar, V.

    2017-12-01

    Radio emission from coronal mass ejections (CMEs) is a direct tracer of particle acceleration in the inner heliosphere and potential magnetic connections from the lower solar corona to the larger heliosphere. Energized electrons excite Langmuir waves, which then convert into intense radio emission at the local plasma frequency, with the most intense acceleration thought to occur within 20 R_S. The radio emission from CMEs is strong enough that only a relatively small number of antennas is required to detect and map it, but many aspects of this particle acceleration and transport remain poorly constrained. Ground-based arrays would be quite capable of tracking the radio emission associated with CMEs, but absorption by the Earth's ionosphere limits the frequency coverage of ground-based arrays (nu > 15 MHz), which in turn limits the range of solar distances over which they can track the radio emission. The SunRISE concept addresses this: a constellation of small spacecraft in a geostationary graveyard orbit designed to localize and track radio emissions in the inner heliosphere. Each spacecraft would carry a receiving system for observations below 25 MHz, and SunRISE would produce the first images of CMEs more than a few solar radii from the Sun. Part of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.

  11. Image based book cover recognition and retrieval

    Science.gov (United States)

    Sukhadan, Kalyani; Vijayarajan, V.; Krishnamoorthi, A.; Bessie Amali, D. Geraldine

    2017-11-01

    In this work we develop a graphical user interface in MATLAB for users to check information related to books in real time. A photo of the book cover is taken through the GUI; the MSER algorithm then automatically detects features in the input image, after which non-text features are filtered out based on morphological differences between text and non-text regions. We implemented a text-character alignment algorithm that improves the accuracy of the original text detection. We also examine the built-in MATLAB OCR algorithm and a commonly used open-source OCR to obtain better detection results; a post-detection algorithm and natural language processing are applied for word correction and false-detection inhibition. Finally, the detection result is linked to the internet to perform online matching. More than 86% accuracy can be obtained by this algorithm.

  12. Performance Analysis of Segmentation of Hyperspectral Images Based on Color Image Segmentation

    Directory of Open Access Journals (Sweden)

    Praveen Agarwal

    2017-06-01

    Full Text Available Image segmentation is a fundamental approach in the field of image processing, and its use depends on the user's application. This paper proposes an original and simple segmentation strategy based on the EM approach that resolves many informatics problems concerning hyperspectral images observed by airborne sensors. In a first step, the input color textured image is simplified into a color image without texture. The final segmentation is then achieved by a spatial color segmentation using a feature vector built from the set of color values around the pixel to be classified, together with some mathematical equations. The spatial constraint takes into account the inherent spatial relationships of any image and its color. This approach provides an effective PSNR for the segmented image. The results show better performance when the segmented images are compared with the Watershed and Region Growing algorithms, and provide effective segmentation for spectral and medical images.

  13. Image-based reflectance conversion of ASTER and IKONOS ...

    African Journals Online (AJOL)

    Spectral signatures derived from different image-based models for ASTER and IKONOS were inspected visually as first departure. This was followed by comparison of the total accuracy and Kappa index computed from supervised classification of images that were derived from different image-based atmospheric correction ...

  14. X-ray crystal imagers for inertial confinement fusion experiments (invited)

    International Nuclear Information System (INIS)

    Aglitskiy, Y.; Lehecka, T.; Obenschain, S.; Pawley, C.; Brown, C.M.; Seely, J.

    1999-01-01

    We report on our continued development of a high-resolution monochromatic x-ray imaging system based on spherically curved crystals. This system can be used extensively in relevant experiments of the inertial confinement fusion (ICF) program. The system is currently used for, but not limited to, diagnostics of targets ablatively accelerated by the Nike KrF laser. A spherically curved quartz crystal (2d=6.68703 Angstrom, R=200mm) has been used to produce monochromatic backlit images with the He-like Si resonance line (1865 eV) as the source of radiation. Another quartz crystal (2d=8.5099 Angstrom, R=200mm) with the H-like Mg resonance line (1473 eV) has been used for backlit imaging with higher contrast. The spatial resolution of the x-ray optical system is 1.7 μm in selected places and 2-3 μm over a larger area. A second crystal with a separate backlighter was added to the imaging system, making it possible to use all four strips of the framing camera. Time-resolved, 20x magnified, backlit monochromatic images of CH planar targets driven by the Nike facility have been obtained with a spatial resolution of 2.5 μm in selected places and 5 μm over the focal spot of the Nike laser. We are exploring the extension of this technique to higher and lower backlighter energies. copyright 1999 American Institute of Physics

  15. Chaos-based image encryption algorithm

    International Nuclear Information System (INIS)

    Guan Zhihong; Huang Fangjun; Guan Wenjie

    2005-01-01

    In this Letter, a new image encryption scheme is presented, in which shuffling the positions and changing the grey values of image pixels are combined to confuse the relationship between the cipher-image and the plain-image. Firstly, the Arnold cat map is used to shuffle the positions of the image pixels in the spatial domain. Then the discrete output signal of Chen's chaotic system is preprocessed to be suitable for grayscale image encryption, and the shuffled image is encrypted by the preprocessed signal pixel by pixel. The experimental results demonstrate that the key space is large enough to resist brute-force attack and that the distribution of grey values of the encrypted image has random-like behavior.
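The two-stage structure (position shuffling, then pixel-value masking) can be sketched as follows. The Arnold cat map is used as described, but a logistic map stands in for Chen's chaotic system to keep the keystream generator short; all parameter values are illustrative.

```python
def arnold(img, n, rounds=1):
    """Shuffle a flat n*n image with the Arnold cat map (x, y) -> (x+y, x+2y) mod n."""
    for _ in range(rounds):
        out = [0] * (n * n)
        for y in range(n):
            for x in range(n):
                nx, ny = (x + y) % n, (x + 2 * y) % n
                out[ny * n + nx] = img[y * n + x]
        img = out
    return img

def arnold_inv(img, n, rounds=1):
    """Inverse map (x, y) -> (2x - y, y - x) mod n."""
    for _ in range(rounds):
        out = [0] * (n * n)
        for y in range(n):
            for x in range(n):
                nx, ny = (2 * x - y) % n, (y - x) % n
                out[ny * n + nx] = img[y * n + x]
        img = out
    return img

def keystream(seed, length):
    """Chaotic byte stream; the logistic map is a stand-in for Chen's system."""
    x, out = seed, []
    for _ in range(length):
        x = 3.99 * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)
    return out

def encrypt(img, n, seed=0.37, rounds=3):
    shuffled = arnold(img, n, rounds)
    return [p ^ k for p, k in zip(shuffled, keystream(seed, n * n))]

def decrypt(cipher, n, seed=0.37, rounds=3):
    ks = keystream(seed, n * n)
    return arnold_inv([p ^ k for p, k in zip(cipher, ks)], n, rounds)
```

Decryption simply reverses the two stages: regenerate the keystream, undo the XOR, then apply the inverse cat map the same number of rounds.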

  16. A fractal-based image encryption system

    KAUST Repository

    Abd-El-Hafiz, S. K.; Radwan, Ahmed Gomaa; Abdel Haleem, Sherif H.; Barakat, Mohamed L.

    2014-01-01

    single-fractal image and statistical analysis is performed. A general encryption system utilising multiple fractal images is, then, introduced to improve the performance and increase the encryption key up to hundreds of bits. This improvement is achieved

  17. Multi region based image retrieval system

    Indian Academy of Sciences (India)

    data mining, information theory, statistics and psychology. ∗ .... ground complication and independent of image size and orientation (Zhang 2007). ..... Figure 2. Significant regions: (a) the input image, (b) the primary significant region, (c) the ...

  18. Machine learning based analysis of cardiovascular images

    NARCIS (Netherlands)

    Wolterink, JM

    2017-01-01

    Cardiovascular diseases (CVDs), including coronary artery disease (CAD) and congenital heart disease (CHD), are the global leading cause of death. Computed tomography (CT) and magnetic resonance imaging (MRI) allow non-invasive imaging of cardiovascular structures. This thesis presents machine learning based methods for the analysis of cardiovascular images.

  19. Tissues segmentation based on multi spectral medical images

    Science.gov (United States)

    Li, Ya; Wang, Ying

    2017-11-01

    For multispectral medical images, each band image contains the most distinct tissue features according to the optical characteristics of different tissues in specific bands. In this paper, tissues were segmented using their spectral information in each band of the multispectral medical images. Four Local Binary Pattern descriptors were constructed to extract blood vessels based on the gray difference between the blood vessels and their neighbors. The segmented tissue in each band image was then merged into a clear image.
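The basic LBP operation these descriptors build on (comparing a pixel with its eight neighbours to form a binary code) can be sketched as follows; the paper's four specialised descriptors are not reproduced here, so this is only the underlying idea.

```python
def lbp(img, y, x):
    """8-neighbour local binary pattern code for interior pixel (y, x).
    Each neighbour >= the centre contributes one bit to an 8-bit code."""
    c = img[y][x]
    nbrs = [img[y - 1][x - 1], img[y - 1][x], img[y - 1][x + 1], img[y][x + 1],
            img[y + 1][x + 1], img[y + 1][x], img[y + 1][x - 1], img[y][x - 1]]
    code = 0
    for i, v in enumerate(nbrs):
        code |= (1 if v >= c else 0) << i
    return code
```

A flat region yields code 255 (all neighbours equal the centre), while a bright isolated pixel yields 0; vessel pixels, which are darker or brighter than their surroundings, produce characteristic codes the paper's descriptors exploit.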

  20. COLOUR IMAGE ENHANCEMENT BASED ON HISTOGRAM EQUALIZATION

    OpenAIRE

    Kanika Kapoor and Shaveta Arora

    2015-01-01

    Histogram equalization is a nonlinear technique for adjusting the contrast of an image using its histogram. It changes the brightness of a gray-scale image, which then differs from the mean brightness of the original image. There are various types of histogram equalization techniques, such as Histogram Equalization, Contrast Limited Adaptive Histogram Equalization, Brightness Preserving Bi-Histogram Equalization, Dualistic Sub-Image Histogram Equalization, and Minimum Mean Brightness Error Bi-Histogram Equalization.
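Plain global histogram equalization, the baseline all the listed variants refine, can be sketched as follows: build the histogram, form its cumulative distribution, and use the scaled CDF as a look-up table.

```python
def equalize(img, levels=256):
    """Global histogram equalization for a flat list of gray values."""
    hist = [0] * levels
    for v in img:
        hist[v] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(img)
    cdf_min = next(c for c in cdf if c > 0)
    # standard mapping: stretch the CDF over the full gray range
    lut = [round((c - cdf_min) / max(1, n - cdf_min) * (levels - 1)) for c in cdf]
    return [lut[v] for v in img]
```

The variants in the abstract differ mainly in how they split the histogram (bi-histogram, sub-images) or clip it (CLAHE) before applying this same CDF mapping, in order to preserve mean brightness or limit noise amplification.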

  1. Location-based Services using Image Search

    DEFF Research Database (Denmark)

    Vertongen, Pieter-Paulus; Hansen, Dan Witzner

    2008-01-01

    Recent developments in image search have made it sufficiently efficient to be used in real-time applications. GPS has become a popular navigation tool. While GPS information provides reasonably good accuracy, it is not always present in all handheld devices nor accurate in all situations. ... Using the image search engine and the database's image location knowledge, the location of the query image is determined and associated data can be presented to the user.

  2. Efficient OCT Image Enhancement Based on Collaborative Shock Filtering.

    Science.gov (United States)

    Liu, Guohua; Wang, Ziyu; Mu, Guoying; Li, Peijin

    2018-01-01

    Efficient enhancement of noisy optical coherence tomography (OCT) images is a key task for interpreting them correctly. In this paper, to better enhance the details and layered structures of a human retina image, we propose a collaborative shock filtering approach for OCT image denoising and enhancement. The noisy OCT image is first denoised by a collaborative filtering method with a new similarity measure, and the denoised image is then sharpened by a shock-type filter for edge and detail enhancement. For dim OCT images, a gamma transformation is first used to bring the images into a proper range of gray levels, improving contrast for the detection of tiny lesions. The proposed method, integrating image smoothing and sharpening simultaneously, obtains better visual results in experiments.
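The gamma transformation step mentioned for dim images is a simple power-law mapping; a minimal sketch (gamma value illustrative) is:

```python
def gamma_transform(img, gamma=0.5, max_val=255):
    """Raise normalized intensities to `gamma`; gamma < 1 brightens dim images."""
    return [round(max_val * (v / max_val) ** gamma) for v in img]
```

With gamma < 1 the mid and low intensities are lifted toward the bright end while 0 and the maximum stay fixed, which is why it is applied before denoising on dim OCT frames.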

  3. Content-based image retrieval applied to bone age assessment

    Science.gov (United States)

    Fischer, Benedikt; Brosig, André; Welter, Petra; Grouls, Christoph; Günther, Rolf W.; Deserno, Thomas M.

    2010-03-01

    Radiological bone age assessment is based on local image regions of interest (ROIs), such as the epiphyses or the area of the carpal bones. These are compared to a standardized reference, and scores determining the skeletal maturity are calculated. For computer-aided diagnosis, automatic ROI extraction and analysis have so far been done mainly by heuristic approaches. Due to high variation in the imaged biological material and differences in age, gender and ethnic origin, automatic analysis is difficult and frequently requires manual interaction. In contrast, epiphyseal regions (eROIs) can be compared to previous cases of known age by content-based image retrieval (CBIR). This requires a sufficient number of cases with reliable positioning of the eROI centers. In this first approach to bone age assessment by CBIR, we conduct leave-one-out experiments on 1,102 left-hand radiographs and 15,428 metacarpal and phalangeal eROIs from the USC hand atlas. The similarity of the eROIs is assessed by cross-correlation of 16x16 scaled eROIs. The effects of the number of eROIs, two age computation methods, and the number of considered CBIR references are analyzed. The best results yield an error rate of 1.16 years and a standard deviation of 0.85 years. As the appearance of the hand naturally varies by up to two years, these results clearly demonstrate the applicability of the CBIR approach to bone age estimation.
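The retrieval step, cross-correlating scaled eROI patches and averaging the ages of the best matches, can be sketched as below. The similarity-weighted averaging is only one plausible reading of the paper's two age computation methods; the `k` cutoff and all names are assumptions.

```python
def ncc(a, b):
    """Normalized cross-correlation between two equal-size patches (flat lists)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def estimate_age(query, references, k=5):
    """Age from the k most similar reference eROIs, similarity-weighted
    (details of the paper's age computation are assumed here)."""
    scored = sorted(((ncc(query, p), age) for p, age in references), reverse=True)
    top = [(s, a) for s, a in scored[:k] if s > 0]
    w = sum(s for s, _ in top)
    return sum(s * a for s, a in top) / w if w else None
```

In the paper each patch would be a 16x16 down-scaled eROI, and the reference set the 15,428 atlas eROIs with known ages.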

  4. Hierarchical clustering of RGB surface water images based on MIA ...

    African Journals Online (AJOL)

    2009-11-25

    Nov 25, 2009 ... similar water-related images within a testing database of 126 RGB images. .... consequently treated by SVD-based PCA and the PCA outputs partitioned into .... green. Other colours, mostly brown and grey, dominate in.

  5. New LSB-based colour image steganography method to enhance ...

    Indian Academy of Sciences (India)

    Mustafa Cem kasapbaşi

    2018-04-27

    Apr 27, 2018 ... evaluate the proposed method, comparative performance tests are carried out against different spatial image ... image steganography applications based on LSB are ..... worst case scenario could occur when having highest.

  6. An improved three-dimensional non-scanning laser imaging system based on digital micromirror device

    Science.gov (United States)

    Xia, Wenze; Han, Shaokun; Lei, Jieyu; Zhai, Yu; Timofeev, Alexander N.

    2018-01-01

    Currently, there are two main methods of realizing three-dimensional non-scanning laser imaging detection: detection based on APDs and detection based on a streak tube. APD-based detection has disadvantages such as a small number of pixels, a large pixel pitch, and complex supporting circuitry, while streak-tube-based detection suffers from large volume, poor reliability, and high cost. To address these problems, this paper proposes an improved three-dimensional non-scanning laser imaging system based on a Digital Micromirror Device. In this imaging system, accurate control of the laser beams and a compact imaging structure are realized with several quarter-wave plates and a polarizing beam splitter. Remapping fiber optics is used to sample the image plane of the receiving optical lens and transform the image into a line light source, which realizes the non-scanning imaging principle. The Digital Micromirror Device converts laser pulses from the temporal domain to the spatial domain, and a highly sensitive CCD detects the final reflected laser pulses. We also present an algorithm to simulate this improved laser imaging system. Finally, a simulated imaging experiment demonstrates that the improved system can realize three-dimensional non-scanning laser imaging detection.

  7. Quantum Image Steganography and Steganalysis Based On LSQu-Blocks Image Information Concealing Algorithm

    Science.gov (United States)

    A. AL-Salhi, Yahya E.; Lu, Songfeng

    2016-08-01

    Quantum steganography can solve some problems that are considered inefficient in image information concealing. Research on quantum image information concealing has been widely pursued in recent years. Quantum image information concealing can be categorized into quantum image digital blocking, quantum image steganography, anonymity, and other branches. Least significant bit (LSB) information concealing plays a vital role in the classical world because many image information concealing algorithms are designed based on it. Firstly, based on the novel enhanced quantum representation (NEQR), an image uniform-blocks clustering algorithm around the concrete least significant Qu-block (LSQB) is presented for quantum image steganography. Secondly, a clustering algorithm is proposed to optimize the concealment of important data. Finally, we use the Con-Steg algorithm to conceal the clustered image blocks. Since information concealing in the Fourier domain of an image can achieve security of the image information, we further discuss the Fourier-domain LSQu-block information concealing algorithm for quantum images based on Quantum Fourier Transforms. In our algorithms, the corresponding unitary transformations are designed to conceal the secret information in the least significant Qu-block representing the color of the quantum cover image. Finally, the procedures for extracting the secret information are illustrated. The quantum image LSQu-block information concealing algorithm can be applied in many fields according to different needs.

  8. Patient positioning method based on binary image correlation between two edge images for proton-beam radiation therapy

    International Nuclear Information System (INIS)

    Sawada, Akira; Yoda, Kiyoshi; Numano, Masumi; Futami, Yasuyuki; Yamashita, Haruo; Murayama, Shigeyuki; Tsugami, Hironobu

    2005-01-01

    A new technique based on normalized binary image correlation between two edge images has been proposed for positioning proton-beam radiotherapy patients. A Canny edge detector was used to extract two edge images from a reference x-ray image and a test x-ray image of a patient before positioning. While translating and rotating the edged test image, the absolute value of the normalized binary image correlation between the two edge images is iteratively maximized. Each time before rotation, dilation is applied to the edged test image to avoid a steep reduction in the image correlation. To evaluate the robustness of the proposed method, a simulation was carried out using 240 simulated edged head front-view images extracted from a reference image by varying the parameters of the Canny algorithm within a given range of rotation angles and translation amounts in the x and y directions. It was shown that the resulting registration errors have an accuracy of one pixel in the x and y directions and zero degrees in rotation, even when the number of edge pixels differs significantly between the edged reference image and the edged simulation image. Subsequently, positioning experiments using several sets of head, lung, and hip data were performed. We observed that the differences in translation and rotation between manual positioning and the proposed method were within one pixel and one degree, respectively. From the results of the validation study, it can be concluded that a significant reduction in workload for physicians and technicians can be achieved with this method.
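The core matching criterion, normalized correlation between two binary edge maps maximized over a transformation search, can be sketched as below. Only the translation search is shown; the paper's rotation loop and pre-rotation dilation are omitted, and the window size is illustrative.

```python
def binary_corr(a, b):
    """Normalized correlation of two same-size binary edge maps (nested 0/1 lists):
    overlap count divided by the geometric mean of the edge-pixel counts."""
    inter = sum(av * bv for ra, rb in zip(a, b) for av, bv in zip(ra, rb))
    na = sum(sum(r) for r in a)
    nb = sum(sum(r) for r in b)
    return inter / (na * nb) ** 0.5 if na and nb else 0.0

def best_shift(ref, test, max_shift=2):
    """Exhaustive translation search maximizing the correlation."""
    h, w = len(test), len(test[0])
    best = (0.0, (0, 0))
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = [[test[y - dy][x - dx] if 0 <= y - dy < h and 0 <= x - dx < w else 0
                        for x in range(w)] for y in range(h)]
            best = max(best, (binary_corr(ref, shifted), (dx, dy)))
    return best
```

When the test edge map is a pure translation of the reference, the search recovers the offset exactly and the correlation reaches 1.0.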

  9. NMR imaging of bladder tumors in males. Preliminary clinical experience

    International Nuclear Information System (INIS)

    Sigal, R.; Rein, A.J.J.T.; Atlan, H.; Lanir, A.; Kedar, S.; Segal, S.

    1985-01-01

    Nuclear magnetic resonance (NMR) imaging of the normal and pathologic bladder was performed in 10 male subjects: 5 normal volunteers, 4 with primary bladder carcinoma, and 1 with a bladder metastasis. All scanning was done using a superconductive magnet operating at 0.5 T, with spin echo as the pulse sequence. The diagnosis was confirmed in all cases by NMR imaging. The ability of the technique to provide images in axial, sagittal and coronal planes allowed a precise assessment of the morphology and size of the tumors. The lack of hazards and the quality of the images may promote NMR imaging to a prominent role in the diagnosis of human bladder cancer.

  10. Image dissimilarity-based quantification of lung disease from CT

    DEFF Research Database (Denmark)

    Sørensen, Lauge; Loog, Marco; Lo, Pechin

    2010-01-01

    In this paper, we propose to classify medical images using dissimilarities computed between collections of regions of interest. The images are mapped into a dissimilarity space using an image dissimilarity measure, and a standard vector space-based classifier is applied in this space.

  11. Image-based corrosion recognition for ship steel structures

    Science.gov (United States)

    Ma, Yucong; Yang, Yang; Yao, Yuan; Li, Shengyuan; Zhao, Xuefeng

    2018-03-01

    Ship structures are inevitably subject to corrosion in service. Existing image-based methods are influenced by noise in images because they recognize corrosion by extracting hand-crafted features. In this paper, a novel image-based corrosion recognition method for ship steel structures is proposed. The method utilizes convolutional neural networks (CNNs) and is not affected by noise in images. A CNN for corrosion recognition was designed by fine-tuning an existing CNN architecture and trained on datasets built from a large number of images. By combining the trained CNN classifier with a sliding-window technique, the corrosion zones in an image can be recognized.
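The sliding-window stage can be sketched as below. A mean-intensity threshold stands in for the trained CNN classifier, so the `classify` callable and all sizes are illustrative stand-ins, not the paper's method.

```python
def sliding_windows(h, w, size, stride):
    """Yield top-left corners of all size x size windows on an h x w grid."""
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield y, x

def detect_corrosion(img, size=4, stride=2, classify=None):
    """Scan the image with a window and keep windows the classifier flags.
    `classify` stands in for the trained CNN (here: mean intensity > 0.5)."""
    if classify is None:
        classify = lambda patch: sum(map(sum, patch)) / (size * size) > 0.5
    hits = []
    for y, x in sliding_windows(len(img), len(img[0]), size, stride):
        patch = [row[x:x + size] for row in img[y:y + size]]
        if classify(patch):
            hits.append((y, x))
    return hits
```

In the real system each patch would be resized to the CNN's input resolution and the network's softmax output thresholded instead of the toy mean test.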

  12. An Image Encryption Method Based on Bit Plane Hiding Technology

    Institute of Scientific and Technical Information of China (English)

    LIU Bin; LI Zhitang; TU Hao

    2006-01-01

    A novel image hiding method based on the correlation analysis of bit planes is described in this paper. Firstly, based on the correlation analysis, different bit planes of a secret image are hidden in different bit planes of several different open images. Then a new hiding image is acquired by a nested "Exclusive-OR" operation on the images obtained from the first step. At last, by employing an image fusion technique, the final hiding result is achieved. The experimental results show that the method proposed in this paper is effective.
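The bit-plane mechanics can be sketched in a simplified single-cover form: extract a plane, mix the secret plane with XOR, and write it back. The paper nests the XOR across several open images; this sketch collapses that to one stage, so it illustrates the operations rather than the full scheme.

```python
def get_plane(img, k):
    """Extract bit plane k (0 = LSB) from a flat list of 8-bit pixels."""
    return [(p >> k) & 1 for p in img]

def set_plane(img, k, plane):
    """Write a binary plane into bit plane k of the image."""
    return [(p & ~(1 << k)) | (b << k) for p, b in zip(img, plane)]

def hide(cover, secret_plane, k):
    """Embed a secret binary plane into plane k after XOR with the cover's LSB
    plane (a one-stage simplification of the paper's nested XOR)."""
    mixed = [b ^ c for b, c in zip(secret_plane, get_plane(cover, 0))]
    return set_plane(cover, k, mixed)

def reveal(stego, k):
    """Undo the XOR to recover the secret plane (valid for k != 0)."""
    return [b ^ c for b, c in zip(get_plane(stego, k), get_plane(stego, 0))]
```

Because XOR is its own inverse, applying the same mixing again recovers the secret plane exactly as long as the LSB plane used for mixing is left untouched.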

  13. Design of a space-based infrared imaging interferometer

    Science.gov (United States)

    Hart, Michael; Hope, Douglas; Romeo, Robert

    2017-07-01

    Present space-based optical imaging sensors are expensive. Launch costs are dictated by weight and size, and system design must take into account the low fault tolerance of a system that cannot be readily accessed once deployed. We describe the design and first prototype of the space-based infrared imaging interferometer (SIRII) that aims to mitigate several aspects of the cost challenge. SIRII is a six-element Fizeau interferometer intended to operate in the short-wave and midwave IR spectral regions over a 6×6 mrad field of view. The volume is smaller by a factor of three than a filled-aperture telescope with equivalent resolving power. The structure and primary optics are fabricated from light-weight space-qualified carbon fiber reinforced polymer; they are easy to replicate and inexpensive. The design is intended to permit one-time alignment during assembly, with no need for further adjustment once on orbit. A three-element prototype of the SIRII imager has been constructed with a unit telescope primary mirror diameter of 165 mm and edge-to-edge baseline of 540 mm. The optics, structure, and interferometric signal processing principles draw on experience developed in ground-based astronomical applications designed to yield the highest sensitivity and resolution with cost-effective optical solutions. The initial motivation for the development of SIRII was the long-term collection of technical intelligence from geosynchronous orbit, but the scalable nature of the design will likely make it suitable for a range of IR imaging scenarios.

  14. Parallel image encryption algorithm based on discretized chaotic map

    International Nuclear Information System (INIS)

    Zhou Qing; Wong Kwokwo; Liao Xiaofeng; Xiang Tao; Hu Yue

    2008-01-01

    Recently, a variety of chaos-based algorithms were proposed for image encryption. Nevertheless, none of them works efficiently in parallel computing environment. In this paper, we propose a framework for parallel image encryption. Based on this framework, a new algorithm is designed using the discretized Kolmogorov flow map. It fulfills all the requirements for a parallel image encryption algorithm. Moreover, it is secure and fast. These properties make it a good choice for image encryption on parallel computing platforms

  15. An Image Matching Method Based on Fourier and LOG-Polar Transform

    Directory of Open Access Journals (Sweden)

    Zhijia Zhang

    2014-04-01

    Full Text Available Traditional template matching methods are not appropriate when there is a large rotation angle between two images in online detection for industrial production. Aiming at this problem, a Fourier transform algorithm is introduced to correct the image rotation angle based on its rotational invariance in the time-frequency domain, orienting the image under test in the same direction as the reference image; the images are then matched using an algorithm based on the log-polar transform. Compared with current matching algorithms, experimental results show that the proposed algorithm can not only match two images rotated by an arbitrary angle, but also possesses high matching accuracy and applicability. In addition, the validity and reliability of the algorithm were verified by a simulated matching experiment targeting circular images.
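The log-polar idea, rotation becomes a shift along the angular coordinate, can be isolated in a small sketch. To avoid interpolation details it samples continuous image functions f(x, y) rather than pixel grids, and it reduces the log-polar map to a 1-D angular signature (summing over a few log-spaced radii); radii and resolution are illustrative.

```python
import math

def angular_signature(f, n_theta=120, radii=(3, 5, 8, 12, 18)):
    """Sum image samples over log-spaced radii for each angle; rotating the
    image circularly shifts this signature."""
    sig = []
    for t in range(n_theta):
        th = 2 * math.pi * t / n_theta
        sig.append(sum(f(r * math.cos(th), r * math.sin(th)) for r in radii))
    return sig

def rotation_between(f, g, n_theta=120):
    """Estimate the rotation of g relative to f (degrees) by maximizing the
    circular correlation of the two angular signatures."""
    a, b = angular_signature(f, n_theta), angular_signature(g, n_theta)
    def corr(shift):
        return sum(a[i] * b[(i + shift) % n_theta] for i in range(n_theta))
    best = max(range(n_theta), key=corr)
    return 360.0 * best / n_theta
```

In the full method the same shift-detection is applied to Fourier magnitude spectra, which removes translation dependence before the log-polar step.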

  16. SU-E-J-181: Magnetic Resonance Image-Guided Radiation Therapy Workflow: Initial Clinical Experience

    International Nuclear Information System (INIS)

    Green, O; Kashani, R; Santanam, L; Wooten, H; Li, H; Rodriguez, V; Hu, Y; Mutic, S; Hand, T; Victoria, J; Steele, C

    2014-01-01

    Purpose: The aims of this work are to describe the workflow and initial clinical experience of treating patients with an MRI-guided radiotherapy (MR-IGRT) system. Methods: Patient treatments with a novel MR-IGRT system started at our institution in mid-January. The system consists of an on-board 0.35-T MRI, with IMRT-capable delivery via doubly-focused MLCs on three Co-60 heads. In addition to volumetric MR imaging, real-time planar imaging is performed during treatment. So far, eleven patients have started treatment (six have finished), with sites ranging from bladder to lung SBRT. While the system is capable of online adaptive radiotherapy and gating, a conventional workflow was used at the start, consisting of volumetric imaging for patient setup using visible tumor, evaluation of tumor motion outside the PTV on cine images, and real-time imaging. Workflow times were collected and evaluated to increase efficiency and to evaluate the feasibility of adding the adaptive and gating features while maintaining a reasonable patient throughput. Results: For the first month, physicians attended every fraction to provide guidance on identifying the tumor and on acceptable levels of positioning and anatomical deviation. Average total treatment times (including setup) were reduced from 55 to 45 min after physician presence was no longer required and the therapists had learned to align patients based on soft-tissue imaging. Presently, the source strengths are at half maximum (7.7 kCi each), so beam-on times will be reduced after source replacement. The current patient load is 10 per day, with an increase to 25 anticipated in the near future. Conclusion: On-board, real-time MRI-guided RT has been incorporated into clinical use. Treatment times were kept to reasonable lengths while including volumetric imaging, previews of tumor movement, and physician evaluation. Workflow and timing are being continuously evaluated to increase efficiency. In the near future, the adaptive and gating capabilities of the system will be brought into clinical use.

  17. Adaptive polarization image fusion based on regional energy dynamic weighted average

    Institute of Scientific and Technical Information of China (English)

    ZHAO Yong-qiang; PAN Quan; ZHANG Hong-cai

    2005-01-01

    According to the principle of polarization imaging and the relation between the Stokes parameters and the degree of linear polarization, polarized images contain much redundant and complementary information. Since man-made and natural objects are easily distinguished in images of the degree of linear polarization, and images of the Stokes parameters contain rich detail of the scene, clutter can be removed efficiently while detail is preserved by combining these images. An adaptive polarization image fusion algorithm based on regional energy dynamic weighted averaging is proposed in this paper to combine these images. In an experiment and in simulations, most clutter was removed by this algorithm. The fusion method was applied under different lighting conditions in simulation, and the influence of lighting conditions on the fusion results is analyzed.
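The regional-energy weighted average at the heart of such an algorithm can be sketched as follows. This is an illustrative NumPy reconstruction, not the authors' code; the window size and the sum-of-squares energy definition are assumptions:

```python
import numpy as np

def regional_energy(img, win=3):
    # Sum of squared intensities over a sliding win x win window (edge-padded).
    pad = win // 2
    p = np.pad(img.astype(float) ** 2, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(win):
        for dx in range(win):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def fuse_energy_weighted(a, b, win=3):
    # Dynamic weights proportional to each source image's regional energy.
    ea, eb = regional_energy(a, win), regional_energy(b, win)
    w = ea / np.maximum(ea + eb, 1e-12)
    return w * a + (1.0 - w) * b
```

Regions where one source carries all the local energy are taken entirely from that source; elsewhere the two are blended in proportion to their energies.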

  18. Block-Based Compressed Sensing for Neutron Radiation Image Using WDFB

    Directory of Open Access Journals (Sweden)

    Wei Jin

    2015-01-01

    Full Text Available An ideal compression method for neutron radiation images should have a high compression ratio while keeping most of the detail of the original image. Compressed sensing (CS), which can break through the restrictions of the sampling theorem, is likely to offer an efficient compression scheme for neutron radiation images. Combining the wavelet transform with directional filter banks, a novel nonredundant multiscale geometric analysis transform named Wavelet Directional Filter Banks (WDFB) is constructed and applied to represent the neutron radiation image sparsely. Then, the block-based CS technique is introduced and a high-performance CS scheme for neutron radiation images is proposed. By performing a two-step iterative shrinkage algorithm, the problem of L1-norm minimization is solved to reconstruct the neutron radiation image from random measurements. The experimental results demonstrate that the scheme not only improves the quality of the reconstructed image markedly but also retains more details of the original image.
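The reconstruction step can be illustrated with a minimal iterative shrinkage-thresholding (ISTA) solver for the L1-regularized least-squares problem. The paper uses a two-step iterative shrinkage variant together with the WDFB sparsifying transform; this sketch works directly on a sparse signal under random Gaussian measurements for simplicity:

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    # Iterative shrinkage-thresholding for
    #   min_x 0.5 * ||A x - y||^2 + lam * ||x||_1
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)                # gradient of the data-fit term
        z = x - g / L                        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

With a 3-sparse signal and 30 random measurements of a length-60 vector, the solver recovers the support of the signal from the underdetermined system.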

  19. The fast iris image clarity evaluation based on Tenengrad and ROI selection

    Science.gov (United States)

    Gao, Shuqin; Han, Min; Cheng, Xu

    2018-04-01

    In an iris recognition system, the clarity of the iris image is an important factor that influences recognition performance. During recognition, a blurred image may be rejected by the automatic iris recognition system, leading to failed identification. It is therefore necessary to evaluate iris image definition before recognition. Considering the existing evaluation methods for iris image definition, we propose a fast algorithm to evaluate the definition of an iris image. In our algorithm, a region of interest (ROI) is first extracted based on a reference point determined using the light spots within the pupil; the Tenengrad operator is then used to evaluate the iris image's definition. Experimental results show that the proposed algorithm accurately distinguishes iris images of different clarity, with low computational complexity and good effectiveness.
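The Tenengrad measure itself is simple to sketch: the mean squared Sobel gradient magnitude over the evaluated region. A minimal NumPy version, with the ROI-selection step omitted (the whole image is scored); a threshold on the gradient magnitude is often applied in practice but is left out here:

```python
import numpy as np

def tenengrad(img):
    # Tenengrad focus measure: mean squared Sobel gradient magnitude.
    img = np.asarray(img, dtype=float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T

    def conv2(im, k):
        # Direct 3x3 correlation with edge padding.
        p = np.pad(im, 1, mode="edge")
        out = np.zeros_like(im)
        for dy in range(3):
            for dx in range(3):
                out += k[dy, dx] * p[dy:dy + im.shape[0], dx:dx + im.shape[1]]
        return out

    g2 = conv2(img, kx) ** 2 + conv2(img, ky) ** 2
    return float(np.mean(g2))
```

A sharp step edge scores higher than a gentle ramp of the same overall contrast, which is exactly the property used to reject blurred captures.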

  20. Gabor filter based fingerprint image enhancement

    Science.gov (United States)

    Wang, Jin-Xiang

    2013-03-01

    Fingerprint recognition has become the most reliable biometric technology due to its uniqueness and invariance, making it one of the most convenient and reliable techniques for personal authentication. The development of Automated Fingerprint Identification Systems is an urgent need for modern information security, and fingerprint preprocessing plays an important part in such systems. This article introduces the general steps in fingerprint recognition, namely image input, preprocessing, feature recognition, and fingerprint image enhancement. As the key to fingerprint identification technology, fingerprint image enhancement affects the accuracy of the system. The article focuses on the characteristics of the fingerprint image, the Gabor filter algorithm for fingerprint image enhancement, the theoretical basis of Gabor filters, and a demonstration of the filter. The enhancement algorithm is demonstrated on the Windows XP platform with MATLAB 6.5 as the development tool. The results show that the Gabor filter is effective for fingerprint image enhancement.
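An even-symmetric Gabor kernel of the kind used for ridge enhancement can be sketched as follows; the parameter values are illustrative, not those of the paper. The kernel responds strongly to ridge patterns aligned with its orientation and weakly to orthogonal ones:

```python
import numpy as np

def gabor_kernel(ksize=11, sigma=3.0, theta=0.0, freq=0.15):
    # Even-symmetric Gabor kernel: Gaussian envelope times a cosine carrier
    # oriented along `theta` with spatial frequency `freq` (cycles per pixel).
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * freq * xr)

def filter_response(patch, kernel):
    # Correlation of a kernel-sized patch with the kernel (one output sample).
    return float(np.sum(patch * kernel))
```

In a full enhancement pipeline, a bank of such kernels at several orientations is applied and, per pixel, the response matching the local ridge orientation is kept.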

  1. Designing solar thermal experiments based on simulation

    International Nuclear Information System (INIS)

    Huleihil, Mahmoud; Mazor, Gedalya

    2013-01-01

    In this study, three different models describing the temperature distribution inside a cylindrical solid body subjected to high solar irradiation were examined: beginning with the simplest approach, the one-dimensional lumped system (time only); progressing to the two-dimensional distributed system (time and vertical direction); and ending with the three-dimensional distributed system with azimuthal symmetry (time, vertical direction, and radial direction). The three models were introduced and solved analytically and numerically, and the importance of the models and their solutions was addressed. Simulations based on them can be considered a powerful tool in designing experiments, as they make it possible to estimate the effects of the different parameters involved in these models
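The lumped (single-dimension-in-time) model can be illustrated by forward-Euler integration of an energy balance with absorbed solar power, convective loss, and radiative loss. All coefficient values below are illustrative placeholders, not values from the study:

```python
import numpy as np

def lumped_temperature(t_end, dt=0.1, T0=300.0, Tinf=300.0,
                       q_abs=5e4, hA=2.0, eps_sigma_A=2.0 * 5.67e-8,
                       rho_V_c=5e3):
    # Forward-Euler integration of the lumped energy balance
    #   rho*V*c * dT/dt = q_abs - h*A*(T - Tinf) - eps*sigma*A*(T^4 - Tinf^4)
    # All parameters (absorbed power, loss coefficients, heat capacity)
    # are illustrative placeholders.
    T = T0
    for _ in range(int(t_end / dt)):
        dT = (q_abs - hA * (T - Tinf)
              - eps_sigma_A * (T ** 4 - Tinf ** 4)) / rho_V_c
        T += dt * dT
    return T
```

The temperature rises monotonically from the ambient value and settles at the steady state where the absorbed power balances the convective and radiative losses, which is the kind of estimate useful when sizing an experiment.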

  2. Computer Based Road Accident Reconstruction Experiences

    Directory of Open Access Journals (Sweden)

    Milan Batista

    2005-03-01

    Full Text Available Since road accident analyses and reconstructions are increasingly based on specific computer software for simulation of vehicle driving dynamics and collision dynamics, and for simulation of a set of trial runs from which the model that best describes a real event can be selected, the paper presents an overview of some computer software and methods available to accident reconstruction experts. Besides being time-saving, when properly used such computer software can provide a more authentic and more trustworthy accident reconstruction. Therefore, practical experiences obtained while using computer software tools for road accident reconstruction in the Transport Safety Laboratory at the Faculty of Maritime Studies and Transport of the University of Ljubljana are presented and discussed. This paper also addresses software technology for extracting maximum information from the accident photo-documentation to support accident reconstruction based on the simulation software, as well as the field work of reconstruction experts or police at the road accident scene as defined by this technology.

  3. Image based radiotherapy, where we stand?

    International Nuclear Information System (INIS)

    Rangacharyulu, Chary

    2016-01-01

    Since the invention of the X-ray tube, image-based therapy has evolved in many ways. The latest tool is the MR-Linac, in which MRI-guided linac bremsstrahlung radiation therapy is being promoted to cure cancers. Studies are underway to combine proton radiation therapy with positron emission tomography, and there are also ideas for bremsstrahlung beam therapy using few-MeV photons combined with real-time positron emission tomography. While these technologies promise to revolutionize radiation oncology, one should be concerned about potentially excessive doses and their consequences for the patient. One should also be wary of instantaneous real-time responses from the oncologist or, a bit scarier, automated decisions based on algorithm-dictated protocols, which may result in life-or-death outcomes or, even worse, may adversely affect the well-being of a patient. In essence, treatment protocols that incorporate thorough, careful assessment are warranted. A further concern is the economics of these developments weighed against the quality of life of the patients and their loved ones. This talk will present the current status and speculate on possible developments, with a few cautionary remarks. (author)

  4. An Integrated Dictionary-Learning Entropy-Based Medical Image Fusion Framework

    Directory of Open Access Journals (Sweden)

    Guanqiu Qi

    2017-10-01

    Full Text Available Image fusion is widely used in different areas and can integrate complementary and relevant information from source images captured by multiple sensors into a unitary synthetic image. Medical image fusion, an important image fusion application, can extract the details of multiple images from different imaging modalities and combine them into an image that contains complete and non-redundant information, increasing the accuracy of medical diagnosis and assessment. The quality of the fused image directly affects medical diagnosis and assessment. However, existing solutions have drawbacks in contrast, sharpness, brightness, blur, and detail. This paper proposes an integrated dictionary-learning and entropy-based medical image-fusion framework that consists of three steps. First, the input image information is decomposed into low-frequency and high-frequency components using a Gaussian filter. Second, low-frequency components are fused by a weighted-average algorithm and high-frequency components by a dictionary-learning-based algorithm; in the dictionary-learning process for the high-frequency components, an entropy-based algorithm is used to select informative blocks. Third, the fused low-frequency and high-frequency components are combined to obtain the final fusion result. The results and analyses of comparative experiments demonstrate that the proposed medical image fusion framework performs better than existing solutions.
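The first and third steps of such a framework can be sketched with a Gaussian low/high-frequency split. In this illustration the dictionary-learning fusion of the high-frequency components is replaced by simple max-absolute selection, and the low-frequency weights are equal; both are simplifications, not the paper's method:

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    # Separable Gaussian low-pass filter via two 1-D convolutions.
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    p = np.pad(np.asarray(img, float), ((r, r), (0, 0)), mode="edge")
    v = sum(k[i] * p[i:i + img.shape[0], :] for i in range(2 * r + 1))
    p = np.pad(v, ((0, 0), (r, r)), mode="edge")
    return sum(k[i] * p[:, i:i + img.shape[1]] for i in range(2 * r + 1))

def fuse(a, b, sigma=1.0):
    # Decompose, fuse each band, recombine.
    la, lb = gaussian_blur(a, sigma), gaussian_blur(b, sigma)
    ha, hb = a - la, b - lb                              # high-frequency parts
    low = 0.5 * (la + lb)                                # equal-weight average
    high = np.where(np.abs(ha) >= np.abs(hb), ha, hb)    # max-abs selection
    return low + high
```

Fusing a sharp-edged source with a flat one keeps the edge detail while averaging the smooth background, which is the behavior the band-split is designed to produce.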

  5. A kind of color image segmentation algorithm based on super-pixel and PCNN

    Science.gov (United States)

    Xu, GuangZhu; Wang, YaWen; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun

    2018-04-01

    Image segmentation is a very important step in low-level visual computing. Although image segmentation has been studied for many years, many problems remain. The Pulse Coupled Neural Network (PCNN) has a biological background; applied to image segmentation it can be viewed as a region-based method, but due to the dynamic properties of the PCNN many unconnected neurons pulse at the same time, so it is necessary to identify different regions for further processing. The existing PCNN segmentation algorithm based on region growing was designed for grayscale images and cannot be used directly for color images. In addition, super-pixels better preserve image edges and reduce the influence of individual pixel differences on segmentation. Therefore, this paper improves the original region-growing PCNN algorithm on the basis of super-pixels. First, the color super-pixel image is transformed into a grayscale super-pixel image, which is used to seek seeds among the neurons that have not yet fired. Growth is then continued or stopped by comparing the averages of each color channel over all pixels in the corresponding regions of the color super-pixel image. Experimental results show that the proposed color image segmentation algorithm is fast, effective, and reasonably accurate.

  6. Content-based image retrieval using a signature graph and a self-organizing map

    Directory of Open Access Journals (Sweden)

    Van Thanh The

    2016-06-01

    Full Text Available In order to effectively retrieve images from a large database, a content-based image retrieval (CBIR) system is built on a binary index that describes the features of an image object of interest. This index, called the binary signature, forms the input data for the problem of matching similar images. To extract the object of interest, we propose an image segmentation method based on low-level visual features, including the color and texture of the image. These features are extracted at each block of the image by the discrete wavelet frame transform in an appropriate color space. On the basis of the segmented image, we create a binary signature describing the location, color, and shape of the objects of interest. To match similar images, we provide a similarity measure between images based on their binary signatures. We then present a CBIR model that combines a signature graph and a self-organizing map to cluster and store similar images. To illustrate the proposed method, experiments on image databases are reported, including COREL, Wang, and MSRDI.
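Matching binary signatures of equal length reduces to a normalized Hamming-style similarity. A minimal sketch; the actual bit layout of the signatures (location, color, shape fields) is not specified here, so the vectors are treated as opaque bit strings:

```python
import numpy as np

def similarity(sig_a, sig_b):
    # 1 minus the normalized Hamming distance between two equal-length
    # bit vectors: 1.0 for identical signatures, 0.0 for complementary ones.
    sig_a = np.asarray(sig_a, dtype=bool)
    sig_b = np.asarray(sig_b, dtype=bool)
    return 1.0 - np.count_nonzero(sig_a ^ sig_b) / sig_a.size
```

Candidate images can then be ranked by this score against the query signature.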

  7. Understanding God images and God concepts: Towards a pastoral hermeneutics of the God attachment experience

    Directory of Open Access Journals (Sweden)

    Victor Counted

    2015-03-01

    Full Text Available The author looks at the God image experience as an attachment relationship experience with God, arguing that the God image experience is borne originally out of a parent-child attachment contagion, such that God is often represented in either secure or insecure attachment patterns. The article points out that insecure God images often develop head-to-head with God concepts in a believer's emotional experience of God. On the other hand, the author describes God concepts as indicators of a religious faith and metaphorical standards for regulating insecure attachment patterns. The goal of this article, however, is to highlight the relationship between God images and God concepts, and to provide a hermeneutical process for interpreting and surviving the God image experience. Intradisciplinary and/or interdisciplinary implications: Given that most scholars within the discipline of Practical Theology discuss the subject of God images from cultural and theological perspectives, this article has discussed God images from an attachment perspective, a popular framework in the psychology of religion; this is rare, and the study is therefore interdisciplinary in this regard. The article further helps the reader to understand the intrapsychic process of the God image experience, and thus provides us with hermeneutical answers for dealing with the God image experience through methodologies grounded in Practical Theology and pastoral care.

  8. A Novel Quantum Image Steganography Scheme Based on LSB

    Science.gov (United States)

    Zhou, Ri-Gui; Luo, Jia; Liu, XingAo; Zhu, Changming; Wei, Lai; Zhang, Xiafen

    2018-06-01

    Based on the NEQR representation of quantum images and the least significant bit (LSB) scheme, a novel quantum image steganography scheme is proposed. The sizes of the cover image and the original information image are assumed to be 4n × 4n and n × n, respectively. Firstly, the bit-plane scrambling method is used to scramble the original information image. Then the scrambled information image is expanded to the same size as the cover image by using a key known only to the operator. The expanded image is scrambled into a meaningless image with Arnold scrambling. The embedding and extracting procedures are carried out with keys K1 and K2, which are under the control of the operator. To validate the presented scheme, the peak signal-to-noise ratio (PSNR), the capacity, the security of the images, and the circuit complexity are analyzed.
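The LSB substitution idea underlying the scheme can be illustrated classically. The sketch below operates on ordinary uint8 pixels rather than the NEQR quantum representation, and omits the scrambling, expansion, and key steps; it shows only why LSB embedding changes each pixel by at most one gray level (hence a high PSNR):

```python
import numpy as np

def embed_lsb(cover, bits):
    # Replace the least significant bit of each cover pixel with a message bit.
    cover = np.asarray(cover, dtype=np.uint8)
    bits = np.asarray(bits, dtype=np.uint8)
    return (cover & 0xFE) | bits

def extract_lsb(stego):
    # Recover the message bits from the least significant bit plane.
    return np.asarray(stego, dtype=np.uint8) & 1
```

Extraction is exact, and the per-pixel distortion is bounded by one intensity level.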

  9. Molecular–Genetic Imaging: A Nuclear Medicine–Based Perspective

    Directory of Open Access Journals (Sweden)

    Ronald G. Blasberg

    2002-07-01

    Full Text Available Molecular imaging is a relatively new discipline, which developed over the past decade, initially driven by in situ reporter imaging technology. Noninvasive in vivo molecular–genetic imaging developed more recently and is based on nuclear (positron emission tomography [PET], gamma camera, autoradiography) imaging as well as magnetic resonance (MR) and in vivo optical imaging. Molecular–genetic imaging has its roots in both molecular biology and cell biology, as well as in new imaging technologies. The focus of this presentation will be nuclear-based molecular–genetic imaging, but it will comment on the value and utility of combining different imaging modalities. Nuclear-based molecular imaging can be viewed in terms of three different imaging strategies: (1) “indirect” reporter gene imaging; (2) “direct” imaging of endogenous molecules; or (3) “surrogate” or “bio-marker” imaging. Examples of each imaging strategy will be presented and discussed. The rapid growth of in vivo molecular imaging is due to the established base of in vivo imaging technologies, the established programs in molecular and cell biology, and the convergence of these disciplines. The development of versatile and sensitive assays that do not require tissue samples will be of considerable value for monitoring molecular–genetic and cellular processes in animal models of human disease, as well as for studies in human subjects in the future. Noninvasive imaging of molecular–genetic and cellular processes will complement established ex vivo molecular–biological assays that require tissue sampling, and will provide a spatial as well as a temporal dimension to our understanding of various diseases and disease processes.

  10. SAR Image Classification Based on Its Texture Features

    Institute of Scientific and Technical Information of China (English)

    LI Pingxiang; FANG Shenghui

    2003-01-01

    SAR images not only have all-day, all-weather imaging capability, but also provide object information different from that of visible and infrared sensors. However, SAR images have some drawbacks, such as more speckle and fewer bands. The authors conducted texture statistics analysis experiments on SAR image features in order to improve the accuracy of SAR image interpretation. Texture analysis is found to be an effective method for improving the accuracy of SAR image interpretation.
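Texture statistics of the kind used for such classification are commonly derived from a gray-level co-occurrence matrix (GLCM). A minimal sketch for one pixel offset and two classic Haralick-style statistics; the specific statistics and offsets used by the authors are not stated here, so these choices are illustrative:

```python
import numpy as np

def glcm_features(img, levels=8):
    # Gray-level co-occurrence matrix for horizontally adjacent pixels of an
    # image with values in [0, 1], plus contrast and energy statistics.
    q = np.clip((np.asarray(img, float) * levels).astype(int), 0, levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()                    # joint co-occurrence probabilities
    i, j = np.indices(p.shape)
    return {"contrast": float(np.sum(p * (i - j) ** 2)),
            "energy": float(np.sum(p ** 2))}
```

A homogeneous region yields zero contrast and maximal energy, while a speckled region yields high contrast and low energy, which is what makes these statistics discriminative for SAR interpretation.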

  11. Ghost imaging based on Pearson correlation coefficients

    International Nuclear Information System (INIS)

    Yu Wen-Kai; Yao Xu-Ri; Liu Xue-Feng; Li Long-Zhen; Zhai Guang-Jie

    2015-01-01

    Correspondence imaging is a new modality of ghost imaging, which can retrieve a positive/negative image by simple conditional averaging of the reference frames that correspond to relatively large/small values of the total intensity measured at the bucket detector. Here we propose and experimentally demonstrate a more rigorous and general approach in which a ghost image is retrieved by calculating a Pearson correlation coefficient between the bucket detector intensity and the brightness at a given pixel of the reference frames, and at the next pixel, and so on. Furthermore, we theoretically provide a statistical interpretation of these two imaging phenomena, and explain how the error depends on the sample size and what kind of distribution the error obeys. According to our analysis, the image signal-to-noise ratio can be greatly improved and the sampling number reduced by means of our new method. (paper)
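The proposed retrieval rule, a per-pixel Pearson correlation between the bucket-detector signal and the reference-frame brightness, can be sketched directly in NumPy. The simulated speckle frames and transmissive object below are illustrative stand-ins for real measurement data:

```python
import numpy as np

def pearson_image(frames, bucket):
    # frames: (N, H, W) reference speckle patterns; bucket: (N,) total
    # intensities behind the object. Returns the (H, W) image of per-pixel
    # Pearson correlation coefficients.
    f = frames - frames.mean(axis=0)
    b = bucket - bucket.mean()
    cov = np.tensordot(b, f, axes=(0, 0)) / len(bucket)
    denom = f.std(axis=0) * b.std()
    return cov / np.maximum(denom, 1e-12)
```

Pixels inside the transmissive region correlate positively with the bucket signal, while pixels outside it hover around zero, so thresholding the coefficient map recovers the object.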

  12. TSV last for hybrid pixel detectors: Application to particle physics and imaging experiments

    CERN Document Server

    Henry, D; Berthelot, A; Cuchet, R; Chantre, C; Campbell, M

    Hybrid pixel detectors are now widely used in particle physics experiments and at synchrotron light sources. They have also stimulated growing interest in other fields, in particular medical imaging. Through the continuous pursuit of miniaturization in CMOS it has been possible to increase the functionality per pixel while maintaining or even shrinking pixel dimensions. The main constraint on the more extensive use of the technology in all fields is the cost of module building and the difficulty of covering large areas seamlessly [1]. On the other hand, in the field of electronic component integration, a new approach called 3D integration has been developed in recent years. This concept, based on using the vertical axis for component integration, improves the global performance of complex systems. Thanks to this technology, the cost and form factor of components can be decreased and the performance of the global system can be enhanced. In the field of radiation imaging detectors the a...

  13. Improved Mesh_Based Image Morphing ‎

    Directory of Open Access Journals (Sweden)

    Mohammed Abdullah Taha

    2017-11-01

    Full Text Available Image morphing is a multi-step process that generates a sequence of transitions between two images. The idea is to obtain a sequence of intermediate images which, when assembled with the original images, represents the change from one image to the other. The process of morphing requires time and attention to detail in order to get good results. Image morphing requires at least two processes: warping and cross-dissolve. Warping is the geometric transformation of images; cross-dissolve interpolates the color of each pixel from the first image value to the corresponding second image value over time. Image morphing techniques differ in their approach to the image warping procedure. This work presents a survey of different techniques for constructing morphed images by reviewing the different warping techniques. One of the predominant approaches to warping is mesh warping, which suffers from some problems, including ghosting. This work proposes and implements an improved mesh warping technique to construct morphed images. The results show that the proposed approach can overcome the problems of the traditional mesh technique
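The cross-dissolve half of the pipeline is simple to sketch. In a full morph each frame would first mesh-warp both sources toward the intermediate geometry; here the warp is the identity, purely for illustration:

```python
import numpy as np

def cross_dissolve(img_a, img_b, t):
    # Linear color interpolation between two (already warped) frames.
    return (1.0 - t) * img_a + t * img_b

def morph_sequence(img_a, img_b, n_frames):
    # Sequence of intermediate frames from img_a (t=0) to img_b (t=1).
    # A real morph would warp both images toward the intermediate mesh
    # before each dissolve; that step is omitted here.
    return [cross_dissolve(img_a, img_b, t)
            for t in np.linspace(0.0, 1.0, n_frames)]
```

The endpoints of the sequence reproduce the two source images exactly, and the midpoint is their equal blend.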

  14. Modeling multiple visual words assignment for bag-of-features based medical image retrieval

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-01-01

    In this paper, we investigate bag-of-features based medical image retrieval methods, which represent an image as a collection of local features, such as image patches and key points with SIFT descriptors. To improve the bag-of-features method, we first model the assignment of a local descriptor as contribution functions, and then propose a new multiple assignment strategy. By assuming that a local feature can be reconstructed from its neighboring visual words in the vocabulary, we solve for the reconstruction weights as a QP problem and then use the solved weights as contribution functions, resulting in a new assignment method called QP assignment. We carry out our experiments on the ImageCLEFmed datasets. Experimental results show that our proposed method exceeds the performance of traditional solutions and works well for bag-of-features based medical image retrieval tasks.

  15. Modeling multiple visual words assignment for bag-of-features based medical image retrieval

    KAUST Repository

    Wang, Jim Jing-Yan; Almasri, Islam

    2012-01-01

    In this paper, we investigate bag-of-features based medical image retrieval methods, which represent an image as a collection of local features, such as image patches and key points with SIFT descriptors. To improve the bag-of-features method, we first model the assignment of a local descriptor as contribution functions, and then propose a new multiple assignment strategy. By assuming that a local feature can be reconstructed from its neighboring visual words in the vocabulary, we solve for the reconstruction weights as a QP problem and then use the solved weights as contribution functions, resulting in a new assignment method called QP assignment. We carry out our experiments on the ImageCLEFmed datasets. Experimental results show that our proposed method exceeds the performance of traditional solutions and works well for bag-of-features based medical image retrieval tasks.
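The reconstruction-weight idea can be sketched with an unconstrained least-squares fit over the k nearest visual words. The paper solves a constrained QP; the clipping and renormalization below are a simplification standing in for those constraints:

```python
import numpy as np

def assignment_weights(descriptor, vocab, k=5):
    # Approximate a local descriptor by its k nearest visual words and use
    # the (clipped, renormalized) reconstruction weights as soft assignments.
    # vocab: (n_words, dim) array of visual-word centers.
    d2 = np.sum((vocab - descriptor) ** 2, axis=1)
    nn = np.argsort(d2)[:k]                     # k nearest visual words
    w, *_ = np.linalg.lstsq(vocab[nn].T, descriptor, rcond=None)
    w = np.maximum(w, 0.0)                      # keep contributions non-negative
    s = w.sum()
    weights = np.zeros(len(vocab))
    weights[nn] = w / s if s > 0 else 1.0 / k
    return weights
```

Each descriptor thus contributes fractional counts to several histogram bins instead of a single hard-assigned one.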

  16. Evidence-based medicine Training: Kazakhstan experience.

    Science.gov (United States)

    Kamalbekova, G; Kalieva, M

    2015-01-01

    practice. These were: failure in implementation, lack of understanding on the part of colleagues, commitment to traditional obsolete methods of treatment, and discrepancies between some of the existing standards of diagnosis and treatment and the principles of evidence-based medicine. To the question «Are there any end products after listening to the seminar?», 67% of the respondents answered in the affirmative. The end products were mainly the publication of articles and abstracts, including international publications, and participation in the working group on the revision and development of clinical protocols. Barriers to the implementation of evidence-based medicine in education and practice are: lack of funding to provide access to reliable sources of information and websites; outdated research methodology skills in medical education; lack of skills in critical evaluation of medical information; a tradition of authoritarian relationships and reliance on stencils of past experience; and failure to comply with continuing education programs ("from training to professional development"). Knowledge of evidence-based medicine and the skills to search for scientific data, to evaluate their validity, and to transform scientific data into practical solutions are necessary for health workers in their daily activities. This culture needs to be rooted in modern medical education.

  17. Image Registration-Based Bolt Loosening Detection of Steel Joints

    Science.gov (United States)

    2018-01-01

    Self-loosening of bolts caused by repetitive loads and vibrations is one of the common defects that can weaken the structural integrity of bolted steel joints in civil structures. Many existing approaches for detecting loosening bolts are based on physical sensors and, hence, require extensive sensor deployment, which limit their abilities to cost-effectively detect loosened bolts in a large number of steel joints. Recently, computer vision-based structural health monitoring (SHM) technologies have demonstrated great potential for damage detection due to the benefits of being low cost, easy to deploy, and contactless. In this study, we propose a vision-based non-contact bolt loosening detection method that uses a consumer-grade digital camera. Two images of the monitored steel joint are first collected during different inspection periods and then aligned through two image registration processes. If the bolt experiences rotation between inspections, it will introduce differential features in the registration errors, serving as a good indicator for bolt loosening detection. The performance and robustness of this approach have been validated through a series of experimental investigations using three laboratory setups including a gusset plate on a cross frame, a column flange, and a girder web. The bolt loosening detection results are presented for easy interpretation such that informed decisions can be made about the detected loosened bolts. PMID:29597264

  18. Image Registration-Based Bolt Loosening Detection of Steel Joints.

    Science.gov (United States)

    Kong, Xiangxiong; Li, Jian

    2018-03-28

    Self-loosening of bolts caused by repetitive loads and vibrations is one of the common defects that can weaken the structural integrity of bolted steel joints in civil structures. Many existing approaches for detecting loosening bolts are based on physical sensors and, hence, require extensive sensor deployment, which limit their abilities to cost-effectively detect loosened bolts in a large number of steel joints. Recently, computer vision-based structural health monitoring (SHM) technologies have demonstrated great potential for damage detection due to the benefits of being low cost, easy to deploy, and contactless. In this study, we propose a vision-based non-contact bolt loosening detection method that uses a consumer-grade digital camera. Two images of the monitored steel joint are first collected during different inspection periods and then aligned through two image registration processes. If the bolt experiences rotation between inspections, it will introduce differential features in the registration errors, serving as a good indicator for bolt loosening detection. The performance and robustness of this approach have been validated through a series of experimental investigations using three laboratory setups including a gusset plate on a cross frame, a column flange, and a girder web. The bolt loosening detection results are presented for easy interpretation such that informed decisions can be made about the detected loosened bolts.
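The alignment step at the core of this approach can be illustrated with basic phase correlation; the papers' registration pipeline is more elaborate, and this sketch only shows how an integer translation between two inspection images can be estimated before the residual registration errors are examined:

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    # Estimate the integer translation between two same-sized images from the
    # peak of the normalized cross-power spectrum. Returns the (dy, dx) shift
    # to apply to img_b (via np.roll) to align it with img_a.
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross = Fa * np.conj(Fb)
    cross /= np.maximum(np.abs(cross), 1e-12)    # keep phase, discard magnitude
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    return (dy - h if dy > h // 2 else dy,       # wrap to signed shifts
            dx - w if dx > w // 2 else dx)
```

After alignment, differential features that survive registration (such as a rotated bolt head) stand out in the registration-error map.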

  19. CMOS image sensor-based implantable glucose sensor using glucose-responsive fluorescent hydrogel.

    Science.gov (United States)

    Tokuda, Takashi; Takahashi, Masayuki; Uejima, Kazuhiro; Masuda, Keita; Kawamura, Toshikazu; Ohta, Yasumi; Motoyama, Mayumi; Noda, Toshihiko; Sasagawa, Kiyotaka; Okitsu, Teru; Takeuchi, Shoji; Ohta, Jun

    2014-11-01

    An implantable glucose sensor based on a CMOS image sensor and an optical sensing scheme is proposed and experimentally verified. A glucose-responsive fluorescent hydrogel is used as the mediator in the measurement scheme. The wired implantable glucose sensor was realized by integrating a CMOS image sensor, the hydrogel, UV light-emitting diodes, and an optical filter on a flexible polyimide substrate. The feasibility of the glucose sensor was verified by both in vitro and in vivo experiments.

  20. Multi-clues image retrieval based on improved color invariants

    Science.gov (United States)

    Liu, Liu; Li, Jian-Xun

    2012-05-01

    At present, image retrieval has made great progress in indexing efficiency and memory usage, mainly benefiting from the use of text retrieval technology such as the bag-of-features (BOF) model and the inverted-file structure. Meanwhile, because robust local feature invariants are selected to establish the BOF, its retrieval precision is enhanced, especially when applied to a large-scale database. However, these local feature invariants mainly consider the geometric variance of the objects in the images, and thus the color information of the objects is not exploited. With the development of information technology and the Internet, the majority of our retrieval objects are color images, so retrieval performance can be further improved through proper use of the color information. We propose an improved method based on analyzing a flaw of the shadow-shading quasi-invariant: the response and performance of the shadow-shading quasi-invariant at object edges under varying lighting are enhanced. The color descriptors of the invariant regions are extracted and integrated into the BOF based on the local features. The robustness of the algorithm and the improvement in performance are verified in the final experiments.

  1. Artistic image analysis using graph-based learning approaches.

    Science.gov (United States)

    Carneiro, Gustavo

    2013-08-01

    We introduce a new methodology for the problem of artistic image analysis, which, among other tasks, involves the automatic identification of visual classes present in an artwork. In this paper, we advocate the idea that artistic image analysis must explore a graph that captures the network of artistic influences by computing similarities in terms of appearance and manual annotation. One of the novelties of our methodology is the proposed formulation, which is a principled way of combining these two similarities in a single graph. Using this graph, we show that an efficient random walk algorithm based on an inverted label propagation formulation produces more accurate annotation and retrieval results compared with the following baseline algorithms: bag of visual words, label propagation, matrix completion, and structural learning. We also show that the proposed approach leads to more efficient inference and training procedures. The experiments are run on a database containing 988 artistic images (with 49 visual classification problems divided into a multiclass problem with 27 classes and 48 binary problems), where we show the inference and training running times, and quantitative comparisons with respect to several retrieval and annotation performance measures.
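The label-propagation baseline mentioned above can be sketched on a small similarity graph. This is a generic iterative formulation, not the paper's inverted variant, and the graph below is a toy example:

```python
import numpy as np

def label_propagation(W, labels, n_iter=50, alpha=0.9):
    # Iterative label propagation on a similarity graph.
    # W: (n, n) symmetric non-negative similarity matrix.
    # labels: length-n vector with class ids >= 0 for seeds, -1 for unlabeled.
    n = len(labels)
    classes = sorted(set(labels[labels >= 0]))
    Y = np.zeros((n, len(classes)))                  # one-hot seed matrix
    for c_idx, c in enumerate(classes):
        Y[labels == c, c_idx] = 1.0
    S = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)  # row-normalize
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * Y        # propagate, re-inject seeds
    return np.argmax(F, axis=1)                      # predicted class per node
```

On a graph with two disconnected clusters and one seed per cluster, the seed labels spread to the unlabeled nodes of their own cluster.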

  2. The CORONAS-Photon/TESIS experiment on EUV imaging spectroscopy of the Sun

    Science.gov (United States)

    Kuzin, S.; Zhitnik, I.; Bogachev, S.; Bugaenko, O.; Ignat'ev, A.; Mitrofanov, A.; Perzov, A.; Shestov, S.; Slemzin, V.; Suhodrev, N.

    The new TESIS experiment is being developed for the Russian CORONAS-Photon mission, whose launch is planned for the end of 2007. The experiment is aimed at studying the activity of the Sun during the minimum, rise and maximum phases of the 24th cycle of solar activity by the method of XUV imaging spectroscopy, based on registering full-Sun monochromatic images with high spatial and temporal resolution. The scientific tasks of the experiment are: (i) investigation of dynamic processes in the corona (flares, CMEs, etc.) with high spatial (up to 1 arc second) and temporal (up to 1 second) resolution; (ii) determination of the main plasma parameters, such as plasma electron and ion density and temperature, differential emission measure, etc.; (iii) study of the processes of appearance and development of large-scale, long-lived magnetic structures in the solar corona, and of the influence of these structures on the global activity of the corona; (iv) study of the mechanisms of energy accumulation and release in solar flares and of the transformation of this energy into plasma heating and kinetic energy. To obtain the information for these studies, TESIS will register full-Sun images in narrow spectral intervals and in the monochromatic lines of HeII, SiXI, FeXXI-FeXXIII and MgXII ions. The instrument includes 5 independent channels: 2 telescopes (304 and 132 Å), a wide-field (2.5 degrees) coronagraph (280-330 Å), and 8.42 Å spectroheliographs. A detailed description of the TESIS experiment and the instrument is presented.

  3. Image fusion between whole body FDG PET images and whole body MRI images using a full-automatic mutual information-based multimodality image registration software

    International Nuclear Information System (INIS)

    Uchida, Yoshitaka; Nakano, Yoshitada; Fujibuchi, Toshiou; Isobe, Tomoko; Kazama, Toshiki; Ito, Hisao

    2006-01-01

    We attempted image fusion between whole body PET and whole body MRI of thirty patients using a fully automatic mutual information (MI)-based multimodality image registration software, and evaluated the accuracy of this method and the impact of the coregistered images on diagnostic accuracy. For 25 of the 30 fused images in the body area, translation gaps were within 6 mm in all axes and rotation gaps were within 2 degrees around all axes. In the head and neck area, considerably larger gaps, caused by differences in head inclination at imaging, occurred in 16 patients; however, these gaps could be reduced by fusing that region separately. In 6 patients, diagnostic accuracy using the PET/MRI fused images was superior to that with the PET image alone. This work shows that whole body FDG PET images and whole body MRI images can be fused automatically and accurately using MI-based multimodality image registration software, and that this technique can add useful information when evaluating FDG PET images. (author)
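The mutual-information score that drives this kind of registration can be estimated from a joint intensity histogram of the two images; the registration then searches for the transform maximizing it. A minimal sketch (the bin count and test images are illustrative):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two aligned images, estimated from
    their joint intensity histogram: MI = sum p(x,y) log(p(x,y)/(p(x)p(y)))."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image B
    nz = pxy > 0                              # avoid log(0) on empty bins
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Correctly aligned images share structure, so their MI is high; misaligned or unrelated images have nearly independent intensities and MI close to zero.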

  4. Anesthesia Experiences During Magnetic Imaging Process on Pediatric Patients

    OpenAIRE

    Öztürk, Ömür; Üstebay, Sefer; Bilge, Ali

    2017-01-01

    We aim to retrospectively study, from hospital records, the quality of sedation and the complication rates during anesthesia applied with sodium thiopental and propofol, as well as the reasons for magnetic resonance imaging requests in pediatric patients. Material and Method: In this study, 109 patients aged from 3 months to 5 years who underwent magnetic resonance imaging under anesthesia were examined retrospectively. Results: Sodium thiopental was administered to 53 patients and propofol...

  5. Comparison of Subset-Based Local and Finite Element-Based Global Digital Image Correlation

    KAUST Repository

    Pan, Bing

    2015-02-12

    Digital image correlation (DIC) techniques require an image matching algorithm to register the same physical points represented in different images. Subset-based local DIC and finite element-based (FE-based) global DIC are the two primary image matching methods that have been extensively investigated and regularly used in the field of experimental mechanics. Due to its straightforward implementation and high efficiency, subset-based local DIC has been used in almost all commercial DIC packages. However, it is argued by some researchers that FE-based global DIC offers better accuracy because of the enforced continuity between element nodes. We propose a detailed performance comparison between these different DIC algorithms both in terms of measurement accuracy and computational efficiency. Then, by measuring displacements of the same calculation points using the same calculation algorithms (e.g., correlation criterion, initial guess estimation, subpixel interpolation, optimization algorithm and convergence conditions) and identical calculation parameters (e.g., subset or element size), the performances of subset-based local DIC and two FE-based global DIC approaches are carefully compared in terms of measurement error and computational efficiency using both numerical tests and real experiments. A detailed examination of the experimental results reveals that, when subset (element) size is not very small and the local deformation within a subset (element) can be well approximated by the shape function used, standard subset-based local DIC approach not only provides better results in measured displacements, but also demonstrates much higher computation efficiency. However, several special merits of FE-based global DIC approaches are indicated.
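The core subset-matching step of local DIC — finding the displacement of a reference subset that maximizes a correlation criterion — can be sketched at integer-pixel precision as follows (a toy version; real DIC adds shape functions, subpixel interpolation and iterative optimization, as the abstract notes):

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation, a standard DIC criterion."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def match_subset(ref, defm, top, left, size, search=5):
    """Slide the reference subset over the deformed image within a search
    window and return the integer displacement with the highest ZNCC."""
    sub = ref[top:top + size, left:left + size]
    best_c, best_uv = -2.0, (0, 0)
    for dv in range(-search, search + 1):
        for du in range(-search, search + 1):
            t, l = top + dv, left + du
            if t < 0 or l < 0 or t + size > defm.shape[0] or l + size > defm.shape[1]:
                continue                      # candidate window leaves the image
            c = zncc(sub, defm[t:t + size, l:l + size])
            if c > best_c:
                best_c, best_uv = c, (dv, du)
    return best_uv
```

Repeating this for a grid of subset centers yields the displacement field; FE-based global DIC instead solves for all nodal displacements at once with continuity enforced between elements.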

  6. Comparison of Subset-Based Local and Finite Element-Based Global Digital Image Correlation

    KAUST Repository

    Pan, Bing; Wang, B.; Lubineau, Gilles; Moussawi, Ali

    2015-01-01

    Digital image correlation (DIC) techniques require an image matching algorithm to register the same physical points represented in different images. Subset-based local DIC and finite element-based (FE-based) global DIC are the two primary image matching methods that have been extensively investigated and regularly used in the field of experimental mechanics. Due to its straightforward implementation and high efficiency, subset-based local DIC has been used in almost all commercial DIC packages. However, it is argued by some researchers that FE-based global DIC offers better accuracy because of the enforced continuity between element nodes. We propose a detailed performance comparison between these different DIC algorithms both in terms of measurement accuracy and computational efficiency. Then, by measuring displacements of the same calculation points using the same calculation algorithms (e.g., correlation criterion, initial guess estimation, subpixel interpolation, optimization algorithm and convergence conditions) and identical calculation parameters (e.g., subset or element size), the performances of subset-based local DIC and two FE-based global DIC approaches are carefully compared in terms of measurement error and computational efficiency using both numerical tests and real experiments. A detailed examination of the experimental results reveals that, when subset (element) size is not very small and the local deformation within a subset (element) can be well approximated by the shape function used, standard subset-based local DIC approach not only provides better results in measured displacements, but also demonstrates much higher computation efficiency. However, several special merits of FE-based global DIC approaches are indicated.

  7. Simple and robust image-based autofocusing for digital microscopy.

    Science.gov (United States)

    Yazdanfar, Siavash; Kenny, Kevin B; Tasimi, Krenar; Corwin, Alex D; Dixon, Elizabeth L; Filkins, Robert J

    2008-06-09

    A simple image-based autofocusing scheme for digital microscopy is demonstrated that uses as few as two intermediate images to bring the sample into focus. The algorithm is adapted to a commercial inverted microscope and used to automate brightfield and fluorescence imaging of histopathology tissue sections.
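The abstract does not name its focus criterion; a common image-based choice is the variance of the Laplacian, which an autofocus loop maximizes over stage positions. A minimal sketch, with a crude box blur standing in for defocus (illustrative, not the authors' algorithm):

```python
import numpy as np

def focus_measure(img):
    """Variance-of-Laplacian sharpness score: in-focus images carry more
    high-frequency content, so the Laplacian response has larger spread."""
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def box_blur(img):
    """3x3 mean filter used here only to simulate a defocused frame."""
    out = img.copy()
    out[1:-1, 1:-1] = (img[:-2, :-2] + img[:-2, 1:-1] + img[:-2, 2:]
                       + img[1:-1, :-2] + img[1:-1, 1:-1] + img[1:-1, 2:]
                       + img[2:, :-2] + img[2:, 1:-1] + img[2:, 2:]) / 9.0
    return out
```

With as few as two intermediate frames at different focal positions, the scheme can compare scores and step toward the position with the higher one.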

  8. Determination of Surface Tension of Surfactant Solutions through Capillary Rise Measurements: An Image-Processing Undergraduate Laboratory Experiment

    Science.gov (United States)

    Huck-Iriart, Cristián; De-Candia, Ariel; Rodriguez, Javier; Rinaldi, Carlos

    2016-01-01

    In this work, we describe an image processing procedure for measuring the surface tension of the air-liquid interface using isothermal capillary action. The experiment, designed for an undergraduate course, is based on the analysis of a series of solutions with diverse surfactant concentrations at different ionic strengths. The objective of…
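The physics behind the measurement is Jurin's law: the image analysis yields the rise height h in a capillary of radius r, from which the surface tension follows. A minimal sketch (water parameters assumed for the defaults):

```python
import math

def surface_tension(height_m, radius_m, density=997.0, g=9.81,
                    contact_angle_rad=0.0):
    """Jurin's law, h = 2*gamma*cos(theta) / (rho*g*r), solved for gamma.
    Defaults assume water at ~25 C and complete wetting (theta = 0)."""
    return density * g * height_m * radius_m / (2.0 * math.cos(contact_angle_rad))
```

For pure water in a 0.25 mm radius capillary, a rise of about 6 cm corresponds to the familiar ~0.072 N/m; adding surfactant lowers the measured rise and hence the computed surface tension.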

  9. Nuclear imaging of the fuel assembly in ignition experiments

    Energy Technology Data Exchange (ETDEWEB)

    Grim, G. P.; Guler, N.; Merrill, F. E.; Morgan, G. L.; Danly, C. R.; Volegov, P. L.; Wilde, C. H.; Wilson, D. C.; Clark, D. S.; Hinkel, D. E.; Jones, O. S.; Raman, K. S.; Izumi, N.; Fittinghoff, D. N.; Drury, O. B.; Alger, E. T.; Arnold, P. A.; Ashabranner, R. C.; Atherton, L. J.; Barrios, M. A.; Batha, S.; Bell, P. M.; Benedetti, L. R.; Berger, R. L.; Bernstein, L. A.; Berzins, L. V.; Betti, R.; Bhandarkar, S. D.; Bionta, R. M.; Bleuel, D. L.; Boehly, T. R.; Bond, E. J.; Bowers, M. W.; Bradley, D. K.; Brunton, G. K.; Buckles, R. A.; Burkhart, S. C.; Burr, R. F.; Caggiano, J. A.; Callahan, D. A.; Casey, D. T.; Castro, C.; Celliers, P. M.; Cerjan, C. J.; Chandler, G. A.; Choate, C.; Cohen, S. J.; Collins, G. W.; Cooper, G. W.; Cox, J. R.; Cradick, J. R.; Datte, P. S.; Dewald, E. L.; Di Nicola, P.; Di Nicola, J. M.; Divol, L.; Dixit, S. N.; Dylla-Spears, R.; Dzenitis, E. G.; Eckart, M. J.; Eder, D. C.; Edgell, D. H.; Edwards, M. J.; Eggert, J. H.; Ehrlich, R. B.; Erbert, G. V.; Fair, J.; Farley, D. R.; Felker, B.; Fortner, R. J.; Frenje, J. A.; Frieders, G.; Friedrich, S.; Gatu-Johnson, M.; Gibson, C. R.; Giraldez, E.; Glebov, V. Y.; Glenn, S. M.; Glenzer, S. H.; Gururangan, G.; Haan, S. W.; Hahn, K. D.; Hammel, B. A.; Hamza, A. V.; Hartouni, E. P.; Hatarik, R.; Hatchett, S. P.; Haynam, C.; Hermann, M. R.; Herrmann, H. W.; Hicks, D. G.; Holder, J. P.; Holunga, D. M.; Horner, J. B.; Hsing, W. W.; Huang, H.; Jackson, M. C.; Jancaitis, K. S.; Kalantar, D. H.; Kauffman, R. L.; Kauffman, M. I.; Khan, S. F.; Kilkenny, J. D.; Kimbrough, J. R.; Kirkwood, R.; Kline, J. L.; Knauer, J. P.; Knittel, K. M.; Koch, J. A.; Kohut, T. R.; Kozioziemski, B. J.; Krauter, K.; Krauter, G. W.; Kritcher, A. L.; Kroll, J.; Kyrala, G. A.; Fortune, K. N. La; LaCaille, G.; Lagin, L. J.; Land, T. A.; Landen, O. L.; Larson, D. W.; Latray, D. A.; Leeper, R. J.; Lewis, T. L.; LePape, S.; Lindl, J. D.; Lowe-Webb, R. R.; Ma, T.; MacGowan, B. J.; MacKinnon, A. J.; MacPhee, A. G.; Malone, R. 
M.; Malsbury, T. N.; Mapoles, E.; Marshall, C. D.; Mathisen, D. G.; McKenty, P.; McNaney, J. M.; Meezan, N. B.; Michel, P.; Milovich, J. L.; Moody, J. D.; Moore, A. S.; Moran, M. J.; Moreno, K.; Moses, E. I.; Munro, D. H.; Nathan, B. R.; Nelson, A. J.; Nikroo, A.; Olson, R. E.; Orth, C.; Pak, A. E.; Palma, E. S.; Parham, T. G.; Patel, P. K.; Patterson, R. W.; Petrasso, R. D.; Prasad, R.; Ralph, J. E.; Regan, S. P.; Rinderknecht, H.; Robey, H. F.; Ross, G. F.; Ruiz, C. L.; Seguin, F. H.; Salmonson, J. D.; Sangster, T. C.; Sater, J. D.; Saunders, R. L.; Schneider, M. B.; Schneider, D. H.; Shaw, M. J.; Simanovskaia, N.; Spears, B. K.; Springer, P. T.; Stoeckl, C.; Stoeffl, W.; Suter, L. J.; Thomas, C. A.; Tommasini, R.; Town, R. P.; Traille, A. J.; Wonterghem, B. Van; Wallace, R. J.; Weaver, S.; Weber, S. V.; Wegner, P. J.; Whitman, P. K.; Widmann, K.; Widmayer, C. C.; Wood, R. D.; Young, B. K.; Zacharias, R. A.; Zylstra, A.

    2013-05-01

    First results from the analysis of neutron image data collected on implosions of cryogenically layered deuterium-tritium capsules during the 2011-2012 National Ignition Campaign are reported. The data span a variety of experimental designs aimed at increasing the stagnation pressure of the central hotspot and areal density of the surrounding fuel assembly. Images of neutrons produced by deuterium–tritium fusion reactions in the hotspot are presented, as well as images of neutrons that scatter in the surrounding dense fuel assembly. The image data are compared with 1D and 2D model predictions, and consistency checked using other diagnostic data. The results indicate that the size of the fusing hotspot is consistent with the model predictions, as well as other imaging data, while the overall size of the fuel assembly, inferred from the scattered neutron images, is systematically smaller than the model predictions. Preliminary studies indicate these differences are consistent with a significant fraction (20%–25%) of the initial deuterium-tritium fuel mass residing outside the compact fuel assembly, due either to low-mode mass asymmetry or high-mode 3D mix effects at the ablator-ice interface.

  10. Dual-source CT cardiac imaging: initial experience

    International Nuclear Information System (INIS)

    Johnson, Thorsten R.C.; Nikolaou, Konstantin; Wintersperger, Bernd J.; Rist, Carsten; Buhmann, Sonja; Reiser, Maximilian F.; Becker, Christoph R.; Leber, Alexander W.; Ziegler, Franz von; Knez, Andreas

    2006-01-01

    The relation of heart rate and image quality in the depiction of coronary arteries, heart valves and myocardium was assessed on a dual-source computed tomography system (DSCT). Coronary CT angiography was performed on a DSCT (Somatom Definition, Siemens) with high concentration contrast media (Iopromide, Ultravist 370, Schering) in 24 patients with heart rates between 44 and 92 beats per minute. Images were reconstructed over the whole cardiac cycle in 10% steps. Two readers independently assessed the image quality with regard to the diagnostic evaluation of right and left coronary artery, heart valves and left ventricular myocardium for the assessment of vessel wall changes, coronary stenoses, valve morphology and function and ventricular function on a three point grading scale. The image quality ratings at the optimal reconstruction interval were 1.24±0.42 for the right and 1.09±0.27 for the left coronary artery. A reconstruction of diagnostic systolic and diastolic images is possible for a wide range of heart rates, allowing also a functional evaluation of valves and myocardium. Dual-source CT offers very robust diagnostic image quality in a wide range of heart rates. The high temporal resolution now also makes a functional evaluation of the heart valves and myocardium possible. (orig.)

  11. Nuclear imaging of the fuel assembly in ignition experiments

    Energy Technology Data Exchange (ETDEWEB)

    Grim, G. P.; Guler, N.; Merrill, F. E.; Morgan, G. L.; Danly, C. R.; Volegov, P. L.; Wilde, C. H.; Wilson, D. C.; Batha, S.; Herrmann, H. W.; Kline, J. L.; Kyrala, G. A. [Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Clark, D. S.; Hinkel, D. E.; Jones, O. S.; Raman, K. S.; Izumi, N.; Fittinghoff, D. N.; Drury, O. B.; Alger, E. T. [Lawrence Livermore National Laboratory, Livermore, California 94551-0808 (United States); and others

    2013-05-15

    First results from the analysis of neutron image data collected on implosions of cryogenically layered deuterium-tritium capsules during the 2011-2012 National Ignition Campaign are reported. The data span a variety of experimental designs aimed at increasing the stagnation pressure of the central hotspot and areal density of the surrounding fuel assembly. Images of neutrons produced by deuterium–tritium fusion reactions in the hotspot are presented, as well as images of neutrons that scatter in the surrounding dense fuel assembly. The image data are compared with 1D and 2D model predictions, and consistency checked using other diagnostic data. The results indicate that the size of the fusing hotspot is consistent with the model predictions, as well as other imaging data, while the overall size of the fuel assembly, inferred from the scattered neutron images, is systematically smaller than the model predictions. Preliminary studies indicate these differences are consistent with a significant fraction (20%–25%) of the initial deuterium-tritium fuel mass residing outside the compact fuel assembly, due either to low-mode mass asymmetry or high-mode 3D mix effects at the ablator-ice interface.

  12. Scene matching based on non-linear pre-processing on reference image and sensed image

    Institute of Scientific and Technical Information of China (English)

    Zhong Sheng; Zhang Tianxu; Sang Nong

    2005-01-01

    To solve the heterogeneous image scene matching problem, a non-linear pre-processing method applied to the original images before intensity-based correlation is proposed. The results show that the probability of correct matching is raised greatly; the effect is especially remarkable for low S/N image pairs.
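The abstract does not specify its non-linear pre-processing; one illustrative choice with the same intent is a rank transform, which makes intensity-based correlation invariant to the monotone intensity mappings that differ between heterogeneous sensors:

```python
import numpy as np

def rank_transform(img):
    """Replace each pixel by the rank of its intensity in the image:
    any strictly monotone sensor response then yields identical output."""
    flat = img.ravel()
    ranks = np.empty(flat.size)
    ranks[flat.argsort()] = np.arange(flat.size)   # pixel -> intensity rank
    return ranks.reshape(img.shape)

def ncc(a, b):
    """Normalized cross-correlation used after pre-processing."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

Two views of the same scene through different monotone intensity mappings rank-transform to (nearly) identical images, so their correlation peak becomes reliable.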

  13. Hurricane Imaging Radiometer Wind Speed and Rain Rate Retrievals during the 2010 GRIP Flight Experiment

    Science.gov (United States)

    Sahawneh, Saleem; Farrar, Spencer; Johnson, James; Jones, W. Linwood; Roberts, Jason; Biswas, Sayak; Cecil, Daniel

    2014-01-01

    Microwave remote sensing observations of hurricanes, from NOAA and USAF hurricane surveillance aircraft, provide vital data for hurricane research and operations, for forecasting the intensity and track of tropical storms. The current operational standard for hurricane wind speed and rain rate measurements is the Stepped Frequency Microwave Radiometer (SFMR), which is a nadir viewing passive microwave airborne remote sensor. The Hurricane Imaging Radiometer, HIRAD, will extend the nadir viewing SFMR capability to provide wide swath images of wind speed and rain rate, while flying on a high altitude aircraft. HIRAD was first flown in the Genesis and Rapid Intensification Processes, GRIP, NASA hurricane field experiment in 2010. This paper reports on geophysical retrieval results and provides hurricane images from GRIP flights. An overview of the HIRAD instrument and the radiative transfer theory based, wind speed/rain rate retrieval algorithm is included. Results are presented for hurricane wind speed and rain rate for Earl and Karl, with comparison to collocated SFMR retrievals and WP3D Fuselage Radar images for validation purposes.

  14. Content Based Retrieval System for Magnetic Resonance Images

    International Nuclear Information System (INIS)

    Trojachanets, Katarina

    2010-01-01

    The amount of medical images is continuously increasing as a consequence of the constant growth and development of techniques for digital image acquisition. Manual annotation and description of each image is an impractical, expensive and time consuming approach; moreover, it is an imprecise and insufficient way to describe all the information stored in medical images. This induces the necessity of developing efficient systems for image storage, annotation and retrieval. Content based image retrieval (CBIR) emerges as an efficient approach for digital image retrieval from large databases. It includes two phases. In the first phase, the visual content of the image is analyzed and feature extraction is performed; an appropriate descriptor, namely a feature vector, is then associated with each image. These descriptors are used in the second phase, i.e. the retrieval process. With the aim of improving the efficiency and precision of content based image retrieval systems, feature extraction and automatic image annotation techniques are the subject of continuous research and development. Including classification techniques in the retrieval process enables automatic image annotation in an existing CBIR system, and contributes to more efficient and easier image organization in the system. Applying content based retrieval in the field of magnetic resonance is a big challenge. Magnetic resonance imaging is an image based diagnostic technique which is widely used in the medical environment, and accordingly the number of magnetic resonance images is growing enormously. Magnetic resonance images provide plentiful medical information, high resolution and a specific nature. Thus, the capability of CBIR systems to retrieve images from a large database is of great importance for efficient analysis of this kind of images. The aim of this thesis is to propose a content based retrieval system architecture for magnetic resonance images. To provide the system efficiency, feature

  15. A New Images Hiding Scheme Based on Chaotic Sequences

    Institute of Scientific and Technical Information of China (English)

    LIU Nian-sheng; GUO Dong-hui; WU Bo-xi; Parr G

    2005-01-01

    We propose a data hiding technique for still images, based on chaotic sequences in the transform domain of a covert image. Different chaotic random sequences are multiplied by multiple sensitive images, respectively, to spread the spectrum of the sensitive images, which are then hidden in the covert image as a form of noise. Theoretical analysis and computer simulation show that the new hiding technique has better security, imperceptibility and capacity for hidden information than conventional schemes such as LSB (least significant bit) substitution.
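The spread-spectrum idea can be sketched with a logistic-map chaotic sequence used as the spreading code (a minimal, non-blind 1-D toy version; function names, parameters and the detection rule are illustrative, not the paper's scheme):

```python
def logistic_sequence(x0, n, r=3.99):
    """Logistic-map chaotic sequence x <- r*x*(1-x), mapped to +/-1 chips;
    the seed x0 acts as the secret key."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(1.0 if x >= 0.5 else -1.0)
    return seq

def embed_bit(cover, chips, bit, alpha=2.0):
    """Spread one hidden bit across all cover samples as low-level noise."""
    s = 1.0 if bit else -1.0
    return [c + alpha * s * p for c, p in zip(cover, chips)]

def detect_bit(stego, cover, chips):
    """Correlate the residual with the key's chip sequence; the sign of
    the correlation recovers the hidden bit."""
    corr = sum((s - c) * p for s, c, p in zip(stego, cover, chips))
    return corr > 0
```

Without the seed, the embedded signal is statistically indistinguishable from noise, which is the security argument behind chaotic spread-spectrum hiding.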

  16. Fibre laser based broadband THz imaging systems

    DEFF Research Database (Denmark)

    Eichhorn, Finn

    imaging techniques. This thesis shows that fiber technology can improve the robustness and flexibility of terahertz imaging systems, both through the use of fiber-optic light sources and through the employment of optical fibers as a light-distribution medium. The main focus is placed on multi-element terahertz...

  17. IMAGE TYPE WATER METER CHARACTER RECOGNITION BASED ON EMBEDDED DSP

    OpenAIRE

    LIU Ying; HAN Yan-bin; ZHANG Yu-lin

    2015-01-01

    In this paper, we combine a DSP processor with image processing algorithms and study a method of water meter character recognition. Water meter images are collected through a camera at a fixed angle, and a projection method is used to recognize the digits. The experimental results show that the method recognizes the meter characters accurately, so that manual meter reading is replaced by automatic digital recognition, which improves working efficiency.
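The projection method mentioned above can be sketched as follows: sum the ink in each column of the binarized meter image and split at empty-column gaps, so each run of non-empty columns becomes one digit region (an illustrative simplification of the recognition front end):

```python
def segment_columns(binary):
    """Vertical-projection segmentation of a binary image (lists of 0/1
    rows): returns (start, end) column ranges of the character regions."""
    ncols = len(binary[0])
    proj = [sum(row[c] for row in binary) for c in range(ncols)]  # ink per column
    runs, start = [], None
    for c, v in enumerate(proj):
        if v and start is None:
            start = c                 # entering a character
        elif not v and start is not None:
            runs.append((start, c))   # leaving a character
            start = None
    if start is not None:
        runs.append((start, ncols))   # character touches the right edge
    return runs
```

Each segmented column range would then be cropped and passed to the per-digit classifier.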

  18. A new hyperspectral image compression paradigm based on fusion

    Science.gov (United States)

    Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

    The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on the satellite that carries the hyperspectral sensor; hence, the process must be performed by space qualified hardware with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high resolution multispectral image. These two degraded images are then sent to the Earth's surface, where they must be fused, using a fusion algorithm for hyperspectral and multispectral images, in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression step, which must be performed on board, becomes very simple, the fusion process used to reconstruct the image being the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image, and the results corroborate the benefits of the proposed methodology.
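The two on-board degradation steps are simple to sketch, and they show how the compression ratio is fixed in advance by the chosen factors (illustrative block-averaging; the paper's actual degradation operators may differ):

```python
import numpy as np

def spatial_degrade(cube, factor):
    """Average non-overlapping factor x factor spatial blocks in every band:
    the low-resolution hyperspectral image kept on board."""
    h, w, b = cube.shape
    return cube.reshape(h // factor, factor, w // factor, factor, b).mean(axis=(1, 3))

def spectral_degrade(cube, out_bands):
    """Average contiguous groups of bands: the high-resolution
    multispectral image kept on board."""
    h, w, b = cube.shape
    return cube.reshape(h, w, out_bands, b // out_bands).mean(axis=3)
```

The ratio between the original cube size and the combined size of the two degraded products is determined entirely by the spatial factor and the number of output bands, so it can indeed be fixed before acquisition.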

  19. Model-Based Reconstructive Elasticity Imaging Using Ultrasound

    Directory of Open Access Journals (Sweden)

    Salavat R. Aglyamov

    2007-01-01

    Elasticity imaging is a reconstructive imaging technique where tissue motion in response to mechanical excitation is measured using modern imaging systems, and the estimated displacements are then used to reconstruct the spatial distribution of Young's modulus. Here we present an ultrasound elasticity imaging method that utilizes a model-based technique for Young's modulus reconstruction. Based on the geometry of the imaged object, only one axial component of the strain tensor is used. The numerical implementation of the method is highly efficient because the reconstruction is based on an analytic solution of the forward elastic problem. The model-based approach is illustrated using two potential clinical applications: differentiation of liver hemangioma and staging of deep venous thrombosis. Overall, these studies demonstrate that model-based reconstructive elasticity imaging can be used in applications where the geometry of the object and the surrounding tissue is somewhat known and certain assumptions about the pathology can be made.

  20. Fingerprint Image Enhancement Based on Second Directional Derivative of the Digital Image

    Directory of Open Access Journals (Sweden)

    Onnia Vesa

    2002-01-01

    This paper presents a novel approach to fingerprint image enhancement that relies on detecting the fingerprint ridges as image regions where the second directional derivative of the digital image is positive. A facet model is used to approximate the derivatives at each image pixel based on the intensity values of pixels located in a certain neighborhood. We note that the size of this neighborhood has a critical role in achieving accurate enhancement results. Using neighborhoods of various sizes, the proposed algorithm determines several candidate binary representations of the input fingerprint pattern. Subsequently, an output binary ridge-map image is created by selecting image zones from the available binary image candidates according to a MAP selection rule. Two public-domain collections of fingerprint images are used to objectively assess the performance of the proposed fingerprint image enhancement approach.
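A drastically simplified version of the ridge test can be sketched by checking where a second difference is positive along one fixed direction (the paper instead estimates the second directional derivative across the ridge from a local polynomial facet fit over a neighborhood):

```python
import numpy as np

def ridge_mask(img):
    """Mark pixels where the horizontal second difference is positive,
    i.e. local intensity valleys such as dark fingerprint ridges."""
    d2 = img[:, :-2] - 2 * img[:, 1:-1] + img[:, 2:]   # f(x-1) - 2f(x) + f(x+1)
    mask = np.zeros(img.shape, dtype=bool)
    mask[:, 1:-1] = d2 > 0
    return mask
```

On a pattern of dark vertical stripes, exactly the stripe (valley) columns test positive, which is the binary ridge map the enhancement builds on.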

  1. [AN EDUCATIONAL EXPERIENCE BASED ON CLICKERS].

    Science.gov (United States)

    García Rodríguez, Jose Juan; Lara Domínguez, Pilar A; Torres Pérez, Luis Francisco

    2015-05-01

    Active learning or self-learning increases students' participation and commitment to their studies, conditions that are necessary to improve academic performance. An intervention was designed based on the experience with clickers at other universities, but without the actual technology. The work was performed at the School of Nursing affiliated with the University of Malaga (UMA), on students enrolled in the second year of the Nursing Degree, in the course Adult Nursing I. Three sessions of multiple-choice questions were scheduled on the topic of "distance learning", for which no lectures were given, and the answers were collected on paper templates. We wanted to determine the degree of relationship between attendance at the sessions and the results obtained by students in the final examination of the subject, as well as in the questions dedicated to assessing the "distance learning" topic. The results show a statistically significant difference in students' correct answers according to the number of sessions attended; the differences are highest between students who did not attend any session and those who attended the three planned sessions.

  2. COMPARISON AND EVALUATION OF CLUSTER BASED IMAGE SEGMENTATION TECHNIQUES

    OpenAIRE

    Hetangi D. Mehta; Daxa Vekariya; Pratixa Badelia

    2017-01-01

    Image segmentation is the partitioning of an image into different groups. Numerous algorithms using different approaches have been proposed for image segmentation. A major challenge in segmentation evaluation comes from the fundamental conflict between generality and objectivity. A review is done of the different types of clustering methods used for image segmentation, and a methodology is proposed to classify and quantify different clustering algorithms based on their consistency in different...

  3. Bone surface enhancement in ultrasound images using a new Doppler-based acquisition/processing method

    Science.gov (United States)

    Yang, Xu; Tang, Songyuan; Tasciotti, Ennio; Righetti, Raffaella

    2018-01-01

    Ultrasound (US) imaging has long been considered a potential aid in orthopedic surgeries. US technologies are safe, portable and do not use ionizing radiation. This would make them a desirable tool for real-time assessment of fractures and for monitoring fracture healing. However, the image quality of US imaging methods in bone applications is limited by speckle, attenuation, shadowing, multiple reflections and other imaging artifacts. While bone surfaces typically appear in US images as somewhat ‘brighter’ than soft tissue, they are often not easily distinguishable from the surrounding tissue. Therefore, US imaging methods aimed at segmenting bone surfaces need enhancement in image contrast prior to segmentation to improve the quality of the detected bone surface. In this paper, we present a novel acquisition/processing technique for bone surface enhancement in US images. Inspired by elastography and Doppler imaging methods, this technique takes advantage of the difference between the mechanical and acoustic properties of bones and those of soft tissues to make the bone surface more easily distinguishable in US images. The objective of this technique is to facilitate US-based bone segmentation methods and improve the accuracy of their outcomes. The newly proposed technique is tested in both in vitro and in vivo experiments. The results of these preliminary experiments suggest that the proposed technique has the potential to significantly enhance the detectability of bone surfaces in noisy ultrasound images.

  4. Bone surface enhancement in ultrasound images using a new Doppler-based acquisition/processing method.

    Science.gov (United States)

    Yang, Xu; Tang, Songyuan; Tasciotti, Ennio; Righetti, Raffaella

    2018-01-17

    Ultrasound (US) imaging has long been considered a potential aid in orthopedic surgeries. US technologies are safe, portable and do not use ionizing radiation. This would make them a desirable tool for real-time assessment of fractures and for monitoring fracture healing. However, the image quality of US imaging methods in bone applications is limited by speckle, attenuation, shadowing, multiple reflections and other imaging artifacts. While bone surfaces typically appear in US images as somewhat 'brighter' than soft tissue, they are often not easily distinguishable from the surrounding tissue. Therefore, US imaging methods aimed at segmenting bone surfaces need enhancement in image contrast prior to segmentation to improve the quality of the detected bone surface. In this paper, we present a novel acquisition/processing technique for bone surface enhancement in US images. Inspired by elastography and Doppler imaging methods, this technique takes advantage of the difference between the mechanical and acoustic properties of bones and those of soft tissues to make the bone surface more easily distinguishable in US images. The objective of this technique is to facilitate US-based bone segmentation methods and improve the accuracy of their outcomes. The newly proposed technique is tested in both in vitro and in vivo experiments. The results of these preliminary experiments suggest that the proposed technique has the potential to significantly enhance the detectability of bone surfaces in noisy ultrasound images.

  5. Blind compressed sensing image reconstruction based on alternating direction method

    Science.gov (United States)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    To address the problem of reconstructing the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Building on existing blind compressed sensing theory, the optimal solution is found by alternating minimization. The proposed method addresses the difficulty of specifying a sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. The method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with strong adaptability. The experimental results show that the proposed blind compressed sensing reconstruction algorithm can recover high-quality image signals under under-sampling conditions.
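
The alternating minimization described above can be sketched in a few lines. This is a toy one-dimensional illustration, not the paper's implementation: the function names, the ISTA-style sparse-code update, and the fixed dictionary step size are all assumptions made for the sketch.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm (elementwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def blind_cs_reconstruct(y, Phi, n_atoms=8, n_iter=100, lam=0.05, seed=0):
    """Toy alternating minimization for blind compressed sensing: jointly
    estimate a dictionary D and sparse codes x so that Phi @ (D @ x)
    approximates the measurements y."""
    rng = np.random.default_rng(seed)
    m, n = Phi.shape
    D = rng.standard_normal((n, n_atoms))
    D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
    x = np.zeros(n_atoms)

    def residual(D_, x_):
        return np.linalg.norm(y - Phi @ D_ @ x_)

    for _ in range(n_iter):
        # Sparse-code step: one ISTA update with a safe (1/Lipschitz) step.
        A = Phi @ D
        L = np.linalg.norm(A, 2) ** 2 + 1e-12
        x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)
        # Dictionary step: small gradient step, kept only if it lowers the residual.
        G = Phi.T @ np.outer(y - Phi @ D @ x, x)
        D_new = D + 0.01 * G
        D_new /= np.maximum(np.linalg.norm(D_new, axis=0), 1e-12)
        if residual(D_new, x) <= residual(D, x):
            D = D_new
    return D @ x
```

Because the sparse-code step uses a step size of one over the Lipschitz constant and the dictionary update is accepted only when it helps, the data residual never increases across iterations.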

  6. New calibration technique for KCD-based megavoltage imaging

    Science.gov (United States)

    Samant, Sanjiv S.; Zheng, Wei; DiBianca, Frank A.; Zeman, Herbert D.; Laughter, Joseph S.

    1999-05-01

    In megavoltage imaging, current commercial electronic portal imaging devices (EPIDs), despite having the advantage of immediate digital imaging over film, suffer from poor image contrast and spatial resolution. The feasibility of using a kinestatic charge detector (KCD) as an EPID to provide superior image contrast and spatial resolution for portal imaging has already been demonstrated in a previous paper. The KCD system had the additional advantage of requiring an extremely low dose per acquired image, allowing superior images to be reconstructed from a single linac pulse per image pixel. The KCD-based images used a dose two orders of magnitude less than that for EPIDs and film. Compared with current commercial EPIDs and film, the prototype KCD system exhibited promising image quality, despite being handicapped by the use of a relatively simple image calibration technique and by the performance limits of medical linacs on the maximum linac pulse frequency and energy flux per pulse delivered. This image calibration technique fixed relative image pixel values based on a linear interpolation of extrema provided by an air-water calibration, and accounted only for channel-to-channel variations. Its counterpart for area detectors is the standard flat-fielding method. A comprehensive calibration protocol has been developed. The new technique additionally corrects for geometric distortions due to variations in the scan velocity, and for timing artifacts caused by mis-synchronization between the linear accelerator and the data acquisition system (DAS). The role of variations in energy flux (2-3%) is demonstrated to be insignificant for the images considered. The methodology is presented, and the results are discussed for simulated images. It also allows for significant improvements in the signal-to-noise ratio (SNR) by increasing the dose using multiple images without having to increase the linac pulse frequency or energy flux per pulse.

  7. Image retrieval by information fusion based on scalable vocabulary tree and robust Hausdorff distance

    Science.gov (United States)

    Che, Chang; Yu, Xiaoyang; Sun, Xiaoming; Yu, Boyang

    2017-12-01

    In recent years, the Scalable Vocabulary Tree (SVT) has been shown to be effective in image retrieval. However, for general images where the foreground is the object to be recognized while the background is cluttered, the performance of the current SVT framework is restricted. In this paper, a new image retrieval framework that incorporates a robust distance metric and information fusion is proposed, which improves the retrieval performance relative to the baseline SVT approach. First, the visual words that represent the background are diminished by using a robust Hausdorff distance between different images. Second, image matching results based on three image signature representations are fused, which enhances the retrieval precision. We conducted intensive experiments on small-scale to large-scale image datasets: Corel-9, Corel-48, and PKU-198, where the proposed Hausdorff metric and information fusion outperform the state-of-the-art methods by about 13%, 15%, and 15%, respectively.

  8. An improved contrast enhancement algorithm for infrared images based on adaptive double plateaus histogram equalization

    Science.gov (United States)

    Li, Shuo; Jin, Weiqi; Li, Li; Li, Yiyang

    2018-05-01

    Infrared thermal images can reflect the thermal-radiation distribution of a particular scene. However, the contrast of infrared images is usually low. Hence, it is generally necessary to enhance the contrast of infrared images in advance to facilitate subsequent recognition and analysis. Based on adaptive double-plateau histogram equalization, this paper presents an improved contrast enhancement algorithm for infrared thermal images. In the proposed algorithm, the normalized coefficient of variation of the histogram, which characterizes the level of contrast enhancement, is introduced as feedback information to adjust the upper and lower plateau thresholds. Experiments on actual infrared images show that, compared to three typical contrast-enhancement algorithms, the proposed algorithm has better scene adaptability and yields better contrast-enhancement results for infrared images with more dark areas or a higher dynamic range. Hence, it has high application value in contrast enhancement, dynamic range compression, and digital detail enhancement for infrared thermal images.
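
The core double-plateau step can be sketched as follows. The threshold values are passed in explicitly here, whereas the paper adapts them via feedback from the histogram's normalized coefficient of variation; that adaptation loop, and the function names, are assumptions of this sketch.

```python
import numpy as np

def double_plateau_equalize(img, t_up, t_low, n_bins=256):
    """Histogram equalization with bin counts clipped between a lower and an
    upper plateau: the upper threshold limits over-enhancement of dominant
    background grey levels, the lower one protects sparsely populated
    detail levels."""
    hist, _ = np.histogram(img, bins=n_bins, range=(0, n_bins))
    clipped = np.minimum(hist, t_up)                          # upper plateau
    clipped[hist > 0] = np.maximum(clipped[hist > 0], t_low)  # lower plateau
    cdf = np.cumsum(clipped).astype(np.float64)
    cdf /= cdf[-1]                                            # normalize to [0, 1]
    lut = np.round(cdf * (n_bins - 1)).astype(np.uint8)
    return lut[img]
```

With `t_up = inf` and `t_low = 0` this reduces to ordinary histogram equalization; shrinking the gap between the two thresholds pushes the mapping toward a uniform stretch of the occupied grey levels.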

  9. Transfer and conversion of images based on EIT in atom vapor.

    Science.gov (United States)

    Cao, Mingtao; Zhang, Liyun; Yu, Ya; Ye, Fengjuan; Wei, Dong; Guo, Wenge; Zhang, Shougang; Gao, Hong; Li, Fuli

    2014-05-01

    The transfer and conversion of images between different wavelengths or polarizations has significant applications in optical communication and quantum information processing. We demonstrate the transfer of images based on electromagnetically induced transparency (EIT) in a rubidium vapor cell. In the experiments, a 2D image generated by a spatial light modulator is used as the coupling field, and a plane wave serves as the signal field. We found that the image carried by the coupling field could be transferred to the signal field, and that the spatial patterns of the transferred image are much better defined than those of the initial image; they can even be smaller than the size determined by the diffraction limit of the optical system. We also studied the subdiffraction propagation of the transferred image. Our results may have applications in quantum interference lithography and coherent Raman spectroscopy.

  10. SDL: Saliency-Based Dictionary Learning Framework for Image Similarity.

    Science.gov (United States)

    Sarkar, Rituparna; Acton, Scott T

    2018-02-01

    In image classification, obtaining adequate data to learn a robust classifier has often proven to be difficult in several scenarios. Classification of histological tissue images for health care analysis is a notable application in this context due to the necessity of surgery, biopsy or autopsy. To adequately exploit limited training data in classification, we propose a saliency guided dictionary learning method and subsequently an image similarity technique for histo-pathological image classification. Salient object detection from images aids in the identification of discriminative image features. We leverage the saliency values for the local image regions to learn a dictionary and respective sparse codes for an image, such that the more salient features are reconstructed with smaller error. The dictionary learned from an image gives a compact representation of the image itself and is capable of representing images with similar content, with comparable sparse codes. We employ this idea to design a similarity measure between a pair of images, where local image features of one image, are encoded with the dictionary learned from the other and vice versa. To effectively utilize the learned dictionary, we take into account the contribution of each dictionary atom in the sparse codes to generate a global image representation for image comparison. The efficacy of the proposed method was evaluated using three tissue data sets that consist of mammalian kidney, lung and spleen tissue, breast cancer, and colon cancer tissue images. From the experiments, we observe that our methods outperform the state of the art with an increase of 14.2% in the average classification accuracy over all data sets.

  11. Chromaticity based smoke removal in endoscopic images

    Science.gov (United States)

    Tchaka, Kevin; Pawar, Vijay M.; Stoyanov, Danail

    2017-02-01

    In minimally invasive surgery, image quality is a critical prerequisite to ensure a surgeon's ability to perform a procedure. In endoscopic procedures, image quality can deteriorate for a number of reasons, such as fogging due to the temperature gradient after intra-corporeal insertion, lack of focus, and smoke generated when using electro-cautery to dissect tissues without bleeding. In this paper we investigate the use of vision processing techniques to remove surgical smoke and improve the clarity of the image. We model the image formation process by introducing a haze medium to account for the degradation of visibility. For simplicity and computational efficiency we use an adapted dark-channel prior method combined with histogram equalization to remove smoke artifacts, recover the radiance image and enhance the contrast and brightness of the final result. Our initial results on images from robotic-assisted procedures are promising and show that the proposed approach may be used to enhance image quality during surgery without additional suction devices. In addition, the processing pipeline may be used as an important part of a robust surgical vision pipeline that can continue working in the presence of smoke.
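
A minimal version of the dark-channel de-hazing step (without the paper's histogram-equalization stage) can be sketched as below. The patch size, the `omega`/`t0` constants and the atmospheric-light heuristic follow He et al.'s original dark-channel formulation and are assumptions here, not values taken from the paper.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over the color channels and a local square patch."""
    h, w, _ = img.shape
    min_rgb = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def desmoke(img, omega=0.95, t0=0.1, patch=15):
    """Dark-channel de-hazing: estimate the atmospheric light A from the
    brightest dark-channel pixels and a transmission map t, then invert
    the haze model I = J*t + A*(1-t) to recover the radiance J."""
    dc = dark_channel(img, patch)
    n_top = max(1, dc.size // 1000)                  # brightest 0.1% of pixels
    idx = np.unravel_index(np.argsort(dc, axis=None)[-n_top:], dc.shape)
    A = img[idx].mean(axis=0)
    t = 1.0 - omega * dark_channel(img / np.maximum(A, 1e-6), patch)
    t = np.maximum(t, t0)[..., None]                 # keep a little haze
    return np.clip((img - A) / t + A, 0.0, 1.0)
```

`img` is assumed to be a float RGB array in [0, 1]; the `t0` floor prevents division blow-up in densely smoked regions.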

  12. Machine learning based global particle identification algorithms at the LHCb experiment

    CERN Multimedia

    Derkach, Denis; Likhomanenko, Tatiana; Rogozhnikov, Aleksei; Ratnikov, Fedor

    2017-01-01

    One of the most important aspects of data processing at LHC experiments is the particle identification (PID) algorithm. In LHCb, several different sub-detector systems provide PID information: the Ring Imaging CHerenkov (RICH) detector, the hadronic and electromagnetic calorimeters, and the muon chambers. To improve charged particle identification, several neural networks, including a deep architecture, and gradient boosting have been applied to data. These new approaches provide higher identification efficiencies than existing implementations for all charged particle types. It is also necessary to achieve a flat dependency between efficiencies and spectator variables such as particle momentum, in order to reduce systematic uncertainties during later stages of data analysis. For this purpose, "flat" algorithms that guarantee the flatness property for efficiencies have also been developed. This talk presents this new approach based on machine learning and its performance.

  13. Mechanical Damage Detection of Indonesia Local Citrus Based on Fluorescence Imaging

    Science.gov (United States)

    Siregar, T. H.; Ahmad, U.; Sutrisno; Maddu, A.

    2018-05-01

    Citrus fruit that has experienced physical damage to the peel produces essential oils that contain polymethoxylated flavones. Polymethoxylated flavones are fluorescent substances and can therefore be detected by fluorescence imaging. This study aims to characterize the fluorescence spectra and to determine the damaged regions of citrus peel based on fluorescence images. Pulung citrus from Batu district, East Java, a famous citrus production area in Indonesia, was used in the experiment. It was observed that the image processing could detect the mechanically damaged regions. Fluorescence imaging can be used to classify the citrus into two categories, sound and defective fruit.

  14. A Variational Level Set Approach Based on Local Entropy for Image Segmentation and Bias Field Correction.

    Science.gov (United States)

    Tang, Jian; Jiang, Xiaoliang

    2017-01-01

    Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. First, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the grey-level distribution of the local image. The local means in this objective function include a multiplicative factor that models the bias field in the transformed domain, so the prior on the bias field is fully exploited and our model can estimate the bias field more accurately. Finally, by minimizing this energy function with a level set regularization term, image segmentation and bias field estimation are achieved simultaneously. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.

  15. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    Science.gov (United States)

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; image degradation is thereby simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted into calculating the weighted coefficients of the two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome its drawback of long computation times, graphics-processing-unit multithreading or an increased spacing of the control points is adopted to speed up the implementation of the GRBF method. The experiments show that image deconvolution based on the continuous GRBF model can be implemented efficiently by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.

  16. Deep Learning- and Transfer Learning-Based Super Resolution Reconstruction from Single Medical Image

    Directory of Open Access Journals (Sweden)

    YiNan Zhang

    2017-01-01

    Full Text Available Medical images play an important role in medical diagnosis and research. In this paper, a transfer learning- and deep learning-based super resolution reconstruction method is introduced. The proposed method contains one bicubic interpolation template layer and two convolutional layers. The bicubic interpolation template layer is fixed in advance by mathematical deduction, while the two convolutional layers learn from training samples. To reduce the number of medical images needed for training, a SIFT-feature-based transfer learning method is proposed: not only medical images but also selected images of other types can be added to the training dataset. In empirical experiments, results on eight distinct medical images show improved image quality and reduced runtime. Further, the proposed method produces slightly sharper edges than other deep learning approaches in less time, and the hybrid architecture of a prefixed template layer and unfixed hidden layers shows potential in other applications.

  17. Cultural based preconceptions in aesthetic experience of architecture

    Directory of Open Access Journals (Sweden)

    Stevanović Vladimir

    2011-01-01

    Full Text Available On a broader scale, the aim of this paper is to examine theoretically the effects a cultural context has on the aesthetic experience of images existing in perceived reality. Minimalism in architecture, as direct subject of research, is a field of particularities in which we observe functioning of this correlation. Through the experiment with the similarity phenomenon, the paper follows specific manifestations of general formal principles and variability of meaning of minimalism in architecture in limited areas of cultural backgrounds of Serbia and Japan. The goal of the comparative analysis of the examples presented is to indicate the conditions that may lead to a possibly different aesthetic experience in two different cultural contexts. Attribution of different meanings to similar formal visual language of architecture raises questions concerning the system of values, which produces these meanings in their cultural and historical perspectives. The establishment of values can also be affected by preconceptions resulting from association of perceived similarities. Are the preconceptions in aesthetic reception of architecture conditionally affected by pragmatic needs, symbolic archetypes, cultural metaphors based on tradition or ideologically constructed dogmas? Confronting philosophical postulates of the Western and Eastern traditions with the transculturality theory of Wolfgang Welsch, the answers may become more available.

  18. Dynamic imaging through turbid media based on digital holography.

    Science.gov (United States)

    Li, Shiping; Zhong, Jingang

    2014-03-01

    Imaging through turbid media using visible or IR light instead of harmful X-rays is still a challenging problem, especially for dynamic imaging. A method of dynamic imaging through turbid media using digital holography is presented. In order to match the coherence length between the dynamic object wave and the reference wave, a continuous-wave (CW) laser is used. To solve the problem of difficult focusing when imaging through turbid media, an autofocus technique is applied. To further enhance the image contrast, a spatial filtering technique is used. A description of digital holography and experiments imaging objects hidden in turbid media are presented. The experimental results show that dynamic images of the objects can be achieved by the use of digital holography.

  19. Adaptive digital image processing in real time: First clinical experiences

    International Nuclear Information System (INIS)

    Andre, M.P.; Baily, N.A.; Hier, R.G.; Edwards, D.K.; Tainer, L.B.; Sartoris, D.J.

    1986-01-01

    The promise of computer image processing has generally not been realized in radiology, partly because the methods advanced to date have been expensive, time-consuming, or inconvenient for clinical use. The authors describe a low-cost system which performs complex image processing operations on-line at video rates. The method uses a combination of unsharp mask subtraction (for low-frequency suppression) and statistical differencing (which adjusts the gain at each point of the image on the basis of its variation from a local mean). The operator interactively adjusts aperture size, contrast gain, background subtraction, and spatial noise reduction. The system is being evaluated for on-line fluoroscopic enhancement, for which phantom measurements and clinical results, including lithotripsy, are presented. When used with a video camera, postprocessing of radiographs was advantageous in a variety of studies, including neonatal chest studies. Real-time speed allows use of the system in the reading room as a "variable view box."
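
The combination of unsharp masking and statistical differencing described above can be sketched as follows. The window size, gains and target standard deviation are illustrative assumptions; the clinical system performs the same operations in real-time video hardware with operator-adjusted parameters.

```python
import numpy as np

def local_mean(img, k):
    # Box filter; a plain loop keeps the sketch readable (a cumulative-sum
    # implementation would be used in practice).
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty(img.shape, dtype=np.float64)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def enhance_frame(img, k=7, gain=2.0, alpha=0.5, target_sigma=30.0, eps=1e-6):
    """Unsharp masking (boost deviations from a local mean, suppressing low
    frequencies) blended with statistical differencing (rescale each
    deviation by the inverse of the local standard deviation)."""
    img = img.astype(np.float64)
    mu = local_mean(img, k)
    sigma = np.sqrt(np.maximum(local_mean(img ** 2, k) - mu ** 2, 0.0))
    unsharp = img + gain * (img - mu)                     # low-frequency suppression
    statdiff = mu + (img - mu) * target_sigma / (sigma + eps)
    return np.clip(alpha * unsharp + (1 - alpha) * statdiff, 0, 255)
```

On a uniform region both terms leave the pixel value unchanged, which is why the enhancement does not amplify flat backgrounds.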

  20. Global Seismic Imaging Based on Adjoint Tomography

    Science.gov (United States)

    Bozdag, E.; Lefebvre, M.; Lei, W.; Peter, D. B.; Smith, J. A.; Zhu, H.; Komatitsch, D.; Tromp, J.

    2013-12-01

    Our aim is to perform adjoint tomography on a global scale to image the entire planet. We have started elastic inversions with a global data set of 253 CMT earthquakes with moment magnitudes in the range 5.8 ≤ Mw ≤ 7, using GSN stations as well as some local networks, such as USArray and European stations. Using an iterative pre-conditioned conjugate gradient scheme, we initially aim to obtain a global crustal and mantle model with transverse isotropy confined to the upper mantle. Global adjoint tomography has so far remained a challenge mainly due to computational limitations. Recent improvements in our 3D solvers (e.g., a GPU version) and access to high-performance computing centers (e.g., ORNL's Cray XK7 "Titan" system) now enable us to perform iterations with higher-resolution (T > 9 s) and longer-duration (200 min) simulations to accommodate high-frequency body waves and major-arc surface waves, respectively, which help improve data coverage. The remaining challenge is the heavy I/O traffic caused by the numerous files generated during the forward/adjoint simulations and the pre- and post-processing stages of our workflow. We improve the global adjoint tomography workflow by adopting the ADIOS file format for our seismic data as well as models, kernels, etc., to improve efficiency on high-performance clusters. Our ultimate aim is to use data from all available networks and earthquakes within the magnitude range of our interest (5.5 ≤ Mw ≤ 7), which requires a solid framework for managing big data in our global adjoint tomography workflow. We discuss the current status and future of global adjoint tomography based on our initial results, as well as practical issues such as handling big data in inversions and on high-performance computing systems.

  1. Imaging spectrum in sclerotic myelomas: an experience of three cases

    International Nuclear Information System (INIS)

    Grover, S.B.; Dhar, A.

    2000-01-01

    The classic radiographic presentation of multiple myeloma is lytic skeletal lesions. Primary sclerotic manifestations are rare, occurring in only 3% of cases. The imaging spectrum in three cases of multiple myeloma with primary osteosclerosis is described. The first patient had spiculated sclerosis of the orbit, which is an uncommon site for myeloma. The second patient, with POEMS syndrome, had multiple, scattered skeletal lesions with sclerotic margins. The third patient presented with a chest wall mass and had an expansile, thickly spiculated sclerosis of the rib. The wide imaging spectrum possible in sclerotic myelomas and the relevant differential diagnosis are emphasized. (orig.)

  2. Region-based multifocus image fusion for the precise acquisition of Pap smear images.

    Science.gov (United States)

    Tello-Mijares, Santiago; Bescós, Jesús

    2018-05-01

    A multifocus image fusion method to obtain a single focused image from a sequence of microscopic high-magnification Papanicolau source (Pap smear) images is presented. These images, captured each in a different position of the microscope lens, frequently show partially focused cells or parts of cells, which makes them impractical for the direct application of image analysis techniques. The proposed method obtains a focused image with high preservation of original pixel information while achieving negligible visibility of fusion artifacts. The method starts by identifying the best-focused image of the sequence; then, it performs a mean-shift segmentation over this image; the focus level of the segmented regions is evaluated in all the images of the sequence, and best-focused regions are merged in a single combined image; finally, this image is processed with an adaptive artifact removal process. The combination of a region-oriented approach, instead of block-based approaches, and a minimum modification of the value of focused pixels in the original images achieve a highly contrasted image with no visible artifacts, which makes this method especially convenient for the medical imaging domain. The proposed method is compared with several state-of-the-art alternatives over a representative dataset. The experimental results show that our proposal obtains the best and more stable quality indicators.
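
The region-wise selection at the heart of the pipeline can be sketched as below, assuming the segmentation (here just a precomputed label map) is given. A Laplacian-energy focus measure stands in for whatever focus criterion the authors actually use, and the adaptive artifact-removal step is omitted.

```python
import numpy as np

def laplacian_energy(img):
    # Sum of squared responses to a discrete Laplacian: a standard focus measure.
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap ** 2

def fuse_regions(stack, labels):
    """For each labelled region, copy the pixels from whichever image in the
    focus stack has the highest Laplacian focus energy inside that region."""
    stack = np.asarray(stack, dtype=np.float64)
    energy = np.array([laplacian_energy(im) for im in stack])
    fused = np.zeros_like(stack[0])
    for r in np.unique(labels):
        mask = labels == r
        best = energy[:, mask].sum(axis=1).argmax()   # sharpest image for this region
        fused[mask] = stack[best][mask]
    return fused
```

Because whole regions are copied verbatim from a single source image, focused pixels keep their original values, which is the property the abstract emphasizes over block-based fusion.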

  3. Image-based automatic recognition of larvae

    Science.gov (United States)

    Sang, Ru; Yu, Guiying; Fan, Weijun; Guo, Tiantai

    2010-08-01

    To date, research on quarantine pest recognition has focused mainly on imagoes (adult insects). However, pests in their larval stage are latent, and larvae spread abroad easily with the circulation of agricultural and forest products. In this paper, larvae are taken as new research objects and recognized by means of machine vision, image processing and pattern recognition. More visual information is preserved and the recognition rate is improved when color image segmentation is applied to images of larvae. Owing to its affine, perspective and brightness invariance, the scale-invariant feature transform (SIFT) is adopted for feature extraction. A neural network algorithm is utilized for pattern recognition, and automatic identification of larvae images is successfully achieved with satisfactory results.

  4. Model-based satellite image fusion

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Sveinsson, J. R.; Nielsen, Allan Aasbjerg

    2008-01-01

    A method is proposed for pixel-level satellite image fusion derived directly from a model of the imaging sensor. By design, the proposed method is spectrally consistent. It is argued that the proposed method needs regularization, as is the case for any method for this problem. A framework for pixel neighborhood regularization is presented. This framework enables the formulation of the regularization in a way that corresponds well with our prior assumptions about the image data. The proposed method is validated and compared with other approaches on several data sets. Lastly, the intensity-hue-saturation method is revisited in order to gain additional insight into what implications spectral consistency has for an image fusion method.

  5. Medical Imaging Lesion Detection Based on Unified Gravitational Fuzzy Clustering

    Directory of Open Access Journals (Sweden)

    Jean Marie Vianney Kinani

    2017-01-01

    Full Text Available We develop a swift, robust, and practical tool for detecting brain lesions with minimal user intervention to assist clinicians and researchers in the diagnosis process, radiosurgery planning, and assessment of the patient’s response to the therapy. We propose a unified gravitational fuzzy clustering-based segmentation algorithm, which integrates the Newtonian concept of gravity into fuzzy clustering. We first perform fuzzy rule-based image enhancement on our database which is comprised of T1/T2 weighted magnetic resonance (MR and fluid-attenuated inversion recovery (FLAIR images to facilitate a smoother segmentation. The scalar output obtained is fed into a gravitational fuzzy clustering algorithm, which separates healthy structures from the unhealthy. Finally, the lesion contour is automatically outlined through the initialization-free level set evolution method. An advantage of this lesion detection algorithm is its precision and its simultaneous use of features computed from the intensity properties of the MR scan in a cascading pattern, which makes the computation fast, robust, and self-contained. Furthermore, we validate our algorithm with large-scale experiments using clinical and synthetic brain lesion datasets. As a result, an 84%–93% overlap performance is obtained, with an emphasis on robustness with respect to different and heterogeneous types of lesion and a swift computation time.

  6. Robust histogram-based image retrieval

    Czech Academy of Sciences Publication Activity Database

    Höschl, Cyril; Flusser, Jan

    2016-01-01

    Roč. 69, č. 1 (2016), s. 72-81 ISSN 0167-8655 R&D Projects: GA ČR GA15-16928S Institutional support: RVO:67985556 Keywords : Image retrieval * Noisy image * Histogram * Convolution * Moments * Invariants Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.995, year: 2016 http://library.utia.cas.cz/separaty/2015/ZOI/hoschl-0452147.pdf

  7. Fast image matching algorithm based on projection characteristics

    Science.gov (United States)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

    Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, this algorithm converts the two-dimensional information of the image into one dimension, and then matches and identifies through one-dimensional correlation; moreover, because the projections are normalized, matching remains correct when the image brightness or signal amplitude increases proportionally. Experimental results show that the proposed projection-based image registration method greatly improves matching speed while preserving matching accuracy.
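
A sketch of the idea: collapse each candidate window to normalized row and column projections and score with 1-D correlation, so each comparison costs O(h+w) instead of O(h·w). The function names and the exhaustive scan are assumptions of this illustration; a real implementation would compute window projections incrementally rather than from scratch per window.

```python
import numpy as np

def projections(img):
    """Row and column projections: collapse the 2-D window into two 1-D
    profiles, normalized to unit energy so proportional brightness or
    amplitude changes do not affect the match."""
    rows = img.sum(axis=1).astype(np.float64)
    cols = img.sum(axis=0).astype(np.float64)
    rows /= np.linalg.norm(rows) + 1e-12
    cols /= np.linalg.norm(cols) + 1e-12
    return rows, cols

def match_template(image, template):
    """Slide the template over the image, scoring candidate positions by the
    correlation of their 1-D projections instead of full 2-D correlation."""
    th, tw = template.shape
    ih, iw = image.shape
    t_rows, t_cols = projections(template)
    best, best_pos = -np.inf, None
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            r, c = projections(image[y:y + th, x:x + tw])
            score = r @ t_rows + c @ t_cols   # each term is at most 1
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best
```

By Cauchy-Schwarz the score is bounded by 2 and reaches that bound only when both normalized profiles match, which is what makes the correlation a usable similarity measure.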

  8. A novel secret image sharing scheme based on chaotic system

    Science.gov (United States)

    Li, Li; Abd El-Latif, Ahmed A.; Wang, Chuanjun; Li, Qiong; Niu, Xiamu

    2012-04-01

    In this paper, we propose a new secret image sharing scheme based on a chaotic system and Shamir's method. The new scheme protects the shadow images with confidentiality and loss-tolerance simultaneously. In the new scheme, we generate the key sequence with a chaotic system and then encrypt the original image during the sharing phase. Experimental results and analysis of the proposed scheme demonstrate better performance than comparable schemes and confirm strong resistance to brute-force attack.
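
The key-sequence/encryption stage can be sketched with a logistic map. The map, its parameters and the XOR combination are generic choices for illustration, not necessarily the paper's chaotic system, and the subsequent Shamir (k, n) sharing of the encrypted image into shadow images is omitted.

```python
import numpy as np

def logistic_keystream(n, x0=0.3579, r=3.99):
    """Iterate the logistic map x <- r*x*(1-x) and quantize each state to a
    byte; the pair (x0, r) acts as the secret key."""
    x = x0
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def chaotic_xor(img, x0=0.3579, r=3.99):
    # XOR a uint8 image with the chaotic keystream; applying the same
    # operation twice with the same key decrypts.
    ks = logistic_keystream(img.size, x0, r)
    return (img.reshape(-1) ^ ks).reshape(img.shape)
```

Sensitivity to the initial condition `x0` is what gives the keystream its brute-force resistance: a tiny change in the key produces a completely different byte sequence after a few iterations.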

  9. Image Making in Two Dimensional Art; Experiences with Straw and ...

    African Journals Online (AJOL)

    Image making in art is professionally referred to as bust in Sculpture and Portraiture in Painting. ... have been used to achieve these forms of art; like clay, cement, marble, stone, different metals and fibre glass in the three dimensional form; we also have Pencil, Charcoal, Pastel and Acrylic oil-paint in two dimensional form.

  10. Image Making in Two Dimensional Art; Experiences with Straw and ...

    African Journals Online (AJOL)

    Image making in art is professionally referred to as bust in Sculpture and Portraiture in Painting. It is an art form executed in three dimensional (3D) and two dimensional (2D) formats respectively. Uncountable materials have been used to achieve these forms of art; like clay, cement, marble, stone, different metals and, fibre ...

  11. Clinical experience with 75Se selenomethylcholesterol adrenal imaging

    International Nuclear Information System (INIS)

    Shapiro, B.; Britton, K.E.; Hawkins, L.A.; Edwards, C.R.W.

    1981-01-01

    The results of quantitative adrenal imaging using 75Se selenomethylcholesterol in sixty-two subjects are analysed. The adrenal area was localized by a renal scan, lateral views of which enabled adrenal depth to be estimated. The first nineteen cases were scanned with a rectilinear scanner and the remaining forty-three cases imaged with a gamma camera. Quantitation of adrenal uptake was performed on computer-stored static images obtained 7 and 14 days post-injection of 75Se selenomethylcholesterol (3 and 6 days in the first ten cases studied). Normal uptake was found to be 0.07-0.30% of the administered dose. Overall predictive accuracy of the type of adrenal disorder of thirty-two patients with Cushing's syndrome was 90.6%. Overall predictive accuracy of the cause of Conn's syndrome in twenty-two cases was 86.4%. The mean uptake in the normal adrenal in cases of unilateral adenoma was 0.19% (range 0.07-0.30%). Causes of unsatisfactory adrenal imaging are examined. The procedure is recommended as the localizing and lateralizing technique of choice in Cushing's syndrome except where due to adrenal carcinoma, and as an important non-invasive technique in Conn's syndrome for the lateralization of adenoma. (author)

  12. Thermal imaging experiments on ANACONDA ion beam generator

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Weihua; Yatsui, Kiyoshi [Nagaoka University of Technology (Japan). Lab. of Beam Technology; Olson, C J; Davis, H A [Los Alamos National Laboratory, Los Alamos, NM (United States)

    1997-12-31

    The thermal imaging technique was used in two experimental measurements. First, the ion intensity distribution on the anode surface was observed from different angles by using a multi-pinhole camera. Second, the plume from a target intercepting the beam was visualized by observing the distribution of temperature increase on a thin plate hit by the plume. (author). 6 figs., 4 refs.

  13. Cardiac biplane strain imaging: initial in vivo experience.

    NARCIS (Netherlands)

    Lopata, R.G.P.; Nillesen, M.M.; Verrijp, C.N.; Singh, S.K.; Lammens, M.M.Y.; Laak, J.A.W.M. van der; Wetten, H.B. van; Thijssen, J.M.; Kapusta, L.; Korte, C.L. de

    2010-01-01

    In this study, we first propose a biplane strain imaging method using a commercial ultrasound system, yielding estimation of the strain in three orthogonal directions. Second, an animal model of a child's heart was introduced that is suitable to simulate congenital heart disease and was used to

  14. Voxel-based clustered imaging by multiparameter diffusion tensor images for glioma grading.

    Science.gov (United States)

    Inano, Rika; Oishi, Naoya; Kunieda, Takeharu; Arakawa, Yoshiki; Yamao, Yukihiro; Shibata, Sumiya; Kikuchi, Takayuki; Fukuyama, Hidenao; Miyamoto, Susumu

    2014-01-01

    Gliomas are the most common intra-axial primary brain tumour; therefore, predicting glioma grade would influence therapeutic strategies. Although several methods based on single or multiple parameters from diagnostic images exist, a definitive method for pre-operatively determining glioma grade remains unknown. We aimed to develop an unsupervised method using multiple parameters from pre-operative diffusion tensor images for obtaining a clustered image that could enable visual grading of gliomas. Fourteen patients with low-grade gliomas and 19 with high-grade gliomas underwent diffusion tensor imaging and three-dimensional T1-weighted magnetic resonance imaging before tumour resection. Seven features including diffusion-weighted imaging, fractional anisotropy, first eigenvalue, second eigenvalue, third eigenvalue, mean diffusivity and raw T2 signal with no diffusion weighting, were extracted as multiple parameters from diffusion tensor imaging. We developed a two-level clustering approach for a self-organizing map followed by the K-means algorithm to enable unsupervised clustering of a large number of input vectors with the seven features for the whole brain. The vectors were grouped by the self-organizing map as protoclusters, which were classified into the smaller number of clusters by K-means to make a voxel-based diffusion tensor-based clustered image. Furthermore, we also determined if the diffusion tensor-based clustered image was really helpful for predicting pre-operative glioma grade in a supervised manner. The ratio of each class in the diffusion tensor-based clustered images was calculated from the regions of interest manually traced on the diffusion tensor imaging space, and the common logarithmic ratio scales were calculated. We then applied support vector machine as a classifier for distinguishing between low- and high-grade gliomas. Consequently, the sensitivity, specificity, accuracy and area under the curve of receiver operating characteristic

  15. An Improved FCM Medical Image Segmentation Algorithm Based on MMTD

    Directory of Open Access Journals (Sweden)

    Ningning Zhou

    2014-01-01

    Image segmentation plays an important role in medical image processing. Fuzzy c-means (FCM) is one of the popular clustering algorithms for medical image segmentation, but FCM is highly vulnerable to noise because it does not consider spatial information during segmentation. This paper introduces a medium mathematics system, which is employed to process fuzzy information for image segmentation. It establishes a medium similarity measure based on the measure of medium truth degree (MMTD) and uses the correlation between a pixel and its neighbors to define the medium membership function. An improved FCM medical image segmentation algorithm based on MMTD, which takes spatial features into account, is proposed in this paper. The experimental results show that the proposed algorithm is more robust to noise than the standard FCM, with more certainty and less fuzziness, which makes it practicable and effective in medical image segmentation.
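
    The standard FCM iteration that the MMTD variant above builds on can be sketched in a few lines of NumPy; this is an illustrative baseline on a toy 1-D "image" (the `fcm` helper and its initialization are hypothetical), not the paper's MMTD algorithm:

    ```python
    import numpy as np

    def fcm(data, centers, m=2.0, n_iter=50):
        """Plain fuzzy c-means on 1-D intensities: alternate the membership
        and cluster-center updates for a fixed number of iterations."""
        centers = np.asarray(centers, dtype=float)
        for _ in range(n_iter):
            d = np.abs(data[:, None] - centers[None, :]) + 1e-12  # avoid /0
            u = d ** (-2.0 / (m - 1.0))          # unnormalized memberships
            u /= u.sum(axis=1, keepdims=True)    # each pixel's memberships sum to 1
            um = u ** m
            centers = (um.T @ data) / um.sum(axis=0)  # membership-weighted means
        return centers, u

    # toy "image": two intensity populations
    img = np.concatenate([np.full(50, 0.1), np.full(50, 0.9)])
    centers, u = fcm(img, centers=[0.3, 0.6])
    labels = u.argmax(axis=1)   # crisp segmentation from fuzzy memberships
    ```

    The MMTD modification replaces the distance-based membership with a medium membership function that also looks at each pixel's neighbors, which is what buys the noise robustness.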

  16. 3D Reconstruction from UAV-Based Hyperspectral Images

    Science.gov (United States)

    Liu, L.; Xu, L.; Peng, J.

    2018-04-01

    Reconstructing a 3D profile from a set of UAV-based images yields both hyperspectral information and the 3D coordinates of any point on the profile. Our images are captured with the Cubert UHD185 (UHD) hyperspectral camera, a new type of high-speed onboard imaging spectrometer that acquires a hyperspectral image and a panchromatic image simultaneously. The panchromatic image has a higher spatial resolution than the hyperspectral image, but each hyperspectral image provides considerable information on the spatial spectral distribution of the object. There is therefore an opportunity to derive a high-quality 3D point cloud from the panchromatic images and rich spectral information from the hyperspectral images. The purpose of this paper is to introduce our processing chain, which derives a database providing hyperspectral information and the 3D position of each point. First, we adopt VisualSFM, a free and open-source tool based on the structure-from-motion (SFM) algorithm, to recover a 3D point cloud from the panchromatic images; we then obtain the spectral information of each point from the hyperspectral images with a self-developed MATLAB program. The product can be used to support further research and applications.
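
    The final step of such a chain, attaching a spectrum to each reconstructed point, can be illustrated with a nearest-pixel lookup. A minimal sketch assuming the points are already projected into panchromatic pixel coordinates; the `attach_spectra` helper and the fixed pan-to-hyperspectral resolution ratio are assumptions, not the paper's MATLAB program:

    ```python
    import numpy as np

    def attach_spectra(points_uv, hsi_cube, scale):
        """Sample a hyperspectral cube at point positions given in panchromatic
        pixel coordinates; `scale` is the pan-to-hyperspectral resolution ratio."""
        uv = (np.asarray(points_uv) / scale).astype(int)
        uv[:, 0] = np.clip(uv[:, 0], 0, hsi_cube.shape[1] - 1)  # column index
        uv[:, 1] = np.clip(uv[:, 1], 0, hsi_cube.shape[0] - 1)  # row index
        return hsi_cube[uv[:, 1], uv[:, 0], :]  # one spectrum per 3D point

    hsi = np.zeros((10, 10, 5))
    hsi[5, 5] = np.arange(5)            # a 5-band toy spectrum at pixel (5, 5)
    pts = np.array([[22.0, 20.0]])      # pan coordinates on a 4x finer grid
    spectra = attach_spectra(pts, hsi, scale=4.0)
    ```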

  17. Research on image complexity evaluation method based on color information

    Science.gov (United States)

    Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo

    2017-11-01

    In order to evaluate the complexity of a color image more effectively and to find the connection between image complexity and image information, this paper presents a method to compute image complexity based on color information. The theoretical analysis first divides complexity at the subjective level into three grades: low complexity, medium complexity and high complexity. It then carries out image feature extraction and finally establishes a function between the complexity value and a color characteristic model. The experimental results show that this evaluation method can objectively estimate the complexity of an image from its features, and that the results agree well with human visual perception of complexity, so the color-based image complexity has a certain reference value.
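
    One common way to quantify color-based complexity, consistent with the low/medium/high grading described above, is the entropy of the quantized color histogram. This is an illustrative stand-in, not the paper's specific color characteristic model:

    ```python
    import numpy as np

    def color_complexity(rgb, bins=32):
        """Shannon entropy of the quantized color histogram, normalized to [0, 1].
        More diverse colors -> higher entropy -> higher complexity."""
        q = (rgb.astype(np.int64) // (256 // bins)).reshape(-1, 3)
        codes = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]  # one code per pixel
        p = np.bincount(codes, minlength=bins ** 3) / codes.size
        p = p[p > 0]
        h = -(p * np.log2(p)).sum()
        return h / np.log2(bins ** 3)   # normalize by the maximum possible entropy

    flat = np.full((8, 8, 3), 128, dtype=np.uint8)   # one color: low complexity
    noisy = np.random.default_rng(0).integers(0, 256, (8, 8, 3), dtype=np.uint8)
    ```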

  18. Semantic Document Image Classification Based on Valuable Text Pattern

    Directory of Open Access Journals (Sweden)

    Hossein Pourghassem

    2011-01-01

    Knowledge extraction from detected document images is a complex problem in the field of information technology, and it becomes more intricate given that only a negligible percentage of the detected document images are valuable. In this paper, a segmentation-based classification algorithm is used to analyse the document image. In this algorithm, regions of the image are detected using a two-stage segmentation approach and then classified into document and non-document (pure region) regions in a hierarchical classification. A novel definition of value is proposed to classify document images into valuable or invaluable categories. The proposed algorithm is evaluated on a database consisting of document and non-document images collected from the Internet. Experimental results show the efficiency of the proposed algorithm for semantic document image classification: it achieves an accuracy of 98.8% on the valuable versus invaluable classification problem.

  19. An LG-graph-based early evaluation of segmented images

    International Nuclear Information System (INIS)

    Tsitsoulis, Athanasios; Bourbakis, Nikolaos

    2012-01-01

    Image segmentation is one of the first important parts of image analysis and understanding. Evaluation of image segmentation, however, is a very difficult task, mainly because it requires human intervention and interpretation. In this work, we propose a blind reference evaluation scheme based on regional local–global (RLG) graphs, which aims at measuring the amount and distribution of detail in images produced by segmentation algorithms. The main idea derives from the field of image understanding, where image segmentation is often used as a tool for scene interpretation and object recognition. Evaluation here derives from summarization of the structural information content and not from the assessment of performance after comparisons with a gold standard. Results show measurements for segmented images acquired from three segmentation algorithms, applied on different types of images (human faces/bodies, natural environments and structures (buildings)). (paper)

  20. Single image interpolation via adaptive nonlocal sparsity-based modeling.

    Science.gov (United States)

    Romano, Yaniv; Protter, Matan; Elad, Michael

    2014-07-01

    Single image interpolation is a central and extensively studied problem in image processing. A common approach toward the treatment of this problem in recent years is to divide the given image into overlapping patches and process each of them based on a model for natural image patches. Adaptive sparse representation modeling is one such promising image prior, which has been shown to be powerful in filling-in missing pixels in an image. Another force that such algorithms may use is the self-similarity that exists within natural images. Processing groups of related patches together exploits their correspondence, often leading to improved results. In this paper, we propose a novel image interpolation method which combines these two forces: nonlocal self-similarity and sparse representation modeling. The proposed method is contrasted with competitive and related algorithms, and demonstrated to achieve state-of-the-art results.

  1. Effects of image distortion correction on voxel-based morphometry

    International Nuclear Information System (INIS)

    Goto, Masami; Abe, Osamu; Kabasawa, Hiroyuki

    2012-01-01

    We aimed to show that correcting image distortion significantly affects brain volumetry using voxel-based morphometry (VBM) and to assess whether the processing of distortion correction reduces system dependency. We obtained contiguous sagittal T1-weighted images of the brain from 22 healthy participants using 1.5- and 3-tesla magnetic resonance (MR) scanners, preprocessed images using Statistical Parametric Mapping 5, and tested the relation between distortion correction and brain volume using VBM. Local brain volume significantly increased or decreased on corrected images compared with uncorrected images. In addition, the method used to correct image distortion for gradient nonlinearity produced fewer volumetric errors from MR system variation. This is the first VBM study to show more precise volumetry using VBM with corrected images. These results indicate that multi-scanner or multi-site imaging trials require correction for distortion induced by gradient nonlinearity. (author)

  2. Silhouette-based approach of 3D image reconstruction for automated image acquisition using robotic arm

    Science.gov (United States)

    Azhar, N.; Saad, W. H. M.; Manap, N. A.; Saad, N. M.; Syafeeza, A. R.

    2017-06-01

    This study presents an approach to 3D image reconstruction that uses an autonomous robotic arm for the image acquisition process. A low-cost automated imaging platform is created using a pair of G15 servo motors connected in series to an Arduino UNO as the main microcontroller. Two sets of sequential images were obtained using different projection angles of the camera. A silhouette-based approach is used for 3D reconstruction from the sequential images captured from several different angles of the object. In addition, an analysis of the effect of the number of sequential images on the accuracy of the 3D model reconstruction was carried out with a fixed projection angle of the camera. The elements affecting the 3D reconstruction are discussed and the overall result of the analysis is summarised for the prototype imaging platform.
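
    The core of silhouette-based reconstruction is voxel carving: a voxel survives only if it projects inside the silhouette in every view. A minimal orthographic sketch; the two axis-aligned projection functions are hypothetical stand-ins for calibrated cameras:

    ```python
    import numpy as np

    def carve(silhouettes, projections, grid):
        """Keep a voxel only if it projects inside every silhouette.

        silhouettes: list of HxW boolean masks; projections: matching list of
        functions mapping (N, 3) voxel centers to (N, 2) integer pixel coords."""
        keep = np.ones(len(grid), dtype=bool)
        for sil, proj in zip(silhouettes, projections):
            uv = proj(grid)
            inside = (uv[:, 0] >= 0) & (uv[:, 0] < sil.shape[1]) & \
                     (uv[:, 1] >= 0) & (uv[:, 1] < sil.shape[0])
            hit = np.zeros(len(grid), dtype=bool)
            hit[inside] = sil[uv[inside, 1], uv[inside, 0]]
            keep &= hit
        return grid[keep]

    # two orthographic views of a 2x2x2-voxel object in a 4x4x4 grid
    xs = np.arange(4)
    grid = np.array([(x, y, z) for x in xs for y in xs for z in xs], float)
    sil_front = np.zeros((4, 4), bool); sil_front[1:3, 1:3] = True  # view along z
    sil_side = np.zeros((4, 4), bool); sil_side[1:3, 1:3] = True    # view along x
    proj_front = lambda p: p[:, [0, 1]].astype(int)   # u = x, v = y
    proj_side = lambda p: p[:, [2, 1]].astype(int)    # u = z, v = y
    carved = carve([sil_front, sil_side], [proj_front, proj_side], grid)
    ```

    With more views from the rotating arm, the carved volume tightens toward the visual hull of the object.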

  3. Physics-based shape matching for intraoperative image guidance

    Energy Technology Data Exchange (ETDEWEB)

    Suwelack, Stefan, E-mail: suwelack@kit.edu; Röhl, Sebastian; Bodenstedt, Sebastian; Reichard, Daniel; Dillmann, Rüdiger; Speidel, Stefanie [Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Adenauerring 2, Karlsruhe 76131 (Germany); Santos, Thiago dos; Maier-Hein, Lena [Computer-assisted Interventions, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, Heidelberg 69120 (Germany); Wagner, Martin; Wünscher, Josephine; Kenngott, Hannes; Müller, Beat P. [General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 110, Heidelberg 69120 (Germany)

    2014-11-01

    method is able to accurately match partial surfaces. Finally, a phantom experiment demonstrates how the method can be combined with stereo endoscopic imaging to provide nonrigid registration during laparoscopic interventions. Conclusions: The PBSM approach for surface matching is fast, robust, and accurate. As the technique is based on a preoperative volumetric FE model, it naturally recovers the position of volumetric structures (e.g., tumors and vessels). It cannot only be used to recover soft-tissue deformations from intraoperative surface models but can also be combined with landmark data from volumetric imaging. In addition to applications in laparoscopic surgery, the method might prove useful in other areas that require soft-tissue registration from sparse intraoperative sensor data (e.g., radiation therapy)

  4. Image processing based detection of lung cancer on CT scan images

    Science.gov (United States)

    Abdillah, Bariqi; Bustamam, Alhadi; Sarwinda, Devvi

    2017-10-01

    In this paper, we implement and analyze an image processing method for the detection of lung cancer. Image processing techniques are widely used in several medical problems for picture enhancement in the detection phase to support early medical treatment. In this research we propose a detection method for lung cancer based on image segmentation, which is one of the intermediate levels of image processing. Marker-controlled watershed and region growing approaches are used to segment the CT scan images. The detection phases consist of image enhancement using a Gabor filter, image segmentation, and feature extraction. The experimental results confirm the effectiveness of our approach and show that the best approach for main feature detection is the watershed-with-masking method, which has high accuracy and is robust.
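
    Of the two segmentation approaches named above, region growing is the simpler to sketch; a minimal 4-connected NumPy version (the seed, tolerance, and toy image are illustrative, not the paper's parameters):

    ```python
    import numpy as np
    from collections import deque

    def region_grow(img, seed, tol=0.1):
        """Grow a region from seed, adding 4-neighbors whose intensity lies
        within tol of the seed value (breadth-first flood fill)."""
        h, w = img.shape
        mask = np.zeros((h, w), dtype=bool)
        mask[seed] = True
        ref = img[seed]
        q = deque([seed])
        while q:
            y, x = q.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                        and abs(img[ny, nx] - ref) <= tol:
                    mask[ny, nx] = True
                    q.append((ny, nx))
        return mask

    img = np.zeros((8, 8)); img[2:6, 2:6] = 1.0  # bright "nodule" on a dark field
    mask = region_grow(img, seed=(3, 3))
    ```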

  5. PCANet-Based Structural Representation for Nonrigid Multimodal Medical Image Registration

    Directory of Open Access Journals (Sweden)

    Xingxing Zhu

    2018-05-01

    Nonrigid multimodal image registration remains a challenging task in medical image processing and analysis. Structural representation (SR)-based registration methods have attracted much attention recently. However, the existing SR methods cannot provide satisfactory registration accuracy due to their use of hand-designed features for structural representation. To address this problem, a structural representation method based on an improved version of the simple deep learning network named PCANet is proposed for medical image registration. In the proposed method, PCANet is first trained on numerous medical images to learn the convolution kernels of the network. Then, a pair of input medical images to be registered is processed by the learned PCANet. The features extracted by the various layers of the PCANet are fused to produce multilevel features, and structural representation images are constructed for the two input images based on a nonlinear transformation of these multilevel features. The Euclidean distance between the structural representation images is calculated and used as the similarity metric. The objective function defined by the similarity metric is optimized by the L-BFGS method to obtain the parameters of the free-form deformation (FFD) model. Extensive experiments on simulated and real multimodal image datasets show that, compared with state-of-the-art registration methods such as the modality-independent neighborhood descriptor (MIND), normalized mutual information (NMI), the Weber local descriptor (WLD), and the sum of squared differences on entropy images (ESSD), the proposed method provides better registration performance in terms of target registration error (TRE) and subjective human vision.
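
    The first PCANet stage amounts to learning convolution kernels as the principal components of mean-removed image patches; a sketch of that single step (the patch size and filter count are illustrative, and this omits the later stages and the multilevel fusion):

    ```python
    import numpy as np

    def learn_pca_filters(img, k=8, n_filters=4):
        """PCANet stage-1 filter learning: PCA on mean-removed k x k patches."""
        patches = np.lib.stride_tricks.sliding_window_view(img, (k, k))
        X = patches.reshape(-1, k * k).astype(float)
        X = X - X.mean(axis=1, keepdims=True)       # per-patch mean removal
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        return Vt[:n_filters].reshape(n_filters, k, k)  # leading components

    rng = np.random.default_rng(0)
    img = rng.normal(size=(32, 32))                 # toy training image
    filters = learn_pca_filters(img)
    ```

    Convolving an input image with these learned filters (and binarizing/hashing the responses) is what produces the structural representation the similarity metric is computed on.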

  6. Clustering Batik Images using Fuzzy C-Means Algorithm Based on Log-Average Luminance

    Directory of Open Access Journals (Sweden)

    Ahmad Sanmorino

    2012-06-01

    Batik is a fabric or clothes made with a special staining technique called wax-resist dyeing, and is part of a cultural heritage with high artistic value. In order to improve efficiency and give better semantics to the images, some researchers apply clustering algorithms for managing images before they can be retrieved. Image clustering is a process of grouping images based on their similarity. In this paper we attempt to provide an alternative method of grouping batik images using the fuzzy c-means (FCM) algorithm based on the log-average luminance of the batik. The FCM clustering algorithm works with fuzzy models that allow all data to belong to all clusters with different degrees of membership between 0 and 1. Log-average luminance (LAL) is the average value of the lighting in an image; we can compare the lighting of one image to another using LAL. From the experiments that have been made, it can be concluded that the fuzzy c-means algorithm can be used for batik image clustering based on the log-average luminance of each image.
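
    The LAL feature itself is one line of math: the (offset) geometric mean of pixel luminance. A sketch assuming Rec. 709 luminance weights, which the record does not specify:

    ```python
    import numpy as np

    def log_average_luminance(rgb, delta=1e-6):
        """Log-average (geometric mean) luminance of an RGB image in [0, 1];
        delta guards against log(0) in pure-black pixels."""
        lum = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
        return float(np.exp(np.mean(np.log(delta + lum))))

    dark = np.full((4, 4, 3), 0.1)     # uniformly dark image
    bright = np.full((4, 4, 3), 0.8)   # uniformly bright image
    dark_lal = log_average_luminance(dark)
    bright_lal = log_average_luminance(bright)
    ```

    Each image contributes its LAL value as a (here one-dimensional) feature vector, which is then clustered with FCM as described above.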

  7. [Research on fast implementation method of image Gaussian RBF interpolation based on CUDA].

    Science.gov (United States)

    Chen, Hao; Yu, Haizhong

    2014-04-01

    Image interpolation is often required during medical image processing and analysis. Although interpolation methods based on the Gaussian radial basis function (GRBF) have high precision, their long calculation time still limits their application in the field of image interpolation. To overcome this problem, a method for two-dimensional and three-dimensional medical image GRBF interpolation based on the compute unified device architecture (CUDA) is proposed in this paper. According to the single-instruction multiple-threads (SIMT) execution model of CUDA, various optimizing measures such as coalesced access and shared memory are adopted in this study. To eliminate the edge distortion of image interpolation, a natural suture algorithm is utilized in the overlapping regions while adopting the data-space strategy of separating 2D images into blocks or dividing 3D images into sub-volumes. While keeping a high interpolation precision, the 2D and 3D medical image GRBF interpolation achieved great acceleration in each basic computing step. The experiments showed that the efficiency of image GRBF interpolation on the CUDA platform was obviously improved compared with CPU calculation. The present method is of considerable reference value in the application field of image interpolation.
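
    Stripped of the CUDA optimizations, the GRBF interpolation being accelerated reduces to solving for kernel weights at the known samples and evaluating the kernel at query points. A 1-D NumPy sketch of that math:

    ```python
    import numpy as np

    def grbf_interpolate(x_known, y_known, x_query, sigma=1.0):
        """Exact Gaussian-RBF interpolation: solve K w = y at the known
        samples, then evaluate the weighted kernels at the query points."""
        K = np.exp(-(x_known[:, None] - x_known[None, :]) ** 2 / (2 * sigma ** 2))
        w = np.linalg.solve(K, y_known)        # interpolation weights
        Kq = np.exp(-(x_query[:, None] - x_known[None, :]) ** 2 / (2 * sigma ** 2))
        return Kq @ w

    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.array([0.0, 1.0, 4.0, 9.0])         # samples of f(x) = x^2
    y_half = grbf_interpolate(x, y, np.array([1.5]))
    ```

    The GPU win comes from evaluating the dense kernel matrices in parallel; the linear algebra itself is unchanged.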

  8. Supervised learning of tools for content-based search of image databases

    Science.gov (United States)

    Delanoy, Richard L.

    1996-03-01

    A computer environment, called the Toolkit for Image Mining (TIM), is being developed with the goal of enabling users with diverse interests and varied computer skills to create search tools for content-based image retrieval and other pattern matching tasks. Search tools are generated using a simple paradigm of supervised learning that is based on the user pointing at mistakes of classification made by the current search tool. As mistakes are identified, a learning algorithm uses the identified mistakes to build up a model of the user's intentions, construct a new search tool, apply the search tool to a test image, display the match results as feedback to the user, and accept new inputs from the user. Search tools are constructed in the form of functional templates, which are generalized matched filters capable of knowledge-based image processing. The ability of this system to learn the user's intentions from experience contrasts with other existing approaches to content-based image retrieval that base searches on the characteristics of a single input example or on a predefined and semantically-constrained textual query. Currently, TIM is capable of learning spectral and textural patterns, but should be adaptable to the learning of shapes, as well. Possible applications of TIM include not only content-based image retrieval, but also quantitative image analysis, the generation of metadata for annotating images, data prioritization or data reduction in bandwidth-limited situations, and the construction of components for larger, more complex computer vision algorithms.
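
    A functional template generalizes a matched filter; the plain (unnormalized) matched-filter response underlying it can be sketched as a sliding correlation, with the response peak marking the best template position. A toy illustration, not TIM's functional-template machinery:

    ```python
    import numpy as np

    def match_template(img, tmpl):
        """Response map of a matched filter: correlation of the template with
        every same-size window of the image (no normalization)."""
        th, tw = tmpl.shape
        out = np.empty((img.shape[0] - th + 1, img.shape[1] - tw + 1))
        for y in range(out.shape[0]):
            for x in range(out.shape[1]):
                out[y, x] = (img[y:y + th, x:x + tw] * tmpl).sum()
        return out

    img = np.zeros((6, 6)); img[2:4, 3:5] = 1.0   # bright 2x2 target
    tmpl = np.ones((2, 2))                        # template matching the target
    resp = match_template(img, tmpl)
    peak = np.unravel_index(resp.argmax(), resp.shape)
    ```

    A functional template replaces the fixed coefficients with knowledge-based scoring functions, which is what the supervised learning loop above adapts from the user's corrections.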

  9. Image processing system design for microcantilever-based optical readout infrared arrays

    Science.gov (United States)

    Tong, Qiang; Dong, Liquan; Zhao, Yuejin; Gong, Cheng; Liu, Xiaohua; Yu, Xiaomei; Yang, Lei; Liu, Weiyu

    2012-12-01

    Compared with traditional infrared imaging technology, the new optical-readout uncooled infrared imaging technology based on MEMS has many advantages, such as low cost, small size and simple fabrication; in addition, theory predicts a high thermal detection sensitivity, so it has very broad application prospects in the field of high-performance infrared detection. This paper focuses on the image capturing and processing system of this optical-readout uncooled infrared imaging technology. The system consists of software and hardware. We build the image processing core hardware platform on TI's high-performance DSP chip, the TMS320DM642, and design the image capturing board around the MT9P031, Micron's high-frame-rate, low-power CMOS sensor. Finally, we use Intel's network transceiver device, the LXT971A, to design the network output board. The software is built on the real-time operating system DSP/BIOS: we implement the video capture driver based on TI's class mini-driver and the network output program based on the NDK kit, covering image capturing, processing and transmission. Experiments show that the system achieves high capture resolution and fast processing speed, with a network transmission speed of up to 100 Mbps.

  10. The patient experience of high technology medical imaging: A systematic review of the qualitative evidence

    International Nuclear Information System (INIS)

    Munn, Zachary; Jordan, Zoe

    2011-01-01

    Background: When presenting to an imaging department, the person who is to be imaged is often in a vulnerable state, and can experience the scan in a number of ways. It is the role of the radiographer to produce a high quality image and facilitate patient care throughout the imaging process. A qualitative systematic review was performed to synthesise the existent evidence on the patient experience of high technology medical imaging. Only papers relating to Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) were identified. Inclusion criteria: Studies that were of a qualitative design that explored the phenomenon of interest, the patient experience of high technology medical imaging. Participants included anyone who had undergone one of these procedures. Methods: A systematic search of medical and allied health databases was conducted. Articles identified during the search process that met the inclusion criteria were then critically appraised for methodological quality independently by two reviewers. Results: During the search and inclusion process, 15 studies were found that were deemed of suitable quality to be included in the review. From the 15 studies, 127 findings were extracted from the included studies. These were analysed in more detail to observe common themes, and then grouped into 33 categories. From these 33 categories, 11 synthesised findings were produced. The 11 synthesised findings highlight the diverse, unique and challenging ways in which people experience imaging with MRI and CT scanners. Conclusion: The results of the review demonstrate the diverse ways in which people experience medical imaging. All health professionals involved in imaging need to be aware of the different ways each patient may experience imaging.

  11. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    International Nuclear Information System (INIS)

    Dhou, S; Hurwitz, M; Cai, W; Rottmann, J; Williams, C; Wagar, M; Berbeco, R; Lewis, J H; Mishra, P; Li, R; Ionascu, D

    2015-01-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparison to ground truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. (paper)
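
    Patient-specific motion models of this kind are commonly built by PCA over the deformation fields of the 4D phases, with the 2D kV projection constraining the mode coefficients at treatment time. A toy sketch of the PCA step on synthetic rank-2 "motion" (not the paper's pipeline; here the coefficients are recovered from the field itself rather than from a projection):

    ```python
    import numpy as np

    # displacement fields from the 4D phases, flattened to vectors
    rng = np.random.default_rng(0)
    base = rng.normal(size=(2, 300))      # two underlying modes of motion
    coeffs = rng.normal(size=(10, 2))
    fields = coeffs @ base                # 10 breathing phases

    mean = fields.mean(axis=0)
    U, S, Vt = np.linalg.svd(fields - mean, full_matrices=False)
    modes = Vt[:2]                        # leading motion eigenvectors

    # project one phase onto the modes and rebuild the full field from the
    # low-dimensional coefficients (the role a kV measurement would play)
    a = (fields[0] - mean) @ modes.T
    reconstructed = mean + a @ modes
    ```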

  12. ViCAR: An Adaptive and Landmark-Free Registration of Time Lapse Image Data from Microfluidics Experiments

    Directory of Open Access Journals (Sweden)

    Georges Hattab

    2017-05-01

    In order to understand gene function in bacterial life cycles, time-lapse bioimaging is applied in combination with different marker protocols in so-called microfluidics chambers (i.e., a multi-well plate). In one experiment, a series of T images is recorded for one visual field, with a pixel resolution of 60 nm/px. Any (semi-)automatic analysis of the data is hampered by strong image noise, low contrast and, last but not least, considerable irregular shifts during acquisition. Image registration corrects such shifts, enabling the next steps of the analysis (e.g., feature extraction or tracking). Image alignment faces two obstacles in this microscopic context: (a) highly dynamic structural changes in the sample (i.e., colony growth) and (b) an individual, data-set-specific sample environment, which makes the application of landmark-based alignment almost impossible. We present a computational image registration solution, which we refer to as ViCAR (Visual Cues based Adaptive Registration), for such microfluidics experiments, consisting of (1) the detection of particular polygons (outlined and segmented ones), referred to as visual cues, (2) the adaptive retrieval of three coordinates throughout different sets of frames, and finally (3) an image registration based on the relation of these points, correcting both rotation and translation. We tested ViCAR with different data sets and found that it provides an effective spatial alignment, thereby paving the way to extract temporal features pertinent to each resulting bacterial colony. By using ViCAR, we achieved an image registration with 99.9% image closeness, based on an average rmsd of 4×10^-2 pixels, and superior results compared to a state-of-the-art algorithm.
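
    Step (3), recovering rotation and translation from matched cue points, is a standard least-squares rigid fit (Kabsch); a 2-D sketch with three synthetic points standing in for the detected visual cues:

    ```python
    import numpy as np

    def rigid_align(src, dst):
        """Least-squares rotation R and translation t mapping src onto dst."""
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        H = (src - cs).T @ (dst - cd)            # cross-covariance of centered sets
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cd - R @ cs
        return R, t

    theta = np.deg2rad(10)
    R_true = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
    src = np.array([[0.0, 0.0], [4.0, 1.0], [1.0, 5.0]])  # three cue points
    dst = src @ R_true.T + np.array([2.0, -3.0])          # shifted, rotated frame
    R, t = rigid_align(src, dst)
    ```

    Applying the inverse transform to each frame is what removes the irregular shifts before feature extraction.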

  13. PET-based molecular imaging in neuroscience

    International Nuclear Information System (INIS)

    Jacobs, A.H.; Heiss, W.D.; Li, H.; Knoess, C.; Schaller, B.; Kracht, L.; Monfared, P.; Vollmar, S.; Bauer, B.; Wagner, R.; Graf, R.; Wienhard, K.; Winkeler, A.; Rueger, A.; Klein, M.; Hilker, R.; Galldiks, N.; Herholz, K.; Sobesky, J.

    2003-01-01

    Positron emission tomography (PET) allows non-invasive assessment of physiological, metabolic and molecular processes in humans and animals in vivo. Advances in detector technology have led to a considerable improvement in the spatial resolution of PET (1-2 mm), enabling for the first time investigations in small experimental animals such as mice. With the developments in radiochemistry and tracer technology, a variety of endogenously expressed and exogenously introduced genes can be analysed by PET. This opens up the exciting and rapidly evolving field of molecular imaging, aiming at the non-invasive localisation of a biological process of interest in normal and diseased cells in animal models and humans in vivo. The main and most intriguing advantage of molecular imaging is the kinetic analysis of a given molecular event in the same experimental subject over time. This will allow non-invasive characterisation and ''phenotyping'' of animal models of human disease at various disease stages, under certain pathophysiological stimuli and after therapeutic intervention. The potential broad applications of imaging molecular events in vivo lie in the study of cell biology, biochemistry, gene/protein function and regulation, signal transduction, transcriptional regulation and characterisation of transgenic animals. Most importantly, molecular imaging will have great implications for the identification of potential molecular therapeutic targets, in the development of new treatment strategies, and in their successful implementation into clinical application. Here, the potential impact of molecular imaging by PET in applications in neuroscience research with a special focus on neurodegeneration and neuro-oncology is reviewed. (orig.)

  14. Stereo Vision-Based High Dynamic Range Imaging Using Differently-Exposed Image Pair

    Directory of Open Access Journals (Sweden)

    Won-Jae Park

    2017-06-01

    Full Text Available In this paper, a high dynamic range (HDR) imaging method based on a stereo vision system is presented. The proposed method uses differently exposed low dynamic range (LDR) images captured from a stereo camera. The stereo LDR images are first converted to initial stereo HDR images using the inverse camera response function estimated from the LDR images. However, due to the limited dynamic range of the stereo LDR camera, the radiance values in under/over-exposed regions of the initial main-view (MV) HDR image can be lost. To restore these radiance values, the proposed stereo matching and hole-filling algorithms are applied to the stereo HDR images. Specifically, the auxiliary-view (AV) HDR image is warped by using the estimated disparity between the initial stereo HDR images, and effective hole-filling is then applied to the warped AV HDR image. To reconstruct the final MV HDR image, the warped and hole-filled AV HDR image is fused with the initial MV HDR image using a weight map. The experimental results demonstrate objectively and subjectively that the proposed stereo HDR imaging method provides better performance than the conventional method.
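    The exposure-fusion step described above can be sketched per pixel. This is a minimal illustration only: it assumes a hypothetical gamma-type camera response and a simple well-exposedness weight, whereas the paper estimates the inverse CRF from the LDR images themselves.

```python
# Sketch: fuse two differently exposed LDR pixel values into one HDR
# radiance estimate. The gamma-curve inverse CRF and the triangular
# well-exposedness weight are illustrative assumptions, not the paper's.

GAMMA = 2.2

def inverse_crf(p):
    """Map a normalized LDR pixel value in [0,1] back to relative radiance."""
    return p ** GAMMA

def weight(p):
    """Well-exposedness weight: favor mid-tones over under/over-exposed pixels."""
    return max(1e-6, 1.0 - abs(p - 0.5) * 2.0)

def fuse_pixel(p_short, p_long, exposure_ratio):
    """Combine short- and long-exposure samples of the same scene point.

    exposure_ratio = t_long / t_short; the long-exposure radiance is
    divided by it so both samples live on the same radiance scale.
    """
    r_short = inverse_crf(p_short)
    r_long = inverse_crf(p_long) / exposure_ratio
    w_s, w_l = weight(p_short), weight(p_long)
    return (w_s * r_short + w_l * r_long) / (w_s + w_l)

# A nearly saturated long-exposure sample contributes almost nothing,
# so the fused value stays close to the well-exposed short sample:
hdr = fuse_pixel(0.5, 0.99, exposure_ratio=4.0)
```

    In the paper the fusion additionally runs on warped and hole-filled AV pixels rather than perfectly registered samples as assumed here.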

  15. A Novel Image Stream Cipher Based On Dynamic Substitution

    OpenAIRE

    Elsharkawi, A.; El-Sagheer, R. M.; Akah, H.; Taha, H.

    2016-01-01

    Recently, many chaos-based stream cipher algorithms have been developed. A traditional chaos stream cipher XORs a secure random number sequence generated from chaotic maps (e.g. the logistic map, Bernoulli map, tent map) with the original image to obtain the encrypted image. This type of stream cipher appears vulnerable to chosen-plaintext attacks. This paper introduces a new stream cipher algorithm based on a dynamic substitution box. The new algorithm uses one substitution b...
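    The "traditional" scheme the abstract contrasts against can be sketched as follows. The map parameters and byte quantization here are illustrative; the paper's contribution (the dynamic substitution box) replaces the plain XOR step.

```python
# Sketch of a traditional chaotic stream cipher: a logistic-map orbit
# is quantized into a keystream and XORed with the image bytes.
# Parameter values (x0, r, burn-in length) are illustrative.

def logistic_keystream(x0, r, n, burn_in=100):
    """Generate n keystream bytes from the logistic map x <- r*x*(1-x)."""
    x = x0
    for _ in range(burn_in):              # discard the transient orbit
        x = r * x * (1.0 - x)
    stream = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        stream.append(int(x * 256) & 0xFF)  # quantize orbit value to a byte
    return stream

def xor_cipher(data, x0=0.3141592, r=3.99):
    """Encrypt or decrypt a byte sequence (XOR is its own inverse)."""
    ks = logistic_keystream(x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

plain = bytes(range(16))        # stand-in for a row of image pixels
cipher = xor_cipher(plain)
recovered = xor_cipher(cipher)  # same key parameters recover the plaintext
```

    Because the keystream depends only on the key and not on the plaintext, an attacker who can encrypt a chosen image recovers the keystream directly by XOR, which is exactly the weakness the abstract points out.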

  16. An integrative, experience-based theory of attentional control.

    Science.gov (United States)

    Wilder, Matthew H; Mozer, Michael C; Wickens, Christopher D

    2011-02-09

    Although diverse, theories of visual attention generally share the notion that attention is controlled by some combination of three distinct strategies: (1) exogenous cuing from locally contrasting primitive visual features, such as abrupt onsets or color singletons (e.g., L. Itti, C. Koch, & E. Neiber, 1998), (2) endogenous gain modulation of exogenous activations, used to guide attention to task-relevant features (e.g., V. Navalpakkam & L. Itti, 2007; J. Wolfe, 1994, 2007), and (3) endogenous prediction of likely locations of interest, based on task and scene gist (e.g., A. Torralba, A. Oliva, M. Castelhano, & J. Henderson, 2006). However, little work has been done to synthesize these disparate theories. In this work, we propose a unifying conceptualization in which attention is controlled along two dimensions: the degree of task focus and the contextual scale of operation. Previously proposed strategies-and their combinations-can be viewed as instances of this one mechanism. Thus, this theory serves not as a replacement for existing models but as a means of bringing them into a coherent framework. We present an implementation of this theory and demonstrate its applicability to a wide range of attentional phenomena. The model accounts for key results in visual search with synthetic images and makes reasonable predictions for human eye movements in search tasks involving real-world images. In addition, the theory offers an unusual perspective on attention that places a fundamental emphasis on the role of experience and task-related knowledge.

  17. Computer assisted treatments for image pattern data of laser plasma experiments

    International Nuclear Information System (INIS)

    Yaoita, Akira; Matsushima, Isao

    1987-01-01

    An image data processing system for laser-plasma experiments has been constructed. The image data are two-dimensional images taken by X-ray, UV, infrared and visible light television cameras, as well as by streak cameras. They are digitized by frame memories. The digitized image data are stored in disk memories with the aid of a microcomputer. The data are processed by a host computer and stored in the files of the host computer and on magnetic tapes. In this paper, an overview of the image data processing system and some software for data handling in the host computer are reported. (author)

  18. Digital Correlation based on Wavelet Transform for Image Detection

    International Nuclear Information System (INIS)

    Barba, L; Vargas, L; Torres, C; Mattos, L

    2011-01-01

    In this work, a method is presented for optimizing digital correlators to improve feature detection in images, using the wavelet transform together with subband filtering. An approach to wavelet-based image contrast enhancement is proposed in order to increase the performance of digital correlators. The multiresolution representation is employed to improve the high-frequency content of images, taking into account the contrast measured for the original input image. The energy of the correlation peaks and the discrimination level between several objects are improved with this technique. To demonstrate the potential of the wavelet transform for extracting characteristics, small objects inside reference images are detected successfully.
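    The idea of enhancing contrast by amplifying a high-frequency subband can be illustrated with a one-level Haar decomposition. This is a hypothetical 1-D analogue, not the paper's 2-D multiresolution scheme.

```python
# Sketch of wavelet-based contrast enhancement: decompose a signal with
# a one-level Haar transform, amplify the detail (high-frequency)
# subband, and reconstruct. Illustrative 1-D stand-in for the 2-D case.

def haar_forward(signal):
    """Return (approximation, detail) coefficients of a one-level Haar DWT."""
    approx = [(a + b) / 2.0 for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / 2.0 for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    """Perfectly reconstruct the signal from its Haar coefficients."""
    out = []
    for s, d in zip(approx, detail):
        out.extend([s + d, s - d])
    return out

def enhance(signal, gain=1.5):
    """Boost high-frequency content by scaling the detail subband."""
    approx, detail = haar_forward(signal)
    detail = [gain * d for d in detail]
    return haar_inverse(approx, detail)

sig = [10.0, 12.0, 10.0, 14.0]
enhanced = enhance(sig, gain=2.0)   # local differences are doubled
```

    Sharpening edges this way raises the energy of correlation peaks for the objects being detected, which is the effect the abstract reports.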

  19. A framework of region-based dynamic image fusion

    Institute of Scientific and Technical Information of China (English)

    WANG Zhong-hua; QIN Zheng; LIU Yu

    2007-01-01

    A new framework of region-based dynamic image fusion is proposed. First, the technique of target detection is applied to dynamic images (image sequences) to segment images into different targets and background regions. Then different fusion rules are employed in different regions so that the target information is preserved as much as possible. In addition, a steerable non-separable wavelet frame transform is used in the process of multi-resolution analysis, so the system achieves the favorable properties of orientation selectivity and shift invariance. Compared with other image fusion methods, experimental results showed that the proposed method has better target recognition capability and preserves clear background information.

  20. A SAR IMAGE REGISTRATION METHOD BASED ON SIFT ALGORITHM

    Directory of Open Access Journals (Sweden)

    W. Lu

    2017-09-01

    Full Text Available In order to improve the stability and rapidity of synthetic aperture radar (SAR) image matching, an effective method is presented. Firstly, adaptive smoothing filtering based on Wallis filtering is employed for image denoising, to avoid amplifying noise in the subsequent processing. Secondly, feature points are extracted by a simplified SIFT algorithm. Finally, exact matching of the images is achieved with these points. Compared with existing methods, the approach not only maintains the richness of features, but also reduces the noise of the image. The simulation results show that the proposed algorithm achieves a better matching effect.

  1. HDR Image Quality Enhancement Based on Spatially Variant Retinal Response

    Directory of Open Access Journals (Sweden)

    Horiuchi Takahiko

    2010-01-01

    Full Text Available There is a growing demand for being able to display high dynamic range (HDR) images on low dynamic range (LDR) devices. Tone mapping is a process for enhancing HDR image quality on an LDR device by converting the tonal values of the original image from HDR to LDR. This paper proposes a new tone mapping algorithm for enhancing image quality by deriving a spatially variant operator that imitates the S-potential response in the human retina, which efficiently improves local contrasts while conserving good global appearance. The proposed tone mapping operator is studied from a system construction point of view. It is found that the operator can be regarded as a natural extension of the Retinex algorithm, adding a global adaptation process to the local adaptation. The feasibility of the proposed algorithm is examined in experiments using standard HDR images and real HDR scene images, in comparison with conventional tone mapping algorithms.
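    A spatially variant S-potential-style response is commonly modeled by a Naka-Rushton-type compression L/(L + σ). The sketch below is an assumption-laden stand-in for the paper's operator: σ blends a global adaptation level (mean luminance) with a local one (neighborhood mean), which is the "global plus local adaptation" structure the abstract describes.

```python
# Sketch of a spatially variant tone-mapping operator: each luminance L
# is compressed with a Naka-Rushton-style response L/(L + sigma), where
# the adaptation level sigma mixes a global term (scene mean) with a
# local term (neighborhood mean). Illustrative, not the paper's operator.

def local_mean(lum, i, j, radius=1):
    rows, cols = len(lum), len(lum[0])
    vals = [lum[r][c]
            for r in range(max(0, i - radius), min(rows, i + radius + 1))
            for c in range(max(0, j - radius), min(cols, j + radius + 1))]
    return sum(vals) / len(vals)

def tone_map(lum, alpha=0.5):
    """Map HDR luminances into [0, 1); alpha weights global vs local adaptation."""
    global_mean = sum(sum(row) for row in lum) / (len(lum) * len(lum[0]))
    out = []
    for i, row in enumerate(lum):
        out_row = []
        for j, L in enumerate(row):
            sigma = alpha * global_mean + (1 - alpha) * local_mean(lum, i, j)
            out_row.append(L / (L + sigma))
        out.append(out_row)
    return out

hdr = [[0.1, 0.1, 100.0],
       [0.1, 0.1, 100.0]]   # three-decade dynamic range
ldr = tone_map(hdr)
```

    With alpha = 1 the operator degenerates to a purely global (Retinex-like) curve; lowering alpha lets bright and dark neighborhoods adapt separately, improving local contrast.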

  2. Single image super-resolution based on convolutional neural networks

    Science.gov (United States)

    Zou, Lamei; Luo, Ming; Yang, Weidong; Li, Peng; Jin, Liujia

    2018-03-01

    We present a deep learning method for single image super-resolution (SISR). The proposed approach learns an end-to-end mapping between low-resolution (LR) images and high-resolution (HR) images. The mapping is represented as a deep convolutional neural network that takes the LR image as input and outputs the HR image. Our network uses five convolution layers with kernel sizes of 5×5, 3×3 and 1×1. In the proposed network, we use residual learning and combine convolution kernels of different sizes in the same layer. The experimental results show that our proposed method outperforms existing methods in reconstruction quality indices and human visual quality on benchmark images.

  3. Computed Tomography Image Origin Identification Based on Original Sensor Pattern Noise and 3-D Image Reconstruction Algorithm Footprints.

    Science.gov (United States)

    Duan, Yuping; Bouslimi, Dalel; Yang, Guanyu; Shu, Huazhong; Coatrieux, Gouenou

    2017-07-01

    In this paper, we focus on the "blind" identification of the computed tomography (CT) scanner that has produced a CT image. To do so, we propose a set of noise features derived from the image chain acquisition and which can be used as CT-scanner footprint. Basically, we propose two approaches. The first one aims at identifying a CT scanner based on an original sensor pattern noise (OSPN) that is intrinsic to the X-ray detectors. The second one identifies an acquisition system based on the way this noise is modified by its three-dimensional (3-D) image reconstruction algorithm. As these reconstruction algorithms are manufacturer dependent and kept secret, our features are used as input to train a support vector machine (SVM) based classifier to discriminate acquisition systems. Experiments conducted on images issued from 15 different CT-scanner models of 4 distinct manufacturers demonstrate that our system identifies the origin of one CT image with a detection rate of at least 94% and that it achieves better performance than sensor pattern noise (SPN) based strategy proposed for general public camera devices.

  4. An Image Morphing Technique Based on Optimal Mass Preserving Mapping

    Science.gov (United States)

    Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen

    2013-01-01

    Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L2 mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods. PMID:17547128

  5. Multispectral image pansharpening based on the contourlet transform

    Energy Technology Data Exchange (ETDEWEB)

    Amro, Israa; Mateos, Javier, E-mail: iamro@correo.ugr.e, E-mail: jmd@decsai.ugr.e [Departamento de Ciencias de la Computacion e I.A., Universidad de Granada, 18071 Granada (Spain)

    2010-02-01

    Pansharpening is a technique that fuses the information of a low resolution multispectral image (MS) and a high resolution panchromatic image (PAN), usually remote sensing images, to provide a high resolution multispectral image. In the literature, this task has been addressed from different points of view, one of the most popular being wavelet-based algorithms. Recently, the contourlet transform has been proposed. This transform combines the advantages of the wavelet transform with a more efficient representation of directional information. In this paper we propose a new pansharpening method based on contourlets, compare it with its wavelet counterpart, and assess its performance numerically and visually.

  6. Ultrasoft x-ray imaging system for the National Spherical Torus Experiment

    Science.gov (United States)

    Stutman, D.; Finkenthal, M.; Soukhanovskii, V.; May, M. J.; Moos, H. W.; Kaita, R.

    1999-01-01

    A spectrally resolved ultrasoft x-ray imaging system, consisting of high-resolution diode arrays, is described for the National Spherical Torus Experiment. Initially, three poloidal arrays of diodes filtered for C 1s-np emission will be implemented for fast tomographic imaging of the colder start-up plasmas. Later on, mirrors tuned to the C Lyα emission will be added in order to enable the arrays to "see" the periphery through the hot core and to study magnetohydrodynamic activity and impurity transport in this region. We also discuss possible core diagnostics, based on tomographic imaging of the Lyα emission from the plume of recombined, low Z impurity ions left by neutral beams or fueling pellets. The arrays can also be used for radiated power measurements and to map the distribution of high Z impurities injected for transport studies. The performance of the proposed system is illustrated with results from test channels on the CDX-U spherical torus at Princeton Plasma Physics Laboratory.

  7. GOTCHA experience report: three-dimensional SAR imaging with complete circular apertures

    Science.gov (United States)

    Ertin, Emre; Austin, Christian D.; Sharma, Samir; Moses, Randolph L.; Potter, Lee C.

    2007-04-01

    We study circular synthetic aperture radar (CSAR) systems collecting radar backscatter measurements over a complete circular aperture of 360 degrees. This study is motivated by the GOTCHA CSAR data collection experiment conducted by the Air Force Research Laboratory (AFRL). Circular SAR provides wide-angle information about the anisotropic reflectivity of the scattering centers in the scene, and also provides three dimensional information about the location of the scattering centers due to a non planar collection geometry. Three dimensional imaging results with single pass circular SAR data reveals that the 3D resolution of the system is poor due to the limited persistence of the reflectors in the scene. We present results on polarimetric processing of CSAR data and illustrate reasoning of three dimensional shape from multi-view layover using prior information about target scattering mechanisms. Next, we discuss processing of multipass (CSAR) data and present volumetric imaging results with IFSAR and three dimensional backprojection techniques on the GOTCHA data set. We observe that the volumetric imaging with GOTCHA data is degraded by aliasing and high sidelobes due to nonlinear flightpaths and sparse and unequal sampling in elevation. We conclude with a model based technique that resolves target features and enhances the volumetric imagery by extrapolating the phase history data using the estimated model.

  8. Magnetic resonance spectroscopy imaging in the diagnosis of prostate cancer: initial experience

    International Nuclear Information System (INIS)

    Melo, Homero Jose de Farias e; Abdala, Nitamar; Goldman, Suzan Menasce; Szejnfeld, Jacob

    2009-01-01

    Objective: to report an experiment involving the introduction of a protocol utilizing the commercially available three-dimensional 1H magnetic resonance spectroscopy imaging (3D 1H MRSI) method in patients under suspicion of prostatic neoplasm. Materials and methods: forty-one patients in the age range between 51 and 80 years (mean, 67 years) were prospectively evaluated. The patients were divided into two groups: patients with one or more biopsies negative for cancer and high prostate-specific antigen levels (group A), and patients with cancer confirmed by biopsy (group B). The determination of the target area (group A) or the known cancer extent (group B) was based on magnetic resonance imaging and MRSI studies. Results: the specificity of MRSI in the diagnosis of prostate cancer was lower than that reported in the literature (about 47%). On the other hand, for tumor staging, it matched the specificity reported in the literature. Conclusion: the introduction and standardization of 3D 1H MRSI has allowed a presumptive diagnosis of prostate cancer to be obtained through combined analysis of magnetic resonance imaging and metabolic data from 3D 1H MRSI. (author)

  9. An Effective Combined Feature For Web Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    H.M.R.B Herath

    2015-08-01

    Full Text Available Abstract Technology advances, the emergence of large scale multimedia applications, and the revolution of the World Wide Web have changed the world into a digital age. Anybody can use their mobile phone to take a photo at any time anywhere and upload that image to ever-growing image databases. Development of effective techniques for visual and multimedia retrieval systems is one of the most challenging and important directions of future research. This paper proposes an effective combined feature for web-based image retrieval. Frequently used colour and texture features are explored in order to develop a combined feature for this purpose. Three widely used colour features (colour moments, colour coherence vector and colour correlogram) and three texture features (grey level co-occurrence matrix, Tamura features and Gabor filter) were analyzed for their performance. Precision and recall were used to evaluate the performance of each of these techniques. By comparing precision and recall values, the methods that performed best were taken and combined to form a hybrid feature. The developed combined feature was evaluated by developing a web-based CBIR system. A web crawler was used to first crawl through web sites, images found in those sites were downloaded, and the combined feature representation technique was used to extract image features. The test results indicated that this web system can be used to index web images with the combined feature representation schema and to find similar images. Random image retrievals using the web system show that the combined feature can be used to retrieve images belonging to the general image domain. Retrieval accuracy was notably high for natural images such as outdoor scenes and images of flowers. Images with similar colour and texture distributions were also retrieved as similar even when they belonged to different semantic categories.
This can be ideal for an artist who wants

  10. Object recognition based on Google's reverse image search and image similarity

    Science.gov (United States)

    Horváth, András.

    2015-12-01

    Image classification is one of the most challenging tasks in computer vision, and a general multiclass classifier could solve many different tasks in image processing. Classification is usually done by shallow learning for predefined objects, which is a difficult task and very different from human vision: human vision is based on continuous learning of object classes, and it takes years to learn a large taxonomy of objects, which are neither disjoint nor independent. In this paper I present a system based on Google's image similarity algorithm and Google's image database, which can classify a large set of different objects in a human-like manner, identifying related classes and taxonomies.

  11. Mid-infrared upconversion based hyperspectral imaging

    DEFF Research Database (Denmark)

    Junaid, Saher; Tomko, Jan; Semtsiv, Mykhaylo P.

    2018-01-01

    quantum cascade laser illumination. AgGaS2 is used as the nonlinear medium for sum frequency generation using a 1064 nm mixing laser. Angular scanning of the nonlinear crystal provides broad spectral coverage at every spatial position in the image. This study demonstrates the retrieval of series...

  12. Small angle X-ray scattering experiments with three-dimensional imaging gas detectors

    International Nuclear Information System (INIS)

    La Monaca, A.; Iannuzzi, M.; Messi, R.

    1985-01-01

    Measurements of small angle X-ray scattering from Lupolen-R, dry collagen and dry cornea are presented. The experiments were performed with synchrotron radiation and a new three-dimensional imaging drift-chamber gas detector.

  13. Adaptive radiotherapy based on contrast enhanced cone beam CT imaging

    International Nuclear Information System (INIS)

    Soevik, Aaste; Skogmo, Hege K.; Roedal, Jan; Lervaag, Christoffer; Eilertsen, Karsten; Malinen, Eirik

    2010-01-01

    Cone beam CT (CBCT) imaging has become an integral part of radiation therapy, with images typically used for offline or online patient setup corrections based on bony anatomy co-registration. Ideally, the co-registration should be based on tumor localization. However, soft tissue contrast in CBCT images may be limited. In the present work, contrast enhanced CBCT (CECBCT) images were used for tumor visualization and treatment adaptation. Material and methods. A spontaneous canine maxillary tumor was subjected to repeated cone beam CT imaging during fractionated radiotherapy (10 fractions in total). At five of the treatment fractions, CECBCT images, employing an iodinated contrast agent, were acquired, as well as pre-contrast CBCT images. The tumor was clearly visible in post-contrast minus pre-contrast subtraction images, and these contrast images were used to delineate gross tumor volumes. IMRT dose plans were subsequently generated. Four different strategies were explored: 1) fully adapted planning based on each CECBCT image series, 2) planning based on images acquired at the first treatment fraction and patient repositioning following bony anatomy co-registration, 3) as for 2), but with patient repositioning based on co-registering contrast images, and 4) a strategy with no patient repositioning or treatment adaptation. Equivalent uniform dose (EUD) and tumor control probability (TCP) calculations were used to estimate treatment outcome for each strategy. Results. Similar translation vectors were found when bony anatomy and contrast enhancement co-registration were compared. Strategy 1 gave EUDs closest to the prescription dose and the highest TCP. Strategies 2 and 3 gave EUDs and TCPs close to those of strategy 1, with strategy 3 being slightly better than strategy 2. Even greater benefits from strategies 1 and 3 are expected with increasing tumor movement or deformation during treatment. The non-adaptive strategy 4 was clearly inferior to all three adaptive strategies.
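    The EUD metric used to compare the strategies can be sketched with the generalized-mean formulation. The volume-effect parameter a below is illustrative (a large negative a, typical for tumors, makes the EUD sensitive to cold spots); the study's actual model parameters are not given in the abstract.

```python
# Sketch of an equivalent uniform dose (EUD) calculation over
# equal-volume voxels: EUD = (mean(D_i^a))^(1/a). The parameter a is
# an illustrative assumption (negative for tumors, penalizing cold spots).

def eud(doses, a=-10.0):
    """Generalized EUD of a dose distribution given per-voxel doses in Gy."""
    n = len(doses)
    return (sum(d ** a for d in doses) / n) ** (1.0 / a)

uniform = [60.0] * 100                   # perfectly uniform 60 Gy plan
cold_spot = [60.0] * 99 + [30.0]         # one underdosed voxel

eud_uniform = eud(uniform)               # equals the uniform dose itself
eud_cold = eud(cold_spot)                # pulled down by the cold spot
```

    This is why the non-adaptive strategy scores poorly: geometric misses create cold spots in the target, and with negative a even a small underdosed volume drags the EUD (and hence the TCP) well below the prescription dose.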

  14. Extracting flat-field images from scene-based image sequences using phase correlation

    Energy Technology Data Exchange (ETDEWEB)

    Caron, James N., E-mail: Caron@RSImd.com [Research Support Instruments, 4325-B Forbes Boulevard, Lanham, Maryland 20706 (United States); Montes, Marcos J. [Naval Research Laboratory, Code 7231, 4555 Overlook Avenue, SW, Washington, DC 20375 (United States); Obermark, Jerome L. [Naval Research Laboratory, Code 8231, 4555 Overlook Avenue, SW, Washington, DC 20375 (United States)

    2016-06-15

    Flat-field image processing is an essential step in producing high-quality and radiometrically calibrated images. Flat-fielding corrects for variations in the gain of focal plane array electronics and unequal illumination from the system optics. Typically, a flat-field image is captured by imaging a radiometrically uniform surface. The flat-field image is normalized and removed from the images. There are circumstances, such as with remote sensing, where a flat-field image cannot be acquired in this manner. For these cases, we developed a phase-correlation method that allows the extraction of an effective flat-field image from a sequence of scene-based displaced images. The method uses sub-pixel phase correlation image registration to align the sequence and estimate the static scene. The scene is removed from the sequence, producing a sequence of misaligned flat-field images. An average flat-field image is derived from the realigned flat-field sequence.
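    The registration step relies on phase correlation: the normalized cross-power spectrum of two shifted signals has an inverse transform that peaks at the displacement. A minimal 1-D pure-Python analogue (the paper uses sub-pixel 2-D registration):

```python
# Sketch of phase-correlation shift estimation: compute the normalized
# cross-power spectrum of two circularly shifted signals; its inverse
# DFT is an impulse located at the shift. 1-D integer-shift analogue.

import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def phase_correlate(a, b):
    """Return the circular shift s such that b[t] == a[t - s]."""
    A, B = dft(a), dft(b)
    cross = [x * y.conjugate() for x, y in zip(B, A)]
    norm = [c / max(abs(c), 1e-12) for c in cross]  # keep phase, drop magnitude
    corr = idft(norm)
    return max(range(len(corr)), key=lambda i: corr[i].real)

scene = [0.0, 1.0, 4.0, 1.0, 0.0, 0.0, 0.0, 0.0]
shifted = scene[-3:] + scene[:-3]        # circular shift right by 3
shift = phase_correlate(scene, shifted)
```

    Because only the spectral phase is kept, the estimate is largely insensitive to the fixed-pattern gain variations that the method is ultimately trying to isolate.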

  15. Optical image hiding based on chaotic vibration of deformable moiré grating

    Science.gov (United States)

    Lu, Guangqing; Saunoriene, Loreta; Aleksiene, Sandra; Ragulskis, Minvydas

    2018-03-01

    Image hiding technique based on chaotic vibration of deformable moiré grating is presented in this paper. The embedded secret digital image is leaked in a form of a pattern of time-averaged moiré fringes when the deformable cover grating vibrates according to a chaotic law of motion with a predefined set of parameters. Computational experiments are used to demonstrate the features and the applicability of the proposed scheme.

  16. Improved image retrieval based on fuzzy colour feature vector

    Science.gov (United States)

    Ben-Ahmeida, Ahlam M.; Ben Sasi, Ahmed Y.

    2013-03-01

    One of the image indexing techniques is content-based image retrieval (CBIR), an efficient way of retrieving images from an image database automatically based on visual contents such as colour, texture, and shape. This paper discusses a content-based image retrieval method that uses colour feature extraction and similarity checking: the query image and all images in the database are divided into pieces, the features of each part are extracted separately, and the corresponding portions are compared in order to increase retrieval accuracy. The proposed approach is based on the use of fuzzy sets, to overcome the problem of the curse of dimensionality. The contribution of the colour of each pixel is associated with all the bins in the histogram using fuzzy-set membership functions. As a result, the fuzzy colour histogram (FCH) outperformed the conventional colour histogram (CCH) in image retrieval, due to its speedy results: images were represented as signatures that took less memory, depending on the number of divisions. The results also showed that the FCH is less sensitive and more robust to brightness changes than the CCH, with better retrieval recall values.
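    The fuzzy histogram idea above can be sketched in a few lines: instead of incrementing one bin per pixel, each pixel's unit contribution is spread over neighbouring bins by a membership function. The bin count and the triangular membership shape below are illustrative assumptions.

```python
# Sketch of a fuzzy colour histogram on a single 8-bit channel: each
# pixel contributes to nearby bins via a triangular membership function
# (normalized so every pixel still contributes exactly 1.0 in total).

N_BINS = 8
LEVELS = 256

def fuzzy_histogram(pixels):
    centers = [(i + 0.5) * LEVELS / N_BINS for i in range(N_BINS)]
    width = LEVELS / N_BINS
    hist = [0.0] * N_BINS
    for p in pixels:
        # Full membership at the bin center, falling to zero one bin away.
        memberships = [max(0.0, 1.0 - abs(p - c) / width) for c in centers]
        total = sum(memberships)
        for i, m in enumerate(memberships):
            hist[i] += m / total
    return hist

pixels = [0, 15, 16, 100, 200, 255]     # sample channel values
fch = fuzzy_histogram(pixels)
```

    A small brightness shift moves contributions gradually between adjacent bins rather than flipping them all at once, which is the robustness property the abstract attributes to the FCH.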

  17. The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments

    OpenAIRE

    Gorgolewski, Krzysztof J.; Auer, Tibor; Calhoun, Vince D.; Craddock, R. Cameron; Das, Samir; Duff, Eugene P.; Flandin, Guillaume; Ghosh, Satrajit S.; Glatard, Tristan; Halchenko, Yaroslav O.; Handwerker, Daniel A.; Hanke, Michael; Keator, David; Li, Xiangrui; Michael, Zachary

    2016-01-01

    The development of magnetic resonance imaging (MRI) techniques has defined modern neuroimaging. Since its inception, tens of thousands of studies using techniques such as functional MRI and diffusion weighted imaging have allowed for the non-invasive study of the brain. Despite the fact that MRI is routinely used to obtain data for neuroscience research, there has been no widely adopted standard for organizing and describing the data collected in an imaging experiment....

  18. Real-time Image Processing for Microscopy-based Label-free Imaging Flow Cytometry in a Microfluidic Chip.

    Science.gov (United States)

    Heo, Young Jin; Lee, Donghyeon; Kang, Junsu; Lee, Keondo; Chung, Wan Kyun

    2017-09-14

    Imaging flow cytometry (IFC) is an emerging technology that acquires single-cell images at high throughput for analysis of a cell population. The rich information that comes from the high sensitivity and spatial resolution of a single-cell microscopic image is beneficial for single-cell analysis in various biological applications. In this paper, we present a fast image-processing pipeline (R-MOD: Real-time Moving Object Detector) based on deep learning for high-throughput microscopy-based label-free IFC in a microfluidic chip. The R-MOD pipeline acquires all single-cell images of cells in flow, and identifies the acquired images in real time with minimal hardware consisting of a microscope and a high-speed camera. Experiments show that R-MOD achieves fast and reliable performance (500 fps and 93.3% mAP), and is expected to be used as a powerful tool for biomedical and clinical applications.

  19. Multiscale Distance Coherence Vector Algorithm for Content-Based Image Retrieval

    Science.gov (United States)

    Jiexian, Zeng; Xiupeng, Liu

    2014-01-01

    A multiscale distance coherence vector algorithm for content-based image retrieval (CBIR) is proposed to address two shortcomings of the distance coherence vector algorithm: identical descriptors for different shapes, and poor noise robustness. In this algorithm, the image contour curve is first evolved by a Gaussian function, and the distance coherence vector is then extracted from the contours of the original image and of the evolved images. The multiscale distance coherence vector is obtained by a reasonable weight distribution over the distance coherence vectors of the evolved image contours. This algorithm not only is invariant to translation, rotation, and scaling transformations but also has good noise robustness. The experimental results show that the algorithm achieves higher recall and precision rates for the retrieval of images polluted by noise. PMID:24883416
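    The multiscale contour-descriptor structure can be sketched as follows. This is a loose stand-in under stated assumptions: centroid distances replace the paper's coherence vector, and repeated 3-point averaging stands in for Gaussian contour evolution.

```python
# Sketch of a multiscale contour descriptor: represent the contour by
# point-to-centroid distances, smooth them at several scales (a crude
# stand-in for Gaussian curve evolution), and combine the per-scale
# vectors with weights. Details differ from the paper's coherence vector.

def centroid_distances(contour):
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    return [((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in contour]

def smooth(vals, passes):
    """Repeated circular 3-point averaging approximates Gaussian smoothing."""
    for _ in range(passes):
        n = len(vals)
        vals = [(vals[i - 1] + vals[i] + vals[(i + 1) % n]) / 3.0
                for i in range(n)]
    return vals

def multiscale_descriptor(contour, scales=(0, 1, 2), weights=(0.5, 0.3, 0.2)):
    d = centroid_distances(contour)
    desc = []
    for s, w in zip(scales, weights):
        desc.extend(w * v for v in smooth(d, s))
    return desc

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
moved = [(x + 10, y - 5) for x, y in square]   # pure translation
```

    Measuring distances to the centroid gives translation invariance for free; the coarser scales smooth away noise on the contour, which is the source of the improved robustness.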

  20. Slice image pretreatment for cone-beam computed tomography based on adaptive filter

    International Nuclear Information System (INIS)

    Huang Kuidong; Zhang Dinghua; Jin Yanfang

    2009-01-01

    According to the noise properties and the serial slice image characteristics of a Cone-Beam Computed Tomography (CBCT) system, a slice image pretreatment for CBCT based on adaptive filtering is proposed. A judging criterion for the noise is established first, and all pixels are classified into two classes: an adaptive center-weighted modified trimmed mean (ACWMTM) filter is used for pixels corrupted by Gaussian noise, and an adaptive median (AM) filter is used for pixels corrupted by impulse noise. In the ACWMTM filtering algorithm, the Gaussian noise standard deviation estimated in the current slice image with an offset window is replaced by the standard deviation estimated in the corresponding window of the adjacent slice image, so the filtering accuracy for serial images is improved. A pretreatment experiment on CBCT slice images of a wax model of a hollow turbine blade shows that the method performs well both in eliminating noise and in preserving details. (authors)
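    The two-branch structure (impulse pixels get a median filter, the rest a trimmed mean) can be sketched as below. The impulse criterion and trim fraction are illustrative assumptions; the paper's ACWMTM filter additionally center-weights the window and uses the adjacent slice's noise estimate.

```python
# Sketch of the two-branch adaptive filter: pixels judged to be impulse
# noise (extreme values within their window) get a median filter; the
# rest get a trimmed mean, a simplified stand-in for the ACWMTM filter.

def window(img, i, j, radius=1):
    rows, cols = len(img), len(img[0])
    return [img[r][c]
            for r in range(max(0, i - radius), min(rows, i + radius + 1))
            for c in range(max(0, j - radius), min(cols, j + radius + 1))]

def trimmed_mean(vals, trim=1):
    """Mean after dropping the trim smallest and trim largest samples."""
    s = sorted(vals)[trim:-trim] if trim else sorted(vals)
    return sum(s) / len(s)

def median(vals):
    s = sorted(vals)
    return s[len(s) // 2]

def adaptive_filter(img):
    out = [row[:] for row in img]
    for i in range(len(img)):
        for j in range(len(img[0])):
            w = window(img, i, j)
            v = img[i][j]
            if v == min(w) or v == max(w):   # crude impulse criterion
                out[i][j] = median(w)        # AM-style branch
            else:
                out[i][j] = trimmed_mean(w)  # trimmed-mean branch
    return out

noisy = [[10, 10, 10],
         [10, 255, 10],   # isolated impulse in a flat region
         [10, 10, 10]]
clean = adaptive_filter(noisy)
```

    On this flat patch the isolated impulse is replaced by the neighborhood median while the smooth background is left at its original level, which is the "eliminate noise, preserve details" behavior the abstract reports.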

  1. Cardiac biplane strain imaging: initial in vivo experience

    Energy Technology Data Exchange (ETDEWEB)

    Lopata, R G P; Nillesen, M M; Thijssen, J M; De Korte, C L [Clinical Physics Laboratory, Radboud University Nijmegen Medical Centre, Nijmegen (Netherlands); Verrijp, C N; Lammens, M M Y; Van der Laak, J A W M [Department of Pathology, Radboud University Nijmegen Medical Centre, Nijmegen (Netherlands); Singh, S K; Van Wetten, H B [Department of Cardiothoracic Surgery, Radboud University Nijmegen Medical Centre, Nijmegen (Netherlands); Kapusta, L [Pediatric Cardiology, Department of Pediatrics, Radboud University Nijmegen Medical Centre, Nijmegen (Netherlands)], E-mail: R.Lopata@cukz.umcn.nl

    2010-02-21

    In this study, first we propose a biplane strain imaging method using a commercial ultrasound system, yielding estimation of the strain in three orthogonal directions. Secondly, an animal model of a child's heart was introduced that is suitable to simulate congenital heart disease and was used to test the method in vivo. The proposed approach can serve as a framework to monitor the development of cardiac hypertrophy and fibrosis. A 2D strain estimation technique using radio frequency (RF) ultrasound data was applied. Biplane image acquisition was performed at a relatively low frame rate (<100 Hz) using a commercial platform with an RF interface. For testing the method in vivo, biplane image sequences of the heart were recorded during the cardiac cycle in four dogs with an aortic stenosis. Initial results reveal the feasibility of measuring large radial, circumferential and longitudinal cumulative strain (up to 70%) at a frame rate of 100 Hz. Mean radial strain curves of a manually segmented region-of-interest in the infero-lateral wall show excellent correlation between the measured strain curves acquired in two perpendicular planes. Furthermore, the results show the feasibility and reproducibility of assessing radial, circumferential and longitudinal strains simultaneously. In this preliminary study, three beagles developed an elevated pressure gradient over the aortic valve (Δp: 100-200 mmHg) and myocardial hypertrophy. One dog did not develop any sign of hypertrophy (Δp = 20 mmHg). Initial strain (rate) results showed that the maximum strain (rate) decreased with increasing valvular stenosis (-50%), which is in accordance with previous studies. Histological findings corroborated these results and showed an increase in fibrotic tissue for the hearts with larger pressure gradients (100, 200 mmHg), as well as lower strain and strain rate values.

  2. Cardiac biplane strain imaging: initial in vivo experience

    International Nuclear Information System (INIS)

    Lopata, R G P; Nillesen, M M; Thijssen, J M; De Korte, C L; Verrijp, C N; Lammens, M M Y; Van der Laak, J A W M; Singh, S K; Van Wetten, H B; Kapusta, L

    2010-01-01

    In this study, first we propose a biplane strain imaging method using a commercial ultrasound system, yielding estimation of the strain in three orthogonal directions. Secondly, an animal model of a child's heart was introduced that is suitable to simulate congenital heart disease and was used to test the method in vivo. The proposed approach can serve as a framework to monitor the development of cardiac hypertrophy and fibrosis. A 2D strain estimation technique using radio frequency (RF) ultrasound data was applied. Biplane image acquisition was performed at a relatively low frame rate (<100 Hz) using a commercial platform with an RF interface. For testing the method in vivo, biplane image sequences of the heart were recorded during the cardiac cycle in four dogs with an aortic stenosis. Initial results reveal the feasibility of measuring large radial, circumferential and longitudinal cumulative strain (up to 70%) at a frame rate of 100 Hz. Mean radial strain curves of a manually segmented region-of-interest in the infero-lateral wall show excellent correlation between the measured strain curves acquired in two perpendicular planes. Furthermore, the results show the feasibility and reproducibility of assessing radial, circumferential and longitudinal strains simultaneously. In this preliminary study, three beagles developed an elevated pressure gradient over the aortic valve (Δp: 100-200 mmHg) and myocardial hypertrophy. One dog did not develop any sign of hypertrophy (Δp = 20 mmHg). Initial strain (rate) results showed that the maximum strain (rate) decreased with increasing valvular stenosis (-50%), which is in accordance with previous studies. Histological findings corroborated these results and showed an increase in fibrotic tissue for the hearts with larger pressure gradients (100, 200 mmHg), as well as lower strain and strain rate values.

  3. Enhancements to the Image Analysis Tool for Core Punch Experiments and Simulations (vs. 2014)

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, John Edward [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Unal, Cetin [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-11-06

    A previous paper (Hogden & Unal, 2012, Image Analysis Tool for Core Punch Experiments and Simulations) described an image processing computer program developed at Los Alamos National Laboratory. This program has proven useful, so development has continued. In this paper we describe enhancements to the program as of 2014.

  4. P/Halley the model comet, in view of the imaging experiment aboard the VEGA spacecraft

    International Nuclear Information System (INIS)

    Szegoe, K.

    1989-07-01

    In this paper, those results of the VEGA imaging experiments that probably have general validity for any comet are summarized. Shape, size, surface structure, jet activity and rotation patterns are considered in this respect. It is pointed out that imaging data provide indispensable information for the understanding of cometary activity. (author) 27 refs

  5. Light-leaking region segmentation of FOG fiber based on quality evaluation of infrared image

    Science.gov (United States)

    Liu, Haoting; Wang, Wei; Gao, Feng; Shan, Lianjie; Ma, Yuzhou; Ge, Wenqian

    2014-07-01

    To improve the assembly reliability of the Fiber Optic Gyroscope (FOG), a light leakage detection system and method are developed. First, an agile movement control platform is designed to implement pose control of the FOG optical path component in 6 Degrees of Freedom (DOF). Second, an infrared camera is employed to capture working-state images of the corresponding fibers in the optical path component after the manual assembly of the FOG; the entire light transmission process in key sections of the light path can therefore be recorded. Third, an image-quality-evaluation-based region segmentation method is developed for the light leakage images. In contrast to traditional methods, image quality metrics, including region contrast, edge blur, and image noise level, are first computed to characterize the infrared image; robust segmentation algorithms, including graph cut and flood fill, are then applied for region segmentation according to the specific image quality. Finally, after segmentation of the light leakage region, the typical light-leaking types, such as the point defect, the wedge defect, and the surface defect, can be identified. By using the image-quality-based method, the applicability of the proposed system is improved dramatically. Extensive experimental results have proved the validity and effectiveness of this method.

  6. Semi-supervised learning based probabilistic latent semantic analysis for automatic image annotation

    Institute of Scientific and Technical Information of China (English)

    Tian Dongping

    2017-01-01

    In recent years, the multimedia annotation problem has been attracting significant research attention in the multimedia and computer vision areas, especially automatic image annotation, whose purpose is to provide an efficient and effective searching environment for users to query their images more easily. In this paper, a semi-supervised learning based probabilistic latent semantic analysis (PLSA) model for automatic image annotation is presented. Since it is often hard to obtain or create labeled images in large quantities while unlabeled ones are easier to collect, a transductive support vector machine (TSVM) is exploited to enhance the quality of the training image data. Then, since different image features with different magnitudes result in different performance for automatic image annotation, a Gaussian normalization method is utilized to normalize the different features extracted from effective image regions segmented by the normalized cuts algorithm, so as to preserve the intrinsic content of the images as completely as possible. Finally, a PLSA model with asymmetric modalities is constructed based on the expectation maximization (EM) algorithm to predict a candidate set of annotations with confidence scores. Extensive experiments on the general-purpose Corel5k dataset demonstrate that the proposed model can significantly improve the performance of traditional PLSA for the task of automatic image annotation.
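    The Gaussian normalization step described above can be pictured with a minimal sketch: a plain z-score normalization applied per feature dimension so that features of different magnitudes contribute comparably. The function name and the list-of-lists feature representation are illustrative, not taken from the paper:

```python
def gaussian_normalize(features):
    """Z-score normalise each feature dimension across all samples.

    `features` is a list of equal-length feature vectors (lists of floats).
    Dimensions with zero variance are left unscaled (divided by 1.0).
    """
    dims = len(features[0])
    n = len(features)
    means = [sum(f[d] for f in features) / n for d in range(dims)]
    stds = []
    for d in range(dims):
        var = sum((f[d] - means[d]) ** 2 for f in features) / n
        stds.append(var ** 0.5 or 1.0)  # guard against zero variance
    return [[(f[d] - means[d]) / stds[d] for d in range(dims)]
            for f in features]
```

After this step, a small-magnitude texture feature and a large-magnitude color feature would carry equal weight in the subsequent PLSA modelling.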

  7. Retrieval Architecture with Classified Query for Content Based Image Recognition

    Directory of Open Access Journals (Sweden)

    Rik Das

    2016-01-01

    Full Text Available Consumer behavior has been observed to be largely influenced by image data, with the increasing familiarity of smart phones and the World Wide Web. The traditional technique of browsing through product varieties on the Internet with text keywords has gradually been replaced by easily accessible image data. The importance of image data has shown steady growth in application orientation for the business domain with the advent of different image capturing devices and social media. The paper describes a methodology of feature extraction by an image binarization technique for enhancing identification and retrieval of information using content based image recognition. The proposed algorithm was tested on two public datasets, namely, the Wang dataset and the Oliva and Torralba (OT-Scene) dataset, with 3688 images in total. It has outclassed state-of-the-art techniques in performance measures and has shown statistical significance.

  8. FUZZY BASED CONTRAST STRETCHING FOR MEDICAL IMAGE ENHANCEMENT

    Directory of Open Access Journals (Sweden)

    T.C. Raja Kumar

    2011-07-01

    Full Text Available Contrast stretching is an important part of medical image processing applications. Contrast is the difference between two adjacent pixels. Fuzzy statistical values are analyzed and better results are produced in the spatial domain of the input image. The histogram mapping produces a resultant image with less impulsive noise and a smooth nature. The probabilities of gray values are generated, and the fuzzy set is determined from the positions of the input image pixels. The inverse transform of the real values is mapped with the input image to generate the fuzzy statistics. The results indicate the good performance of the proposed fuzzy based stretching. This approach gives flexible image enhancement for medical images in the presence of noise.

  9. Evaluation of an image-based tracking workflow using a passive marker and resonant micro-coil fiducials for automatic image plane alignment in interventional MRI.

    Science.gov (United States)

    Neumann, M; Breton, E; Cuvillon, L; Pan, L; Lorenz, C H; de Mathelin, M

    2012-01-01

    In this paper, an original workflow is presented for MR image plane alignment based on tracking in real-time MR images. A test device consisting of two resonant micro-coils and a passive marker is proposed for detection using image-based algorithms. Micro-coils allow for automated initialization of the object detection in dedicated low flip angle projection images; then the passive marker is tracked in clinical real-time MR images, with alternation between two oblique orthogonal image planes along the test device axis; in case the passive marker is lost in real-time images, the workflow is reinitialized. The proposed workflow was designed to minimize dedicated acquisition time to a single dedicated acquisition in the ideal case (no reinitialization required). First experiments have shown promising results for test-device tracking precision, with a mean position error of 0.79 mm and a mean orientation error of 0.24°.

  10. Correction method and software for image distortion and nonuniform response in charge-coupled device-based x-ray detectors utilizing x-ray image intensifier

    International Nuclear Information System (INIS)

    Ito, Kazuki; Kamikubo, Hironari; Yagi, Naoto; Amemiya, Yoshiyuki

    2005-01-01

    An on-site method of correcting the image distortion and nonuniform response of a charge-coupled device (CCD)-based X-ray detector was developed using the response of the imaging plate as a reference. The CCD-based X-ray detector consists of a beryllium-windowed X-ray image intensifier (Be-XRII) and a CCD as the image sensor. An image distortion of 29% was improved to less than 1% after the correction. In the correction of nonuniform response due to image distortion, subpixel approximation was performed for the redistribution of pixel values. The optimal number of subpixels was also discussed. In an experiment with polystyrene (PS) latex, it was verified that the correction of both image distortion and nonuniform response worked properly. The correction for the 'contrast reduction' problem was also demonstrated for an isotropic X-ray scattering pattern from the PS latex. (author)

  11. Optical image encryption scheme with multiple light paths based on compressive ghost imaging

    Science.gov (United States)

    Zhu, Jinan; Yang, Xiulun; Meng, Xiangfeng; Wang, Yurong; Yin, Yongkai; Sun, Xiaowen; Dong, Guoyan

    2018-02-01

    An optical image encryption method with multiple light paths is proposed based on compressive ghost imaging. In the encryption process, M random phase-only masks (POMs) are generated by means of logistic map algorithm, and these masks are then uploaded to the spatial light modulator (SLM). The collimated laser light is divided into several beams by beam splitters as it passes through the SLM, and the light beams illuminate the secret images, which are converted into sparse images by discrete wavelet transform beforehand. Thus, the secret images are simultaneously encrypted into intensity vectors by ghost imaging. The distances between the SLM and secret images vary and can be used as the main keys with original POM and the logistic map algorithm coefficient in the decryption process. In the proposed method, the storage space can be significantly decreased and the security of the system can be improved. The feasibility, security and robustness of the method are further analysed through computer simulations.

  12. Image registration assessment in radiotherapy image guidance based on control chart monitoring.

    Science.gov (United States)

    Xia, Wenyao; Breen, Stephen L

    2018-04-01

    Image guidance with cone beam computed tomography in radiotherapy can guarantee the precision and accuracy of patient positioning prior to treatment delivery. During the image guidance process, operators must take considerable effort to evaluate the image guidance quality before correcting a patient's position. This work proposes an image registration assessment method based on control chart monitoring to reduce the effort required of the operator. From a control chart plotted from the daily registration scores of each patient, the proposed method can quickly detect both alignment errors and image quality inconsistency. The proposed method can therefore provide a clear guideline for operators to identify unacceptable image quality and unacceptable image registration with minimal effort. Experimental results using control charts from a clinical database of 10 patients undergoing prostate radiotherapy demonstrate that the proposed method can quickly identify out-of-control signals and find the special causes of out-of-control registration events.
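    The control-chart idea can be sketched with a standard Shewhart individuals chart over daily registration scores: 3-sigma limits estimated from the average moving range (the d2 = 1.128 constant for subgroups of size 2). The paper's actual chart design may differ; function names here are illustrative:

```python
def control_limits(scores):
    """Individuals control chart limits from a series of daily scores.

    sigma is estimated as (mean moving range) / 1.128, the standard
    Shewhart construction for subgroups of size 2.
    Returns (lower limit, center line, upper limit).
    """
    n = len(scores)
    mean = sum(scores) / n
    moving_ranges = [abs(scores[i] - scores[i - 1]) for i in range(1, n)]
    sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return mean - 3 * sigma, mean, mean + 3 * sigma

def out_of_control(scores):
    """Indices of scores falling outside the 3-sigma control limits."""
    lcl, _, ucl = control_limits(scores)
    return [i for i, s in enumerate(scores) if s < lcl or s > ucl]
```

A registration score that jumps far outside the limits (e.g. after a change in image quality) would be flagged for the operator to investigate.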

  13. Concave omnidirectional imaging device for cylindrical object based on catadioptric panoramic imaging

    Science.gov (United States)

    Wu, Xiaojun; Wu, Yumei; Wen, Peizhi

    2018-03-01

    To obtain information on the outer surface of a cylindrical object, we propose a catadioptric panoramic imaging system based on the principle of uniform spatial resolution for vertical scenes. First, the influence of the projection-equation coefficients on the spatial resolution and astigmatism of the panoramic system is discussed. Through parameter optimization, we obtain appropriate coefficients for the projection equation, so that the imaging quality of the entire system can reach an optimum value. Finally, the system projection equation is calibrated, and an undistorted rectangular panoramic image is obtained using the cylindrical-surface projection expansion method. The proposed 360-deg panoramic-imaging device overcomes the shortcomings of existing surface panoramic-imaging methods, and it has the advantages of low cost, simple structure, high imaging quality, and small distortion. The experimental results show the effectiveness of the proposed method.

  14. Image Blocking Encryption Algorithm Based on Laser Chaos Synchronization

    Directory of Open Access Journals (Sweden)

    Shu-Ying Wang

    2016-01-01

    Full Text Available In view of digital image transmission security, a novel image encryption scheme based on laser chaos synchronization and the Arnold cat map is proposed. A parameter generated from the pixel values of the plain image influences the secret key. Sequences of the drive system and response system are pretreated by the same method and form a blockwise encryption scheme for the plain image. Finally, pixel positions are scrambled by a general Arnold transformation. In the decryption process, the chaotic synchronization accuracy is fully considered and the relationship between the synchronization effect and decryption is analyzed; the scheme has the characteristics of high precision, high efficiency, simplicity, flexibility, and good controllability. The experimental results show that the encryption algorithm has high security and good antijamming performance.
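    The Arnold cat map scrambling stage can be sketched as follows. This is the classic form of the map on a square n × n image; the paper's generalized Arnold transformation may use different coefficients, and the nested-list image representation is purely for illustration:

```python
def arnold_cat_map(pixels, n, iterations=1):
    """Scramble an n x n image (nested lists) with the Arnold cat map.

    Each pixel at (x, y) moves to ((x + y) mod n, (x + 2y) mod n).
    The map is a bijection, so repeated application eventually
    returns the original image (period 4 for n = 3).
    """
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for y in range(n):
            for x in range(n):
                out[(x + 2 * y) % n][(x + y) % n] = pixels[y][x]
        pixels = out
    return pixels
```

Because the map is periodic, decryption can simply apply the remaining iterations of the period (or the inverse matrix mod n).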

  15. Lossless Image Compression Based on Multiple-Tables Arithmetic Coding

    Directory of Open Access Journals (Sweden)

    Rung-Ching Chen

    2009-01-01

    Full Text Available This paper presents a lossless image compression method based on multiple-tables arithmetic coding (MTAC) to encode a gray-level image f. First, the MTAC method employs a median edge detector (MED) to reduce the entropy rate of f, exploiting the fact that the gray levels of two adjacent pixels in an image are usually similar. A base-switching transformation approach is then used to reduce the spatial redundancy of the image, exploiting the fact that the gray levels of some pixels in an image are more common than those of others. Finally, the arithmetic encoding method is applied to reduce the coding redundancy of the image. To promote high performance of the arithmetic encoding method, the MTAC method first classifies the data and then encodes each cluster of data using a distinct code table. The experimental results show that, in most cases, the MTAC method provides a higher efficiency in use of storage space than lossless JPEG2000 does.
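    The median edge detector (MED) named in the first stage is the nonlinear predictor known from LOCO-I/JPEG-LS: predict from the left, above, and upper-left neighbours, switching between edge and planar modes. A minimal sketch (function names illustrative; border handling is simplified):

```python
def med_predict(a, b, c):
    """MED prediction from the left (a), above (b) and
    upper-left (c) neighbours of the current pixel."""
    if c >= max(a, b):
        return min(a, b)   # edge detected above or to the left
    if c <= min(a, b):
        return max(a, b)
    return a + b - c       # smooth region: planar prediction

def med_residuals(img):
    """Prediction residuals for a grayscale image (nested lists).

    Out-of-bounds neighbours are treated as zero to keep the
    sketch short; a real codec handles borders more carefully.
    """
    h, w = len(img), len(img[0])
    get = lambda y, x: img[y][x] if 0 <= y < h and 0 <= x < w else 0
    return [[img[y][x] - med_predict(get(y, x - 1), get(y - 1, x),
                                     get(y - 1, x - 1))
             for x in range(w)] for y in range(h)]
```

In smooth regions the residuals cluster near zero, which is what lowers the entropy rate before arithmetic coding.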

  16. Individual Building Extraction from TerraSAR-X Images Based on Ontological Semantic Analysis

    Directory of Open Access Journals (Sweden)

    Rong Gui

    2016-08-01

    Full Text Available Accurate building information plays a crucial role for urban planning, human settlements and environmental management. Synthetic aperture radar (SAR images, which deliver images with metric resolution, allow for analyzing and extracting detailed information on urban areas. In this paper, we consider the problem of extracting individual buildings from SAR images based on domain ontology. By analyzing a building scattering model with different orientations and structures, the building ontology model is set up to express multiple characteristics of individual buildings. Under this semantic expression framework, an object-based SAR image segmentation method is adopted to provide homogeneous image objects, and three categories of image object features are extracted. Semantic rules are implemented by organizing image object features, and the individual building objects expression based on an ontological semantic description is formed. Finally, the building primitives are used to detect buildings among the available image objects. Experiments on TerraSAR-X images of Foshan city, China, with a spatial resolution of 1.25 m × 1.25 m, have shown the total extraction rates are above 84%. The results indicate the ontological semantic method can exactly extract flat-roof and gable-roof buildings larger than 250 pixels with different orientations.

  17. Automatic Depth Extraction from 2D Images Using a Cluster-Based Learning Framework.

    Science.gov (United States)

    Herrera, Jose L; Del-Blanco, Carlos R; Garcia, Narciso

    2018-07-01

    There has been a significant increase in the availability of 3D players and displays in recent years. Nonetheless, the amount of 3D content has not experienced an increase of comparable magnitude. To alleviate this problem, many algorithms for converting images and videos from 2D to 3D have been proposed. Here, we present an automatic learning-based 2D-3D image conversion approach, based on the key hypothesis that color images with similar structure likely present a similar depth structure. The presented algorithm estimates the depth of a color query image using the prior knowledge provided by a repository of color + depth images. The algorithm clusters this database according to structural similarity, and then creates a representative of each color-depth image cluster that will be used as a prior depth map. The selection of the appropriate prior depth map corresponding to a given color query image is accomplished by comparing the structural similarity in the color domain between the query image and the database. The comparison is based on a K-Nearest Neighbor framework that uses a learning procedure to build an adaptive combination of image feature descriptors. The best correspondences determine the cluster, and in turn the associated prior depth map. Finally, this prior estimation is enhanced through a segmentation-guided filtering that obtains the final depth map estimation. This approach has been tested using two publicly available databases, and compared with several state-of-the-art algorithms in order to prove its efficiency.
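    The cluster-representative lookup can be pictured with a tiny nearest-neighbour sketch. Everything here is a stand-in: the paper learns an adaptive combination of descriptors rather than the raw Euclidean distance used below, and the depth maps are represented by labels:

```python
def select_prior_depth(query, representatives):
    """Pick the prior depth map whose cluster representative is
    closest (squared Euclidean distance in feature space) to the
    query image's feature vector.

    `representatives` maps feature tuples to prior depth maps.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(representatives, key=lambda f: dist(f, query))
    return representatives[best]
```

In the full method this lookup is followed by the segmentation-guided filtering that refines the prior into the final depth estimate.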

  18. Local coding based matching kernel method for image classification.

    Directory of Open Access Journals (Sweden)

    Yan Song

    Full Text Available This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  19. Advanced image based methods for structural integrity monitoring: Review and prospects

    Science.gov (United States)

    Farahani, Behzad V.; Sousa, Pedro José; Barros, Francisco; Tavares, Paulo J.; Moreira, Pedro M. G. P.

    2018-02-01

    There is a growing trend in engineering to develop methods for structural integrity monitoring and characterization of the in-service mechanical behaviour of components. The fast growth in recent years of image processing techniques and image-based sensing for experimental mechanics brought about a paradigm change in phenomena sensing. Hence, several widely applicable optical approaches are playing a significant role in support of experiments. The current review describes advanced image-based methods for structural integrity monitoring, and focuses on methods such as Digital Image Correlation (DIC), Thermoelastic Stress Analysis (TSA), Electronic Speckle Pattern Interferometry (ESPI) and Speckle Pattern Shearing Interferometry (Shearography). These non-contact full-field techniques rely on intensive image processing methods to measure mechanical behaviour, and evolve even as reviews such as this are being written, which justifies a special effort to keep abreast of this progress.

  20. A Novel Image Encryption Algorithm Based on DNA Encoding and Spatiotemporal Chaos

    Directory of Open Access Journals (Sweden)

    Chunyan Song

    2015-10-01

    Full Text Available DNA computing based image encryption is a new, promising field. In this paper, we propose a novel image encryption scheme based on DNA encoding and spatiotemporal chaos. In particular, after the plain image is first diffused with the bitwise Exclusive-OR operation, a DNA mapping rule is introduced to encode the diffused image. To enhance the encryption, the spatiotemporal chaotic system is used to confuse the rows and columns of the DNA-encoded image. The experiments demonstrate that the proposed encryption algorithm has high key sensitivity and a large key space, and it can resist brute-force attack, entropy attack, differential attack, chosen-plaintext attack, known-plaintext attack and statistical attack.
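    The first two stages — bitwise XOR diffusion, then DNA encoding — can be sketched as below. The DNA rule shown is one of the eight valid complementary-pair rules, chosen arbitrarily here; in the actual scheme the keystream and the rule selection would be driven by the spatiotemporal chaotic system:

```python
# One of the eight valid DNA encoding rules: 2 bits per base.
DNA_RULE = {'00': 'A', '01': 'C', '10': 'G', '11': 'T'}

def xor_diffuse(pixels, keystream):
    """Bitwise XOR diffusion of a flat 8-bit pixel list with a keystream."""
    return [p ^ k for p, k in zip(pixels, keystream)]

def dna_encode(pixels):
    """Encode each 8-bit pixel as a string of 4 DNA bases."""
    out = []
    for p in pixels:
        bits = format(p, '08b')
        out.append(''.join(DNA_RULE[bits[i:i + 2]] for i in range(0, 8, 2)))
    return out
```

The chaotic confusion stage would then permute the rows and columns of this DNA-base matrix before the inverse DNA rule maps it back to cipher pixels.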

  1. Study of photoconductor-based radiological image sensors

    International Nuclear Information System (INIS)

    Beaumont, Francois

    1989-01-01

    Because of the evolution of medical imaging techniques toward digital systems, it is necessary to replace radiological film, which has many drawbacks, with a detector that is just as efficient and quickly delivers a digitizable signal. The purpose of this thesis is to find new X-ray digital imaging processes using photoconductor materials such as amorphous selenium. After reviewing the principle of direct radiology and the functions to be served by the X-ray sensor (i.e. detection, memory, assignment, visualization), we explain the specifications. We especially show the constraints due to the object to be radiographed (condition of minimal exposure) and to the readout signal (electronic detection noise associated with a readout frequency). As a result of this study, a first photoconductor sensor could be designed. Its principle is based on photo-carrier trapping at the dielectric-photoconductor structure interface. The readout system requires scanning a laser beam across the sensor surface. The dielectric-photoconductor structure enabled us to estimate the possibilities offered by the sensor and to build a complete X-ray imaging system. The originality of the thermo-dielectric sensor, which was studied next, is to allow thermally assigned readout. The chosen system consists of varying the capacitance of a ferroelectric polymer whose dielectric permittivity is weak at room temperature. The thermo-dielectric material was studied by thermal or Joule-effect stimulation. During our experiments, trapping was found in a sensor made of amorphous selenium between two electrodes. This new effect was characterized and enabled us to propose a first interpretation. Finally, the comparison of these new sensor concepts with radiological film shows the advantage of the proposed solution. (author) [fr

  2. A Digital Image Denoising Algorithm Based on Gaussian Filtering and Bilateral Filtering

    Directory of Open Access Journals (Sweden)

    Piao Weiying

    2018-01-01

    Full Text Available Bilateral filtering has been applied widely in the area of digital image processing, but in high-gradient regions of an image, bilateral filtering may generate a staircase effect. Bilateral filtering can be regarded as one particular form of local mode filtering; based on this analysis, a mixed image de-noising algorithm is proposed that combines Gaussian filtering and bilateral filtering. First, a Gaussian filter is used to filter the noisy image and obtain a reference image; then both the reference image and the noisy image are taken as the input to the range kernel function of the bilateral filter. The reference image provides the image's low-frequency information, and the noisy image provides its high-frequency information. Through comparative experiments on the method in this paper and traditional bilateral filtering, the results showed that the mixed de-noising algorithm can effectively overcome the staircase effect; the filtered image was smoother, its textural features were closer to the original image, and it achieved a higher PSNR value, while the amount of computation of the two algorithms is basically the same.
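    The core idea — computing the range weights from the Gaussian-smoothed reference rather than from the noisy input itself — amounts to a joint (cross) bilateral filter. A minimal sketch, with the Gaussian-filtered reference assumed to be precomputed and passed in; parameter names and defaults are illustrative:

```python
import math

def joint_bilateral(img, ref, sigma_s=1.0, sigma_r=10.0, radius=1):
    """Joint bilateral filter over a grayscale image (nested lists).

    Spatial weights come from pixel distance; range weights come from
    the *reference* image (here, a Gaussian-smoothed copy of the noisy
    input, per the mixed scheme described above).
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        ws = math.exp(-(dx * dx + dy * dy)
                                      / (2 * sigma_s ** 2))
                        dr = ref[ny][nx] - ref[y][x]
                        wr = math.exp(-(dr * dr) / (2 * sigma_r ** 2))
                        num += ws * wr * img[ny][nx]
                        den += ws * wr
            out[y][x] = num / den
    return out
```

Because the reference is already smooth, the range kernel no longer locks onto noise spikes, which is what suppresses the staircase effect.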

  3. THE IMAGE REGISTRATION OF FOURIER-MELLIN BASED ON THE COMBINATION OF PROJECTION AND GRADIENT PREPROCESSING

    Directory of Open Access Journals (Sweden)

    D. Gao

    2017-09-01

    Full Text Available Image registration is one of the most important applications in the field of image processing. The Fourier-Mellin transform method, which has the advantages of high precision and good robustness to changes in light and shade, partial occlusion, noise influence and so on, is widely used. However, not only can this method fail to obtain a unique cross-power spectrum pulse function for non-parallel image pairs; for some image pairs no cross-power pulse can be obtained at all. In this paper, an image registration method based on the Fourier-Mellin transformation with combined projection and gradient preprocessing is proposed. According to the projection conformational equation, the method calculates the image projection transformation matrix to correct the tilted image; then, gradient preprocessing and the Fourier-Mellin transformation are performed on the corrected image to obtain the registration parameters. The experimental results show that the method makes Fourier-Mellin image registration applicable not only to the registration of parallel image pairs, but also to the registration of non-parallel image pairs. What's more, a better registration effect can be obtained.
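    The "cross-power pulse" at the heart of Fourier-Mellin registration is the phase-correlation peak. A one-dimensional sketch of that building block, using a naive O(n²) DFT purely for illustration — the full method applies the same phase correlation to log-polar magnitude spectra to recover rotation and scale as well as translation:

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n)
                for j in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * j / n)
                for k in range(n)) / n for j in range(n)]

def phase_correlate(a, b):
    """Recover the circular shift s with b[i] == a[(i - s) % n]
    from the peak of the normalised cross-power spectrum."""
    A, B = dft(a), dft(b)
    cross = []
    for ak, bk in zip(A, B):
        p = bk * ak.conjugate()
        cross.append(p / abs(p) if abs(p) > 1e-12 else 0j)
    corr = idft(cross)
    return max(range(len(corr)), key=lambda i: corr[i].real)
```

For a pair of well-overlapping parallel images the inverse transform of the normalised spectrum is a sharp impulse at the shift; the degraded or missing pulse for non-parallel pairs is exactly the failure mode the projection correction addresses.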

  4. Imaging system design and image interpolation based on CMOS image sensor

    Science.gov (United States)

    Li, Yu-feng; Liang, Fei; Guo, Rui

    2009-11-01

    An image acquisition system is introduced, which consists of a color CMOS image sensor (OV9620), SRAM (CY62148), a CPLD (EPM7128AE) and a DSP (TMS320VC5509A). The CPLD implements the logic and timing control of the system. The SRAM stores the image data, and the DSP controls the image acquisition system through the SCCB (OmniVision Serial Camera Control Bus). The timing sequence of the CMOS image sensor OV9620 is analyzed. The imaging part and the high-speed image data memory unit are designed. The hardware and software design of the image acquisition and processing system is given. CMOS digital cameras use color filter arrays to sample different spectral components, such as red, green, and blue. At the location of each pixel only one color sample is taken, and the other colors must be interpolated from neighboring samples. We use an edge-oriented adaptive interpolation algorithm for the edge pixels and a bilinear interpolation algorithm for the non-edge pixels to improve the visual quality of the interpolated images. This method achieves high processing speed, decreases the computational complexity, and effectively preserves image edges.
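    The edge-oriented interpolation idea can be sketched for the green channel at a non-green Bayer site: compare horizontal and vertical gradients and average along the direction of smaller change, falling back to plain bilinear averaging when neither direction dominates. This is a generic illustration of the technique, not the paper's exact algorithm:

```python
def interp_green(g, y, x):
    """Edge-oriented green interpolation at an interior non-green
    Bayer site (y, x): average along the lower-gradient direction,
    bilinear 4-neighbour average when gradients are equal."""
    dh = abs(g[y][x - 1] - g[y][x + 1])   # horizontal gradient
    dv = abs(g[y - 1][x] - g[y + 1][x])   # vertical gradient
    if dh < dv:
        return (g[y][x - 1] + g[y][x + 1]) / 2
    if dv < dh:
        return (g[y - 1][x] + g[y + 1][x]) / 2
    return (g[y][x - 1] + g[y][x + 1] + g[y - 1][x] + g[y + 1][x]) / 4
```

Averaging along, rather than across, an edge is what prevents the colour fringing that plain bilinear interpolation produces at edges.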

  5. Independent component analysis based filtering for penumbral imaging

    International Nuclear Information System (INIS)

    Chen Yenwei; Han Xianhua; Nozaki, Shinya

    2004-01-01

    We propose a filtering method based on independent component analysis (ICA) for Poisson noise reduction. In the proposed filtering, the image is first transformed to the ICA domain and then the noise components are removed by soft thresholding (shrinkage). The proposed filter, which is used as a preprocessing step for reconstruction, has been successfully applied to penumbral imaging. Both simulation results and experimental results show that the reconstructed image is dramatically improved in comparison to that obtained without the noise-removing filter
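    The shrinkage step applied in the ICA domain is the standard soft-thresholding operator: coefficients with magnitude below the threshold are zeroed, the rest are shrunk toward zero by the threshold. A minimal sketch (the coefficient list stands in for one ICA-domain component):

```python
def soft_threshold(coeffs, t):
    """Soft-threshold (shrink) a list of transform-domain coefficients:
    zero those with |c| < t, shrink the rest toward zero by t."""
    return [max(abs(c) - t, 0.0) * (1.0 if c >= 0 else -1.0)
            for c in coeffs]
```

After shrinkage, the inverse ICA transform returns the denoised image, which is then passed to the penumbral reconstruction.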

  6. Prototype Theory Based Feature Representation for PolSAR Images

    OpenAIRE

    Huang Xiaojing; Yang Xiangli; Huang Pingping; Yang Wen

    2016-01-01

    This study presents a new feature representation approach for Polarimetric Synthetic Aperture Radar (PolSAR) image based on prototype theory. First, multiple prototype sets are generated using prototype theory. Then, regularized logistic regression is used to predict similarities between a test sample and each prototype set. Finally, the PolSAR image feature representation is obtained by ensemble projection. Experimental results of an unsupervised classification of PolSAR images show that our...

  7. Image degradation characteristics and restoration based on regularization for diffractive imaging

    Science.gov (United States)

    Zhi, Xiyang; Jiang, Shikai; Zhang, Wei; Wang, Dawei; Li, Yun

    2017-11-01

    The diffractive membrane optical imaging system is an important development trend for ultra-large-aperture, lightweight space cameras. However, physics-based investigations of diffractive imaging degradation characteristics and of corresponding image restoration methods remain scarce. In this paper, a model of image quality degradation for the diffractive imaging system is first deduced mathematically from diffraction theory, and the degradation characteristics are then analyzed. On this basis, a novel regularization model of image restoration that contains multiple prior constraints is established. An approach is then presented for solving the resulting equation with coexisting multiple norms and multiple regularization parameters (the priors' parameters). Subsequently, a space-variant PSF image restoration method for large-aperture diffractive imaging systems is proposed, combined with a block-wise treatment of isoplanatic regions. Experimentally, the proposed algorithm demonstrates multi-objective improvement, including MTF enhancement, dispersion correction, noise and artifact suppression, and detail preservation, and produces satisfactory visual quality. This provides a scientific basis for, and holds promise for, future space applications of diffractive membrane imaging technology.

  8. Detection rates in pediatric diagnostic imaging: a picture archive and communication system compared with a web-based imaging system

    International Nuclear Information System (INIS)

    McDonald, L.; Cramer, B.; Barrett, B.

    2006-01-01

    This prospective study assesses whether there are differences in accuracy of interpretation of diagnostic images among users of a picture archive and communication system (PACS) diagnostic workstation, compared with a less costly Web-based imaging system on a personal computer (PC) with a high resolution monitor. One hundred consecutive pediatric chest or abdomen and skeletal X-rays were selected from hospital inpatient and outpatient studies over a 5-month interval. They were classified as normal (n = 32), obviously abnormal (n = 33), or having subtle abnormal findings (n = 35) by 2 senior radiologists who reached a consensus for each individual case. Subsequently, 5 raters with varying degrees of experience independently viewed and interpreted the cases as normal or abnormal. Raters viewed each image 1 month apart on a PACS and on the Web-based PC imaging system. There was no relation between accuracy of detection and the system used to evaluate X-ray images (P = 0.92). The total percentage of incorrect interpretations on the Web-based PC imaging system was 23.2%, compared with 23.6% on the PACS (P = 0.92). For all raters combined, the overall difference in proportion assessed incorrectly on the PACS, compared with the PC system, was not significant at 0.4% (95%CI, -3.5% to 4.3%). The high-resolution Web-based imaging system via PC is an adequate alternative to a PACS clinical workstation. Accordingly, the provision of a more extensive network of workstations throughout the hospital setting could have potentially significant cost savings. (author)

  9. Fast Depiction Invariant Visual Similarity for Content Based Image Retrieval Based on Data-driven Visual Similarity using Linear Discriminant Analysis

    Science.gov (United States)

    Wihardi, Y.; Setiawan, W.; Nugraha, E.

    2018-01-01

    In this research we build a content-based image retrieval system (CBIRS) based on a learned distance/similarity function using Linear Discriminant Analysis (LDA) on Histogram of Oriented Gradients (HoG) features. Our method is invariant to the depiction style of an image, supporting image-to-image, sketch-to-image, and painting-to-image similarity. LDA decreases execution time compared with the state-of-the-art method, but still needs improvement in terms of accuracy. The inaccuracy in our experiments arises because we did not perform a sliding-window search and because of the low number of negative samples of natural-world images.

  10. Cytology 3D structure formation based on optical microscopy images

    Science.gov (United States)

    Pronichev, A. N.; Polyakov, E. V.; Shabalova, I. P.; Djangirova, T. V.; Zaitsev, S. M.

    2017-01-01

    The article is devoted to optimization of the imaging parameters for biological preparations in optical microscopy using a multispectral camera in the visible range of electromagnetic radiation. A model for forming the images of virtual preparations was proposed. The optimum number of layers for the in-depth scan of the object and for holistic perception of its switching was determined from the results of the experiment.

  11. Cytology 3D structure formation based on optical microscopy images

    International Nuclear Information System (INIS)

    Pronichev, A N; Polyakov, E V; Zaitsev, S M; Shabalova, I P; Djangirova, T V

    2017-01-01

    The article is devoted to optimization of the imaging parameters for biological preparations in optical microscopy using a multispectral camera in the visible range of electromagnetic radiation. A model for forming the images of virtual preparations was proposed. The optimum number of layers for the in-depth scan of the object and for holistic perception of its switching was determined from the results of the experiment. (paper)

  12. Cosmic AntiParticle Ring Imaging Cerenkov Experiment

    CERN Multimedia

    2002-01-01

    The CAPRICE experiment studies antimatter and light nuclei in the cosmic rays as well as muons in the atmosphere. The experiment is performed with the spectrometer shown in the figure, which is lifted by a balloon to an altitude of 35-40 km. At this altitude less than half a percent of the atmosphere is above the 2-ton spectrometer, which makes it possible to study the cosmic ray flux without too much background from atmospherically produced particles. The spectrometer includes time-of-flight scintillators, a gaseous RICH counter, a drift chamber tracker and a silicon electromagnetic calorimeter. An important feature of the spectrometer is its ability to discriminate between different particles. The experiment aims at measuring the flux of antiparticles (antiprotons and positrons) above about 5 GeV and relating the fluxes to models including exotic production of antiparticles such as dark matter supersymmetric particles. The flux of muons is measured during descent of the balloon through the at...

  13. Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging.

    Science.gov (United States)

    Zhang, Shuanghui; Liu, Yongxiang; Li, Xiang; Bi, Guoan

    2016-04-28

    This paper presents a novel inverse synthetic aperture radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to achieve a performance improvement in sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better-focused radar image. In the proposed method, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. Maximum a posteriori (MAP) estimation and maximum likelihood estimation (MLE) are utilized to estimate the model parameters, avoiding a manual tuning process. Additionally, the fast Fourier transform (FFT) and the Hadamard product are used to reduce the required computation. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.
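    The two claimed properties of the prior can be illustrated numerically. The sketch below uses an assumed functional form p(x) ∝ (1 + |x|)^(-β) for the logarithmic Laplacian (a log-penalty on the magnitude); the paper's exact parameterization may differ:

```python
import math

def laplace_density(x, b=1.0):
    """Standard Laplacian prior, unnormalised: exp(-|x|/b)."""
    return math.exp(-abs(x) / b)

def log_laplace_density(x, beta=3.0):
    """Illustrative logarithmic-Laplacian form (assumed shape):
    exp(-beta * log(1 + |x|)) = (1 + |x|)**(-beta)."""
    return math.exp(-beta * math.log(1.0 + abs(x)))
```

    Compared with the Laplacian, this density is smaller near (but not at) zero, giving the narrower main lobe, yet larger far from zero, giving the higher tails, because log(1 + |x|) grows much more slowly than |x|.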

  14. Image-based characterization of foamed polymeric tissue scaffolds

    International Nuclear Information System (INIS)

    Mather, Melissa L; Morgan, Stephen P; Crowe, John A; White, Lisa J; Shakesheff, Kevin M; Tai, Hongyun; Howdle, Steven M; Kockenberger, Walter

    2008-01-01

    Tissue scaffolds are integral to many regenerative medicine therapies, providing suitable environments for tissue regeneration. In order to assess their suitability, methods to routinely and reproducibly characterize scaffolds are needed. Scaffold structures are typically complex, and thus their characterization is far from trivial. The work presented in this paper is centred on the application of the principles of scaffold characterization outlined in guidelines developed by ASTM International. Specifically, this work demonstrates the capabilities of different imaging modalities and analysis techniques used to characterize scaffolds fabricated from poly(lactic-co-glycolic acid) using supercritical carbon dioxide. Three structurally different scaffolds were used. The scaffolds were imaged using: scanning electron microscopy, micro x-ray computed tomography, magnetic resonance imaging and terahertz pulsed imaging. In each case two-dimensional images were obtained from which scaffold properties were determined using image processing. The findings of this work highlight how the chosen imaging modality and image-processing technique can influence the results of scaffold characterization. It is concluded that in order to obtain useful results from image-based scaffold characterization, an imaging methodology providing sufficient contrast and resolution must be used along with robust image segmentation methods to allow intercomparison of results

  15. Sparse representation based image interpolation with nonlocal autoregressive modeling.

    Science.gov (United States)

    Dong, Weisheng; Zhang, Lei; Lukac, Rastislav; Shi, Guangming

    2013-04-01

    Sparse representation is proven to be a promising approach to image super-resolution, where the low-resolution (LR) image is usually modeled as the down-sampled version of its high-resolution (HR) counterpart after blurring. When the blurring kernel is the Dirac delta function, i.e., the LR image is directly down-sampled from its HR counterpart without blurring, the super-resolution problem becomes an image interpolation problem. In such cases, however, the conventional sparse representation models (SRM) become less effective, because the data fidelity term fails to constrain the image local structures. In natural images, fortunately, many nonlocal similar patches to a given patch could provide nonlocal constraint to the local structure. In this paper, we incorporate the image nonlocal self-similarity into SRM for image interpolation. More specifically, a nonlocal autoregressive model (NARM) is proposed and taken as the data fidelity term in SRM. We show that the NARM-induced sampling matrix is less coherent with the representation dictionary, and consequently makes SRM more effective for image interpolation. Our extensive experimental results demonstrate that the proposed NARM-based image interpolation method can effectively reconstruct the edge structures and suppress the jaggy/ringing artifacts, achieving the best image interpolation results so far in terms of PSNR as well as perceptual quality metrics such as SSIM and FSIM.

  16. RELATIVE PANORAMIC CAMERA POSITION ESTIMATION FOR IMAGE-BASED VIRTUAL REALITY NETWORKS IN INDOOR ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    M. Nakagawa

    2017-09-01

    Full Text Available Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated at lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite System (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.

  17. Image segmentation-based robust feature extraction for color image watermarking

    Science.gov (United States)

    Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen

    2018-04-01

    This paper proposes a local digital image watermarking method based on robust feature extraction. Segmentation is achieved by Simple Linear Iterative Clustering (SLIC), based on which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is proposed. Our method can adaptively extract feature regions from the blocks segmented by SLIC, selecting the most robust feature region in every segmented image. Each feature region is decomposed into a low-frequency and a high-frequency domain by the Discrete Cosine Transform (DCT), and watermark images are then embedded into the coefficients in the low-frequency domain. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method performs well under various attacks. Furthermore, the proposed method achieves a trade-off between high robustness and good image quality.
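    The quantization step behind DC-DM can be sketched as plain dither modulation (QIM) on a single coefficient; the distortion-compensation term and the DCT itself are omitted here, and the step size `delta` is an illustrative assumption:

```python
import numpy as np

def dm_embed(coeff, bit, delta=8.0):
    """Dither modulation (QIM): embed one bit by snapping the coefficient
    to the quantizer lattice shifted by bit * delta/2."""
    d = bit * delta / 2.0
    return np.round((coeff - d) / delta) * delta + d

def dm_extract(coeff, delta=8.0):
    """Recover the bit by choosing the nearer of the two lattices."""
    e0 = abs(coeff - dm_embed(coeff, 0, delta))
    e1 = abs(coeff - dm_embed(coeff, 1, delta))
    return 0 if e0 <= e1 else 1
```

    Because extraction only needs the nearer lattice, the bit survives perturbations of the coefficient up to delta/4, which is the source of the scheme's robustness against mild attacks.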

  18. Adaptive Image Transmission Scheme over Wavelet-Based OFDM System

    Institute of Scientific and Technical Information of China (English)

    GAOXinying; YUANDongfeng; ZHANGHaixia

    2005-01-01

    In this paper an adaptive image transmission scheme is proposed over a Wavelet-based OFDM (WOFDM) system with Unequal error protection (UEP) through the design of a non-uniform signal constellation in MLC. Two data division schemes, byte-based and bit-based, are analyzed and compared. In the bit-based scheme, different bits are protected unequally according to their contribution to the image quality, which makes UEP combined with this scheme more powerful than with the byte-based scheme. Simulation results demonstrate that image transmission by UEP with the bit-based data division scheme yields much higher PSNR values and markedly better image quality. Furthermore, considering the trade-off between complexity and BER performance, the Haar wavelet, with the shortest compactly supported filter length, is the most suitable among the orthogonal Daubechies wavelet series in our proposed system.

  19. Cardiac biplane strain imaging: initial in vivo experience

    Science.gov (United States)

    Lopata, R. G. P.; Nillesen, M. M.; Verrijp, C. N.; Singh, S. K.; Lammens, M. M. Y.; van der Laak, J. A. W. M.; van Wetten, H. B.; Thijssen, J. M.; Kapusta, L.; de Korte, C. L.

    2010-02-01

    In this study, first we propose a biplane strain imaging method using a commercial ultrasound system, yielding estimation of the strain in three orthogonal directions. Secondly, an animal model of a child's heart was introduced that is suitable to simulate congenital heart disease and was used to test the method in vivo. The proposed approach can serve as a framework to monitor the development of cardiac hypertrophy and fibrosis. A 2D strain estimation technique using radio frequency (RF) ultrasound data was applied. Biplane image acquisition was performed at a relatively low frame rate (dogs with an aortic stenosis. Initial results reveal the feasibility of measuring large radial, circumferential and longitudinal cumulative strain (up to 70%) at a frame rate of 100 Hz. Mean radial strain curves of a manually segmented region-of-interest in the infero-lateral wall show excellent correlation between the measured strain curves acquired in two perpendicular planes. Furthermore, the results show the feasibility and reproducibility of assessing radial, circumferential and longitudinal strains simultaneously. In this preliminary study, three beagles developed an elevated pressure gradient over the aortic valve (Δp: 100-200 mmHg) and myocardial hypertrophy. One dog did not develop any sign of hypertrophy (Δp = 20 mmHg). Initial strain (rate) results showed that the maximum strain (rate) decreased with increasing valvular stenosis (-50%), which is in accordance with previous studies. Histological findings corroborated these results and showed an increase in fibrotic tissue for the hearts with larger pressure gradients (100, 200 mmHg), as well as lower strain and strain rate values.

  20. Body Image in a Sexual Context : The Relationship between Body Image and Sexual Experiences

    NARCIS (Netherlands)

    van den Brink, F.

    2017-01-01

    Given the large sociocultural emphasis on appearance and the widespread incidence of a negative body image in current society, scientific understanding of its potential psychological and physical health consequences, including sexual problems, is now of particular importance. The value of

  1. Digital image intensifier radiography: first experiences with the DSI (Digital Spot Imaging)

    International Nuclear Information System (INIS)

    Rueckforth, J.; Wein, B.; Stargardt, A.; Guenther, R.W.

    1995-01-01

    We performed a comparative study of digitally and conventionally acquired images in gastrointestinal examinations. Radiation dose and spatial resolution were determined in a water phantom. In 676 examinations with either conventional or digital imaging (system: Diagnost 76, DSI) the number of images and the duration of the fluoroscopy time were compared. 101 examinations with both digital and conventional documentation were evaluated using 5 criteria describing the diagnostic performance. The entrance dose of the DSI is 12% to 36% of that of the film/screen system, and the spatial resolution of the DSI may be better than that of a film/screen system with a speed of 200. The fluoroscopy time shows no significant difference between DSI and the film/screen technique. In 2 of 4 examination modes significantly more images were produced by the DSI. With the exception of the criterion of edge sharpness, DSI yields a significantly inferior assessment compared with the film/screen technique. (orig./MG) [de]

  2. Sociocultural Experiences of Bulimic and Non-Bulimic Adolescents in a School-Based Chinese Sample

    Science.gov (United States)

    Jackson, Todd; Chen, Hong

    2010-01-01

    From a large school-based sample (N = 3,084), 49 Mainland Chinese adolescents (31 girls, 18 boys) who endorsed all DSM-IV criteria for bulimia nervosa (BN) or sub-threshold BN and 49 matched controls (31 girls, 18 boys) completed measures of demographics and sociocultural experiences related to body image. Compared to less symptomatic peers, those…

  3. Down image recognition based on deep convolutional neural network

    Directory of Open Access Journals (Sweden)

    Wenzhu Yang

    2018-06-01

    Full Text Available Because of the scale and the various shapes of down in the image, it is difficult for traditional image recognition methods to correctly recognize the type of down image and reach the required recognition accuracy, even for a Traditional Convolutional Neural Network (TCNN). To deal with these problems, a Deep Convolutional Neural Network (DCNN) for down image classification is constructed, and a new weight initialization method is proposed. Firstly, the salient regions of a down image are cut from the image using a visual saliency model. Then, these salient regions are used to train a sparse autoencoder and obtain a collection of convolutional filters that accord with the statistical characteristics of the dataset. Finally, a DCNN with Inception modules and their variants is constructed, and the depth of the network is increased to improve the recognition accuracy. The experimental results indicate that the constructed DCNN increases recognition accuracy by 2.7% compared with the TCNN when recognizing down in images, and the convergence rate of the proposed DCNN with the new weight initialization method improves by 25.5% compared with the TCNN. Keywords: Deep convolutional neural network, Weight initialization, Sparse autoencoder, Visual saliency model, Image recognition

  4. Negotiating the Client-Based Capstone Experience

    Science.gov (United States)

    Reifenberg, Steve; Long, Sean

    2017-01-01

    Many graduate programs for professionals (public policy, public administration, business, international affairs, and others) use client-based experiential learning projects, often termed "capstones," in which students combine theory and practice to benefit an outside client. Increasingly, undergraduate programs use client-based capstones…

  5. Zero-Base Budgeting: An Institutional Experience.

    Science.gov (United States)

    Alexander, Donald L.; Anderson, Roger C.

    Zero-base budgeting as it is used at Allegany College is described. Zero-based budgeting is defined as a budgeting and planning approach that requires the examination of every item in a budget request as if the request were being proposed for the first time. Budgets (decision packages) are first made up for decision units (i.e., a course for the…

  6. A simple polarized-based diffused reflectance colour imaging system

    African Journals Online (AJOL)

    A simple polarized-based diffuse reflectance imaging system has been developed. The system is designed for both in vivo and in vitro imaging of agricultural specimens in the visible region. The system uses a commercial web camera and a halogen lamp, which makes it relatively simple and less expensive for diagnostic ...

  7. Efficient Image Blur in Web-Based Applications

    DEFF Research Database (Denmark)

    Kraus, Martin

    2010-01-01

    Scripting languages require the use of high-level library functions to implement efficient image processing; thus, real-time image blur in web-based applications is a challenging task unless specific library functions are available for this purpose. We present a pyramid blur algorithm, which can ...

  8. Geographic Object-Based Image Analysis: Towards a new paradigm

    NARCIS (Netherlands)

    Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.A.|info:eu-repo/dai/nl/224281216; Queiroz Feitosa, R.; van der Meer, F.D.|info:eu-repo/dai/nl/138940908; van der Werff, H.M.A.; van Coillie, F.; Tiede, A.

    2014-01-01

    The amount of scientific literature on (Geographic) Object-based Image Analysis – GEOBIA has been and still is sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature

  9. Reconfigurable pipelined sensing for image-based control

    NARCIS (Netherlands)

    Medina, R.; Stuijk, S.; Goswami, D.; Basten, T.

    2016-01-01

    Image-based control systems are becoming common in domains such as robotics, healthcare and industrial automation. Coping with a long sample period caused by the latency of the image processing algorithm is an open challenge. Modern multi-core platforms make it possible to address this challenge by pipelining

  10. A World Wide Web Region-Based Image Search Engine

    DEFF Research Database (Denmark)

    Kompatsiaris, Ioannis; Triantafyllou, Evangelia; Strintzis, Michael G.

    2001-01-01

    In this paper the development of an intelligent image content-based search engine for the World Wide Web is presented. This system will offer a new form of media representation and access of content available in WWW. Information Web Crawlers continuously traverse the Internet and collect images...

  11. Luminescence imaging strategies for drone-based PV array inspection

    DEFF Research Database (Denmark)

    Benatto, Gisele Alves dos Reis; Riedel, Nicholas; Mantel, Claire

    2017-01-01

    The goal of this work is to perform outdoor defect detection imaging that will be used in a fast, accurate and automatic drone-based survey system for PV power plants. The imaging development focuses on techniques that do not require electrical contact, permitting automatic drone inspections...

  12. Sampling in image space for vision based SLAM

    NARCIS (Netherlands)

    Booij, O.; Zivkovic, Z.; Kröse, B.

    2008-01-01

    Loop closing in vision based SLAM applications is a difficult task. Comparing new image data with all previous image data acquired for the map is practically impossible because of the high computational costs. This problem is part of the bigger problem to acquire local geometric constraints from

  13. A Constrained Algorithm Based NMFα for Image Representation

    Directory of Open Access Journals (Sweden)

    Chenxue Yang

    2014-01-01

    Full Text Available Nonnegative matrix factorization (NMF) is a useful tool for learning a basic representation of image data. However, its performance and applicability in real scenarios are limited by the lack of image information. In this paper, we propose a constrained matrix decomposition algorithm for image representation that contains parameters associated with the characteristics of image data sets. In particular, we impose label information as additional hard constraints on the α-divergence-NMF unsupervised learning algorithm. The resulting algorithm is derived using the Karush-Kuhn-Tucker (KKT) conditions as well as the projected gradient, and its monotonic local convergence is proved using auxiliary functions. In addition, we provide a method to select the parameters of our semi-supervised matrix decomposition algorithm in the experiments. Compared with state-of-the-art approaches, our method with these parameters achieves the best classification accuracy on three image data sets.
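    For context, the unconstrained baseline that the proposed algorithm extends can be sketched as standard multiplicative-update NMF (Euclidean loss is shown here for brevity, rather than the paper's α-divergence):

```python
import numpy as np

def nmf(V, r, iters=200, eps=1e-9, seed=0):
    """Plain multiplicative-update NMF with Euclidean loss.

    Factorises a nonnegative V (m x n) as W (m x r) @ H (r x n); the
    updates keep both factors nonnegative by construction.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis
    return W, H
```

    The constrained variant of the paper adds label-derived hard constraints on H and replaces the loss by the α-divergence; the multiplicative structure of the updates is the shared ingredient.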

  14. Image registration based on virtual frame sequence analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chen, H.; Ng, W.S. [Nanyang Technological University, Computer Integrated Medical Intervention Laboratory, School of Mechanical and Aerospace Engineering, Singapore (Singapore); Shi, D. [Nanyang Technological University, School of Computer Engineering, Singapore (Singapore)]; Wee, S.B. [Tan Tock Seng Hospital, Department of General Surgery, Singapore (Singapore)]

    2007-08-15

    This paper proposes a new framework for medical image registration with large nonrigid deformations, which remains one of the biggest challenges for image fusion and further analysis in many medical applications. The registration problem is formulated as recovering a deformation process with known initial and final states. To deal with large nonlinear deformations, virtual frames are inserted to model the deformation process. A time parameter is introduced, and the deformation between consecutive frames is described with a linear affine transformation. Experiments are conducted with simple geometric deformations as well as complex deformations present in MRI and ultrasound images. All the deformations are characterized by nonlinearity. The positive results demonstrate the effectiveness of this algorithm. The proposed framework can register medical images with large nonlinear deformations and is especially useful for sequential images. (orig.)

  15. Wavelet-based compression of pathological images for telemedicine applications

    Science.gov (United States)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

    In this paper, we present a performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for use in an Internet-based telemedicine system. We first study how well suited wavelet-based coding is to the compression of pathological images, since these images often contain fine textures that are critical to the diagnosis of potential diseases. We compare wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies were performed in close collaboration with the expert pathologists who evaluated the compressed pathological images and the communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed, in which wavelet-based coding is adopted for compression to achieve bandwidth-efficient transmission and therefore speed up communications between the remote terminal and the central server of the telemedicine system.
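    The analysis/synthesis stage of such a codec can be sketched with a one-level 2D Haar transform (hand-rolled for illustration; a production codec would use longer wavelet filters plus quantization and entropy coding):

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar analysis: returns (LL, LH, HL, HH) subbands.
    Assumes even height and width."""
    a = img.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0   # row-wise averages
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0   # row-wise differences
    LL = (lo[0::2] + lo[1::2]) / 2.0
    LH = (lo[0::2] - lo[1::2]) / 2.0
    HL = (hi[0::2] + hi[1::2]) / 2.0
    HH = (hi[0::2] - hi[1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    h, w = LL.shape
    lo = np.empty((2 * h, w)); hi = np.empty((2 * h, w))
    lo[0::2], lo[1::2] = LL + LH, LL - LH
    hi[0::2], hi[1::2] = HL + HH, HL - HH
    out = np.empty((2 * h, 2 * w))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out
```

    Compression then amounts to zeroing or coarsely quantizing small LH/HL/HH detail coefficients before entropy coding; the transform pair itself is exactly invertible.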

  16. Personal identification based on blood vessels of retinal fundus images

    Science.gov (United States)

    Fukuta, Keisuke; Nakagawa, Toshiaki; Hayashi, Yoshinori; Hatanaka, Yuji; Hara, Takeshi; Fujita, Hiroshi

    2008-03-01

    Biometric techniques have been implemented in place of conventional identification methods such as passwords in computers, automatic teller machines (ATMs), and entrance and exit management systems. We propose a personal identification (PI) system using color retinal fundus images, which are unique to each individual. The proposed identification procedure is based on comparing an input fundus image with reference fundus images in the database. In the first step, registration between the input image and the reference image is performed; this step includes translational and rotational movement. The PI is based on a measure of similarity between blood vessel images generated from the input and reference images. The similarity measure is defined as the cross-correlation coefficient calculated from the pixel values. When the similarity is greater than a predetermined threshold, the input image is identified, meaning that the input and reference images belong to the same person. Four hundred sixty-two fundus images, including forty-one same-person image pairs, were used to evaluate the proposed technique. The false rejection rate and the false acceptance rate were 9.9×10⁻⁵% and 4.3×10⁻⁵%, respectively. The results indicate that the proposed method performs better than other biometrics except DNA. For practical public use, a device that can easily capture retinal fundus images is needed. The proposed method applies not only to PI but also to a system that warns about the misfiling of fundus images in medical facilities.
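    The similarity measure named above, the cross-correlation coefficient over pixel values, can be sketched as follows; the acceptance `threshold` is an illustrative assumption, not the paper's tuned value, and registration is assumed to have been done already:

```python
import numpy as np

def cross_correlation(a, b):
    """Normalised cross-correlation coefficient between two
    equally-sized (registered) vessel images, from pixel values."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)

def identify(query, references, threshold=0.7):
    """Return the index of the best-matching reference, or None when the
    best similarity stays below the acceptance threshold."""
    sims = [cross_correlation(query, r) for r in references]
    best = int(np.argmax(sims))
    return best if sims[best] >= threshold else None
```
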

  17. Opportunities in Participatory Science and Citizen Science with MRO's High Resolution Imaging Science Experiment: A Virtual Science Team Experience

    Science.gov (United States)

    Gulick, Ginny

    2009-09-01

    We report on the accomplishments of the HiRISE EPO program over the last two and a half years of science operations. We have focused primarily on delivering high-impact science opportunities through our various participatory science and citizen science websites. Uniquely, we have invited students from around the world to become virtual HiRISE team members by submitting target suggestions via our HiRISE Quest Image challenges, using HiWeb, the team's image suggestion facility web tools. When images are acquired, students analyze their returned images, write a report, and work with a HiRISE team member to write an image caption for release on the HiRISE website (http://hirise.lpl.arizona.edu). Another E/PO highlight has been our citizen scientist effort, HiRISE Clickworkers (http://clickworkers.arc.nasa.gov/hirise). Clickworkers enlists volunteers to identify geologic features (e.g., dunes, craters, wind streaks, gullies, etc.) in the HiRISE images and help generate searchable image databases. In addition, the large image sizes and incredible spatial resolution of the HiRISE camera can tax the capabilities of the most capable computers, so we have also focused on enabling typical users to browse, pan and zoom the HiRISE images using our HiRISE online image viewer (http://marsoweb.nas.nasa.gov/HiRISE/hirise_images/). Our educational materials, available on the HiRISE EPO web site (http://hirise.seti.org/epo), include an assortment of K-through-college, standards-based activity books, a K-3 coloring/story book, a middle-school comic book, and several interactive educational games, including Mars jigsaw puzzles, crosswords, word searches and flash cards.

  18. Imaging microchannel plate detectors for XUV sky survey experiments

    International Nuclear Information System (INIS)

    Barstow, M.A.; Fraser, G.W.; Milward, S.R.

    1986-01-01

    Attention is given to the development of microchannel plate detectors for the Wide Field Camera (WFC) XUV (50-300 A) sky survey experiment on Rosat. A novel feature of the detector design is that the microchannel plates and their resistive anode readout are curved to the same radius as the WFC telescope focal surface. It is shown that curving the channel plates is not detrimental to gain uniformity. The paper describes the design of a curved resistive anode readout element and contrasts the present measurements of spatial resolution, global and local uniformity and temperature coefficient of resistance with the poor performance recently ascribed to resistive anodes in the literature. 18 references

  19. Imperceptible reversible watermarking of radiographic images based on quantum noise masking.

    Science.gov (United States)

    Pan, Wei; Bouslimi, Dalel; Karasad, Mohamed; Cozic, Michel; Coatrieux, Gouenou

    2018-07-01

    Advances in information and communication technologies boost the sharing of and remote access to medical images. Along with this evolution, needs in terms of data security also increase. Watermarking can contribute to better protecting images by dissimulating into their pixels some security attributes (e.g., a digital signature or user identifier). But to take full advantage of this technology in healthcare, one key problem to address is ensuring that the image distortion induced by the watermarking process does not endanger the image's diagnostic value. To overcome this issue, reversible watermarking is one solution: it allows watermark removal with exact recovery of the image. Unfortunately, reversibility does not mean that imperceptibility constraints are relaxed. Indeed, once the watermark is removed, the image is unprotected. It is thus important to ensure the invisibility of the reversible watermark in order to ensure permanent image protection. We propose a new fragile reversible watermarking scheme for digital radiographic images, the main originality of which lies in masking a reversible watermark within the image quantum noise (the dominant noise in radiographic images). More precisely, in order to ensure watermark imperceptibility, our scheme differentiates the image's black background, where message embedding is conducted on pixel gray values with the well-known histogram shifting (HS) modulation, from the anatomical object, where HS is applied to wavelet detail coefficients, masking the watermark with the image quantum noise. In order to keep the watermark embedder and reader synchronized in terms of image partitioning and insertion domain, our scheme makes use of different classification processes that are invariant to message embedding. We provide the theoretical performance limits of our scheme in terms of image distortion and message size (i.e., capacity). Experiments conducted on more than 800 12-bit radiographic images
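    The histogram shifting (HS) modulation mentioned above can be sketched on gray values as follows. This is a minimal sketch under simplifying assumptions: a single peak/zero bin pair with the zero bin to the right of the peak, a truly empty zero bin (otherwise a location map is needed), and helper names that are mine; the paper additionally applies HS to wavelet detail coefficients for the anatomical region.

    ```python
    import numpy as np

    def hs_embed(img, bits):
        """Embed bits into peak-bin pixels; returns the watermarked
        image plus the (peak, zero) pair needed for exact recovery."""
        hist = np.bincount(img.ravel(), minlength=256)
        peak = int(hist.argmax())
        # empty (or minimum) bin to the right of the peak
        zero = peak + 1 + int(hist[peak + 1:].argmin())
        out = img.copy().astype(np.int32)
        out[(out > peak) & (out < zero)] += 1           # shift to free peak+1
        carriers = np.flatnonzero(img.ravel() == peak)  # capacity = hist[peak]
        flat = out.ravel()
        for idx, bit in zip(carriers, bits):
            flat[idx] = peak + bit                      # peak = 0, peak+1 = 1
        return flat.reshape(img.shape), peak, zero

    def hs_extract(wm, peak, zero, n_bits):
        """Read n_bits back and restore the original image exactly."""
        flat = wm.ravel().copy()
        carriers = np.flatnonzero((flat == peak) | (flat == peak + 1))[:n_bits]
        bits = [int(flat[i] == peak + 1) for i in carriers]
        flat[carriers] = peak                           # undo embedding
        flat[(flat >= peak + 2) & (flat <= zero)] -= 1  # undo the shift
        return flat.reshape(wm.shape), bits
    ```

    Reversibility comes from the shift being invertible once the embedded peak/peak+1 values are restored, which is why the scheme recovers the image bit-exactly.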

  20. Despeckling Polsar Images Based on Relative Total Variation Model

    Science.gov (United States)

    Jiang, C.; He, X. F.; Yang, L. J.; Jiang, J.; Wang, D. Y.; Yuan, Y.

    2018-04-01

    The relative total variation (RTV) algorithm, which can effectively separate structure from texture in an image, is employed to extract an image's main structures. However, applying RTV directly to polarimetric SAR (PolSAR) image filtering does not preserve polarimetric information. A new RTV approach based on the complex Wishart distribution is therefore proposed, which accounts for the polarimetric properties of PolSAR data. The proposed polarization RTV (PolRTV) algorithm can be used for PolSAR image filtering. L-band Airborne SAR (AIRSAR) San Francisco data is used to demonstrate the effectiveness of the proposed algorithm in speckle suppression, structural information preservation, and polarimetric property preservation.

  1. Content-based image retrieval with ontological ranking

    Science.gov (United States)

    Tsai, Shen-Fu; Tsai, Min-Hsuan; Huang, Thomas S.

    2010-02-01

    Images are a much more powerful medium of expression than text; as the adage says, "one picture is worth a thousand words." Compared with text, which consists of an array of words, an image has more degrees of freedom and therefore a more complicated structure. However, the less constrained structure of images presents researchers in the computer vision community with the tough task of teaching machines to understand and organize images, especially when only a limited number of learning examples and limited background knowledge are given. The advance of Internet and web technology in the past decade has changed the way humans gain knowledge. People can exchange knowledge with others by discussing and contributing information on the web. As a result, web pages have become a living and growing source of information. One is therefore tempted to wonder whether machines can learn from this web knowledge base as well. Indeed, it is possible to make computers learn from the Internet and provide humans with more meaningful knowledge. In this work, we explore this novel possibility for image understanding applied to semantic image search. We exploit web resources to obtain links from images to keywords and a semantic ontology constituting humans' general knowledge. The former maps visual content to related text, in contrast to the traditional way of associating images with surrounding text; the latter provides relations between concepts so that machines can understand to what extent, and in what sense, an image is close to the image search query. With the aid of these two tools, the resulting image search system is content-based and, moreover, organized. The returned images are ranked and organized such that semantically similar images are grouped together and given a rank based on semantic closeness to the input query. 
The novelty of the system is twofold: first, images are retrieved not only based on text cues but their actual contents as well; second, the grouping

  2. Contour extraction of echocardiographic images based on pre-processing

    Energy Technology Data Exchange (ETDEWEB)

    Hussein, Zinah Rajab; Rahmat, Rahmita Wirza; Abdullah, Lili Nurliyana [Department of Multimedia, Faculty of Computer Science and Information Technology, Department of Computer and Communication Systems Engineering, Faculty of Engineering University Putra Malaysia 43400 Serdang, Selangor (Malaysia); Zamrin, D M [Department of Surgery, Faculty of Medicine, National University of Malaysia, 56000 Cheras, Kuala Lumpur (Malaysia); Saripan, M Iqbal

    2011-02-15

    In this work we present a technique to extract heart contours from noisy echocardiograph images. Our technique improves the image before contour detection is applied, reducing heavy noise and yielding better image quality. To do so, we combine several pre-processing techniques (filtering, morphological operations, and contrast adjustment) to sharpen unclear edges and enhance the low contrast of echocardiograph images; after applying these techniques, heart boundaries and valve movement can be legibly detected by traditional edge-detection methods.

  3. Contour extraction of echocardiographic images based on pre-processing

    International Nuclear Information System (INIS)

    Hussein, Zinah Rajab; Rahmat, Rahmita Wirza; Abdullah, Lili Nurliyana; Zamrin, D M; Saripan, M Iqbal

    2011-01-01

    In this work we present a technique to extract heart contours from noisy echocardiograph images. Our technique improves the image before contour detection is applied, reducing heavy noise and yielding better image quality. To do so, we combine several pre-processing techniques (filtering, morphological operations, and contrast adjustment) to sharpen unclear edges and enhance the low contrast of echocardiograph images; after applying these techniques, heart boundaries and valve movement can be legibly detected by traditional edge-detection methods.

  4. Improving Docking Performance Using Negative Image-Based Rescoring.

    Science.gov (United States)

    Kurkinen, Sami T; Niinivehmas, Sanna; Ahinko, Mira; Lätti, Sakari; Pentikäinen, Olli T; Postila, Pekka A

    2018-01-01

    Despite the large computational costs of molecular docking, the default scoring functions are often unable to distinguish the active hits from the inactive molecules in large-scale virtual screening experiments. Thus, even though a correct binding pose might be sampled during the docking, the active compound or its biologically relevant pose is not necessarily given a high enough score to attract attention. Various rescoring and post-processing approaches have emerged for improving the docking performance. Here, it is shown that the very early enrichment (the number of actives scored higher than the top 1% of the ranked decoys) can be improved on average 2.5-fold, or even 8.7-fold, by comparing the docking-based ligand conformers directly against the target protein's cavity shape and electrostatics. The similarity comparison of the conformers is performed without geometry optimization against the negative image of the target protein's ligand-binding cavity using the negative image-based (NIB) screening protocol. The viability of NIB rescoring, or R-NiB, pioneered in this study, was tested with 11 target proteins using benchmark libraries. By focusing on the shape/electrostatics complementarity of the ligand-receptor association, R-NiB is able to improve the early enrichment of docking essentially without adding to the computing cost. By implementing consensus scoring, in which the R-NiB and the original docking scoring are weighted for optimal outcome, the early enrichment is improved to a level that facilitates effective drug discovery. Moreover, using equal weights for the original docking scoring and the R-NiB scoring improves the yield in most cases.

  5. Image Fusion-Based Land Cover Change Detection Using Multi-Temporal High-Resolution Satellite Images

    Directory of Open Access Journals (Sweden)

    Biao Wang

    2017-08-01

    Full Text Available Change detection is usually treated as a problem of explicitly detecting land cover transitions in satellite images obtained at different times, and helps with emergency response and government management. This study presents an unsupervised change detection method based on the image fusion of multi-temporal images. The main objective of this study is to improve the accuracy of unsupervised change detection from high-resolution multi-temporal images. Our method effectively reduces change detection errors, since spatial displacement and spectral differences between multi-temporal images are evaluated. To this end, a total of four cross-fused images are generated with multi-temporal images, and the iteratively reweighted multivariate alteration detection (IR-MAD method—a measure for the spectral distortion of change information—is applied to the fused images. In this experiment, the land cover change maps were extracted using multi-temporal IKONOS-2, WorldView-3, and GF-1 satellite images. The effectiveness of the proposed method compared with other unsupervised change detection methods is demonstrated through experimentation. The proposed method achieved an overall accuracy of 80.51% and 97.87% for cases 1 and 2, respectively. Moreover, the proposed method performed better when differentiating the water area from the vegetation area compared to the existing change detection methods. Although the water area beneath moderate and sparse vegetation canopy was captured, vegetation cover and paved regions of the water body were the main sources of omission error, and commission errors occurred primarily in pixels of mixed land use and along the water body edge. Nevertheless, the proposed method, in conjunction with high-resolution satellite imagery, offers a robust and flexible approach to land cover change mapping that requires no ancillary data for rapid implementation.

  6. User Throughput-Based Quality of Experience

    DEFF Research Database (Denmark)

    Hiranandani, Bhavesh; Sarkar, Mahasweta; Mihovska, Albena

    of the users Quality of Experience (QoE). Today, most of the airlines have started providing in-flight wi-fi services, which allow their passengers to use Internet services to send and receive e-mails, and stream video from various online service providers while on board the flight. Statistics show that more...... than 50% of the passengers use the provided wi-fi service to stream video, therefore, their perception of the video service will be determining for the service provider’s performance. One easy way to evaluate the perceived video streaming (i.e. QoE) is by estimating the frequency of stalls. In our...

  7. Image-Based Fine-Scale Infrastructure Monitoring

    Science.gov (United States)

    Detchev, Ivan Denislavov

    Monitoring the physical health of civil infrastructure systems is an important task that must be performed frequently in order to ensure their serviceability and sustainability. Additionally, laboratory experiments where individual system components are tested on the fine-scale level provide essential information during the structural design process. This type of inspection, i.e., measurements of deflections and/or cracks, has traditionally been performed with instrumentation that requires access to, or contact with, the structural element being tested; performs deformation measurements in only one dimension or direction; and/or provides no permanent visual record. To avoid the downsides of such instrumentation, this dissertation proposes a remote sensing approach based on a photogrammetric system capable of three-dimensional reconstruction. The proposed system is low-cost, consists of off-the-shelf components, and is capable of reconstructing objects or surfaces with homogeneous texture. The scientific contributions of this research work address the drawbacks in currently existing literature. Methods for in-situ multi-camera system calibration and system stability analysis are proposed in addition to methods for deflection/displacement monitoring, and crack detection and characterization in three dimensions. The mathematical model for the system calibration is based on a single or multiple reference camera(s) and built-in relative orientation constraints where the interior orientation and the mounting parameters for all cameras are explicitly estimated. The methods for system stability analysis can be used to comprehensively check for the cumulative impact of any changes in the system parameters. They also provide a quantitative measure of this impact on the reconstruction process in terms of image space units. Deflection/displacement monitoring of dynamic surfaces in three dimensions is achieved with the system by performing an innovative sinusoidal fitting

  8. A camac based data acquisition system for flat-panel image array readout

    International Nuclear Information System (INIS)

    Morton, E.J.; Antonuk, L.E.; Berry, J.E.; Huang, W.; Mody, P.; Yorkston, J.; Longo, M.J.

    1993-01-01

    A readout system has been developed to facilitate the digitization and subsequent display of image data from two-dimensional, pixellated, flat-panel, amorphous silicon imaging arrays. These arrays have been designed specifically for medical x-ray imaging applications. The readout system is based on hardware and software developed for various experiments at CERN and Fermi National Accelerator Laboratory. Additional analog signal processing and digital control electronics were constructed specifically for this application. The authors report on the form of the resulting data acquisition system, discuss aspects of its performance, and consider the compromises which were involved in its design

  9. Remote Sensing Image Fusion Based on the Combination Grey Absolute Correlation Degree and IHS Transform

    Directory of Open Access Journals (Sweden)

    Hui LIN

    2014-12-01

    Full Text Available An improved fusion algorithm for multi-source remote sensing images with high spatial resolution and multi-spectral capacity is proposed, based on traditional IHS fusion and grey correlation analysis. First, the grey absolute correlation degree is used to discriminate edge pixels from non-edge pixels in the high-spatial-resolution image, from which the weight of the intensity component is determined for combination with the high-spatial-resolution image; image fusion is then achieved using the inverse IHS transform. The proposed method is applied to ETM+ multi-spectral and panchromatic images, and to Quickbird multi-spectral and panchromatic images. The experiments show that the proposed fusion method efficiently preserves the spectral information of the original multi-spectral images while greatly enhancing spatial resolution. By comparison and analysis, the proposed fusion algorithm outperforms both traditional IHS fusion and a fusion method based on grey correlation analysis and the IHS transform.
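    The intensity-substitution core of IHS fusion can be sketched with the linear intensity form I = (R+G+B)/3. This is a simplified illustration: the paper derives the intensity weight per pixel from the grey absolute correlation degree, whereas here `w` is a fixed parameter and the function name is mine.

    ```python
    import numpy as np

    def ihs_fuse(ms_rgb, pan, w=1.0):
        """IHS-style fusion with the linear intensity I = (R+G+B)/3:
        replace I by a (weighted) panchromatic band and add the
        difference back to every band (the inverse transform)."""
        ms = ms_rgb.astype(float)
        intensity = ms.mean(axis=2)
        # w = 1.0 means full substitution; the paper instead computes a
        # per-pixel weight from the grey absolute correlation degree.
        new_i = w * pan.astype(float) + (1.0 - w) * intensity
        return ms + (new_i - intensity)[..., None]
    ```

    With the linear intensity form, the inverse IHS transform reduces to adding the intensity difference to each band, which is why spectral ratios between bands are largely preserved.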

  10. A novel image encryption scheme based on spatial chaos map

    International Nuclear Information System (INIS)

    Sun Fuyan; Liu Shutang; Li Zhongqin; Lue Zongwang

    2008-01-01

    In recent years, chaos-based cryptographic algorithms have suggested new and efficient ways to develop secure image encryption techniques, but the drawbacks of small key space and weak security in one-dimensional chaotic cryptosystems are obvious. In this paper, a spatial chaos system is used for high-security image encryption at acceptable speed. The proposed algorithm is described in detail. The basic idea is to encrypt the image in space with the spatial chaos map pixel by pixel, and then to confuse the pixels in multiple spatial directions. After only one cycle of this method, the image becomes indistinguishable in space due to the inherent properties of spatial chaotic systems. Several experimental results, key sensitivity tests, key space analysis, and statistical analysis show that the approach provides an efficient and secure way for real-time image encryption and transmission from the cryptographic viewpoint.
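    As a rough illustration of pixel-by-pixel keystream encryption followed by positional confusion, the sketch below substitutes a one-dimensional logistic map for the paper's spatial chaos system. All parameter values and function names are illustrative, not the paper's.

    ```python
    import numpy as np

    def logistic_keystream(n, x0=0.3579, r=3.99):
        """Byte keystream from the logistic map x -> r*x*(1-x);
        (x0, r) act as the secret key (values here are illustrative)."""
        xs = np.empty(n)
        x = x0
        for i in range(n):
            x = r * x * (1 - x)
            xs[i] = x
        return (xs * 255).astype(np.uint8)

    def encrypt(img, x0=0.3579, r=3.99):
        ks = logistic_keystream(img.size, x0, r)
        c = np.bitwise_xor(img.ravel(), ks)        # pixel-by-pixel encryption
        # confusion step: permute pixel positions in a key-derived order
        perm = np.argsort(logistic_keystream(img.size, x0 / 2, r))
        return c[perm].reshape(img.shape), perm

    def decrypt(cipher, perm, x0=0.3579, r=3.99):
        ks = logistic_keystream(cipher.size, x0, r)
        inv = np.argsort(perm)                     # inverse permutation
        return np.bitwise_xor(cipher.ravel()[inv], ks).reshape(cipher.shape)
    ```

    A genuine spatial chaos map would replace the one-dimensional iteration with a two-dimensional chaotic system, giving the larger key space the abstract argues for.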

  11. Image based method for aberration measurement of lithographic tools

    Science.gov (United States)

    Xu, Shuang; Tao, Bo; Guo, Yongxing; Li, Gongfa

    2018-01-01

    Information about the lens aberration of lithographic tools is important, as it directly affects the intensity distribution in the image plane. Zernike polynomials are commonly used for a mathematical description of lens aberrations. Owing to their lower cost and easier implementation, image-based measurement techniques have been widely used. Lithographic tools are typically partially coherent systems that can be described by a bilinear model, which entails time-consuming calculations and does not yield a simple, intuitive relationship between lens aberrations and the resulting images. Previous methods for retrieving lens aberrations in such partially coherent systems involve through-focus image measurements and time-consuming iterative algorithms. In this work, we propose a method for aberration measurement in lithographic tools that only requires measuring two images of the intensity distribution. Two linear formulations are derived in matrix form that directly relate the measured images to the unknown Zernike coefficients. Consequently, an efficient non-iterative solution is obtained.

  12. Medical image security using modified chaos-based cryptography approach

    Science.gov (United States)

    Talib Gatta, Methaq; Al-latief, Shahad Thamear Abd

    2018-05-01

    The progressive development of telecommunication and networking technologies has increased the popularity of telemedicine, which involves the storage and transfer of medical images and related information, and security concerns have emerged accordingly. This paper presents a method to secure medical images, since they play a major role in healthcare organizations. The main idea in this work is based on a chaotic sequence, providing an efficient encryption method that allows reconstructing the original image from the encrypted image with high quality and minimal distortion of its content, without affecting treatment and diagnosis. Experimental results demonstrate the efficiency of the proposed method using statistical measures and the strong correlation between the original and decrypted images.

  13. Image Registration Algorithm Based on Parallax Constraint and Clustering Analysis

    Science.gov (United States)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-01-01

    To resolve the problems of slow computation and low matching accuracy in image registration, a new image registration algorithm based on a parallax constraint and clustering analysis is proposed. First, the Harris corner detection algorithm is used to extract feature points from the two images. Second, the Normalized Cross-Correlation (NCC) function is used to approximately match the feature points, yielding the initial feature pairs. Then, according to the parallax constraint, the initial pairs are preprocessed with the K-means clustering algorithm, which removes feature-point pairs with obvious errors from the approximate matching step. Finally, the Random Sample Consensus (RANSAC) algorithm refines the feature points to obtain the final matching result, realizing fast and accurate image registration. The experimental results show that the proposed algorithm improves matching accuracy while preserving real-time performance.
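    The approximate NCC matching step can be sketched as follows. This is a minimal illustration with hypothetical helper names; corner detection, the K-means parallax filtering, and RANSAC refinement are omitted.

    ```python
    import numpy as np

    def ncc(patch_a, patch_b):
        """Normalized cross-correlation of two equally sized patches."""
        a = patch_a.astype(float).ravel(); a -= a.mean()
        b = patch_b.astype(float).ravel(); b -= b.mean()
        return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def match_features(img1, img2, corners1, corners2, win=5, t=0.9):
        """Approximate matching: pair each corner in img1 with the img2
        corner whose surrounding window maximizes NCC above threshold t."""
        r = win // 2
        pairs = []
        for (y1, x1) in corners1:
            p1 = img1[y1 - r:y1 + r + 1, x1 - r:x1 + r + 1]
            best, best_pt = t, None
            for (y2, x2) in corners2:
                p2 = img2[y2 - r:y2 + r + 1, x2 - r:x2 + r + 1]
                if p1.shape != p2.shape:
                    continue  # window clipped at the image border
                s = ncc(p1, p2)
                if s > best:
                    best, best_pt = s, (y2, x2)
            if best_pt is not None:
                pairs.append(((y1, x1), best_pt))
        return pairs
    ```

    In the full pipeline, the pairs returned here would be filtered by the parallax-constrained K-means step and then refined with RANSAC.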

  14. Entropy-Based Block Processing for Satellite Image Registration

    Directory of Open Access Journals (Sweden)

    Ikhyun Lee

    2012-11-01

    Full Text Available Image registration is an important task in many computer vision applications such as fusion systems, 3D shape recovery, and earth observation. Registering satellite images is particularly challenging and time-consuming due to limited resources and large image sizes. In such a scenario, state-of-the-art image registration methods such as the scale-invariant feature transform (SIFT) may not be suitable due to their high processing time. In this paper, we propose an algorithm based on entropy-driven block processing to register satellite images. The performance of the proposed method is evaluated using different real images. The comparative analysis shows that it not only reduces the processing time but also enhances the accuracy.
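    The core idea, scoring blocks by Shannon entropy so that only information-rich blocks enter the registration, can be sketched as follows. The block size and helper names here are illustrative assumptions, not the paper's.

    ```python
    import numpy as np

    def block_entropy(img, block=32):
        """Shannon entropy (bits) of each non-overlapping block of an
        8-bit image; high-entropy blocks carry the most structure."""
        h, w = img.shape
        ent = {}
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                tile = img[y:y + block, x:x + block]
                p = np.bincount(tile.ravel(), minlength=256) / tile.size
                p = p[p > 0]
                ent[(y, x)] = float(-(p * np.log2(p)).sum())
        return ent

    def select_blocks(img, block=32, k=4):
        """Keep only the k most informative blocks for registration."""
        ent = block_entropy(img, block)
        return sorted(ent, key=ent.get, reverse=True)[:k]
    ```

    Restricting feature extraction and matching to the selected blocks is what cuts the processing time relative to whole-image methods such as SIFT.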

  15. Image Inpainting Based on Coherence Transport with Adapted Distance Functions

    KAUST Repository

    Mä rz, Thomas

    2011-01-01

    We discuss an extension of our method, image inpainting based on coherence transport. For the latter method, the pixels of the inpainting domain have to be serialized into an ordered list. Until now, to induce the serialization we have used

  16. X-ray detectors based on image sensors

    International Nuclear Information System (INIS)

    Costa, A.P.R.

    1983-01-01

    X-ray detectors based on image sensors are described and a comparison is made between the advantages and the disadvantages of such a kind of detectors with the position sensitive detectors. (L.C.) [pt

  17. Hardware Realization of Chaos Based Symmetric Image Encryption

    KAUST Repository

    Barakat, Mohamed L.

    2012-01-01

    This thesis presents a novel work on the hardware realization of symmetric image encryption utilizing chaos-based continuous systems as pseudo-random number generators. Digital implementation of chaotic systems results in serious degradations

  18. Iris Image Classification Based on Hierarchical Visual Codebook.

    Science.gov (United States)

    Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang

    2014-06-01

    Iris recognition, as a reliable method for personal identification, has been well studied with the objective of assigning each iris image the class label of a unique subject. In contrast, iris image classification aims to classify an iris image into an application-specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), or coarse-to-fine iris identification (classification of all iris images in a central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called the Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely the Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as a benchmark for research on iris liveness detection.

  19. Smart Images Search based on Visual Features Fusion

    International Nuclear Information System (INIS)

    Saad, M.H.

    2013-01-01

    Image search engines attempt to give fast and accurate access to the huge number of images available on the Internet. There have been a number of efforts to build search engines based on image content to enhance search results. Content-Based Image Retrieval (CBIR) systems have attracted great interest since multimedia files, such as images and videos, have dramatically entered our lives throughout the last decade. CBIR automatically extracts target images according to the objective visual contents of the image itself, for example its shapes, colors, and textures, to provide more accurate ranking of the results. Recent CBIR approaches differ in which image features are extracted and used as image descriptors for the matching process. This thesis proposes improvements to the efficiency and accuracy of CBIR systems by integrating different types of image features. The framework addresses efficient retrieval of images in large image collections. A comparative study between recent CBIR techniques is provided; according to this study, image features need to be integrated to provide a more accurate description of image content and better image retrieval accuracy. In this context, this thesis presents new image retrieval approaches that provide more accurate retrieval than previous approaches. The first proposed image retrieval system uses color, texture, and shape descriptors to form a global feature vector. This approach integrates the YCbCr color histogram as a color descriptor, the modified Fourier descriptor as a shape descriptor, and a modified edge histogram as a texture descriptor in order to enhance the retrieval results. The second proposed approach integrates the global feature vector, which is used in the first approach, with the SURF salient-point technique as a local feature. The nearest-neighbor matching algorithm with a proposed similarity measure is applied to determine the final image rank. 
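    The global-feature ranking idea can be sketched with a toy descriptor. A plain per-channel color histogram stands in for the paper's YCbCr histogram, modified Fourier, and edge-histogram descriptors, and plain Euclidean distance stands in for the proposed similarity measure; all names here are assumptions.

    ```python
    import numpy as np

    def global_features(img_rgb, bins=8):
        """Toy global descriptor: concatenated, normalized per-channel
        color histograms of an 8-bit RGB image."""
        feats = []
        for c in range(3):
            h, _ = np.histogram(img_rgb[..., c], bins=bins, range=(0, 256))
            feats.append(h / h.sum())
        return np.concatenate(feats)

    def rank_images(query, collection):
        """Nearest-neighbor ranking by Euclidean distance in feature space;
        the first index is the best match."""
        q = global_features(query)
        d = [np.linalg.norm(q - global_features(img)) for img in collection]
        return np.argsort(d)
    ```

    The thesis's second approach would additionally fuse SURF local features into this matching; the sketch shows only the global-feature path.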
The second approach

  20. A data grid for imaging-based clinical trials

    Science.gov (United States)

    Zhou, Zheng; Chao, Sander S.; Lee, Jasper; Liu, Brent; Documet, Jorge; Huang, H. K.

    2007-03-01

    Clinical trials play a crucial role in testing new drugs or devices in modern medicine. Medical imaging has also become an important tool in clinical trials because images provide a unique and fast diagnosis with visual observation and quantitative assessment. A typical imaging-based clinical trial consists of: 1) a well-defined, rigorous clinical trial protocol; 2) a radiology core that has a quality control mechanism, a biostatistics component, and a server for storing and distributing data and analysis results; and 3) many field sites that generate and send image studies to the radiology core. As the number of clinical trials increases, it becomes a challenge for a radiology core servicing multiple trials to have a server robust enough to administrate and quickly distribute information to participating radiologists/clinicians worldwide. The Data Grid can satisfy the aforementioned requirements of imaging-based clinical trials. In this paper, we present a Data Grid architecture for imaging-based clinical trials. A Data Grid prototype has been implemented in the Image Processing and Informatics (IPI) Laboratory at the University of Southern California to test and evaluate performance in storing trial images and analysis results for a clinical trial. The implementation methodology and evaluation protocol of the Data Grid are presented.