WorldWideScience

Sample records for conductivity imaging based

  1. Improving Conductivity Image Quality Using Block Matrix-based Multiple Regularization (BMMR) Technique in EIT: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Tushar Kanti Bera

    2011-06-01

    Full Text Available A Block Matrix based Multiple Regularization (BMMR) technique is proposed for improving conductivity image quality in EIT. The response matrix (JTJ) has been partitioned into several sub-block matrices, and the highest eigenvalue of each sub-block matrix has been chosen as the regularization parameter for the nodes contained in that sub-block. Simulated boundary data are generated for a circular domain with a circular inhomogeneity, and the conductivity images are reconstructed with a Model Based Iterative Image Reconstruction (MoBIIR) algorithm. Conductivity images reconstructed with the BMMR technique are compared with results from the Single-step Tikhonov Regularization (STR) and modified Levenberg-Marquardt Regularization (LMR) methods. It is observed that the BMMR technique reduces the projection error and solution error and improves conductivity reconstruction in EIT. Results show that the BMMR method also improves the image contrast and the inhomogeneity conductivity profile, and hence the reconstructed image quality is enhanced. doi:10.5617/jeb.170 J Electr Bioimp, vol. 2, pp. 33-47, 2011
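The block-wise choice of regularization parameters described above can be sketched in a few lines. This is a minimal illustration under assumed conventions (uniform diagonal blocks of JTJ, one parameter per block), not the paper's exact formulation:

```python
import numpy as np

def bmmr_regularized_matrix(J, n_blocks):
    """Sketch of block matrix-based multiple regularization (BMMR):
    partition the response matrix J^T J into diagonal sub-blocks and take
    the largest eigenvalue of each sub-block as the regularization
    parameter for the nodes belonging to that block."""
    JTJ = J.T @ J
    n = JTJ.shape[0]
    bounds = np.linspace(0, n, n_blocks + 1).astype(int)
    lam = np.empty(n)
    for k in range(n_blocks):
        i, j = bounds[k], bounds[k + 1]
        # largest eigenvalue of the k-th diagonal sub-block
        lam[i:j] = np.linalg.eigvalsh(JTJ[i:j, i:j])[-1]
    # regularized normal-equation matrix: J^T J + diag(lam)
    return JTJ + np.diag(lam)
```

A Gauss-Newton step in an iterative reconstruction would then solve (J^T J + diag(lam)) dx = J^T r for the conductivity update dx, with r the boundary-data residual.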

  2. A review of anisotropic conductivity models of brain white matter based on diffusion tensor imaging.

    Science.gov (United States)

    Wu, Zhanxiong; Liu, Yang; Hong, Ming; Yu, Xiaohui

    2018-06-01

    The conductivity of brain tissues is not only essential for electromagnetic source estimation (ESI), but also a key reflector of brain functional changes. Unlike other brain tissues, the conductivity of white matter (WM) is highly anisotropic, and a tensor is needed to describe it. The traditional electrical property imaging methods, such as electrical impedance tomography (EIT) and magnetic resonance electrical impedance tomography (MREIT), usually fail to image the anisotropic conductivity tensor of WM with high spatial resolution. Diffusion tensor imaging (DTI) is a more recently developed technique that can fulfill this purpose. This paper reviews the existing anisotropic conductivity models of WM based on DTI and discusses their advantages and disadvantages, as well as identifies opportunities for future research on this subject. It is crucial to obtain the linear conversion coefficient between the eigenvalues of the anisotropic conductivity tensor and the diffusion tensor, since they share the same eigenvectors. We conclude that the electrochemical model is suitable for ESI analysis because the conversion coefficient can be directly obtained from the concentration of ions in extracellular liquid, and that the volume fraction model is appropriate for studying the influence of WM structural changes on electrical conductivity.
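The eigenvector-sharing assumption reduces the conductivity model to a single scalar relation. A minimal sketch, where the conversion coefficient k is an input assumption (in the electrochemical model it would come from extracellular ion concentrations):

```python
import numpy as np

def conductivity_from_diffusion(D, k):
    """Map a 3x3 water diffusion tensor D to a conductivity tensor sigma
    under the common linear model sigma = k * D: both tensors share
    eigenvectors, and the eigenvalues scale by one coefficient k.
    The value of k is an external input, not computed here."""
    evals, evecs = np.linalg.eigh(D)               # D = V diag(d) V^T
    sigma = evecs @ np.diag(k * evals) @ evecs.T   # same V, scaled eigenvalues
    return sigma
```

Because the same eigenvectors are reused, this is algebraically identical to k * D; the eigendecomposition is written out only to make the shared-eigenvector assumption explicit.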

  3. MR-based conductivity imaging using multiple receiver coils.

    Science.gov (United States)

    Lee, Joonsung; Shin, Jaewook; Kim, Dong-Hyun

    2016-08-01

    To propose a signal combination method for MR-based tissue conductivity mapping using a standard clinical scanner with multiple receiver coils. The theory of the proposed method is presented with two practical approaches, a coil-specific approach and a subject-specific approach. Conductivity maps were reconstructed using the transceive phase of the combined signal. The sensitivities of the coefficients used for signal combination were analyzed, and the method was compared with other signal combination methods. For validation, multiple receiver brain coils and multiple receiver breast coils were used in phantom, in vivo brain, and in vivo breast studies. The proposed method reduced the variation among the conductivity estimates. MR-based tissue conductivity mapping is feasible when using a standard clinical MR scanner with multiple receiver coils. The proposed method reduces systematic errors in phase-based conductivity mapping that can occur due to the inhomogeneous magnitude of the combined receive profile. Magn Reson Med 76:530-539, 2016. © 2015 Wiley Periodicals, Inc.
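For context, phase-based conductivity mapping of this kind rests on the standard approximation that conductivity is proportional to the Laplacian of the transceive phase. A generic sketch of that textbook relation (not the paper's combination method):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def phase_based_conductivity(phi, dx, larmor_hz):
    """Textbook phase-based EPT approximation:
        sigma ~ Laplacian(phi) / (2 * mu0 * omega),
    where phi is the transceive phase (rad) on a uniform grid with
    spacing dx (m) and omega is the Larmor angular frequency."""
    omega = 2 * np.pi * larmor_hz
    # Laplacian via repeated central differences along each axis
    lap = sum(np.gradient(g, dx, axis=a)
              for a, g in enumerate(np.gradient(phi, dx)))
    return lap / (2 * MU0 * omega)
```

Errors in the magnitude of the combined receive profile leak into this phase estimate, which is exactly the systematic error the abstract says the signal combination method suppresses.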

  4. Software Toolbox for Low-Frequency Conductivity and Current Density Imaging Using MRI.

    Science.gov (United States)

    Sajib, Saurav Z K; Katoch, Nitish; Kim, Hyung Joong; Kwon, Oh In; Woo, Eung Je

    2017-11-01

    Low-frequency conductivity and current density imaging using MRI includes magnetic resonance electrical impedance tomography (MREIT), diffusion tensor MREIT (DT-MREIT), conductivity tensor imaging (CTI), and magnetic resonance current density imaging (MRCDI). MRCDI and MREIT provide current density and isotropic conductivity images, respectively, using current-injection phase MRI techniques. DT-MREIT produces anisotropic conductivity tensor images by incorporating diffusion-weighted MRI into MREIT. These current-injection techniques are finding clinical applications in diagnostic imaging and also in transcranial direct current stimulation (tDCS), deep brain stimulation (DBS), and electroporation, where treatment currents can function as imaging currents. To avoid adverse effects of nerve and muscle stimulation due to injected currents, CTI utilizes B1 mapping and multi-b diffusion-weighted MRI to produce low-frequency anisotropic conductivity tensor images without injecting current. This paper describes numerical implementations of several key mathematical functions for conductivity and current density image reconstructions in MRCDI, MREIT, DT-MREIT, and CTI. To facilitate experimental studies of clinical applications, we developed a software toolbox for these low-frequency conductivity and current density imaging methods. This MR-based conductivity imaging (MRCI) toolbox includes 11 functions which can be used in the MATLAB environment. The MRCI toolbox is available at http://iirc.khu.ac.kr/software.html . Its functions were tested using several experimental datasets, which are provided together with the toolbox. Users of the toolbox can focus on experimental designs and interpretations of reconstructed images instead of developing their own image reconstruction software. We expect more toolbox functions to be added from future research outcomes.

  5. Magnetoacoustic microscopic imaging of conductive objects and nanoparticles distribution

    Science.gov (United States)

    Liu, Siyu; Zhang, Ruochong; Luo, Yunqi; Zheng, Yuanjin

    2017-09-01

    Magnetoacoustic tomography has been demonstrated as a powerful and low-cost multi-wave imaging modality. However, due to the limited spatial resolution and detection efficiency of the magnetoacoustic signal, the full potential of magnetoacoustic imaging remains untapped. Here we report a high-resolution magnetoacoustic microscopy method, where magnetic stimulation is provided by a compact solenoid resonance coil connected to a matching network, and acoustic reception is realized by a high-frequency focused ultrasound transducer. Scanning the magnetoacoustic microscopy system perpendicularly to the acoustic axis of the focused transducer generates a two-dimensional microscopic image with acoustically determined lateral resolution. It is analyzed theoretically and demonstrated experimentally that magnetoacoustic generation in this microscopic system depends on the conductivity profile of conductive objects and the localized distribution of superparamagnetic iron oxide nanoparticles, based on two different but related implementations. The lateral resolution is characterized. The directional nature of magnetoacoustic vibration and the imaging sensitivity for mapping magnetic nanoparticles are also discussed. The proposed microscopy system offers a high-resolution method that could potentially map intrinsic conductivity distributions in biological tissue and extraneous magnetic nanoparticles.

  6. Anisotropic conductivity imaging with MREIT using equipotential projection algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Degirmenci, Evren [Department of Electrical and Electronics Engineering, Mersin University, Mersin (Turkey); Eyueboglu, B Murat [Department of Electrical and Electronics Engineering, Middle East Technical University, 06531, Ankara (Turkey)

    2007-12-21

    Magnetic resonance electrical impedance tomography (MREIT) combines magnetic flux or current density measurements obtained by magnetic resonance imaging (MRI) with surface potential measurements to reconstruct images of true conductivity with high spatial resolution. Most biological tissues have anisotropic conductivity; therefore, anisotropy should be taken into account in conductivity image reconstruction. Almost all of the MREIT reconstruction algorithms proposed to date assume an isotropic conductivity distribution. In this study, a novel MREIT image reconstruction algorithm is proposed to image anisotropic conductivity. Relative anisotropic conductivity values are reconstructed iteratively, using only current density measurements without any potential measurement. To obtain true conductivity values, a single potential or conductivity measurement is then sufficient to determine a scaling factor. The proposed technique is evaluated on simulated data for isotropic and anisotropic conductivity distributions, with and without measurement noise. Simulation results show that images of both anisotropic and isotropic conductivity distributions can be reconstructed successfully.

  7. Electronic structure classifications using scanning tunneling microscopy conductance imaging

    International Nuclear Information System (INIS)

    Horn, K.M.; Swartzentruber, B.S.; Osbourn, G.C.; Bouchard, A.; Bartholomew, J.W.

    1998-01-01

    The electronic structure of atomic surfaces is imaged by applying multivariate image classification techniques to multibias conductance data measured using scanning tunneling microscopy. Image pixels are grouped into classes according to shared conductance characteristics. The image pixels, when color coded by class, produce an image that chemically distinguishes surface electronic features over the entire area of a multibias conductance image. Such "classed" images reveal surface features not always evident in a topograph. This article describes the experimental technique used to record multibias conductance images, how image pixels are grouped in a mathematical classification space, how a computed grouping algorithm can be employed to group pixels with similar conductance characteristics in any number of dimensions, and finally how the quality of the resulting classed images can be evaluated using a computed, combinatorial analysis of the full dimensional space in which the classification is performed. © 1998 American Institute of Physics
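The grouping of pixels by shared conductance characteristics can be illustrated with a simple clustering analogue. Here plain k-means in the multibias "classification space" stands in for the article's combinatorial grouping algorithm, so treat it as an assumption-laden sketch rather than the authors' method:

```python
import numpy as np

def classify_pixels(conductance_stack, n_classes, n_iter=50):
    """Group image pixels into classes by their multibias conductance
    vectors. conductance_stack has shape (n_bias, H, W): one conductance
    image per tip bias. Plain k-means is used purely for illustration."""
    n_bias, H, W = conductance_stack.shape
    X = conductance_stack.reshape(n_bias, -1).T.astype(float)  # one row per pixel
    # deterministic init: spread starting centers along the first bias axis
    order = np.argsort(X[:, 0])
    centers = X[order[np.linspace(0, len(X) - 1, n_classes).astype(int)]]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)            # nearest center per pixel
        for k in range(n_classes):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    # color-coding 'labels' by class yields the 'classed' image
    return labels.reshape(H, W)
```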

  8. Massively parallel electrical conductivity imaging of the subsurface: Applications to hydrocarbon exploration

    Science.gov (United States)

    Newman, Gregory A.; Commer, Michael

    2009-07-01

    Three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. The imaging technology employs controlled source electromagnetic (CSEM) and magnetotelluric (MT) fields and treats geological media exhibiting transverse anisotropy. Moreover when combined with established seismic methods, direct imaging of reservoir fluids is possible. Because of the size of the 3D conductivity imaging problem, strategies are required exploiting computational parallelism and optimal meshing. The algorithm thus developed has been shown to scale to tens of thousands of processors. In one imaging experiment, 32,768 tasks/processors on the IBM Watson Research Blue Gene/L supercomputer were successfully utilized. Over a 24 hour period we were able to image a large scale field data set that previously required over four months of processing time on distributed clusters based on Intel or AMD processors utilizing 1024 tasks on an InfiniBand fabric. Electrical conductivity imaging using massively parallel computational resources produces results that cannot be obtained otherwise and are consistent with timeframes required for practical exploration problems.

  10. Multishot echo-planar MREIT for fast imaging of conductivity, current density, and electric field distributions.

    Science.gov (United States)

    Chauhan, Munish; Vidya Shankar, Rohini; Ashok Kumar, Neeta; Kodibagkar, Vikram D; Sadleir, Rosalind

    2018-01-01

    Magnetic resonance electrical impedance tomography (MREIT) sequences typically use conventional spin or gradient echo-based acquisition methods for reconstruction of conductivity and current density maps. Use of MREIT in functional and electroporation studies requires higher temporal resolution and faster sequences. Here, single- and multishot echo planar imaging (EPI) based MREIT sequences were evaluated to see whether high-quality MREIT phase data could be obtained for rapid reconstruction of current density, conductivity, and electric fields. A gel phantom with an insulating inclusion was used as a test object. Ghost artifact, geometric distortion, and MREIT correction algorithms were applied to the data. The EPI-MREIT-derived phase-projected current density and conductivity images were compared with simulations and spin-echo images as a function of EPI shot number. Good agreement among measures in simulated, spin-echo, and EPI data was achieved. Current density errors were stable and below 9% as the shot number decreased from 64 to 2, but increased for single-shot images. Conductivity reconstruction relative contrast ratios were stable as the shot number decreased. The derived electric fields also agreed with the simulated data. The EPI methods can be combined successfully with MREIT reconstruction algorithms to achieve fast imaging of current density, conductivity, and electric field. Magn Reson Med 79:71-82, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  11. Anisotropic conductivity tensor imaging in MREIT using directional diffusion rate of water molecules

    International Nuclear Information System (INIS)

    Kwon, Oh In; Jeong, Woo Chul; Sajib, Saurav Z K; Kim, Hyung Joong; Woo, Eung Je

    2014-01-01

    Magnetic resonance electrical impedance tomography (MREIT) is an emerging method to visualize electrical conductivity and/or current density images at low frequencies (below 1 kHz). When currents are injected into an imaging object, one component of the induced magnetic flux density is acquired using an MRI scanner for isotropic conductivity image reconstructions. Diffusion tensor MRI (DT-MRI) measures the intrinsic three-dimensional diffusion property of water molecules within a tissue. It characterizes the anisotropic water transport by the effective diffusion tensor. Combining the DT-MRI and MREIT techniques, we propose a novel direct method for absolute conductivity tensor image reconstructions based on a linear relationship between the water diffusion tensor and the electrical conductivity tensor. We first recover the projected current density, which is the best approximation of the internal current density one can obtain from the measured single component of the induced magnetic flux density. This enables us to estimate a scale factor between the diffusion tensor and the conductivity tensor. Combining these values at all pixels with the acquired diffusion tensor map, we can quantitatively recover the anisotropic conductivity tensor map. From numerical simulations and experimental verifications using a biological tissue phantom, we found that the new method overcomes the limitations of each method and successfully reconstructs both the direction and magnitude of the conductivity tensor for both anisotropic and isotropic regions.
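The first step, recovering current density information from a single measured component Bz, can be sketched in its simplest 2D form via Ampere's law. The full projected-current-density construction also adds the field of a model current; this hedged sketch keeps only the measured-data term:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def current_density_from_bz(Bz, dx, dy):
    """Simplest 2D approximation to the current density obtainable from a
    single measured field component Bz (rows index y, columns index x):
        Jx ~  (1/mu0) dBz/dy,   Jy ~ -(1/mu0) dBz/dx.
    This is only the measured-data term of the projected current density."""
    dBz_dy, dBz_dx = np.gradient(Bz, dy, dx)   # gradients along y (axis 0), x (axis 1)
    Jx = dBz_dy / MU0
    Jy = -dBz_dx / MU0
    return Jx, Jy
```

Pixelwise, a scale factor between the diffusion and conductivity tensors could then be fit so that the model current implied by the scaled diffusion tensor best matches this recovered current density.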

  12. Electrostatic images for underwater anisotropic conductive half spaces

    International Nuclear Information System (INIS)

    Flykt, M.; Lindell, I.; Eloranta, E.

    1998-01-01

    A static image principle makes it possible to derive analytical solutions for DC fields in some basic geometries. The underwater environment is especially difficult from both the theoretical and practical points of view. However, there is increasing demand for studying underwater geological formations in detail as well. The traditional image of a point source lies at the mirror point of the original. When anisotropic media are involved, however, the image location can change and the image source may be a continuous, sector-like distribution. In this paper some theoretical considerations are carried out for the case where the lower half space can have a very general anisotropy in terms of electrical conductivity, while the upper half space is assumed isotropic. The reflection potential field is calculated for different values of electrical conductivity.

  13. Magnetic resonance electrical impedance tomography (MREIT): conductivity and current density imaging

    International Nuclear Information System (INIS)

    Seo, Jin Keun; Kwon, Ohin; Woo, Eung Je

    2005-01-01

    This paper reviews the latest impedance imaging technique, called Magnetic Resonance Electrical Impedance Tomography (MREIT), which provides information on electrical conductivity and current density distributions inside an electrically conducting domain such as the human body. The motivation for this research is explained by discussing conductivity changes related to physiological and pathological events, electromagnetic source imaging and electromagnetic stimulation. We briefly summarize the related technique of Electrical Impedance Tomography (EIT), which deals with cross-sectional image reconstructions of conductivity distributions from boundary measurements of current-voltage data. Noting that EIT suffers from the ill-posed nature of the corresponding inverse problem, we introduce MREIT as a new conductivity imaging modality providing images with better spatial resolution and accuracy. MREIT utilizes internal information on the induced magnetic field in addition to the boundary current-voltage measurements to produce three-dimensional images of conductivity and current density distributions. Mathematical theory, algorithms, and experimental methods of current MREIT research are described. With numerous potential applications in mind, future research directions in MREIT are proposed.

  14. Efficient Imaging and Real-Time Display of Scanning Ion Conductance Microscopy Based on Block Compressive Sensing

    Science.gov (United States)

    Li, Gongxin; Li, Peng; Wang, Yuechao; Wang, Wenxue; Xi, Ning; Liu, Lianqing

    2014-07-01

    Scanning ion conductance microscopy (SICM) is one kind of scanning probe microscopy (SPM), widely used for imaging soft samples because of its distinctive advantages. However, the scanning speed of SICM is much slower than that of other SPMs. Compressive sensing (CS) can improve the scanning speed tremendously by sampling below the Shannon rate, but it still requires too much time for image reconstruction. Block compressive sensing can be applied to SICM imaging to further reduce the reconstruction time of sparse signals, and it has the additional benefit of enabling real-time image display during SICM imaging. In this article, a new method of dividing blocks and a new matrix arithmetic operation were proposed to build the block compressive sensing model, and several experiments were carried out to verify the superiority of block compressive sensing in reducing imaging time and enabling real-time display in SICM imaging.
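The idea of reconstructing each block independently, so that partial results can be displayed as they finish, can be sketched with a generic per-block greedy solver. The sampling operator A and sparsity level k here are illustrative assumptions, not the article's block-dividing method or matrix operation:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy k-sparse solution of y ~ A x."""
    r, idx = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ r))))        # most correlated atom
        sol, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ sol                             # orthogonalized residual
    x[idx] = sol
    return x

def block_cs_reconstruct(measurements, A, shape, B, k):
    """Per-block CS reconstruction: each BxB block was measured as
    y = A @ block (block flattened, assumed sparse), and blocks are
    recovered independently, so finished blocks can be shown immediately."""
    img = np.zeros(shape)
    for (i, j), y in measurements.items():
        img[i:i + B, j:j + B] = omp(A, y, k).reshape(B, B)
    return img
```

Because each block is a small independent problem, reconstruction time drops sharply versus solving the whole image at once, and a display loop can paint blocks as they complete.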

  15. MREIT conductivity imaging of the postmortem canine abdomen using CoReHA

    International Nuclear Information System (INIS)

    Jeon, Kiwan; Lee, Chang-Ock; Minhas, Atul S; Kim, Young Tae; Jeong, Woo Chul; Kim, Hyung Joong; Woo, Eung Je; Kang, Byeong Teck; Park, Hee Myung; Seo, Jin Keun

    2009-01-01

    Magnetic resonance electrical impedance tomography (MREIT) is a new bio-imaging modality providing cross-sectional conductivity images from measurements of internal magnetic flux densities produced by externally injected currents. Recent experimental results of postmortem and in vivo imaging of the canine brain demonstrated its feasibility by showing conductivity images with meaningful contrast among different brain tissues. MREIT image reconstructions involve a series of data processing steps such as k-space data handling, phase unwrapping, image segmentation, meshing, modelling, finite element computation, denoising and so on. To facilitate experimental studies, we need a software tool that automates these data processing steps. In this paper, we summarize such an MREIT software package called CoReHA (conductivity reconstructor using harmonic algorithms). Performing imaging experiments of the postmortem canine abdomen, we demonstrate how CoReHA can be utilized in MREIT. The abdomen with a relatively large field of view and various organs imposes new technical challenges when it is chosen as an imaging domain. Summarizing a few improvements in the experimental MREIT technique, we report our first conductivity images of the postmortem canine abdomen. Illustrating reconstructed conductivity images, we discuss how they discern different organs including the kidney, spleen, stomach and small intestine. We elaborate, as an example, that conductivity images of the kidney show clear contrast among cortex, internal medulla, renal pelvis and urethra. We end this paper with a brief discussion on future work using different animal models

  16. Kernel based subspace projection of hyperspectral images

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten

    In hyperspectral image analysis an exploratory approach to analysing the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel-based subspace projections of PCA and Maximum Autocorrelation Factors (MAF).

  17. Anisotropic Conductivity Tensor Imaging of In Vivo Canine Brain Using DT-MREIT.

    Science.gov (United States)

    Jeong, Woo Chul; Sajib, Saurav Z K; Katoch, Nitish; Kim, Hyung Joong; Kwon, Oh In; Woo, Eung Je

    2017-01-01

    We present in vivo images of anisotropic electrical conductivity tensor distributions inside canine brains using diffusion tensor magnetic resonance electrical impedance tomography (DT-MREIT). The conductivity tensor is represented as a product of an ion mobility tensor and a scale factor of ion concentrations. Incorporating directional mobility information from water diffusion tensors, we developed a stable process to reconstruct anisotropic conductivity tensor images from measured magnetic flux density data using an MRI scanner. Devising a new image reconstruction algorithm, we reconstructed anisotropic conductivity tensor images of two canine brains with a pixel size of 1.25 mm. Though the reconstructed conductivity values matched well in general with those measured by using invasive probing methods, there were some discrepancies as well. The degree of white matter anisotropy was 2 to 4.5, which is smaller than previous findings of 5 to 10. The reconstructed conductivity value of the cerebrospinal fluid was about 1.3 S/m, which is smaller than previous measurements of about 1.8 S/m. Future studies of in vivo imaging experiments with disease models should follow this initial trial to validate clinical significance of DT-MREIT as a new diagnostic imaging modality. Applications in modeling and simulation studies of bioelectromagnetic phenomena including source imaging and electrical stimulation are also promising.

  18. Software optimization for electrical conductivity imaging in polycrystalline diamond cutters

    Energy Technology Data Exchange (ETDEWEB)

    Bogdanov, G.; Ludwig, R. [Department of Electrical and Computer Engineering, Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA 01609 (United States); Wiggins, J.; Bertagnolli, K. [US Synthetic, 1260 South 1600 West, Orem, UT 84058 (United States)

    2014-02-18

    We previously reported on an electrical conductivity imaging instrument developed for measurements on polycrystalline diamond cutters. These cylindrical cutters for oil and gas drilling feature a thick polycrystalline diamond layer on a tungsten carbide substrate. The instrument uses electrical impedance tomography to profile the conductivity in the diamond table. Conductivity images must be acquired quickly, on the order of 5 sec per cutter, to be useful in the manufacturing process. This paper reports on successful efforts to optimize the conductivity reconstruction routine, porting major portions of it to NVIDIA GPUs, including a custom CUDA kernel for Jacobian computation.

  19. Evaluating conducting network based transparent electrodes from geometrical considerations

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Ankush [Chemistry and Physics of Materials Unit, Jawaharlal Nehru Centre for Advanced Scientific Research, 560064 Bangalore (India); Kulkarni, G. U., E-mail: guk@cens.res.in [Centre for Nano and Soft Matter Sciences, 560013 Bangalore (India)

    2016-01-07

    Conducting nanowire networks have been developed as a viable alternative to the existing indium tin oxide based transparent electrode (TE). Understanding of the nature of electrical conduction and process optimization for electrodes has gained much from theoretical models based on percolation transport, using a Monte Carlo approach and applying Kirchhoff's law to individual junctions and loops. While most of the theoretical literature focuses on networks obtained from conducting rods (mostly considering only junction resistance), hardly any attention has been paid to those made using template based methods, wherein the structure of the network is neither similar to a network of conducting rods nor to a well-periodic geometry. Here, we have attempted an analytical treatment based on geometrical arguments and applied image analysis to practical networks to gain deeper insight into conducting networked structures, particularly in relation to sheet resistance and transmittance. Many literature examples reporting networks with straight or curvilinear wires with distributions in wire width and length have been analysed by treating the networks as two-dimensional graphs and evaluating the sheet resistance based on wire density and wire width. The sheet resistance values from our analysis compare well with the experimental values. Our analysis of various examples has revealed that low sheet resistance is achieved with high wire density and compactness, with straight rather than curvilinear wires, and with a narrower wire width distribution. Similarly, higher transmittance for a given sheet resistance is possible with narrower wire width but higher thickness, minimal curvilinearity, and maximum connectivity. For the purpose of evaluating the active fraction of the network, the algorithm was made to distinguish and quantify current-carrying backbone regions as against regions containing only dangling or isolated wires. The treatment can be helpful in
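The kind of geometric estimate described, sheet resistance from wire density and width, can be sketched with a generic scaling relation. The prefactor and the shadowing transmittance formula below are textbook-style assumptions, not the paper's exact expressions:

```python
def sheet_resistance(rho, width, thickness, wire_density, c=2.0):
    """Geometric estimate of network sheet resistance (ohm/sq) for a
    well-connected network far above percolation:
        R_s ~ c * rho / (width * thickness * wire_density),
    where wire_density is total wire length per unit area (m per m^2),
    rho is the wire resistivity (ohm*m), and c is a dimensionless
    prefactor (c = 2 for an ideal square grid with negligible junction
    resistance; larger for curvilinear or sparsely connected networks)."""
    return c * rho / (width * thickness * wire_density)

def transmittance(width, wire_density):
    """Shadowing estimate of optical transmittance: the uncovered area
    fraction of the substrate, 1 - (wire width x wire length per area)."""
    return max(0.0, 1.0 - width * wire_density)
```

Both trends stated in the abstract follow from this form: raising wire density lowers R_s but also lowers T, while thicker (rather than wider) wires lower R_s without costing transmittance.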

  1. Imaging of Conductive Hearing Loss With a Normal Tympanic Membrane.

    Science.gov (United States)

    Curtin, Hugh D

    2016-01-01

    This article presents an approach to imaging conductive hearing loss in patients with normal tympanic membranes and discusses entities that should be checked as the radiologist evaluates this potentially complex issue. Conductive hearing loss in a patient with a normal tympanic membrane is a complicated condition that requires a careful imaging approach. Imaging should focus on otosclerosis, and possible mimics and potential surgical considerations should be evaluated. The radiologist should examine the ossicular chain and the round window, and keep in mind that a defect in the superior semicircular canal can disturb the hydraulic integrity of the labyrinth.

  2. Graphite nanoplatelets and carbon nanotubes based polyethylene composites: Electrical conductivity and morphology

    International Nuclear Information System (INIS)

    Haznedar, Galip; Cravanzola, Sara; Zanetti, Marco; Scarano, Domenica; Zecchina, Adriano; Cesano, Federico

    2013-01-01

    Graphite nanoplatelet (GNP) and/or multiwalled carbon nanotube (MWCNT)/low density polyethylene (LDPE) composites have been obtained either via melt-mixing or solvent assisted methods. Electrical properties of samples obtained through the above mentioned methods are compared and the conductance values as a function of filler fraction are discussed. The corresponding percolation thresholds are evaluated. Conductivity map images are acquired by low-potential (0.3 kV) scanning electron microscopy and the relationship between the obtained conductivity images and electrical properties is highlighted. The synergistic role of CNTs (1D) and GNPs (2D) in improving the conductive properties of the polymer composites has been shown. - Highlights: • Graphite nanoplatelet (GNP) and GNP/MWCNT LDPE composites. • Low potential SEM conductivity maps. • Conducting paths between 1D and 2D C-structures (synergistic effect) are obtained. • Composites based on hybrid 1D/2D combinations show lower percolation thresholds.
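Percolation thresholds of this kind are commonly extracted by fitting the power law σ = σ₀(φ − φc)ᵗ above threshold. A hedged sketch on synthetic data (the grid-scan fitting approach and all parameter values are assumptions for illustration, not taken from the paper):

```python
import numpy as np

def percolation_fit(phi, sigma, phic_grid):
    """Fit sigma = sigma0*(phi - phic)**t by scanning candidate phic
    values and regressing log(sigma) on log(phi - phic)."""
    best = None
    for phic in phic_grid:
        mask = phi > phic
        if mask.sum() < 3:
            continue
        x = np.log(phi[mask] - phic)
        y = np.log(sigma[mask])
        t, logs0 = np.polyfit(x, y, 1)          # slope = exponent t
        resid = np.sum((np.polyval([t, logs0], x) - y) ** 2)
        if best is None or resid < best[0]:
            best = (resid, phic, t, np.exp(logs0))
    return best[1:]  # (phic, t, sigma0)

# synthetic composite data: phic = 0.05, t = 2.0 (a typical 3-D exponent)
phi = np.linspace(0.06, 0.30, 25)
sigma = 10.0 * (phi - 0.05) ** 2.0
phic, t, s0 = percolation_fit(phi, sigma, np.linspace(0.0, 0.055, 56))
print(phic, t, s0)
```

On noiseless synthetic data the scan recovers the generating parameters; with measured conductances the residual surface is flatter and error bars on φc and t matter.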

  3. Phase Image Analysis in Conduction Disturbance Patients

    Energy Technology Data Exchange (ETDEWEB)

    Kwark, Byeng Su; Choi, Si Wan; Kang, Seung Sik; Park, Ki Nam; Lee, Kang Wook; Jeon, Eun Seok; Park, Chong Hun [Chung Nam University Hospital, Daejeon (Korea, Republic of)

    1994-03-15

    It is known that the normal His-Purkinje system provides for nearly synchronous activation of the right (RV) and left (LV) ventricles. When His-Purkinje conduction is abnormal, the resulting sequence of ventricular contraction must be correspondingly abnormal. These abnormal mechanical consequences were difficult to demonstrate because of the complexity and the rapidity of the events. To determine the relationship between phase changes and abnormalities of ventricular conduction, we performed phase image analysis of Tc-RBC gated blood pool scintigrams in patients with intraventricular conduction disturbances (24 with complete left bundle branch block (C-LBBB), 15 with complete right bundle branch block (C-RBBB), 13 with Wolff-Parkinson-White syndrome (WPW), and 10 controls). The results were as follows: 1) The ejection fraction (EF), peak ejection rate (PER), and peak filling rate (PFR) of the LV in gated blood pool scintigraphy (GBPS) were significantly lower in patients with C-LBBB than in controls (44.4 ± 13.9% vs 69.9 ± 4.2%, 2.48 ± 0.98 vs 3.51 ± 0.62, 1.76 ± 0.71 vs 3.38 ± 0.92, respectively, p<0.05). 2) In the phase angle analysis of the LV, the standard deviation (SD), full width at half maximum of the phase angle (FWHM), and range of phase angle were significantly increased in patients with C-LBBB compared with controls (20.6 ± 18.1 vs 8.6 ± 1.8, 22.5 ± 9.2 vs 16.0 ± 3.9, 95.7 ± 31.7 vs 51.3 ± 5.4, respectively, p<0.05). 3) There was no significant difference in EF, PER, or PFR between patients with the Wolff-Parkinson-White syndrome and controls. 4) Standard deviation and range of phase angle were significantly higher in patients with WPW syndrome than in controls (10.6 ± 2.6 vs 8.6 ± 1.8, p<0.05; 69.8 ± 11.7 vs 51.3 ± 5.4, p<0.001, respectively); however, there was no difference between the two groups in full width at half maximum. 5) Phase image analysis revealed a relatively uniform phase across both ventricles in patients with normal conduction, but a markedly delayed phase in the left ventricle

  4. Image Re-Ranking Based on Topic Diversity.

    Science.gov (United States)

    Qian, Xueming; Lu, Dan; Wang, Yaxiong; Zhu, Li; Tang, Yuan Yan; Wang, Meng

    2017-08-01

    Social media sharing Websites allow users to annotate images with free tags, which significantly contributes to the development of web image retrieval. Tag-based image search is an important method of finding images shared by users in social networks. However, making the top-ranked results both relevant and diverse is challenging. In this paper, we propose a topic-diverse ranking approach for tag-based image retrieval with the aim of promoting topic coverage performance. First, we construct a tag graph based on the similarity between tags. Then, a community detection method is applied to mine the topic community of each tag. After that, inter-community and intra-community ranking are introduced to obtain the final retrieved results. In the inter-community ranking process, an adaptive random walk model is employed to rank the communities based on the multi-information of each topic community. In addition, we build an inverted index structure for images to accelerate the searching process. Experimental results on the Flickr and NUS-WIDE data sets show the effectiveness of the proposed approach.
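A toy sketch of the ranking idea: a tag-similarity graph, PageRank-style random-walk scores, and a round-robin over precomputed communities to diversify the top results. The graph, weights and community labels below are invented for illustration; the paper's adaptive random walk and community detection are more elaborate.

```python
import numpy as np

# toy tag-similarity graph (symmetric weights); rows/cols = tags
tags = ["beach", "sea", "sand", "dog", "puppy"]
W = np.array([
    [0.0, 0.9, 0.8, 0.0, 0.0],
    [0.9, 0.0, 0.7, 0.0, 0.0],
    [0.8, 0.7, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.9],
    [0.0, 0.0, 0.0, 0.9, 0.0],
])

def random_walk_scores(W, damping=0.85, iters=100):
    """Stationary scores of a PageRank-style walk on the tag graph."""
    n = len(W)
    rowsum = W.sum(1, keepdims=True)
    P = W / np.where(rowsum == 0, 1, rowsum)     # row-stochastic transitions
    s = np.full(n, 1.0 / n)
    for _ in range(iters):
        s = (1 - damping) / n + damping * P.T @ s
    return s

def diversified_ranking(scores, communities, k):
    """Round-robin across communities: take the best unseen tag of each
    community in turn, so the top-k covers several topics."""
    pools = {c: sorted([i for i in range(len(scores)) if communities[i] == c],
                       key=lambda i: -scores[i]) for c in set(communities)}
    order = []
    while len(order) < k and any(pools.values()):
        for c in sorted(pools, key=lambda c: -max((scores[i] for i in pools[c]),
                                                  default=-1)):
            if pools[c]:
                order.append(pools[c].pop(0))
            if len(order) == k:
                break
    return order

communities = [0, 0, 0, 1, 1]        # e.g. from a community-detection step
scores = random_walk_scores(W)
top = diversified_ranking(scores, communities, 2)
print([tags[i] for i in top])
```

With two topic communities, the top-2 list draws one tag from each, which is the diversity behavior the inter-community ranking step aims at.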

  5. Phase Image Analysis in Conduction Disturbance Patients

    International Nuclear Information System (INIS)

    Kwark, Byeng Su; Choi, Si Wan; Kang, Seung Sik; Park, Ki Nam; Lee, Kang Wook; Jeon, Eun Seok; Park, Chong Hun

    1994-01-01

    It is known that the normal His-Purkinje system provides for nearly synchronous activation of the right (RV) and left (LV) ventricles. When His-Purkinje conduction is abnormal, the resulting sequence of ventricular contraction must be correspondingly abnormal. These abnormal mechanical consequences were difficult to demonstrate because of the complexity and the rapidity of the events. To determine the relationship between phase changes and abnormalities of ventricular conduction, we performed phase image analysis of Tc-RBC gated blood pool scintigrams in patients with intraventricular conduction disturbances (24 with complete left bundle branch block (C-LBBB), 15 with complete right bundle branch block (C-RBBB), 13 with Wolff-Parkinson-White syndrome (WPW), and 10 controls). The results were as follows: 1) The ejection fraction (EF), peak ejection rate (PER), and peak filling rate (PFR) of the LV in gated blood pool scintigraphy (GBPS) were significantly lower in patients with C-LBBB than in controls (44.4 ± 13.9% vs 69.9 ± 4.2%, 2.48 ± 0.98 vs 3.51 ± 0.62, 1.76 ± 0.71 vs 3.38 ± 0.92, respectively, p<0.05). 2) In the phase angle analysis of the LV, the standard deviation (SD), full width at half maximum of the phase angle (FWHM), and range of phase angle were significantly increased in patients with C-LBBB compared with controls (20.6 ± 18.1 vs 8.6 ± 1.8, 22.5 ± 9.2 vs 16.0 ± 3.9, 95.7 ± 31.7 vs 51.3 ± 5.4, respectively, p<0.05). 3) There was no significant difference in EF, PER, or PFR between patients with the Wolff-Parkinson-White syndrome and controls. 4) Standard deviation and range of phase angle were significantly higher in patients with WPW syndrome than in controls (10.6 ± 2.6 vs 8.6 ± 1.8, p<0.05; 69.8 ± 11.7 vs 51.3 ± 5.4, p<0.001, respectively); however, there was no difference between the two groups in full width at half maximum. 5) Phase image analysis revealed a relatively uniform phase across both ventricles in patients with normal conduction, but a markedly delayed phase in the left ventricle

  6. Method of imaging the electrical conductivity distribution of a subsurface

    Science.gov (United States)

    Johnson, Timothy C.

    2017-09-26

    A method of imaging electrical conductivity distribution of a subsurface containing metallic structures with known locations and dimensions is disclosed. Current is injected into the subsurface to measure electrical potentials using multiple sets of electrodes, thus generating electrical resistivity tomography measurements. A numeric code is applied to simulate the measured potentials in the presence of the metallic structures. An inversion code is applied that utilizes the electrical resistivity tomography measurements and the simulated measured potentials to image the subsurface electrical conductivity distribution and remove effects of the subsurface metallic structures with known locations and dimensions.

  7. In vivo electrical conductivity imaging of a canine brain using a 3 T MREIT system

    International Nuclear Information System (INIS)

    Kim, Hyung Joong; Oh, Tong In; Kim, Young Tae; Lee, Byung Il; Woo, Eung Je; Lee, Soo Yeol; Seo, Jin Keun; Kwon, Ohin; Park, Chunjae; Kang, Byeong Teck; Park, Hee Myung

    2008-01-01

    Magnetic resonance electrical impedance tomography (MREIT) aims at producing high-resolution cross-sectional conductivity images of an electrically conducting object such as the human body. Following numerous phantom imaging experiments, the most recent study demonstrated successful conductivity image reconstructions of postmortem canine brains using a 3 T MREIT system with 40 mA imaging currents. Here, we report the results of in vivo animal imaging experiments using 5 mA imaging currents. To investigate any change of electrical conductivity due to brain ischemia, canine brains with a regional ischemia model were scanned along with separate scans of canine brains with no disease model. Reconstructed multi-slice conductivity images of in vivo canine brains with a pixel size of 1.4 mm showed a clear contrast between white and gray matter and also between normal and ischemic regions. We found that the conductivity value of an ischemic region decreased by about 10–14%. In a postmortem brain, conductivity values of white and gray matter decreased by about 4–8% compared to those in a live brain. As we accumulate more experience with in vivo animal imaging experiments, we plan to move to human experiments. One of the important goals of our future work is the reduction of the imaging current to a level that a human subject can tolerate. The ability to acquire high-resolution conductivity images will find numerous clinical applications not supported by other medical imaging modalities. Potential applications in biology, chemistry and materials science are also expected.

  8. Iterative electromagnetic Born inversion applied to earth conductivity imaging

    Science.gov (United States)

    Alumbaugh, D. L.

    1993-08-01

    This thesis investigates the use of a fast imaging technique to deduce the spatial conductivity distribution in the earth from low frequency (less than 1 MHz), cross well electromagnetic (EM) measurements. The theory embodied in this work extends previous strategies and is based on the Born series approximation to solve both the forward and inverse problems. Nonlinear integral equations are employed to derive the series expansion, which accounts for the scattered magnetic fields generated by inhomogeneities embedded in either a homogeneous or a layered earth. A sinusoidally oscillating, vertically oriented magnetic dipole is employed as a source, and it is assumed that the scattering bodies are azimuthally symmetric about the source dipole axis. The use of this model geometry reduces the 3-D vector problem to a more manageable 2-D scalar form. The validity of the cross well EM method is tested by applying the imaging scheme to two sets of field data. Images of the data collected at the Devine, Texas test site show excellent correlation with the well logs. Unfortunately, a drift error present in the data limits the accuracy of the results. A more complete set of data collected at the Richmond field station in Richmond, California demonstrates that cross well EM can be successfully employed to monitor the position of an injected mass of salt water. Both the data and the resulting images clearly indicate that the plume migrates toward the north-northwest. The plausibility of these conclusions is verified by applying the imaging code to synthetic data generated by a 3-D sheet model.
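The Born series underlying such schemes can be illustrated on a small discretized model: iterating u ← u0 + A·u sums the series u0 + A·u0 + A²·u0 + …, which converges to the full multiple-scattering solution (I − A)⁻¹u0 when the spectral radius of A = GV is below one. The matrices below are random stand-ins, not the thesis's EM operators.

```python
import numpy as np

rng = np.random.default_rng(1)

# discretized scattering problem u = u0 + G V u: u0 incident field,
# G background Green's operator, V the conductivity perturbation
n = 8
u0 = rng.standard_normal(n)
G = rng.standard_normal((n, n))
V = np.diag(rng.standard_normal(n))

A = G @ V
A *= 0.5 / np.max(np.abs(np.linalg.eigvals(A)))  # force spectral radius 0.5

u_exact = np.linalg.solve(np.eye(n) - A, u0)     # full multiple scattering

# Born series: u0 + A u0 + A^2 u0 + ..., built up by fixed-point iteration
u = u0.copy()
for _ in range(60):
    u = u0 + A @ u

print(np.linalg.norm(u - u_exact) / np.linalg.norm(u_exact))
```

Truncating after one iteration gives the first Born approximation, the linearization on which the fast inversion step rests; the geometric convergence seen here is what makes iterative refinement cheap.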

  9. Basic setup for breast conductivity imaging using magnetic resonance electrical impedance tomography

    International Nuclear Information System (INIS)

    Lee, Byung Il; Oh, Suk Hoon; Kim, Tae-Seong; Woo, Eung Je; Lee, Soo Yeol; Kwon, Ohin; Seo, Jin Keun

    2006-01-01

    We present a new medical imaging technique for breast imaging, breast MREIT, in which magnetic resonance electrical impedance tomography (MREIT) is utilized to obtain high-resolution conductivity and current density images of the breast. In this work, we introduce the basic imaging setup of the breast MREIT technique with an investigation of four different imaging configurations of current-injection electrode positions and pathways through computer simulation studies. Utilizing the preliminary findings of the best breast MREIT configuration, additional numerical simulation studies have been carried out to validate breast MREIT at different levels of SNR. Finally, we have performed an experimental validation with a breast phantom on a 3.0 T MREIT system. The presented results strongly suggest that breast MREIT with careful imaging setups could be a potential imaging technique for the human breast, which may lead to early detection of breast cancer via improved differentiation of cancerous tissues in high-resolution conductivity images.

  10. Feasibility of Imaging Tissue Electrical Conductivity by Switching Field Gradients with MRI.

    Science.gov (United States)

    Gibbs, Eric; Liu, Chunlei

    2015-12-01

    Tissue conductivity is a biophysical marker of tissue structure and physiology. Present methods of measuring tissue conductivity are limited. Electrical impedance tomography and magnetic resonance electrical impedance tomography rely on passing external current through the object being imaged, which prevents their use in most human imaging. Recently, the RF field used for MR excitation has been used to non-invasively measure tissue conductivity. This technique is promising, but conductivity at higher frequencies is less sensitive to tissue structure. Measuring tissue conductivity non-invasively at low frequencies remains elusive. It has been proposed that eddy currents generated during the rise and decay of gradient pulses could act as a current source to map low-frequency conductivity. This work centers on a gradient echo pulse sequence that uses large gradients prior to excitation to create eddy currents. The electric and magnetic fields during a gradient pulse are simulated by a finite-difference time-domain simulation. The sequence is also tested with a phantom and an animal MRI scanner equipped with gradients of high strength and slew rate. The simulation demonstrates that eddy currents in materials with conductivity similar to biological tissue decay with a half-life on the order of nanoseconds, so any eddy currents generated prior to excitation decay completely before influencing the RF signal. Gradient-induced eddy currents can influence phase accumulation after excitation, but the effect is too small to image. The animal scanner images show no measurable phase accumulation. Measuring low-frequency conductivity by gradient-induced eddy currents is presently infeasible.
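The nanosecond decay quoted above is consistent with the magnetic-diffusion estimate τ ~ μ₀σL² for a conductor of conductivity σ and size L; a back-of-the-envelope check (the conductivity and size values are illustrative assumptions):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def eddy_decay_time(sigma, size):
    """Order-of-magnitude magnetic diffusion (eddy-current decay) time
    tau ~ mu0 * sigma * L**2 for conductivity sigma (S/m) and
    characteristic size L (m)."""
    return MU0 * sigma * size ** 2

# grey matter ~0.1 S/m, saline ~1 S/m, over a head-sized region (0.1 m)
for label, sigma in [("grey matter", 0.1), ("saline", 1.0)]:
    tau = eddy_decay_time(sigma, 0.1)
    print(f"{label}: tau ~ {tau:.2e} s")
```

Both cases come out in the nanosecond range, orders of magnitude shorter than any MR timing, which is the crux of the paper's negative result.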

  11. Microwave imaging for conducting scatterers by hybrid particle swarm optimization with simulated annealing

    International Nuclear Information System (INIS)

    Mhamdi, B.; Grayaa, K.; Aguili, T.

    2011-01-01

    In this paper, a microwave imaging technique for reconstructing the shape of two-dimensional perfectly conducting scatterers by means of a stochastic optimization approach is investigated. Based on the boundary condition and the measured scattered field derived from transverse magnetic illuminations, a set of nonlinear integral equations is obtained and the imaging problem is reformulated into an optimization problem. A hybrid approximation algorithm, called PSO-SA, is developed in this work to solve the inverse scattering problem. In the hybrid algorithm, particle swarm optimization (PSO) combines global and local search to find near-optimal solutions in reasonable time, while simulated annealing (SA) accepts worse solutions with a certain probability to avoid becoming trapped in local optima. The hybrid approach elegantly combines the exploration ability of PSO with the exploitation ability of SA. Reconstruction results are compared with the exact shapes of several conducting cylinders, and good agreement with the original shapes is observed.
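A compact sketch of the PSO-SA idea: standard particle-swarm velocity updates, with a simulated-annealing acceptance rule applied to each particle's personal best. The test function, cooling schedule and parameters are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def pso_sa(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, T0=1.0):
    """Particle swarm search with an SA acceptance rule: a worse candidate
    may still replace a particle's personal best with probability
    exp(-delta/T), with the temperature T cooled toward zero."""
    x = rng.uniform(-5.0, 5.0, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    gval = float(pval.min())
    for k in range(iters):
        T = T0 * (1.0 - k / iters) + 1e-12          # linear cooling
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        for i in range(n):
            delta = fx[i] - pval[i]
            # SA step: always accept improvements, sometimes accept worse
            if delta < 0 or rng.random() < np.exp(-delta / T):
                pbest[i], pval[i] = x[i].copy(), fx[i]
        if fx.min() < gval:                          # global best never lost
            gval = float(fx.min())
            g = x[fx.argmin()].copy()
    return g, gval

sphere = lambda p: float(np.sum(p * p))
best, val = pso_sa(sphere, dim=2)
print(best, val)
```

In the paper's setting, `f` would be the misfit between measured and simulated scattered fields for a candidate cylinder shape rather than this toy sphere function.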

  12. Monotonicity-based electrical impedance tomography for lung imaging

    Science.gov (United States)

    Zhou, Liangdong; Harrach, Bastian; Seo, Jin Keun

    2018-04-01

    This paper presents a monotonicity-based spatiotemporal conductivity imaging method for continuous regional lung monitoring using electrical impedance tomography (EIT). The EIT data (i.e. the boundary current-voltage data) can be decomposed into pulmonary, cardiac and other components using their different periodic natures. The time-differential current-voltage operator corresponding to lung ventilation can be viewed as either positive or negative semi-definite owing to monotonic conductivity changes within the lung regions. We used these monotonicity constraints to improve the quality of lung EIT imaging. We tested the proposed methods in numerical simulations, phantom experiments and human experiments.

  13. Global auroral conductance distribution due to electron and proton precipitation from IMAGE-FUV observations

    Directory of Open Access Journals (Sweden)

    V. Coumans

    2004-04-01

    The Far Ultraviolet (FUV) imaging system on board the IMAGE satellite provides a global view of the northern auroral region in three spectral channels, including the SI12 camera sensitive to Doppler-shifted Lyman-α emission. FUV images are used to produce instantaneous maps of electron mean energy and energy fluxes for precipitated protons and electrons. We describe a method to calculate ionospheric Hall and Pedersen conductivities induced by auroral proton and electron ionization, based on a model of the interaction of auroral particles with the atmosphere. Different assumptions on the energy spectral distribution for electrons and protons are compared. Global maps of ionospheric conductances due to instantaneous observation of precipitating protons are calculated. The contribution of auroral protons to the total conductance induced by both types of auroral particles is evaluated. This method is well adapted to analyzing the time evolution of ionospheric conductances due to precipitating particles over the auroral region or in particular sectors. Results are illustrated with conductance maps of the north polar region obtained during four periods with different activity levels. It is found that the proton contribution to conductance is relatively higher during quiet periods than during substorms. The proton contribution is higher in the period before the onset and strongly decreases during the expansion phase of substorms. During a substorm which occurred on 28 April 2001, a region of strong proton precipitation was observed with SI12 around 14:00 MLT at ~75° MLAT. Calculation of conductances in this sector shows that neglecting the proton contribution would produce a large error. We discuss possible effects of proton precipitation on electron precipitation in auroral arcs. The increase in the ionospheric conductivity, induced by prior proton precipitation, can reduce the potential drop
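For the electron channel, height-integrated conductances are often computed from mean energy and energy flux with the widely used Robinson et al. (1987) parameterization; a hedged sketch (this is the standard electron formula only, not the paper's proton model):

```python
import math

def pedersen_hall_conductance(E_mean_keV, flux_mW_m2):
    """Robinson et al. (1987) parameterization for electron precipitation:
    Sigma_P = 40*E/(16 + E^2) * sqrt(F), Sigma_H = 0.45*E**0.85 * Sigma_P,
    with E the mean energy (keV) and F the energy flux in mW/m^2
    (numerically equal to erg cm^-2 s^-1). Returns conductances in siemens."""
    sp = 40.0 * E_mean_keV / (16.0 + E_mean_keV ** 2) * math.sqrt(flux_mW_m2)
    sh = 0.45 * E_mean_keV ** 0.85 * sp
    return sp, sh

sp, sh = pedersen_hall_conductance(4.0, 1.0)  # 4 keV mean energy, 1 mW/m^2
print(sp, sh)
```

Applying such a formula pixel-by-pixel to the FUV-derived energy and flux maps is what turns precipitation images into the conductance maps discussed above; protons require a separate interaction model.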

  15. Wavelet-based compression of pathological images for telemedicine applications

    Science.gov (United States)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

    In this paper, we present a performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for use in an Internet-based telemedicine system. We first study how well suited wavelet-based coding is to the compression of pathological images, since these images often contain fine textures that are critical to the diagnosis of potential diseases. We compare wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies were performed in close collaboration with expert pathologists, who conducted the evaluation of the compressed pathological images, and with the communication engineers and information scientists who designed the proposed telemedicine system. The evaluations have shown that wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which wavelet-based coding is adopted for compression, to achieve bandwidth-efficient transmission and thereby speed up communication between the remote terminal and the central server of the telemedicine system.
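The wavelet thresholding at the heart of such compression can be sketched with a self-contained single-level Haar transform (a stand-in for the biorthogonal filters and entropy coding a real codec would use; the synthetic image and keep fraction are illustrative assumptions):

```python
import numpy as np

def haar2d(a):
    """One level of the orthonormal 2-D Haar transform (rows, then columns)."""
    def step(x):  # pairwise sums/differences along the last axis
        s = (x[..., ::2] + x[..., 1::2]) / np.sqrt(2.0)
        d = (x[..., ::2] - x[..., 1::2]) / np.sqrt(2.0)
        return np.concatenate([s, d], axis=-1)
    return step(step(a).swapaxes(0, 1)).swapaxes(0, 1)

def ihaar2d(c):
    """Exact inverse of haar2d (undo columns, then rows)."""
    def istep(x):
        h = x.shape[-1] // 2
        s, d = x[..., :h], x[..., h:]
        out = np.empty_like(x)
        out[..., ::2] = (s + d) / np.sqrt(2.0)
        out[..., 1::2] = (s - d) / np.sqrt(2.0)
        return out
    return istep(istep(c.swapaxes(0, 1)).swapaxes(0, 1))

def compress(img, keep=0.3):
    """Zero all but the largest `keep` fraction of wavelet coefficients."""
    c = haar2d(img)
    thresh = np.quantile(np.abs(c), 1.0 - keep)
    return ihaar2d(np.where(np.abs(c) >= thresh, c, 0.0))

# a smooth synthetic image compresses well under wavelet thresholding
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
img = np.sin(4 * x) + np.cos(3 * y)
rec = compress(img, keep=0.3)
err = np.linalg.norm(rec - img) / np.linalg.norm(img)
print(f"relative error keeping 30% of coefficients: {err:.4f}")
```

Multi-level decompositions and smoother wavelets (as in the paper's codecs) concentrate energy further and keep the fine textures that matter diagnostically at higher compression ratios.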

  16. Scintillator Based Coded-Aperture Imaging for Neutron Detection

    International Nuclear Information System (INIS)

    Hayes, Sean-C.; Gamage, Kelum-A-A.

    2013-06-01

    In this paper we assess variations in neutron images using a series of Monte Carlo simulations. We study neutron images of the same neutron source at different source locations, using a scintillator based coded-aperture system. The Monte Carlo simulations were conducted using the EJ-426 neutron scintillator detector. This type of detector has a low sensitivity to gamma rays and is therefore of particular use in a system whose source emits a mixed radiation field. Using different source locations, several neutron images have been produced and compared both qualitatively and quantitatively. This allows conclusions to be drawn on how well suited the scintillator based coded-aperture neutron imaging system is to detecting various neutron source locations. This type of neutron imaging system can readily be used to identify and locate nuclear materials precisely. (authors)
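The decoding principle of a coded-aperture system can be illustrated in one dimension: a point source casts a shifted copy of the mask onto the detector, and cross-correlating the detector image with the mask recovers the shift, i.e. the source position. The random mask and cyclic geometry are toy assumptions, not the EJ-426 simulation setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 64
mask = (rng.random(n) < 0.5).astype(float)  # random open/closed pattern

def detector_image(src_pos):
    """Shadow of the mask cast by a point source (cyclic toy geometry)."""
    return np.roll(mask, src_pos)

def decode(det):
    """Cross-correlate the detector image with the mask; the lag of the
    correlation peak is the estimated source position."""
    scores = [float(det @ np.roll(mask, s)) for s in range(n)]
    return int(np.argmax(scores))

print(decode(detector_image(20)))
```

Practical systems use specially designed patterns (e.g. MURA masks) whose cyclic autocorrelation is exactly flat off-peak, which sharpens this decoding for extended sources.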

  17. Subsurface imaging of water electrical conductivity, hydraulic permeability and lithology at contaminated sites by induced polarization

    Science.gov (United States)

    Maurya, P. K.; Balbarini, N.; Møller, I.; Rønde, V.; Christiansen, A. V.; Bjerg, P. L.; Auken, E.; Fiandaca, G.

    2018-05-01

    At contaminated sites, knowledge about geology and hydraulic properties of the subsurface and extent of the contamination is needed for assessing the risk and for designing potential site remediation. In this study, we have developed a new approach for characterizing contaminated sites through time-domain spectral induced polarization. The new approach is based on: (1) spectral inversion of the induced polarization data through a reparametrization of the Cole-Cole model, which disentangles the electrolytic bulk conductivity from the surface conductivity for delineating the contamination plume; (2) estimation of hydraulic permeability directly from the inverted parameters using a laboratory-derived empirical equation without any calibration; (3) the use of the geophysical imaging results for supporting the geological modelling and planning of drilling campaigns. The new approach was tested on a data set from the Grindsted stream (Denmark), where contaminated groundwater from a factory site discharges to the stream. Two overlapping areas were covered with seven parallel 2-D profiles each, one large area of 410 m × 90 m (5 m electrode spacing) and one detailed area of 126 m × 42 m (2 m electrode spacing). The geophysical results were complemented and validated by an extensive set of hydrologic and geologic information, including 94 estimates of hydraulic permeability obtained from slug tests and grain size analyses, 89 measurements of water electrical conductivity in groundwater, and four geological logs. On average the IP-derived and measured permeability values agreed within one order of magnitude, except for those close to boundaries between lithological layers (e.g. between sand and clay), where mismatches occurred due to the lack of vertical resolution in the geophysical imaging. An average formation factor was estimated from the correlation between the imaged bulk conductivity values and the water conductivity values measured in groundwater, in order to
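Step (1) builds on the Cole-Cole model of spectral induced polarization; for reference, here is the standard Pelton form of the complex resistivity spectrum (the paper uses a reparametrized version that separates bulk and surface conductivity; the parameter values below are illustrative):

```python
import numpy as np

def cole_cole_resistivity(omega, rho0, m, tau, c):
    """Pelton Cole-Cole model of complex resistivity:
    rho(w) = rho0 * (1 - m * (1 - 1/(1 + (i*w*tau)**c))),
    with chargeability m, relaxation time tau and frequency exponent c."""
    iwt = (1j * omega * tau) ** c
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + iwt)))

w = np.logspace(-2, 4, 7)  # angular frequencies, rad/s
rho = cole_cole_resistivity(w, rho0=100.0, m=0.1, tau=0.01, c=0.5)
print(np.abs(rho))
```

The low-frequency limit is ρ₀ and the high-frequency limit ρ₀(1 − m); fitting these spectral parameters voxel-by-voxel is what feeds the permeability and lithology estimates described above.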

  18. Imaged-Based Visual Servo Control for a VTOL Aircraft

    Directory of Open Access Journals (Sweden)

    Liying Zou

    2017-01-01

    This paper presents a novel control strategy to force a vertical take-off and landing (VTOL) aircraft to accomplish a pinpoint landing task. The control development is based on the image-based visual servoing method and the backstepping technique; its design differs from existing methods because the controller maps the image errors onto the actuator space via a visual model that does not contain the depth information of the feature point. The novelty of the proposed method is to extend the image-based visual servoing technique to VTOL aircraft control. In addition, Lyapunov theory is used to prove the asymptotic stability of the VTOL aircraft visual servoing system, while the image error converges to zero. Furthermore, simulations have been conducted to demonstrate the performance of the proposed method.

  19. Content Based Retrieval System for Magnetic Resonance Images

    International Nuclear Information System (INIS)

    Trojachanets, Katarina

    2010-01-01

    The number of medical images is continuously increasing as a consequence of the constant growth and development of techniques for digital image acquisition. Manual annotation and description of each image is an impractical, expensive and time consuming approach. Moreover, it is an imprecise and insufficient way of describing all the information stored in medical images. This induces the necessity of developing efficient image storage, annotation and retrieval systems. Content based image retrieval (CBIR) emerges as an efficient approach for digital image retrieval from large databases. It includes two phases. In the first phase, the visual content of the image is analyzed and the feature extraction process is performed. An appropriate descriptor, namely a feature vector, is then associated with each image. These descriptors are used in the second phase, i.e. the retrieval process. With the aim of improving the efficiency and precision of content based image retrieval systems, feature extraction and automatic image annotation techniques are the subject of continuous research and development. Including classification techniques in the retrieval process enables automatic image annotation in an existing CBIR system. It contributes to more efficient and easier image organization in the system. Applying content based retrieval in the field of magnetic resonance is a big challenge. Magnetic resonance imaging is an image based diagnostic technique which is widely used in the medical environment. Accordingly, the number of magnetic resonance images is growing enormously. Magnetic resonance images provide plentiful medical information, high resolution and a specific nature. Thus, the capability of CBIR systems for image retrieval from large databases is of great importance for efficient analysis of this kind of images. The aim of this thesis is to propose a content based retrieval system architecture for magnetic resonance images. To provide the system efficiency, feature
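The two CBIR phases described above (feature extraction, then similarity-based retrieval) can be sketched minimally; a plain intensity histogram stands in for the richer texture and shape descriptors a real system would use, and the toy "database" is synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

def feature_vector(img, bins=16):
    """Phase 1: descriptor extraction. A normalized intensity histogram
    serves as the feature vector here."""
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    h = h.astype(float)
    return h / (np.linalg.norm(h) + 1e-12)

def retrieve(query, database, k=3):
    """Phase 2: rank database images by cosine similarity of descriptors."""
    q = feature_vector(query)
    sims = [float(feature_vector(img) @ q) for img in database]
    return sorted(range(len(database)), key=lambda i: -sims[i])[:k]

# toy database: three dark images, three bright images
dark = [rng.random((32, 32)) * 0.3 for _ in range(3)]
bright = [0.7 + rng.random((32, 32)) * 0.3 for _ in range(3)]
database = dark + bright
query = dark[0] * 0.99  # a near-duplicate of the first dark image
print(retrieve(query, database, k=3))
```

A dark query retrieves the dark images first; in an MR setting the descriptor would encode modality-specific features and the linear scan would be replaced by an index.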

  20. Usefulness of tomographic phase image in ventricular conduction abnormalities

    International Nuclear Information System (INIS)

    Sakurai, Mitsuru; Watanabe, Yoshihiko; Kondo, Takeshi

    1985-01-01

    In order to evaluate three-dimensional phase changes in ventricular conduction abnormalities, tomographic phase images were constructed in 7 normal subjects, 12 patients with ventricular pacing, 21 patients with bundle branch block and 12 patients with Wolff-Parkinson-White syndrome. Eight to 12 slices of the short-axis ventricular tomographic phase image (TPI) were derived using a 7-pinhole collimator and compared with planar phase images (PPIs) in left anterior oblique (LAO) and right anterior oblique (RAO) projections. TPIs were excellent for observing biventricular phase changes in the long-axis direction. In 6 cases of complete right bundle branch block with left axis deviation (beyond −30°), the phase delay in the left ventricular anterior wall was recognized in 5 cases by TPI, although it was difficult to detect by PPIs. The site of the pacing electrode was identified by TPI in 11 out of 12 cases, compared to 8 cases by PPIs in LAO and RAO projections. The site of the accessory pathway in Wolff-Parkinson-White syndrome was detected in the basal slice of TPIs in 10 out of 12 cases, compared to 8 cases by PPI in the LAO projection. Therefore, it is obvious that TPIs offer more valid information than PPIs. In conclusion, TPI is useful for the investigation of ventricular conduction abnormalities. (author)

  1. Classification of materials for conducting spheroids based on the first order polarization tensor

    Science.gov (United States)

    Khairuddin, TK Ahmad; Mohamad Yunos, N.; Aziz, ZA; Ahmad, T.; Lionheart, WRB

    2017-09-01

    The polarization tensor is an old concept in mathematics and physics with many recent industrial applications, including medical imaging, nondestructive testing and metal detection. In these applications, it is theoretically formulated based on mathematical modelling in electrostatics, electromagnetics, or both. Generally, the polarization tensor represents the perturbation in the electric or electromagnetic field due to the presence of a conducting object and hence it also describes the object. Understanding the properties of the polarization tensor is necessary and important in order to apply it. Therefore, in this study, when the conducting object is a spheroid, we show that the polarization tensor is positive-definite if and only if the conductivity of the object is greater than one. In contrast, we also prove that the polarization tensor is negative-definite if and only if the conductivity of the object is between zero and one. These features categorize the conductivity of the spheroid based on its polarization tensor and can thus help to classify the material of the spheroid.
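The definiteness claim can be checked concretely for the sphere, a degenerate spheroid, whose first-order polarization tensor has the closed form M = 3V(k − 1)/(k + 2)·I for conductivity contrast k and volume V (sphere formula only; the general spheroid involves depolarization factors):

```python
import numpy as np

def sphere_polarization_tensor(k, volume):
    """First-order polarization tensor of a sphere:
    M = 3*V*(k - 1)/(k + 2) * I, with k the conductivity contrast
    relative to the background."""
    return 3.0 * volume * (k - 1.0) / (k + 2.0) * np.eye(3)

for k in (0.2, 5.0):
    M = sphere_polarization_tensor(k, volume=1.0)
    eigs = np.linalg.eigvalsh(M)
    kind = "positive" if eigs.min() > 0 else "negative"
    print(f"k = {k}: eigenvalues {eigs}, {kind}-definite")
```

As the abstract states, the eigenvalues flip sign at k = 1: positive-definite for k > 1, negative-definite for 0 < k < 1, so the sign of the tensor alone classifies the material contrast.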

  2. TH-AB-209-09: Quantitative Imaging of Electrical Conductivity by VHF-Induced Thermoacoustics

    Energy Technology Data Exchange (ETDEWEB)

    Patch, S; Hull, D [Avero Diagnostics, Irving, TX (United States); See, W [Medical College of Wisconsin, Milwaukee, WI (United States); Hanson, G [UW-Milwaukee, Milwaukee, WI (United States)

    2016-06-15

    Purpose: To demonstrate that very high frequency (VHF) induced thermoacoustics has the potential to provide quantitative images of electrical conductivity in Siemens/meter, much as shear wave elastography provides tissue stiffness in kPa. Quantitatively imaging a large organ requires exciting thermoacoustic pulses throughout the volume and broadband detection of those pulses because tomographic image reconstruction preserves frequency content. Applying the half-wavelength limit to a 200-micron inclusion inside a 7.5 cm diameter organ requires measurement sensitivity to frequencies ranging from 4 MHz down to 10 kHz, respectively. VHF irradiation provides superior depth penetration over the near infrared used in photoacoustics. Additionally, VHF signal production is proportional to electrical conductivity, and prostate cancer is known to suppress the electrical conductivity of prostatic fluid. Methods: A dual-transducer system utilizing a P4-1 array connected to a Verasonics V1 system augmented by a lower frequency focused single element transducer was developed. Simultaneous acquisition of VHF-induced thermoacoustic pulses by both transducers enabled comparison of transducer performance. Data from the clinical array generated a stack of 96 images with separation of 0.3 mm, whereas the single element transducer imaged only in a single plane. In-plane resolution and quantitative accuracy were measured at isocenter. Results: The array provided volumetric imaging capability with superior resolution, whereas the single element transducer provided superior quantitative accuracy. Combining axial images from both transducers preserved the resolution of the P4-1 array and improved image contrast. Neither transducer was sensitive to frequencies below 50 kHz, resulting in a DC offset and low-frequency shading over fields of view exceeding 15 mm. Fresh human prostates were imaged ex vivo and volumetric reconstructions reveal structures rarely seen in diagnostic images. Conclusion
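The frequency figures quoted in the abstract follow from the half-wavelength limit, f ≈ c/(2d). A quick check, assuming a soft-tissue sound speed of about 1500 m/s (an assumption, not a value from the record):

```python
C_TISSUE = 1500.0  # m/s, assumed speed of sound in soft tissue

def required_frequency(feature_size_m):
    """Acoustic frequency needed to resolve a feature of size d: f = c / (2*d)."""
    return C_TISSUE / (2.0 * feature_size_m)

f_high = required_frequency(200e-6)  # 200-micron inclusion
f_low = required_frequency(7.5e-2)   # 7.5 cm organ cross-section
print(f"{f_high / 1e6:.2f} MHz down to {f_low / 1e3:.0f} kHz")  # 3.75 MHz down to 10 kHz
```

The 3.75 MHz figure rounds to the "4 MHz" cited for the inclusion, and the organ-scale figure reproduces the 10 kHz lower bound.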

  3. Two-dimensional Tissue Image Reconstruction Based on Magnetic Field Data

    Directory of Open Access Journals (Sweden)

    J. Dedkova

    2012-09-01

    Full Text Available This paper introduces new possibilities within two-dimensional reconstruction of the internal conductivity distribution. In addition to the electric field inside the given object, the injected current causes a magnetic field, which can be measured either outside the object by means of a Hall probe or inside the object through magnetic resonance imaging. Magnetic resonance electrical impedance tomography (MREIT) is well known as a bio-imaging modality providing cross-sectional conductivity images with a good spatial resolution from measurements of the internal magnetic flux density produced by externally injected currents. A new algorithm for the conductivity reconstruction, which utilizes the internal current information with respect to the corresponding boundary conditions and the external magnetic field, was developed. A series of computer simulations has been conducted to assess the performance of the proposed algorithm within the process of estimating electrical conductivity changes in the lungs, heart, and brain tissues captured in two-dimensional piecewise homogeneous chest and head models. The reconstructed conductivity distribution using the proposed method is compared with that using a conventional method based on electrical impedance tomography (EIT). The acquired experience is discussed and the direction of further research is proposed.
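A standard internal-data step in MREIT, sketched here for illustration (this is not the paper's code), recovers the current density from the measured magnetic flux density via Ampère's law, J = ∇×B/μ₀:

```python
import numpy as np

MU0 = 4.0e-7 * np.pi  # vacuum permeability, H/m

def current_density(Bx, By, Bz, h):
    """J = curl(B)/mu0 on a uniform grid with spacing h, axes ordered (x, y, z)."""
    dBx = np.gradient(Bx, h)  # [dBx/dx, dBx/dy, dBx/dz]
    dBy = np.gradient(By, h)
    dBz = np.gradient(Bz, h)
    Jx = (dBz[1] - dBy[2]) / MU0
    Jy = (dBx[2] - dBz[0]) / MU0
    Jz = (dBy[0] - dBx[1]) / MU0
    return Jx, Jy, Jz

# Check on a field with a known curl: B = (-y, x, 0) has curl (0, 0, 2)
n, h = 9, 0.1
x, y, z = np.meshgrid(*(np.arange(n) * h,) * 3, indexing="ij")
Jx, Jy, Jz = current_density(-y, x, np.zeros_like(x), h)
print(np.allclose(Jz, 2.0 / MU0))  # True: finite differences are exact for a linear field
```

In practice only Bz is measurable in a single scanner orientation, which is one reason reconstruction algorithms like the one in this record combine internal data with boundary conditions and external field measurements.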

  4. IMAGE DESCRIPTIONS FOR SKETCH BASED IMAGE RETRIEVAL

    OpenAIRE

    SAAVEDRA RONDO, JOSE MANUEL; SAAVEDRA RONDO, JOSE MANUEL

    2008-01-01

    Due to the massive use of the Internet together with the proliferation of media devices, content based image retrieval has become an active discipline in computer science. A common content based image retrieval approach requires that the user gives a regular image (e.g., a photo) as a query. However, having a regular image as query may be a serious problem. Indeed, people commonly use an image retrieval system because they do not have the desired image. An easy alternative way t...

  5. Surface and borehole electromagnetic imaging of conducting contaminant plumes. 1997 annual progress report

    International Nuclear Information System (INIS)

    Berryman, J.G.

    1997-01-01

    Electromagnetic induction tomography is a promising new tool for imaging electrical conductivity variations in the earth. The EM source field is produced by induction coil (magnetic dipole) transmitters deployed at the surface or in boreholes. Vertical and horizontal component magnetic field detectors are deployed in other boreholes or on the surface. Sources and receivers are typically deployed in a configuration surrounding the region of interest. The goal of this procedure is to image electrical conductivity variations in the earth, much as x-ray tomography is used to image density variations through cross-sections of the body. Although such EM field techniques have been developed and applied, the algorithms for inverting the magnetic data to produce the desired images of electrical conductivity have not kept pace. One of the main reasons for the lag in the algorithm development has been the fact that the magnetic induction problem is inherently three dimensional: other imaging methods such as x-ray and seismic can make use of two-dimensional approximations that are not too far from reality, but no such luxury exists in EM induction tomography. In addition, previous field experiments were conducted at controlled test sites that typically do not have much external noise or extensive surface clutter problems often associated with environmental sites. To use the same field techniques in environments more typical of cleanup sites requires a new set of data processing tools to remove the effects of both noise and clutter. The goal of this project is to join theory and experiment to produce enhanced images of electrically conducting fluids underground, allowing better localization of contaminants and improved planning strategies for the subsequent remediation efforts. After explaining the physical context in more detail, this report will summarize the progress made in the first year of this project: (1) on code development and (2) on field tests of

  6. Anthropometric body measurements based on multi-view stereo image reconstruction.

    Science.gov (United States)

    Li, Zhaoxin; Jia, Wenyan; Mao, Zhi-Hong; Li, Jie; Chen, Hsin-Chen; Zuo, Wangmeng; Wang, Kuanquan; Sun, Mingui

    2013-01-01

    Anthropometric measurements, such as the circumferences of the hip, arm, leg and waist, waist-to-hip ratio, and body mass index, are of high significance in obesity and fitness evaluation. In this paper, we present a home-based imaging system capable of conducting anthropometric measurements. Body images are acquired at different angles using a home camera and a simple rotating disk. Advanced image processing algorithms are utilized for 3D body surface reconstruction. A coarse body shape model is first established from segmented body silhouettes. Then, this model is refined through an inter-image consistency maximization process based on an energy function. Our experimental results using both a mannequin surrogate and a real human body validate the feasibility of the proposed system.
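Once the surface is reconstructed, circumferences are read off closed slice contours and combined into indices such as the waist-to-hip ratio. A minimal illustration of that measurement step, using synthetic circular contours rather than the authors' reconstruction:

```python
import numpy as np

def circumference(contour_xy):
    """Perimeter of a closed contour given as ordered (N, 2) boundary points,
    e.g. one horizontal slice of the reconstructed body surface."""
    closed = np.vstack([contour_xy, contour_xy[:1]])  # close the polygon
    return float(np.hypot(*np.diff(closed, axis=0).T).sum())

# Sanity check on synthetic circular "waist" and "hip" slices (radii in metres)
theta = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
waist = 0.13 * np.column_stack([np.cos(theta), np.sin(theta)])
hip = 0.17 * np.column_stack([np.cos(theta), np.sin(theta)])
whr = circumference(waist) / circumference(hip)  # waist-to-hip ratio
print(round(whr, 3))  # 0.765, the ratio of the radii
```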

  7. Content-Based Image Retrieval Based on Hadoop

    Directory of Open Access Journals (Sweden)

    DongSheng Yin

    2013-01-01

    Full Text Available Generally, the time complexity of content-based image retrieval algorithms is extremely high. In order to retrieve images from large-scale databases efficiently, a new retrieval method based on the Hadoop distributed framework is proposed. Firstly, a database of image features is built using the Speeded Up Robust Features (SURF) algorithm and Locality-Sensitive Hashing (LSH); the search is then performed on the Hadoop platform in a specially designed parallel way. Extensive experimental results show that the method is able to retrieve images by content effectively on large-scale clusters and image sets.
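Locality-Sensitive Hashing is what keeps the distributed search cheap: similar descriptors usually land in the same bucket, so candidate lookup is a hash-table probe rather than a linear scan. A minimal random-hyperplane LSH sketch (illustrative only; the class, dimensions, and labels are not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

class LSHIndex:
    """Random-hyperplane LSH: descriptors with a small angle between them
    usually fall into the same bucket, so a query is a dict lookup."""
    def __init__(self, dim, n_bits=16):
        self.planes = rng.standard_normal((n_bits, dim))
        self.buckets = {}

    def _key(self, v):
        return tuple((self.planes @ v > 0).astype(int))  # n_bits sign bits

    def add(self, v, label):
        self.buckets.setdefault(self._key(v), []).append(label)

    def query(self, v):
        return self.buckets.get(self._key(v), [])

index = LSHIndex(dim=64)  # e.g. 64-dimensional SURF descriptors
desc = rng.standard_normal(64)
index.add(desc, "img_001")
print(index.query(desc))  # ['img_001']
```

In a MapReduce setting each mapper would hash and probe its shard of the feature database independently, which is what makes the scheme parallelize well.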

  8. Electrical conductivity imaging in the western Pacific subduction zone

    Science.gov (United States)

    Utada, Hisashi; Baba, Kiyoshi; Shimizu, Hisayoshi

    2010-05-01

    Oceanic plate subduction is an important process for the dynamics and evolution of the Earth's interior, as it is regarded as a typical downward flow of the mantle convection that transports materials from the near surface to the deep mantle. A recent seismological study showed evidence suggesting the transportation of a certain amount of water by subduction of an old oceanic plate such as the Pacific plate down to 150-200 km depth into the back arc mantle. However, it is not well clarified how deep into the mantle the water can be transported. The electromagnetic induction method to image the electrical conductivity distribution is a possible tool to answer this question, as it is known to be sensitive to the presence of water. Here we show recent results of an observational study of the western Pacific subduction zone to examine the electrical conductivity distribution in the upper mantle and in the mantle transition zone (MTZ), which provides implications for how water is distributed in the mantle. We take two kinds of approach for imaging the mantle conductivity, (a) semi-global and (b) regional induction approaches. Results may be summarized as follows: (a) Long (5-30 years) time series records from 8 submarine cables and 13 geomagnetic observatories in the north Pacific region were analyzed and long period magnetotelluric (MT) and geomagnetic deep sounding (GDS) responses were estimated in the period range from 1.7 to 35 days. These frequency dependent response functions were inverted to a 3-dimensional conductivity distribution in the depth range between 350 and 850 km. Three major features are suggested in the MTZ depth: (1) a high conductivity anomaly beneath the Philippine Sea, (2) a high conductivity anomaly beneath the Hawaiian Islands, and (3) a low conductivity anomaly beneath and in the vicinity of northern Japan.
(b) A three-year long deployment of ocean bottom electro-magnetometers (OBEMs) was conducted in the Philippine Sea and west Pacific Ocean from 2005

  9. Conducting Polymer Based Nanobiosensors

    Directory of Open Access Journals (Sweden)

    Chul Soon Park

    2016-06-01

    Full Text Available In recent years, conducting polymer (CP) nanomaterials have been used in a variety of fields, such as in energy, environmental, and biomedical applications, owing to their outstanding chemical and physical properties compared to conventional metal materials. In particular, nanobiosensors based on CP nanomaterials exhibit excellent performance in sensing target molecules. The performance of CP nanobiosensors varies based on their size, shape, conductivity, and morphology, among other characteristics. Therefore, in this review, we provide an overview of the techniques commonly used to fabricate novel CP nanomaterials and their biosensor applications, including aptasensors, field-effect transistor (FET) biosensors, human sense mimicking biosensors, and immunoassays. We also discuss prospects for state-of-the-art nanobiosensors using CP nanomaterials by focusing on strategies to overcome the current limitations.

  10. Computed Tomography Image Origin Identification Based on Original Sensor Pattern Noise and 3-D Image Reconstruction Algorithm Footprints.

    Science.gov (United States)

    Duan, Yuping; Bouslimi, Dalel; Yang, Guanyu; Shu, Huazhong; Coatrieux, Gouenou

    2017-07-01

    In this paper, we focus on the "blind" identification of the computed tomography (CT) scanner that has produced a CT image. To do so, we propose a set of noise features derived from the image acquisition chain which can be used as a CT-scanner footprint. We propose two approaches. The first one aims at identifying a CT scanner based on an original sensor pattern noise (OSPN) that is intrinsic to the X-ray detectors. The second one identifies an acquisition system based on the way this noise is modified by its three-dimensional (3-D) image reconstruction algorithm. As these reconstruction algorithms are manufacturer dependent and kept secret, our features are used as input to train a support vector machine (SVM) based classifier to discriminate acquisition systems. Experiments conducted on images acquired from 15 different CT-scanner models of 4 distinct manufacturers demonstrate that our system identifies the origin of one CT image with a detection rate of at least 94% and that it achieves better performance than the sensor pattern noise (SPN) based strategy proposed for general public camera devices.
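The OSPN approach parallels sensor pattern noise forensics for cameras: a device's fixed pattern survives denoising, accumulates when residuals are averaged into a fingerprint, and a probe image's residual correlates best with the fingerprint of the device that produced it. A self-contained toy simulation of that pipeline (all data are synthetic, and a box filter stands in for the wavelet denoisers used in practice; this is not the paper's feature set or classifier):

```python
import numpy as np

def box_denoise(img):
    """3x3 box filter; a cheap stand-in for a proper denoiser."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    acc = np.zeros((h, w))
    for di in range(3):
        for dj in range(3):
            acc += p[di:di + h, dj:dj + w]
    return acc / 9.0

def noise_residual(img):
    return img - box_denoise(img)

def ncc(a, b):
    """Normalized cross-correlation between two residual images."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Simulate two scanners, each with its own fixed sensor pattern
rng = np.random.default_rng(1)
pat_a, pat_b = rng.standard_normal((2, 64, 64))
content = np.add.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))

def shoot(pattern):  # smooth content + fixed pattern + per-image shot noise
    return content + 0.2 * pattern + 0.1 * rng.standard_normal((64, 64))

fp_a = np.mean([noise_residual(shoot(pat_a)) for _ in range(20)], axis=0)
fp_b = np.mean([noise_residual(shoot(pat_b)) for _ in range(20)], axis=0)
probe = noise_residual(shoot(pat_a))        # new image from scanner A
print(ncc(probe, fp_a) > ncc(probe, fp_b))  # True: the fingerprint identifies A
```

The paper goes further by feeding such noise features to an SVM, which also captures how each manufacturer's secret 3-D reconstruction algorithm reshapes the noise.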

  11. Level-set-based reconstruction algorithm for EIT lung images: first clinical results.

    Science.gov (United States)

    Rahmati, Peyman; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz; Adler, Andy

    2012-05-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure-volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM.

  12. Level-set-based reconstruction algorithm for EIT lung images: first clinical results

    International Nuclear Information System (INIS)

    Rahmati, Peyman; Adler, Andy; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz

    2012-01-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure–volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM. (paper)

  13. Database for hydraulically conductive fractures. Update 2009

    International Nuclear Information System (INIS)

    Palmen, J.; Tammisto, E.; Ahokas, H.

    2010-03-01

    Posiva flow logging (PFL) with a 0.5 m test interval made in 10 cm steps can be used for the determination of the depth of hydraulically conductive fractures. Together with drillhole wall images and fracture data from core logging, PFL provides possibilities to detect individual conductive fractures. In this report, the results of PFL are combined with fracture data on drillholes OL-KR41 - OL-KR48, OL-KR41B - OL-KR45B and pilot holes ONK-PH8 - ONK-PH10. In addition, HTU-data measured with a 2 m section length and 2 m steps in holes OL-KR39 and OL-KR40 at depths 300-700 m were analyzed and combined with fracture data in a similar way. The conductive fractures were first recognised from PFL data and digital drillhole images and then the fractures from the core logging that correspond to the ones picked from the digital drillhole images were identified. The conductive fractures were primarily recognised in the images based on the openness of fractures or a visible flow in the image. In most of the cases, no tails of flow were seen in the image. In these cases, the conductive fractures were recognised in the image based on the openness of fractures and a matching depth. On the basis of the results, hydraulically conductive fractures/zones could in most cases be distinguished in the drillhole wall images. An important phase in the work is the calibration of the depth of the image, flow logging and the HTU logging with the sample length. In addition to results of PFL-correlation, Hydraulic Testing Unit (HTU) data measured with a 2 m section length and 2 m steps was studied at selected depths for holes OL-KR39, OL-KR40, OL-KR42 and OL-KR45. Due to the low HTU section depth accuracy, the conducting fractures were successfully correlated with Fracture Data Base (FDB) fractures only in drillholes OL-KR39 and OL-KR40. HTU-data depth matching in these two drillholes was performed using geophysical Single Point Resistance (SPR) data both from geophysical and PFL measurements as a depth

  14. Reconstruction of conductivity changes and electrode movements based on EIT temporal sequences

    International Nuclear Information System (INIS)

    Dai, Tao; Gómez-Laberge, Camille; Adler, Andy

    2008-01-01

    Electrical impedance tomography (EIT) reconstructs a conductivity change image within a body from electrical measurements on the body surface; while it has relatively low spatial resolution, it has a high temporal resolution. One key difficulty with EIT measurements is due to the movement and position uncertainty of the electrodes, especially due to breathing and posture change. In this paper, we develop an approach to reconstruct both the conductivity change image and the electrode movements from the temporal sequence of EIT measurements. Since both the conductivity change and electrode movement are slow with respect to the data frame rate, there are significant temporal correlations which we formulate as priors for the regularized image reconstruction model. Image reconstruction is posed in terms of a regularization matrix and a Jacobian matrix which are augmented for the conductivity change and electrode movement, and then further augmented to concatenate the d previous and future frames. Results are shown for simulation, phantom and human data, and show that the proposed algorithm yields improved resolution and noise performance in comparison to a conventional one-step reconstruction method
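The augmentation described above can be sketched as a one-step regularized least-squares solve over the concatenated unknowns [Δσ; Δp], using the stacked Jacobian [Jc Jm]. In the toy example below, random matrices stand in for real EIT Jacobians and the prior is plain Tikhonov rather than the temporally correlated prior of the paper; dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n_meas, n_pix, n_move = 208, 64, 32  # measurements, image pixels, electrode dofs

Jc = rng.standard_normal((n_meas, n_pix))   # conductivity-change Jacobian
Jm = rng.standard_normal((n_meas, n_move))  # electrode-movement Jacobian
J = np.hstack([Jc, Jm])                     # augmented Jacobian

x_true = np.zeros(n_pix + n_move)
x_true[10] = 1.0         # one conductivity change
x_true[n_pix + 3] = 0.5  # one electrode movement
y = J @ x_true + 0.01 * rng.standard_normal(n_meas)

lam = 1e-2
R = np.eye(n_pix + n_move)  # simple Tikhonov prior over both unknown blocks
x_hat = np.linalg.solve(J.T @ J + lam * R, J.T @ y)

d_sigma, d_move = x_hat[:n_pix], x_hat[n_pix:]
print(int(np.argmax(np.abs(d_sigma))), int(np.argmax(np.abs(d_move))))  # 10 3
```

The paper's method additionally concatenates the d previous and future frames and encodes their temporal correlation in R, which is what improves resolution and noise performance over this one-step baseline.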

  15. Parameter-based estimation of CT dose index and image quality using an in-house android™-based software

    International Nuclear Information System (INIS)

    Mubarok, S; Lubis, L E; Pawiro, S A

    2016-01-01

    Compromise between radiation dose and image quality is essential in the use of CT imaging. The CT dose index (CTDI) is currently the primary dosimetric formalism in CT, while the low- and high-contrast resolutions are aspects indicating image quality. This study aimed to estimate CTDIvol and image quality measures through a range of exposure parameter variations. CTDI measurements were performed using a PMMA (polymethyl methacrylate) phantom of 16 cm diameter, while the image quality test was conducted using a Catphan® 600 phantom. CTDI measurements were carried out according to the IAEA TRS 457 protocol in axial scan mode, under varied tube voltage, collimation or slice thickness, and tube current. The image quality test was conducted under the same exposure parameters as the CTDI measurements. An Android™-based software application was also a result of this study. The software was designed to estimate CTDIvol, with a maximum difference of 8.97% from the measured CTDIvol. Image quality can also be estimated through the CNR parameter, with a maximum difference of 21.65% from the measured CNR. (paper)

  16. Nanoplatform-based molecular imaging

    National Research Council Canada - National Science Library

    Chen, Xiaoyuan

    2011-01-01

    "Nanoplathform-Based Molecular Imaging provides rationale for using nanoparticle-based probes for molecular imaging, then discusses general strategies for this underutilized, yet promising, technology...

  17. Database for Hydraulically Conductive Fractures. Update 2010

    International Nuclear Information System (INIS)

    Tammisto, E.; Palmen, J.

    2011-02-01

    Posiva flow logging (PFL) with a 0.5 m test interval made in 10 cm steps can be used for exact depth determination of hydraulically conductive fractures. Together with drillhole wall images and fracture data from core logging, PFL provides possibilities to detect single conductive fractures. In this report, the results of PFL are combined with the fracture data in drillholes OL-KR49 .. OL-KR53, OL-KR50B, OL-KR52B and OL-KR53B and pilot holes ONK-PH11 - ONK-PH13. The results are used mainly in the development of hydroDFN models. The conductive fractures were first recognised from the PFL data and digital drillhole images and then the fractures from the core logging corresponding to the ones picked from the digital drillhole images were identified. The conductive fractures were recognised from the images primarily based on the openness of fractures or a visible flow in the image. In most of the cases of measured flow, no tails of flow were seen in the image. In these cases, the conductive fractures were recognised from the image based on the openness of fractures and a matching depth. According to the results, the hydraulically conductive fractures/zones can be distinguished from the drillhole wall images in most cases. An important phase in the work is to calibrate the depth of the image and the flow logging with the sample length. The hydraulic conductivity is clearly higher in the upper part of the bedrock in the depth range 0-150 m below sea level than deeper in the bedrock. The frequency of hydraulically conductive fractures detected in flow logging (T > 10⁻¹⁰-10⁻⁹ m²/s) in the depth range 0-150 m varies from 0.07 to 0.84 fractures/meter of sample length. Deeper in the rock the conductive fractures are less frequent, but often occur in groups of a few fractures. In drillholes OL-KR49 .. OL-KR53, OL-KR50B, OL-KR52B and OL-KR53B about 8.5 % of all fractures and 4.4 % of the conductive fractures are within HZ-structures. (orig.)

  18. Multi-Label Classification Based on Low Rank Representation for Image Annotation

    Directory of Open Access Journals (Sweden)

    Qiaoyu Tan

    2017-01-01

    Full Text Available Annotating remote sensing images is a challenging task because of its labor-demanding annotation process and requirement of expert knowledge, especially when images can be annotated with multiple semantic concepts (or labels). To automatically annotate these multi-label images, we introduce an approach called Multi-Label Classification based on Low Rank Representation (MLC-LRR). MLC-LRR firstly utilizes low rank representation in the feature space of images to compute the low rank constrained coefficient matrix, then it adapts the coefficient matrix to define a feature-based graph and to capture the global relationships between images. Next, it utilizes low rank representation in the label space of labeled images to construct a semantic graph. Finally, these two graphs are exploited to train a graph-based multi-label classifier. To validate the performance of MLC-LRR against other related graph-based multi-label methods in annotating images, we conduct experiments on a publicly available multi-label remote sensing image dataset (Land Cover). We perform additional experiments on five real-world multi-label image datasets to further investigate the performance of MLC-LRR. Empirical study demonstrates that MLC-LRR achieves better performance in annotating images than the competing methods across various evaluation criteria; it can also effectively exploit the global structure and label correlations of multi-label images.

  19. Experimental evaluation of electrical conductivity imaging of anisotropic brain tissues using a combination of diffusion tensor imaging and magnetic resonance electrical impedance tomography

    Energy Technology Data Exchange (ETDEWEB)

    Sajib, Saurav Z. K.; Jeong, Woo Chul; Oh, Tong In; Kim, Hyung Joong, E-mail: bmekim@khu.ac.kr, E-mail: ejwoo@khu.ac.kr; Woo, Eung Je, E-mail: bmekim@khu.ac.kr, E-mail: ejwoo@khu.ac.kr [Department of Biomedical Engineering, Kyung Hee University, Seoul 02447 (Korea, Republic of); Kyung, Eun Jung [Department of Pharmacology, Chung-Ang University, Seoul 06974 (Korea, Republic of); Kim, Hyun Bum [Department of East-West Medical Science, Kyung Hee University, Yongin 17104 (Korea, Republic of); Kwon, Oh In [Department of Mathematics, Konkuk University, Seoul 05029 (Korea, Republic of)

    2016-06-15

    Anisotropy of biological tissues is a low-frequency phenomenon that is associated with the function and structure of cell membranes. Imaging of anisotropic conductivity has potential for the analysis of interactions between electromagnetic fields and biological systems, such as the prediction of current pathways in electrical stimulation therapy. To improve application to the clinical environment, precise approaches are required to understand the exact responses inside the human body subjected to the stimulated currents. In this study, we experimentally evaluate the anisotropic conductivity tensor distribution of canine brain tissues, using a recently developed diffusion tensor-magnetic resonance electrical impedance tomography method. At low frequency, electrical conductivity of the biological tissues can be expressed as a product of the mobility and concentration of ions in the extracellular space. From diffusion tensor images of the brain, we can obtain directional information on diffusive movements of water molecules, which correspond to the mobility of ions. The position dependent scale factor, which provides information on ion concentration, was successfully calculated from the magnetic flux density, to obtain the equivalent conductivity tensor. By combining the information from both techniques, we can finally reconstruct the anisotropic conductivity tensor images of brain tissues. The reconstructed conductivity images better demonstrate the enhanced signal intensity in strongly anisotropic brain regions, compared with those resulting from previous methods using a global scale factor.
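The combination step reduces to a linear relation σ = ηD between the conductivity tensor and the water-diffusion tensor, with the position-dependent scale factor η estimated from the measured magnetic flux density; σ then inherits the eigenvectors (fiber directions) of D. A toy numerical illustration (the tensor eigenvalues and η below are assumed for the example, not taken from the study):

```python
import numpy as np

def conductivity_tensor(D, eta):
    """Linear DT-MREIT relation sigma = eta * D: the conductivity tensor shares
    eigenvectors with the diffusion tensor; eigenvalues are scaled by eta."""
    return eta * D

rng = np.random.default_rng(3)
Q = np.linalg.qr(rng.standard_normal((3, 3)))[0]  # random orthonormal fiber frame
D = Q @ np.diag([1.7e-3, 0.3e-3, 0.3e-3]) @ Q.T   # fiber-like diffusion tensor, mm^2/s
eta = 600.0                                       # assumed scale factor for this voxel
sigma = conductivity_tensor(D, eta)

wD = np.linalg.eigvalsh(D)
ws = np.linalg.eigvalsh(sigma)
print(np.allclose(ws, eta * wD))  # True: the anisotropy ratio is preserved
```

The contribution of the record above is precisely that η is recovered voxel by voxel from the magnetic flux density, rather than applied as a single global constant.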

  20. Shape-based interpolation of multidimensional grey-level images

    International Nuclear Information System (INIS)

    Grevera, G.J.; Udupa, J.K.

    1996-01-01

    Shape-based interpolation as applied to binary images causes the interpolation process to be influenced by the shape of the object. It accomplishes this by first applying a distance transform to the data. This results in the creation of a grey-level data set in which the value at each point represents the minimum distance from that point to the surface of the object. (By convention, points inside the object are assigned positive values; points outside are assigned negative values.) This distance transformed data set is then interpolated using linear or higher-order interpolation and is then thresholded at a distance value of zero to produce the interpolated binary data set. In this paper, the authors describe a new method that extends shape-based interpolation to grey-level input data sets. This generalization consists of first lifting the n-dimensional (n-D) image data to represent it as a surface, or equivalently as a binary image, in an (n + 1)-dimensional [(n + 1)-D] space. The binary shape-based method is then applied to this image to create an (n + 1)-D binary interpolated image. Finally, this image is collapsed (inverse of lifting) to create the n-D interpolated grey-level data set. The authors have conducted several evaluation studies involving patient computed tomography (CT) and magnetic resonance (MR) data as well as mathematical phantoms. They all indicate that the new method produces more accurate results than commonly used grey-level linear interpolation methods, although at the cost of increased computation
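The binary pipeline that the authors generalize can be sketched directly: compute a signed distance transform of each slice (positive inside, negative outside, as in the record), interpolate the distance maps, and threshold at zero. A small brute-force illustration (real implementations use fast distance transforms; this O(N²) version is only for clarity):

```python
import numpy as np

def signed_distance(mask):
    """Signed distance map: positive inside the object, negative outside
    (brute force over pixel pairs; fine for small demo grids)."""
    grid = np.argwhere(np.ones_like(mask, dtype=bool)).astype(float)
    inside = np.argwhere(mask).astype(float)
    outside = np.argwhere(~mask).astype(float)

    def dist_to(targets):
        d = np.linalg.norm(grid[:, None, :] - targets[None, :, :], axis=2)
        return d.min(axis=1).reshape(mask.shape)

    return np.where(mask, dist_to(outside), -dist_to(inside))

# Interpolate a slice halfway between a small and a large disc
yy, xx = np.mgrid[0:21, 0:21]
r = np.hypot(yy - 10, xx - 10)
small, large = r <= 4, r <= 8
midway = 0.5 * (signed_distance(small) + signed_distance(large)) > 0
print(midway[10, 15], midway[10, 18])  # True False: the result is a disc of radius ~6
```

The paper's contribution is the lifting trick that turns an n-D grey-level image into an (n+1)-D binary one so this same binary machinery applies.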

  1. R-FCN Object Detection Ensemble based on Object Resolution and Image Quality

    DEFF Research Database (Denmark)

    Rasmussen, Christoffer Bøgelund; Nasrollahi, Kamal; Moeslund, Thomas B.

    2017-01-01

    Object detection can be difficult due to challenges such as variations in objects both inter- and intra-class. Additionally, variations can also be present between images. Based on this, research was conducted into creating an ensemble of Region-based Fully Convolutional Networks (R-FCN) object d...

  2. Evidence-based cancer imaging

    Energy Technology Data Exchange (ETDEWEB)

    Shinagare, Atul B.; Khorasani, Ramin [Dept. of Radiology, Brigham and Women's Hospital, Boston (United States)

    2017-01-15

    With the advances in the field of oncology, imaging is increasingly used in the follow-up of cancer patients, leading to concerns about over-utilization. Therefore, it has become imperative to make imaging more evidence-based, efficient, cost-effective and equitable. This review explores the strategies and tools to make diagnostic imaging more evidence-based, mainly in the context of follow-up of cancer patients.

  3. Prospective regularization design in prior-image-based reconstruction

    International Nuclear Information System (INIS)

    Dang, Hao; Siewerdsen, Jeffrey H; Stayman, J Webster

    2015-01-01

    Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in

  4. Edge-based correlation image registration for multispectral imaging

    Science.gov (United States)

    Nandy, Prabal [Albuquerque, NM

    2009-11-17

    Registration information for images of a common target obtained from a plurality of different spectral bands can be obtained by combining edge detection and phase correlation. The images are edge-filtered, and pairs of the edge-filtered images are then phase correlated to produce phase correlation images. The registration information can be determined based on these phase correlation images.
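The two-step scheme above (edge filtering, then phase correlation of image pairs) hinges on the phase-correlation core: the normalized cross-power spectrum of two translated images transforms back to an impulse at the shift. A numpy-only sketch of that core, on a toy image standing in for an edge-filtered band:

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the integer (row, col) shift of image a relative to b via
    phase correlation: the normalized cross-power spectrum inverse-FFTs
    to an impulse at the translation."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12              # keep phase only
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peak positions to signed shifts.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

# Stand-in for an edge-filtered band image and a translated copy of it.
img = np.random.default_rng(0).random((64, 64))
shifted = np.roll(img, shift=(3, 5), axis=(0, 1))
offset = phase_correlate(shifted, img)   # recovers the (3, 5) translation
```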

  5. Single underwater image enhancement based on color cast removal and visibility restoration

    Science.gov (United States)

    Li, Chongyi; Guo, Jichang; Wang, Bo; Cong, Runmin; Zhang, Yan; Wang, Jian

    2016-05-01

    Images taken underwater usually suffer from color cast and a serious loss of contrast and visibility, which makes degraded underwater images inconvenient for observation and analysis. In order to address these problems, an underwater image-enhancement method is proposed. A simple yet effective underwater image color cast removal algorithm is first presented based on optimization theory. Then, based on the minimum information loss principle and the inherent relationship among the medium transmission maps of the three color channels in an underwater image, an effective visibility restoration algorithm is proposed to recover the visibility, contrast, and natural appearance of degraded underwater images. To evaluate the performance of the proposed method, qualitative comparison, quantitative comparison, and a color accuracy test are conducted. Experimental results demonstrate that the proposed method can effectively remove color cast, improve contrast and visibility, and recover the natural appearance of degraded underwater images. Additionally, the proposed method is comparable to and even better than several state-of-the-art methods.

  6. Pleasant/Unpleasant Filtering for Affective Image Retrieval Based on Cross-Correlation of EEG Features

    Directory of Open Access Journals (Sweden)

    Keranmu Xielifuguli

    2014-01-01

    People often make decisions based on sensitivity rather than rationality. In the field of biological information processing, methods are available for analyzing biological information directly from electroencephalograms (EEG) to determine the pleasant/unpleasant reactions of users. In this study, we propose a sensitivity filtering technique for discriminating preferences (pleasant/unpleasant) for images using an EEG-based sensitivity image filtering system. Using a set of images retrieved by similarity retrieval, we perform sensitivity-based pleasant/unpleasant classification of images based on the affective features extracted from images with the maximum entropy method (MEM). In the present study, the affective features comprised cross-correlation features obtained from EEGs recorded while an individual observed an image. However, it is difficult to measure the EEG when a subject visualizes an unknown image. Thus, we propose a solution where a linear regression method based on canonical correlation is used to estimate the cross-correlation features from image features. Experiments were conducted to evaluate the validity of sensitivity filtering compared with image similarity retrieval methods based on image features. We found that sensitivity filtering using color correlograms was suitable for classifying preferred images, while sensitivity filtering using local binary patterns was suitable for classifying unpleasant images. Moreover, sensitivity filtering using local binary patterns for unpleasant images had a 90% success rate. Thus, we conclude that the proposed method is efficient for filtering unpleasant images.

  7. Lessons Learned: Conducting Research With Victims Portrayed in Sexual Abuse Images and Their Parents.

    Science.gov (United States)

    Walsh, Wendy A; Wolak, Janis; Lounsbury, Kaitlin; Howley, Susan; Lippert, Tonya; Thompson, Lawrence

    2016-03-27

    Victims portrayed in sexual abuse images may be resistant to participate in research because of embarrassment or shame due to the sensitive nature and potential permanency of images. No studies we are aware of explore reactions to participating in research after this type of crime. Telephone interviews were conducted with convenience samples of parents (n= 46) and adolescents who were victims of child sexual abuse (n= 11; some of whom were portrayed in sexual abuse images), and online surveys were completed by adult survivors depicted in abuse images (N= 133). The first lesson was that few agencies tracked this type of crime. This lack of tracking raises the question as to what types of data should be collected and tracked as part of an investigation. The second lesson was that few victims at the two participating agencies had been portrayed in sexual abuse images (4%-5%). The third lesson was that once possible cases were identified, we found relatively high percentages of consent to contact and interview completions. This implies that researchers and service providers should not be hesitant about conducting research after an investigation of child sexual abuse. The fourth lesson was that the vast majority of participants reported not being upset by the questions. We hope that the data presented here will encourage agencies to reconsider the types of data being tracked and will encourage researchers to conduct in-depth research with populations that are often difficult to reach to continue improving the professional response to child victimization. © The Author(s) 2016.

  8. Electrical impedance tomography-based sensing skin for quantitative imaging of damage in concrete

    International Nuclear Information System (INIS)

    Hallaji, Milad; Pour-Ghaz, Mohammad; Seppänen, Aku

    2014-01-01

    This paper outlines the development of a large-area sensing skin for damage detection in concrete structures. The developed sensing skin consists of a thin layer of electrically conductive copper paint that is applied to the surface of the concrete. Cracking of the concrete substrate results in the rupture of the sensing skin, decreasing its electrical conductivity locally. The decrease in conductivity is detected with electrical impedance tomography (EIT) imaging. In previous works, electrically based sensing skins have provided only qualitative information on the damage on the substrate surface. In this paper, we study whether quantitative imaging of the damage is possible. We utilize application-specific models and computational methods in the image reconstruction, including a total variation (TV) prior model for the damage and an approximate correction of the modeling errors caused by the inhomogeneity of the painted sensing skin. The developed damage detection method is tested experimentally by applying the sensing skin to polymeric substrates and a reinforced concrete beam under four-point bending. In all test cases, the EIT-based sensing skin provides quantitative information on cracks and/or other damages on the substrate surface: featuring a very low conductivity in the damage locations, and a reliable indication of the lengths and shapes of the cracks. The results strongly support the applicability of the painted EIT-based sensing skin for damage detection in reinforced concrete elements and other substrates. (paper)

  9. Content Based Medical Image Retrieval for Histopathological, CT and MRI Images

    Directory of Open Access Journals (Sweden)

    Swarnambiga AYYACHAMY

    2013-09-01

    A content-based approach is followed for medical images. The purpose of this study is to assess the stability of these methods for medical image retrieval. The methods used in color-based retrieval for histopathological images are the color co-occurrence matrix (CCM) and histogram with meta features. For texture-based retrieval, the gray-level co-occurrence matrix (GLCM) and local binary patterns (LBP) were used. For shape-based retrieval, Canny edge detection and Otsu's method with a multivariable threshold were used. Texture- and shape-based retrieval were implemented using MRI (magnetic resonance) images. The most remarkable characteristic of the article is its content-based approach for each medical imaging modality. Our efforts were focused on the initial visual search. In our experiments, histogram with meta features in color-based retrieval for histopathological images shows a precision of 60% and recall of 30%, GLCM in texture-based retrieval for MRI images shows a precision of 70% and recall of 20%, and shape-based retrieval for MRI images shows a precision of 50% and recall of 25%. The retrieval results show that this simple approach is successful.
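The precision and recall figures quoted in this record follow the standard retrieval definitions, which can be sketched directly:

```python
def precision_recall(retrieved, relevant):
    """Precision = |retrieved ∩ relevant| / |retrieved|;
    recall    = |retrieved ∩ relevant| / |relevant|."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    return hits / len(retrieved), hits / len(relevant)

# Toy query: 10 images retrieved, 6 of them relevant, out of 20 relevant
# images in the whole collection (IDs here are arbitrary).
retrieved = range(10)
relevant = list(range(6)) + list(range(100, 114))
p, r = precision_recall(retrieved, relevant)   # 0.6 and 0.3, i.e. 60% / 30%
```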

  10. Detail Enhancement for Infrared Images Based on Propagated Image Filter

    Directory of Open Access Journals (Sweden)

    Yishu Peng

    2016-01-01

    For displaying high-dynamic-range images acquired by thermal camera systems, 14-bit raw infrared data must be mapped to 8-bit gray values. This paper presents a new method for detail enhancement of infrared images that displays the image with satisfactory contrast and brightness, rich detail information, and no artifacts introduced by the processing. We first adopt a propagated image filter to smooth the input image and separate it into a base layer and a detail layer. Then, we refine the base layer by using modified histogram projection for compression. Meanwhile, the adaptive weights derived from the layer decomposition are used as strict gain control for the detail layer. The final display result is obtained by recombining the two modified layers. Experimental results on both cooled and uncooled infrared data verify that the proposed method outperforms methods based on log-power histogram modification and bilateral-filter-based detail enhancement in both detail enhancement and visual effect.
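The base/detail decomposition pipeline can be sketched with a simple box filter standing in for the propagated image filter (a rough illustration of the layer split and recombination, not the paper's filter or its histogram projection):

```python
import numpy as np

def box_blur(img, k=3):
    """Box filter as a crude stand-in for the propagated image filter."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def enhance(img, gain=1.5):
    """Split into base + detail layers, compress the base into the 8-bit
    display range, boost the detail layer, and recombine."""
    base = box_blur(img)
    detail = img - base                      # img == base + detail exactly
    span = base.max() - base.min() + 1e-9
    base_8bit = (base - base.min()) / span * 255.0
    return base_8bit + gain * detail

# Toy 14-bit raw data (values in [0, 16383]).
raw14 = np.clip(np.random.default_rng(1).normal(8000, 500, (32, 32)), 0, 16383)
out = enhance(raw14)
```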

  11. Star tracking method based on multiexposure imaging for intensified star trackers.

    Science.gov (United States)

    Yu, Wenbo; Jiang, Jie; Zhang, Guangjun

    2017-07-20

    The requirements for the dynamic performance of star trackers are rapidly increasing with the development of space exploration technologies. However, insufficient knowledge of the angular acceleration has largely decreased the performance of the existing star tracking methods, and star trackers may even fail to track under highly dynamic conditions. This study proposes a star tracking method based on multiexposure imaging for intensified star trackers. The accurate estimation model of the complete motion parameters, including the angular velocity and angular acceleration, is established according to the working characteristic of multiexposure imaging. The estimation of the complete motion parameters is utilized to generate the predictive star image accurately. Therefore, the correct matching and tracking between stars in the real and predictive star images can be reliably accomplished under highly dynamic conditions. Simulations with specific dynamic conditions are conducted to verify the feasibility and effectiveness of the proposed method. Experiments with real starry night sky observation are also conducted for further verification. Simulations and experiments demonstrate that the proposed method is effective and shows excellent performance under highly dynamic conditions.
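The motion-parameter estimation at the heart of the method amounts to second-order prediction of the attitude: with both angular velocity and angular acceleration known, the predicted star position follows a constant-acceleration model. A minimal scalar sketch (hypothetical units; the real method works on full attitude states):

```python
def predict_angle(theta, omega, alpha, dt):
    """Second-order prediction of an attitude angle after time dt, given
    angular velocity omega and angular acceleration alpha:
    theta + omega*dt + 0.5*alpha*dt^2."""
    return theta + omega * dt + 0.5 * alpha * dt * dt

# Ignoring the angular acceleration under-predicts the motion, which is
# why tracking degrades under highly dynamic conditions.
with_accel = predict_angle(0.0, 2.0, 4.0, 0.5)     # full motion model
without_accel = predict_angle(0.0, 2.0, 0.0, 0.5)  # velocity-only model
```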

  12. Comparison of classification algorithms for various methods of preprocessing radar images of the MSTAR base

    Science.gov (United States)

    Borodinov, A. A.; Myasnikov, V. V.

    2018-04-01

    The present work compares the accuracy of well-known classification algorithms in the task of recognizing local objects in radar images under various image preprocessing methods. Preprocessing involves speckle-noise filtering and normalization of the object orientation in the image, either by the method of image moments or by a method based on the Hough transform. The following classification algorithms are compared: decision tree, support vector machine, AdaBoost, and random forest. Principal component analysis is used to reduce the dimensionality. The research is carried out on objects from the MSTAR radar image database. The paper presents the results of the conducted studies.

  13. Dictionary Based Image Segmentation

    DEFF Research Database (Denmark)

    Dahl, Anders Bjorholm; Dahl, Vedrana Andersen

    2015-01-01

    We propose a method for weakly supervised segmentation of natural images, which may contain both textured and non-textured regions. Our texture representation is based on a dictionary of image patches. To divide an image into separate regions with similar texture we use an implicit level sets...

  14. Macro-SICM: A Scanning Ion Conductance Microscope for Large-Range Imaging.

    Science.gov (United States)

    Schierbaum, Nicolas; Hack, Martin; Betz, Oliver; Schäffer, Tilman E

    2018-04-17

    The scanning ion conductance microscope (SICM) is a versatile, high-resolution imaging technique that uses an electrolyte-filled nanopipet as a probe. Its noncontact imaging principle makes the SICM uniquely suited for the investigation of soft and delicate surface structures in a liquid environment. The SICM has found an ever-increasing number of applications in chemistry, physics, and biology. However, a drawback of conventional SICMs is their relatively small scan range (typically 100 μm × 100 μm in the lateral and 10 μm in the vertical direction). We have developed a Macro-SICM with an exceedingly large scan range of 25 mm × 25 mm in the lateral and 0.25 mm in the vertical direction. We demonstrate the high versatility of the Macro-SICM by imaging at different length scales: from centimeters (fingerprint, coin) to millimeters (bovine tongue tissue, insect wing) to micrometers (cellular extensions). We applied the Macro-SICM to the study of collective cell migration in epithelial wound healing.

  15. SALIENCY BASED SEGMENTATION OF SATELLITE IMAGES

    Directory of Open Access Journals (Sweden)

    A. Sharma

    2015-03-01

    Saliency models the way humans perceive an image, and saliency-based segmentation can therefore aid psychovisual image interpretation. With this in view, several saliency models are used together with a segmentation algorithm, and only the salient segments of an image are extracted. The work is carried out for terrestrial images as well as for satellite images. The methodology extracts those segments of the segmented image whose saliency value is greater than or equal to a threshold. Salient and non-salient regions of the image become foreground and background, respectively, and thus the image is separated. For this work, a dataset of terrestrial images and WorldView-2 satellite images (sample data) is used. Results show that saliency models which work well for terrestrial images are not good enough for satellite images in terms of foreground and background separation: in terrestrial images the separation is based on salient objects visible in the image, whereas in satellite images it is based on salient areas rather than salient objects.
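The segment-selection rule described above (keep segments whose saliency meets the threshold as foreground) can be sketched on a toy label image (hypothetical names and data):

```python
import numpy as np

def salient_foreground(segments, saliency, threshold):
    """Mark a segment as foreground when its mean saliency is greater
    than or equal to the threshold; everything else is background."""
    foreground = np.zeros(segments.shape, dtype=bool)
    for label in np.unique(segments):
        mask = segments == label
        if saliency[mask].mean() >= threshold:
            foreground |= mask
    return foreground

# Four 2x2 segments; segments 0 and 3 are salient, 1 and 2 are not.
segments = np.array([[0, 0, 1, 1],
                     [0, 0, 1, 1],
                     [2, 2, 3, 3],
                     [2, 2, 3, 3]])
saliency = np.array([[0.9, 0.8, 0.2, 0.1],
                     [0.7, 0.9, 0.1, 0.2],
                     [0.3, 0.2, 0.8, 0.9],
                     [0.1, 0.2, 0.9, 0.7]])
fg = salient_foreground(segments, saliency, threshold=0.5)
```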

  16. Multi-frequency time-difference complex conductivity imaging of canine and human lungs using the KHU Mark1 EIT system

    International Nuclear Information System (INIS)

    Kuen, Jihyeon; Woo, Eung Je; Seo, Jin Keun

    2009-01-01

    We evaluated the performance of the recently developed electrical impedance tomography (EIT) system KHU Mark1 through time-difference imaging experiments of canine and human lungs. We derived a multi-frequency time-difference EIT (mftdEIT) image reconstruction algorithm based on the concept of the equivalent homogeneous complex conductivity. Imaging experiments were carried out at three different frequencies of 10, 50 and 100 kHz with three different postures of right lateral, sitting (or prone) and left lateral positions. For three normal canine subjects, we controlled the ventilation using a ventilator at three tidal volumes of 100, 150 and 200 ml. Three human subjects were asked to breathe spontaneously at a normal tidal volume. Real- and imaginary-part images of the canine and human lungs were reconstructed at three frequencies and three postures. Images showed different stages of the breathing cycle, and we could interpret them based on an understanding of the proposed mftdEIT image reconstruction algorithm. Time series of images were further analyzed using the functional EIT (fEIT) method. Images of the human subjects showed the gravity effect on air distribution in the two lungs. In the canine subjects, the morphological change seems to dominate the gravity effect. We could also observe that the two different types of ventilation should have affected the results. The KHU Mark1 EIT system is expected to provide reliable mftdEIT images of the human lungs. In terms of the image reconstruction algorithm, it would be worthwhile to include the effects of three-dimensional current flows inside the human thorax. We suggest clinical trials of the KHU Mark1 for pulmonary applications.
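A common linearized formulation of time-difference EIT reconstructs the conductivity change from the change in boundary voltages via one regularized least-squares step. A toy sketch of that generic step (not the KHU Mark1 algorithm; the sensitivity matrix here is random stand-in data):

```python
import numpy as np

def tdeit_step(J, dv, lam):
    """One linearized time-difference EIT step: solve
    (J^T J + lam I) dsigma = J^T dv for the conductivity change dsigma,
    given the boundary-voltage change dv between two time frames."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ dv)

rng = np.random.default_rng(2)
J = rng.standard_normal((40, 16))          # toy sensitivity (Jacobian) matrix
dsigma_true = np.zeros(16)
dsigma_true[5] = 1.0                       # one "lung" pixel changes with breathing
dv = J @ dsigma_true                       # simulated voltage difference
dsigma = tdeit_step(J, dv, lam=1e-8)       # recovered conductivity change
```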

  18. Automatic medical image annotation and keyword-based image retrieval using relevance feedback.

    Science.gov (United States)

    Ko, Byoung Chul; Lee, JiHyeon; Nam, Jae-Yeal

    2012-08-01

    This paper presents novel multiple-keyword annotation for medical images, keyword-based medical image retrieval, and a relevance feedback method for enhancing image retrieval performance. For semantic keyword annotation, this study proposes a novel medical image classification method combining local wavelet-based center-symmetric local binary patterns with random forests. For keyword-based image retrieval, our retrieval system uses a confidence score that is assigned to each annotated keyword by combining the probabilities of the random forests with a predefined body-relation graph. To overcome the limitations of keyword-based image retrieval, we combine our image retrieval system with a relevance feedback mechanism based on visual features and a pattern classifier. Compared with other annotation and relevance feedback algorithms, the proposed method shows both improved annotation performance and accurate retrieval results.

  19. QR code based noise-free optical encryption and decryption of a gray scale image

    Science.gov (United States)

    Jiao, Shuming; Zou, Wenbin; Li, Xia

    2017-03-01

    In optical encryption systems, speckle noise is one major challenge in obtaining high quality decrypted images. This problem can be addressed by employing a QR code based noise-free scheme. Previous works have been conducted for optically encrypting a few characters or a short expression employing QR codes. This paper proposes a practical scheme for optically encrypting and decrypting a gray-scale image based on QR codes for the first time. The proposed scheme is compatible with common QR code generators and readers. Numerical simulation results reveal the proposed method can encrypt and decrypt an input image correctly.

  20. Image inpainting based on stacked autoencoders

    International Nuclear Information System (INIS)

    Shcherbakov, O; Batishcheva, V

    2014-01-01

    Recently we proposed an algorithm for the problem of image inpainting (filling in occluded or damaged parts of images). That algorithm was based on a spectrum-entropy criterion and showed promising results despite using a hand-crafted representation of images. In this paper, we present a method for solving the image inpainting task based on learning an image representation. Some results are shown to illustrate the quality of image reconstruction.

  1. Evidence based medical imaging (EBMI)

    International Nuclear Information System (INIS)

    Smith, Tony

    2008-01-01

    Background: The evidence based paradigm was first described about a decade ago. Previous authors have described a framework for the application of evidence based medicine which can be readily adapted to medical imaging practice. Purpose: This paper promotes the application of the evidence based framework in both the justification of the choice of examination type and the optimisation of the imaging technique used. Methods: The framework includes five integrated steps: framing a concise clinical question; searching for evidence to answer that question; critically appraising the evidence; applying the evidence in clinical practice; and, evaluating the use of revised practices. Results: This paper illustrates the use of the evidence based framework in medical imaging (that is, evidence based medical imaging) using the examples of two clinically relevant case studies. In doing so, a range of information technology and other resources available to medical imaging practitioners are identified with the intention of encouraging the application of the evidence based paradigm in radiography and radiology. Conclusion: There is a perceived need for radiographers and radiologists to make greater use of valid research evidence from the literature to inform their clinical practice and thus provide better quality services

  2. Hierarchical graph-based segmentation for extracting road networks from high-resolution satellite images

    Science.gov (United States)

    Alshehhi, Rasha; Marpu, Prashanth Reddy

    2017-04-01

    Extraction of road networks in urban areas from remotely sensed imagery plays an important role in many urban applications (e.g. road navigation, geometric correction of urban remote sensing images, updating geographic information systems, etc.). It is normally difficult to accurately differentiate road from its background due to the complex geometry of the buildings and the acquisition geometry of the sensor. In this paper, we present a new method for extracting roads from high-resolution imagery based on hierarchical graph-based image segmentation. The proposed method consists of: 1. Extracting features (e.g., using Gabor and morphological filtering) to enhance the contrast between road and non-road pixels, 2. Graph-based segmentation consisting of (i) Constructing a graph representation of the image based on initial segmentation and (ii) Hierarchical merging and splitting of image segments based on color and shape features, and 3. Post-processing to remove irregularities in the extracted road segments. Experiments are conducted on three challenging datasets of high-resolution images to demonstrate the proposed method and compare with other similar approaches. The results demonstrate the validity and superior performance of the proposed method for road extraction in urban areas.
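Step 2(ii) above, hierarchical merging of segments by feature similarity, can be sketched with a union-find structure. This is a deliberately simplified one-dimensional "color" version with hypothetical data, not the paper's algorithm:

```python
def merge_segments(means, adjacency, tol):
    """Greedy hierarchical merging: repeatedly fuse adjacent segments
    whose representative (mean) values differ by at most tol, using
    union-find with path compression to track merged regions."""
    parent = list(range(len(means)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    changed = True
    while changed:
        changed = False
        for a, b in adjacency:
            ra, rb = find(a), find(b)
            if ra != rb and abs(means[ra] - means[rb]) <= tol:
                parent[rb] = ra
                means[ra] = (means[ra] + means[rb]) / 2.0
                changed = True
    return [find(i) for i in range(len(means))]

# Four segments with mean gray values; 0-1 and 2-3 are similar enough to
# merge, while the boundary between 1 and 2 (e.g. road vs background) is not.
labels = merge_segments([10.0, 12.0, 60.0, 63.0], [(0, 1), (1, 2), (2, 3)], tol=5.0)
```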

  3. Image based Monument Recognition using Graph based Visual Saliency

    DEFF Research Database (Denmark)

    Kalliatakis, Grigorios; Triantafyllidis, Georgios

    2013-01-01

    This article presents an image-based application aiming at simple image classification of well-known monuments in the area of Heraklion, Crete, Greece. This classification takes place by utilizing Graph Based Visual Saliency (GBVS) and employing Scale Invariant Feature Transform (SIFT) or Speeded Up Robust Features (SURF). Prior to classification, the images have been processed according to the Graph Based Visual Saliency model in order to keep either SIFT or SURF features corresponding to the actual monuments while the background "noise" is minimized. The application is then able to classify these images, helping the user to better...

  4. Content-Based Image Retrieval Based on Electromagnetism-Like Mechanism

    Directory of Open Access Journals (Sweden)

    Hamid A. Jalab

    2013-01-01

    Recently, many researchers in the field of automatic content-based image retrieval have devoted considerable effort to methods for retrieving the images most relevant to a query image. This paper presents a novel algorithm for increasing precision in content-based image retrieval based on the electromagnetism optimization technique. Electromagnetism optimization is a nature-inspired technique that follows a collective attraction-repulsion mechanism by considering each image as an electrical charge. The algorithm is composed of two phases: fitness function measurement and the electromagnetism optimization technique. It is evaluated on a database of 8,000 images spread across 80 classes with 100 images in each class. Eight thousand queries are fired at the database, and the overall average precision is computed. Experimental results of the proposed approach show a significant improvement in retrieval performance with regard to precision.

  5. Fast single image dehazing based on image fusion

    Science.gov (United States)

    Liu, Haibo; Yang, Jie; Wu, Zhengping; Zhang, Qingnian

    2015-01-01

    Images captured in foggy weather conditions often fade the colors and reduce the contrast of the observed objects. An efficient image fusion method is proposed to remove haze from a single input image. First, the initial medium transmission is estimated based on the dark channel prior. Second, the method adopts an assumption that the degradation level affected by haze of each region is the same, which is similar to the Retinex theory, and uses a simple Gaussian filter to get the coarse medium transmission. Then, pixel-level fusion is achieved between the initial medium transmission and coarse medium transmission. The proposed method can recover a high-quality haze-free image based on the physical model, and the complexity of the proposed method is only a linear function of the number of input image pixels. Experimental results demonstrate that the proposed method can allow a very fast implementation and achieve better restoration for visibility and color fidelity compared to some state-of-the-art methods.
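The dark-channel-based transmission estimate mentioned in this record follows the standard haze model I = J·t + A·(1 - t). A minimal sketch, assuming a normalized image and known atmospheric light A (generic prior, not this paper's fusion scheme):

```python
import numpy as np

def dark_channel(img, patch=3):
    """Minimum over color channels followed by a local patch minimum."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.full(mins.shape, np.inf)
    for dy in range(patch):
        for dx in range(patch):
            out = np.minimum(out, padded[dy:dy + mins.shape[0],
                                         dx:dx + mins.shape[1]])
    return out

def dehaze(img, A=1.0, omega=0.95, t0=0.1):
    """Invert the haze model I = J*t + A*(1 - t), with transmission t
    estimated from the dark channel prior (t0 avoids division blow-up)."""
    t = np.maximum(1.0 - omega * dark_channel(img / A), t0)
    return (img - A) / t[..., None] + A

# Synthetic scene whose true dark channel is zero, hazed with t = 0.5.
rng = np.random.default_rng(3)
scene = rng.random((8, 8, 3))
scene[..., 0] = 0.0
hazy = scene * 0.5 + 0.5
recovered = dehaze(hazy, omega=1.0)   # exact here because omega = 1
```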

  6. Color-Based Image Retrieval from High-Similarity Image Databases

    DEFF Research Database (Denmark)

    Hansen, Michael Adsetts Edberg; Carstensen, Jens Michael

    2003-01-01

    Many image classification problems can fruitfully be thought of as image retrieval in a "high similarity image database" (HSID) characterized by being tuned towards a specific application and having a high degree of visual similarity between entries that should be distinguished. We introduce a method for HSID retrieval using a similarity measure based on a linear combination of Jeffreys-Matusita (JM) distances between distributions of color (and color derivatives) estimated from a set of automatically extracted image regions. The weight coefficients are estimated based on optimal retrieval performance. Experimental results on the difficult task of visually identifying clones of fungal colonies grown in a petri dish and categorization of pelts show a high retrieval accuracy of the method when combined with standardized sample preparation and image acquisition.
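The JM distance between two color distributions, and its weighted linear combination, can be sketched in histogram form (the weights here are placeholders, not the learned coefficients from the paper):

```python
import numpy as np

def jm_distance(p, q):
    """Jeffreys-Matusita distance between two discrete distributions:
    sqrt(sum_i (sqrt(p_i) - sqrt(q_i))^2) = sqrt(2 - 2*BC(p, q)),
    where BC is the Bhattacharyya coefficient."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

def combined_distance(jm_dists, weights):
    """Linear combination of per-feature JM distances; the weights stand
    in for the coefficients estimated from retrieval performance."""
    return float(np.dot(weights, jm_dists))

h_same = jm_distance([4, 0, 0], [4, 0, 0])      # identical histograms
h_disjoint = jm_distance([4, 0, 0], [0, 0, 4])  # disjoint support (maximum)
```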

  7. ZnO based transparent conductive oxide films with controlled type of conduction

    Energy Technology Data Exchange (ETDEWEB)

    Zaharescu, M., E-mail: mzaharescu@icf.ro [Institute of Physical Chemistry “Ilie Murgulescu”, Romanian Academy, 202 Splaiul Independentei, 060021 Bucharest (Romania); Mihaiu, S., E-mail: smihaiu@icf.ro [Institute of Physical Chemistry “Ilie Murgulescu”, Romanian Academy, 202 Splaiul Independentei, 060021 Bucharest (Romania); Toader, A. [Institute of Physical Chemistry “Ilie Murgulescu”, Romanian Academy, 202 Splaiul Independentei, 060021 Bucharest (Romania); Atkinson, I., E-mail: irinaatkinson@yahoo.com [Institute of Physical Chemistry “Ilie Murgulescu”, Romanian Academy, 202 Splaiul Independentei, 060021 Bucharest (Romania); Calderon-Moreno, J.; Anastasescu, M.; Nicolescu, M.; Duta, M.; Gartner, M. [Institute of Physical Chemistry “Ilie Murgulescu”, Romanian Academy, 202 Splaiul Independentei, 060021 Bucharest (Romania); Vojisavljevic, K.; Malic, B. [Institute Jožef Stefan, Ljubljana (Slovenia); Ivanov, V.A.; Zaretskaya, E.P. [State Scientific and Production Association “Scientific-Practical Materials Research Center of the National Academy of Science Belarus, P. Brovska str.19, 220072, Minsk (Belarus)

    2014-11-28

    Transparent conductive oxide films with a controlled type of conduction are of great importance, and their preparation is intensively studied. In our work, such films based on doped ZnO were prepared in order to achieve a controlled type of conduction and a high concentration of charge carriers. The sol–gel method was used for film preparation, and several dopants were tested (Sn, Li, Ni). Multilayer deposition was performed on several substrates: SiO₂/Si wafers, silica-soda-lime and/or silica glasses. The structural and morphological characterization of the obtained films was done by scanning electron microscopy, X-ray diffraction, X-ray fluorescence, X-ray photoelectron spectroscopy and atomic force microscopy, while spectroscopic ellipsometry and transmittance measurements were done for the determination of optical properties. The selected samples with the best structural, morphological and optical properties were subjected to electrical measurements (Hall and Seebeck effect). In all studied cases, samples with good adherence and homogeneous morphology as well as a monophasic wurtzite-type structure were obtained. The optical constants (refractive index and extinction coefficient) were calculated from spectroscopic ellipsometry data using the Cauchy model. Films with n- or p-type conduction were obtained depending on the composition, number of depositions and thermal treatment temperature. - Highlights: • Transparent conductive ZnO based thin films were prepared by the sol–gel method. • Controlled type of conduction is obtained in (Sn, Li) doped and Li-Ni co-doped ZnO films. • Hall and Seebeck measurements proved the p-type conductivity for Li-Ni co-doped ZnO films. • The p-type conductivity was maintained even after 4 months of storage. • Influence of dopant- and substrate-type on the ZnO films properties was established.
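As context for the Hall measurements used above to establish n- or p-type conduction: in the single-carrier model, the sign of the Hall coefficient gives the carrier type and its magnitude the carrier concentration (a generic textbook relation with a hypothetical measured value, not the authors' data):

```python
def carrier_from_hall(hall_coefficient):
    """Single-carrier model: the Hall coefficient R_H = 1/(n*q) is
    negative for n-type (electron) and positive for p-type (hole)
    conduction; carrier concentration n = 1 / (|R_H| * e).
    R_H in m^3/C, n in m^-3."""
    e = 1.602176634e-19  # elementary charge in coulombs
    carrier_type = "n" if hall_coefficient < 0 else "p"
    concentration = 1.0 / (abs(hall_coefficient) * e)
    return carrier_type, concentration

ctype, n = carrier_from_hall(-6.25e-7)  # hypothetical measured R_H
```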

  8. IDIOS: An innovative index for evaluating dental imaging-based osteoporosis screening indices

    Energy Technology Data Exchange (ETDEWEB)

    Barngkgei, Imad; Al Haffar, Iyad; Khattab, Razan [Faculty of Dentistry, Damascus University, Damascus (Syrian Arab Republic); Halboub, Esam; Almashraqi, Abeer Abdulkareem [Dept. of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, Jazan University, Jazan (Saudi Arabia)

    2016-09-15

    The goal of this study was to develop a new index as an objective reference for evaluating current and newly developed indices used for osteoporosis screening based on dental images. Its name, IDIOS, stands for Index of Dental-imaging Indices of Osteoporosis Screening. A comprehensive PubMed search was conducted to retrieve studies on dental imaging-based indices for osteoporosis screening. The results of the eligible studies, along with other relevant criteria, were used to develop IDIOS, which has scores ranging from 0 (0%) to 15 (100%). The indices presented in the studies we included were then evaluated using IDIOS. The 104 studies that were included utilized 24, 4, and 9 indices derived from panoramic, periapical, and computed tomographic/cone-beam computed tomographic techniques, respectively. The IDIOS scores for these indices ranged from 0 (0%) to 11.75 (78.32%). IDIOS is a valuable reference index that facilitates the evaluation of other dental imaging-based osteoporosis screening indices. Furthermore, IDIOS can be utilized to evaluate the accuracy of newly developed indices.

  9. IDIOS: An innovative index for evaluating dental imaging-based osteoporosis screening indices.

    Science.gov (United States)

    Barngkgei, Imad; Halboub, Esam; Almashraqi, Abeer Abdulkareem; Khattab, Razan; Al Haffar, Iyad

    2016-09-01

    The goal of this study was to develop a new index as an objective reference for evaluating current and newly developed indices used for osteoporosis screening based on dental images. Its name, IDIOS, stands for Index of Dental-imaging Indices of Osteoporosis Screening. A comprehensive PubMed search was conducted to retrieve studies on dental imaging-based indices for osteoporosis screening. The results of the eligible studies, along with other relevant criteria, were used to develop IDIOS, which has scores ranging from 0 (0%) to 15 (100%). The indices presented in the studies we included were then evaluated using IDIOS. The 104 studies that were included utilized 24, 4, and 9 indices derived from panoramic, periapical, and computed tomographic/cone-beam computed tomographic techniques, respectively. The IDIOS scores for these indices ranged from 0 (0%) to 11.75 (78.32%). IDIOS is a valuable reference index that facilitates the evaluation of other dental imaging-based osteoporosis screening indices. Furthermore, IDIOS can be utilized to evaluate the accuracy of newly developed indices.

  10. IDIOS: An innovative index for evaluating dental imaging-based osteoporosis screening indices

    International Nuclear Information System (INIS)

    Barngkgei, Imad; Al Haffar, Iyad; Khattab, Razan; Halboub, Esam; Almashraqi, Abeer Abdulkareem

    2016-01-01

    The goal of this study was to develop a new index as an objective reference for evaluating current and newly developed indices used for osteoporosis screening based on dental images. Its name, IDIOS, stands for Index of Dental-imaging Indices of Osteoporosis Screening. A comprehensive PubMed search was conducted to retrieve studies on dental imaging-based indices for osteoporosis screening. The results of the eligible studies, along with other relevant criteria, were used to develop IDIOS, which has scores ranging from 0 (0%) to 15 (100%). The indices presented in the studies we included were then evaluated using IDIOS. The 104 studies that were included utilized 24, 4, and 9 indices derived from panoramic, periapical, and computed tomographic/cone-beam computed tomographic techniques, respectively. The IDIOS scores for these indices ranged from 0 (0%) to 11.75 (78.32%). IDIOS is a valuable reference index that facilitates the evaluation of other dental imaging-based osteoporosis screening indices. Furthermore, IDIOS can be utilized to evaluate the accuracy of newly developed indices.

  11. Imaging in electrically conductive porous media without frequency encoding.

    Science.gov (United States)

    Lehmann-Horn, J A; Walbrecker, J O

    2012-07-01

    Understanding multi-phase fluid flow and transport processes under various pressure, temperature, and salinity conditions is a key feature in many remote monitoring applications, such as long-term storage of carbon dioxide (CO2) or nuclear waste in geological formations. We propose a low-field NMR tomographic method to non-invasively image the water-content distribution in electrically conductive formations in relatively large-scale experiments (∼1 m3 sample volumes). Operating in the weak magnetic field of Earth entails low Larmor frequencies at which electromagnetic fields can penetrate electrically conductive material. The low signal strengths associated with NMR in Earth's field are enhanced by pre-polarization before signal recording. To localize the origin of the NMR signal in the sample region we do not employ magnetic field gradients, as is done in conventional NMR imaging, because they can be difficult to control in the large sample volumes that we are concerned with, and may be biased by magnetic materials in the sample. Instead, we utilize the spatially dependent inhomogeneity of fields generated by surface coils that are installed around the sample volume. This relatively simple setup makes the instrument inexpensive and mobile (it can be potentially installed in remote locations outside of a laboratory), while allowing spatial resolution of the order of 10 cm. We demonstrate the general feasibility of our approach in a simulated CO2 injection experiment, where we locate and quantify the drop in water content following gas injection into a water-saturated cylindrical sample of 0.45 m radius and 0.9 m height. Our setup comprises four surface coils and an array consisting of three volume coils surrounding the sample. The proposed tomographic NMR methodology provides a more direct estimate of fluid content and properties than can be achieved with acoustic or electromagnetic methods alone. Therefore, we expect that our proposed method is relevant

  12. Diffusion tensor imaging and diffusion tensor imaging-fibre tractography depict the mechanisms of Broca-like and Wernicke-like conduction aphasia.

    Science.gov (United States)

    Song, Xinjie; Dornbos, David; Lai, Zongli; Zhang, Yumei; Li, Tieshan; Chen, Hongyan; Yang, Zhonghua

    2011-06-01

    Conduction aphasia is usually considered a result of damage to the arcuate fasciculus, which is subjacent to the parietal portion of the supra-marginal gyrus and the upper part of the insula. It is important to stress that many features of conduction aphasia relate to a cortical deficit, more than a pure disconnection mechanism. In this study, we explore the mechanism of Broca-like and Wernicke-like conduction aphasia by using diffusion tensor imaging (DTI) and diffusion tensor imaging-fibre tractography (DT-FT). We enrolled five Broca-like conduction aphasia cases, five Wernicke-like conduction aphasia cases and 10 healthy volunteers residing in Beijing and speaking Mandarin. All were right-handed. We analyzed the arcuate fasciculus, Broca's area and Wernicke's area by DTI and measured fractional anisotropy (FA). The results of the left and right hemispheres were compared in both conduction aphasia cases and volunteers. Then the results of the conduction aphasia cases were compared with those of the volunteers. The fibre construction of Broca's and Wernicke's areas was also compared by DT-FT. The FA occupied by the identified connective pathways (Broca's area, Wernicke's area and the arcuate fasciculus) in the left hemisphere was larger than that in the right hemisphere in the control group (P < 0.05). In Broca-like conduction aphasia cases, the FA of the left Broca's area was smaller than that of the right mirror side (P < 0.05). In Wernicke-like conduction aphasia patients, the FA of the left Wernicke's area was smaller than that of the right mirror side (P < 0.05). Conduction aphasia results from not only arcuate fasciculus destruction, but also from disruption of the associated cortical areas. Along different segments of the arcuate fasciculus, the characteristics of the language disorders of conduction aphasia were different. A lesion involving Broca's area and the anterior segments of the arcuate fasciculus would lead to Broca-like conduction aphasia, whereas a lesion involving Wernicke's area and the posterior segments of the arcuate fasciculus would lead to Wernicke-like conduction aphasia.

  13. Deep Learning MR Imaging-based Attenuation Correction for PET/MR Imaging.

    Science.gov (United States)

    Liu, Fang; Jang, Hyungseok; Kijowski, Richard; Bradshaw, Tyler; McMillan, Alan B

    2018-02-01

    Purpose To develop and evaluate the feasibility of deep learning approaches for magnetic resonance (MR) imaging-based attenuation correction (AC) (termed deep MRAC) in brain positron emission tomography (PET)/MR imaging. Materials and Methods A PET/MR imaging AC pipeline was built by using a deep learning approach to generate pseudo computed tomographic (CT) scans from MR images. A deep convolutional auto-encoder network was trained to identify air, bone, and soft tissue in volumetric head MR images coregistered to CT data for training. A set of 30 retrospective three-dimensional T1-weighted head images was used to train the model, which was then evaluated in 10 patients by comparing the generated pseudo CT scan to an acquired CT scan. A prospective study was carried out for utilizing simultaneous PET/MR imaging for five subjects by using the proposed approach. Analysis of covariance and paired-sample t tests were used for statistical analysis to compare PET reconstruction error with deep MRAC and two existing MR imaging-based AC approaches with CT-based AC. Results Deep MRAC provides an accurate pseudo CT scan with a mean Dice coefficient of 0.971 ± 0.005 for air, 0.936 ± 0.011 for soft tissue, and 0.803 ± 0.021 for bone. Furthermore, deep MRAC provides good PET results, with average errors of less than 1% in most brain regions. Significantly lower PET reconstruction errors were realized with deep MRAC (-0.7% ± 1.1) compared with Dixon-based soft-tissue and air segmentation (-5.8% ± 3.1) and anatomic CT-based template registration (-4.8% ± 2.2). Conclusion The authors developed an automated approach that allows generation of discrete-valued pseudo CT scans (soft tissue, bone, and air) from a single high-spatial-resolution diagnostic-quality three-dimensional MR image and evaluated it in brain PET/MR imaging. This deep learning approach for MR imaging-based AC provided reduced PET reconstruction error relative to a CT-based standard within the brain compared with the two existing MR imaging-based AC approaches.
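    The Dice coefficients reported above compare each tissue class of the pseudo CT against the acquired CT as binary masks, Dice = 2|A∩B| / (|A| + |B|). A minimal sketch on toy masks (illustrative arrays, not the study's data):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy "bone" masks for a generated pseudo CT and the acquired CT.
pseudo_ct_bone = np.array([[0, 1, 1], [0, 1, 0]])
true_ct_bone = np.array([[0, 1, 0], [0, 1, 1]])
print(dice(pseudo_ct_bone, true_ct_bone))  # 2*2/(3+3) ≈ 0.667
```

In the study this overlap would be computed per class (air, soft tissue, bone) over the whole head volume.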

  14. Understanding the conductive channel evolution in Na:WO3-x-based planar devices

    Science.gov (United States)

    Shang, Dashan; Li, Peining; Wang, Tao; Carria, Egidio; Sun, Jirong; Shen, Baogen; Taubner, Thomas; Valov, Ilia; Waser, Rainer; Wuttig, Matthias

    2015-03-01

    An ion migration process in a solid electrolyte is important for ion-based functional devices, such as fuel cells, batteries, electrochromics, gas sensors, and resistive switching systems. In this study, a planar sandwich structure is prepared by depositing tungsten oxide (WO3-x) films on a soda-lime glass substrate, from which Na+ diffuses into the WO3-x films during the deposition. The entire process of Na+ migration driven by an alternating electric field is visualized in the Na-doped WO3-x films in the form of conductive channel by in situ optical imaging combined with infrared spectroscopy and near-field imaging techniques. A reversible change of geometry between a parabolic and a bar channel is observed with the resistance change of the devices. The peculiar channel evolution is interpreted by a thermal-stress-induced mechanical deformation of the films and an asymmetric Na+ mobility between the parabolic and the bar channels. These results exemplify a typical ion migration process driven by an alternating electric field in a solid electrolyte with a low ion mobility and are expected to be beneficial to improve the controllability of the ion migration in ion-based functional devices, such as resistive switching devices.

  15. Development and Performance Evaluation of Image-Based Robotic Waxing System for Detailing Automobiles.

    Science.gov (United States)

    Lin, Chi-Ying; Hsu, Bing-Cheng

    2018-05-14

    Waxing is an important aspect of automobile detailing, aimed at protecting the finish of the car and preventing rust. At present, this delicate work is conducted manually due to the need for iterative adjustments to achieve acceptable quality. This paper presents a robotic waxing system in which surface images are used to evaluate the quality of the finish. An RGB-D camera is used to build a point cloud that details the sheet metal components to enable path planning for a robot manipulator. The robot is equipped with a multi-axis force sensor to measure and control the forces involved in the application and buffing of wax. Images of sheet metal components that were waxed by experienced car detailers were analyzed using image processing algorithms. A Gaussian distribution function and its parameterized values were obtained from the images for use as a performance criterion in evaluating the quality of surfaces prepared by the robotic waxing system. Waxing force and dwell time were optimized using a mathematical model based on the image-based criterion used to measure waxing performance. Experimental results demonstrate the feasibility of the proposed robotic waxing system and image-based performance evaluation scheme.
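    The image-based criterion above fits a Gaussian to the intensity distribution of surfaces waxed by expert detailers and scores the robot's result against it. The scoring function below is a hypothetical stand-in, since the paper's exact parameterization is not given:

```python
import numpy as np

def gaussian_params(gray_image):
    """Fit a Gaussian to the pixel-intensity distribution of a surface image
    by estimating its mean and standard deviation."""
    px = np.asarray(gray_image, float).ravel()
    return px.mean(), px.std()

def waxing_score(candidate, reference_mu, reference_sigma):
    """Hypothetical quality score: deviation of the candidate surface's
    Gaussian parameters from the expert-waxed reference (smaller is better)."""
    mu, sigma = gaussian_params(candidate)
    return abs(mu - reference_mu) + abs(sigma - reference_sigma)

# Stand-in for an expert-waxed surface image (uniform gray level).
expert = np.full((16, 16), 128.0)
mu_ref, sigma_ref = gaussian_params(expert)
print(waxing_score(expert, mu_ref, sigma_ref))  # 0.0 for a perfect match
```

Waxing force and dwell time would then be tuned to minimize this score.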

  16. Image matching navigation based on fuzzy information

    Institute of Scientific and Technical Information of China (English)

    田玉龙; 吴伟仁; 田金文; 柳健

    2003-01-01

    In conventional image matching methods, the matching process is based mostly on image statistics. One aspect neglected by all these methods is that the images contain much fuzzy information. A new fuzzy matching algorithm for navigation, based on fuzzy similarity, is presented in this paper. Because fuzzy theory is well suited to describing the fuzzy information contained in images, an image matching method based on fuzzy similarity can be expected to perform well. Experimental results using the fuzzy-information-based matching algorithm demonstrate its reliability and practicability.
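    One common fuzzy similarity between two membership maps is the ratio of summed pixelwise minima to summed maxima. The paper does not give its exact measure, so the sketch below uses this stand-in to drive template matching:

```python
import numpy as np

def fuzzy_similarity(a, b):
    """Fuzzy similarity of two membership maps with values in [0, 1]:
    sum of pixelwise minima over sum of maxima (1.0 for identical maps).
    Illustrative stand-in for the paper's unspecified measure."""
    a = np.clip(np.asarray(a, float), 0.0, 1.0)
    b = np.clip(np.asarray(b, float), 0.0, 1.0)
    denom = np.maximum(a, b).sum()
    return np.minimum(a, b).sum() / denom if denom else 1.0

def best_match(image, template):
    """Exhaustive template matching: the offset with the highest similarity."""
    H, W = image.shape
    h, w = template.shape
    scores = {(r, c): fuzzy_similarity(image[r:r + h, c:c + w], template)
              for r in range(H - h + 1) for c in range(W - w + 1)}
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
scene = rng.random((12, 12)) * 0.2        # dim background
patch = np.full((3, 3), 0.9)              # bright landmark
scene[5:8, 4:7] = patch                   # embed it at offset (5, 4)
print(best_match(scene, patch))           # (5, 4)
```

A navigation system would match a stored landmark template against the live image this way to localize the vehicle.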

  17. A review of supervised object-based land-cover image classification

    Science.gov (United States)

    Ma, Lei; Li, Manchun; Ma, Xiaoxue; Cheng, Liang; Du, Peijun; Liu, Yongxue

    2017-08-01

    Object-based image classification for land-cover mapping purposes using remote-sensing imagery has attracted significant attention in recent years. Numerous studies conducted over the past decade have investigated a broad array of sensors, feature selection, classifiers, and other factors of interest. However, these research results have not yet been synthesized to provide coherent guidance on the effect of different supervised object-based land-cover classification processes. In this study, we first construct a database with 28 fields using qualitative and quantitative information extracted from 254 experimental cases described in 173 scientific papers. Second, the results of the meta-analysis are reported, including general characteristics of the studies (e.g., the geographic range of relevant institutes, preferred journals) and the relationships between factors of interest (e.g., spatial resolution and study area or optimal segmentation scale, accuracy and number of targeted classes), especially with respect to the classification accuracy of different sensors, segmentation scale, training set size, supervised classifiers, and land-cover types. Third, useful data on supervised object-based image classification are determined from the meta-analysis. For example, we find that supervised object-based classification is currently experiencing rapid advances, while development of the fuzzy technique is limited in the object-based framework. Furthermore, spatial resolution correlates with the optimal segmentation scale and study area, and Random Forest (RF) shows the best performance in object-based classification. The area-based accuracy assessment method can obtain stable classification performance, and indicates a strong correlation between accuracy and training set size, while the accuracy of the point-based method is likely to be unstable due to mixed objects. In addition, the overall accuracy benefits from higher spatial resolution images (e.g., unmanned aerial vehicle imagery).

  18. Magnetoacoustic Imaging of Electrical Conductivity of Biological Tissues at a Spatial Resolution Better than 2 mm

    OpenAIRE

    Hu, Gang; He, Bin

    2011-01-01

    Magnetoacoustic tomography with magnetic induction (MAT-MI) is an emerging approach for noninvasively imaging electrical impedance properties of biological tissues. The MAT-MI imaging system measures ultrasound waves generated by the Lorentz force, having been induced by magnetic stimulation, which is related to the electrical conductivity distribution in tissue samples. MAT-MI promises to provide fine spatial resolution for biological tissue imaging as compared to ultrasound resolution. In t...

  19. Conducting Web-based Surveys.

    OpenAIRE

    David J. Solomon

    2001-01-01

    Web-based surveying is becoming widely used in social science and educational research. The Web offers significant advantages over more traditional survey techniques; however, there are still serious methodological challenges with using this approach. Currently, coverage bias (the fact that significant numbers of people either lack access to the Internet or choose not to use it) is of most concern to researchers. Survey researchers also have much to learn concerning the most effective ways to conduct surveys via the Web.

  20. Voxel-based clustered imaging by multiparameter diffusion tensor images for glioma grading.

    Science.gov (United States)

    Inano, Rika; Oishi, Naoya; Kunieda, Takeharu; Arakawa, Yoshiki; Yamao, Yukihiro; Shibata, Sumiya; Kikuchi, Takayuki; Fukuyama, Hidenao; Miyamoto, Susumu

    2014-01-01

    Gliomas are the most common intra-axial primary brain tumour; therefore, predicting glioma grade would influence therapeutic strategies. Although several methods based on single or multiple parameters from diagnostic images exist, a definitive method for pre-operatively determining glioma grade remains unknown. We aimed to develop an unsupervised method using multiple parameters from pre-operative diffusion tensor images for obtaining a clustered image that could enable visual grading of gliomas. Fourteen patients with low-grade gliomas and 19 with high-grade gliomas underwent diffusion tensor imaging and three-dimensional T1-weighted magnetic resonance imaging before tumour resection. Seven features including diffusion-weighted imaging, fractional anisotropy, first eigenvalue, second eigenvalue, third eigenvalue, mean diffusivity and raw T2 signal with no diffusion weighting, were extracted as multiple parameters from diffusion tensor imaging. We developed a two-level clustering approach for a self-organizing map followed by the K-means algorithm to enable unsupervised clustering of a large number of input vectors with the seven features for the whole brain. The vectors were grouped by the self-organizing map as protoclusters, which were classified into the smaller number of clusters by K-means to make a voxel-based diffusion tensor-based clustered image. Furthermore, we also determined if the diffusion tensor-based clustered image was really helpful for predicting pre-operative glioma grade in a supervised manner. The ratio of each class in the diffusion tensor-based clustered images was calculated from the regions of interest manually traced on the diffusion tensor imaging space, and the common logarithmic ratio scales were calculated. We then applied support vector machine as a classifier for distinguishing between low- and high-grade gliomas. Consequently, the sensitivity, specificity, accuracy and area under the curve of receiver operating characteristic
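    The two-level scheme above (a self-organizing map whose units act as protoclusters, followed by K-means on the prototypes) can be sketched in plain NumPy. The grid size, learning schedule, and toy two-class 7-feature vectors below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(X, grid=(6, 6), iters=2000, lr0=0.5, sigma0=2.0):
    """Level 1: a tiny self-organizing map whose units act as protoclusters."""
    n_units = grid[0] * grid[1]
    W = X[rng.choice(len(X), n_units)].copy()        # init units from data
    gy, gx = np.divmod(np.arange(n_units), grid[1])  # unit positions on the map grid
    coords = np.column_stack([gy, gx]).astype(float)
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x) ** 2).sum(1))       # best-matching unit
        frac = t / iters
        lr = lr0 * (1.0 - frac)                      # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5          # shrinking neighbourhood
        d2 = ((coords - coords[bmu]) ** 2).sum(1)
        W += lr * np.exp(-d2 / (2 * sigma**2))[:, None] * (x - W)
    return W

def kmeans2(P, iters=50):
    """Level 2: plain k-means (k = 2 here) on the SOM prototypes."""
    C = P[[np.argmin(P.sum(1)), np.argmax(P.sum(1))]].copy()  # spread init
    for _ in range(iters):
        lab = np.argmin(((P[:, None, :] - C) ** 2).sum(-1), axis=1)
        for j in range(len(C)):
            if (lab == j).any():
                C[j] = P[lab == j].mean(0)
    return C

# Toy stand-in for the 7-feature DTI voxel vectors of two tissue classes.
X = np.vstack([rng.normal(0.0, 0.3, (100, 7)),
               rng.normal(3.0, 0.3, (100, 7))])
protos = train_som(X)                                # protoclusters
centers = kmeans2(protos)                            # final cluster centres
voxel_class = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
```

Mapping each voxel to its nearest final centre yields the clustered image; the per-region class ratios would then feed the supervised SVM stage.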

  1. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    International Nuclear Information System (INIS)

    Dhou, S; Hurwitz, M; Cai, W; Rottmann, J; Williams, C; Wagar, M; Berbeco, R; Lewis, J H; Mishra, P; Li, R; Ionascu, D

    2015-01-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparison to ground truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. (paper)

  2. Magnetoacoustic Tomography with Magnetic Induction (MAT-MI) for Imaging Electrical Conductivity of Biological Tissue: A Tutorial Review

    Science.gov (United States)

    Li, Xu; Yu, Kai; He, Bin

    2016-01-01

    Magnetoacoustic tomography with magnetic induction (MAT-MI) is a noninvasive imaging method developed to map electrical conductivity of biological tissue with millimeter level spatial resolution. In MAT-MI, a time-varying magnetic stimulation is applied to induce eddy current inside the conductive tissue sample. With the existence of a static magnetic field, the Lorentz force acting on the induced eddy current drives mechanical vibrations producing detectable ultrasound signals. These ultrasound signals can then be acquired to reconstruct a map related to the sample’s electrical conductivity contrast. This work reviews fundamental ideas of MAT-MI and the major techniques developed in recent years. First, the physical mechanisms underlying MAT-MI imaging are described including the magnetic induction and Lorentz force induced acoustic wave propagation. Second, experimental setups and various imaging strategies for MAT-MI are reviewed and compared together with the corresponding experimental results. In addition, as a recently developed reverse mode of MAT-MI, magneto-acousto-electrical tomography with magnetic induction (MAET-MI) is briefly reviewed in terms of its theory and experimental studies. Finally, we give our opinions on existing challenges and future directions for MAT-MI research. With all the reported and future technical advancements, MAT-MI has the potential to become an important noninvasive modality for electrical conductivity imaging of biological tissue. PMID:27542088

  3. Structural study of TiO2-based transparent conducting films

    International Nuclear Information System (INIS)

    Hitosugi, T.; Yamada, N.; Nakao, S.; Hatabayashi, K.; Shimada, T.; Hasegawa, T.

    2008-01-01

    We have investigated microscopic structures of sputter and pulsed laser deposited (PLD) anatase Nb-doped TiO2 transparent conducting films, and discuss what causes the degradation of resistivity in sputter-deposited films. Cross-sectional transmission electron microscope and polarized optical microscope images show inhomogeneous intragrain structures and small grains of ∼10 μm in sputter-deposited films. From comparison with PLD films, these results suggest that homogeneous film growth is the important factor to obtain highly conducting sputter-deposited films.

  4. A SVD Based Image Complexity Measure

    DEFF Research Database (Denmark)

    Gustafsson, David Karl John; Pedersen, Kim Steenstrup; Nielsen, Mads

    2009-01-01

    Images are composed of geometric structures and texture, and different image processing tools - such as denoising, segmentation and registration - are suitable for different types of image contents. Characterization of the image content in terms of geometric structure and texture is an important problem that one is often faced with. We propose a patch based complexity measure, based on how well the patch can be approximated using singular value decomposition. As such the image complexity is determined by the complexity of the patches. The concept is demonstrated on sequences from the newly collected DIKU Multi-Scale image database.
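    The idea of scoring a patch by how well SVD approximates it can be sketched as the number of singular values needed to capture most of the patch energy; the 95% threshold here is an illustrative choice, not the paper's:

```python
import numpy as np

def patch_complexity(patch, energy=0.95):
    """Complexity of an image patch: the number of singular values needed to
    retain `energy` of the squared Frobenius energy. Few values indicate
    simple geometric structure; many indicate texture. The threshold is an
    illustrative choice, not taken from the paper."""
    s = np.linalg.svd(np.asarray(patch, float), compute_uv=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(cum, energy)) + 1

flat = np.ones((16, 16))                              # rank 1: trivially simple
texture = np.random.default_rng(1).random((16, 16))   # noise-like texture
print(patch_complexity(flat), patch_complexity(texture))
```

An image-level complexity map follows by sliding this measure over overlapping patches, so denoising or segmentation tools can be chosen per region.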

  5. Image segmentation-based robust feature extraction for color image watermarking

    Science.gov (United States)

    Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen

    2018-04-01

    This paper proposes a local digital image watermarking method based on Robust Feature Extraction. The segmentation is achieved by Simple Linear Iterative Clustering (SLIC) based on which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is proposed for feature extraction. Our method can adaptively extract feature regions from the blocks segmented by SLIC. This novel method can extract the most robust feature region in every segmented image. Each feature region is decomposed into low-frequency domain and high-frequency domain by Discrete Cosine Transform (DCT). Watermark images are then embedded into the coefficients in the low-frequency domain. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method has good performance under various attacks. Furthermore, the proposed method can obtain a trade-off between high robustness and good image quality.
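    DC-DM embedding follows the standard dither-modulation formulas: quantize the host coefficient with a bit-dependent dither, then move the coefficient only a fraction α of the way to the quantized value. The step size, α, and dither values below are illustrative choices, not the paper's parameters:

```python
import numpy as np

DELTA = 8.0                        # quantization step (illustrative)
ALPHA = 0.8                        # distortion-compensation factor
DITHER = {0: 0.0, 1: DELTA / 2}    # one dither per bit value

def dm_quantize(x, bit):
    """Dithered uniform quantizer for the given watermark bit."""
    d = DITHER[bit]
    return DELTA * np.round((x - d) / DELTA) + d

def dcdm_embed(coeff, bit):
    """Distortion-Compensated Dither Modulation of one DCT coefficient."""
    return coeff + ALPHA * (dm_quantize(coeff, bit) - coeff)

def dcdm_decode(coeff):
    """Pick the bit whose quantizer lattice lies closest to the coefficient."""
    return min(DITHER, key=lambda b: abs(coeff - dm_quantize(coeff, b)))

marked = dcdm_embed(37.3, 1)
print(dcdm_decode(marked))  # 1
```

In the proposed scheme this quantization would be applied to the low-frequency DCT coefficients of each extracted feature region.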

  6. A regularized, model-based approach to phase-based conductivity mapping using MRI.

    Science.gov (United States)

    Ropella, Kathleen M; Noll, Douglas C

    2017-11-01

    To develop a novel regularized, model-based approach to phase-based conductivity mapping that uses structural information to improve the accuracy of conductivity maps. The inverse of the three-dimensional Laplacian operator is used to model the relationship between measured phase maps and the object conductivity in a penalized weighted least-squares optimization problem. Spatial masks based on structural information are incorporated into the problem to preserve data near boundaries. The proposed Inverse Laplacian method was compared against a restricted Gaussian filter in simulation, phantom, and human experiments. The Inverse Laplacian method resulted in lower reconstruction bias and error due to noise in simulations than the Gaussian filter. The Inverse Laplacian method also produced conductivity maps closer to the measured values in a phantom and with reduced noise in the human brain, as compared to the Gaussian filter. The Inverse Laplacian method calculates conductivity maps with less noise and more accurate values near boundaries. Improving the accuracy of conductivity maps is integral for advancing the applications of conductivity mapping. Magn Reson Med 78:2011-2021, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
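    The penalized weighted least-squares formulation can be illustrated with a 1-D toy version: treat the measured phase as the inverse Laplacian of the conductivity (constants absorbed), penalize roughness, and zero the penalty across a known boundary to mimic the structural masks. All sizes and weights below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64

# Toy forward model: phase = L^{-1} @ sigma for a 1-D Laplacian L
# (Dirichlet boundaries keep L invertible; physical constants absorbed).
L = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
A = np.linalg.inv(L)

sigma_true = np.concatenate([np.ones(32), 2.0 * np.ones(32)])
phase = A @ sigma_true + 1e-4 * rng.normal(size=n)   # noisy measured phase

# First-difference roughness penalty, with the row that crosses the known
# tissue boundary zeroed out -- the "spatial mask" from structural images.
R = np.diff(np.eye(n), axis=0)
R[31] = 0.0                    # do not smooth across the boundary at voxel 32

lam = 1e-2                     # regularization weight (illustrative)
W = np.eye(n)                  # measurement-noise weights

# Penalized weighted least squares via the normal equations.
sigma_hat = np.linalg.solve(A.T @ W @ A + lam * (R.T @ R), A.T @ W @ phase)
```

Because the penalty is masked at the boundary, the piecewise-constant conductivity step is recovered without being smoothed away, which is the point of incorporating structural information.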

  7. Oligoaniline-based conductive biomaterials for tissue engineering.

    Science.gov (United States)

    Zarrintaj, Payam; Bakhshandeh, Behnaz; Saeb, Mohammad Reza; Sefat, Farshid; Rezaeian, Iraj; Ganjali, Mohammad Reza; Ramakrishna, Seeram; Mozafari, Masoud

    2018-05-01

    The science and engineering of biomaterials have improved the human life expectancy. Tissue engineering is one of the nascent strategies with an aim to fulfill this target. Tissue engineering scaffolds are one of the most significant aspects of the recent tissue repair strategies; hence, it is imperative to design biomimetic substrates with suitable features. Conductive substrates can ameliorate the cellular activity through enhancement of cellular signaling. Biocompatible polymers with conductivity can mimic the cells' niche in an appropriate manner. Bioconductive polymers based on aniline oligomers can potentially actualize this purpose because of their unique and tailorable properties. The aniline oligomers can be positioned within the molecular structure of other polymers, thus interacting with the side groups of the main polymer or acting as a comonomer in their backbone. The conductivity of oligoaniline-based conductive biomaterials can be tailored to mimic the electrical and mechanical properties of targeted tissues/organs. These bioconductive substrates can be designed with high mechanical strength for hard tissues such as the bone and with high elasticity to be used for the cardiac tissue or can be synthesized in the form of injectable hydrogels, particles, and nanofibers for noninvasive implantation; these structures can be used for applications such as drug/gene delivery and extracellular biomimetic structures. It is expected that with progress in the fields of biomaterials and tissue engineering, more innovative constructs will be proposed in the near future. This review discusses the recent advancements in the use of oligoaniline-based conductive biomaterials for tissue engineering and regenerative medicine applications. The tissue engineering applications of aniline oligomers and their derivatives have recently attracted an increasing interest due to their electroactive and biodegradable properties. However, no reports have systematically reviewed

  8. Magnetic susceptibility and electrical conductivity of metallic dental materials and their impact on MR imaging artifacts

    Czech Academy of Sciences Publication Activity Database

    Starčuková, Jana; Starčuk jr., Zenon; Hubálková, H.; Linetskiy, I.

    2008-01-01

    Roč. 24, č. 6 (2008), s. 715-723 ISSN 0109-5641 R&D Projects: GA MZd NR8110 Institutional research plan: CEZ:AV0Z20650511 Keywords : metallic dental materials * dental alloys * amalgams * MR imaging * magnetic susceptibility * electric conductivity * image artifact Subject RIV: FF - HEENT, Dentistry Impact factor: 2.941, year: 2008

  9. Privacy-Aware Image Encryption Based on Logistic Map and Data Hiding

    Science.gov (United States)

    Sun, Jianglin; Liao, Xiaofeng; Chen, Xin; Guo, Shangwei

    The increasing need for image communication and storage has created a great necessity for securely transmitting and storing images over a network. Whereas traditional image encryption algorithms usually consider the security of the whole plain image, region of interest (ROI) encryption schemes, which are of great importance in practical applications, protect the privacy regions of plain images. Existing ROI encryption schemes usually adopt approximate techniques to detect the privacy region and measure the quality of encrypted images; however, their performance is usually inconsistent with the human visual system (HVS) and is sensitive to statistical attacks. In this paper, we propose a novel privacy-aware ROI image encryption (PRIE) scheme based on logistic mapping and data hiding. The proposed scheme utilizes salient object detection to automatically, adaptively and accurately detect the privacy region of a given plain image. After private pixels have been encrypted using chaotic cryptography, the significant bits are embedded into the nonprivacy region of the plain image using data hiding. Extensive experiments are conducted to illustrate the consistency between our automatic ROI detection and HVS. Our experimental results also demonstrate that the proposed scheme exhibits satisfactory security performance.
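
The chaotic stage of such a scheme can be illustrated with a short sketch. The salient-object ROI detection and the data-hiding step are omitted, and all parameter names (`x0`, `r`, the burn-in length) are illustrative rather than taken from the paper: a logistic-map keystream is XORed with the ROI pixels, so applying the same operation twice with the same key restores the image.

```python
import numpy as np

def logistic_keystream(x0, r, n, burn_in=100):
    """Generate n pseudo-random bytes by iterating the logistic map
    x_{k+1} = r * x_k * (1 - x_k), discarding a transient burn-in."""
    x = x0
    out = np.empty(n, dtype=np.uint8)
    for _ in range(burn_in):
        x = r * x * (1 - x)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = int(x * 256) % 256
    return out

def xor_encrypt_roi(image, mask, x0=0.3141, r=3.99):
    """XOR the pixels selected by a boolean ROI mask with a chaotic
    keystream; decryption is the same call with the same key (x0, r)."""
    flat = image.copy().ravel()
    idx = np.flatnonzero(mask.ravel())
    ks = logistic_keystream(x0, r, idx.size)
    flat[idx] ^= ks
    return flat.reshape(image.shape)
```

Because XOR is an involution, decryption reuses the same key; the security of the real scheme further depends on the key sensitivity of the map in its chaotic regime (r close to 4).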

  10. SAR Image Classification Based on Its Texture Features

    Institute of Scientific and Technical Information of China (English)

    LI Pingxiang; FANG Shenghui

    2003-01-01

    SAR images not only have all-day, all-weather characteristics, but also provide object information which is different from that of visible and infrared sensors. However, SAR images have some faults, such as more speckle and fewer bands. The authors conducted experiments of texture statistics analysis on SAR image features in order to improve the accuracy of SAR image interpretation. It is found that texture analysis is an effective method for improving the accuracy of SAR image interpretation.

  11. Development and Analysis of Patient-Based Complete Conducting Airways Models.

    Directory of Open Access Journals (Sweden)

    Rafel Bordas

    Full Text Available The analysis of high-resolution computed tomography (CT images of the lung is dependent on inter-subject differences in airway geometry. The application of computational models in understanding the significance of these differences has previously been shown to be a useful tool in biomedical research. Studies using image-based geometries alone are limited to the analysis of the central airways, down to generation 6-10, as other airways are not visible on high-resolution CT. However, airways distal to this, often termed the small airways, are known to play a crucial role in common airway diseases such as asthma and chronic obstructive pulmonary disease (COPD. Other studies have incorporated an algorithmic approach to extrapolate CT segmented airways in order to obtain a complete conducting airway tree down to the level of the acinus. These models have typically been used for mechanistic studies, but also have the potential to be used in a patient-specific setting. In the current study, an image analysis and modelling pipeline was developed and applied to a number of healthy (n = 11 and asthmatic (n = 24 CT patient scans to produce complete patient-based airway models to the acinar level (mean terminal generation 15.8 ± 0.47. The resulting models are analysed in terms of morphometric properties and seen to be consistent with previous work. A number of global clinical lung function measures are compared to resistance predictions in the models to assess their suitability for use in a patient-specific setting. We show a significant difference (p < 0.01 in airways resistance at all tested flow rates in complete airway trees built using CT data from severe asthmatics (GINA 3-5 versus healthy subjects. Further, model predictions of airways resistance at all flow rates are shown to correlate with patient forced expiratory volume in one second (FEV1 (Spearman ρ = -0.65, p < 0.001 and, at low flow rates (0.00017 L/s, FEV1 over forced vital capacity (FEV1

  12. A simple solution for reducing artefacts due to conductive and dielectric effects in clinical magnetic resonance imaging at 3 T

    International Nuclear Information System (INIS)

    Sreenivas, M.; Lowry, M.; Gibbs, P.; Pickles, M.; Turnbull, L.W.

    2007-01-01

    The quality of imaging obtained at high magnetic field strengths can be degraded by various artefacts due to conductive and dielectric effects, which lead to loss of signal. Various methods have been described and used to improve the quality of images affected by such artefacts. In this article, we describe the construction and use of a simple solution that can be used to diminish artefacts due to conductive and dielectric effects in clinical imaging at 3 T field strength and thereby improve the diagnostic quality of the images obtained.

  13. A simple solution for reducing artefacts due to conductive and dielectric effects in clinical magnetic resonance imaging at 3 T

    Energy Technology Data Exchange (ETDEWEB)

    Sreenivas, M. [Department of Radiology (Yorkshire Deanery-East), Hull Royal Infirmary, Anlaby Road, Hull HU3 2JZ (United Kingdom)]. E-mail: aprilsreenivas@hotmail.com; Lowry, M. [Centre for Magnetic Resonance Investigations, University of Hull, Hull Royal Infirmary, Anlaby Road, 1PR, Hull HU3 2JZ (United Kingdom); Gibbs, P. [Centre for Magnetic Resonance Investigations, University of Hull, Hull Royal Infirmary, Anlaby Road, 1PR, Hull HU3 2JZ (United Kingdom); Pickles, M. [Centre for Magnetic Resonance Investigations, University of Hull, Hull Royal Infirmary, Anlaby Road, 1PR, Hull HU3 2JZ (United Kingdom); Turnbull, L.W. [Centre for Magnetic Resonance Investigations, University of Hull, Hull Royal Infirmary, Anlaby Road, 1PR, Hull HU3 2JZ (United Kingdom)

    2007-04-15

    The quality of imaging obtained at high magnetic field strengths can be degraded by various artefacts due to conductive and dielectric effects, which lead to loss of signal. Various methods have been described and used to improve the quality of images affected by such artefacts. In this article, we describe the construction and use of a simple solution that can be used to diminish artefacts due to conductive and dielectric effects in clinical imaging at 3 T field strength and thereby improve the diagnostic quality of the images obtained.

  14. Hierarchical layered and semantic-based image segmentation using ergodicity map

    Science.gov (United States)

    Yadegar, Jacob; Liu, Xiaoqing

    2010-04-01

    Image segmentation plays a foundational role in image understanding and computer vision. Although great strides have been made and progress achieved on automatic/semi-automatic image segmentation algorithms, designing a generic, robust, and efficient image segmentation algorithm is still challenging. Human vision is still far superior compared to computer vision, especially in interpreting semantic meanings/objects in images. We present a hierarchical/layered semantic image segmentation algorithm that can automatically and efficiently segment images into hierarchical layered/multi-scaled semantic regions/objects with contextual topological relationships. The proposed algorithm bridges the gap between high-level semantics and low-level visual features/cues (such as color, intensity, edge, etc.) through utilizing a layered/hierarchical ergodicity map, where ergodicity is computed based on a space filling fractal concept and used as a region dissimilarity measurement. The algorithm applies a highly scalable, efficient, and adaptive Peano-Cesaro triangulation/tiling technique to decompose the given image into a set of similar/homogenous regions based on low-level visual cues in a top-down manner. The layered/hierarchical ergodicity map is built through a bottom-up region dissimilarity analysis. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides efficient neighbor retrieval mechanisms for contextual topological object/region relationship generation. Experiments have been conducted within the maritime image environment where the segmented layered semantic objects include the basic level objects (i.e. sky/land/water) and deeper level objects in the sky/land/water surfaces. 
Experimental results demonstrate the proposed algorithm has the capability to robustly and efficiently segment images into layered semantic objects

  15. Retinal image quality assessment based on image clarity and content

    Science.gov (United States)

    Abdel-Hamid, Lamiaa; El-Rafei, Ahmed; El-Ramly, Salwa; Michelson, Georg; Hornegger, Joachim

    2016-09-01

    Retinal image quality assessment (RIQA) is an essential step in automated screening systems to avoid misdiagnosis caused by processing poor quality retinal images. A no-reference transform-based RIQA algorithm is introduced that assesses images based on five clarity and content quality issues: sharpness, illumination, homogeneity, field definition, and content. Transform-based RIQA algorithms have the advantage of considering retinal structures while being computationally inexpensive. Wavelet-based features are proposed to evaluate the sharpness and overall illumination of the images. A retinal saturation channel is designed and used along with wavelet-based features for homogeneity assessment. The presented sharpness and illumination features are utilized to assure adequate field definition, whereas color information is used to exclude nonretinal images. Several publicly available datasets of varying quality grades are utilized to evaluate the feature sets resulting in area under the receiver operating characteristic curve above 0.99 for each of the individual feature sets. The overall quality is assessed by a classifier that uses the collective features as an input vector. The classification results show superior performance of the algorithm in comparison to other methods from literature. Moreover, the algorithm addresses efficiently and comprehensively various quality issues and is suitable for automatic screening systems.

  16. Electromagnetic MUSIC-type imaging of perfectly conducting, arc-like cracks at single frequency

    Science.gov (United States)

    Park, Won-Kwang; Lesselier, Dominique

    2009-11-01

    We propose a non-iterative MUSIC (MUltiple SIgnal Classification)-type algorithm for the time-harmonic electromagnetic imaging of one or more perfectly conducting, arc-like cracks found within a homogeneous space R2. The algorithm is based on a factorization of the Multi-Static Response (MSR) matrix collected in the far-field at a single, nonzero frequency in either Transverse Magnetic (TM) mode (Dirichlet boundary condition) or Transverse Electric (TE) mode (Neumann boundary condition), followed by the calculation of a MUSIC cost functional expected to exhibit peaks along the crack curves every half wavelength. Numerical experimentation from exact, noiseless and noisy data shows that this is indeed the case and that the proposed algorithm behaves in a robust manner, with better results in the TM mode than in the TE mode, for which one would have to estimate the normal to the crack to obtain optimal results.
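
The core of such a MUSIC-type method, projecting test vectors onto the noise subspace of the MSR matrix, can be sketched generically. This is an idealized 2-D point-scatterer sketch, not the authors' exact factorization; the steering-vector model and all names are illustrative assumptions.

```python
import numpy as np

def music_image(K, receivers, grid, k0, n_signal):
    """MUSIC-type imaging from a multi-static response (MSR) matrix K.
    receivers: (N, 2) sensor positions; grid: (M, 2) test points;
    k0: wavenumber; n_signal: assumed signal-subspace dimension.
    Returns the MUSIC cost functional on the grid (peaks at scatterers)."""
    U, s, Vh = np.linalg.svd(K)
    Un = U[:, n_signal:]                       # noise subspace
    out = np.empty(len(grid))
    for m, z in enumerate(grid):
        d = np.linalg.norm(receivers - z, axis=1)
        # idealized far-field test (steering) vector for a point at z
        g = np.exp(1j * k0 * d) / np.sqrt(len(receivers))
        # small projection onto the noise subspace => large functional value
        out[m] = 1.0 / np.linalg.norm(Un.conj().T @ g)
    return out
```

On synthetic data built from the same steering model, the functional peaks sharply at the scatterer locations; with measured data the peaks broaden to the half-wavelength resolution noted in the abstract.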

  17. Automated Orthorectification of VHR Satellite Images by SIFT-Based RPC Refinement

    Directory of Open Access Journals (Sweden)

    Hakan Kartal

    2018-06-01

    Full Text Available Raw remotely sensed images contain geometric distortions and cannot be used directly for map-based applications, accurate locational information extraction or geospatial data integration. A geometric correction process must be conducted to minimize the errors related to distortions and achieve the desired location accuracy before further analysis. A considerable number of images might be needed when working over large areas or in temporal domains in which manual geometric correction requires more labor and time. To overcome these problems, new algorithms have been developed to make the geometric correction process autonomous. The Scale Invariant Feature Transform (SIFT algorithm is an image matching algorithm used in remote sensing applications that has received attention in recent years. In this study, the effects of the incidence angle, surface topography and land cover (LC characteristics on SIFT-based automated orthorectification were investigated at three different study sites with different topographic conditions and LC characteristics using Pleiades very high resolution (VHR images acquired at different incidence angles. The results showed that the location accuracy of the orthorectified images increased with lower incidence angle images. More importantly, the topographic characteristics had no observable impacts on the location accuracy of SIFT-based automated orthorectification, and the results showed that Ground Control Points (GCPs are mainly concentrated in the “Forest” and “Semi Natural Area” LC classes. A multi-thread code was designed to reduce the automated processing time, and the results showed that the process performed 7 to 16 times faster using an automated approach. Analyses performed on various spectral modes of multispectral data showed that the arithmetic data derived from pan-sharpened multispectral images can be used in automated SIFT-based RPC orthorectification.

  18. An Image-Based Finite Element Approach for Simulating Viscoelastic Response of Asphalt Mixture

    Directory of Open Access Journals (Sweden)

    Wenke Huang

    2016-01-01

    Full Text Available This paper presents an image-based micromechanical modeling approach to predict the viscoelastic behavior of asphalt mixture. An improved image analysis technique based on the OTSU thresholding operation was employed to reduce the beam hardening effect in X-ray CT images. We developed a voxel-based 3D digital reconstruction model of asphalt mixture with the CT images after being processed. In this 3D model, the aggregate phase and air void were considered as elastic materials while the asphalt mastic phase was considered as linear viscoelastic material. The viscoelastic constitutive model of asphalt mastic was implemented in a finite element code using the ABAQUS user material subroutine (UMAT. An experimental procedure for determining the parameters of the viscoelastic constitutive model at a given temperature was proposed. To examine the capability of the model and the accuracy of the parameter, comparisons between the numerical predictions and the observed laboratory results of bending and compression tests were conducted. Finally, the verified digital sample of asphalt mixture was used to predict the asphalt mixture viscoelastic behavior under dynamic loading and creep-recovery loading. Simulation results showed that the presented image-based digital sample may be appropriate for predicting the mechanical behavior of asphalt mixture when all the mechanical properties for different phases became available.

  19. NSCT BASED LOCAL ENHANCEMENT FOR ACTIVE CONTOUR BASED IMAGE SEGMENTATION APPLICATION

    Directory of Open Access Journals (Sweden)

    Hiren Mewada

    2010-08-01

    Full Text Available Because of their cross-disciplinary nature, active contour modeling techniques have been utilized extensively for image segmentation. In traditional active contour segmentation techniques based on level set methods, the energy functions are defined in terms of the intensity gradient. This makes them highly sensitive to situations where the underlying image content is characterized by nonhomogeneities due to illumination and contrast conditions, which is the main obstacle to making them fully automatic image segmentation techniques. This paper introduces an image enhancement based approach to this problem. The enhanced image is obtained using the NonSubsampled Contourlet Transform, which strengthens edges in directions where the illumination is poor, and an active contour model based on the level set technique is then utilized to segment the object. Experimental results demonstrate that the proposed method can be used along with existing active contour model based segmentation methods under conditions of intensity non-homogeneity to make them fully automatic.

  20. Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.

    Science.gov (United States)

    Liu, Min; Wang, Xueping; Zhang, Hongzhong

    2018-03-01

    In the biomedical field, digital multi-focal images are very important for documentation and communication of specimen data, because the morphological information for a transparent specimen can be captured in the form of a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We propose a deep convolutional neural network (CNN) image fusion based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN based image fusion technique is used to combine relevant information of multi-focal images within a given image stack into a single image, which is more informative and complete than any single image in the given stack. Besides, multi-focal images within a stack are fused along 3 orthogonal directions, and multiple features extracted from the fused images along different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors: texture, shape, different instances within the same class and different classes of objects, we embed the deep CNN based image fusion method within a multilinear framework to propose an image fusion based multilinear classifier. The experimental results on nematode multi-focal image stacks demonstrated that the deep CNN image fusion based multilinear classifier can reach a higher classification rate (95.7%) than that of the previous multilinear based approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed deep CNN image fusion based multilinear approach shows great potential in building an automated nematode taxonomy system for nematologists. It is effective to classify multi-focal image stacks. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. Interleaved EPI diffusion imaging using SPIRiT-based reconstruction with virtual coil compression.

    Science.gov (United States)

    Dong, Zijing; Wang, Fuyixue; Ma, Xiaodong; Zhang, Zhe; Dai, Erpeng; Yuan, Chun; Guo, Hua

    2018-03-01

    To develop a novel diffusion imaging reconstruction framework based on iterative self-consistent parallel imaging reconstruction (SPIRiT) for multishot interleaved echo planar imaging (iEPI), with computation acceleration by virtual coil compression. As a general approach for autocalibrating parallel imaging, SPIRiT improves the performance of traditional generalized autocalibrating partially parallel acquisitions (GRAPPA) methods in that the formulation with self-consistency is better conditioned, suggesting SPIRiT to be a better candidate in k-space-based reconstruction. In this study, a general SPIRiT framework is adopted to incorporate both coil sensitivity and phase variation information as virtual coils and then is applied to 2D navigated iEPI diffusion imaging. To reduce the reconstruction time when using a large number of coils and shots, a novel shot-coil compression method is proposed for computation acceleration in Cartesian sampling. Simulations and in vivo experiments were conducted to evaluate the performance of the proposed method. Compared with the conventional coil compression, the shot-coil compression achieved higher compression rates with reduced errors. The simulation and in vivo experiments demonstrate that the SPIRiT-based reconstruction outperformed the existing method, realigned GRAPPA, and provided superior images with reduced artifacts. The SPIRiT-based reconstruction with virtual coil compression is a reliable method for high-resolution iEPI diffusion imaging. Magn Reson Med 79:1525-1531, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  2. Effective Sequential Classifier Training for SVM-Based Multitemporal Remote Sensing Image Classification

    Science.gov (United States)

    Guo, Yiqing; Jia, Xiuping; Paull, David

    2018-06-01

    The explosive availability of remote sensing images has challenged supervised classification algorithms such as Support Vector Machines (SVM), as training samples tend to be highly limited due to the expensive and laborious task of ground truthing. The temporal correlation and spectral similarity between multitemporal images have opened up an opportunity to alleviate this problem. In this study, an SVM-based Sequential Classifier Training (SCT-SVM) approach is proposed for multitemporal remote sensing image classification. The approach leverages the classifiers of previous images to reduce the required number of training samples for the classifier training of an incoming image. For each incoming image, a rough classifier is first predicted based on the temporal trend of a set of previous classifiers. The predicted classifier is then fine-tuned into a more accurate position with current training samples. This approach can be applied progressively to sequential image data, with only a small number of training samples being required from each image. Experiments were conducted with Sentinel-2A multitemporal data over an agricultural area in Australia. Results showed that the proposed SCT-SVM achieved better classification accuracies compared with two state-of-the-art model transfer algorithms. When training data are insufficient, the overall classification accuracy of the incoming image was improved from 76.18% to 94.02% with the proposed SCT-SVM, compared with those obtained without the assistance from previous images. These results demonstrate that leveraging a priori information from previous images can provide advantageous assistance for later images in multitemporal image classification.
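
The predict-then-fine-tune idea can be sketched with a plain sub-gradient hinge-loss trainer standing in for the paper's SVM machinery; the linear-extrapolation rule and all names here are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def train_linear_svm(X, y, w0=None, epochs=100, lr=0.1, lam=0.01):
    """Linear SVM trained by sub-gradient descent on the hinge loss.
    Labels y are in {-1, +1}; w0 allows warm-starting from a predicted
    classifier, which is the key ingredient of sequential training."""
    Xb = np.c_[X, np.ones(len(X))]           # fold the bias into the weights
    w = np.zeros(Xb.shape[1]) if w0 is None else w0.copy()
    for _ in range(epochs):
        margins = y * (Xb @ w)
        viol = margins < 1                   # margin violators
        grad = lam * w
        if viol.any():
            grad = grad - (Xb[viol] * y[viol, None]).sum(axis=0) / len(y)
        w -= lr * grad
    return w

def predict_next_classifier(w_prev2, w_prev1):
    """Extrapolate the classifier along the temporal trend of its predecessors."""
    return 2 * w_prev1 - w_prev2

def accuracy(w, X, y):
    Xb = np.c_[X, np.ones(len(X))]
    return np.mean(np.sign(Xb @ w) == y)
```

With a warm start from `predict_next_classifier`, only a handful of labelled samples from the incoming image are needed for fine-tuning, which mirrors the sample-efficiency argument of the abstract.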

  3. Image-based occupancy sensor

    Science.gov (United States)

    Polese, Luigi Gentile; Brackney, Larry

    2015-05-19

    An image-based occupancy sensor includes a motion detection module that receives and processes an image signal to generate a motion detection signal, a people detection module that receives the image signal and processes the image signal to generate a people detection signal, a face detection module that receives the image signal and processes the image signal to generate a face detection signal, and a sensor integration module that receives the motion detection signal from the motion detection module, receives the people detection signal from the people detection module, receives the face detection signal from the face detection module, and generates an occupancy signal using the motion detection signal, the people detection signal, and the face detection signal, with the occupancy signal indicating vacancy or occupancy, with an occupancy indication specifying that one or more people are detected within the monitored volume.

  4. Effect of conductance linearity and multi-level cell characteristics of TaOx-based synapse device on pattern recognition accuracy of neuromorphic system

    Science.gov (United States)

    Sung, Changhyuck; Lim, Seokjae; Kim, Hyungjun; Kim, Taesu; Moon, Kibong; Song, Jeonghwan; Kim, Jae-Joon; Hwang, Hyunsang

    2018-03-01

    To improve the classification accuracy of an image data set (CIFAR-10) by using analog input voltage, synapse devices with excellent conductance linearity (CL) and multi-level cell (MLC) characteristics are required. We analyze the CL and MLC characteristics of TaOx-based filamentary resistive random access memory (RRAM) to implement the synapse device in neural network hardware. Our findings show that the number of oxygen vacancies in the filament constriction region of the RRAM directly controls the CL and MLC characteristics. By adopting a Ta electrode (instead of Ti) and the hot-forming step, we could form a dense conductive filament. As a result, a wide range of conductance levels with CL is achieved and significantly improved image classification accuracy is confirmed.

  5. Optical image hiding based on interference

    Science.gov (United States)

    Zhang, Yan; Wang, Bo

    2009-11-01

    Optical image processing has received a lot of attention recently due to its large capacity and fast speed. Many image encryption and hiding technologies have been proposed based on optical technology. In conventional image encryption methods, random phase masks are usually used as encryption keys to encode the images into random white-noise distributions. However, this kind of method requires interference technology such as holography to record complex amplitude, and it is vulnerable to attack techniques. Image hiding methods employ phase retrieval algorithms to encode the images into two or more phase masks. The hiding process is carried out within a computer and the images are reconstructed optically, but the iterative algorithms need a lot of time to hide the image into the masks. All methods mentioned above are based on the optical diffraction of the phase masks. In this presentation, we propose a new optical image hiding method based on interference. Coherent light passes through two phase masks and is combined by a beam splitter. The two beams interfere with each other and the desired image appears at the pre-designed plane. The two phase distribution masks are designed analytically; therefore, the hiding speed can be markedly improved. Simulations are carried out to demonstrate the validity of the proposed method.
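
The analytic design of the two masks rests on a simple identity: the sum of two unit-amplitude phase-only fields, exp(i*p1) + exp(i*p2) = 2*cos(delta)*exp(i*psi), can reproduce any complex amplitude of modulus up to 2. A minimal sketch under that assumption (the normalization of the image and the optical reconstruction geometry are left out):

```python
import numpy as np

def design_phase_masks(target):
    """Split a desired complex field (|target| <= 2) into two pure-phase
    masks whose coherent superposition reproduces it exactly:
        exp(1j*p1) + exp(1j*p2) == target
    via p1, p2 = psi +/- arccos(|target| / 2)."""
    psi = np.angle(target)
    delta = np.arccos(np.clip(np.abs(target) / 2.0, 0.0, 1.0))
    return psi + delta, psi - delta
```

Because the masks are obtained in closed form, no iterative phase retrieval is needed, which is exactly the speed advantage the abstract claims over phase-retrieval-based hiding.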

  6. Model-Based Reconstructive Elasticity Imaging Using Ultrasound

    Directory of Open Access Journals (Sweden)

    Salavat R. Aglyamov

    2007-01-01

    Full Text Available Elasticity imaging is a reconstructive imaging technique where tissue motion in response to mechanical excitation is measured using modern imaging systems, and the estimated displacements are then used to reconstruct the spatial distribution of Young's modulus. Here we present an ultrasound elasticity imaging method that utilizes the model-based technique for Young's modulus reconstruction. Based on the geometry of the imaged object, only one axial component of the strain tensor is used. The numerical implementation of the method is highly efficient because the reconstruction is based on an analytic solution of the forward elastic problem. The model-based approach is illustrated using two potential clinical applications: differentiation of liver hemangioma and staging of deep venous thrombosis. Overall, these studies demonstrate that model-based reconstructive elasticity imaging can be used in applications where the geometry of the object and the surrounding tissue is somewhat known and certain assumptions about the pathology can be made.

  7. Cluster Ensemble-Based Image Segmentation

    Directory of Open Access Journals (Sweden)

    Xiaoru Wang

    2013-07-01

    Full Text Available Image segmentation is the foundation of computer vision applications. In this paper, we propose a new cluster ensemble-based image segmentation algorithm, which overcomes several problems of traditional methods. We make two main contributions in this paper. First, we introduce the cluster ensemble concept to fuse the segmentation results from different types of visual features effectively, which can deliver a better final result and achieve a much more stable performance for broad categories of images. Second, we exploit the PageRank idea from Internet applications and apply it to the image segmentation task. This can improve the final segmentation results by combining the spatial information of the image and the semantic similarity of regions. Our experiments on four public image databases validate the superiority of our algorithm over conventional single type of feature or multiple types of features-based algorithms, since our algorithm can fuse multiple types of features effectively for better segmentation results. Moreover, our method is also proved to be very competitive in comparison with other state-of-the-art segmentation algorithms.

  8. ROV Based Underwater Blurred Image Restoration

    Institute of Scientific and Technical Information of China (English)

    LIU Zhishen; DING Tianfu; WANG Gang

    2003-01-01

    In this paper, we present a method of ROV-based image processing to restore blurred underwater images, based on the theory of light and image transmission in the sea. A computer is used to simulate the maximum detection range of the ROV under different water body conditions. The receiving irradiance of the video camera at different detection ranges is also calculated. The ROV's detection performance under different water body conditions is given by simulation. We restore the blurred underwater images using the Wiener filter based on the simulation. The Wiener filter is shown to be a simple, useful method for underwater image restoration in the ROV underwater experiments. We also present examples of restored images of an underwater standard target taken by the video camera in these experiments.
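
The Wiener filter used in the restoration step is standard frequency-domain deconvolution; a minimal sketch, assuming the point-spread function is known (e.g. from the simulated water-body transmission model) and using a constant noise-to-signal ratio `nsr` (an illustrative simplification):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener filter: F = H* / (|H|^2 + NSR) * G, where
    H is the transfer function of the centred PSF and G the blurred image
    spectrum. nsr is the assumed noise-to-signal power ratio."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))
```

In practice `nsr` trades noise amplification against sharpness and would be tuned to the simulated water conditions.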

  9. Image classification using multiscale information fusion based on saliency driven nonlinear diffusion filtering.

    Science.gov (United States)

    Hu, Weiming; Hu, Ruiguang; Xie, Nianhua; Ling, Haibin; Maybank, Stephen

    2014-04-01

    In this paper, we propose saliency driven image multiscale nonlinear diffusion filtering. The resulting scale space in general preserves or even enhances semantically important structures such as edges, lines, or flow-like structures in the foreground, and inhibits and smoothes clutter in the background. The image is classified using multiscale information fusion based on the original image, the image at the final scale at which the diffusion process converges, and the image at a midscale. Our algorithm emphasizes the foreground features, which are important for image classification. The background image regions, whether considered as contexts of the foreground or noise to the foreground, can be globally handled by fusing information from different scales. Experimental tests of the effectiveness of the multiscale space for the image classification are conducted on the following publicly available datasets: 1) the PASCAL 2005 dataset; 2) the Oxford 102 flowers dataset; and 3) the Oxford 17 flowers dataset, with high classification rates.
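
The nonlinear diffusion underlying this scale space is of Perona-Malik type: the local conductivity is a decreasing function of the gradient magnitude, so strong edges block diffusion while flat regions are smoothed. A minimal sketch without the paper's saliency-driven modulation (which would additionally weight the conductivity by a per-pixel saliency map); parameter values are illustrative:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, step=0.2):
    """Plain Perona-Malik nonlinear diffusion. The conductivity
    g = exp(-(|grad|/kappa)^2) inhibits diffusion across strong edges
    while clutter in homogeneous regions is smoothed away."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        # one-sided differences to the four neighbours (circular boundaries)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Iterating to convergence yields the final scale of the multiscale fusion; a mid-scale image is simply an intermediate iterate.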

  10. Image degradation characteristics and restoration based on regularization for diffractive imaging

    Science.gov (United States)

    Zhi, Xiyang; Jiang, Shikai; Zhang, Wei; Wang, Dawei; Li, Yun

    2017-11-01

    The diffractive membrane optical imaging system is an important development trend for ultra-large-aperture, lightweight space cameras. However, physics-based diffractive imaging degradation characteristics and the corresponding image restoration methods have been less studied. In this paper, the model of image quality degradation for the diffraction imaging system is first deduced mathematically based on diffraction theory, and the degradation characteristics are then analyzed. On this basis, a novel regularization model of image restoration that contains multiple prior constraints is established. After that, an approach for solving the equation with coexisting multiple norms and multiple regularization (prior) parameters is presented. Subsequently, a space-variant PSF image restoration method for large-aperture diffractive imaging systems is proposed, combined with a block-partitioning idea based on isoplanatic regions. Experimentally, the proposed algorithm demonstrates its capacity to achieve multi-objective improvement, including MTF enhancement, dispersion correction, noise and artifact suppression, and detail preservation, and produces satisfactory visual quality. This can provide a scientific basis for applications and has promising prospects for future space applications of diffractive membrane imaging technology.

  11. No-reference image quality assessment based on statistics of convolution feature maps

    Science.gov (United States)

    Lv, Xiaoxin; Qin, Min; Chen, Xiaohui; Wei, Guo

    2018-04-01

    We propose a Convolutional Feature Maps (CFM) driven approach to accurately predict image quality. Our motivation is the finding that Natural Scene Statistics (NSS) features computed on convolutional feature maps are significantly sensitive to the degree of distortion in an image. In our method, a Convolutional Neural Network (CNN) is trained to obtain the kernels that generate the CFM. We design a forward NSS layer that operates on the CFM to better extract NSS features. The quality-aware features derived from the output of the NSS layer are effective in describing the type and degree of distortion an image has suffered. Finally, Support Vector Regression (SVR) is employed in our No-Reference Image Quality Assessment (NR-IQA) model to predict the subjective quality score of a distorted image. Experiments conducted on two public databases demonstrate that the performance of the proposed method is competitive with state-of-the-art NR-IQA methods.
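
    The NSS features such models typically build on are mean-subtracted contrast-normalized (MSCN) coefficients; a minimal numpy sketch follows (the 7-tap Gaussian window and the constant C are conventional choices, not details taken from this abstract):

```python
import numpy as np

def _smooth(img, k):
    """Separable 'same'-size convolution with a 1-D kernel k (zero padding)."""
    rows = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="same")

def mscn(image, C=1.0):
    """Mean-subtracted contrast-normalized coefficients: subtract a local
    Gaussian-weighted mean and divide by the local standard deviation."""
    img = np.asarray(image, dtype=np.float64)
    t = np.arange(-3, 4)
    k = np.exp(-0.5 * (t / (7.0 / 6.0)) ** 2)
    k /= k.sum()
    mu = _smooth(img, k)
    var = _smooth(img * img, k) - mu * mu
    sigma = np.sqrt(np.clip(var, 0.0, None))
    return (img - mu) / (sigma + C)
```

    Histograms of these coefficients deviate from their natural-image shape as distortion grows, which is what makes them usable as quality-aware features for an SVR.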

  12. Super-Resolution of Plant Disease Images for the Acceleration of Image-based Phenotyping and Vigor Diagnosis in Agriculture.

    Science.gov (United States)

    Yamamoto, Kyosuke; Togami, Takashi; Yamaguchi, Norio

    2017-11-06

    Unmanned aerial vehicles (UAVs or drones) are a very promising branch of technology, and they have been utilized in agriculture, in cooperation with image processing technologies, for phenotyping and vigor diagnosis. One of the problems in the utilization of UAVs for agricultural purposes is the limitation in flight time. It is necessary to fly at a high altitude to capture the maximum number of plants in the limited time available, but this reduces the spatial resolution of the captured images. In this study, we applied a super-resolution method to the low-resolution images of tomato diseases to recover detailed appearances, such as lesions on plant organs. We also conducted disease classification using high-resolution, low-resolution, and super-resolution images to evaluate the effectiveness of super-resolution methods in disease classification. Our results indicated that the super-resolution method outperformed conventional image scaling methods in spatial resolution enhancement of tomato disease images. The results of disease classification showed that the accuracy attained was also better by a large margin with super-resolution images than with low-resolution images. These results indicated that our approach not only recovered the information lost in low-resolution images, but also exerted a beneficial influence on further image analysis. The proposed approach will accelerate image-based phenotyping and vigor diagnosis in the field, because it not only saves time to capture images of a crop in a cultivation field but also secures the accuracy of these images for further analysis.

  13. Photonics-Based Microwave Image-Reject Mixer

    Directory of Open Access Journals (Sweden)

    Dan Zhu

    2018-03-01

    Full Text Available Recent developments in photonics-based microwave image-reject mixers (IRMs are reviewed with an emphasis on the pre-filtering method, which applies an optical or electrical filter to remove the undesired image, and the phase cancellation method, which is realized by introducing an additional phase to the converted image and cancelling it through coherent combination without phase shift. Applications of photonics-based microwave IRM in electronic warfare, radar systems and satellite payloads are described. The inherent challenges of implementing photonics-based microwave IRM to meet specific requirements of the radio frequency (RF system are discussed. Developmental trends of the photonics-based microwave IRM are also discussed.

  14. Development of a compact scintillator-based high-resolution Compton camera for molecular imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kishimoto, A., E-mail: daphne3h-aya@ruri.waseda.jp [Research Institute for Science and Engineering, Waseda University, 3-4-1 Ohkubo, Shinjuku, Tokyo (Japan); Kataoka, J.; Koide, A.; Sueoka, K.; Iwamoto, Y.; Taya, T. [Research Institute for Science and Engineering, Waseda University, 3-4-1 Ohkubo, Shinjuku, Tokyo (Japan); Ohsuka, S. [Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu, Shizuoka (Japan)

    2017-02-11

    The Compton camera, which images gamma-ray distributions by utilizing the kinematics of Compton scattering, is a promising detector capable of imaging across a wide range of energies. In this study, we aim to construct a small-animal molecular imaging system covering a wide energy range by using the Compton camera. We developed a compact medical Compton camera based on a Ce-doped Gd{sub 3}Al{sub 2}Ga{sub 3}O{sub 12} (Ce:GAGG) scintillator and a multi-pixel photon counter (MPPC). A basic performance test confirmed that, at 662 keV, the typical energy resolution was 7.4 % (FWHM) and the angular resolution was 4.5° (FWHM). We then used the medical Compton camera to conduct imaging experiments based on a 3-D image reconstruction algorithm using the multi-angle data acquisition method. The results confirmed that, for a {sup 137}Cs point source at a distance of 4 cm, the image had a spatial resolution of 3.1 mm (FWHM). Furthermore, we succeeded in producing a 3-D multi-color image of different simultaneous energy sources ({sup 22}Na [511 keV], {sup 137}Cs [662 keV], and {sup 54}Mn [834 keV]).
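
    The Compton kinematics the camera exploits fix the scattering angle from the two deposited energies alone, which is what makes image reconstruction possible without collimation; a minimal sketch (the energy split below is illustrative):

```python
import math

ME_C2 = 511.0  # electron rest energy in keV

def compton_angle(e_scatter, e_absorb):
    """Scattering angle (degrees) from the energies deposited in the
    scatterer and absorber: cos(theta) = 1 - me*c^2 * (1/E' - 1/E0),
    where E0 = e_scatter + e_absorb and E' = e_absorb."""
    e0 = e_scatter + e_absorb
    cos_t = 1.0 - ME_C2 * (1.0 / e_absorb - 1.0 / e0)
    if not -1.0 <= cos_t <= 1.0:
        raise ValueError("energies inconsistent with Compton kinematics")
    return math.degrees(math.acos(cos_t))

# A 662 keV (Cs-137) photon depositing 200 keV in the scatterer:
big = compton_angle(200.0, 462.0)
# A small energy deposit implies a small scattering angle:
small = compton_angle(10.0, 652.0)
```

    Each event constrains the source to a cone with this opening angle; intersecting many cones from multi-angle acquisitions yields the 3-D image.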

  15. PIXEL PATTERN BASED STEGANOGRAPHY ON IMAGES

    Directory of Open Access Journals (Sweden)

    R. Rejani

    2015-02-01

    Full Text Available One of the drawbacks of most existing steganography methods is that they alter the bits used for storing color information; examples include LSB- or MSB-based steganography. There are also various existing methods, such as the Dynamic RGB Intensity Based Steganography Scheme and Secure RGB Image Steganography from Pixel Indicator to Triple Algorithm, that can be used to identify the steganography method used and break it. Another drawback of the existing methods is that they add noise to the image, which makes the image look dull or grainy and raises a person's suspicion about the existence of a hidden message within the image. To overcome these shortcomings we have come up with a pixel-pattern-based steganography, which involves hiding the message within an image by using the existing RGB values whenever possible at the pixel level, or with minimum changes. Along with the image, a key is also used to decrypt the message stored at the pixel level. For further protection, both the stored message and the key file are in encrypted format, which can have the same or different keys for decryption. Hence we call it an RGB pixel-pattern-based steganography.
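
    The LSB embedding that the paper contrasts itself with can be sketched in a few lines of numpy; this is the generic technique being criticized, not the proposed pixel-pattern scheme:

```python
import numpy as np

def lsb_embed(pixels, bits):
    """Hide one bit per pixel value in the least significant bit."""
    out = pixels.copy()
    flat = out.reshape(-1)
    payload = np.asarray(bits, dtype=np.uint8)
    flat[:payload.size] = (flat[:payload.size] & np.uint8(0xFE)) | payload
    return out

def lsb_extract(pixels, n_bits):
    """Read back the first n_bits least significant bits."""
    return (pixels.reshape(-1)[:n_bits] & 1).tolist()

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego = lsb_embed(cover, message)
```

    Each touched pixel changes by at most one intensity level, yet statistical attacks on exactly this bit plane are what the pixel-pattern approach is designed to avoid.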

  16. On-Line Multi-Damage Scanning Spatial-Wavenumber Filter Based Imaging Method for Aircraft Composite Structure

    Directory of Open Access Journals (Sweden)

    Yuanqiang Ren

    2017-05-01

    Full Text Available Structural health monitoring (SHM of aircraft composite structure is helpful to increase reliability and reduce maintenance costs. Due to the great effectiveness in distinguishing particular guided wave modes and identifying the propagation direction, the spatial-wavenumber filter technique has emerged as an interesting SHM topic. In this paper, a new scanning spatial-wavenumber filter (SSWF based imaging method for multiple damages is proposed to conduct on-line monitoring of aircraft composite structures. Firstly, an on-line multi-damage SSWF is established, including the fundamental principle of SSWF for multiple damages based on a linear piezoelectric (PZT sensor array, and a corresponding wavenumber-time imaging mechanism by using the multi-damage scattering signal. Secondly, through combining the on-line multi-damage SSWF and a PZT 2D cross-shaped array, an image-mapping method is proposed to conduct wavenumber synthesis and convert the two wavenumber-time images obtained by the PZT 2D cross-shaped array to an angle-distance image, from which the multiple damages can be directly recognized and located. In the experimental validation, both simulated multi-damage and real multi-damage introduced by repeated impacts are performed on a composite plate structure. The maximum localization error is less than 2 cm, which shows good performance of the multi-damage imaging method. Compared with the existing spatial-wavenumber filter based damage evaluation methods, the proposed method requires no more than the multi-damage scattering signal and can be performed without depending on any wavenumber modeling or measuring. Besides, this method locates multiple damages by imaging instead of the geometric method, which helps to improve the signal-to-noise ratio. Thus, it can be easily applied to on-line multi-damage monitoring of aircraft composite structures.
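
    The core spatial-wavenumber filtering idea, projecting one snapshot of a linear sensor array onto steering vectors and reading off the dominant wavenumber, can be sketched as follows (the array geometry and wavenumber values are illustrative, not taken from the paper):

```python
import numpy as np

def wavenumber_spectrum(snapshot, positions, k_grid):
    """Spatial-wavenumber filter: project one complex snapshot of a linear
    sensor array onto steering vectors exp(j*k*x); a peak in the output
    magnitude marks the dominant wavenumber of the incoming wave."""
    out = []
    for k in k_grid:
        steering = np.exp(1j * k * positions)
        out.append(abs(np.vdot(steering, snapshot)))  # vdot conjugates arg 1
    return np.array(out)

x = np.linspace(0.0, 0.1, 11)        # 11 sensors, 1 cm pitch (meters)
snap = np.exp(1j * 200.0 * x)        # wave with wavenumber 200 rad/m
ks = np.linspace(0.0, 400.0, 81)     # candidate wavenumbers, step 5 rad/m
spectrum = wavenumber_spectrum(snap, x, ks)
```

    Scanning this filter over time windows of the multi-damage scattering signal, and repeating it on the second arm of the 2D cross-shaped array, yields the two wavenumber-time images the method maps into an angle-distance image.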

  17. A SPIRAL-BASED DOWNSCALING METHOD FOR GENERATING 30 M TIME SERIES IMAGE DATA

    Directory of Open Access Journals (Sweden)

    B. Liu

    2017-09-01

    high spatial resolution images image by image. Simulated and remote sensing image downscaling experiments were conducted. In the simulated experiment, the 30 m class map dataset GlobeLand30 was adopted to investigate the effect of avoiding the underdetermined problem in the downscaling procedure, and a comparison between the spiral and the window approaches was conducted. Further, MODIS NDVI and Landsat image data were adopted to generate a 30 m time series of NDVI in the remote sensing image downscaling experiment. The simulated experiment results showed that the proposed method had a robust performance in downscaling pixels in heterogeneous regions and indicated that it was superior to traditional window-based methods. The generated high-resolution time series may benefit the mapping and updating of land cover data.

  18. Pc-Based Floating Point Imaging Workstation

    Science.gov (United States)

    Guzak, Chris J.; Pier, Richard M.; Chinn, Patty; Kim, Yongmin

    1989-07-01

    The medical, military, scientific and industrial communities have come to rely on imaging and computer graphics for solutions to many types of problems. Systems based on imaging technology are used to acquire and process images, and analyze and extract data from images that would otherwise be of little use. Images can be transformed and enhanced to reveal detail and meaning that would go undetected without imaging techniques. The success of imaging has increased the demand for faster and less expensive imaging systems and as these systems become available, more and more applications are discovered and more demands are made. From the designer's perspective the challenge to meet these demands forces him to attack the problem of imaging from a different perspective. The computing demands of imaging algorithms must be balanced against the desire for affordability and flexibility. Systems must be flexible and easy to use, ready for current applications but at the same time anticipating new, unthought of uses. Here at the University of Washington Image Processing Systems Lab (IPSL) we are focusing our attention on imaging and graphics systems that implement imaging algorithms for use in an interactive environment. We have developed a PC-based imaging workstation with the goal to provide powerful and flexible, floating point processing capabilities, along with graphics functions in an affordable package suitable for diverse environments and many applications.

  19. Direct observation of conductive filament formation in Alq3 based organic resistive memories

    Energy Technology Data Exchange (ETDEWEB)

    Busby, Y., E-mail: yan.busby@unamur.be; Pireaux, J.-J. [Research Center in the Physics of Matter and Radiation (PMR), Laboratoire Interdisciplinaire de Spectroscopie Electronique (LISE), University of Namur, B-5000 Namur (Belgium); Nau, S.; Sax, S. [NanoTecCenter Weiz Forschungsgesellschaft mbH, Franz-Pichler Straße 32, A-8160 Weiz (Austria); List-Kratochvil, E. J. W. [NanoTecCenter Weiz Forschungsgesellschaft mbH, Franz-Pichler Straße 32, A-8160 Weiz (Austria); Institute of Solid State Physics, Graz University of Technology, A-8010 Graz (Austria); Novak, J.; Banerjee, R.; Schreiber, F. [Institute of Applied Physics, Eberhard-Karls-Universität Tübingen, D-72076 Tübingen (Germany)

    2015-08-21

    This work explores resistive switching mechanisms in non-volatile organic memory devices based on tris(8-hydroxyquinoline)aluminum (Alq{sub 3}). Advanced characterization tools are applied to investigate metal diffusion in ITO/Alq{sub 3}/Ag memory device stacks leading to conductive filament formation. The morphology of Alq{sub 3}/Ag layers as a function of the metal evaporation conditions is studied by X-ray reflectivity, while depth profile analysis with X-ray photoelectron spectroscopy and time-of-flight secondary ion mass spectrometry is applied to characterize operational memory elements displaying reliable bistable current-voltage characteristics. 3D images of the distribution of silver inside the organic layer clearly point towards the existence of conductive filaments and allow for the identification of the initial filament formation and inactivation mechanisms during switching of the device. Initial filament formation is suggested to be driven by field assisted diffusion of silver from abundant structures formed during the top electrode evaporation, whereas thermochemical effects lead to local filament inactivation.

  20. Performance Analysis of Segmentation of Hyperspectral Images Based on Color Image Segmentation

    Directory of Open Access Journals (Sweden)

    Praveen Agarwal

    2017-06-01

    Full Text Available Image segmentation is a fundamental approach in the field of image processing, and its use depends on the user's application. This paper proposes an original and simple segmentation strategy based on the EM approach that resolves many of the informatics problems posed by hyperspectral images observed by airborne sensors. In a first step, the input color textured image is simplified into a color image without texture. The final segmentation is then achieved by a spatial color segmentation using a feature vector built from the set of color values contained around each pixel to be classified. The spatial constraint takes into account the inherent spatial relationships of any image and its color. This approach provides an effective PSNR for the segmented image. The segmented images compare favorably with those of the Watershed and Region Growing algorithms and provide effective segmentation for spectral and medical images.

  1. Understanding the conductive channel evolution in Na:WO(3-x)-based planar devices.

    Science.gov (United States)

    Shang, Dashan; Li, Peining; Wang, Tao; Carria, Egidio; Sun, Jirong; Shen, Baogen; Taubner, Thomas; Valov, Ilia; Waser, Rainer; Wuttig, Matthias

    2015-04-14

    An ion migration process in a solid electrolyte is important for ion-based functional devices, such as fuel cells, batteries, electrochromics, gas sensors, and resistive switching systems. In this study, a planar sandwich structure is prepared by depositing tungsten oxide (WO(3-x)) films on a soda-lime glass substrate, from which Na(+) diffuses into the WO(3-x) films during the deposition. The entire process of Na(+) migration driven by an alternating electric field is visualized in the Na-doped WO(3-x) films in the form of a conductive channel by in situ optical imaging combined with infrared spectroscopy and near-field imaging techniques. A reversible change of geometry between a parabolic and a bar channel is observed with the resistance change of the devices. The peculiar channel evolution is interpreted by a thermal-stress-induced mechanical deformation of the films and an asymmetric Na(+) mobility between the parabolic and the bar channels. These results exemplify a typical ion migration process driven by an alternating electric field in a solid electrolyte with a low ion mobility and are expected to be beneficial for improving the controllability of ion migration in ion-based functional devices, such as resistive switching devices.

  2. A REGION-BASED MULTI-SCALE APPROACH FOR OBJECT-BASED IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    T. Kavzoglu

    2016-06-01

    Full Text Available Within the last two decades, object-based image analysis (OBIA), which considers objects (i.e. groups of pixels) instead of pixels, has gained popularity and attracted increasing interest. The most important stage of OBIA is image segmentation, which groups spectrally similar adjacent pixels considering not only spectral but also spatial and textural features. Although there are several parameters (scale, shape, compactness and band weights) to be set by the analyst, the scale parameter stands out as the most important parameter in the segmentation process. Estimating the optimal scale parameter is crucial for increasing the classification accuracy, and depends on image resolution, image object size and the characteristics of the study area. In this study, two scale-selection strategies were implemented in the image segmentation process using a pan-sharpened QuickBird-2 image. The first strategy estimates optimal scale parameters for eight sub-regions. For this purpose, the local variance/rate of change (LV-RoC) graphs produced by the ESP-2 tool were analysed to determine fine, moderate and coarse scales for each region. In the second strategy, the image was segmented using the three candidate scale values (fine, moderate, coarse) determined from the LV-RoC graph calculated for the whole image. The nearest neighbour classifier was applied in all segmentation experiments, and an equal number of pixels was randomly selected to calculate accuracy metrics (overall accuracy and kappa coefficient). A comparison of region-based and image-based segmentation was carried out on the classified images, and it was found that region-based multi-scale OBIA produced significantly more accurate results than image-based single-scale OBIA. The difference in classification accuracy reached 10% in terms of overall accuracy.
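
    The local-variance/rate-of-change bookkeeping behind ESP-style scale selection can be sketched; here segmentation at each scale is stubbed with simple block partitioning, so the numbers only illustrate how LV and RoC are computed, not the ESP-2 tool itself:

```python
import numpy as np

def local_variance(img, block):
    """Mean within-block variance, standing in for the mean object-level
    variance a real segmentation would produce at one scale."""
    h = (img.shape[0] // block) * block
    w = (img.shape[1] // block) * block
    blocks = img[:h, :w].reshape(h // block, block, w // block, block)
    return float(blocks.var(axis=(1, 3)).mean())

def lv_roc(img, scales):
    """Local variance (LV) per scale and its rate of change (RoC, %)."""
    lv = [local_variance(img, s) for s in scales]
    roc = [100.0 * (b - a) / a for a, b in zip(lv, lv[1:])]
    return lv, roc

rng = np.random.default_rng(0)
image = rng.normal(0.0, 1.0, size=(64, 64))
lv, roc = lv_roc(image, [2, 4, 8])
```

    Scales where the RoC curve peaks are the candidate fine, moderate, and coarse segmentation scales.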

  3. CREATING OF BARCODES FOR FACIAL IMAGES BASED ON INTENSITY GRADIENTS

    Directory of Open Access Journals (Sweden)

    G. A. Kukharev

    2014-05-01

    Full Text Available The paper provides an analysis of existing approaches to barcode generation and describes the structure of a system for generating barcodes from facial images. A method for generating standard-type linear barcodes from facial images is proposed. The method is based on differences of intensity gradients, which represent the images in the form of initial features. These features are then averaged into a limited number of intervals, the results are quantized into decimal digits from 0 to 9, and a table conversion into the standard barcode is performed. Testing was conducted on the Face94 database and on a database of composite faces of different ages. It showed that the proposed method ensures the stability of the generated barcodes under changes of scale, pose and mirroring of the facial images, as well as changes of facial expression and shadows on faces from local lighting. The proposed solutions are computationally low-cost and do not require specialized image processing software for generating facial barcodes in real-time systems.
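
    The gradients → interval-averaging → digit-quantization pipeline described above can be sketched as follows; the digit count and the particular gradient used are illustrative, and the final table conversion to a standard barcode symbology is omitted:

```python
import numpy as np

def face_to_digits(img, n_digits=13):
    """Reduce an image to decimal digits: horizontal intensity-gradient
    magnitudes -> column profile -> averaging into n_digits intervals ->
    quantization of each interval mean to a digit 0..9."""
    g = np.abs(np.diff(np.asarray(img, dtype=np.float64), axis=1))
    profile = g.mean(axis=0)
    means = np.array([c.mean() for c in np.array_split(profile, n_digits)])
    spread = means.max() - means.min()
    if spread == 0.0:
        return [0] * n_digits
    return np.floor((means - means.min()) / spread * 9.999).astype(int).tolist()

rng = np.random.default_rng(2)
face = rng.integers(0, 256, size=(32, 32))
digits = face_to_digits(face)
```

    Because the digits derive from coarse averages of gradient structure rather than raw intensities, small changes in scale, pose, or lighting leave them largely unchanged, which is the stability property the paper reports.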

  4. Probability-Based Recognition Framework for Underwater Landmarks Using Sonar Images †.

    Science.gov (United States)

    Lee, Yeongjun; Choi, Jinwoo; Ko, Nak Yong; Choi, Hyun-Taek

    2017-08-24

    This paper proposes a probability-based framework for recognizing underwater landmarks using sonar images. Current recognition methods use a single image, which does not provide reliable results because of weaknesses of the sonar image such as an unstable acoustic source, heavy speckle noise, low image resolution, a single-channel image, and so on. However, using consecutive sonar images, if the status, i.e., the existence and identity (or name), of an object is continuously evaluated by a stochastic method, the result of the recognition method is available for calculating the uncertainty, and it is more suitable for various applications. Our proposed framework consists of three steps: (1) candidate selection, (2) continuity evaluation, and (3) Bayesian feature estimation. Two probability methods, particle filtering and Bayesian feature estimation, are used to repeatedly estimate the continuity and features of objects in consecutive images. Thus, the status of the object is repeatedly predicted and updated by a stochastic method. Furthermore, we develop an artificial landmark to increase detectability by an imaging sonar, exploiting the characteristics of acoustic waves, such as their instability and reflection depending on the roughness of the reflector surface. The proposed method is verified by basin experiments, and the results are presented.
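
    The per-frame stochastic update of an object's existence status can be illustrated with a binary Bayes filter over consecutive detections; the detection and false-alarm rates below are illustrative, and the paper's actual machinery (particle filtering plus Bayesian feature estimation) is richer than this sketch:

```python
def update_existence(prior, detected, p_det=0.8, p_fa=0.1):
    """One binary Bayes update of P(object exists) given a detection
    outcome, with detection probability p_det and false-alarm rate p_fa."""
    if detected:
        num = p_det * prior
        den = num + p_fa * (1.0 - prior)
    else:
        num = (1.0 - p_det) * prior
        den = num + (1.0 - p_fa) * (1.0 - prior)
    return num / den

p = 0.5  # uninformative prior before the first sonar frame
for det in [True, True, False, True]:
    p = update_existence(p, det)
```

    A single missed detection lowers the belief but does not discard the landmark, which is exactly the robustness single-image methods lack.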

  5. 76 FR 42395 - Business Conduct Standards for Security-Based Swap Dealers and Major Security-Based Swap...

    Science.gov (United States)

    2011-07-18

    ... Business Conduct Standards for Security-Based Swap Dealers and Major Security-Based Swap Participants...-11] RIN 3235-AL10 Business Conduct Standards for Security-Based Swap Dealers and Major Security-Based...'') relating to external business conduct standards for security-based swap dealers (``SBS Dealers'') and major...

  6. Measurable realistic image-based 3D mapping

    Science.gov (United States)

    Liu, W.; Wang, J.; Wang, J. J.; Ding, W.; Almagbile, A.

    2011-12-01

    Maps with 3D visual models are becoming a remarkable feature of 3D map services. High-resolution image data is obtained for the construction of 3D visualized models. The 3D map not only provides the capabilities of 3D measurement and knowledge mining, but also provides the virtual experience of places of interest, as demonstrated in Google Earth. Applications of 3D maps are expanding into the areas of architecture, property management, and urban environment monitoring. However, the reconstruction of high-quality 3D models is time consuming, and requires robust hardware and powerful software to handle the enormous amount of data. This is especially true for the automatic implementation of 3D models and the representation of complicated surfaces, which still need improvements in visualisation techniques. The shortcoming of 3D model-based maps is the limitation of detailed coverage, since a user can only view and measure objects that are already modelled in the virtual environment. This paper proposes and demonstrates a 3D map concept that is realistic and image-based, and that enables geometric measurements and geo-location services. Additionally, image-based 3D maps provide more detailed information about the real world than 3D model-based maps. Image-based 3D maps use geo-referenced stereo images or panoramic images. The geometric relationships between objects in the images can be resolved from the geometric model of the stereo images. The panoramic function makes 3D maps more interactive with users and also creates an interesting immersive circumstance. Actually, unmeasurable image-based 3D maps already exist, such as Google Street View, but they only provide virtual experiences in the form of photos; the topographic and terrain attributes, such as shapes and heights, are omitted.
This paper also discusses the potential for using a low-cost land Mobile Mapping System (MMS) to implement realistic image 3D mapping, and evaluates the positioning accuracy that a measureable

  7. ImageSURF: An ImageJ Plugin for Batch Pixel-Based Image Segmentation Using Random Forests

    Directory of Open Access Journals (Sweden)

    Aidan O'Mara

    2017-11-01

    Full Text Available Image segmentation is a necessary step in automated quantitative imaging. ImageSURF is a macro-compatible ImageJ2/FIJI plugin for pixel-based image segmentation that considers a range of image derivatives to train pixel classifiers which are then applied to image sets of any size to produce segmentations without bias in a consistent, transparent and reproducible manner. The plugin is available from ImageJ update site http://sites.imagej.net/ImageSURF/ and source code from https://github.com/omaraa/ImageSURF. Funding statement: This research was supported by an Australian Government Research Training Program Scholarship.

  8. Emotional textile image classification based on cross-domain convolutional sparse autoencoders with feature selection

    Science.gov (United States)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin

    2017-01-01

    We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis for textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large size images. First, we randomly collect image patches on an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between different weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and send these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in the cross-validation experiments corresponding to eight emotional categories and performs better than conventional methods. Feature selection can reduce the computational cost of global feature extraction by about 50% while improving classification performance.
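
    The correlation-based selection over autoencoder weight vectors can be sketched greedily: keep a weight vector only if its absolute correlation with every vector kept so far stays below a threshold (the threshold and the toy weight matrix are illustrative, not the paper's values):

```python
import numpy as np

def select_uncorrelated(weights, threshold=0.9):
    """Greedily keep weight vectors (rows) whose absolute Pearson
    correlation with every previously kept row is below threshold."""
    kept = []
    for i, w in enumerate(weights):
        if all(abs(np.corrcoef(w, weights[j])[0, 1]) < threshold
               for j in kept):
            kept.append(i)
    return kept

W = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 6.0, 8.0],     # perfectly correlated with row 0
              [4.0, 3.0, 2.0, 1.0],     # perfectly anti-correlated with row 0
              [1.0, -1.0, 1.0, -1.0]])  # weakly correlated with row 0
selected = select_uncorrelated(W)
```

    Dropping redundant hidden units in this way is what cuts the cost of the subsequent convolutional feature extraction roughly in half.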

  9. Evaluation of Soft Tissue Sarcoma Tumors Electrical Conductivity Anisotropy Using Diffusion Tensor Imaging for Numerical Modeling on Electroporation

    Directory of Open Access Journals (Sweden)

    Ghazikhanlou-sani K.

    2016-06-01

    Full Text Available Introduction: There are many ways to assess the electrical conductivity anisotropy of a tumor. Applying tissue electrical conductivity anisotropy values is crucial in numerical modeling of the electric and thermal field distribution in electroporation treatments. This study aims to calculate tissue electrical conductivity anisotropy in patients with sarcoma tumors using the diffusion tensor imaging technique. Materials and Method: A total of 3 subjects were involved in this study. All patients had clinically apparent sarcoma tumors at the extremities. T1, T2 and DTI images were acquired using a 3-Tesla multi-coil, multi-channel MRI system. The fractional anisotropy (FA) maps were computed from the DTI images using the FSL (FMRIB Software Library) software. The 3D matrix of the FA maps of each area (tumor, normal soft tissue and bone/s) was reconstructed, and the anisotropy matrix was calculated from the FA values. Result: The mean FA values along the main axis of the sarcoma tumors ranged between 0.475 and 0.690. Under the assumption of isotropic electrical conductivity, the FA value of the electrical conductivity along each of the X, Y and Z coordinate axes would be equal to 0.577. The results showed a mean error band of 20% in electrical conductivity if the electrical conductivity anisotropy is not included in the calculations. The comparison of FA values showed a statistically significant difference between the mean FA values of tumor and normal soft tissue (P<0.05). Conclusion: DTI is a feasible technique for assessing the electrical conductivity anisotropy of tissues. Quantifying tissue electrical conductivity anisotropy data is crucial for numerical modeling of electroporation treatments.
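
    The fractional anisotropy used throughout is computed directly from the three diffusion tensor eigenvalues; a minimal sketch of the standard formula:

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """FA from the three tensor eigenvalues: 0 for isotropic tensors,
    approaching 1 when one eigenvalue dominates."""
    m = (l1 + l2 + l3) / 3.0
    num = (l1 - m) ** 2 + (l2 - m) ** 2 + (l3 - m) ** 2
    den = l1 * l1 + l2 * l2 + l3 * l3
    if den == 0.0:
        return 0.0
    return math.sqrt(1.5 * num / den)
```

    Since the conductivity tensor is assumed to share the diffusion tensor's eigenvectors, FA computed this way carries over directly to the conductivity anisotropy used in the electroporation model.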

  10. An overview of medical image data base

    International Nuclear Information System (INIS)

    Nishihara, Eitaro

    1992-01-01

    Recently, systematization using computers in medical institutions has advanced, and the introduction of hospital information systems has been almost completed in large hospitals with more than 500 beds. However, hospital information systems manage text information only, and do not include the management of the enormous quantity of images. With the progress of image diagnostic equipment, the digitization of medical images has advanced, but image management in hospitals does not exploit the merits of digital images. To solve these problems, the picture archiving and communication system (PACS) was proposed about ten years ago; it turns medical images into a database and enables on-line access to images from various places in a hospital. Studies to realize it have continued. The features of medical image data, the present status of the utilization of medical image data, the outline of the PACS, the image database for the PACS, the problems in realizing the database and the relevant technical trends, and the state of actual construction of PACS installations are reported. (K.I.)

  11. Quantum Image Encryption Algorithm Based on Image Correlation Decomposition

    Science.gov (United States)

    Hua, Tianxiang; Chen, Jiamin; Pei, Dongju; Zhang, Wenquan; Zhou, Nanrun

    2015-02-01

    A novel quantum gray-level image encryption and decryption algorithm based on image correlation decomposition is proposed. The correlation among image pixels is established by utilizing the superposition and measurement principle of quantum states. And a whole quantum image is divided into a series of sub-images. These sub-images are stored into a complete binary tree array constructed previously and then randomly performed by one of the operations of quantum random-phase gate, quantum revolving gate and Hadamard transform. The encrypted image can be obtained by superimposing the resulting sub-images with the superposition principle of quantum states. For the encryption algorithm, the keys are the parameters of random phase gate, rotation angle, binary sequence and orthonormal basis states. The security and the computational complexity of the proposed algorithm are analyzed. The proposed encryption algorithm can resist brute force attack due to its very large key space and has lower computational complexity than its classical counterparts.

  12. Super-Resolution of Plant Disease Images for the Acceleration of Image-based Phenotyping and Vigor Diagnosis in Agriculture

    Directory of Open Access Journals (Sweden)

    Kyosuke Yamamoto

    2017-11-01

    Full Text Available Unmanned aerial vehicles (UAVs or drones are a very promising branch of technology, and they have been utilized in agriculture—in cooperation with image processing technologies—for phenotyping and vigor diagnosis. One of the problems in the utilization of UAVs for agricultural purposes is the limitation in flight time. It is necessary to fly at a high altitude to capture the maximum number of plants in the limited time available, but this reduces the spatial resolution of the captured images. In this study, we applied a super-resolution method to the low-resolution images of tomato diseases to recover detailed appearances, such as lesions on plant organs. We also conducted disease classification using high-resolution, low-resolution, and super-resolution images to evaluate the effectiveness of super-resolution methods in disease classification. Our results indicated that the super-resolution method outperformed conventional image scaling methods in spatial resolution enhancement of tomato disease images. The results of disease classification showed that the accuracy attained was also better by a large margin with super-resolution images than with low-resolution images. These results indicated that our approach not only recovered the information lost in low-resolution images, but also exerted a beneficial influence on further image analysis. The proposed approach will accelerate image-based phenotyping and vigor diagnosis in the field, because it not only saves time to capture images of a crop in a cultivation field but also secures the accuracy of these images for further analysis.

  13. Quantum Image Steganography and Steganalysis Based On LSQu-Blocks Image Information Concealing Algorithm

    Science.gov (United States)

    A. AL-Salhi, Yahya E.; Lu, Songfeng

    2016-08-01

    Quantum steganography can solve some problems that are considered inefficient in classical image information concealing, and research on quantum image information concealing has been widely pursued in recent years. Quantum image information concealing can be categorized into quantum image digital blocking, quantum image steganography, anonymity, and other branches. Least significant bit (LSB) information concealing plays a vital role in the classical world because many image information concealing algorithms are designed based on it. Firstly, based on the novel enhanced quantum representation (NEQR) and the clustering of uniform image blocks, a least significant Qu-block (LSQB) information concealing algorithm for quantum image steganography is presented. Secondly, a clustering algorithm is proposed to optimize the concealment of important data. Finally, the Con-Steg algorithm is used to conceal the clustered image blocks. Since information concealing in the Fourier domain of an image can improve the security of the hidden information, we further discuss a Fourier-domain LSQu-block information concealing algorithm for quantum images based on the quantum Fourier transform. In our algorithms, the corresponding unitary transformations are designed to conceal the secret information in the least significant Qu-block representing the color of the quantum cover image. Finally, the procedures for extracting the secret information are illustrated. The quantum LSQu-block image information concealing algorithm can be applied in many fields according to different needs.
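    The classical least-significant-bit embedding that the LSQu-block scheme generalizes can be sketched in a few lines; the quantum version replaces these bitwise operations with unitary transformations on NEQR color qubits. The cover array and message below are arbitrary examples.

```python
import numpy as np

def embed_lsb(cover, bits):
    """Hide a bit string in the least significant bit of successive pixels."""
    flat = cover.flatten()
    assert len(bits) <= flat.size
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(b)   # clear LSB, then set it to the bit
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    """Read the hidden bits back from the pixel LSBs."""
    return ''.join(str(p & 1) for p in stego.flatten()[:n_bits])

cover = np.arange(64, dtype=np.uint8).reshape(8, 8)
secret = '10110010'
stego = embed_lsb(cover, secret)
assert extract_lsb(stego, len(secret)) == secret
# LSB embedding changes each carrier pixel by at most 1 gray level
assert int(np.max(np.abs(stego.astype(int) - cover.astype(int)))) <= 1
```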

  14. Imperceptible reversible watermarking of radiographic images based on quantum noise masking.

    Science.gov (United States)

    Pan, Wei; Bouslimi, Dalel; Karasad, Mohamed; Cozic, Michel; Coatrieux, Gouenou

    2018-07-01

    Advances in information and communication technologies boost the sharing and remote access to medical images. Along with this evolution, needs in terms of data security are also increased. Watermarking can contribute to better protect images by dissimulating into their pixels some security attributes (e.g., digital signature, user identifier). But, to take full advantage of this technology in healthcare, one key problem to address is to ensure that the image distortion induced by the watermarking process does not endanger the image diagnosis value. To overcome this issue, reversible watermarking is one solution. It allows watermark removal with the exact recovery of the image. Unfortunately, reversibility does not mean that imperceptibility constraints are relaxed. Indeed, once the watermark is removed, the image is unprotected. It is thus important to ensure the invisibility of the reversible watermark in order to ensure permanent image protection. We propose a new fragile reversible watermarking scheme for digital radiographic images, the main originality of which lies in masking a reversible watermark within the image quantum noise (the dominant noise in radiographic images). More precisely, in order to ensure watermark imperceptibility, our scheme differentiates the image black background, where message embedding is conducted on pixel gray values with the well-known histogram shifting (HS) modulation, from the anatomical object, where HS is applied to wavelet detail coefficients, masking the watermark with the image quantum noise. In order to keep the watermark embedder and reader synchronized in terms of image partitioning and insertion domain, our scheme makes use of different classification processes that are invariant to message embedding. We provide the theoretical performance limits of our scheme in the image quantum noise in terms of image distortion and message size (i.e., capacity). Experiments were conducted on more than 800 12-bit radiographic images.
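    The histogram-shifting (HS) modulation that the scheme applies to gray values and wavelet coefficients is a standard reversible technique and can be sketched as follows: shift the histogram bins strictly between the peak bin P and an empty bin Z to free the slot P+1, then encode each peak pixel as P (bit 0) or P+1 (bit 1). The toy image and message below are invented for the sketch, and real schemes must also handle the absence of a true zero bin (e.g. with a location map).

```python
import numpy as np

def hs_embed(img, bits):
    """Histogram-shifting embedding: shift bins strictly between the peak
    bin P and a zero bin Z up by one to free slot P+1, then code each
    peak pixel as P (bit '0') or P+1 (bit '1').  Fully reversible."""
    hist = np.bincount(img.ravel(), minlength=256)
    P = int(hist.argmax())
    Z = P + 1 + int(hist[P + 1:].argmin())      # nearest zero bin above P
    assert hist[Z] == 0, "exact reversibility needs a true zero bin"
    out = img.astype(np.int32)
    out[(out > P) & (out < Z)] += 1             # make room at P+1
    flat = out.ravel()
    peaks = np.flatnonzero(img.ravel() == P)    # capacity = number of peak pixels
    assert len(bits) <= peaks.size
    for i, b in enumerate(bits):
        flat[peaks[i]] += int(b)
    return out.astype(np.uint8), P, Z

def hs_extract(stego, P, Z, n_bits):
    """Read the bits from the P / P+1 carriers, then undo the shift."""
    flat = stego.ravel().astype(np.int32)
    carriers = np.flatnonzero((flat == P) | (flat == P + 1))
    bits = ''.join(str(int(flat[c] == P + 1)) for c in carriers[:n_bits])
    flat[(flat > P) & (flat <= Z)] -= 1         # undo shift and bit additions
    return bits, flat.reshape(stego.shape).astype(np.uint8)

img = np.array([[100, 100, 100, 102],
                [102, 100, 104, 100],
                [100, 101, 102, 100],
                [101, 100, 100, 104]], dtype=np.uint8)
stego, P, Z = hs_embed(img, '101')
bits, restored = hs_extract(stego, P, Z, 3)
assert bits == '101' and np.array_equal(restored, img)   # exact recovery
```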

  15. Object recognition based on Google's reverse image search and image similarity

    Science.gov (United States)

    Horváth, András.

    2015-12-01

    Image classification is one of the most challenging tasks in computer vision, and a general multiclass classifier could solve many different tasks in image processing. Classification is usually done by shallow learning for predefined objects, which is a difficult task and very different from human vision; human vision is based on continuous learning of object classes, and it takes years to learn a large taxonomy of objects, which are neither disjoint nor independent. In this paper I present a system based on the Google image similarity algorithm and the Google image database, which can classify a large set of different objects in a human-like manner, identifying related classes and taxonomies.

  16. Image retrieval by information fusion based on scalable vocabulary tree and robust Hausdorff distance

    Science.gov (United States)

    Che, Chang; Yu, Xiaoyang; Sun, Xiaoming; Yu, Boyang

    2017-12-01

    In recent years, the Scalable Vocabulary Tree (SVT) has been shown to be effective in image retrieval. However, for general images, where the foreground is the object to be recognized while the background is cluttered, the performance of the current SVT framework is restricted. In this paper, a new image retrieval framework that incorporates a robust distance metric and information fusion is proposed, which improves the retrieval performance relative to the baseline SVT approach. First, the visual words that represent the background are diminished by using a robust Hausdorff distance between different images. Second, image matching results based on three image signature representations are fused, which enhances the retrieval precision. We conducted intensive experiments on small-scale to large-scale image datasets: Corel-9, Corel-48, and PKU-198, where the proposed Hausdorff metric and information fusion outperform the state-of-the-art methods by about 13%, 15%, and 15%, respectively.
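    A robust (partial) Hausdorff distance replaces the maximum over nearest-neighbour distances with a quantile, so that a few outlying background points cannot dominate the score. A minimal sketch over 2-D point sets (the sets and quantile are invented for illustration; the paper applies the idea to visual-word matching):

```python
import numpy as np

def partial_hausdorff(A, B, q=0.9):
    """Directed partial Hausdorff distance: the q-th quantile (instead of
    the max) of nearest-neighbour distances from A to B, which makes the
    metric robust to outlying (e.g. cluttered-background) points."""
    d = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
    nn = d.min(axis=1)               # nearest-neighbour distance per point of A
    return float(np.quantile(nn, q))

A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [50.0, 50.0]])  # one outlier
B = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
assert partial_hausdorff(A, B, q=1.0) > 60      # classic Hausdorff is blown up
assert partial_hausdorff(A, B, q=0.6) < 1.0     # partial version ignores it
```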

  17. Histogram-based quantitative evaluation of endobronchial ultrasonography images of peripheral pulmonary lesion.

    Science.gov (United States)

    Morikawa, Kei; Kurimoto, Noriaki; Inoue, Takeo; Mineshita, Masamichi; Miyazawa, Teruomi

    2015-01-01

    Endobronchial ultrasonography using a guide sheath (EBUS-GS) is an increasingly common bronchoscopic technique, but currently no methods have been established to quantitatively evaluate EBUS images of peripheral pulmonary lesions. The purpose of this study was to evaluate whether histogram data collected from EBUS-GS images can contribute to the diagnosis of lung cancer. Histogram-based analyses focusing on the brightness of EBUS images were retrospectively conducted: 60 patients (38 lung cancer; 22 inflammatory diseases) with clear EBUS images were included. For each patient, a 400-pixel region of interest, typically located at a 3- to 5-mm radius from the probe, was selected from EBUS images recorded during bronchoscopy. Histogram height, width, height/width ratio, standard deviation, kurtosis and skewness were investigated as diagnostic indicators. Median histogram height, width, height/width ratio and standard deviation were significantly different between lung cancer and benign lesions (all p < 0.05), with the clearest separation obtained for histogram standard deviation. Histogram standard deviation appears to be the most useful characteristic for diagnosing lung cancer using EBUS images. © 2015 S. Karger AG, Basel.
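    The six indicators can be computed directly from the brightness values of a region of interest. The sketch below uses a synthetic 400-pixel ROI; the bin count and the synthetic distribution are invented for illustration, not taken from the study.

```python
import numpy as np

def histogram_indicators(roi, bins=64):
    """The six indicators studied: histogram height (modal count), width
    (spread of occupied bins), height/width ratio, standard deviation,
    skewness and (excess) kurtosis of the brightness values."""
    hist, _ = np.histogram(roi, bins=bins, range=(0, 256))
    occupied = np.flatnonzero(hist)
    height = int(hist.max())
    width = int(occupied[-1] - occupied[0] + 1)
    x = np.asarray(roi, dtype=float).ravel()
    mu, sd = x.mean(), x.std()
    skew = ((x - mu) ** 3).mean() / sd ** 3
    kurt = ((x - mu) ** 4).mean() / sd ** 4 - 3.0
    return {'height': height, 'width': width, 'h_w_ratio': height / width,
            'std': sd, 'skewness': skew, 'kurtosis': kurt}

rng = np.random.default_rng(0)
roi = np.clip(rng.normal(128, 20, 400), 0, 255)   # 400-pixel ROI, as in the study
ind = histogram_indicators(roi)
assert ind['std'] < 30 and abs(ind['skewness']) < 1
```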

  18. Novel fiber optic-based needle redox imager for cancer diagnosis

    Science.gov (United States)

    Kanniyappan, Udayakumar; Xu, He N.; Tang, Qinggong; Gaitan, Brandon; Liu, Yi; Li, Lin Z.; Chen, Yu

    2018-02-01

    Despite various technological advancements in cancer diagnosis, mortality rates have not decreased significantly. We aim to develop a novel optical imaging tool to assist cancer diagnosis effectively. Fluorescence spectroscopy/imaging is a fast and minimally invasive technique which has been successfully applied to diagnosing cancerous cells/tissues. Recently, ratiometric imaging of the intrinsic fluorescence of reduced nicotinamide adenine dinucleotide (NADH) and flavin adenine dinucleotide (FAD), as pioneered by Britton Chance and co-workers in the 1950s-70s, has gained much attention as a way to quantify the physiological parameters of living cells/tissues. The redox ratio, i.e., FAD/(FAD+NADH) or FAD/NADH, has been shown to be sensitive to various metabolic changes in cells/tissues in vivo and in vitro. Optical redox imaging has also been investigated as a source of potential imaging biomarkers for cancer transformation, aggressiveness, and treatment response. Towards this goal, we have designed and developed a novel fiber optic-based needle redox imager (NRI) that can fit into an 11G clinical coaxial biopsy needle for real-time imaging during clinical cancer surgery. In the present study, the device is calibrated with tissue-mimicking phantoms of FAD and NADH, along with various technical parameters such as sensitivity, dynamic range, linearity, and spatial resolution of the system. We also conducted preliminary imaging of tissues ex vivo for validation. We plan to test the NRI on clinical breast cancer patients. Once validated, this device may provide an effective tool for clinical cancer diagnosis.
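    Given co-registered FAD and NADH fluorescence intensity maps, the redox ratio defined in the abstract is a pixelwise division; the small epsilon below is an assumption added to guard against empty pixels, and the toy intensity values are invented.

```python
import numpy as np

def redox_ratio(fad, nadh, eps=1e-9):
    """Pixelwise optical redox ratio FAD / (FAD + NADH)."""
    fad = np.asarray(fad, dtype=float)
    nadh = np.asarray(nadh, dtype=float)
    return fad / (fad + nadh + eps)   # eps avoids division by zero in dark pixels

fad = np.array([[10.0, 20.0], [30.0, 0.0]])
nadh = np.array([[10.0, 60.0], [10.0, 0.0]])
rr = redox_ratio(fad, nadh)
assert np.isclose(rr[0, 0], 0.5)    # equal FAD and NADH signal
assert np.isclose(rr[0, 1], 0.25)   # NADH-dominated pixel
```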

  19. A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF.

    Directory of Open Access Journals (Sweden)

    Nouman Ali

    Full Text Available With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of the Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local feature representations are selected for image retrieval because SIFT is more robust to changes in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on the Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration.
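    One common way to realize such a visual words integration is to quantize each detector's descriptors against its own vocabulary and concatenate the resulting bag-of-words histograms. The sketch below uses random vectors as stand-ins for real SIFT/SURF descriptors and learned k-means centroids; the vocabulary sizes are invented, while the descriptor dimensions (128 for SIFT, 64 for SURF) follow the usual conventions.

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """Assign each local descriptor to its nearest visual word and
    return the normalized word-frequency histogram."""
    d = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
vocab_sift = rng.normal(size=(8, 128))   # toy SIFT vocabulary (8 visual words)
vocab_surf = rng.normal(size=(8, 64))    # toy SURF vocabulary
sift_desc = rng.normal(size=(40, 128))   # stand-ins for real detector output
surf_desc = rng.normal(size=(25, 64))

# "Visual words integration": concatenate the two per-detector histograms
signature = np.concatenate([bow_histogram(sift_desc, vocab_sift),
                            bow_histogram(surf_desc, vocab_surf)])
assert signature.shape == (16,)
```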

  20. Fluid region segmentation in OCT images based on convolution neural network

    Science.gov (United States)

    Liu, Dong; Liu, Xiaoming; Fu, Tianyu; Yang, Zhou

    2017-07-01

    In retinal images, the characteristics of fluid regions have great significance for the diagnosis of eye disease. In the clinic, segmentation of fluid is usually conducted manually, which is time-consuming, and the accuracy depends highly on the expert's experience. In this paper, we propose a segmentation method based on a convolutional neural network (CNN) for segmenting fluid from fundus images. The B-scans of OCT are segmented into layers, and patches from specific regions with annotations are used for training. After the data set is divided into a training set and a test set, network training is performed and a good segmentation result is obtained, with a significant advantage over traditional methods such as thresholding.

  1. Understanding images using knowledge based approach

    International Nuclear Information System (INIS)

    Tascini, G.

    1985-01-01

    This paper presents an approach to image understanding focusing on low-level image processing and proposes a rule-based approach as part of a larger knowledge-based system. The general system has a hierarchical structure that comprises several knowledge-based layers. The main idea is to confine the domain-independent knowledge to the lower levels and to reserve the higher levels for the domain-dependent knowledge, that is, for the interpretation

  2. Image Based Rendering and Virtual Reality

    DEFF Research Database (Denmark)

    Livatino, Salvatore

    The presentation gives an overview of Image Based Rendering approaches and their use in Virtual Reality, including Virtual Photography and Cinematography, and Mobile Robot Navigation.

  3. An Integrative Object-Based Image Analysis Workflow for Uav Images

    Science.gov (United States)

    Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong

    2016-06-01

    In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of the geometric and radiometric corrections, subsequent panoramic mosaicking and hierarchical image segmentation for later Object Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm after the geometric calibration and radiometric correction, which employs fast feature extraction and matching by combining the local difference binary descriptor and locality-sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts from an initial partition obtained by an over-segmentation algorithm, i.e., simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the super-pixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing the post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of our proposed method.

  4. AN INTEGRATIVE OBJECT-BASED IMAGE ANALYSIS WORKFLOW FOR UAV IMAGES

    Directory of Open Access Journals (Sweden)

    H. Yu

    2016-06-01

    Full Text Available In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of the geometric and radiometric corrections, subsequent panoramic mosaicking and hierarchical image segmentation for later Object Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm after the geometric calibration and radiometric correction, which employs fast feature extraction and matching by combining the local difference binary descriptor and locality-sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts from an initial partition obtained by an over-segmentation algorithm, i.e., simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the super-pixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing the post-seismic UAV images of the 2013 Ya’an earthquake demonstrate the effectiveness and efficiency of our proposed method.

  5. Composite Techniques Based Color Image Compression

    Directory of Open Access Journals (Sweden)

    Zainab Ibrahim Abood

    2017-03-01

    Full Text Available Compression of color images is now necessary for transmission and storage in databases. Since color gives a pleasing and natural appearance to any object, three composite-technique-based color image compression schemes are implemented to achieve images with high compression, no loss relative to the original image, better performance and good image quality. These techniques are the composite stationary wavelet technique (S), the composite wavelet technique (W) and the composite multi-wavelet technique (M). For the high-energy sub-band of the 3rd level of each composite transform in each technique, the compression parameters are calculated. The best composite transform among the 27 types is the three-level multi-wavelet transform (MMM) in the M technique, which has the highest values of energy (En) and compression ratio (CR) and the lowest values of bits per pixel (bpp), time (T) and rate distortion R(D). Also, the values of the compression parameters of the color image are nearly the same as the average values of the compression parameters of the three bands of the same image.

  6. Developing stereo image based robot control system

    Energy Technology Data Exchange (ETDEWEB)

    Suprijadi; Pambudi, I. R.; Woran, M.; Naa, C. F.; Srigutomo, W. [Department of Physics, FMIPA, Institut Teknologi Bandung, Jl. Ganesha No. 10, Bandung 40132, Indonesia; supri@fi.itb.ac.id (Indonesia)]

    2015-04-16

    Applications of image processing have been developed in various fields and for various purposes. In the last decade, image-based systems have advanced rapidly with the increasing performance of hardware and microprocessors. Many fields of science and technology use these methods, especially medicine and instrumentation. New stereovision techniques that produce a 3-dimensional image or movie are very interesting, but there are not many applications in control systems. A stereo image carries pixel disparity information that does not exist in a single image. In this research, we propose a new method for a wheeled robot control system using stereovision. The result shows that the robot moves automatically based on stereovision captures.

  7. Linearized image reconstruction method for ultrasound modulated electrical impedance tomography based on power density distribution

    International Nuclear Information System (INIS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2017-01-01

    Electrical resistance tomography (ERT) is a promising measurement technique with important industrial and clinical applications. However, with limited effective measurements, it suffers from poor spatial resolution due to the ill-posedness of the inverse problem. Recently, there has been increasing research interest in hybrid imaging techniques that utilize couplings of physical modalities, because these techniques obtain much more effective measurement information and promise high resolution. Ultrasound modulated electrical impedance tomography (UMEIT) is one of the newly developed hybrid imaging techniques, which combines the electric and acoustic modalities. A linearized image reconstruction method based on power density is proposed for UMEIT. The interior data, the power density distribution, is adopted to reconstruct the conductivity distribution with the proposed image reconstruction method. At the same time, by relating the power density change to the change in conductivity, the Jacobian matrix is employed to turn the nonlinear problem into a linear one. The analytic formulation of this Jacobian matrix is derived and its effectiveness is also verified. In addition, different excitation patterns are tested and analyzed, and opposite excitation provides the best performance with the proposed method. Multiple power density distributions are also combined to implement image reconstruction. Finally, image reconstruction is implemented with the linear back-projection (LBP) algorithm. Compared with ERT, with the proposed image reconstruction method UMEIT can produce reconstructed images with higher quality and better quantitative evaluation results. (paper)
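    Once a Jacobian J relating a conductivity perturbation δσ to the power-density change δp is available, the linearized problem δp = J δσ can be inverted with any regularized linear solver. The paper uses linear back-projection; the sketch below uses a Tikhonov-regularized least-squares solve on a random stand-in Jacobian, purely to illustrate the linearized formulation (all sizes and values are invented).

```python
import numpy as np

rng = np.random.default_rng(7)
n_meas, n_pix = 60, 40
J = rng.normal(size=(n_meas, n_pix))     # stand-in Jacobian d(power)/d(sigma)
true_dsigma = np.zeros(n_pix)
true_dsigma[12:18] = 0.5                 # a localized conductivity perturbation
dp = J @ true_dsigma                     # linearized power-density change

# Tikhonov-regularized inversion of dp = J @ dsigma
lam = 1e-3
dsigma = np.linalg.solve(J.T @ J + lam * np.eye(n_pix), J.T @ dp)
rel_err = np.linalg.norm(dsigma - true_dsigma) / np.linalg.norm(true_dsigma)
assert rel_err < 0.05                    # near-exact recovery in the noiseless case
```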

  8. Medical Image Tamper Detection Based on Passive Image Authentication.

    Science.gov (United States)

    Ulutas, Guzin; Ustubioglu, Arda; Ustubioglu, Beste; V Nabiyev, Vasif; Ulutas, Mustafa

    2017-12-01

    Telemedicine has gained popularity in recent years. Medical images can be transferred over the Internet to enable telediagnosis between medical staff and to make the patient's history accessible to medical staff from anywhere. Therefore, integrity protection of the medical image is a serious concern due to the broadcast nature of the Internet. Some watermarking techniques have been proposed to control the integrity of medical images. However, they require embedding extra information (a watermark) into the image before transmission, which decreases the visual quality of the medical image and can cause false diagnosis. The proposed method uses a passive image authentication mechanism to detect tampered regions in medical images. Structural texture information is obtained from the medical image by using the rotation-invariant local binary pattern (LBPROT) to make the keypoint extraction techniques more successful. Keypoints on the texture image are obtained with the scale invariant feature transform (SIFT). The method then detects tampered regions by matching these keypoints. By applying LBPROT before keypoint extraction, the method improves on keypoint-based passive image authentication mechanisms (which fail to detect tampering when a smooth region is used to cover an object), because smooth regions also have texture information. Experimental results show that the method detects tampered regions in medical images even if the forged image has undergone attacks (Gaussian blurring/additive white Gaussian noise) or the forged regions are scaled/rotated before pasting.

  9. Automated pathologies detection in retina digital images based on complex continuous wavelet transform phase angles.

    Science.gov (United States)

    Lahmiri, Salim; Gargour, Christian S; Gabrea, Marcel

    2014-10-01

    An automated diagnosis system that uses the complex continuous wavelet transform (CWT) to process retina digital images and support vector machines (SVMs) for classification purposes is presented. In particular, each retina image is transformed into two one-dimensional signals by concatenating image rows and columns separately. The mathematical norm of the phase angles found in each one-dimensional signal at each level of CWT decomposition is relied on to characterise the texture of normal images against abnormal images affected by exudates, drusen and microaneurysms. The leave-one-out cross-validation method was adopted to conduct experiments, and the results from the SVM show that the proposed approach gives better results than those obtained by other methods based on the correct classification rate, sensitivity and specificity.
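    The per-level phase-angle features can be sketched with a hand-rolled complex Morlet CWT: convolve the one-dimensional signal with a complex wavelet at several scales, then take the L2 norm of the phase angles of the coefficients. The wavelet parameters, scales, and test signal below are invented for illustration and are not the paper's exact settings.

```python
import numpy as np

def morlet(n, scale, w0=5.0):
    """Complex Morlet wavelet sampled at n points for a given scale."""
    t = (np.arange(n) - n // 2) / scale
    return np.exp(1j * w0 * t) * np.exp(-t ** 2 / 2) / np.sqrt(scale)

def phase_norm_features(signal, scales):
    """For each decomposition level (scale), convolve with a complex
    Morlet wavelet and return the L2 norm of the coefficient phase angles."""
    feats = []
    for s in scales:
        kernel = morlet(min(10 * int(s), len(signal)), s)
        coeffs = np.convolve(signal, kernel, mode='same')
        feats.append(np.linalg.norm(np.angle(coeffs)))
    return np.array(feats)

# A 1-D signal obtained by concatenating image rows, as in the paper
rng = np.random.default_rng(3)
row_signal = np.sin(np.linspace(0, 40 * np.pi, 512)) + 0.1 * rng.normal(size=512)
feats = phase_norm_features(row_signal, scales=[2, 4, 8, 16])
assert feats.shape == (4,) and np.all(feats > 0)
```

    The resulting feature vector (one norm per scale) is what would be fed to the SVM classifier.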

  10. The influence of reduced graphene oxide on electrical conductivity of LiFePO4-based composite as cathode material

    International Nuclear Information System (INIS)

    Arifin, Muhammad; Aimon, Akfiny Hasdi; Winata, Toto; Abdullah, Mikrajuddin; Iskandar, Ferry

    2016-01-01

    LiFePO4 is a fascinating cathode active material for Li-ion battery applications because of its high electrochemical performance, such as a stable voltage at 3.45 V and a high specific capacity of 170 mAh·g⁻¹. However, its low intrinsic electronic conductivity and low ionic diffusion still hinder its further application in Li-ion batteries. Therefore, efforts to improve its conductivity are very important to elevate its prospects as a cathode material. Herein, we report the preparation of a LiFePO4-based composite with added reduced graphene oxide (rGO) via a hydrothermal method and the influence of rGO on the electrical conductivity of the composite, varying the mass of rGO in the composition. Vibrations of the LiFePO4-based composite were detected in the Fourier transform infrared spectroscopy (FTIR) spectra, while single-phase LiFePO4 nanocrystals were observed in the X-ray diffraction (XRD) pattern; furthermore, scanning electron microscopy (SEM) images showed that rGO was distributed around the LiFePO4-based composite. Finally, the 4-point probe measurements confirmed that the optimum electrical conductivity was obtained with an addition of 2 wt% rGO over the 1 to 2 wt% rGO range.
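    For a thin pellet measured with a collinear four-point probe, conductivity is commonly obtained from the sheet resistance via the geometric factor π/ln 2 ≈ 4.532. The formula below is the standard thin-sample expression, not a detail given in the abstract, and the numbers are invented.

```python
def four_point_conductivity(voltage, current, thickness_cm, k=4.532):
    """Conductivity (S/cm) of a thin sample from a collinear 4-point
    probe: sheet resistance Rs = k * V / I (k = pi/ln 2 ~= 4.532),
    then sigma = 1 / (Rs * thickness)."""
    sheet_resistance = k * voltage / current      # ohms per square
    return 1.0 / (sheet_resistance * thickness_cm)

# Example: 1 mV drop at 1 mA through a 1 mm (0.1 cm) thick pellet
sigma = four_point_conductivity(1.0e-3, 1.0e-3, 0.1)
assert abs(sigma - 1.0 / (4.532 * 0.1)) < 1e-9    # ~2.21 S/cm
```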

  11. Magneto-acousto-electrical Measurement Based Electrical Conductivity Reconstruction for Tissues.

    Science.gov (United States)

    Zhou, Yan; Ma, Qingyu; Guo, Gepu; Tu, Juan; Zhang, Dong

    2018-05-01

    Based on the interaction of ultrasonic excitation and magnetoelectrical induction, magneto-acousto-electrical (MAE) technology was demonstrated to have the capability of differentiating conductivity variations along the acoustic transmission. By applying the characteristics of the MAE voltage, a simplified algorithm of MAE measurement based conductivity reconstruction was developed. With the analyses of acoustic vibration, ultrasound propagation, Hall effect, and magnetoelectrical induction, theoretical and experimental studies of MAE measurement and conductivity reconstruction were performed. The formula of MAE voltage was derived and simplified for the transducer with strong directivity. MAE voltage was simulated for a three-layer gel phantom and the conductivity distribution was reconstructed using the modified Wiener inverse filter and Hilbert transform, which was also verified by experimental measurements. The experimental results are basically consistent with the simulations, and demonstrate that the wave packets of MAE voltage are generated at tissue interfaces with the amplitudes and vibration polarities representing the values and directions of conductivity variations. With the proposed algorithm, the amplitude and polarity of conductivity gradient can be restored and the conductivity distribution can also be reconstructed accurately. The favorable results demonstrate the feasibility of accurate conductivity reconstruction with improved spatial resolution using MAE measurement for tissues with conductivity variations, especially suitable for nondispersive tissues with abrupt conductivity changes. This study demonstrates that the MAE measurement based conductivity reconstruction algorithm can be applied as a new strategy for nondestructive real-time monitoring of conductivity variations in biomedical engineering.
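    The reconstruction chain described above (a modified Wiener inverse filter, then a Hilbert transform to recover the amplitude and polarity at tissue interfaces) can be sketched on a one-dimensional toy trace. The kernel, trace, and SNR constant below are invented; the measured voltage is simulated as the (circular) convolution of an interface-gradient trace with the system response.

```python
import numpy as np

def wiener_deconvolve(measured, kernel, snr=1e4):
    """Frequency-domain Wiener inverse filter: H* / (|H|^2 + 1/SNR)."""
    n = len(measured)
    H = np.fft.fft(kernel, n)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(np.fft.fft(measured) * G))

def envelope(x):
    """Signal envelope via an FFT-based analytic (Hilbert) transform."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

# Two conductivity interfaces: the gradient sign encodes the polarity
trace = np.zeros(256)
trace[60], trace[180] = 1.0, -0.9
kernel = np.exp(-0.5 * ((np.arange(21) - 10) / 3.0) ** 2)  # system response
measured = np.real(np.fft.ifft(np.fft.fft(trace) * np.fft.fft(kernel, 256)))

restored = wiener_deconvolve(measured, kernel)
env = envelope(restored)
assert int(np.argmax(restored)) == 60 and int(np.argmin(restored)) == 180
assert restored[60] > 0 > restored[180]   # polarity = direction of the change
```

    The envelope localizes the wave packets at the interfaces, while the sign of the deconvolved trace restores the direction of the conductivity variation, mirroring the abstract's description.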

  12. Molecular–Genetic Imaging: A Nuclear Medicine–Based Perspective

    Directory of Open Access Journals (Sweden)

    Ronald G. Blasberg

    2002-07-01

    Full Text Available Molecular imaging is a relatively new discipline, which developed over the past decade, initially driven by in situ reporter imaging technology. Noninvasive in vivo molecular–genetic imaging developed more recently and is based on nuclear (positron emission tomography [PET], gamma camera, autoradiography) imaging as well as magnetic resonance (MR) and in vivo optical imaging. Molecular–genetic imaging has its roots in both molecular biology and cell biology, as well as in new imaging technologies. The focus of this presentation will be nuclear-based molecular–genetic imaging, but it will comment on the value and utility of combining different imaging modalities. Nuclear-based molecular imaging can be viewed in terms of three different imaging strategies: (1) “indirect” reporter gene imaging; (2) “direct” imaging of endogenous molecules; or (3) “surrogate” or “bio-marker” imaging. Examples of each imaging strategy will be presented and discussed. The rapid growth of in vivo molecular imaging is due to the established base of in vivo imaging technologies, the established programs in molecular and cell biology, and the convergence of these disciplines. The development of versatile and sensitive assays that do not require tissue samples will be of considerable value for monitoring molecular–genetic and cellular processes in animal models of human disease, as well as for studies in human subjects in the future. Noninvasive imaging of molecular–genetic and cellular processes will complement established ex vivo molecular–biological assays that require tissue sampling, and will provide a spatial as well as a temporal dimension to our understanding of various diseases and disease processes.

  13. Shape-based grey-level image interpolation

    International Nuclear Information System (INIS)

    Keh-Shih Chuang; Chun-Yuan Chen; Ching-Kai Yeh

    1999-01-01

    The three-dimensional (3D) object data obtained from a CT scanner usually have unequal sampling frequencies in the x-, y- and z-directions. Generally, the 3D data are first interpolated between slices to obtain isotropic resolution, reconstructed, then operated on using object extraction and display algorithms. The traditional grey-level interpolation introduces a layer of intermediate substance and is not suitable for objects that are very different from the opposite background. The shape-based interpolation method transfers a pixel location to a parameter related to the object shape and the interpolation is performed on that parameter. This process is able to achieve a better interpolation but its application is limited to binary images only. In this paper, we present an improved shape-based interpolation method for grey-level images. The new method uses a polygon to approximate the object shape and performs the interpolation using polygon vertices as references. The binary images representing the shape of the object were first generated via image segmentation on the source images. The target object binary image was then created using regular shape-based interpolation. The polygon enclosing the object for each slice can be generated from the shape of that slice. We determined the relative location in the source slices of each pixel inside the target polygon using the vertices of a polygon as the reference. The target slice grey-level was interpolated from the corresponding source image pixels. The image quality of this interpolation method is better and the mean squared difference is smaller than with traditional grey-level interpolation. (author)
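    The idea of transferring a pixel to a shape-related parameter is easiest to see in the binary case the authors build on: convert each slice's object mask to a signed distance map, interpolate the maps linearly between slices, and threshold at zero. Below is a brute-force sketch (fine for small masks; the masks are invented, and the paper's grey-level extension replaces the distance map with polygon-vertex references).

```python
import numpy as np

def signed_distance(mask):
    """Signed distance to the object boundary: positive inside the object,
    negative outside (brute force; adequate for small masks)."""
    ys, xs = np.indices(mask.shape)
    pts = np.stack([ys.ravel(), xs.ravel()], 1).astype(float)
    inside = pts[mask.ravel()]
    outside = pts[~mask.ravel()]
    d_to_in = np.sqrt(((pts[:, None] - inside[None]) ** 2).sum(-1)).min(1)
    d_to_out = np.sqrt(((pts[:, None] - outside[None]) ** 2).sum(-1)).min(1)
    sd = np.where(mask.ravel(), d_to_out, -d_to_in)
    return sd.reshape(mask.shape)

def interpolate_shape(mask_a, mask_b, t=0.5):
    """Shape-based interpolation of a binary object between two slices:
    linearly blend the signed distance maps, then threshold at zero."""
    sd = (1 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return sd > 0

a = np.zeros((15, 15), bool); a[5:10, 5:10] = True   # small square
b = np.zeros((15, 15), bool); b[3:12, 3:12] = True   # larger square
mid = interpolate_shape(a, b, 0.5)
assert a.sum() < mid.sum() < b.sum()   # intermediate slice has intermediate size
```

    Unlike grey-level averaging, no "layer of intermediate substance" appears: the interpolated slice is again a crisp binary object whose boundary lies between the two input boundaries.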

  14. Image content authentication based on channel coding

    Science.gov (United States)

    Zhang, Fan; Xu, Lei

    2008-03-01

    Content authentication determines whether an image has been tampered with and, if necessary, locates the malicious alterations made to the image. Authentication of a still image or video is motivated by the recipient's interest, and its principle is that a receiver must be able to reliably identify the source of the document. Several techniques and concepts based on data hiding or steganography have been designed as means for image authentication. This paper presents a color image authentication algorithm based on convolution coding. The high bits of the color digital image are coded by convolution codes for tamper detection and localization, and the authentication messages are hidden in the low bits of the image in order to keep the authentication invisible. All communication channels are subject to errors introduced by additive Gaussian noise in their environment. Data perturbations cannot be eliminated, but their effect can be minimized by the use of Forward Error Correction (FEC) techniques in the transmitted data stream and by decoders in the receiving system that detect and correct bits in error. The message of each pixel is convolution encoded with the encoder. After parity check and block interleaving, the redundant bits are embedded in the image offset. Tampering can be detected and restored without accessing the original image.
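    A minimal rate-1/2 convolutional encoder shows how each pixel's high-bit message is expanded into redundant parity bits that a Viterbi-style decoder can later use to detect and correct errors. The constraint length 3 and generators 7 and 5 (octal) are a common textbook choice, assumed here since the abstract does not specify the code parameters.

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder, constraint length 3 (generators
    7 and 5 in octal): each input bit yields two parity bits."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111       # slide the bit into the register
        out.append(bin(state & g1).count('1') % 2)   # parity over taps g1
        out.append(bin(state & g2).count('1') % 2)   # parity over taps g2
    return out

encoded = conv_encode([1, 0, 1, 1])
assert len(encoded) == 8                         # two output bits per input bit
assert encoded == [1, 1, 1, 0, 0, 0, 0, 1]
```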

  15. Superpixel-Based Feature for Aerial Image Scene Recognition

    Directory of Open Access Journals (Sweden)

    Hongguang Li

    2018-01-01

    Full Text Available Image scene recognition is a core technology for many aerial remote sensing applications. Different landforms appear as different scenes in aerial imaging, and all landform information is regarded as valuable for aerial image scene recognition. However, the conventional features of the Bag-of-Words model are designed using local points or other related information and thus are unable to fully describe landform areas. This limitation cannot be ignored when the aim is to ensure accurate aerial scene recognition. A novel superpixel-based feature is proposed in this study to characterize aerial image scenes. Based on the proposed feature, a Bag-of-Words scene recognition method for aerial imaging is then designed. The proposed superpixel-based feature exploits landform information, from top-task superpixel extraction of landforms down to bottom-task expression of feature vectors. This characterization technique comprises the following steps: simple linear iterative clustering based superpixel segmentation, adaptive filter bank construction, Lie group-based feature quantification, and visual saliency model-based feature weighting. Image scene recognition experiments are carried out using real image data captured by an unmanned aerial vehicle (UAV). The recognition accuracy of the proposed superpixel-based feature is 95.1%, which is higher than those of scene recognition algorithms based on other local features.

  16. Pedestrian detection from thermal images: A sparse representation based approach

    Science.gov (United States)

    Qi, Bin; John, Vijay; Liu, Zheng; Mita, Seiichi

    2016-05-01

    Pedestrian detection, a key technology in computer vision, plays a paramount role in the applications of advanced driver assistance systems (ADASs) and autonomous vehicles. The objective of pedestrian detection is to identify and locate people in a dynamic environment so that accidents can be avoided. With significant variations introduced by illumination, occlusion, articulated pose, and complex backgrounds, pedestrian detection is a challenging task for visual perception. Unlike visible images, thermal images are captured and presented as intensity maps based on objects' emissivity, and thus have an enhanced spectral range that makes human beings perceptible against the cool background. In this study, a sparse representation based approach is proposed for pedestrian detection from thermal images. We first adopt the histogram of sparse codes to represent image features and then detect pedestrians with the extracted features in a unimodal and a multimodal framework, respectively. In the unimodal framework, two types of dictionaries, i.e. a joint dictionary and individual dictionaries, are built by learning from prepared training samples. In the multimodal framework, a weighted fusion scheme is proposed to further highlight the contributions from features with higher separability. To validate the proposed approach, experiments were conducted to compare with three widely used features: Haar wavelets (HWs), histogram of oriented gradients (HOG), and histogram of phase congruency (HPC), as well as two classification methods, i.e. AdaBoost and support vector machine (SVM). Experimental results on a publicly available data set demonstrate the superiority of the proposed approach.
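
    The histogram-of-sparse-codes feature can be sketched in its simplest (1-sparse) form: each patch activates its single best-matching dictionary atom, and the image is described by the normalized histogram of activations. The dictionary and patches below are random stand-ins, not learned as in the paper.

```python
import numpy as np

def sparse_code_histogram(patches, dictionary):
    """1-sparse coding: each patch picks its best-correlated atom;
    the feature is the normalised histogram of winning atoms."""
    # normalise atoms so correlation is a fair match score
    D = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    scores = patches @ D.T               # correlation with every atom
    codes = np.abs(scores).argmax(1)     # winning atom per patch
    hist = np.bincount(codes, minlength=len(D)).astype(float)
    return hist / hist.sum()             # normalised histogram feature

rng = np.random.default_rng(0)
D = rng.standard_normal((8, 16))         # 8 atoms for 16-dim patches
patches = rng.standard_normal((100, 16))
h = sparse_code_histogram(patches, D)
```

    A classifier (AdaBoost or an SVM, as compared in the paper) is then trained on such histogram vectors.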

  17. Content-based image retrieval with ontological ranking

    Science.gov (United States)

    Tsai, Shen-Fu; Tsai, Min-Hsuan; Huang, Thomas S.

    2010-02-01

    Images are a much more powerful medium of expression than text, as the adage says: "One picture is worth a thousand words." Compared with text, which consists of an array of words, an image has more degrees of freedom and therefore a more complicated structure. However, the less constrained structure of images presents researchers in the computer vision community with the tough task of teaching machines to understand and organize images, especially when a limited number of learning examples and little background knowledge are given. The advance of internet and web technology in the past decade has changed the way humans gain knowledge. People can hence exchange knowledge with others by discussing and contributing information on the web. As a result, the web pages on the internet have become a living and growing source of information. One is therefore tempted to wonder whether machines can learn from this web knowledge base as well. Indeed, it is possible to make computers learn from the internet and provide humans with more meaningful knowledge. In this work, we explore this novel possibility of image understanding applied to semantic image search. We exploit web resources to obtain links from images to keywords and a semantic ontology constituting humans' general knowledge. The former maps visual content to related text, in contrast to the traditional way of associating images with surrounding text; the latter provides relations between concepts so that machines can understand to what extent and in what sense an image is close to the image search query. With the aid of these two tools, the resulting image search system is content-based and, moreover, organized. The returned images are ranked and organized such that semantically similar images are grouped together and given a rank based on the semantic closeness to the input query.
The novelty of the system is twofold: first, images are retrieved not only based on text cues but their actual contents as well; second, the grouping
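
    The ontological ranking idea can be illustrated with a toy is-a hierarchy: images tagged with concepts are ranked by the path distance between their concept and the query concept through the lowest common ancestor. The hierarchy and image tags below are invented for the example, standing in for a real ontology such as WordNet.

```python
# Toy ontological ranking over a hand-made is-a hierarchy.

PARENT = {"cat": "mammal", "dog": "mammal", "mammal": "animal",
          "sparrow": "bird", "bird": "animal", "animal": "entity"}

def path_to_root(c):
    path = [c]
    while c in PARENT:
        c = PARENT[c]
        path.append(c)
    return path

def semantic_distance(a, b):
    """Edge count from a to b via their lowest common ancestor."""
    pa, pb = path_to_root(a), path_to_root(b)
    for i, c in enumerate(pa):
        if c in pb:
            return i + pb.index(c)
    return len(pa) + len(pb)

images = {"img1.jpg": "dog", "img2.jpg": "sparrow", "img3.jpg": "cat"}
query = "cat"
ranked = sorted(images, key=lambda im: semantic_distance(images[im], query))
print(ranked)  # ['img3.jpg', 'img1.jpg', 'img2.jpg']
```

    The exact match ranks first, the sibling concept (dog, sharing the ancestor "mammal") second, and the more distant concept last, which is the grouping-by-semantic-closeness behaviour described above.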

  18. Robust Vehicle Detection in Aerial Images Based on Cascaded Convolutional Neural Networks.

    Science.gov (United States)

    Zhong, Jiandan; Lei, Tao; Yao, Guangle

    2017-11-24

    Vehicle detection in aerial images is an important and challenging task. Traditionally, many target detection models based on sliding-window fashion were developed and achieved acceptable performance, but these models are time-consuming in the detection phase. Recently, with the great success of convolutional neural networks (CNNs) in computer vision, many state-of-the-art detectors have been designed based on deep CNNs. However, these CNN-based detectors are inefficient when applied in aerial image data due to the fact that the existing CNN-based models struggle with small-size object detection and precise localization. To improve the detection accuracy without decreasing speed, we propose a CNN-based detection model combining two independent convolutional neural networks, where the first network is applied to generate a set of vehicle-like regions from multi-feature maps of different hierarchies and scales. Because the multi-feature maps combine the advantage of the deep and shallow convolutional layer, the first network performs well on locating the small targets in aerial image data. Then, the generated candidate regions are fed into the second network for feature extraction and decision making. Comprehensive experiments are conducted on the Vehicle Detection in Aerial Imagery (VEDAI) dataset and Munich vehicle dataset. The proposed cascaded detection model yields high performance, not only in detection accuracy but also in detection speed.

  19. 3D Reconstruction from UAV-Based Hyperspectral Images

    Science.gov (United States)

    Liu, L.; Xu, L.; Peng, J.

    2018-04-01

    Reconstructing a 3D profile from a set of UAV-based images yields hyperspectral information as well as the 3D coordinates of any point on the profile. Our images are captured with the Cubert UHD185 (UHD) hyperspectral camera, a new type of high-speed onboard imaging spectrometer that acquires a hyperspectral image and a panchromatic image simultaneously. The panchromatic image has a higher spatial resolution than the hyperspectral image, but each hyperspectral image provides considerable information on the spatial spectral distribution of the object. There is thus an opportunity to derive a high-quality 3D point cloud from the panchromatic images and considerable spectral information from the hyperspectral images. The purpose of this paper is to introduce our processing chain, which derives a database providing the hyperspectral information and 3D position of each point. First, we adopt free and open-source software, VisualSFM, which is based on the structure-from-motion (SfM) algorithm, to recover a 3D point cloud from the panchromatic images. We then obtain the spectral information of each point from the hyperspectral images with a self-developed program written in MATLAB. The product can be used to support further research and applications.
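
    The step of attaching spectra to reconstructed points can be sketched as a pinhole projection of a 3D point into the hyperspectral frame followed by a lookup in the data cube. The camera intrinsics, cube size, and point below are made up for the illustration; the actual pipeline uses the cameras recovered by VisualSFM.

```python
import numpy as np

def project(point, K):
    """Pinhole projection of a 3D point in camera coordinates."""
    uvw = K @ point
    return uvw[:2] / uvw[2]               # pixel coordinates (u, v)

K = np.array([[100.0,   0.0, 32.0],       # fx, cx (illustrative)
              [  0.0, 100.0, 32.0],       # fy, cy
              [  0.0,   0.0,  1.0]])
cube = np.zeros((64, 64, 5))              # toy 5-band hyperspectral cube
cube[40, 52] = [0.1, 0.2, 0.3, 0.4, 0.5]  # known spectrum at (row 40, col 52)

p = np.array([1.0, 0.4, 5.0])             # point 5 m in front of camera
u, v = project(p, K)                      # u = 52.0, v = 40.0
spectrum = cube[int(round(v)), int(round(u))]
```

    Each SfM point is thus paired with the spectrum of the pixel it projects to, yielding the joint position-plus-spectrum database described above.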

  20. Can state-of-the-art HVS-based objective image quality criteria be used for image reconstruction techniques based on ROI analysis?

    Science.gov (United States)

    Dostal, P.; Krasula, L.; Klima, M.

    2012-06-01

    Various image processing techniques in multimedia technology are optimized using the visual attention feature of the human visual system. Because of spatial non-uniformity, different locations in an image are of different importance in terms of perception; in other words, the perceived image quality depends mainly on the quality of important locations known as regions of interest (ROI). The performance of such techniques is measured by subjective evaluation or objective image quality criteria. Many state-of-the-art objective metrics are based on HVS properties: SSIM and MS-SSIM are based on image structural information, VIF on the information that the human brain can ideally gain from the reference image, and FSIM utilizes low-level features to assign a different importance to each location in the image. But still none of these objective metrics utilizes an analysis of regions of interest. We address the question of whether these objective metrics can be used for effective evaluation of images reconstructed by processing techniques based on ROI analysis utilizing high-level features. In this paper the authors show that the state-of-the-art objective metrics do not correlate well with subjective evaluation when demosaicing based on ROI analysis is used for reconstruction. The ROI were computed from "ground truth" visual attention data. An algorithm combining two known demosaicing techniques on the basis of ROI location is proposed, reconstructing the ROI in fine quality while the rest of the image is reconstructed in low quality. The color image reconstructed by this ROI approach was compared with selected demosaicing techniques by objective criteria and subjective testing. The qualitative comparison of the objective and subjective results indicates that the state-of-the-art objective metrics are still not suitable for evaluating image processing techniques based on ROI analysis, and new criteria are needed.
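
    For reference, the SSIM index mentioned above combines luminance, contrast and structure terms. A single-window version computed over whole images is sketched below; real implementations slide an 11x11 Gaussian window, and note that nothing here weights ROI differently, which is precisely the paper's criticism.

```python
import numpy as np

def ssim_global(x, y, L=1.0):
    """Single-window SSIM (Wang et al. form) for images in [0, L];
    a whole-image simplification of the windowed metric."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # stability constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

rng = np.random.default_rng(1)
img = rng.random((32, 32))
noisy = np.clip(img + 0.2 * rng.standard_normal(img.shape), 0, 1)
print(ssim_global(img, img))     # identical images score 1.0
print(ssim_global(img, noisy))   # degraded image scores below 1
```

    An ROI-aware criterion would instead pool such local scores with weights derived from visual attention data.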

  1. Nonlaser-based 3D surface imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that sometimes make alternative methods more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
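
    For the stereo vision method, depth follows from disparity in closed form: with rectified cameras of focal length f (in pixels) and baseline B (in metres), a point with disparity d pixels lies at depth Z = fB/d. The numbers below are purely illustrative.

```python
# Depth from stereo disparity for a rectified camera pair.
# f and B below are example values, not a specific rig.

def depth_from_disparity(d_px, f_px=800.0, baseline_m=0.12):
    """Z = f * B / d for a rectified stereo pair."""
    if d_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / d_px

print(depth_from_disparity(48.0))  # 800 * 0.12 / 48 = 2.0 metres
```

    The inverse relationship also shows why stereo depth resolution degrades quadratically with distance: a one-pixel disparity error costs far more depth accuracy for far points than for near ones.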

  2. Mobile object retrieval in server-based image databases

    Science.gov (United States)

    Manger, D.; Pagel, F.; Widak, H.

    2013-05-01

    The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information of these images on site, image retrieval systems are becoming more and more popular as a way to search for similar objects in one's own image database. As the computational performance and memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database, or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, scalability to large image databases is addressed with the popular bag-of-words model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface that presents the most similar images in the database, highlighting the visual information common with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.
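
    The server-side bag-of-words backend can be sketched as an inverted index with tf-idf scoring over visual-word ids. The word ids and the tiny database below are invented; production systems add spatial verification and other extensions on top of this core.

```python
import math
from collections import Counter, defaultdict

# Minimal bag-of-words retrieval backend: images are bags of
# visual-word ids; an inverted index plus tf-idf scoring finds
# the most similar database image. Data is invented.

db = {"img_a": [3, 3, 7, 9], "img_b": [1, 7, 7, 2], "img_c": [3, 9, 9, 9]}
N = len(db)

index = defaultdict(list)                # word -> [(image, term freq)]
for name, words in db.items():
    for w, tf in Counter(words).items():
        index[w].append((name, tf))

def idf(w):
    return math.log(N / len(index[w]))   # rare words weigh more

def search(query_words):
    scores = defaultdict(float)
    for w, qtf in Counter(query_words).items():
        if w in index:                   # only touch posting lists we need
            for name, tf in index[w]:
                scores[name] += (qtf * idf(w)) * (tf * idf(w))
    return max(scores, key=scores.get) if scores else None

print(search([3, 9, 9]))  # img_c shares words 3 and 9 most strongly
```

    Only the posting lists of the query's words are visited, which is what makes the inverted index scale to large databases.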

  3. A Novel Optimization-Based Approach for Content-Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Manyu Xiao

    2013-01-01

    Full Text Available Content-based image retrieval is nowadays one of the possible and promising solutions for managing image databases effectively. However, with large numbers of images, there still exists a great discrepancy between users' expectations (accuracy and efficiency) and the real performance of image retrieval. In this work, new optimization strategies are proposed for vocabulary tree building, retrieval, and matching methods. More precisely, a new clustering strategy combining classification and the conventional K-Means method is first defined. Then a new matching technique is built to eliminate the error caused by large-scale scale-invariant feature transform (SIFT) matching. Additionally, a new unit mechanism is proposed to reduce the cost of indexing time. Finally, the numerical results show that excellent performance is obtained in both accuracy and efficiency with the proposed improvements for image retrieval.
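
    The vocabulary-building and quantization step underlying such a system can be sketched with a few k-means iterations over descriptors, after which an image becomes a histogram of nearest visual words. The descriptors here are synthetic stand-ins for SIFT, and the flat vocabulary is a simplification of the paper's vocabulary tree.

```python
import numpy as np

def kmeans(X, k, iters=10, seed=0):
    """A few plain k-means iterations to build a visual vocabulary."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None] - centres[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centres[j] = X[labels == j].mean(0)
    return centres

def bow_histogram(desc, centres):
    """Quantise descriptors to nearest words, return the histogram."""
    words = ((desc[:, None] - centres[None]) ** 2).sum(-1).argmin(1)
    h = np.bincount(words, minlength=len(centres)).astype(float)
    return h / h.sum()

rng = np.random.default_rng(42)
train = rng.standard_normal((200, 8))     # stand-in local descriptors
vocab = kmeans(train, k=16)
h = bow_histogram(rng.standard_normal((30, 8)), vocab)
```

    A vocabulary tree replaces the flat nearest-centre search with a coarse-to-fine descent, which is where the indexing-time savings discussed above come from.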

  4. Image processing based detection of lung cancer on CT scan images

    Science.gov (United States)

    Abdillah, Bariqi; Bustamam, Alhadi; Sarwinda, Devvi

    2017-10-01

    In this paper, we implement and analyze image processing methods for the detection of lung cancer. Image processing techniques are widely used in several medical problems for picture enhancement in the detection phase to support early medical treatment. In this research we propose a detection method for lung cancer based on image segmentation, an intermediate-level step in image processing. Marker-controlled watershed and region growing approaches are used to segment the CT scan images. The detection pipeline comprises image enhancement using a Gabor filter, image segmentation, and feature extraction. The experimental results show the effectiveness of our approach: the best approach for detecting the main features is the watershed-with-masking method, which has high accuracy and is robust.
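
    The enhancement stage can be illustrated by constructing a Gabor kernel, a Gaussian envelope modulating a sinusoid; convolving the CT slice with a bank of such kernels at several orientations emphasises oriented texture. The kernel parameters below are illustrative, and the isotropic envelope is a simplification of the general anisotropic form.

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0, psi=0.0):
    """Real-valued Gabor kernel: Gaussian envelope times a cosine
    carrier of wavelength lam, oriented at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) \
        * np.cos(2 * np.pi * xr / lam + psi)
    return g - g.mean()        # zero mean: flat regions respond with 0

k = gabor_kernel()
```

    Zero-centering the kernel makes homogeneous lung tissue map to zero response, so only edges and textured nodules survive the filtering.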

  5. Reinforced Conductive Polyaniline-Paper Composites

    Directory of Open Access Journals (Sweden)

    Jinhua Yan

    2015-05-01

    Full Text Available A method for direct aniline interfacial polymerization on a polyamideamine-epichlorohydrin (PAE)-reinforced paper substrate is introduced in this paper. Cellulose-based papers with and without reinforcement were considered. The polyaniline (PANI)-paper composites had surface resistivity lower than 100 Ω/sq after more than 3 polymerizations. Their mechanical strength and thermal stability were analyzed by tensile tests and thermogravimetric analysis (TGA). Fourier transform infrared (FTIR) results revealed that there was strong interaction between NH groups in aniline monomers and OH groups in fibers, which did not disappear until after 3 polymerizations. Scanning electron microscopy (SEM) and field emission SEM (FE-SEM) images showed morphological differences between composites using reinforced and untreated base papers. Conductive composites made with PAE-reinforced base paper had both good thermal stability and good mechanical strength, with high conductivity and a smaller PANI amount.

  6. A multicore based parallel image registration method.

    Science.gov (United States)

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L; Foran, David J

    2009-01-01

    Image registration is a crucial step for many image-assisted clinical applications such as surgery planning and treatment evaluation. In this paper we propose a landmark based nonlinear image registration algorithm for matching 2D image pairs. The algorithm was shown to be effective and robust under conditions of large deformations. In landmark based registration, the most important step is establishing the correspondence among the selected landmark points. This usually requires an extensive search, which is often computationally expensive. We introduce a nonregular data partition algorithm using K-means clustering to group the landmarks based on the number of available processing cores. This step optimizes memory usage and data transfer. We have tested our method on the IBM Cell Broadband Engine (Cell/B.E.) platform.

  7. Image encryption based on permutation-substitution using chaotic map and Latin Square Image Cipher

    Science.gov (United States)

    Panduranga, H. T.; Naveen Kumar, S. K.; Kiran

    2014-06-01

    In this paper we present an image encryption method based on permutation-substitution using a chaotic map and a Latin square image cipher. The proposed method consists of a permutation and a substitution process. In the permutation process, the plain image is permuted according to a chaotic sequence generated using a chaotic map. In the substitution process, a Latin Square Image Cipher (LSIC) is generated from a 256-bit secret key and used as a key image, and an XOR operation is performed between the permuted image and the key image. The proposed method can be applied to any plain image, including images of unequal width and height, and resists statistical and differential attacks. Experiments were carried out for different images of different sizes. The proposed method possesses a large key space to resist brute-force attacks.
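
    A toy version of the permutation-substitution pipeline is sketched below: a logistic map drives the pixel permutation, and a keyed pseudo-random byte image stands in for the LSIC (the real scheme derives a Latin square from the 256-bit key; the seed values here are arbitrary).

```python
import numpy as np

def logistic_sequence(n, x0=0.3567, r=3.99):
    """Chaotic logistic map x -> r*x*(1-x); x0 acts as the key."""
    xs = np.empty(n)
    for i in range(n):
        x0 = r * x0 * (1 - x0)
        xs[i] = x0
    return xs

def encrypt(img, x0, key):
    flat = img.ravel()
    perm = np.argsort(logistic_sequence(flat.size, x0))  # chaotic shuffle
    key_img = np.random.default_rng(key).integers(
        0, 256, flat.size, dtype=np.uint8)               # LSIC stand-in
    return (flat[perm] ^ key_img).reshape(img.shape), perm, key_img

def decrypt(cipher, perm, key_img):
    flat = cipher.ravel() ^ key_img      # undo the substitution
    out = np.empty_like(flat)
    out[perm] = flat                     # undo the permutation
    return out.reshape(cipher.shape)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
cipher, perm, key_img = encrypt(img, x0=0.3567, key=1234)
assert np.array_equal(decrypt(cipher, perm, key_img), img)
```

    Because XOR is its own inverse and the permutation is invertible, the exact plain image is recovered; the sensitivity of the logistic map to x0 is what gives the scheme its large effective key space.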

  8. Feature-based Alignment of Volumetric Multi-modal Images

    Science.gov (United States)

    Toews, Matthew; Zöllei, Lilla; Wells, William M.

    2014-01-01

    This paper proposes a method for aligning image volumes acquired from different imaging modalities (e.g. MR, CT) based on 3D scale-invariant image features. A novel method for encoding invariant feature geometry and appearance is developed, based on the assumption of locally linear intensity relationships, providing a solution to the poor repeatability of feature detection in different image modalities. The encoding method is incorporated into a probabilistic feature-based model for multi-modal image alignment. The model parameters are estimated via a group-wise alignment algorithm that iteratively alternates between estimating a feature-based model from feature data and realigning the feature data to the model, converging to a stable alignment solution with few pre-processing or pre-alignment requirements. The resulting model can be used to align multi-modal image data with the benefits of invariant feature correspondence: globally optimal solutions, high efficiency and low memory usage. The method is tested on the difficult RIRE data set of CT, T1, T2, PD and MP-RAGE brain images of subjects exhibiting significant inter-subject variability due to pathology. PMID:24683955
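
    Once feature correspondences between two volumes are established, a closed-form rigid alignment (the Kabsch/Procrustes solution) recovers the rotation and translation. This is a simplified stand-in for the paper's probabilistic realignment step, shown here on synthetic 3D point correspondences.

```python
import numpy as np

def rigid_align(A, B):
    """Least-squares R, t with R @ A + t ~ B for 3xN correspondences
    (Kabsch algorithm via SVD of the cross-covariance)."""
    ca, cb = A.mean(1, keepdims=True), B.mean(1, keepdims=True)
    H = (A - ca) @ (B - cb).T            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cb - R @ ca

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 20))         # source feature locations
th = 0.7                                 # ground-truth rotation angle
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
B = R_true @ A + np.array([[1.0], [2.0], [3.0]])
R, t = rigid_align(A, B)
assert np.allclose(R @ A + t, B)         # exact recovery (noiseless)
```

    With noisy or outlier-contaminated correspondences, this closed-form step is typically wrapped in an iterative or robust estimation loop, which is the role the group-wise algorithm plays above.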

  9. Biometric image enhancement using decision rule based image fusion techniques

    Science.gov (United States)

    Sagayee, G. Mary Amirtha; Arumugam, S.

    2010-02-01

    Introducing biometrics into information systems can yield considerable benefits. Most researchers agree that the fingerprint is more widely used than the iris or face, and moreover it is the primary choice for most privacy-sensitive applications. For fingerprint applications, choosing the proper sensor is critical. The proposed work describes how image quality can be improved by introducing an image fusion technique at the sensor level. The resulting images, produced with the decision rule based image fusion technique, are evaluated and analyzed in terms of entropy and root-mean-square error.

  10. Content-based image retrieval applied to bone age assessment

    Science.gov (United States)

    Fischer, Benedikt; Brosig, André; Welter, Petra; Grouls, Christoph; Günther, Rolf W.; Deserno, Thomas M.

    2010-03-01

    Radiological bone age assessment is based on local image regions of interest (ROI), such as the epiphyses or the area of the carpal bones. These are compared to a standardized reference, and scores determining the skeletal maturity are calculated. For computer-aided diagnosis, automatic ROI extraction and analysis has so far been done mainly by heuristic approaches. Due to high variation in the imaged biological material and differences in age, gender and ethnic origin, automatic analysis is difficult and frequently requires manual interaction. In contrast, epiphyseal regions (eROIs) can be compared to previous cases of known age by content-based image retrieval (CBIR). This requires a sufficient number of cases with reliable positioning of the eROI centers. In this first approach to bone age assessment by CBIR, we conduct leave-one-out experiments on 1,102 left-hand radiographs and 15,428 metacarpal and phalangeal eROIs from the USC hand atlas. The similarity of the eROIs is assessed by cross-correlation of 16x16 scaled eROIs. The effects of the number of eROIs, two age computation methods, and the number of considered CBIR references are analyzed. The best results yield an error of 1.16 years with a standard deviation of 0.85 years. As the appearance of the hand naturally varies by up to two years, these results clearly demonstrate the applicability of the CBIR approach for bone age estimation.
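
    The similarity measure used above, normalized cross-correlation of 16x16-scaled patches, is a one-liner once both patches are standardized; the patches below are random stand-ins for real eROIs.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two equally sized patches:
    1 for identical patterns, near 0 for unrelated ones."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

rng = np.random.default_rng(7)
roi = rng.random((16, 16))                # stand-in 16x16 eROI
print(ncc(roi, roi))                      # self-similarity is 1.0
print(ncc(roi, rng.random((16, 16))))     # unrelated patch: near 0
```

    In the retrieval setting, a query eROI is scored against all reference eROIs this way, and the ages of the top-scoring references are combined into the age estimate.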

  11. Deep Learning-Based Banknote Fitness Classification Using the Reflection Images by a Visible-Light One-Dimensional Line Image Sensor

    Directory of Open Access Journals (Sweden)

    Tuyen Danh Pham

    2018-02-01

    Full Text Available In automatic paper currency sorting, fitness classification is a technique that assesses the quality of banknotes to determine whether a banknote is suitable for recirculation or should be replaced. Studies on using visible-light reflection images of banknotes for evaluating their usability have been reported. However, most of them were conducted under the assumption that the denomination and input direction of the banknote are predetermined; in other words, a pre-classification of the type of input banknote is required. To address this problem, we propose a deep learning-based fitness-classification method that recognizes the fitness level of a banknote regardless of the denomination and input direction of the banknote to the system, using reflection images of banknotes captured by a visible-light one-dimensional line image sensor and a convolutional neural network (CNN). Experimental results on banknote image databases of the Korean won (KRW) and the Indian rupee (INR) with three fitness levels, and the United States dollar (USD) with two fitness levels, showed that our method gives better classification accuracy than other methods.

  12. Deep Learning-Based Banknote Fitness Classification Using the Reflection Images by a Visible-Light One-Dimensional Line Image Sensor.

    Science.gov (United States)

    Pham, Tuyen Danh; Nguyen, Dat Tien; Kim, Wan; Park, Sung Ho; Park, Kang Ryoung

    2018-02-06

    In automatic paper currency sorting, fitness classification is a technique that assesses the quality of banknotes to determine whether a banknote is suitable for recirculation or should be replaced. Studies on using visible-light reflection images of banknotes for evaluating their usability have been reported. However, most of them were conducted under the assumption that the denomination and input direction of the banknote are predetermined. In other words, a pre-classification of the type of input banknote is required. To address this problem, we propose a deep learning-based fitness-classification method that recognizes the fitness level of a banknote regardless of the denomination and input direction of the banknote to the system, using reflection images of banknotes captured by a visible-light one-dimensional line image sensor and a convolutional neural network (CNN). Experimental results on the banknote image databases of the Korean won (KRW) and the Indian rupee (INR) with three fitness levels, and the United States dollar (USD) with two fitness levels, showed that our method gives better classification accuracy than other methods.

  13. Conductivity and transport studies of plasticized chitosan-based proton conducting biopolymer electrolytes

    Science.gov (United States)

    Shukur, M. F.; Yusof, Y. M.; Zawawi, S. M. M.; Illias, H. A.; Kadir, M. F. Z.

    2013-11-01

    This paper focuses on the conductivity and transport properties of chitosan-based solid biopolymer electrolytes containing ammonium thiocyanate (NH4SCN). The sample containing 40 wt% NH4SCN exhibited the highest conductivity value of (1.81 ± 0.50) × 10-4 S cm-1 at room temperature. The conductivity increased to (1.51 ± 0.12) × 10-3 S cm-1 with the addition of 25 wt% glycerol. The temperature dependence of conductivity for both the salted and plasticized systems obeys the Arrhenius rule. The activation energy (Ea) was calculated for both systems: the sample with 40 wt% NH4SCN in the salted system has an Ea value of 0.148 eV, while that of the sample containing 25 wt% glycerol in the plasticized system is 0.139 eV. From the Fourier transform infrared studies, the carboxamide and amine bands shifted to lower wavenumbers, indicating that chitosan has interacted with the NH4SCN salt. Changes in the intensity of the C-O stretching vibration band at 1067 cm-1 are observed with the addition of glycerol. The Rice and Roth model was used to explain the transport properties of the salted and plasticized systems.
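
    The Arrhenius rule invoked above, sigma(T) = sigma0 * exp(-Ea / (kB * T)), lets the activation energy be read off from conductivities at two temperatures: Ea = kB * ln(s1/s2) / (1/T2 - 1/T1). The conductivity values below are synthetic points generated for the check, not the paper's measurements.

```python
import math

KB_EV = 8.617333262e-5          # Boltzmann constant in eV/K

def activation_energy(s1, T1, s2, T2):
    """Ea from two (conductivity, temperature) points on an
    Arrhenius plot: Ea = kB * ln(s1/s2) / (1/T2 - 1/T1)."""
    return KB_EV * math.log(s1 / s2) / (1.0 / T2 - 1.0 / T1)

# synthetic data generated with Ea = 0.139 eV (illustrative sigma0)
Ea = 0.139
sigma = lambda T: 1e-3 * math.exp(-Ea / (KB_EV * T))
print(activation_energy(sigma(343.0), 343.0, sigma(303.0), 303.0))  # ~0.139
```

    In practice Ea is fitted from the slope of ln(sigma) versus 1/T over many temperature points rather than just two, which averages out measurement noise.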

  14. Image-based reflectance conversion of ASTER and IKONOS ...

    African Journals Online (AJOL)

    Spectral signatures derived from different image-based models for ASTER and IKONOS were inspected visually as first departure. This was followed by comparison of the total accuracy and Kappa index computed from supervised classification of images that were derived from different image-based atmospheric correction ...

  15. HDR Pathological Image Enhancement Based on Improved Bias Field Correction and Guided Image Filter

    Directory of Open Access Journals (Sweden)

    Qingjiao Sun

    2016-01-01

    Full Text Available Pathological image enhancement is a significant topic in the field of pathological image processing. This paper proposes a high dynamic range (HDR) pathological image enhancement method based on improved bias field correction and the guided image filter (GIF). First, preprocessing consisting of stain normalization and wavelet denoising is performed on the Haematoxylin and Eosin (H and E) stained pathological image. Then, an improved bias field correction model is developed to enhance the influence of light on the high-frequency part of the image and to correct its intensity inhomogeneity and detail discontinuity. Next, the HDR pathological image is generated by a least squares method from the low dynamic range (LDR) image and the H and E channel images. Finally, the fine enhanced image is acquired after a detail enhancement process. Experiments with 140 pathological images demonstrate the performance advantages of the proposed method compared with related work.
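
    The guided image filter at the heart of the method can be sketched in its standard form (He et al.): per-window linear coefficients a and b are estimated from the guide, averaged, and applied. The naive box mean, radius, and epsilon below are illustrative simplifications.

```python
import numpy as np

def box_mean(img, r):
    """Naive mean filter over a (2r+1)^2 window with edge padding."""
    h, w = img.shape
    p = np.pad(img, r, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + h, dx:dx + w]
    return out / (2 * r + 1) ** 2

def guided_filter(I, p, r=2, eps=1e-3):
    """q = mean(a)*I + mean(b), with a = cov(I,p)/(var(I)+eps)."""
    mI, mp = box_mean(I, r), box_mean(p, r)
    corr_Ip, corr_II = box_mean(I * p, r), box_mean(I * I, r)
    a = (corr_Ip - mI * mp) / (corr_II - mI * mI + eps)
    b = mp - a * mI
    return box_mean(a, r) * I + box_mean(b, r)

I = np.tile(np.linspace(0, 1, 16), (16, 1))      # clean guide image
noisy = I + 0.05 * np.random.default_rng(0).standard_normal(I.shape)
smooth = guided_filter(I, noisy)                 # edge-preserving smoothing
```

    Because the output is locally a linear function of the guide, edges present in the guide survive while noise in the filtered input is suppressed, which is why the GIF is used for the detail-enhancement stage.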

  16. Electrical conduction in solid materials physicochemical bases and possible applications

    CERN Document Server

    Suchet, J P

    2013-01-01

    Electrical Conduction in Solid Materials (Physicochemical Bases and Possible Applications) investigates the physicochemical bases and possible applications of electrical conduction in solid materials, with emphasis on conductors, semiconductors, and insulators. Topics range from the interatomic bonds of conductors to the effective atomic charge in conventional semiconductors and magnetic transitions in switching semiconductors. Comprised of 10 chapters, this volume begins with a description of electrical conduction in conductors and semiconductors, metals and alloys, as well as interatomic bonds

  17. Developing students’ ideas about lens imaging: teaching experiments with an image-based approach

    Science.gov (United States)

    Grusche, Sascha

    2017-07-01

    Lens imaging is a classic topic in physics education. To guide students from their holistic viewpoint to the scientists’ analytic viewpoint, an image-based approach to lens imaging has recently been proposed. To study the effect of the image-based approach on undergraduate students’ ideas, teaching experiments are performed and evaluated using qualitative content analysis. Some of the students’ ideas have not been reported before, namely those related to blurry lens images, and those developed by the proposed teaching approach. To describe learning pathways systematically, a conception-versus-time coordinate system is introduced, specifying how teaching actions help students advance toward a scientific understanding.

  18. Image denoising based on noise detection

    Science.gov (United States)

    Jiang, Yuanxiang; Yuan, Rui; Sun, Yuqiu; Tian, Jinwen

    2018-03-01

    Because any denoising operation alters the original information of noise-free pixels as well as noisy ones, a noise detection algorithm based on fractional calculus is proposed for denoising in this paper. First, the image is convolved to obtain directional gradient masks. Then, the mean gray level is calculated to obtain gradient detection maps, and a logical product is formed to acquire the noise position image. Comparing the visual effect and evaluation parameters after processing, the experimental results show that denoising algorithms guided by noise detection outperform traditional methods in both subjective and objective respects.

  19. Image-based corrosion recognition for ship steel structures

    Science.gov (United States)

    Ma, Yucong; Yang, Yang; Yao, Yuan; Li, Shengyuan; Zhao, Xuefeng

    2018-03-01

    Ship structures are inevitably subjected to corrosion in service. Existing image-based methods are influenced by noise in images because they recognize corrosion by extracting hand-crafted features. In this paper, a novel method of image-based corrosion recognition for ship steel structures is proposed. The method utilizes convolutional neural networks (CNNs) and is not affected by noise in images. A CNN for recognizing corrosion was designed by fine-tuning an existing CNN architecture and trained on datasets built from a large number of images. Combining the trained CNN classifier with a sliding-window technique, the corrosion zone in an image can be recognized.

  20. Optical colour image watermarking based on phase-truncated linear canonical transform and image decomposition

    Science.gov (United States)

    Su, Yonggang; Tang, Chen; Li, Biyuan; Lei, Zhenkun

    2018-05-01

    This paper presents a novel optical colour image watermarking scheme based on phase-truncated linear canonical transform (PT-LCT) and image decomposition (ID). In this proposed scheme, a PT-LCT-based asymmetric cryptography is designed to encode the colour watermark into a noise-like pattern, and an ID-based multilevel embedding method is constructed to embed the encoded colour watermark into a colour host image. The PT-LCT-based asymmetric cryptography, which can be optically implemented by double random phase encoding with a quadratic phase system, can provide a higher security to resist various common cryptographic attacks. And the ID-based multilevel embedding method, which can be digitally implemented by a computer, can make the information of the colour watermark disperse better in the colour host image. The proposed colour image watermarking scheme possesses high security and can achieve a higher robustness while preserving the watermark’s invisibility. The good performance of the proposed scheme has been demonstrated by extensive experiments and comparison with other relevant schemes.

  1. New mobile methods for dietary assessment: review of image-assisted and image-based dietary assessment methods.

    Science.gov (United States)

    Boushey, C J; Spoden, M; Zhu, F M; Delp, E J; Kerr, D A

    2017-08-01

    For nutrition practitioners and researchers, assessing dietary intake of children and adults with a high level of accuracy continues to be a challenge. Developments in mobile technologies have created a role for images in the assessment of dietary intake. The objective of this review was to examine peer-reviewed published papers covering development, evaluation and/or validation of image-assisted or image-based dietary assessment methods from December 2013 to January 2016. Images taken with handheld devices or wearable cameras have been used to assist traditional dietary assessment methods for portion size estimations made by dietitians (image-assisted methods). Image-assisted approaches can supplement either dietary records or 24-h dietary recalls. In recent years, image-based approaches integrating application technology for mobile devices have been developed (image-based methods). Image-based approaches aim at capturing all eating occasions by images as the primary record of dietary intake, and therefore follow the methodology of food records. The present paper reviews several image-assisted and image-based methods, their benefits and challenges; followed by details on an image-based mobile food record. Mobile technology offers a wide range of feasible options for dietary assessment, which are easier to incorporate into daily routines. The presented studies illustrate that image-assisted methods can improve the accuracy of conventional dietary assessment methods by adding eating occasion detail via pictures captured by an individual (dynamic images). All of the studies reduced underreporting with the help of images compared with results with traditional assessment methods. Studies with larger sample sizes are needed to better delineate attributes with regards to age of user, degree of error and cost.

  2. Binary-space-partitioned images for resolving image-based visibility.

    Science.gov (United States)

    Fu, Chi-Wing; Wong, Tien-Tsin; Tong, Wai-Shun; Tang, Chi-Keung; Hanson, Andrew J

    2004-01-01

    We propose a novel 2D representation for 3D visibility sorting, the Binary-Space-Partitioned Image (BSPI), to accelerate real-time image-based rendering. BSPI is an efficient 2D realization of a 3D BSP tree, which is commonly used in computer graphics for time-critical visibility sorting. Since the overall structure of a BSP tree is encoded in a BSPI, traversing a BSPI is comparable to traversing the corresponding BSP tree. BSPI performs visibility sorting efficiently and accurately in the 2D image space by warping the reference image triangle-by-triangle instead of pixel-by-pixel. Multiple BSPIs can be combined to solve "disocclusion," when an occluded portion of the scene becomes visible at a novel viewpoint. Our method is highly automatic, including a tensor voting preprocessing step that generates candidate image partition lines for BSPIs, filters the noisy input data by rejecting outliers, and interpolates missing information. Our system has been applied to a variety of real data, including stereo, motion, and range images.

  3. High dynamic range image acquisition based on multiplex cameras

    Science.gov (United States)

    Zeng, Hairui; Sun, Huayan; Zhang, Tinghua

    2018-03-01

High dynamic range imaging is an important technology for photoelectric information acquisition: it provides a higher dynamic range and more image detail, and better reflects the real environment, lighting and color of a scene. Currently, methods that synthesize a high dynamic range image from a sequence of differently exposed images cannot adapt to dynamic scenes; they fail to handle moving targets, resulting in ghosting artifacts. Therefore, a new high dynamic range image acquisition method based on a multiplex camera system was proposed. First, sequences of differently exposed images were captured with the camera array, and a derivative optical flow method based on color gradients was used to estimate the deviation between images and align them. Then, a high dynamic range image fusion weighting function was established by combining the inverse camera response function with the inter-image deviation, and was applied to generate a high dynamic range image. The experiments show that the proposed method can effectively obtain high dynamic range images of dynamic scenes and achieves good results.
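The exposure-fusion step described above can be sketched as follows. This is a minimal illustration assuming a linear camera response and a simple hat-shaped weighting function; the paper instead combines the inverse of the estimated camera response with the inter-image deviation.

```python
import numpy as np

def hat_weight(z, z_min=0.05, z_max=0.95):
    """Hat-shaped weight: trust mid-range pixels, distrust near-saturated ones."""
    return np.clip(np.minimum(z - z_min, z_max - z), 0.0, None)

def fuse_hdr(images, exposure_times):
    """Merge aligned exposures (values in [0, 1]) into a radiance map.

    Assumes a linear camera response (an illustrative simplification).
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = hat_weight(img)
        num += w * img / t          # each exposure votes for radiance = pixel / t
        den += w
    return num / np.maximum(den, 1e-12)

# Synthetic stack: one scene radiance captured at three exposure times
radiance = np.array([[0.2, 0.5], [1.0, 2.0]])
times = [0.25, 0.5, 1.0]
stack = [np.clip(radiance * t, 0, 1) for t in times]
recovered = fuse_hdr(stack, times)
```

Pixels that saturate in the long exposures receive zero weight there, so their radiance is recovered from the shorter exposures alone.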

  4. Principal component analysis-based imaging angle determination for 3D motion monitoring using single-slice on-board imaging.

    Science.gov (United States)

    Chen, Ting; Zhang, Miao; Jabbour, Salma; Wang, Hesheng; Barbee, David; Das, Indra J; Yue, Ning

    2018-04-10

    Through-plane motion introduces uncertainty in three-dimensional (3D) motion monitoring when using single-slice on-board imaging (OBI) modalities such as cine MRI. We propose a principal component analysis (PCA)-based framework to determine the optimal imaging plane to minimize the through-plane motion for single-slice imaging-based motion monitoring. Four-dimensional computed tomography (4DCT) images of eight thoracic cancer patients were retrospectively analyzed. The target volumes were manually delineated at different respiratory phases of 4DCT. We performed automated image registration to establish the 4D respiratory target motion trajectories for all patients. PCA was conducted using the motion information to define the three principal components of the respiratory motion trajectories. Two imaging planes were determined perpendicular to the second and third principal component, respectively, to avoid imaging with the primary principal component of the through-plane motion. Single-slice images were reconstructed from 4DCT in the PCA-derived orthogonal imaging planes and were compared against the traditional AP/Lateral image pairs on through-plane motion, residual error in motion monitoring, absolute motion amplitude error and the similarity between target segmentations at different phases. We evaluated the significance of the proposed motion monitoring improvement using paired t test analysis. The PCA-determined imaging planes had overall less through-plane motion compared against the AP/Lateral image pairs. For all patients, the average through-plane motion was 3.6 mm (range: 1.6-5.6 mm) for the AP view and 1.7 mm (range: 0.6-2.7 mm) for the Lateral view. With PCA optimization, the average through-plane motion was 2.5 mm (range: 1.3-3.9 mm) and 0.6 mm (range: 0.2-1.5 mm) for the two imaging planes, respectively. The absolute residual error of the reconstructed max-exhale-to-inhale motion averaged 0.7 mm (range: 0.4-1.3 mm, 95% CI: 0.4-1.1 mm) using
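The core PCA step — extracting the principal components of the 3D motion trajectory and choosing imaging planes perpendicular to the second and third components — can be sketched as follows. The trajectory below is synthetic; the study derives real trajectories from 4DCT image registration.

```python
import numpy as np

# Synthetic 3D target trajectory over one respiratory cycle (assumed data)
phases = np.linspace(0, 2 * np.pi, 10, endpoint=False)
traj = np.stack([8.0 * np.sin(phases),   # dominant SI motion (mm)
                 2.0 * np.sin(phases),   # smaller correlated AP motion
                 0.5 * np.cos(phases)], axis=1)

# PCA via eigendecomposition of the motion covariance matrix
centered = traj - traj.mean(axis=0)
cov = centered.T @ centered / len(traj)
eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order].T             # rows: 1st, 2nd, 3rd PC

# Imaging planes are taken perpendicular to the 2nd and 3rd components,
# so the dominant (1st) component never becomes through-plane motion.
plane_normals = components[1:]
through_plane = np.abs(centered @ plane_normals.T).max(axis=0)
in_plane = np.abs(centered @ components[0]).max()
```

For this trajectory the residual through-plane motion (fractions of a millimetre) is far smaller than the in-plane motion captured along the first principal component.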

  5. Comparison on Integer Wavelet Transforms in Spherical Wavelet Based Image Based Relighting

    Institute of Scientific and Technical Information of China (English)

    WANG Ze; LEE Yin; LEUNG Chising; WONG Tientsin; ZHU Yisheng

    2003-01-01

To provide good rendering quality in an Image based relighting (IBL) system, a tremendous number of reference images under various illumination conditions are needed; data compression is therefore essential to enable interactive operation, and rendering speed is another crucial consideration for real applications. Based on the Spherical wavelet transform (SWT), this paper presents a fast representation method using the Integer wavelet transform (IWT) for the IBL system. It focuses on comparing different IWTs combined with the Embedded zerotree wavelet (EZW) coder used in the IBL system. The compression procedure contains two major steps. First, the SWT is applied to exploit the correlation among different reference images. Second, the SW-transformed images are compressed with an IWT-based image compression approach. Two IWTs are used, and good results are shown in the simulations.

  6. High-speed MRF-based segmentation algorithm using pixonal images

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Hassanpour, H.; Naimi, H. M.

    2013-01-01

Segmentation is one of the most complicated procedures in image processing and plays an important role in image analysis. In this paper, an improved pixon-based method for image segmentation is proposed. In the proposed algorithm, complex partial differential equations (PDEs) are used as a kernel function to form the pixonal image. Using this kernel function reduces image noise and prevents over-segmentation when the pixon-based method is applied. Utilising the PDE-based method eliminates some unnecessary details and results in a smaller number of pixons, faster performance and more robustness against unwanted environmental noise. In the next step, the appropriate pixons are extracted and, finally, the image is segmented using a Markov random field. The experimental results indicate that the proposed pixon-based approach has a reduced computational load...

  7. First experience with THE AUTOLAP™ SYSTEM: an image-based robotic camera steering device.

    Science.gov (United States)

    Wijsman, Paul J M; Broeders, Ivo A M J; Brenkman, Hylke J; Szold, Amir; Forgione, Antonello; Schreuder, Henk W R; Consten, Esther C J; Draaisma, Werner A; Verheijen, Paul M; Ruurda, Jelle P; Kaufman, Yuval

    2018-05-01

Robotic camera holders for endoscopic surgery have been available for 20 years, but market penetration is low. The current camera holders are controlled by voice, joystick, eyeball tracking, or head movements; this type of steering has proven to be successful, but excessive disturbance of the surgical workflow has blocked widespread introduction. The AutoLap™ system (MST, Israel) uses a radically different steering concept based on image analysis, which may improve acceptance through smooth, interactive, and fast steering. These two studies were conducted to prove safe and efficient performance of the core technology. A total of 66 laparoscopic procedures were performed with the AutoLap™ by nine experienced surgeons in two multi-center studies: 41 cholecystectomies, 13 fundoplications including hiatal hernia repair, 4 endometriosis surgeries, 2 inguinal hernia repairs, and 6 (bilateral) salpingo-oophorectomies. The use of the AutoLap™ system was evaluated in terms of safety, image stability, setup and procedural time, accuracy of image-based movements, and user satisfaction. Surgical procedures were completed with the AutoLap™ system in 64 cases (97%). The mean overall setup time of the AutoLap™ system was 4 min (04:08 ± 0.10). Procedure times were not prolonged by the use of the system when compared to literature averages. The reported user satisfaction was 3.85 and 3.96 on a scale of 1 to 5 in the two studies. More than 90% of the image-based movements were accurate. No system-related adverse events were recorded while using the system. Safe and efficient use of the core technology of the AutoLap™ system was demonstrated, with high image stability and good surgeon satisfaction. The results support further clinical studies that will focus on usability, improved ergonomics and additional image-based features.

  8. Mapping Fire Severity Using Imaging Spectroscopy and Kernel Based Image Analysis

    Science.gov (United States)

    Prasad, S.; Cui, M.; Zhang, Y.; Veraverbeke, S.

    2014-12-01

Improved spatial representation of within-burn heterogeneity after wildfires is paramount to effective land management decisions and more accurate fire emissions estimates. In this work, we demonstrate feasibility and efficacy of airborne imaging spectroscopy (hyperspectral imagery) for quantifying wildfire burn severity, using kernel based image analysis techniques. Two different airborne hyperspectral datasets, acquired over the 2011 Canyon and 2013 Rim fire in California using the Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) sensor, were used in this study. The Rim Fire, covering parts of the Yosemite National Park, started on August 17, 2013, and was the third largest fire in California's history. The Canyon Fire occurred in the Tehachapi mountains and started on September 4, 2011. In addition to post-fire data for both fires, half of the Rim fire was also covered with pre-fire images. Fire severity was measured in the field using the Geo Composite Burn Index (GeoCBI). The field data were utilized to train and validate our models, wherein the trained models, in conjunction with imaging spectroscopy data, were used for GeoCBI estimation over wide geographical regions. This work presents an approach for using remotely sensed imagery combined with GeoCBI field data to map fire scars based on a non-linear (kernel based) epsilon-Support Vector Regression (e-SVR), which was used to learn the relationship between spectra and GeoCBI in a kernel-induced feature space. Classification of healthy vegetation versus fire-affected areas based on morphological multi-attribute profiles was also studied. The availability of pre- and post-fire imaging spectroscopy data over the Rim Fire provided a unique opportunity to evaluate the performance of bi-temporal imaging spectroscopy for assessing post-fire effects.
This type of data is currently constrained because of limited airborne acquisitions before a fire, but will become widespread with future spaceborne sensors such as those on

  9. Conductivity and transport studies of plasticized chitosan-based proton conducting biopolymer electrolytes

    International Nuclear Information System (INIS)

    Shukur, M F; Yusof, Y M; Zawawi, S M M; Illias, H A; Kadir, M F Z

    2013-01-01

This paper focuses on the conductivity and transport properties of chitosan-based solid biopolymer electrolytes containing ammonium thiocyanate (NH4SCN). The sample containing 40 wt% NH4SCN exhibited the highest conductivity value of (1.81 ± 0.50) × 10⁻⁴ S cm⁻¹ at room temperature. Conductivity increased to (1.51 ± 0.12) × 10⁻³ S cm⁻¹ with the addition of 25 wt% glycerol. The temperature dependence of conductivity for both the salted and plasticized systems obeyed the Arrhenius rule. The activation energy (Ea) was calculated for both systems: the sample with 40 wt% NH4SCN in the salted system has an Ea of 0.148 eV, and the sample containing 25 wt% glycerol in the plasticized system has an Ea of 0.139 eV. From the Fourier transform infrared studies, the carboxamide and amine bands shifted to lower wavenumbers, indicating that chitosan has interacted with the NH4SCN salt. Changes in the intensity of the C–O stretching vibration band at 1067 cm⁻¹ are observed with the addition of glycerol. The Rice and Roth model was used to explain the transport properties of the salted and plasticized systems. (paper)
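The Arrhenius temperature dependence noted above can be illustrated numerically: σ = σ₀ exp(−Ea/(kB·T)), so Ea follows from the slope of ln σ versus 1/T. In the sketch below the prefactor σ₀ is back-calculated from the reported room-temperature conductivity and Ea of the plasticized system; it is an illustrative assumption, not a value from the paper.

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def arrhenius_sigma(sigma0, e_a, temp_k):
    """Arrhenius conductivity: sigma = sigma0 * exp(-Ea / (kB * T))."""
    return sigma0 * math.exp(-e_a / (K_B * temp_k))

# Reported values for the plasticized system: Ea = 0.139 eV,
# sigma(298 K) = 1.51e-3 S/cm.  sigma0 is back-calculated (assumed).
e_a = 0.139
sigma_298 = 1.51e-3
sigma0 = sigma_298 / math.exp(-e_a / (K_B * 298.0))

# Recover Ea from the slope of ln(sigma) vs 1/T between two temperatures
t1, t2 = 298.0, 358.0
s1 = arrhenius_sigma(sigma0, e_a, t1)
s2 = arrhenius_sigma(sigma0, e_a, t2)
e_a_fit = -K_B * (math.log(s2) - math.log(s1)) / (1.0 / t2 - 1.0 / t1)
```

Plotting ln σ against 1/T for measured data and fitting a straight line is exactly how Ea values such as 0.148 eV and 0.139 eV are extracted in practice.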

  10. Video-based noncooperative iris image segmentation.

    Science.gov (United States)

    Du, Yingzi; Arslanturk, Emrah; Zhou, Zhi; Belcher, Craig

    2011-02-01

    In this paper, we propose a video-based noncooperative iris image segmentation scheme that incorporates a quality filter to quickly eliminate images without an eye, employs a coarse-to-fine segmentation scheme to improve the overall efficiency, uses a direct least squares fitting of ellipses method to model the deformed pupil and limbic boundaries, and develops a window gradient-based method to remove noise in the iris region. A remote iris acquisition system is set up to collect noncooperative iris video images. An objective method is used to quantitatively evaluate the accuracy of the segmentation results. The experimental results demonstrate the effectiveness of this method. The proposed method would make noncooperative iris recognition or iris surveillance possible.

  11. Object-Based Change Detection in Urban Areas from High Spatial Resolution Images Based on Multiple Features and Ensemble Learning

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2018-02-01

To improve the accuracy of change detection in urban areas using bi-temporal high-resolution remote sensing images, a novel object-based change detection scheme combining multiple features and ensemble learning is proposed in this paper. Image segmentation is conducted to determine the objects in bi-temporal images separately. Subsequently, three kinds of object features, i.e., spectral, shape and texture, are extracted. Using the image differencing process, a difference image is generated and used as the input for nonlinear supervised classifiers, including k-nearest neighbor, support vector machine, extreme learning machine and random forest. Finally, the results of multiple classifiers are integrated using an ensemble rule called weighted voting to generate the final change detection result. Experimental results of two pairs of real high-resolution remote sensing datasets demonstrate that the proposed approach outperforms the traditional methods in terms of overall accuracy and generates change detection maps with a higher number of homogeneous regions in urban areas. Moreover, the influences of segmentation scale and the feature selection strategy on the change detection performance are also analyzed and discussed.
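The final ensemble step — weighted voting over the change maps produced by several classifiers — can be sketched as follows. The weights here are arbitrary illustrative values standing in for whatever per-classifier weighting scheme the paper derives.

```python
import numpy as np

def weighted_vote(pred_maps, weights):
    """Combine binary change maps from several classifiers by weighted voting.

    pred_maps: list of 0/1 arrays (0 = no change, 1 = change), one per classifier.
    weights:   per-classifier weights, e.g. validation accuracies (assumed here).
    """
    weights = np.asarray(weights, dtype=float)
    stacked = np.stack(pred_maps).astype(float)
    # Weighted fraction of classifiers voting "change" at each pixel
    score = np.tensordot(weights, stacked, axes=1) / weights.sum()
    return (score >= 0.5).astype(int)

# Three toy classifiers disagreeing on two pixels
maps = [np.array([[1, 0], [1, 1]]),
        np.array([[1, 0], [0, 1]]),
        np.array([[1, 1], [0, 1]])]
fused = weighted_vote(maps, weights=[0.9, 0.7, 0.6])
```

A pixel is marked as changed only when the weighted majority of classifiers agrees, which suppresses isolated disagreements and yields the more homogeneous regions reported above.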

  12. Image magnification based on similarity analogy

    International Nuclear Information System (INIS)

    Chen Zuoping; Ye Zhenglin; Wang Shuxun; Peng Guohua

    2009-01-01

Aiming at the high time complexity of the decoding phase in traditional image enlargement methods based on fractal coding, a novel image magnification algorithm is proposed in this paper which, by exploiting the similarity analogy between an image and its zoom-out and zoom-in, has the advantage of iteration-free decoding. A new pixel selection technique is also presented to further improve the performance of the proposed method. Furthermore, by combining some existing fractal zooming techniques, an efficient image magnification algorithm is obtained which provides image quality as good as the state of the art while greatly decreasing the time complexity of the decoding phase.

  13. Complex adaptation-based LDR image rendering for 3D image reconstruction

    Science.gov (United States)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality, illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost on bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.

  14. Cogent Confabulation based Expert System for Segmentation and Classification of Natural Landscape Images

    Directory of Open Access Journals (Sweden)

    BRAOVIC, M.

    2017-05-01

With the increase in the number of automatic wildfire monitoring and surveillance systems in the last few years, natural landscape images have become of great importance. In this paper we propose an expert system for fast segmentation and classification of regions on natural landscape images that is suitable for real-time applications. We focus primarily on Mediterranean landscape images, since the Mediterranean area and areas with similar climate are those most associated with high wildfire risk. The proposed expert system is based on cogent confabulation theory and on knowledge bases that contain information about local and global features, optimal color spaces suitable for classification of certain regions, and the context of each class. The obtained results indicate that the proposed expert system significantly outperforms the well-known classifiers it was compared against in both accuracy and speed, and that it is effective and efficient for real-time applications. Additionally, we present the FESB MLID dataset, on which we conducted our research and which we have made publicly available.

  15. Normal Inverse Gaussian Model-Based Image Denoising in the NSCT Domain

    Directory of Open Access Journals (Sweden)

    Jian Jia

    2015-01-01

The objective of image denoising is to retain useful details while removing as much noise as possible to recover an original image from its noisy version. This paper proposes a novel normal inverse Gaussian (NIG) model-based method that uses a Bayesian estimator to carry out image denoising in the nonsubsampled contourlet transform (NSCT) domain. In the proposed method, the NIG model is first used to describe the distributions of the image transform coefficients of each subband in the NSCT domain. Then, the corresponding threshold function is derived from the model using Bayesian maximum a posteriori probability estimation theory. Finally, optimal linear interpolation thresholding algorithm (OLI-Shrink) is employed to guarantee a gentler thresholding effect. The results of comparative experiments conducted indicate that the denoising performance of our proposed method in terms of peak signal-to-noise ratio is superior to that of several state-of-the-art methods, including BLS-GSM, K-SVD, BivShrink, and BM3D. Further, the proposed method achieves structural similarity (SSIM) index values that are comparable to those of the block-matching 3D transformation (BM3D) method.

  16. Fast image matching algorithm based on projection characteristics

    Science.gov (United States)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional information of the image into one dimension and then matches and identifies through one-dimensional correlation; moreover, because normalization is performed, it still matches correctly when the image brightness or signal amplitude increases in proportion. Experimental results show that the proposed projection-based image registration method greatly improves matching speed while maintaining matching accuracy.
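The projection idea can be sketched as follows: row and column sums reduce each 2D patch to two 1D profiles, and normalized 1D correlation makes the score invariant to proportional brightness changes. This is a minimal illustration of the principle, not the authors' exact formulation.

```python
import numpy as np

def projections(img):
    """Reduce a 2D grayscale patch to its row and column sum profiles."""
    return img.sum(axis=1), img.sum(axis=0)

def norm_corr(a, b):
    """Normalized 1D correlation, invariant to proportional amplitude scaling."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_score(template, patch):
    """Average of row- and column-profile correlations (illustrative score)."""
    tr, tc = projections(template)
    pr, pc = projections(patch)
    return 0.5 * (norm_corr(tr, pr) + norm_corr(tc, pc))

rng = np.random.default_rng(0)
template = rng.random((16, 16))
scaled = 2.0 * template          # brightness increased in proportion
other = rng.random((16, 16))     # unrelated patch

s_scaled = match_score(template, scaled)
s_other = match_score(template, other)
```

Correlating two length-16 profiles instead of a 16 × 16 patch is where the speedup comes from: the per-candidate cost drops from O(N²) to O(N).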

  17. Thermal infrared imaging of the temporal variability in stomatal conductance for fruit trees

    Science.gov (United States)

    Struthers, Raymond; Ivanova, Anna; Tits, Laurent; Swennen, Rony; Coppin, Pol

    2015-07-01

Repeated measurements using thermal infrared remote sensing were used to characterize the change in canopy temperature over time, and the factors that influenced this change, on 'Conference' pear trees (Pyrus communis L.). Three different types of sensors were used: a leaf porometer to measure leaf stomatal conductance, a thermal infrared camera to measure the canopy temperature and a meteorological sensor to measure weather variables. Stomatal conductance of water-stressed pear was significantly lower than in the control group 9 days after stress began. This decrease in stomatal conductance reduced transpiration and hence evaporative cooling, which increased canopy temperature. Using thermal infrared imaging with wavelengths between 7.5 and 13 μm, the first significant difference was measured 18 days after stress began. A second-order derivative described the average rate of change of the difference between the stress treatment and the control group. The average rate of change with respect to days was 0.06 mmol m⁻² s⁻¹ for stomatal conductance and -0.04 °C for canopy temperature. Thermal infrared remote sensing and the data analysis presented in this study demonstrated that the differences in canopy temperature between the water stress and control treatments due to stomatal regulation can be validated.

  18. Jet-Based Local Image Descriptors

    DEFF Research Database (Denmark)

    Larsen, Anders Boesen Lindbo; Darkner, Sune; Dahl, Anders Lindbjerg

    2012-01-01

We present a general novel image descriptor based on higher-order differential geometry and investigate the effect of common descriptor choices. Our investigation is twofold in that we develop a jet-based descriptor and perform a comparative evaluation with current state-of-the-art descriptors on ...

  19. Extracting flat-field images from scene-based image sequences using phase correlation

    Energy Technology Data Exchange (ETDEWEB)

    Caron, James N., E-mail: Caron@RSImd.com [Research Support Instruments, 4325-B Forbes Boulevard, Lanham, Maryland 20706 (United States); Montes, Marcos J. [Naval Research Laboratory, Code 7231, 4555 Overlook Avenue, SW, Washington, DC 20375 (United States); Obermark, Jerome L. [Naval Research Laboratory, Code 8231, 4555 Overlook Avenue, SW, Washington, DC 20375 (United States)

    2016-06-15

Flat-field image processing is an essential step in producing high-quality and radiometrically calibrated images. Flat-fielding corrects for variations in the gain of focal plane array electronics and unequal illumination from the system optics. Typically, a flat-field image is captured by imaging a radiometrically uniform surface. The flat-field image is normalized and removed from the images. There are circumstances, such as with remote sensing, where a flat-field image cannot be acquired in this manner. For these cases, we developed a phase-correlation method that allows the extraction of an effective flat-field image from a sequence of scene-based displaced images. The method uses sub-pixel phase correlation image registration to align the sequence and estimate the static scene. The scene is then removed from the sequence, producing a sequence of misaligned flat-field images. An average flat-field image is derived from the realigned flat-field sequence.
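The registration step at the heart of this method can be sketched with integer-pixel FFT phase correlation (the paper uses a sub-pixel variant). The synthetic frames below contain no flat-field term, so the median of the aligned frames recovers the static scene exactly; in the full pipeline that scene estimate is divided out of each frame and the residual flat-field frames are realigned and averaged.

```python
import numpy as np

def phase_shift(ref, img):
    """Estimate the integer-pixel shift of img relative to ref by phase correlation."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    cross /= np.abs(cross) + 1e-12                 # keep phase information only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak coordinates to signed shifts
    return tuple(p if p < s // 2 else p - s for p, s in zip(peak, ref.shape))

# Synthetic displaced frames of one static scene
rng = np.random.default_rng(7)
scene = rng.random((64, 64))
frames = [np.roll(scene, s, axis=(0, 1)) for s in [(0, 0), (5, -3), (-7, 11)]]

# Align every frame to the first, then take a pixelwise median as the scene
est = [phase_shift(frames[0], f) for f in frames]
aligned = [np.roll(f, sh, axis=(0, 1)) for f, sh in zip(frames, est)]
scene_est = np.median(aligned, axis=0)
```

Because the scene content dominates the cross-power spectrum, the correlation surface peaks at the true displacement even when a smooth static gain pattern is superimposed on the moving scene.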

  20. Target Identification Using Harmonic Wavelet Based ISAR Imaging

    Science.gov (United States)

    Shreyamsha Kumar, B. K.; Prabhakar, B.; Suryanarayana, K.; Thilagavathi, V.; Rajagopal, R.

    2006-12-01

    A new approach has been proposed to reduce the computations involved in the ISAR imaging, which uses harmonic wavelet-(HW) based time-frequency representation (TFR). Since the HW-based TFR falls into a category of nonparametric time-frequency (T-F) analysis tool, it is computationally efficient compared to parametric T-F analysis tools such as adaptive joint time-frequency transform (AJTFT), adaptive wavelet transform (AWT), and evolutionary AWT (EAWT). Further, the performance of the proposed method of ISAR imaging is compared with the ISAR imaging by other nonparametric T-F analysis tools such as short-time Fourier transform (STFT) and Choi-Williams distribution (CWD). In the ISAR imaging, the use of HW-based TFR provides similar/better results with significant (92%) computational advantage compared to that obtained by CWD. The ISAR images thus obtained are identified using a neural network-based classification scheme with feature set invariant to translation, rotation, and scaling.

  1. Simultaneous head tissue conductivity and EEG source location estimation.

    Science.gov (United States)

    Akalin Acar, Zeynep; Acar, Can E; Makeig, Scott

    2016-01-01

Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm²-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm²-scale accurate 3-D functional cortical imaging modality. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Image-based RSA: Roentgen stereophotogrammetric analysis based on 2D-3D image registration.

    Science.gov (United States)

    de Bruin, P W; Kaptein, B L; Stoel, B C; Reiber, J H C; Rozing, P M; Valstar, E R

    2008-01-01

Image-based Roentgen stereophotogrammetric analysis (IBRSA) integrates 2D-3D image registration and conventional RSA. Instead of radiopaque RSA bone markers, IBRSA uses 3D CT data, from which digitally reconstructed radiographs (DRRs) are generated. Using 2D-3D image registration, the 3D pose of the CT is iteratively adjusted such that the generated DRRs resemble the 2D RSA images as closely as possible, according to an image matching metric. Effectively, by registering all 2D follow-up moments to the same 3D CT, the CT volume functions as common ground. In two experiments, using RSA and using a micromanipulator as gold standard, IBRSA has been validated on cadaveric and sawbone scapula radiographs, and good matching results have been achieved. The accuracy was lower than that of marker-based RSA, but higher than that of in vivo standard RSA. Because IBRSA does not require radiopaque markers, it adds functionality to the RSA method by opening new directions and possibilities for research, such as dynamic analyses using fluoroscopy on subjects without markers and computer navigation applications.

  3. Improved image retrieval based on fuzzy colour feature vector

    Science.gov (United States)

    Ben-Ahmeida, Ahlam M.; Ben Sasi, Ahmed Y.

    2013-03-01

Content-Based Image Retrieval (CBIR) is an image indexing technique that retrieves images from an image database automatically based on their visual contents, such as colour, texture, and shape. This paper discusses a CBIR method based on colour feature extraction and similarity checking: the query image and all images in the database are divided into pieces, the features of each part are extracted separately, and the corresponding portions are compared in order to increase retrieval accuracy. The proposed approach is based on the use of fuzzy sets, to overcome the problem of the curse of dimensionality. The contribution of the colour of each pixel is associated with all the bins in the histogram using fuzzy-set membership functions. As a result, the Fuzzy Colour Histogram (FCH) outperformed the Conventional Colour Histogram (CCH) in image retrieval, giving speedy results because images were represented as signatures that occupy less memory, depending on the number of divisions. The results also showed that the FCH is less sensitive and more robust to brightness changes than the CCH, with better retrieval recall values.
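The fuzzy-histogram idea can be sketched as follows: each pixel spreads its contribution over nearby bins through a membership function rather than falling into one crisp bin. The triangular membership used here is an illustrative choice; the abstract does not specify the exact function.

```python
import numpy as np

def fuzzy_histogram(channel, n_bins=8):
    """Fuzzy colour histogram of one channel with values in [0, 1].

    Each pixel contributes to the nearby bin centres with triangular
    membership weights (an assumed, illustrative membership function).
    """
    centres = (np.arange(n_bins) + 0.5) / n_bins
    width = 1.0 / n_bins
    values = channel.ravel()[:, None]
    # Membership is 1 at a bin centre and falls to 0 one bin-width away
    membership = np.clip(1.0 - np.abs(values - centres[None, :]) / width, 0.0, None)
    hist = membership.sum(axis=0)
    return hist / hist.sum()

# A pixel halfway between two bin centres contributes equally to both
h = fuzzy_histogram(np.array([[0.5]]))

# A small brightness shift moves histogram mass gradually, not in crisp jumps
rng = np.random.default_rng(3)
img = rng.random((32, 32)) * 0.8
brighter = np.clip(img + 0.02, 0.0, 1.0)
fch_dist = np.abs(fuzzy_histogram(img) - fuzzy_histogram(brighter)).sum()
```

Because membership varies continuously with pixel value, a uniform brightness change shifts mass smoothly between adjacent bins, which is the source of the robustness claimed above.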

  4. Canny edge-based deformable image registration.

    Science.gov (United States)

    Kearney, Vasant; Huang, Yihui; Mao, Weihua; Yuan, Baohong; Tang, Liping

    2017-02-07

This work focuses on developing a 2D Canny edge-based deformable image registration (Canny DIR) algorithm to register in vivo white light images taken at various time points. This method uses a sparse interpolation deformation algorithm to sparsely register regions of the image with strong edge information. A stability criterion is enforced which removes regions of edges that do not deform in a smooth uniform manner. Using a synthetic mouse surface ground truth model, the accuracy of the Canny DIR algorithm was evaluated under axial rotation in the presence of deformation. The accuracy was also tested using fluorescent dye injections, which were then used for gamma analysis to establish a second ground truth. The results indicate that the Canny DIR algorithm performs better than rigid registration, intensity-corrected Demons, and distinctive features for all evaluation metrics and ground truth scenarios. In conclusion, Canny DIR performs well in the presence of the unique lighting and shading variations associated with white-light-based image registration.

  5. Microprocessor based image processing system

    International Nuclear Information System (INIS)

    Mirza, M.I.; Siddiqui, M.N.; Rangoonwala, A.

    1987-01-01

Rapid developments in the production of integrated circuits and the introduction of sophisticated 8-, 16- and now 32-bit microprocessor-based computers have set new trends in computer applications. Nowadays users can, by investing much less money, make optimal use of smaller systems custom-tailored to their requirements. During the past decade there have been great advancements in the field of computer graphics and, consequently, 'Image Processing' has emerged as a separate independent field. Image processing is being used in a number of disciplines. In the medical sciences, it is used to construct pseudo-colour images from computer aided tomography (CAT) or positron emission tomography (PET) scanners. Art, advertising and publishing people use pseudo-colours in pursuit of more effective graphics. Structural engineers use image processing to examine weld X-rays to search for imperfections. Photographers use image processing for various enhancements which are difficult to achieve in a conventional darkroom. (author)

  6. Optical conductivity of iron-based superconductors

    International Nuclear Information System (INIS)

    Charnukha, A

    2014-01-01

The new family of unconventional iron-based superconductors discovered in 2006 immediately displaced their copper-based high-temperature predecessors as the most actively studied superconducting compounds in the world. The experimental and theoretical effort made in order to unravel the mechanism of superconductivity in these materials has been overwhelming. Although our understanding of their microscopic properties has been improving steadily, the pairing mechanism giving rise to superconducting transition temperatures up to 55 K remains elusive. And yet the hope is strong that these materials, which possess a drastically different electronic structure but similarly high transition temperatures compared to the copper-based compounds, will shed essential new light onto the several-decade-old problem of unconventional superconductivity. In this work we review the current understanding of the itinerant-charge-carrier dynamics in the iron-based superconductors and parent compounds, largely based on the optical-conductivity data the community has gleaned over the past seven years using such experimental techniques as reflectivity, ellipsometry, and terahertz transmission measurements, and analyze the implications of these studies for the microscopic properties of the iron-based materials as well as the mechanism of superconductivity therein. (topical review)

  7. Conductance Effects on Inner Magnetospheric Plasma Morphology: Model Comparisons with IMAGE EUV, MENA, and HENA Data

    Science.gov (United States)

    Liemohn, M.; Ridley, A. J.; Kozyra, J. U.; Gallagher, D. L.; Brandt, P. C.; Henderson, M. G.; Denton, M. H.; Jahn, J. M.; Roelof, E. C.; DeMajistre, R. M.

    2004-01-01

Modeling results of the inner magnetosphere showing the influence of the ionospheric conductance on the inner magnetospheric electric fields during the April 17, 2002 magnetic storm are presented. Kinetic plasma transport code results are analyzed in combination with observations of the inner magnetospheric plasma populations, in particular those from the IMAGE satellite. Qualitative and quantitative comparisons are made with the observations from EUV, MENA, and HENA, covering the entire energy range simulated by the model (0 to 300 keV). The electric field description, and in particular the ionospheric conductance, is the only variable between the simulations. Results from the data-model comparisons are discussed, detailing the strengths and weaknesses of each conductance choice for each energy channel.

  8. LSB Based Quantum Image Steganography Algorithm

    Science.gov (United States)

    Jiang, Nan; Zhao, Na; Wang, Luo

    2016-01-01

    Quantum steganography is the technique which hides a secret message into quantum covers such as quantum images. In this paper, two blind LSB steganography algorithms in the form of quantum circuits are proposed based on the novel enhanced quantum representation (NEQR) for quantum images. One algorithm is plain LSB which uses the message bits to substitute for the pixels' LSB directly. The other is block LSB which embeds a message bit into a number of pixels that belong to one image block. The extracting circuits can regain the secret message only according to the stego cover. Analysis and simulation-based experimental results demonstrate that the invisibility is good, and the balance between the capacity and the robustness can be adjusted according to the needs of applications.
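The plain-LSB rule the NEQR circuits implement has a direct classical analogue, which may help make the embedding and blind extraction concrete. Plain integer pixel lists stand in for quantum image states here; this is an illustrative sketch, not the quantum circuit itself.

```python
# Classical plain-LSB embedding: each message bit replaces one pixel's least
# significant bit, and blind extraction reads the LSBs back from the stego
# cover alone (no original cover needed), mirroring the paper's plain-LSB case.

def embed_lsb(pixels, bits):
    """Replace the LSB of the first len(bits) pixels with the message bits."""
    stego = list(pixels)
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & ~1) | b
    return stego

def extract_lsb(stego, n_bits):
    """Blind extraction: only the stego cover is required."""
    return [p & 1 for p in stego[:n_bits]]

cover = [200, 13, 255, 0, 97, 64]
message = [1, 0, 1, 1]
stego = embed_lsb(cover, message)
recovered = extract_lsb(stego, len(message))
```

Each pixel changes by at most 1 grey level, which is why the invisibility of LSB schemes is good; the paper's block-LSB variant trades capacity for robustness by spreading one bit over a block of pixels.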

  9. A Subdivision-Based Representation for Vector Image Editing.

    Science.gov (United States)

    Liao, Zicheng; Hoppe, Hugues; Forsyth, David; Yu, Yizhou

    2012-11-01

    Vector graphics has been employed in a wide variety of applications due to its scalability and editability. Editability is a high priority for artists and designers who wish to produce vector-based graphical content with user interaction. In this paper, we introduce a new vector image representation based on piecewise smooth subdivision surfaces, which is a simple, unified and flexible framework that supports a variety of operations, including shape editing, color editing, image stylization, and vector image processing. These operations effectively create novel vector graphics by reusing and altering existing image vectorization results. Because image vectorization yields an abstraction of the original raster image, controlling the level of detail of this abstraction is highly desirable. To this end, we design a feature-oriented vector image pyramid that offers multiple levels of abstraction simultaneously. Our new vector image representation can be rasterized efficiently using GPU-accelerated subdivision. Experiments indicate that our vector image representation achieves high visual quality and better supports editing operations than existing representations.

  10. ZnO-Based Transparent Conductive Thin Films: Doping, Performance, and Processing

    International Nuclear Information System (INIS)

    Liu, Y.; Li, Y.; Zeng, H.

    2013-01-01

ZnO-based transparent conductive thin films have attracted much attention as a promising substitute for the currently used indium-tin-oxide thin films in transparent electrode applications. However, the detailed function of the dopants acting on the electrical and optical properties of ZnO-based transparent conductive thin films is not yet clear, which has limited the development and practical application of ZnO transparent conductive thin films. Growth conditions such as substrate type, growth temperature, and ambient atmosphere all play important roles in the structural, electrical, and optical properties of the films. This paper takes a panoramic view of the properties of ZnO thin films and reviews very recent work on new, efficient, low-temperature, and high-speed deposition technologies. In addition, we highlight methods of producing ZnO-based transparent conductive films on flexible substrates, one of the most promising and rapidly emerging research areas. As optimum processing conditions are obtained and their influencing mechanisms become clear, we can see a promising future for ZnO-based transparent conductive films.

  11. Remote sensing image segmentation based on Hadoop cloud platform

    Science.gov (United States)

    Li, Jie; Zhu, Lingling; Cao, Fubin

    2018-01-01

To solve the problem that remote sensing image segmentation is slow and its real-time performance poor, this paper studies remote sensing image segmentation on the Hadoop platform. On the basis of analyzing the structural characteristics of the Hadoop cloud platform and its MapReduce programming component, this paper proposes an image segmentation method combining OpenCV with the Hadoop cloud platform. Firstly, the MapReduce image processing model for the Hadoop cloud platform is designed, the image input and output are customized, and the splitting method for the data file is rewritten. Then the Mean Shift image segmentation algorithm is implemented. Finally, this paper conducts a segmentation experiment on remote sensing imagery and compares it against the same Mean Shift segmentation implemented in MATLAB. The experimental results show that, while preserving good segmentation quality, remote sensing image segmentation on the Hadoop cloud platform is greatly accelerated compared with single-machine MATLAB segmentation, and the effectiveness of image segmentation is also much improved.
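The Mean Shift iteration at the core of the segmentation step can be sketched in one dimension. A flat kernel and scalar data are simplifying assumptions; the actual algorithm runs the same iteration on joint colour/position feature vectors of the image.

```python
# Minimal 1-D mean-shift sketch: each point is repeatedly shifted to the mean
# of its neighbours within a bandwidth window, so points converge to the modes
# of the underlying density. Points sharing a mode form one segment.

def mean_shift_point(x, data, bandwidth=2.0, iters=50):
    for _ in range(iters):
        neighbours = [d for d in data if abs(d - x) <= bandwidth]
        new_x = sum(neighbours) / len(neighbours)
        if abs(new_x - x) < 1e-6:          # converged to a mode
            break
        x = new_x
    return x

# Two clear clusters; all members of each cluster drift to the same mode.
data = [1.0, 1.2, 0.8, 10.0, 10.3, 9.7]
modes = sorted({round(mean_shift_point(x, data), 3) for x in data})
```

In the MapReduce setting described above, mappers would run this iteration on image tiles in parallel and reducers would merge the resulting segment labels.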

  12. Image registration based on virtual frame sequence analysis

    Energy Technology Data Exchange (ETDEWEB)

Chen, H.; Ng, W.S. [Nanyang Technological University, Computer Integrated Medical Intervention Laboratory, School of Mechanical and Aerospace Engineering, Singapore (Singapore)]; Shi, D. [Nanyang Technological University, School of Computer Engineering, Singapore (Singapore)]; Wee, S.B. [Tan Tock Seng Hospital, Department of General Surgery, Singapore (Singapore)]

    2007-08-15

This paper proposes a new framework for medical image registration with large nonrigid deformations, which remain one of the biggest challenges for image fusion and further analysis in many medical applications. The registration problem is formulated as recovering a deformation process with known initial and final states. To deal with large nonlinear deformations, virtual frames are inserted to model the deformation process. A time parameter is introduced, and the deformation between consecutive frames is described with a linear affine transformation. Experiments are conducted with simple geometric deformations as well as complex deformations present in MRI and ultrasound images. All the deformations are characterized by nonlinearity. The positive results demonstrate the effectiveness of this algorithm. The proposed framework is feasible for registering medical images with large nonlinear deformations and is especially useful for sequential images. (orig.)
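The virtual-frame idea — splitting one large deformation into small per-frame affine steps governed by a time parameter — can be sketched as follows. Entrywise linear interpolation from the identity to the final affine map is an illustrative assumption; the paper estimates each inter-frame affine from the data.

```python
# Sketch of virtual frames: a large deformation from the initial state (t=0,
# identity) to the final state (t=1, `final`) is modelled by an affine map
# A(t) whose entries are interpolated linearly in the time parameter t.

def lerp_affine(final, t):
    """2x3 affine [[a,b,tx],[c,d,ty]] interpolated from the identity at time t."""
    ident = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
    return [[(1 - t) * i + t * f for i, f in zip(ri, rf)]
            for ri, rf in zip(ident, final)]

def apply_affine(A, p):
    x, y = p
    return (A[0][0] * x + A[0][1] * y + A[0][2],
            A[1][0] * x + A[1][1] * y + A[1][2])

final = [[1.0, 0.0, 4.0], [0.0, 1.0, -2.0]]   # pure translation by (4, -2)
halfway = apply_affine(lerp_affine(final, 0.5), (1.0, 1.0))
```

Evaluating A(t) at intermediate t values produces the virtual frames; chaining small steps like this is what keeps each inter-frame deformation approximately linear.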

  13. The influence of reduced graphene oxide on electrical conductivity of LiFePO{sub 4}-based composite as cathode material

    Energy Technology Data Exchange (ETDEWEB)

    Arifin, Muhammad; Aimon, Akfiny Hasdi; Winata, Toto; Abdullah, Mikrajuddin [Physics of Electronic Materials Research Division, Department of Physics, Institut Teknologi Bandung, Bandung 40132 Indonesia (Indonesia); Iskandar, Ferry, E-mail: ferry@fi.itb.ac.id [Physics of Electronic Materials Research Division, Department of Physics, Institut Teknologi Bandung, Bandung 40132 Indonesia (Indonesia); Research Center for Nanoscience and Nanotechnology Institut Teknologi Bandung, Bandung 40132 Indonesia (Indonesia)

    2016-02-08

LiFePO{sub 4} is a fascinating cathode active material for Li-ion battery applications because of its high electrochemical performance, with a stable voltage at 3.45 V and a high specific capacity of 170 mAh.g{sup −1}. However, its low intrinsic electronic conductivity and low ionic diffusion still hinder its further application in Li-ion batteries. Efforts to improve its conductivity are therefore very important to its prospective application as a cathode material. Herein, we report the preparation of a LiFePO{sub 4}-based composite with added reduced graphene oxide (rGO) via a hydrothermal method, and the influence of rGO on the electrical conductivity of the composite as the rGO mass fraction is varied. Vibrations of the LiFePO{sub 4}-based composite were detected in the Fourier Transform Infrared Spectroscopy (FTIR) spectra, while single-phase LiFePO{sub 4} nanocrystals were observed in the X-Ray Diffraction (XRD) pattern; furthermore, Scanning Electron Microscopy (SEM) images showed that rGO was distributed around the LiFePO{sub 4}-based composite. Finally, 4-point probe measurements confirmed that, within the 1 to 2 wt% range studied, the optimum electrical conductivity was obtained at 2 wt% rGO.
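The 4-point probe measurement mentioned at the end is conventionally evaluated with the standard collinear thin-film formula, sheet resistance Rs = (π/ln 2)·V/I and conductivity σ = 1/(Rs·t). The numbers below are illustrative, not values from the paper.

```python
import math

# Standard collinear four-point-probe estimate for a thin film: sheet
# resistance Rs = (pi/ln 2) * V/I (ohm/square), bulk conductivity
# sigma = 1/(Rs * t) for film thickness t. Inputs are illustrative.

def conductivity_4pp(voltage_V, current_A, thickness_cm):
    sheet_resistance = (math.pi / math.log(2)) * voltage_V / current_A
    return 1.0 / (sheet_resistance * thickness_cm)   # S/cm

# e.g. 10 mV measured at 1 mA through a 1-micrometre (1e-4 cm) film
sigma = conductivity_4pp(voltage_V=0.01, current_A=1e-3, thickness_cm=1e-4)
```

Comparing σ across samples with different rGO loadings is how an optimum such as the 2 wt% value above would be identified.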

  14. Image based SAR product simulation for analysis

    Science.gov (United States)

    Domik, G.; Leberl, F.

    1987-01-01

SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new product-simulation method is described that also employs a real SAR image as input. This can be denoted 'image-based simulation'. Different methods to perform this SAR prediction are presented and their advantages and disadvantages discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used for verification of the concept: input images from ascending orbits were converted into images from a descending orbit, and the results are compared to the available real imagery to verify that the prediction technique produces meaningful image data.

  15. Dialog-based Interactive Image Retrieval

    OpenAIRE

    Guo, Xiaoxiao; Wu, Hui; Cheng, Yu; Rennie, Steven; Feris, Rogerio Schmidt

    2018-01-01

    Existing methods for interactive image retrieval have demonstrated the merit of integrating user feedback, improving retrieval results. However, most current systems rely on restricted forms of user feedback, such as binary relevance responses, or feedback based on a fixed set of relative attributes, which limits their impact. In this paper, we introduce a new approach to interactive image search that enables users to provide feedback via natural language, allowing for more natural and effect...

  16. Skull base tumours part I: Imaging technique, anatomy and anterior skull base tumours

    Energy Technology Data Exchange (ETDEWEB)

    Borges, Alexandra [Instituto Portugues de Oncologia Francisco Gentil, Centro de Lisboa, Servico de Radiologia, Rua Professor Lima Basto, 1093 Lisboa Codex (Portugal)], E-mail: borgesalexandra@clix.pt

    2008-06-15

    Advances in cross-sectional imaging, surgical technique and adjuvant treatment have largely contributed to ameliorate the prognosis, lessen the morbidity and mortality of patients with skull base tumours and to the growing medical investment in the management of these patients. Because clinical assessment of the skull base is limited, cross-sectional imaging became indispensable in the diagnosis, treatment planning and follow-up of patients with suspected skull base pathology and the radiologist is increasingly responsible for the fate of these patients. This review will focus on the advances in imaging technique; contribution to patient's management and on the imaging features of the most common tumours affecting the anterior skull base. Emphasis is given to a systematic approach to skull base pathology based upon an anatomic division taking into account the major tissue constituents in each skull base compartment. The most relevant information that should be conveyed to surgeons and radiation oncologists involved in patient's management will be discussed.

  17. Skull base tumours part I: Imaging technique, anatomy and anterior skull base tumours

    International Nuclear Information System (INIS)

    Borges, Alexandra

    2008-01-01

    Advances in cross-sectional imaging, surgical technique and adjuvant treatment have largely contributed to ameliorate the prognosis, lessen the morbidity and mortality of patients with skull base tumours and to the growing medical investment in the management of these patients. Because clinical assessment of the skull base is limited, cross-sectional imaging became indispensable in the diagnosis, treatment planning and follow-up of patients with suspected skull base pathology and the radiologist is increasingly responsible for the fate of these patients. This review will focus on the advances in imaging technique; contribution to patient's management and on the imaging features of the most common tumours affecting the anterior skull base. Emphasis is given to a systematic approach to skull base pathology based upon an anatomic division taking into account the major tissue constituents in each skull base compartment. The most relevant information that should be conveyed to surgeons and radiation oncologists involved in patient's management will be discussed

  18. Optical image transformation and encryption by phase-retrieval-based double random-phase encoding and compressive ghost imaging

    Science.gov (United States)

    Yuan, Sheng; Yang, Yangrui; Liu, Xuemei; Zhou, Xin; Wei, Zhenzhuo

    2018-01-01

An optical image transformation and encryption scheme is proposed based on double random-phase encoding (DRPE) and compressive ghost imaging (CGI) techniques. In this scheme, a secret image is first transformed into a binary image with the phase-retrieval-based DRPE technique, and then encoded by a series of random amplitude patterns according to the ghost imaging (GI) principle. Compressive sensing and morphological erosion and dilation operations are implemented to retrieve the secret image in the decryption process. This encryption scheme takes advantage of the complementary capabilities offered by the phase-retrieval-based DRPE and GI-based encryption techniques. That is, the phase-retrieval-based DRPE is used to overcome the blurring defect of the decrypted image in GI-based encryption, while the CGI not only reduces the data amount of the ciphertext, but also enhances the security of the DRPE. Computer simulation results are presented to verify the performance of the proposed encryption scheme.

  19. Multi-viewpoint Image Array Virtual Viewpoint Rapid Generation Algorithm Based on Image Layering

    Science.gov (United States)

    Jiang, Lu; Piao, Yan

    2018-04-01

The use of multi-view image arrays combined with virtual viewpoint generation technology to record 3D scene information in large scenes has become one of the key technologies for the development of integral imaging. This paper presents a virtual viewpoint rendering method based on an image layering algorithm. Firstly, the depth information of the reference viewpoint image is quickly obtained, with SAD chosen as the similarity measure. The reference image is then layered and the parallax is calculated from the depth information. Based on the relative distance between the virtual viewpoint and the reference viewpoint, the image layers are weighted and panned. Finally, the virtual viewpoint image is rendered layer by layer according to the distance between the image layers and the viewer. This method avoids the disadvantages of the DIBR algorithm, such as its high-precision requirements on the depth map and its complex mapping operations. Experiments show that this algorithm can synthesize virtual viewpoints at any position within the 2×2 viewpoint range, and the rendering speed is also very impressive. On average, the method delivers satisfactory image quality: the average SSIM value of the results relative to real viewpoint images reaches 0.9525, the PSNR reaches 38.353 dB, and the image histogram similarity reaches 93.77%.
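The PSNR figure quoted above follows the standard definition PSNR = 10·log10(MAX²/MSE) for 8-bit images (MAX = 255). A minimal sketch on flat pixel lists, with illustrative values rather than the paper's data:

```python
import math

# Peak signal-to-noise ratio between a reference image and a test image,
# both given as flat lists of 8-bit pixel values.

def psnr(reference, test, max_val=255.0):
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")              # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

ref = [52, 55, 61, 66, 70, 61, 64, 73]
deg = [50, 55, 60, 66, 72, 61, 65, 73]   # slightly perturbed copy
value = psnr(ref, deg)
```

Higher is better; a rendered view scoring around 38 dB against the real view, as reported above, indicates only small pixelwise differences.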

  20. Image dissimilarity-based quantification of lung disease from CT

    DEFF Research Database (Denmark)

    Sørensen, Lauge; Loog, Marco; Lo, Pechin

    2010-01-01

In this paper, we propose to classify medical images using dissimilarities computed between collections of regions of interest. The images are mapped into a dissimilarity space using an image dissimilarity measure, and a standard vector space-based classifier is applied in this space.
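The dissimilarity-space construction can be sketched as follows. Plain number lists, a Euclidean dissimilarity, and a nearest-mean classifier are illustrative stand-ins for the ROI collections, image dissimilarity measure, and vector-space classifier used in the paper.

```python
# Sketch of dissimilarity-space classification: each object is represented by
# its vector of dissimilarities to a fixed prototype set, and any standard
# vector-space classifier can then be applied to those vectors.

def dissim(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def to_dissim_space(obj, prototypes):
    return [dissim(obj, p) for p in prototypes]

prototypes = [[0.0, 0.0], [10.0, 10.0]]
train = {"healthy":  [[1.0, 0.0], [0.0, 1.0]],
         "diseased": [[9.0, 10.0], [10.0, 9.0]]}

# Nearest-mean classifier: one class mean per label, in dissimilarity space.
means = {}
for label, objs in train.items():
    vecs = [to_dissim_space(o, prototypes) for o in objs]
    means[label] = [sum(c) / len(c) for c in zip(*vecs)]

def classify(obj):
    v = to_dissim_space(obj, prototypes)
    return min(means, key=lambda lab: dissim(v, means[lab]))

pred = classify([0.5, 0.5])
```

The appeal of the construction is that the objects themselves need not live in a vector space; only the pairwise dissimilarity measure has to be defined.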

  1. BEE FORAGE MAPPING BASED ON MULTISPECTRAL IMAGES LANDSAT

    Directory of Open Access Journals (Sweden)

    A. Moskalenko

    2016-10-01

Possibilities of bee forage identification and mapping based on multispectral images are shown in this research. The spectral brightness of bee forage has been determined with the use of satellite images, and the effectiveness of several image classification methods for mapping bee forage is demonstrated. Keywords: bee forage, mapping, multispectral images, image classification.

  2. Preoperative magnetic resonance imaging protocol for endoscopic cranial base image-guided surgery.

    Science.gov (United States)

    Grindle, Christopher R; Curry, Joseph M; Kang, Melissa D; Evans, James J; Rosen, Marc R

    2011-01-01

Despite the increasing utilization of image-guided surgery, no radiology protocols for obtaining magnetic resonance (MR) imaging of adequate quality are available in the current literature. At our institution, more than 300 endonasal cranial base procedures, including pituitary, extended pituitary, and other anterior skull base procedures, have been performed in the past 3 years. To facilitate and optimize preoperative evaluation and assessment, there was a need to develop an MR protocol. A retrospective technical assessment was performed. Through a collaborative effort between the otolaryngology, neurosurgery, and neuroradiology departments at our institution, a skull base MR image-guided surgery (IGS) protocol was developed with several ends in mind. First, it was necessary to generate diagnostic images useful for the more frequently seen pathologies, to improve workflow and limit the expense and inefficiency of case-specific MR studies. Second, it was necessary to generate sequences useful for IGS, preferably sequences that best highlight the lesion. Currently, at our institution, all MR images used for IGS are obtained using this protocol as part of preoperative planning. The protocol that has been developed provides thin-cut precontrast and postcontrast axial series that can be used to plan intraoperative image guidance. It also obtains a thin-cut T2 axial series that can be compiled separately for intraoperative imaging, or fused with computed tomographic images for combined modality. The outlined protocol obtains image sequences effective for diagnostic and operative purposes in image-guided surgery, using both T1 and T2 sequences. Copyright © 2011 Elsevier Inc. All rights reserved.

  3. Image compression of bone images

    International Nuclear Information System (INIS)

    Hayrapetian, A.; Kangarloo, H.; Chan, K.K.; Ho, B.; Huang, H.K.

    1989-01-01

    This paper reports a receiver operating characteristic (ROC) experiment conducted to compare the diagnostic performance of a compressed bone image with the original. The compression was done on custom hardware that implements an algorithm based on full-frame cosine transform. The compression ratio in this study is approximately 10:1, which was decided after a pilot experiment. The image set consisted of 45 hand images, including normal images and images containing osteomalacia and osteitis fibrosa. Each image was digitized with a laser film scanner to 2,048 x 2,048 x 8 bits. Six observers, all board-certified radiologists, participated in the experiment. For each ROC session, an independent ROC curve was constructed and the area under that curve calculated. The image set was randomized for each session, as was the order for viewing the original and reconstructed images. Analysis of variance was used to analyze the data and derive statistically significant results. The preliminary results indicate that the diagnostic quality of the reconstructed image is comparable to that of the original image
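The full-frame cosine-transform compression evaluated in this study can be sketched as: take an orthonormal 2-D DCT of the whole image, keep only the largest coefficients, and invert. The 4x4 "image", pure-Python matrix code, and kept-coefficient count below are illustrative stand-ins for the 2,048 x 2,048 hardware pipeline and its roughly 10:1 ratio.

```python
import math

# Full-frame DCT compression sketch: coeffs = D * X * D^T with an orthonormal
# DCT-II matrix D; small coefficients are zeroed; X is recovered as D^T * Y * D.

def dct_matrix(n):
    return [[math.sqrt((1 if k == 0 else 2) / n) *
             math.cos(math.pi * (i + 0.5) * k / n) for i in range(n)]
            for k in range(n)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(m):
    return [list(r) for r in zip(*m)]

def compress(img, keep):
    n = len(img)
    d = dct_matrix(n)
    coeffs = matmul(matmul(d, img), transpose(d))        # full-frame 2-D DCT
    flat = sorted((abs(c) for row in coeffs for c in row), reverse=True)
    thresh = flat[keep - 1]                              # keep largest `keep`
    kept = [[c if abs(c) >= thresh else 0.0 for c in row] for row in coeffs]
    return matmul(matmul(transpose(d), kept), d)         # inverse 2-D DCT

img = [[10.0, 10.0, 12.0, 12.0]] * 4                     # smooth test "image"
recon = compress(img, keep=3)
err = max(abs(a - b) for ra, rb in zip(img, recon) for a, b in zip(ra, rb))
```

Smooth images concentrate their energy in a few low-frequency coefficients, which is why aggressive coefficient truncation can still yield diagnostically comparable images, as the ROC experiment above suggests.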

  4. A data grid for imaging-based clinical trials

    Science.gov (United States)

    Zhou, Zheng; Chao, Sander S.; Lee, Jasper; Liu, Brent; Documet, Jorge; Huang, H. K.

    2007-03-01

    Clinical trials play a crucial role in testing new drugs or devices in modern medicine. Medical imaging has also become an important tool in clinical trials because images provide a unique and fast diagnosis with visual observation and quantitative assessment. A typical imaging-based clinical trial consists of: 1) A well-defined rigorous clinical trial protocol, 2) a radiology core that has a quality control mechanism, a biostatistics component, and a server for storing and distributing data and analysis results; and 3) many field sites that generate and send image studies to the radiology core. As the number of clinical trials increases, it becomes a challenge for a radiology core servicing multiple trials to have a server robust enough to administrate and quickly distribute information to participating radiologists/clinicians worldwide. The Data Grid can satisfy the aforementioned requirements of imaging based clinical trials. In this paper, we present a Data Grid architecture for imaging-based clinical trials. A Data Grid prototype has been implemented in the Image Processing and Informatics (IPI) Laboratory at the University of Southern California to test and evaluate performance in storing trial images and analysis results for a clinical trial. The implementation methodology and evaluation protocol of the Data Grid are presented.

  5. Image fusion between whole body FDG PET images and whole body MRI images using a full-automatic mutual information-based multimodality image registration software

    International Nuclear Information System (INIS)

    Uchida, Yoshitaka; Nakano, Yoshitada; Fujibuchi, Toshiou; Isobe, Tomoko; Kazama, Toshiki; Ito, Hisao

    2006-01-01

We attempted image fusion between whole-body PET and whole-body MRI of thirty patients using a fully automatic mutual information (MI)-based multimodality image registration software package, and evaluated the accuracy of this method and the impact of the coregistered images on diagnostic accuracy. For 25 of the 30 fused images in the body area, translation gaps were within 6 mm along all axes and rotation gaps were within 2 degrees around all axes. In the head and neck area, considerable gaps caused by differences in head inclination at imaging occurred in 16 patients; however, these gaps could be reduced by fusing this region separately. In 6 patients, diagnostic accuracy using the PET/MRI fused images was superior to that of PET alone. This work shows that whole-body FDG PET images and whole-body MRI images can be fused automatically and accurately using MI-based multimodality image registration software, and that this technique can add useful information when evaluating FDG PET images. (author)
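A minimal sketch of the mutual-information criterion such registration software maximises: MI(A, B) = H(A) + H(B) − H(A, B), estimated from a joint intensity histogram. Toy integer intensity lists stand in for the PET and MRI volumes.

```python
import math
from collections import Counter

# Mutual information between two images from their (joint) intensity
# histograms. Correct alignment makes intensities predict one another, so the
# registration search shifts/rotates one image to maximise this value.

def entropy(counts, total):
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def mutual_information(img_a, img_b):
    n = len(img_a)
    joint = Counter(zip(img_a, img_b))
    return (entropy(Counter(img_a), n) + entropy(Counter(img_b), n)
            - entropy(joint, n))

a = [0, 0, 1, 1, 2, 2]
aligned  = [5, 5, 7, 7, 9, 9]   # one-to-one intensity correspondence
shuffled = [7, 9, 5, 9, 5, 7]   # correspondence destroyed, as by misalignment
```

MI is well suited to PET/MRI fusion precisely because it rewards any consistent intensity relationship, without assuming the two modalities have similar grey values.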

  6. GPU-based parallel algorithm for blind image restoration using midfrequency-based methods

    Science.gov (United States)

    Xie, Lang; Luo, Yi-han; Bao, Qi-liang

    2013-08-01

GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for the GPU hardware architecture is of great significance. In order to solve the problems of high computational complexity and poor real-time performance in blind image restoration, the midfrequency-based algorithm for blind image restoration was analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is used to restore the image with hardly any recursion or iteration. Combining the algorithm with data intensiveness, data-parallel computing, and the GPU execution model of single instruction, multiple threads, a new parallel midfrequency-based algorithm for blind image restoration is proposed in this paper, which is suitable for GPU stream computing. In this algorithm, the GPU is utilized to accelerate the estimation of class-G point spread functions and the midfrequency-based filtering. For better management of the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain, after optimization of data access and of the communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data so that the transmission rate is not constrained by the memory bandwidth limitation. The results show that, with the new algorithm, the operational speed is significantly increased and the real-time performance of image restoration is effectively improved, especially for high-resolution images.

  7. SU-E-J-237: Image Feature Based DRR and Portal Image Registration

    Energy Technology Data Exchange (ETDEWEB)

    Wang, X; Chang, J [NY Weill Cornell Medical Ctr, NY (United States)

    2014-06-01

Purpose: Two-dimensional (2D) matching of kV X-ray and digitally reconstructed radiography (DRR) images is an important setup technique for image-guided radiotherapy (IGRT). In our clinics, mutual-information-based methods are used for this purpose on commercial linear accelerators, but they often need manual correction. This work demonstrates the feasibility of using feature-based image transforms to register kV and DRR images. Methods: The scale-invariant feature transform (SIFT) method was implemented to detect matching image details (or key points) between the kV and DRR images. These key points represent high image intensity gradients and thus scale-invariant features. Due to the poor contrast of our kV images, direct application of the SIFT method yielded many detection errors. To assist the finding of key points, the center coordinates of the kV and DRR images were read from the DICOM header, and the two groups of key points with similar relative positions to their corresponding centers were paired up. Using these points, a rigid transform (with scaling, horizontal and vertical shifts) was estimated. We also artificially introduced vertical and horizontal shifts to test the accuracy of our registration method on anterior-posterior (AP) and lateral pelvic images. Results: The results provided a satisfactory overlay of the transformed kV image onto the DRR image. The introduced vs. detected shifts were fit with a linear regression. In the AP image experiments, linear regression analysis showed slopes of 1.15 and 0.98 with R2 values of 0.89 and 0.99 for the horizontal and vertical shifts, respectively. The corresponding values are 1.2 and 1.3 with R2 of 0.72 and 0.82 for the lateral image shifts. Conclusion: This work provided an alternative technique for kV-to-DRR alignment. Further improvements in estimation accuracy and image contrast tolerance are underway.
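Estimating a rigid transform (scale plus horizontal and vertical shifts) from paired key points reduces, per axis, to fitting x' = s·x + t by least squares. The point pairs below are synthetic, not actual SIFT matches; the closed-form solution is the standard one.

```python
# Per-axis least-squares fit of x' = s*x + t from paired key-point
# coordinates: s = cov(src, dst) / var(src), t = mean(dst) - s * mean(src).

def fit_scale_shift(src, dst):
    n = len(src)
    mx = sum(src) / n
    my = sum(dst) / n
    var = sum((x - mx) ** 2 for x in src)
    cov = sum((x - mx) * (y - my) for x, y in zip(src, dst))
    s = cov / var
    return s, my - s * mx

src_x = [10.0, 40.0, 70.0, 100.0]                 # kV key-point x-coordinates
dst_x = [1.1 * x + 4.0 for x in src_x]            # synthetic truth: s=1.1, t=4
scale, shift = fit_scale_shift(src_x, dst_x)
```

Running the same fit on the y-coordinates yields the vertical shift; comparing detected against introduced shifts, as in the abstract, then amounts to a second regression over many such trials.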

  8. Continuous Nondestructive Monitoring Method Using the Reconstructed Three-Dimensional Conductivity Images via GREIT for Tissue Engineering

    Directory of Open Access Journals (Sweden)

    Sujin Ahn

    2014-01-01

A continuous nondestructive monitoring method is required to apply proper feedback control during tissue regeneration. Conductivity is one valuable piece of information for assessing the physiological function and structural formation of regenerated tissues or cultured cells. However, conductivity imaging methods have suffered from inherently ill-posed image reconstruction, unknown boundary geometry, uncertainty in electrode position, and systematic artifacts. In order to overcome the limitations of microscopic electrical impedance tomography (micro-EIT), we applied a 3D-specific container with a fixed boundary geometry and electrode configuration to maximize the performance of the Graz consensus reconstruction algorithm for EIT (GREIT). The separation of driving and sensing electrodes allows us to simplify the hardware complexity and obtain higher measurement accuracy from a large number of small sensing electrodes. We investigated the applicability of GREIT to 3D micro-EIT imaging via numerical simulations and large-scale phantom experiments. We could reconstruct multiple objects regardless of their location. The resolution was 5 mm³ at 30 dB SNR and the position error was less than 2.54 mm. This shows that the new micro-EIT system integrated with GREIT is robust at the intended resolution. With further refinement and scaling down to a microscale container, it may become a continuous nondestructive monitoring tool for tissue engineering applications.

  9. Optical image encryption method based on incoherent imaging and polarized light encoding

    Science.gov (United States)

    Wang, Q.; Xiong, D.; Alfalou, A.; Brosseau, C.

    2018-05-01

    We propose an incoherent encoding system for image encryption based on a polarized encoding method combined with an incoherent imaging. Incoherent imaging is the core component of this proposal, in which the incoherent point-spread function (PSF) of the imaging system serves as the main key to encode the input intensity distribution thanks to a convolution operation. An array of retarders and polarizers is placed on the input plane of the imaging structure to encrypt the polarized state of light based on Mueller polarization calculus. The proposal makes full use of randomness of polarization parameters and incoherent PSF so that a multidimensional key space is generated to deal with illegal attacks. Mueller polarization calculus and incoherent illumination of imaging structure ensure that only intensity information is manipulated. Another key advantage is that complicated processing and recording related to a complex-valued signal are avoided. The encoded information is just an intensity distribution, which is advantageous for data storage and transition because information expansion accompanying conventional encryption methods is also avoided. The decryption procedure can be performed digitally or using optoelectronic devices. Numerical simulation tests demonstrate the validity of the proposed scheme.
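The core encoding step — the input intensity convolved with a random incoherent PSF that serves as the key — can be sketched in one dimension. Circular convolution via a naive DFT and frequency-domain division for decryption are simplifying assumptions; the actual scheme works on 2-D intensities and adds the polarization-encoding layer.

```python
import cmath
import random

# Sketch of PSF-based incoherent encoding: ciphertext = signal (*) psf
# (circular convolution); with the PSF known, decryption divides it out in
# the frequency domain. Everything stays real and non-negative, mirroring
# the intensity-only property emphasised in the abstract.

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def encode(signal, psf):
    S, H = dft(signal), dft(psf)
    return [v.real for v in idft([s * h for s, h in zip(S, H)])]

def decode(cipher, psf):
    C, H = dft(cipher), dft(psf)
    return [v.real for v in idft([c / h for c, h in zip(C, H)])]

random.seed(1)
psf = [random.uniform(0.1, 1.0) for _ in range(8)]    # key: random positive PSF
secret = [0.0, 1.0, 2.0, 3.0, 0.0, 0.0, 1.0, 0.0]
cipher = encode(secret, psf)
plain = decode(cipher, psf)
```

Without the PSF the ciphertext is just a blurred intensity pattern, which is the sense in which the incoherent PSF acts as the main key.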

  10. EIT Imaging Regularization Based on Spectral Graph Wavelets.

    Science.gov (United States)

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Vauhkonen, Marko; Wolf, Gerhard; Mueller-Lisse, Ullrich; Moeller, Knut

    2017-09-01

    The objective of electrical impedance tomographic reconstruction is to identify the distribution of tissue conductivity from electrical boundary measurements. This is an ill-posed inverse problem usually solved within the finite-element method framework. In previous studies, standard sparse regularization was used for difference electrical impedance tomography to achieve a sparse solution. However, aiming at elementwise sparsity, standard sparse regularization interferes with the smoothness of the conductivity distribution between neighboring elements and is sensitive to noise. As a result, the reconstructed images are spiky and lack smoothness. Such artifacts are not realistic and may lead to misinterpretation in clinical applications. To eliminate them, we present a novel sparse regularization method based on spectral graph wavelet transforms. Single-scale or multiscale graph wavelet transforms are employed to introduce local smoothness on different scales into the reconstructed images. The proposed approach relies on viewing finite-element meshes as undirected graphs and applying wavelet transforms derived from spectral graph theory. Reconstruction results from simulations, a phantom experiment, and patient data suggest that our algorithm is more robust to noise and produces more reliable images.
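
    The spectral-graph machinery this relies on can be illustrated on a small graph: the eigenvectors of the graph Laplacian act as a graph Fourier basis, and a spectral filter g(w) (of which graph wavelets are a multiscale family) smooths an elementwise signal. A minimal sketch with a path graph standing in for a 1-D mesh (a simplification of the FEM-mesh graphs used in the paper):

    ```python
    import numpy as np

    # View a 1-D chain of mesh elements as an undirected path graph.
    n = 32
    A = np.zeros((n, n))
    for i in range(n - 1):               # path-graph adjacency
        A[i, i + 1] = A[i + 1, i] = 1.0
    L = np.diag(A.sum(1)) - A            # combinatorial graph Laplacian

    w, U = np.linalg.eigh(L)             # graph frequencies and Fourier basis

    # A piecewise-constant "conductivity" signal corrupted by noise (spiky).
    signal = np.zeros(n)
    signal[10:16] = 1.0
    noisy = signal + 0.3 * np.random.default_rng(2).standard_normal(n)

    # Low-pass spectral filter g(w) = exp(-t*w): attenuate high graph frequencies.
    t = 1.0
    smoothed = U @ (np.exp(-t * w) * (U.T @ noisy))
    ```

    The filter shrinks high-graph-frequency coefficients, so the Dirichlet energy (sum of squared differences between neighboring elements) of the filtered signal is strictly smaller than that of the noisy input.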

  11. Contrast-based sensorless adaptive optics for retinal imaging.

    Science.gov (United States)

    Zhou, Xiaolin; Bedggood, Phillip; Bui, Bang; Nguyen, Christine T O; He, Zheng; Metha, Andrew

    2015-09-01

    Conventional adaptive optics ophthalmoscopes use wavefront sensing methods to characterize ocular aberrations for real-time correction. However, there are important situations in which the wavefront sensing step is susceptible to difficulties that affect the accuracy of the correction. To circumvent these, wavefront sensorless adaptive optics (or non-wavefront sensing AO; NS-AO) imaging has recently been developed and has been applied to point-scanning based retinal imaging modalities. In this study we show, for the first time, contrast-based NS-AO ophthalmoscopy for full-frame in vivo imaging of human and animal eyes. We suggest a robust image quality metric that could be used for any imaging modality, and test its performance against other metrics using (physical) model eyes.
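
    A contrast-based image quality metric of the kind used in sensorless AO can be as simple as mean gradient magnitude, which decreases monotonically with defocus blur; the AO loop then adjusts the corrector to maximize it. A small sketch (the repeated 4-neighbour averaging standing in for defocus is an assumption for illustration):

    ```python
    import numpy as np

    def contrast_metric(img):
        """Mean gradient magnitude: a simple sensorless-AO image quality metric."""
        gy, gx = np.gradient(img.astype(float))
        return np.mean(np.hypot(gx, gy))

    def blur(img, reps=5):
        """Crude defocus stand-in: repeated circular 4-neighbour averaging."""
        out = img.astype(float)
        for _ in range(reps):
            out = 0.25 * (np.roll(out, 1, 0) + np.roll(out, -1, 0)
                          + np.roll(out, 1, 1) + np.roll(out, -1, 1))
        return out

    rng = np.random.default_rng(3)
    sharp = rng.random((64, 64))         # stand-in for an in-focus retinal frame
    defocused = blur(sharp)
    ```

    The metric ranks the sharp frame above its defocused version, which is the property a sensorless optimization loop needs.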

  12. Fingerprint Image Enhancement Based on Second Directional Derivative of the Digital Image

    Directory of Open Access Journals (Sweden)

    Onnia Vesa

    2002-01-01

    Full Text Available This paper presents a novel approach of fingerprint image enhancement that relies on detecting the fingerprint ridges as image regions where the second directional derivative of the digital image is positive. A facet model is used in order to approximate the derivatives at each image pixel based on the intensity values of pixels located in a certain neighborhood. We note that the size of this neighborhood has a critical role in achieving accurate enhancement results. Using neighborhoods of various sizes, the proposed algorithm determines several candidate binary representations of the input fingerprint pattern. Subsequently, an output binary ridge-map image is created by selecting image zones, from the available binary image candidates, according to a MAP selection rule. Two public domain collections of fingerprint images are used in order to objectively assess the performance of the proposed fingerprint image enhancement approach.
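
    The core criterion, marking ridge pixels where the second directional derivative of the image is positive, can be sketched on a synthetic stripe pattern. A plain finite-difference second derivative stands in here for the paper's facet-model estimate (an assumption for illustration):

    ```python
    import numpy as np

    # Synthetic "fingerprint" of vertical ridges: dark stripes (intensity valleys).
    x = np.arange(64)
    img = np.cos(2 * np.pi * x / 8)[None, :].repeat(64, axis=0)

    # Second derivative across the ridges (finite differences along x, circular).
    d2x = np.roll(img, -1, 1) - 2 * img + np.roll(img, 1, 1)

    # Ridge pixels: second directional derivative positive (intensity valleys).
    ridge_map = d2x > 0
    ```

    For this cosine pattern the second difference is a negative multiple of the image itself, so the positive-second-derivative test selects exactly the dark stripes.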

  13. Figure of merit for macrouniformity based on image quality ruler evaluation and machine learning framework

    Science.gov (United States)

    Wang, Weibao; Overall, Gary; Riggs, Travis; Silveston-Keith, Rebecca; Whitney, Julie; Chiu, George; Allebach, Jan P.

    2013-01-01

    Assessment of macro-uniformity is important for the development and manufacture of printer products. Our goal is to develop a metric that predicts macro-uniformity, as judged by human subjects, by scanning and analyzing printed pages. We consider two different machine learning frameworks for the metric: linear regression and the support vector machine. We implemented the image quality ruler, based on the recommendations of the INCITS W1.1 macro-uniformity team. Using 12 subjects at Purdue University and 20 subjects at Lexmark, evenly balanced with respect to gender, we conducted subjective evaluations with a set of 35 uniform b/w prints from seven different printers with five levels of tint coverage. Our results suggest that the image quality ruler method provides a reliable means of assessing macro-uniformity. We then defined and implemented separate features to measure graininess, mottle, large-area variation, jitter, and large-scale non-uniformity; the algorithms are largely based on ISO image quality standards. Finally, we used these features, computed for a set of test pages, together with the subjects' image quality ruler assessments of those pages, to train the two predictors, one based on linear regression and the other on the support vector machine (SVM). Using five-fold cross-validation, we confirmed the efficacy of our predictor.
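
    The linear-regression branch with five-fold cross-validation can be sketched as follows. The features and scores are synthetic stand-ins for the graininess/mottle/jitter features and the image-quality-ruler ratings (both invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # 35 prints x 5 hypothetical uniformity features, scores roughly linear in them.
    X = rng.random((35, 5))
    w_true = np.array([3.0, -2.0, 1.5, 0.5, -1.0])
    y = X @ w_true + 0.05 * rng.standard_normal(35)

    # Five-fold cross-validated linear-regression predictor.
    idx = rng.permutation(35)
    folds = np.array_split(idx, 5)
    preds = np.empty(35)
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        Xtr = np.column_stack([X[train], np.ones(len(train))])   # bias term
        w, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
        preds[test] = np.column_stack([X[test], np.ones(len(test))]) @ w

    rmse = np.sqrt(np.mean((preds - y) ** 2))
    ```

    Each print is scored by a model trained without it, so rmse estimates generalization error rather than training fit.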

  14. Matrix-based image reconstruction methods for tomography

    International Nuclear Information System (INIS)

    Llacer, J.; Meng, J.D.

    1984-10-01

    Matrix methods of image reconstruction have not been used, in general, because of the large size of practical matrices, their ill-conditioning upon inversion, and the success of Fourier-based techniques. An exception is the work that has been done at the Lawrence Berkeley Laboratory for imaging with accelerated radioactive ions. An extension of that work to more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well-behaved matrices which can be used for image reconstruction with no distortion of the point response in the field of view and flexibility in the design of the instrument. Maximum Likelihood Estimator methods of reconstruction, which use system matrices tailored to specific instruments and do not need matrix inversion, are shown to result in good preliminary images. A parallel processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods. 14 references, 7 figures
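
    The Maximum Likelihood Estimator reconstruction referred to here is commonly realized as the MLEM iteration, which uses only forward and back projections with the system matrix and never inverts it. A toy numpy sketch with a made-up system matrix (hypothetical values, not an instrument model):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Toy system matrix: rows = detector pairs, columns = image pixels.
    A = rng.random((40, 16))
    x_true = rng.random(16) + 0.5
    b = A @ x_true                      # noise-free projection data

    # MLEM: multiplicative update, positivity-preserving, no matrix inversion.
    x = np.ones(16)
    sens = A.T @ np.ones(40)            # sensitivity image (column sums)
    for _ in range(500):
        x *= (A.T @ (b / (A @ x))) / sens
    ```

    Each update reprojects the current estimate, compares it with the data by a ratio, and backprojects the correction; the iterate stays nonnegative by construction.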

  15. Parallel CT image reconstruction based on GPUs

    International Nuclear Information System (INIS)

    Flores, Liubov A.; Vidal, Vicent; Mayo, Patricia; Rodenas, Francisco; Verdú, Gumersindo

    2014-01-01

    In X-ray computed tomography (CT), iterative methods are better suited to reconstructing images with high contrast and precision from a small number of projections under noisy conditions. In practice, however, these methods are not widely used because of the high computational cost of their implementation. Current technology makes it possible to reduce this drawback effectively. The goal of this work is to develop a fast GPU-based algorithm to reconstruct high-quality images from undersampled and noisy projection data. - Highlights: • We developed a GPU-based iterative algorithm to reconstruct images. • Iterative algorithms can reconstruct images from an undersampled set of projections. • The computational cost of the developed algorithm is low. • The efficiency of the algorithm increases for large-scale problems

  16. Infrared Imaging for Inquiry-Based Learning

    Science.gov (United States)

    Xie, Charles; Hazzard, Edmund

    2011-01-01

    Based on detecting long-wavelength infrared (IR) radiation emitted by the subject, IR imaging shows temperature distribution instantaneously and heat flow dynamically. As a picture is worth a thousand words, an IR camera has great potential in teaching heat transfer, which is otherwise invisible. The idea of using IR imaging in teaching was first…

  17. Multispectral image pansharpening based on the contourlet transform

    Energy Technology Data Exchange (ETDEWEB)

    Amro, Israa; Mateos, Javier, E-mail: iamro@correo.ugr.e, E-mail: jmd@decsai.ugr.e [Departamento de Ciencias de la Computacion e I.A., Universidad de Granada, 18071 Granada (Spain)

    2010-02-01

    Pansharpening is a technique that fuses the information of a low-resolution multispectral image (MS) and a high-resolution panchromatic image (PAN), usually remote sensing images, to provide a high-resolution multispectral image. In the literature, this task has been addressed from different points of view, one of the most popular being wavelet-based algorithms. Recently, the contourlet transform has been proposed; it combines the advantages of the wavelet transform with a more efficient representation of directional information. In this paper we propose a new pansharpening method based on contourlets, compare it with its wavelet counterpart, and assess its performance numerically and visually.

  18. Conducting polymers based counter electrodes for dye-sensitized solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Veerender, P., E-mail: veeru1009@gmail.com; Saxena, Vibha; Gusain, Abhay; Jha, P.; Koiry, S. P.; Chauhan, A. K.; Aswal, D. K.; Gupta, S. K. [Technical Physics Division, Bhabha Atomic Research Centre, Mumbai - 400085 (India)

    2014-04-24

    Conducting polymer films were synthesized and employed as an alternative to expensive platinum counter electrodes for dye-sensitized solar cells. Poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS) thin films were spin-coated, and polypyrrole films were electrochemically deposited by cyclic voltammetry, on ITO substrates. The morphology of the films was imaged by SEM and AFM. These films show good catalytic activity towards triiodide reduction compared to Pt/FTO electrodes. Finally, the photovoltaic performance of DSSCs fabricated using N3 dye was compared for Pt/FTO, PEDOT/ITO, and e-PPy counter electrodes.

  19. Tag-Based Social Image Search: Toward Relevant and Diverse Results

    Science.gov (United States)

    Yang, Kuiyuan; Wang, Meng; Hua, Xian-Sheng; Zhang, Hong-Jiang

    Recent years have witnessed the great success of social media websites. Tag-based image search is an important approach to accessing image content of interest on these websites. However, existing ranking methods for tag-based image search frequently return results that are irrelevant or lack diversity. This chapter presents a diverse relevance ranking scheme that simultaneously takes relevance and diversity into account by exploring the content of images and their associated tags. First, it estimates the relevance scores of images with respect to the query term based on both the visual information of images and the semantic information of associated tags. Then, semantic similarities of social images are estimated based on their tags. Based on the relevance scores and the similarities, the ranking list is generated by a greedy ordering algorithm that optimizes Average Diverse Precision (ADP), a novel measure extended from the conventional Average Precision (AP). Comprehensive experiments and user studies demonstrate the effectiveness of the approach.

  20. A novel image fusion algorithm based on 2D scale-mixing complex wavelet transform and Bayesian MAP estimation for multimodal medical images

    Directory of Open Access Journals (Sweden)

    Abdallah Bengueddoudj

    2017-05-01

    Full Text Available In this paper, we propose a new image fusion algorithm based on the two-dimensional Scale-Mixing Complex Wavelet Transform (2D-SMCWT). The fusion of the detail 2D-SMCWT coefficients is performed via a Bayesian Maximum a Posteriori (MAP) approach, using a trivariate statistical model for the local neighborhood of 2D-SMCWT coefficients. For the approximation coefficients, a new fusion rule based on Principal Component Analysis (PCA) is applied. We conduct several experiments using three different groups of multimodal medical images to evaluate the performance of the proposed method. The obtained results prove the superiority of the proposed method over state-of-the-art fusion methods in terms of visual quality and several commonly used metrics. Robustness of the proposed method is further tested against different types of noise. The plots of fusion metrics establish the accuracy of the proposed fusion method.
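
    The PCA fusion rule for the approximation coefficients can be sketched as follows: the fusion weights are taken from the leading eigenvector of the covariance matrix of the two registered coefficient sets. The images below are synthetic stand-ins for the multimodal approximation subbands (invented data for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Two registered "approximation subbands": shared structure, different contrast.
    base = rng.random((16, 16))
    imgA = 0.9 * base + 0.05 * rng.standard_normal((16, 16))
    imgB = 0.4 * base + 0.05 * rng.standard_normal((16, 16))

    # PCA fusion rule: weights from the leading eigenvector of the 2x2
    # covariance matrix of the flattened coefficients.
    data = np.stack([imgA.ravel(), imgB.ravel()])
    cov = np.cov(data)
    w_, v = np.linalg.eigh(cov)
    pc = np.abs(v[:, -1])               # eigenvector of the largest eigenvalue
    wA, wB = pc / pc.sum()
    fused = wA * imgA + wB * imgB
    ```

    The source carrying more signal variance automatically receives the larger weight, which is the rationale for PCA-based fusion of approximation coefficients.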

  1. Wiener discrete cosine transform-based image filtering

    Science.gov (United States)

    Pogrebnyak, Oleksiy; Lukin, Vladimir V.

    2012-10-01

    A classical problem of additive white (spatially uncorrelated) Gaussian noise suppression in grayscale images is considered. The main attention is paid to discrete cosine transform (DCT)-based denoising, in particular, to image processing in blocks of a limited size. The efficiency of DCT-based image filtering with hard thresholding is studied for different sizes of overlapped blocks. A multiscale approach that aggregates the outputs of DCT filters having different overlapped block sizes is proposed. Later, a two-stage denoising procedure that presumes the use of the multiscale DCT-based filtering with hard thresholding at the first stage and a multiscale Wiener DCT-based filtering at the second stage is proposed and tested. The efficiency of the proposed multiscale DCT-based filtering is compared to the state-of-the-art block-matching and three-dimensional filter. Next, the potentially reachable multiscale filtering efficiency in terms of output mean square error (MSE) is studied. The obtained results are of the same order as those obtained by Chatterjee's approach based on nonlocal patch processing. It is shown that the ideal Wiener DCT-based filter potential is usually higher when noise variance is high.
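
    The first-stage block-DCT filtering with hard thresholding can be sketched as follows. For brevity the blocks here are non-overlapping 8x8 (the paper studies overlapped blocks and their aggregation), and the threshold 2.7*sigma is a commonly used choice when the noise variance is known:

    ```python
    import numpy as np

    def dct_matrix(n):
        """Orthonormal DCT-II matrix (rows = frequencies, columns = positions)."""
        k = np.arange(n)
        D = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        D[0] *= np.sqrt(1.0 / n)
        D[1:] *= np.sqrt(2.0 / n)
        return D

    rng = np.random.default_rng(7)
    n, sigma = 64, 0.1
    clean = np.outer(np.sin(np.linspace(0, 3, n)), np.cos(np.linspace(0, 2, n)))
    noisy = clean + sigma * rng.standard_normal((n, n))

    D = dct_matrix(8)
    thr = 2.7 * sigma                        # hard threshold
    den = np.empty_like(noisy)
    for i in range(0, n, 8):
        for j in range(0, n, 8):
            C = D @ noisy[i:i+8, j:j+8] @ D.T    # forward 2-D DCT of the block
            dc = C[0, 0]
            C[np.abs(C) < thr] = 0.0             # kill small (noise) coefficients
            C[0, 0] = dc                         # always keep the DC coefficient
            den[i:i+8, j:j+8] = D.T @ C @ D      # inverse 2-D DCT
    ```

    Because the transform is orthonormal, the noise keeps standard deviation sigma in the DCT domain, so thresholding at a small multiple of sigma removes most noise coefficients while sparse signal coefficients survive.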

  2. [PACS-based endoscope image acquisition workstation].

    Science.gov (United States)

    Liu, J B; Zhuang, T G

    2001-01-01

    A practical PACS-based endoscope image acquisition workstation is introduced here. Using a multimedia video card, the endoscope video is digitized and captured, dynamically or statically, into the computer. The workstation provides a variety of functions, including acquisition and display of the endoscope video, as well as editing, processing, managing, storage, printing, and communication of related information. Together with other medical image workstations, it can serve as an image source of PACS for hospitals. In addition, it can act as an independent endoscopy diagnostic system.

  3. Region-Based Color Image Indexing and Retrieval

    DEFF Research Database (Denmark)

    Kompatsiaris, Ioannis; Triantafyllou, Evangelia; Strintzis, Michael G.

    2001-01-01

    In this paper a region-based color image indexing and retrieval algorithm is presented. As a basis for the indexing, a novel K-Means segmentation algorithm is used, modified so as to take into account the coherence of the regions. A new color distance is also defined for this algorithm. Based on ....... Experimental results demonstrate the performance of the algorithm. The development of an intelligent image content-based search engine for the World Wide Web is also presented, as a direct application of the presented algorithm....

  4. Electrical studies on silver based fast ion conducting glassy materials

    International Nuclear Information System (INIS)

    Rao, B. Appa; Kumar, E. Ramesh; Kumari, K. Rajani; Bhikshamaiah, G.

    2014-01-01

    Among the available fast ion conductors, silver-based glasses exhibit high conductivity, and glasses containing silver iodide show enhanced fast ion conduction at room temperature. Glasses of various compositions of silver-based fast ion conductors in the AgI−Ag2O−[(1−x)B2O3−xTeO2] (x = 0 to 1 mol% in steps of 0.2) glassy system have been prepared by the melt-quenching method. The glassy nature of the compounds has been confirmed by X-ray diffraction. AC conductivity measurements have been carried out with an impedance analyzer in the frequency range 1 kHz–3 MHz over the temperature range 303–423 K; DC conductivity measurements were carried out over the temperature range 300–523 K. Both AC and DC conductivity studies show that the conductivity increases, and the activation energy decreases, with increasing TeO2 concentration as well as with temperature. The conductivity of the present glass system is of the order of 10^−2 S/cm at room temperature. The ionic transport number of these glasses is 0.999, indicating that they can be used as electrolytes in batteries
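
    Activation energies of this kind are typically extracted from an Arrhenius fit, sigma(T) = sigma0 * exp(-Ea / (kB * T)), so ln(sigma) is linear in 1/T and the slope gives Ea. A sketch with synthetic data (the Ea and sigma0 values are invented for illustration; only the 303-423 K range echoes the abstract):

    ```python
    import numpy as np

    kB = 8.617333262e-5                 # Boltzmann constant, eV/K

    # Synthetic Arrhenius-type ionic conductivity data (hypothetical values).
    Ea_true, sigma0 = 0.25, 50.0        # eV, S/cm
    T = np.linspace(303, 423, 13)       # temperatures, K
    sigma = sigma0 * np.exp(-Ea_true / (kB * T))

    # Activation energy from the slope of ln(sigma) vs 1/T.
    slope, intercept = np.polyfit(1.0 / T, np.log(sigma), 1)
    Ea_fit = -slope * kB                # eV
    ```

    With noise-free data the fit recovers Ea exactly; with measured conductivities the same fit yields the reported activation energies and pre-exponential factor exp(intercept).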

  5. A High Precision Laser-Based Autofocus Method Using Biased Image Plane for Microscopy

    Directory of Open Access Journals (Sweden)

    Chao-Chen Gu

    2018-01-01

    Full Text Available This study designs and implements a high-precision, robust laser-based autofocusing system in which a biased image plane is applied. In accordance with the designed optics, a cluster-based circle-fitting algorithm is proposed to calculate the radius of the detected spot of the reflected laser beam, an essential factor in obtaining the defocus value. Experiments conducted on the prototype device demonstrated high precision and robustness. Furthermore, the low demand on assembly accuracy makes the proposed method a low-cost and practical solution for autofocusing.

  6. Image-based fingerprint verification system using LabVIEW

    Directory of Open Access Journals (Sweden)

    Sunil K. Singla

    2008-09-01

    Full Text Available Biometric-based identification/verification systems provide a solution to the security concerns of the modern world, where machines are replacing humans in every aspect of life. Fingerprints, because of their uniqueness, are the most widely used and highly accepted biometrics. Fingerprint biometric systems are either minutiae-based or pattern learning (image-based). Minutiae-based algorithms depend upon the local discontinuities in the ridge flow pattern and are used when template size is important, while image-based matching algorithms use both the micro and macro features of a fingerprint and are used when a fast response is required. In the present paper an image-based fingerprint verification system is discussed. The proposed method uses a learning phase, which is not present in conventional image-based systems. The learning phase uses pseudo-random sub-sampling, which reduces the number of comparisons needed in the matching stage. The system has been developed using the LabVIEW (Laboratory Virtual Instrument Engineering Workbench) toolbox version 6i. The availability of datalog files in LabVIEW makes it one of the most promising candidates for use as a database: datalog files can access and manipulate data and complex data structures quickly and easily, making writing and reading much faster. After extensive experimentation involving a large number of samples and different learning sizes, high accuracy has been achieved with a learning image size of 100 × 100 and a threshold value of 700 (1000 being a perfect match).

  7. Mosaicing of single plane illumination microscopy images using groupwise registration and fast content-based image fusion

    Science.gov (United States)

    Preibisch, Stephan; Rohlfing, Torsten; Hasak, Michael P.; Tomancak, Pavel

    2008-03-01

    Single Plane Illumination Microscopy (SPIM; Huisken et al., Science 305(5686):1007-1009, 2004) is an emerging microscopic technique that enables live imaging of large biological specimens in their entirety. By imaging the living biological sample from multiple angles, SPIM has the potential to achieve isotropic resolution throughout even relatively large biological specimens. For every angle, however, only a relatively shallow section of the specimen is imaged with high resolution, whereas deeper regions appear increasingly blurred. In order to produce a single, uniformly high-resolution image, we propose here an image mosaicing algorithm that combines state-of-the-art groupwise image registration for alignment with content-based image fusion to prevent degradation of the fused image due to regional blurring of the input images. For the registration stage, we introduce an application-specific groupwise transformation model that incorporates per-image as well as groupwise transformation parameters. We also propose a new fusion algorithm based on Gaussian filters, which is substantially faster than fusion based on local image entropy. We demonstrate the performance of our mosaicing method on data acquired from living embryos of the fruit fly, Drosophila, using four- and eight-angle acquisitions.

  8. Stereo Vision-Based High Dynamic Range Imaging Using Differently-Exposed Image Pair

    Directory of Open Access Journals (Sweden)

    Won-Jae Park

    2017-06-01

    Full Text Available In this paper, a high dynamic range (HDR) imaging method based on a stereo vision system is presented. The proposed method uses differently exposed low dynamic range (LDR) images captured from a stereo camera. The stereo LDR images are first converted to initial stereo HDR images using the inverse camera response function estimated from the LDR images. However, due to the limited dynamic range of the stereo LDR camera, radiance values in under/over-exposed regions of the initial main-view (MV) HDR image can be lost. To restore these radiance values, the proposed stereo matching and hole-filling algorithms are applied to the stereo HDR images. Specifically, the auxiliary-view (AV) HDR image is warped using the disparity estimated between the initial stereo HDR images, and then an effective hole-filling is applied to the warped AV HDR image. To reconstruct the final MV HDR image, the warped and hole-filled AV HDR image is fused with the initial MV HDR image using a weight map. The experimental results demonstrate, objectively and subjectively, that the proposed stereo HDR imaging method provides better performance than the conventional method.
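
    The radiance-domain part of the pipeline (inverse camera response, exposure normalization, weight-map fusion) can be sketched for a single pixel-aligned exposure pair. The gamma-type response and the weight shape below are assumptions for illustration, and the stereo matching / hole-filling stages are omitted:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Scene radiance and two differently exposed LDR captures through a
    # gamma-type camera response f(x) = x**(1/2.2), clipped to [0, 1].
    radiance = rng.random((32, 32)) * 2.0
    exposures = (0.25, 1.0)
    ldr = [np.clip((radiance * t) ** (1 / 2.2), 0, 1) for t in exposures]

    def inv_response(z, t):
        """Inverse response + exposure normalization -> per-image radiance map."""
        return (z ** 2.2) / t

    def weight(z):
        """Trust mid-range pixels; down-weight under/over-exposed ones."""
        return np.maximum(1e-4, 1.0 - (2.0 * z - 1.0) ** 2)

    maps = [inv_response(z, t) for z, t in zip(ldr, exposures)]
    ws = [weight(z) for z in ldr]
    hdr = (ws[0] * maps[0] + ws[1] * maps[1]) / (ws[0] + ws[1])
    ```

    Pixels saturated in the long exposure get near-zero weight, so the short exposure supplies their radiance; mid-range pixels are averaged, and the fused map recovers the scene radiance.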

  9. Pixel extraction based integral imaging with controllable viewing direction

    International Nuclear Information System (INIS)

    Ji, Chao-Chao; Deng, Huan; Wang, Qiong-Hua

    2012-01-01

    We propose pixel extraction based integral imaging with a controllable viewing direction. The proposed integral imaging can provide viewers three-dimensional (3D) images in a very small viewing angle. The viewing angle and the viewing direction of the reconstructed 3D images are controlled by the pixels extracted from an elemental image array. Theoretical analysis and a 3D display experiment of the viewing direction controllable integral imaging are carried out. The experimental results verify the correctness of the theory. A 3D display based on the integral imaging can protect the viewer’s privacy and has huge potential for a television to show multiple 3D programs at the same time. (paper)

  10. Image noise-based dose adaptation in dynamic volume CT of the heart: dose and image quality optimisation in comparison with BMI-based dose adaptation

    Energy Technology Data Exchange (ETDEWEB)

    Odedra, Devang [Queen' s University, School of Medicine, Kingston, ON (Canada); Blobel, Joerg [Toshiba Medical Systems Europe BV, Zoetermeer (Netherlands); University of Toronto, Division of Cardiothoracic Imaging, Department of Medical Imaging, Toronto General Hospital, Toronto, ON (Canada); AlHumayyd, Saad; Durand, Miranda; Jimenez-Juan, Laura; Paul, Narinder [University of Toronto, Division of Cardiothoracic Imaging, Department of Medical Imaging, Toronto General Hospital, Toronto, ON (Canada)

    2014-01-15

    To compare the image quality and radiation dose using image-noise (IN)-based determination of X-ray tube settings compared with a body mass index (BMI)-based protocol during CT coronary angiography (CTCA). Two hundred consecutive patients referred for CTCA to our institution were divided into two groups: BMI-based, 100 patients had CTCA with the X-ray tube current adjusted to the patient's BMI while maintaining a fixed tube potential of 120 kV; IN-based, 100 patients underwent imaging with the X-ray tube current and voltage adjusted to the IN measured within the mid-left ventricle on a pre-acquisition trans-axial image. Two independent cardiac radiologists performed blinded image quality assessment with quantification of the IN and signal-to-noise ratio (SNR) from the mid-LV and qualitative assessment using a three-point score. Radiation dose (CTDI and DLP) was recorded from the console. Results showed: IN (HU): BMI-based, 30.1 ± 9.9; IN-based, 33.1 ± 6.7; 32 % variation reduction (P = 0.001); SNR: BMI-based, 18.6 ± 7.1; IN-based, 15.4 ± 3.7; 48 % variation reduction (P < 0.0001). Visual scores: BMI-based, 2.3 ± 0.6; IN-based, 2.2 ± 0.5 (P = 0.54). Radiation dose: CTDI (mGy), BMI-based, 22.68 ± 8.9; IN-based, 17.16 ± 7.6; 24.3 % reduction (P < 0.001); DLP (mGy.cm), BMI-based, 309.3 ± 127.5; IN-based, 230.6 ± 105.5; 25.4 % reduction (P < 0.001). Image-noise-based stratification of X-ray tube parameters for CTCA results in 32 % improvement in image quality and 25 % reduction in radiation dose compared with a BMI-based protocol. (orig.)

  11. Dictionary-based image reconstruction for superresolution in integrated circuit imaging.

    Science.gov (United States)

    Cilingiroglu, T Berkin; Uyar, Aydan; Tuysuzoglu, Ahmet; Karl, W Clem; Konrad, Janusz; Goldberg, Bennett B; Ünlü, M Selim

    2015-06-01

    Resolution improvement through signal processing techniques for integrated circuit imaging is becoming more crucial as the rapid decrease in integrated circuit dimensions continues. Although there is a significant effort to push the limits of optical resolution for backside fault analysis through the use of solid immersion lenses, higher order laser beams, and beam apodization, signal processing techniques are required for additional improvement. In this work, we propose a sparse image reconstruction framework which couples overcomplete dictionary-based representation with a physics-based forward model to improve resolution and localization accuracy in high numerical aperture confocal microscopy systems for backside optical integrated circuit analysis. The effectiveness of the framework is demonstrated on experimental data.

  12. Simultaneous measurement of thermal conductivity and heat capacity by flash thermal imaging methods

    Science.gov (United States)

    Tao, N.; Li, X. L.; Sun, J. G.

    2017-06-01

    Thermal properties are important for material applications involving temperature. Although many measurement methods are available, they may not be convenient to use or have not been demonstrated to be suitable for testing a wide range of materials. To address this issue, we developed a new method for the nondestructive measurement of the thermal effusivity of bulk materials with uniform properties. The method is based on the pulsed thermal imaging-multilayer analysis (PTI-MLA) method commonly used for testing coating materials. Because the test sample for PTI-MLA has to be in a two-layer configuration, we used a common commercial tape to construct the test samples, with the tape as the first-layer material and the bulk material as the substrate. The method was evaluated on six selected solid materials whose thermal properties span a wide range covering most engineering materials. To determine both thermal conductivity and heat capacity, we also measured the thermal diffusivity of these six materials by the well-established flash method, using the same experimental instruments with a different system setup. This paper describes the methods, presents detailed experimental tests and data analyses, and discusses the measurement results and their comparison with literature values.
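
    Combining the two measurements follows directly from the definitions e = sqrt(k * rho_c) (effusivity) and alpha = k / rho_c (diffusivity), which solve to k = e * sqrt(alpha) and rho_c = e / sqrt(alpha). A small sketch (the stainless-steel-like input numbers are illustrative, not values from the paper):

    ```python
    import math

    def conductivity_and_capacity(e, alpha):
        """Thermal conductivity and volumetric heat capacity from
        effusivity e [W*s^0.5/(m^2*K)] and diffusivity alpha [m^2/s]."""
        k = e * math.sqrt(alpha)        # thermal conductivity, W/(m*K)
        rho_c = e / math.sqrt(alpha)    # volumetric heat capacity, J/(m^3*K)
        return k, rho_c

    # Example with stainless-steel-like values (approximate, for illustration):
    k, rho_c = conductivity_and_capacity(e=7500.0, alpha=4e-6)
    ```

    The recovered pair is self-consistent: sqrt(k * rho_c) returns the input effusivity and k / rho_c returns the input diffusivity.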

  13. Model-Based Referenceless Quality Metric of 3D Synthesized Images Using Local Image Description.

    Science.gov (United States)

    Gu, Ke; Jakhetiya, Vinit; Qiao, Jun-Fei; Li, Xiaoli; Lin, Weisi; Thalmann, Daniel

    2017-07-28

    New challenges have been brought out along with emerging 3D-related technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR). Free viewpoint video (FVV), with its applications in remote surveillance, remote education, etc., based on the flexible selection of direction and viewpoint, has been perceived as the development direction of next-generation video technologies and has drawn wide attention from researchers. Since FVV images are synthesized via a depth image-based rendering (DIBR) procedure in a "blind" environment (without reference images), a reliable real-time blind quality evaluation and monitoring system is urgently required. However, existing assessment metrics do not reflect human judgments faithfully, mainly because of the geometric distortions generated by DIBR. To this end, this paper proposes a novel referenceless quality metric for DIBR-synthesized images using autoregression (AR)-based local image description. We found that, after AR prediction, the reconstruction error between a DIBR-synthesized image and its AR-predicted image accurately captures the geometric distortion. Visual saliency is then leveraged to improve the proposed blind quality metric by a sizable margin. Experiments validate the superiority of our no-reference quality method compared with prevailing full-, reduced-, and no-reference models.
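
    The AR-based description can be sketched as follows: each pixel is predicted from its causal neighbours with least-squares AR coefficients, and the prediction residual becomes large exactly where a DIBR-style geometric distortion breaks local image structure. A toy illustration with a synthetic seam (a global AR fit here stands in for the paper's local models, an assumption for brevity):

    ```python
    import numpy as np

    def ar_residual(img):
        """Predict each pixel from its left/up/up-left neighbours with global
        least-squares AR coefficients; return the per-pixel absolute residual."""
        y = img[1:, 1:].ravel()
        X = np.stack([img[1:, :-1].ravel(),    # left neighbour
                      img[:-1, 1:].ravel(),    # up neighbour
                      img[:-1, :-1].ravel()],  # up-left neighbour
                     axis=1)
        a, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.abs(y - X @ a).reshape(img.shape[0] - 1, img.shape[1] - 1)

    # A smooth periodic texture (heavily blurred noise) as "natural" content.
    rng = np.random.default_rng(9)
    tex = rng.standard_normal((32, 32))
    for _ in range(10):                        # circular smoothing passes
        tex = 0.25 * (np.roll(tex, 1, 0) + np.roll(tex, -1, 0)
                      + np.roll(tex, 1, 1) + np.roll(tex, -1, 1))

    # DIBR-like geometric distortion: the right half is vertically mis-shifted,
    # breaking local structure along one seam column.
    warped = tex.copy()
    warped[:, 16:] = np.roll(warped[:, 16:], 3, axis=0)

    res = ar_residual(warped)
    seam_col = 15                              # residual column at the seam
    ```

    The residual map peaks along the seam, which is the cue the referenceless metric aggregates (and weights by saliency) into a quality score.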

  14. Intelligent image retrieval based on radiology reports

    Energy Technology Data Exchange (ETDEWEB)

    Gerstmair, Axel; Langer, Mathias; Kotter, Elmar [University Medical Center Freiburg, Department of Diagnostic Radiology, Freiburg (Germany); Daumke, Philipp; Simon, Kai [Averbis GmbH, Freiburg (Germany)

    2012-12-15

    To create an advanced image retrieval and data-mining system based on in-house radiology reports. Radiology reports are semantically analysed using natural language processing (NLP) techniques and stored in a state-of-the-art search engine. Images referenced by sequence and image number in the reports are retrieved from the picture archiving and communication system (PACS) and stored for later viewing. A web-based front end is used as an interface to query for images and show the results with the retrieved images and report text. Using a comprehensive radiological lexicon for the underlying terminology, the search algorithm also finds results for synonyms, abbreviations and related topics. The test set was 108 manually annotated reports analysed by different system configurations. Best results were achieved using full syntactic and semantic analysis with a precision of 0.929 and recall of 0.952. Operating successfully since October 2010, 258,824 reports have been indexed and a total of 405,146 preview images are stored in the database. Data-mining and NLP techniques provide quick access to a vast repository of images and radiology reports with both high precision and recall values. Consequently, the system has become a valuable tool in daily clinical routine, education and research. (orig.)

  15. Intelligent image retrieval based on radiology reports

    International Nuclear Information System (INIS)

    Gerstmair, Axel; Langer, Mathias; Kotter, Elmar; Daumke, Philipp; Simon, Kai

    2012-01-01

    To create an advanced image retrieval and data-mining system based on in-house radiology reports. Radiology reports are semantically analysed using natural language processing (NLP) techniques and stored in a state-of-the-art search engine. Images referenced by sequence and image number in the reports are retrieved from the picture archiving and communication system (PACS) and stored for later viewing. A web-based front end is used as an interface to query for images and show the results with the retrieved images and report text. Using a comprehensive radiological lexicon for the underlying terminology, the search algorithm also finds results for synonyms, abbreviations and related topics. The test set was 108 manually annotated reports analysed by different system configurations. Best results were achieved using full syntactic and semantic analysis with a precision of 0.929 and recall of 0.952. Operating successfully since October 2010, 258,824 reports have been indexed and a total of 405,146 preview images are stored in the database. Data-mining and NLP techniques provide quick access to a vast repository of images and radiology reports with both high precision and recall values. Consequently, the system has become a valuable tool in daily clinical routine, education and research. (orig.)

  16. Content-based Image Hiding Method for Secure Network Biometric Verification

    Directory of Open Access Journals (Sweden)

    Xiangjiu Che

    2011-08-01

    Full Text Available For secure biometric verification, most existing methods embed biometric information directly into the cover image, but content correlation analysis between the biometric image and the cover image is often ignored. In this paper, we propose a novel biometric image hiding approach based on content correlation analysis to protect the network-transmitted image. Using principal component analysis (PCA), the content correlation between the biometric image and the cover image is first analyzed. Then, based on a particle swarm optimization (PSO) algorithm, some regions of the cover image are selected to represent the biometric image, so that the cover image can carry partial content of the biometric image. Following the correlation analysis, the unrepresented part of the biometric image is embedded into the cover image using the discrete wavelet transform (DWT). Combined with a human visual system (HVS) model, this approach makes the hiding result perceptually invisible. Extensive experimental results demonstrate that the proposed hiding approach is robust against some common frequency and geometric attacks; it also provides effective protection for secure biometric verification.

  17. Ultrafuzziness Optimization Based on Type II Fuzzy Sets for Image Thresholding

    Directory of Open Access Journals (Sweden)

    Hudan Studiawan

    2010-11-01

    Full Text Available Image thresholding is one of the processing techniques used to provide a high-quality preprocessed image. Image vagueness and bad illumination are common obstacles that yield poor thresholding output. By treating the image as a fuzzy set, several different fuzzy thresholding techniques have been proposed to remove these obstacles during threshold selection. In this paper, we propose an image thresholding algorithm that uses ultrafuzziness optimization with type II fuzzy sets to decrease the uncertainty inherent in common (type I) fuzzy sets. Optimization is conducted by measuring ultrafuzziness for the background and object fuzzy sets separately. Experimental results demonstrate that the proposed image thresholding method performs well for images with high vagueness, low contrast, and grayscale ambiguity.

  18. UAV-Based Thermal Imaging for High-Throughput Field Phenotyping of Black Poplar Response to Drought

    Directory of Open Access Journals (Sweden)

    Riccardo Ludovisi

    2017-09-01

    Full Text Available Poplars are fast-growing, high-yielding forest tree species, whose cultivation as second-generation biofuel crops is of increasing interest and can efficiently meet emission reduction goals. Yet, breeding elite poplar trees for drought resistance remains a major challenge. Worldwide breeding programs are largely focused on intra/interspecific hybridization, whereby Populus nigra L. is a fundamental parental pool. While high-throughput genotyping has resulted in unprecedented capabilities to rapidly decode complex genetic architecture of plant stress resistance, linking genomics to phenomics is hindered by technically challenging phenotyping. Relying on unmanned aerial vehicle (UAV)-based remote sensing and imaging techniques, high-throughput field phenotyping (HTFP) aims at enabling highly precise and efficient, non-destructive screening of genotype performance in large populations. To efficiently support forest-tree breeding programs, ground-truthing observations should be complemented with standardized HTFP. In this study, we develop a high-resolution (leaf level) HTFP approach to investigate the response to drought of a full-sib F2 partially inbred population (termed here ‘POP6’), whose F1 was obtained from an intraspecific P. nigra controlled cross between genotypes with highly divergent phenotypes. We assessed the effects of two water treatments (well-watered and moderate drought) on a population of 4603 trees (503 genotypes) hosted in two adjacent experimental plots (1.67 ha) by conducting low-elevation (25 m) flights with an aerial drone and capturing 7836 thermal infrared (TIR) images. TIR images were undistorted, georeferenced, and orthorectified to obtain radiometric mosaics. Canopy temperature (Tc) was extracted using two independent semi-automated segmentation techniques, eCognition- and Matlab-based, to avoid the mixed-pixel problem. Overall, results showed that the UAV platform-based thermal imaging enables to effectively assess genotype

  19. UAV-Based Thermal Imaging for High-Throughput Field Phenotyping of Black Poplar Response to Drought.

    Science.gov (United States)

    Ludovisi, Riccardo; Tauro, Flavia; Salvati, Riccardo; Khoury, Sacha; Mugnozza Scarascia, Giuseppe; Harfouche, Antoine

    2017-01-01

    Poplars are fast-growing, high-yielding forest tree species, whose cultivation as second-generation biofuel crops is of increasing interest and can efficiently meet emission reduction goals. Yet, breeding elite poplar trees for drought resistance remains a major challenge. Worldwide breeding programs are largely focused on intra/interspecific hybridization, whereby Populus nigra L. is a fundamental parental pool. While high-throughput genotyping has resulted in unprecedented capabilities to rapidly decode complex genetic architecture of plant stress resistance, linking genomics to phenomics is hindered by technically challenging phenotyping. Relying on unmanned aerial vehicle (UAV)-based remote sensing and imaging techniques, high-throughput field phenotyping (HTFP) aims at enabling highly precise and efficient, non-destructive screening of genotype performance in large populations. To efficiently support forest-tree breeding programs, ground-truthing observations should be complemented with standardized HTFP. In this study, we develop a high-resolution (leaf level) HTFP approach to investigate the response to drought of a full-sib F2 partially inbred population (termed here 'POP6'), whose F1 was obtained from an intraspecific P. nigra controlled cross between genotypes with highly divergent phenotypes. We assessed the effects of two water treatments (well-watered and moderate drought) on a population of 4603 trees (503 genotypes) hosted in two adjacent experimental plots (1.67 ha) by conducting low-elevation (25 m) flights with an aerial drone and capturing 7836 thermal infrared (TIR) images. TIR images were undistorted, georeferenced, and orthorectified to obtain radiometric mosaics. Canopy temperature (Tc) was extracted using two independent semi-automated segmentation techniques, eCognition- and Matlab-based, to avoid the mixed-pixel problem. Overall, results showed that the UAV platform-based thermal imaging enables to effectively assess genotype

  20. A Geometric Dictionary Learning Based Approach for Fluorescence Spectroscopy Image Fusion

    OpenAIRE

    Zhiqin Zhu; Guanqiu Qi; Yi Chai; Penghua Li

    2017-01-01

    In recent years, sparse representation approaches have been integrated into multi-focus image fusion methods. The fused images of sparse-representation-based image fusion methods show great performance. Constructing an informative dictionary is a key step for sparsity-based image fusion method. In order to ensure sufficient number of useful bases for sparse representation in the process of informative dictionary construction, image patches from all source images are classified into different ...

  1. OCML-based colour image encryption

    International Nuclear Information System (INIS)

    Rhouma, Rhouma; Meherzi, Soumaya; Belghith, Safya

    2009-01-01

    Chaos-based cryptographic algorithms have suggested some new ways to develop efficient image-encryption schemes. While most of these schemes are based on low-dimensional chaotic maps, it has recently been proposed to use high-dimensional chaos, namely spatiotemporal chaos, which is modelled by one-way coupled-map lattices (OCML). Owing to their hyperchaotic behaviour, such systems are assumed to enhance cryptosystem security. In this paper, we propose an OCML-based colour image encryption scheme with a stream cipher structure. We use a 192-bit-long external key to generate the initial conditions and the parameters of the OCML. We have made several tests to check the security of the proposed cryptosystem: statistical tests including histogram analysis and calculation of the correlation coefficients of adjacent pixels, security tests against differential attack including calculation of the number of pixel change rate (NPCR) and unified average changing intensity (UACI), and entropy calculation. The cryptosystem speed is analyzed and tested as well.
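    The differential-attack measures named in the record, NPCR and UACI, have standard definitions for 8-bit images; a compact sketch (the 2×2 test images are illustrative, not from the paper):

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR: percentage of pixel positions whose values differ between two
    cipher images. UACI: mean absolute intensity difference, normalised by
    the 8-bit peak value 255. Both are reported as percentages."""
    c1 = np.asarray(c1, dtype=np.int32)
    c2 = np.asarray(c2, dtype=np.int32)
    npcr = float(np.mean(c1 != c2) * 100.0)
    uaci = float(np.mean(np.abs(c1 - c2)) / 255.0 * 100.0)
    return npcr, uaci
```

For a strong cipher, flipping one plaintext bit should drive NPCR toward ~99.6% and UACI toward ~33.5% (the expected values for independent uniform 8-bit images).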

  2. Image based 3D city modeling : Comparative study

    Directory of Open Access Journals (Sweden)

    S. P. Singh

    2014-06-01

    Full Text Available 3D city model is a digital representation of the Earth’s surface and it’s related objects such as building, tree, vegetation, and some manmade feature belonging to urban area. The demand of 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally four main image based approaches were used for virtual 3D city models generation. In first approach, researchers were used Sketch based modeling, second method is Procedural grammar based modeling, third approach is Close range photogrammetry based modeling and fourth approach is mainly based on Computer Vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main softwares to represent these approaches respectively. These softwares have different approaches & methods suitable for image based 3D city modeling. Literature study shows that till date, there is no complete such type of comparative study available to create complete 3D city model by using images. This paper gives a comparative assessment of these four image based 3D modeling approaches. This comparative study is mainly based on data acquisition methods, data processing techniques and output 3D model products. For this research work, study area is the campus of civil engineering department, Indian Institute of Technology, Roorkee (India. This 3D campus acts as a prototype for city. This study also explains various governing parameters, factors and work experiences. This research work also gives a brief introduction, strengths and weakness of these four image based techniques. Some personal comment is also given as what can do or what can’t do from these softwares. At the last, this study shows; it concluded that, each and every software has some advantages and limitations. Choice of software depends on user requirements of 3D project. For normal visualization project, SketchUp software is a good option. For 3D documentation record, Photomodeler gives good

  3. Moving beyond mass-based parameters for conductivity analysis of sulfonated polymers

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yu Seung [Los Alamos National Laboratory; Pivovar, Bryan [NREL

    2009-01-01

    Proton conductivity of polymer electrolytes is critical for fuel cells and has therefore been studied in significant detail. The conductivity of sulfonated polymers has been linked to material characteristics in order to elucidate trends. Mass-based parameters such as water uptake and ion exchange capacity are two of the most common material characteristics used to compare polymer electrolytes, but they have significant limitations when correlated with proton conductivity. These limitations arise in part because different polymers can have significantly different densities, and conduction happens over length scales more appropriately represented by volume than by mass. Herein, we establish and review volume-related parameters that can be used to compare the proton conductivity of different polymer electrolytes. Morphological effects on proton conductivity are also considered. Finally, the impact of these phenomena on designing next-generation sulfonated polymers for polymer electrolyte membrane fuel cells is discussed.
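    The mass-to-volume argument can be made concrete: dividing the mass-based IEC by the hydrated volume per gram of dry polymer converts it to an acid-group concentration. A sketch under the common additive-volume assumption; the numerical values in the usage below are illustrative, not taken from the paper:

```python
def acid_concentration(iec, rho_dry, water_uptake, rho_water=1.0):
    """Acid-group concentration (mmol per cm^3 of hydrated membrane),
    assuming the dry-polymer and water volumes are additive.

    iec          -- ion exchange capacity, mmol per g of dry polymer
    rho_dry      -- dry polymer density, g/cm^3
    water_uptake -- g of absorbed water per g of dry polymer
    """
    hydrated_volume = 1.0 / rho_dry + water_uptake / rho_water  # cm^3 / g dry
    return iec / hydrated_volume
```

Two membranes with identical IEC (say 0.9 mmol/g) and water uptake (0.3 g/g) but densities of 2.0 vs. 1.2 g/cm^3 give volumetric concentrations of about 1.13 vs. 0.79 mmol/cm^3, which is precisely the limitation of mass-based comparison the record describes.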

  4. Concave omnidirectional imaging device for cylindrical object based on catadioptric panoramic imaging

    Science.gov (United States)

    Wu, Xiaojun; Wu, Yumei; Wen, Peizhi

    2018-03-01

    To obtain information on the outer surface of a cylinder object, we propose a catadioptric panoramic imaging system based on the principle of uniform spatial resolution for vertical scenes. First, the influence of the projection-equation coefficients on the spatial resolution and astigmatism of the panoramic system are discussed, respectively. Through parameter optimization, we obtain the appropriate coefficients for the projection equation, and so the imaging quality of the entire imaging system can reach an optimum value. Finally, the system projection equation is calibrated, and an undistorted rectangular panoramic image is obtained using the cylindrical-surface projection expansion method. The proposed 360-deg panoramic-imaging device overcomes the shortcomings of existing surface panoramic-imaging methods, and it has the advantages of low cost, simple structure, high imaging quality, and small distortion, etc. The experimental results show the effectiveness of the proposed method.

  5. Conducting polymer-based multilayer films for instructive biomaterial coatings

    OpenAIRE

    Hardy, John G; Li, Hetian; Chow, Jacqueline K; Geissler, Sydney A; McElroy, Austin B; Nguy, Lindsey; Hernandez, Derek S; Schmidt, Christine E

    2015-01-01

    Aim: To demonstrate the design, fabrication and testing of conformable conducting biomaterials that encourage cell alignment. Materials & methods: Thin conducting composite biomaterials based on multilayer films of poly (3,4-ethylenedioxythiophene) derivatives, chitosan and gelatin were prepared in a layer-by-layer fashion. Fibroblasts were observed with fluorescence microscopy and their alignment (relative to the dipping direction and direction of electrical current passed through the films)...

  6. Conducting polymer nanocomposite-based supercapacitors

    OpenAIRE

    Liew, Soon Yee; Walsh, Darren A.; Chen, George Z.

    2016-01-01

    The use of nanocomposites of electronically-conducting polymers for supercapacitors has increased significantly over the past years, due to their high capacitances and abilities to withstand many charge-discharge cycles. We have recently been investigating the use of nanocomposites of electronically-conducting polymers containing conducting and non-conducting nanomaterials such as carbon nanotubes and cellulose nanocrystals, for use in supercapacitors. In this contribution, we provide a summa...

  7. Using a web-based image quality assurance reporting system to improve image quality.

    Science.gov (United States)

    Czuczman, Gregory J; Pomerantz, Stuart R; Alkasab, Tarik K; Huang, Ambrose J

    2013-08-01

    The purpose of this study is to show the impact of a web-based image quality assurance reporting system on the rates of three common image quality errors at our institution. A web-based image quality assurance reporting system was developed and used beginning in April 2009. Image quality endpoints were assessed immediately before deployment (period 1), approximately 18 months after deployment of a prototype reporting system (period 2), and approximately 12 months after deployment of a subsequent upgraded department-wide reporting system (period 3). A total of 3067 axillary shoulder radiographs were reviewed for correct orientation, 355 shoulder CT scans were reviewed for correct reformatting of coronal and sagittal images, and 346 sacral MRI scans were reviewed for correct acquisition plane of axial images. Error rates for each review period were calculated and compared using the Fisher exact test. Error rates of axillary shoulder radiograph orientation were 35.9%, 7.2%, and 10.0%, respectively, for the three review periods. The decrease in error rate between periods 1 and 2 was statistically significant (p < 0.0001). Error rates of shoulder CT reformats were 9.8%, 2.7%, and 5.8%, respectively, for the three review periods. The decrease in error rate between periods 1 and 2 was statistically significant (p = 0.03). Error rates for sacral MRI axial sequences were 96.5%, 32.5%, and 3.4%, respectively, for the three review periods. The decrease in error rates between periods 1 and 2 and between periods 2 and 3 was statistically significant (p < 0.0001). A web-based system for reporting image quality errors may be effective for improving image quality.
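    The period-to-period comparisons above use the Fisher exact test; a stdlib sketch of the two-sided version for a 2×2 table (errors vs. non-errors in two review periods), included only to illustrate the statistic the study reports:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test on the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins that are no more probable than the observed table."""
    n, row1, col1 = a + b + c + d, a + b, a + c

    def prob(x):  # probability that the top-left cell equals x
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = prob(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1.0 + 1e-9))
```

The tolerance factor guards against floating-point ties; for large tables such as the radiograph counts here, a production analysis would use an optimized implementation (e.g. `scipy.stats.fisher_exact`).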

  8. Liver Segmentation Based on Snakes Model and Improved GrowCut Algorithm in Abdominal CT Image

    Directory of Open Access Journals (Sweden)

    Huiyan Jiang

    2013-01-01

    Full Text Available A novel method based on the Snakes model and the GrowCut algorithm is proposed to segment the liver region in abdominal CT images. First, building on the traditional GrowCut method, a pretreatment step using the K-means algorithm is conducted to reduce running time. Then, the segmentation result of our improved GrowCut approach is used as an initial contour for subsequent precise segmentation based on the Snakes model. Finally, several experiments are carried out to demonstrate the performance of the proposed approach, with comparisons against the traditional GrowCut algorithm. Experimental results show that the improved approach not only has better robustness and precision but is also more efficient than the traditional GrowCut method.
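    The K-means pretreatment can be illustrated on raw pixel intensities; a minimal 1-D sketch, not the paper's implementation (quantile initialisation of the centers is our simplifying, deterministic choice):

```python
import numpy as np

def kmeans_intensity(img, k=2, iters=20):
    """1-D k-means on pixel intensities: a cheap pretreatment that
    coarsely groups tissue classes before seeding GrowCut."""
    pix = np.asarray(img, dtype=float).ravel()
    centers = np.quantile(pix, np.linspace(0.0, 1.0, k))  # spread init
    for _ in range(iters):
        # assign each pixel to its nearest center, then recompute centers
        labels = np.argmin(np.abs(pix[:, None] - centers[None, :]), axis=1)
        centers = np.array([pix[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels.reshape(np.asarray(img).shape)
```

The resulting label map gives GrowCut a coarse foreground/background partition to refine, which is what cuts the running time relative to seeding it from scratch.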

  9. Image segmentation algorithm based on T-junctions cues

    Science.gov (United States)

    Qian, Yanyu; Cao, Fengyun; Wang, Lu; Yang, Xuejie

    2016-03-01

    To improve the over-segmentation and over-merging phenomena of single-image segmentation algorithms, a novel approach combining a graph-based algorithm with T-junction cues is proposed in this paper. First, L0 gradient minimization is applied to smooth the target image and eliminate artifacts caused by noise and texture detail. Then, an initial over-segmentation of the smoothed image is obtained using the graph-based algorithm. Finally, the final result is produced via a region fusion strategy driven by T-junction cues. Experimental results on a variety of images verify the new approach's effectiveness in eliminating artifacts caused by noise; segmentation accuracy and time complexity are both significantly improved.

  10. An Efficient Evolutionary Based Method For Image Segmentation

    OpenAIRE

    Aslanzadeh, Roohollah; Qazanfari, Kazem; Rahmati, Mohammad

    2017-01-01

    The goal of this paper is to present a new efficient image segmentation method based on evolutionary computation which is a model inspired from human behavior. Based on this model, a four layer process for image segmentation is proposed using the split/merge approach. In the first layer, an image is split into numerous regions using the watershed algorithm. In the second layer, a co-evolutionary process is applied to form centers of finals segments by merging similar primary regions. In the t...

  11. Parallel image encryption algorithm based on discretized chaotic map

    International Nuclear Information System (INIS)

    Zhou Qing; Wong Kwokwo; Liao Xiaofeng; Xiang Tao; Hu Yue

    2008-01-01

    Recently, a variety of chaos-based algorithms have been proposed for image encryption. Nevertheless, none of them works efficiently in a parallel computing environment. In this paper, we propose a framework for parallel image encryption. Based on this framework, a new algorithm is designed using the discretized Kolmogorov flow map. It fulfills all the requirements for a parallel image encryption algorithm. Moreover, it is secure and fast. These properties make it a good choice for image encryption on parallel computing platforms

  12. A Robust Transform Estimator Based on Residual Analysis and Its Application on UAV Aerial Images

    Directory of Open Access Journals (Sweden)

    Guorong Cai

    2018-02-01

    Full Text Available Estimating the transformation between two images from the same scene is a fundamental step for image registration, image stitching and 3D reconstruction. State-of-the-art methods are mainly based on sorted residuals for generating hypotheses. This scheme has acquired encouraging results in many remote sensing applications. Unfortunately, mainstream residual-based methods may fail in estimating the transform between Unmanned Aerial Vehicle (UAV) low-altitude remote sensing images, because UAV images often have repetitive patterns and severe viewpoint changes, which produce a lower inlier rate and a higher pseudo-outlier rate than other tasks. We performed extensive experiments and found that the main reason is that these methods compute feature-pair similarity within a fixed window, making them sensitive to the size of the residual window. To solve this problem, three schemes based on the distribution of residuals are proposed, called Relational Window (RW), Sliding Window (SW), and Reverse Residual Order (RRO), respectively. Specifically, RW employs a relaxed residual window size to evaluate the highest similarity within a relaxation model length. SW fixes the number of overlapping models while varying the length of the window. RRO takes the permutation of residual values into consideration when measuring similarity, not only counting the number of overlapping structures but also penalizing reverse orderings within them. Experimental results conducted on our own UAV high-resolution remote sensing images show that the proposed three strategies all outperform traditional methods in the presence of severe perspective distortion due to viewpoint change.

  13. HVS-based medical image compression

    Energy Technology Data Exchange (ETDEWEB)

    Kai Xie [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)]. E-mail: xie_kai2001@sjtu.edu.cn; Jie Yang [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China); Min Zhuyue [CREATIS-CNRS Research Unit 5515 and INSERM Unit 630, 69621 Villeurbanne (France); Liang Lixiao [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)

    2005-07-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical images has grown rapidly. However, commonly used compression methods cannot achieve satisfactory results. Methods: In this paper, based on established experiments and conclusions, the lifting scheme is used for wavelet decomposition. The physical and anatomical structure of human vision is considered and the contrast sensitivity function (CSF) is introduced as the main research issue in the human vision system (HVS); the main design points of the HVS model are then presented. On the basis of multi-resolution analysis via the wavelet transform, the paper applies the HVS, including the CSF characteristics, to the correlation-removing transform and quantization of the image, and proposes a new HVS-based medical image compression model. Results: The experiments are performed on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT, with respect to the PSNR metric, is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm achieves better subjective visual quality, and performs better than SPIHT in terms of compression ratio and coding/decoding time.
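    The PSNR metric used for the SPIHT comparison is standard; a minimal sketch for 8-bit images:

```python
import numpy as np

def psnr(reference, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images (peak = 255).
    Identical images give infinite PSNR; larger values mean less error."""
    err = np.asarray(reference, float) - np.asarray(compressed, float)
    mse = float(np.mean(err ** 2))
    return float('inf') if mse == 0.0 else 10.0 * np.log10(peak ** 2 / mse)
```

The record's point is precisely that PSNR and perceived quality can diverge: an HVS-weighted codec may lose on PSNR yet match SPIHT visually at a lower bit rate.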

  14. HVS-based medical image compression

    International Nuclear Information System (INIS)

    Kai Xie; Jie Yang; Min Zhuyue; Liang Lixiao

    2005-01-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical images has grown rapidly. However, commonly used compression methods cannot achieve satisfactory results. Methods: In this paper, based on established experiments and conclusions, the lifting scheme is used for wavelet decomposition. The physical and anatomical structure of human vision is considered and the contrast sensitivity function (CSF) is introduced as the main research issue in the human vision system (HVS); the main design points of the HVS model are then presented. On the basis of multi-resolution analysis via the wavelet transform, the paper applies the HVS, including the CSF characteristics, to the correlation-removing transform and quantization of the image, and proposes a new HVS-based medical image compression model. Results: The experiments are performed on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT, with respect to the PSNR metric, is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm achieves better subjective visual quality, and performs better than SPIHT in terms of compression ratio and coding/decoding time

  15. Adaptive Image Transmission Scheme over Wavelet-Based OFDM System

    Institute of Scientific and Technical Information of China (English)

    GAO Xinying; YUAN Dongfeng; ZHANG Haixia

    2005-01-01

    In this paper, an adaptive image transmission scheme is proposed over a Wavelet-based OFDM (WOFDM) system with Unequal error protection (UEP), achieved by the design of a non-uniform signal constellation in MLC. Two different data-division schemes, byte-based and bit-based, are analyzed and compared. Different bits are protected unequally according to their different contributions to image quality in the bit-based data-division scheme, which makes UEP combined with this scheme more powerful than with the byte-based scheme. Simulation results demonstrate that image transmission by UEP with the bit-based data-division scheme presents much higher PSNR values and markedly better image quality. Furthermore, considering the tradeoff between complexity and BER performance, the Haar wavelet, with the shortest compactly supported filter length, is the most suitable one among the orthogonal Daubechies wavelet series for our proposed system.

  16. POOR TEXTURAL IMAGE MATCHING BASED ON GRAPH THEORY

    Directory of Open Access Journals (Sweden)

    S. Chen

    2016-06-01

    Full Text Available Image matching lies at the heart of photogrammetry and computer vision. For poorly textured images, the matching result is affected by low contrast, repetitive patterns, discontinuity or occlusion, and few or homogeneous textures. Recently, graph matching has become popular for its integration of geometric and radiometric information. Focused on the poor-texture image matching problem, an edge-weight strategy is proposed to improve the graph matching algorithm. A series of experiments has been conducted covering four typical landscapes: forest, desert, farmland, and urban areas. It is experimentally found that the new algorithm achieves better performance: compared to SIFT, twice as many corresponding points were acquired, and the overall recall rate reached 68%, which verifies the feasibility and effectiveness of the algorithm.

  17. Subspace-Based Holistic Registration for Low-Resolution Facial Images

    Directory of Open Access Journals (Sweden)

    Boom BJ

    2010-01-01

    Full Text Available Subspace-based holistic registration is introduced as an alternative to landmark-based face registration, which has a poor performance on low-resolution images, as obtained in camera surveillance applications. The proposed registration method finds the alignment by maximizing the similarity score between a probe and a gallery image. We use a novel probabilistic framework for both user-independent as well as user-specific face registration. The similarity is calculated using the probability that the face image is correctly aligned in a face subspace, but additionally we take the probability into account that the face is misaligned based on the residual error in the dimensions perpendicular to the face subspace. We perform extensive experiments on the FRGCv2 database to evaluate the impact that the face registration methods have on face recognition. Subspace-based holistic registration on low-resolution images can improve face recognition in comparison with landmark-based registration on high-resolution images. The performance of the tested face recognition methods after subspace-based holistic registration on a low-resolution version of the FRGC database is similar to that after manual registration.

  18. A Geometric Dictionary Learning Based Approach for Fluorescence Spectroscopy Image Fusion

    Directory of Open Access Journals (Sweden)

    Zhiqin Zhu

    2017-02-01

    Full Text Available In recent years, sparse representation approaches have been integrated into multi-focus image fusion methods. The fused images of sparse-representation-based image fusion methods show great performance. Constructing an informative dictionary is a key step for a sparsity-based image fusion method. In order to ensure a sufficient number of useful bases for sparse representation in the process of informative dictionary construction, image patches from all source images are classified into different groups based on geometric similarities. The key information of each image-patch group is extracted by principal component analysis (PCA) to build the dictionary. According to the constructed dictionary, image patches are converted to sparse coefficients by the simultaneous orthogonal matching pursuit (SOMP) algorithm to represent the source multi-focus images. Finally, the sparse coefficients are fused by the Max-L1 fusion rule and inverted to obtain the fused image. Due to the limitation of the microscope, a fluorescence image cannot be fully focused. The proposed multi-focus image fusion solution is applied to the fluorescence imaging area to generate all-in-focus images. The comparative experimental results confirm the feasibility and effectiveness of the proposed multi-focus image fusion solution.
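    The Max-L1 fusion rule named in the record selects, patch by patch, the sparse-coefficient vector with the larger activity level (L1 norm); a minimal sketch in which the atoms-by-patches coefficient layout is our assumption:

```python
import numpy as np

def max_l1_fuse(coeffs_a, coeffs_b):
    """Max-L1 fusion rule: for every patch (column), keep the
    sparse-coefficient vector whose L1 norm -- its activity level --
    is larger. Inputs have shape (n_atoms, n_patches)."""
    pick_a = np.abs(coeffs_a).sum(axis=0) >= np.abs(coeffs_b).sum(axis=0)
    # broadcasting applies the per-patch choice to every atom row
    return np.where(pick_a, coeffs_a, coeffs_b)
```

In the full pipeline these fused coefficients would be multiplied by the learned dictionary and the reconstructed patches re-assembled into the all-in-focus image.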

  19. Image Segmentation Based on Constrained Spectral Variance Difference and Edge Penalty

    Directory of Open Access Journals (Sweden)

    Bo Chen

    2015-05-01

    Full Text Available Segmentation, which is usually the first step in object-based image analysis (OBIA), greatly influences the quality of final OBIA results. In many existing multi-scale segmentation algorithms, a common problem is that under-segmentation and over-segmentation coexist at any scale. To address this issue, we propose a new method that integrates the newly developed constrained spectral variance difference (CSVD) and the edge penalty (EP). First, initial segments are produced by a fast scan. Second, the generated segments are merged via a global mutual best-fitting strategy using the CSVD and EP as merging criteria. Finally, very small objects are merged with their nearest neighbors to eliminate the remaining noise. A series of experiments based on three sets of remote sensing images, each with a different spatial resolution, was conducted to evaluate the effectiveness of the proposed method. Both visual and quantitative assessments were performed, and the results show that large objects were better preserved as integral entities while small objects were still effectively delineated. The results were also found to be superior to those from eCognition's multi-scale segmentation.

  20. Tunnel conductance of Watson-Crick nucleoside-base pairs from telegraph noise

    International Nuclear Information System (INIS)

    Chang Shuai; He Jin; Lin Lisha; Zhang Peiming; Liang Feng; Huang Shuo; Lindsay, Stuart; Young, Michael

    2009-01-01

    The use of tunneling signals to sequence DNA is presently hampered by the small tunnel conductance of a junction spanning an entire DNA molecule. The design of a readout system that uses a shorter tunneling path requires knowledge of the absolute conductance across base pairs. We have exploited the stochastic switching of hydrogen-bonded DNA base-nucleoside pairs trapped in a tunnel junction to determine the conductance of individual molecular pairs. This conductance is found to be sensitive to the geometry of the junction, but a subset of the data appears to come from unstrained molecular pairs. The conductances determined from these pairs are within a factor of two of the predictions of density functional calculations. The experimental data reproduces the counterintuitive theoretical prediction that guanine-deoxycytidine pairs (3 H-bonds) have a smaller conductance than adenine-thymine pairs (2 H-bonds). A bimodal distribution of switching lifetimes shows that both H-bonds and molecule-metal contacts break.

  1. The tissue velocity imaging and strain rate imaging in the assessment of interatrial electromechanical conduction in patients with sick sinus syndrome before and after pacemaker implantation

    Directory of Open Access Journals (Sweden)

    Xiaozhi Zheng

    2011-05-01

    Full Text Available Tissue velocity imaging (TVI) and strain rate imaging (SRI) were recently introduced to quantify myocardial mechanical activity in patients receiving cardiac resynchronization therapy. To determine whether atrial-demand-based (AAI(R)) atrial pacing can fully simulate the electromechanical conduction of the physiological state, and to clarify which of TVI and SRI is more appropriate for assessing the electromechanical activity of the heart, 30 normal subjects and 31 patients with sick sinus syndrome (SSS) before and after AAI(R) pacemaker implantation (PI) were investigated in this study. The results showed that the time intervals (ms) P-SRa assessed by SRI (but not P-Va assessed by TVI) lengthened step by step from the lateral wall of the right atrium (RA) to the interatrial septum (IAS) and the left atrium (LA) in normal subjects (5.01±0.62, 17.05±3.54 and 45.09±12.26; p<0.01). P-Va and P-SRa did not differ at the RA, IAS and LA in patients with SSS before PI (p>0.05), and both were significantly longer than in normal subjects (p<0.01). However, they shortened to normal levels in patients with SSS after PI, and P-SRa again showed the trend of gradual prolongation from the RA and IAS to the LA. At the same time, the peak velocities and peak strain rates during atrial contraction also returned from lower levels to normal values. These data suggest that AAI(R) atrial pacing can successfully reverse the abnormal interatrial electromechanical conduction in patients with SSS, and that SRI is more appropriate than TVI for assessing the electromechanical activity of the atrial wall.

  2. Comparison of analyzer-based imaging computed tomography extraction algorithms and application to bone-cartilage imaging

    International Nuclear Information System (INIS)

    Diemoz, Paul C; Bravin, Alberto; Coan, Paola; Glaser, Christian

    2010-01-01

    In x-ray phase-contrast analyzer-based imaging, the contrast is provided by a combination of absorption, refraction and scattering effects. Several extraction algorithms, which attempt to separate and quantify these different physical contributions, have been proposed and applied. In a previous work, we presented a quantitative comparison of five of the best-known extraction algorithms based on the geometrical optics approximation applied to planar images: diffraction-enhanced imaging (DEI), extended diffraction-enhanced imaging (E-DEI), generalized diffraction-enhanced imaging (G-DEI), multiple-image radiography (MIR) and Gaussian curve fitting (GCF). In this paper, we compare these algorithms in the case of the computed tomography (CT) modality. The extraction algorithms are applied to analyzer-based CT images of both plastic phantoms and biological samples (cartilage-on-bone cylinders), and absorption, refraction and scattering signals are derived. Results obtained with the different algorithms may vary greatly, especially in the case of large refraction angles. We show that ABI-CT extraction algorithms can provide an excellent tool to enhance the visualization of cartilage internal structures, which may find applications in a clinical context. In addition, using the refraction images, the refractive index decrements for both the cartilage matrix and the cartilage cells have been estimated.

  3. Effect of Microstructure on Electrical Conductivity of Nickel-Base Superalloys

    Science.gov (United States)

    Nagarajan, Balasubramanian; Castagne, Sylvie; Annamalai, Swaminathan; Fan, Zheng; Chan, Wai Luen

    2017-08-01

    Eddy current spectroscopy is one of the promising non-destructive methods for residual stress evaluation along the depth of subsurface-treated nickel-base superalloys, but it is limited by its sensitivity to microstructure. This paper studies the influence of microstructure on the electrical conductivity of two nickel-base alloys, RR1000 and IN100. Different microstructures were attained using heat treatment cycles ranging from solution annealing to aging, with varying aging time and temperature. Eddy current conductivity was measured using conductivity probes of frequencies ranging between 1 and 5 MHz. Qualitative and quantitative characterization of the microstructure was performed using optical and scanning electron microscopes. For the heat treatment conditions between the solution annealing and the peak aging, the electrical conductivity of RR1000 increased by 6.5 pct, which is duly substantiated by the corresponding increase in hardness (12 pct) and the volume fraction of γ' precipitates (41 pct). A similar conductivity rise of 2.6 pct for IN100 is in agreement with the increased volume fraction of γ' precipitates (12.5 pct) despite an insignificant hardening between the heat treatment conditions. The observed results with RR1000 and IN100 highlight the sensitivity of electrical conductivity to the minor microstructure variations, especially the volume fraction of γ' precipitates, within the materials.

  4. A 4DCT imaging-based breathing lung model with relative hysteresis

    Energy Technology Data Exchange (ETDEWEB)

    Miyawaki, Shinjiro; Choi, Sanghun [IIHR – Hydroscience & Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Hoffman, Eric A. [Department of Biomedical Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Department of Medicine, The University of Iowa, Iowa City, IA 52242 (United States); Department of Radiology, The University of Iowa, Iowa City, IA 52242 (United States); Lin, Ching-Long, E-mail: ching-long-lin@uiowa.edu [IIHR – Hydroscience & Engineering, The University of Iowa, Iowa City, IA 52242 (United States); Department of Mechanical and Industrial Engineering, The University of Iowa, 3131 Seamans Center, Iowa City, IA 52242 (United States)

    2016-12-01

    To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. - Highlights: • We developed a breathing human lung CFD model based on 4D-dynamic CT images. • The 4DCT-based breathing lung model is able to capture lung relative hysteresis. • A new boundary condition for lung model based on one static CT image was proposed. • The difference between lung models based on 4D and static CT images was quantified.

  5. Neutron imaging system based on a video camera

    International Nuclear Information System (INIS)

    Dinca, M.

    2004-01-01

    The non-destructive testing of materials with cold, thermal, epithermal or fast neutrons is nowadays increasingly useful, because the worldwide level of industrial development requires considerably higher standards of quality for manufactured products and reliability of technological processes, especially where any deviation from standards could result in large-scale catastrophic consequences or human losses. Because they are easily obtained and discriminate very well between the materials they penetrate, thermal neutrons are the most widely used probe. The methods involved in this technique have advanced from neutron radiography based on converter screens and radiological films to neutron radioscopy based on video cameras, that is, from static images to dynamic images. Many neutron radioscopy systems have been used in the past with various levels of success. The quality of an image depends on the quality of the neutron beam and the type of the neutron imaging system. For real-time investigations, tube-type cameras, CCD cameras and, more recently, CID cameras are used; they capture the image from an appropriate scintillator via a mirror. The analog signal of the camera is converted into a digital signal by the signal-processing electronics included in the camera, and an image acquisition card (frame grabber) in a PC converts the digital signal into an image, which is then formatted and processed by image analysis software. The scanning position of the object is controlled by the computer, which commands the electrical motors that move the object table horizontally and vertically and rotate it. Based on this system, many static image acquisitions, real-time non-destructive investigations of dynamic processes and, finally, tomographic investigations of small objects can be done in a short time. A system based on a CID camera is presented; fundamental differences between CCD and CID cameras lie in their pixel readout structure and technique.

  6. Synergistic Instance-Level Subspace Alignment for Fine-Grained Sketch-Based Image Retrieval.

    Science.gov (United States)

    Li, Ke; Pang, Kaiyue; Song, Yi-Zhe; Hospedales, Timothy M; Xiang, Tao; Zhang, Honggang

    2017-08-25

    We study the problem of fine-grained sketch-based image retrieval. By performing instance-level (rather than category-level) retrieval, it embodies a timely and practical application, particularly with the ubiquitous availability of touchscreens. Three factors contribute to the challenging nature of the problem: (i) free-hand sketches are inherently abstract and iconic, making visual comparisons with photos difficult, (ii) sketches and photos are in two different visual domains, i.e. black and white lines vs. color pixels, and (iii) fine-grained distinctions are especially challenging when executed across domain and abstraction-level. To address these challenges, we propose to bridge the image-sketch gap both at the high-level via parts and attributes, as well as at the low-level, via introducing a new domain alignment method. More specifically, (i) we contribute a dataset with 304 photos and 912 sketches, where each sketch and image is annotated with its semantic parts and associated part-level attributes. With the help of this dataset, we investigate (ii) how strongly-supervised deformable part-based models can be learned that subsequently enable automatic detection of part-level attributes, and provide pose-aligned sketch-image comparisons. To reduce the sketch-image gap when comparing low-level features, we also (iii) propose a novel method for instance-level domain-alignment, that exploits both subspace and instance-level cues to better align the domains. Finally (iv) these are combined in a matching framework integrating aligned low-level features, mid-level geometric structure and high-level semantic attributes. Extensive experiments conducted on our new dataset demonstrate effectiveness of the proposed method.

  7. Can Electrical Resistance Tomography be used for imaging unsaturated moisture flow in cement-based materials with discrete cracks?

    International Nuclear Information System (INIS)

    Smyl, Danny; Rashetnia, Reza; Seppänen, Aku; Pour-Ghaz, Mohammad

    2017-01-01

    Previously, it has been shown that Electrical Resistance Tomography (ERT) can be used for monitoring moisture flow in undamaged cement-based materials. In this work, we investigate whether ERT could be used for imaging three-dimensional (3D) unsaturated moisture flow in cement-based materials that contain discrete cracks. Novel computational methods based on the so-called absolute imaging framework are developed and used in ERT image reconstructions, aiming at a better tolerance of the reconstructed images with respect to the complexity of the conductivity distribution in cracked material. ERT is first tested using specimens with physically simulated cracks of known geometries, and corroborated with numerical simulations of unsaturated moisture flow. Next, specimens with loading-induced cracks are imaged; here, ERT reconstructions are evaluated qualitatively based on visual observations and known properties of unsaturated moisture flow. Results indicate that ERT is a viable method of visualizing 3D unsaturated moisture flow in cement-based materials with discrete cracks. - Highlights: • 3D EIT is developed to visualize water ingress in cracked mortar. • Mortar with different size discrete cracks are used. • The EIT results are corroborated with numerical simulations. • EIT results accurately show the temporal and spatial variation of water content. • EIT is shown to be a viable method to monitor flow in cracks and matrix.

  8. Central Motor Conduction Studies and Diagnostic Magnetic Resonance Imaging in Children with Severe Primary and Secondary Dystonia

    Science.gov (United States)

    McClelland, Verity; Mills, Kerry; Siddiqui, Ata; Selway, Richard; Lin, Jean-Pierre

    2011-01-01

    Aim: Dystonia in childhood has many causes. Imaging may suggest corticospinal tract dysfunction with or without coexistent basal ganglia damage. There are very few published neurophysiological studies on children with dystonia; one previous study has focused on primary dystonia. We investigated central motor conduction in 62 children (34 males, 28…

  9. Remote Sensing Image Enhancement Based on Non-subsampled Shearlet Transform and Parameterized Logarithmic Image Processing Model

    Directory of Open Access Journals (Sweden)

    TAO Feixiang

    2015-08-01

    Full Text Available Aiming at remote sensing images with dark regions and low contrast, a remote sensing image enhancement method based on the non-subsampled Shearlet transform and the parameterized logarithmic image processing model is proposed in this paper to improve the visual effect and interpretability of remote sensing images. First, a remote sensing image is decomposed into a low-frequency component and high-frequency components by the non-subsampled Shearlet transform. Then the low-frequency component is enhanced according to the PLIP (parameterized logarithmic image processing) model, which can improve the contrast of the image, while an improved fuzzy enhancement method is used to enhance the high-frequency components in order to highlight edge and detail information. A large number of experimental results show that, compared with five image enhancement methods such as bidirectional histogram equalization, a method based on the stationary wavelet transform and a method based on the non-subsampled contourlet transform, the proposed method has advantages in both subjective visual effect and objective quantitative evaluation indexes such as contrast and definition, and can more effectively improve the contrast of remote sensing images and enhance edges and texture details.
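The PLIP model referred to above replaces ordinary addition and scalar multiplication with logarithmic-type operations that keep gray levels within range. A minimal sketch, assuming the common formulation with graytone range parameter M = 256 for 8-bit images (the paper's exact parameterization may differ):

```python
# Minimal sketch of PLIP (parameterized logarithmic image processing) operations;
# M = 256 is an assumed graytone range parameter for 8-bit images.
M = 256.0

def plip_add(g1, g2):
    """PLIP addition of two graytone values: the result stays within [0, M)."""
    return g1 + g2 - g1 * g2 / M

def plip_scale(c, g):
    """PLIP scalar multiplication: c 'copies' of graytone g."""
    return M - M * (1.0 - g / M) ** c

print(plip_add(100.0, 200.0))   # 221.875, instead of clipping at 255
print(plip_scale(2, 100.0))     # 160.9375, equal to plip_add(100, 100)
```

Enhancing the low-frequency component with these operations boosts contrast without the saturation artifacts of plain linear stretching.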

  10. High-speed photoacoustic imaging using an LED-based photoacoustic imaging system

    Science.gov (United States)

    Sato, Naoto; Kuniyil Ajith Singh, Mithun; Shigeta, Yusuke; Hanaoka, Takamitsu; Agano, Toshitaka

    2018-02-01

    Recently we developed a multispectral LED-based photoacoustic/ultrasound imaging system (AcousticX) and have been continuously working on its technical and functional improvements. AcousticX is based on a linear-array ultrasound transducer (128 elements, 10 MHz) with LED arrays (selectable wavelengths, pulse repetition frequency: 4 kHz, pulse width: tunable from 40-100 ns) fixed on both sides of the transducer to illuminate the tissue for photoacoustic imaging. The ultrasound/photoacoustic data from all 128 elements can be simultaneously acquired, processed and displayed. We have already demonstrated the system's capability to perform dynamic photoacoustic/ultrasound imaging of tissue at a frame rate of 10 Hz (for example, to visualize the pulsation of arteries in vivo in human subjects). In this work, we present the development of a new high-speed imaging mode in AcousticX. In this mode, instead of toggling between ultrasound and photoacoustic measurements, it is possible to continuously acquire only photoacoustic data for 1.5 seconds with a time interval of 1 ms. With this improvement, we can record photoacoustic signals from the whole aperture (38 mm) at a fast rate, and the data can be reviewed later at different speeds to analyze dynamic changes in the photoacoustic signals. We believe that AcousticX with this new high-speed mode opens up a feasible technical path for multiple dynamic studies, for example those focusing on imaging the response of voltage-sensitive dyes. We aim to improve the acquisition speed further in the future to explore ultra-high-speed applications.

  11. An Ibm PC/AT-Based Image Acquisition And Processing System For Quantitative Image Analysis

    Science.gov (United States)

    Kim, Yongmin; Alexander, Thomas

    1986-06-01

    In recent years, a large number of applications have been developed for image processing systems in the area of biological imaging. We have already finished the development of a dedicated microcomputer-based image processing and analysis system for quantitative microscopy. The system's primary function has been to facilitate and ultimately automate quantitative image analysis tasks such as the measurement of cellular DNA contents. We have recognized from this development experience, and interaction with system users, biologists and technicians, that the increasingly widespread use of image processing systems, and the development and application of new techniques for utilizing the capabilities of such systems, would generate a need for some kind of inexpensive general purpose image acquisition and processing system specially tailored for the needs of the medical community. We are currently engaged in the development and testing of hardware and software for a fairly high-performance image processing computer system based on a popular personal computer. In this paper, we describe the design and development of this system. Biological image processing computer systems have now reached a level of hardware and software refinement where they could become convenient image analysis tools for biologists. The development of a general purpose image processing system for quantitative image analysis that is inexpensive, flexible, and easy-to-use represents a significant step towards making the microscopic digital image processing techniques more widely applicable not only in a research environment as a biologist's workstation, but also in clinical environments as a diagnostic tool.

  12. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    Full Text Available The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method, named average gait differential image (AGDI), is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method lies in the fact that, as a feature image, it can preserve both the kinetic and the static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with less memory consumption in gait-based recognition.
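The AGDI construction described above, accumulating silhouette differences between adjacent frames, can be sketched as follows on toy 0/1 grids (real silhouettes are full-size binary masks extracted from video):

```python
# Sketch of the AGDI idea: accumulate absolute differences between adjacent
# binary silhouette frames, then average over the number of differences.
def agdi(frames):
    """Average gait differential image over a silhouette sequence."""
    h, w = len(frames[0]), len(frames[0][0])
    acc = [[0.0] * w for _ in range(h)]
    for prev, curr in zip(frames, frames[1:]):
        for i in range(h):
            for j in range(w):
                acc[i][j] += abs(curr[i][j] - prev[i][j])
    n = len(frames) - 1  # number of adjacent-frame differences
    return [[v / n for v in row] for row in acc]

# Toy 3-frame sequence: a "limb" pixel toggles, the "torso" pixels stay on.
f0 = [[1, 0], [1, 0]]
f1 = [[1, 1], [1, 0]]
f2 = [[1, 0], [1, 0]]
print(agdi([f0, f1, f2]))  # [[0.0, 1.0], [0.0, 0.0]]
```

Moving regions (limbs) accumulate high values while static regions (torso) stay near zero, which is why the AGDI captures both kinetic and static structure.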

  13. Using lean Six Sigma to improve hospital based outpatient imaging satisfaction.

    Science.gov (United States)

    McDonald, Angelic P; Kirk, Randy

    2013-01-01

    Within the hospital-based imaging department at Methodist Willowbrook, outpatient, inpatient, and emergency patients are all imaged on the same equipment with the same staff. The critical nature of the patient is the deciding factor as to who gets done first and in what order procedures are performed. After an aggressive adoption of Intentional Tools, the imaging department was finally able to move from a two-year mean Press Ganey outpatient satisfaction score of 91.2 and a UHC percentile ranking of 37th to a mean score of 92.1 and a corresponding UHC ranking of the 60th percentile. It was at the 60th percentile ranking that the department flatlined. Using the Six Sigma DMAIC process, opportunities for further improvement were identified. A two-week focused pilot was conducted specifically on areas identified through the Six Sigma process, and the department was able to jump to the 88th percentile ranking with a mean of 93.7. With pay for performance focusing on outpatient satisfaction and a financial incentive for improving and maintaining the highest scores, it was important to know where the imaging department should apply its financial resources to obtain the greatest impact.

  14. Aspect-Aided Dynamic Non-Negative Sparse Representation-Based Microwave Image Classification

    Directory of Open Access Journals (Sweden)

    Xinzheng Zhang

    2016-09-01

    Full Text Available Classification of target microwave images is an important application in many areas such as security and surveillance. For the task of microwave image classification, a recognition algorithm based on aspect-aided dynamic non-negative least squares (ADNNLS) sparse representation is proposed. First, an aspect sector is determined, the center of which is the estimated aspect angle of the testing sample. The training samples in the aspect sector are divided into active atoms and inactive atoms by smooth self-representative learning. Second, for each testing sample, the corresponding active atoms are selected dynamically, thereby establishing a dynamic dictionary. Third, the testing sample is represented with ℓ1-regularized non-negative sparse representation under the corresponding dynamic dictionary. Finally, the class label of the testing sample is identified by the minimum reconstruction error. Verification of the proposed algorithm was conducted using the Moving and Stationary Target Acquisition and Recognition (MSTAR) database, acquired by synthetic aperture radar. Experimental results validated that the proposed approach is able to capture the local aspect characteristics of microwave images effectively, thereby improving the classification performance.
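A hedged sketch of the classification-by-reconstruction idea: non-negative coding of the test sample against each class dictionary, then the label with the smallest residual wins. This uses a plain projected-gradient solver and omits the ℓ1 term and the aspect-aided dynamic dictionary selection of the actual ADNNLS algorithm:

```python
# Toy non-negative coding + minimum-reconstruction-error classification.
# Not the paper's solver: a plain projected-gradient sketch without the l1 term.
def matvec(D, x):
    return [sum(d * xi for d, xi in zip(row, x)) for row in D]

def nn_code(D, y, steps=500, lr=0.05):
    """Minimise ||y - D x||^2 subject to x >= 0 (projected gradient descent)."""
    n = len(D[0])
    x = [0.0] * n
    for _ in range(steps):
        r = [yi - pi for yi, pi in zip(y, matvec(D, x))]   # residual
        grad = [-2.0 * sum(D[i][j] * r[i] for i in range(len(D)))
                for j in range(n)]
        x = [max(0.0, xj - lr * gj) for xj, gj in zip(x, grad)]  # project x >= 0
    return x

def classify(dicts, y):
    """Pick the class whose dictionary reconstructs y with the least error."""
    errs = []
    for D in dicts:
        x = nn_code(D, y)
        r = [yi - pi for yi, pi in zip(y, matvec(D, x))]
        errs.append(sum(ri * ri for ri in r))
    return errs.index(min(errs))

# Two toy one-atom class dictionaries (columns are atoms) and a test sample.
D0 = [[1.0], [0.0]]   # class 0 atom points along axis 0
D1 = [[0.0], [1.0]]   # class 1 atom points along axis 1
print(classify([D0, D1], [0.1, 0.9]))  # 1
```

In the real algorithm the dictionaries are built per test sample from the dynamically selected active atoms within the aspect sector.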

  15. Smart Images Search based on Visual Features Fusion

    International Nuclear Information System (INIS)

    Saad, M.H.

    2013-01-01

    Image search engines attempt to give fast and accurate access to the huge number of images available on the Internet. There have been a number of efforts to build search engines based on image content to enhance search results. Content-Based Image Retrieval (CBIR) systems have attracted great interest since multimedia files, such as images and videos, have dramatically entered our lives throughout the last decade. CBIR allows automatically extracting target images according to objective visual contents of the image itself, for example its shapes, colors and textures, to provide more accurate ranking of the results. Recent CBIR approaches differ in which image features are extracted and used as image descriptors for the matching process. This thesis proposes improvements to the efficiency and accuracy of CBIR systems by integrating different types of image features, addressing efficient retrieval of images in large image collections. A comparative study of recent CBIR techniques is provided; according to this study, image features need to be integrated to provide a more accurate description of image content and better image retrieval accuracy. In this context, this thesis presents new image retrieval approaches that provide more accurate retrieval than previous approaches. The first proposed image retrieval system uses color, texture and shape descriptors to form a global features vector: it integrates the YCbCr color histogram as a color descriptor, the modified Fourier descriptor as a shape descriptor and the modified edge histogram as a texture descriptor in order to enhance the retrieval results. The second proposed approach integrates the global features vector used in the first approach with the SURF salient point technique as a local feature. The nearest neighbor matching algorithm with a proposed similarity measure is applied to determine the final image rank.
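The feature-fusion idea of the first proposed system can be caricatured as concatenating weighted per-type descriptors and ranking by distance. This is only an illustrative skeleton with made-up numbers; the thesis's descriptors (YCbCr histogram, modified Fourier descriptor, modified edge histogram) and its similarity measure are more elaborate:

```python
# Toy global-features CBIR: concatenate weighted color/texture/shape vectors,
# then rank gallery images by Euclidean distance to the query.
def fuse(color, texture, shape, w=(1.0, 1.0, 1.0)):
    """Concatenate the three descriptor types into one global features vector."""
    return ([w[0] * c for c in color] + [w[1] * t for t in texture]
            + [w[2] * s for s in shape])

def rank(query, gallery):
    """Return gallery indices sorted from most to least similar to the query."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return sorted(range(len(gallery)), key=lambda i: dist(query, gallery[i]))

q = fuse([0.2, 0.8], [0.5], [0.1])
g = [fuse([0.9, 0.1], [0.4], [0.7]),   # dissimilar image
     fuse([0.2, 0.7], [0.5], [0.1])]   # near-duplicate of the query
print(rank(q, g))  # [1, 0]: the second gallery image is closer to the query
```

The weights `w` stand in for the relative importance the system assigns to each descriptor type.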

  16. Tissues segmentation based on multi spectral medical images

    Science.gov (United States)

    Li, Ya; Wang, Ying

    2017-11-01

    In multispectral medical images, each band image contains the most distinctive features of particular tissues, according to the optical characteristics of different tissues in different spectral bands. In this paper, tissues were segmented using their spectral information in each band of the multispectral medical images. Four Local Binary Pattern descriptors were constructed to extract blood vessels based on the gray-level difference between the blood vessels and their neighbors. The segmented tissues from the individual band images were then merged into a single clear image.
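For reference, the classic 8-neighbour Local Binary Pattern code, of which the paper's four descriptors are variations, can be computed as:

```python
# Classic 8-neighbour Local Binary Pattern code for one interior pixel:
# each neighbour >= the centre contributes one bit to an 8-bit code.
def lbp_code(img, i, j):
    """Threshold the 8 neighbours of pixel (i, j) against its own value."""
    # clockwise from top-left; bit k corresponds to offsets[k]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    c = img[i][j]
    code = 0
    for bit, (di, dj) in enumerate(offsets):
        if img[i + di][j + dj] >= c:
            code |= 1 << bit
    return code

img = [[9, 9, 9],
       [1, 5, 1],
       [1, 1, 1]]
print(lbp_code(img, 1, 1))  # 7: only the three top neighbours are >= 5
```

A vessel pixel, being darker or brighter than its surroundings, yields a characteristic code distribution that the descriptors exploit.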

  17. A diagnostic imaging approach for online characterization of multi-impact in aircraft composite structures based on a scanning spatial-wavenumber filter of guided wave

    Science.gov (United States)

    Ren, Yuanqiang; Qiu, Lei; Yuan, Shenfang; Su, Zhongqing

    2017-06-01

    Monitoring of impact, and multi-impact in particular, in aircraft composite structures has been an intensive research topic in the field of guided-wave-based structural health monitoring (SHM). Compared with the majority of existing methods, such as those using signal features in the time, frequency or joint time-frequency domain, the approach based on a spatial-wavenumber filter of guided waves shows a superb advantage in effectively distinguishing particular wave modes and identifying their propagation direction relative to the sensor array. However, two major issues arise when conducting online characterization of a multi-impact event. First, the spatial-wavenumber filter must be realized in a situation where the high-spatial-resolution wavenumber of the complicated multi-impact signal cannot be measured or modeled. Second, it is difficult to identify the multiple impacts and localize them because their wavenumbers overlap. To address these issues, a scanning spatial-wavenumber filter based diagnostic imaging method for online characterization of multi-impact events is proposed in this paper to perform multi-impact imaging and localization. The principle of the scanning filter for multi-impact is developed first to conduct spatial-wavenumber filtering and to achieve wavenumber-time imaging of the multiple impacts. Then, a feature identification method for multi-impact based on eigenvalue decomposition and wavenumber searching is presented to estimate the number of impacts and calculate the wavenumbers of the multi-impact signal, and an image mapping method is proposed to convert the wavenumber-time image to an angle-distance image to distinguish and locate the multiple impacts. A series of multi-impact events applied to a carbon fiber laminate plate was used to validate the proposed methods. The validation results show that localization of the multiple impacts is achieved accurately.

  18. Image based rendering of iterated function systems

    NARCIS (Netherlands)

    Wijk, van J.J.; Saupe, D.

    2004-01-01

    A fast method to generate fractal imagery is presented. Iterated function systems (IFS) are based on repeatedly copying transformed images. We show that this can be directly translated into standard graphics operations: each image is generated by texture mapping and blending copies of the previous image.
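The "repeatedly copying transformed images" idea can be illustrated on a binary grid with the three Sierpinski-triangle maps; each iteration replaces the image with scaled copies of itself (the paper does this with texture mapping and blending on graphics hardware, not pixel loops):

```python
# Toy deterministic-IFS iteration on a binary grid: each step draws half-size
# copies of the previous image under the three Sierpinski-triangle maps.
def ifs_step(img):
    n = len(img)
    out = [[0] * n for _ in range(n)]
    h = n // 2
    for i in range(n):
        for j in range(n):
            if img[i][j]:
                si, sj = i // 2, j // 2          # half-size copy of pixel (i, j)
                out[si][sj] = 1                  # top-left map
                out[si + h][sj] = 1              # bottom-left map
                out[si + h][sj + h] = 1          # bottom-right map
    return out

img = [[1] * 8 for _ in range(8)]                # start from a full square
for _ in range(3):
    img = ifs_step(img)
print(sum(map(sum, img)))  # 27: 3^3 sub-squares survive after 3 iterations
```

Regardless of the starting image, iterating converges to the attractor of the IFS, which is what makes the copy-and-blend rendering loop work.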

  19. Efficient Image Blur in Web-Based Applications

    DEFF Research Database (Denmark)

    Kraus, Martin

    2010-01-01

    Scripting languages require the use of high-level library functions to implement efficient image processing; thus, real-time image blur in web-based applications is a challenging task unless specific library functions are available for this purpose. We present a pyramid blur algorithm, which can be implemented efficiently with such high-level library functions.
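A pyramid blur, in its simplest form, downsamples by averaging and upsamples by replication; one round-trip acts as a cheap box-like blur. A minimal sketch on square images of even size (the paper's algorithm, built on web graphics primitives, is certainly more refined):

```python
# Sketch of the pyramid-blur idea: 2x2 average downsampling followed by
# pixel-replication upsampling; a round-trip smooths the image cheaply.
def downsample(img):
    """Halve each dimension by averaging 2x2 blocks."""
    n = len(img)
    return [[(img[2*i][2*j] + img[2*i][2*j+1] +
              img[2*i+1][2*j] + img[2*i+1][2*j+1]) / 4.0
             for j in range(n // 2)] for i in range(n // 2)]

def upsample(img):
    """Double each dimension by replicating every pixel 2x2."""
    return [[v for v in row for _ in (0, 1)] for row in img for _ in (0, 1)]

def pyramid_blur(img, levels=1):
    for _ in range(levels):
        img = upsample(downsample(img))
    return img

img = [[0.0, 4.0], [4.0, 8.0]]
print(pyramid_blur(img))  # [[4.0, 4.0], [4.0, 4.0]]
```

More levels of the pyramid give a wider blur radius at roughly constant cost per pixel, which is what makes the approach attractive when only high-level resize operations are fast.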

  20. A fractal-based image encryption system

    KAUST Repository

    Abd-El-Hafiz, S. K.

    2014-12-01

    This study introduces a novel image encryption system based on diffusion and confusion processes in which the image information is hidden inside the complex details of fractal images. A simplified encryption technique is, first, presented using a single-fractal image and statistical analysis is performed. A general encryption system utilising multiple fractal images is, then, introduced to improve the performance and increase the encryption key up to hundreds of bits. This improvement is achieved through several parameters: feedback delay, multiplexing and independent horizontal or vertical shifts. The effect of each parameter is studied separately and, then, they are combined to illustrate their influence on the encryption quality. The encryption quality is evaluated using different analysis techniques such as correlation coefficients, differential attack measures, histogram distributions, key sensitivity analysis and the National Institute of Standards and Technology (NIST) statistical test suite. The obtained results show great potential compared to other techniques.
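The diffusion stage described above can be sketched as XOR-ing image bytes with a key stream drawn from the pixel values of a fractal image. The small Mandelbrot-style escape-time fractal and its parameters are assumptions for illustration, not the paper's actual construction.

```python
# Fractal-based diffusion sketch: plaintext bytes are XOR-ed with escape-time
# values of a fractal image; XOR-ing again with the same stream decrypts.

def fractal_key_stream(n, width=16, height=16, max_iter=255):
    """Escape-time values of a small Mandelbrot grid, repeated to length n."""
    values = []
    for j in range(height):
        for i in range(width):
            c = complex(-2.0 + 2.5 * i / width, -1.25 + 2.5 * j / height)
            z, k = 0j, 0
            while abs(z) <= 2 and k < max_iter:
                z = z * z + c
                k += 1
            values.append(k % 256)
    return [values[i % len(values)] for i in range(n)]

def xor_cipher(data, key):
    return bytes(b ^ k for b, k in zip(data, key))

plain = bytes(range(64))                     # stand-in for image bytes
key = fractal_key_stream(len(plain))
cipher = xor_cipher(plain, key)
recovered = xor_cipher(cipher, key)          # XOR is its own inverse
print(recovered == plain)                    # True
```

The paper's confusion stage (shifts, multiplexing, feedback delay) would be layered on top of this diffusion step.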

  1. Silhouette-based approach of 3D image reconstruction for automated image acquisition using robotic arm

    Science.gov (United States)

    Azhar, N.; Saad, W. H. M.; Manap, N. A.; Saad, N. M.; Syafeeza, A. R.

    2017-06-01

    This study presents an approach to 3D image reconstruction using an autonomous robotic arm for the image acquisition process. A low-cost automated imaging platform is created using a pair of G15 servo motors connected in series to an Arduino UNO as the main microcontroller. Two sets of sequential images were obtained using different projection angles of the camera. A silhouette-based approach is used for 3D reconstruction from the sequential images captured from several different angles of the object. In addition, an analysis of the effect of the number of sequential images on the accuracy of the 3D model reconstruction was carried out with a fixed projection angle of the camera. The factors affecting the 3D reconstruction are discussed, and the overall result of the analysis is summarized for the prototype imaging platform.
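The core of silhouette-based reconstruction is carving: a voxel survives only if it projects inside the object's silhouette in every view. The orthographic views along the x- and y-axes below are an illustrative simplification of the camera geometry in this record.

```python
# Minimal visual-hull carving sketch on a tiny voxel grid with two
# orthographic silhouettes (views along the +x and +y axes).

def carve(silhouette_x, silhouette_y, size):
    """silhouette_x[(y, z)] / silhouette_y[(x, z)] are sets of filled pixels."""
    return {(x, y, z)
            for x in range(size) for y in range(size) for z in range(size)
            if (y, z) in silhouette_x and (x, z) in silhouette_y}

size = 8
# Ground-truth object: a 2x2x2 cube in the corner of the volume.
cube = {(x, y, z) for x in range(2) for y in range(2) for z in range(2)}
sil_x = {(y, z) for (_, y, z) in cube}     # view along +x
sil_y = {(x, z) for (x, _, z) in cube}     # view along +y

hull = carve(sil_x, sil_y, size)
print(cube <= hull)   # True: the hull always contains the true object
```

Adding more viewing angles, as the study does with its rotating arm, tightens the hull around the true shape; with only two views it may overestimate concave objects.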

  2. [Mentalization Based Treatment of an Adolescent Girl with Conduct Disorder].

    Science.gov (United States)

    Reiter, Melanie; Bock, Astrid; Althoff, Marie-Luise; Taubner, Svenja; Sevecke, Kathrin

    2017-05-01

    This paper gives a short overview of the theoretical concept of mentalization and its specific characteristics in adolescence. A previous study on mentalization based treatment for adolescents (MBT-A) demonstrated its effectiveness for the treatment of adolescents with symptoms of deliberate self-harm (Rossouw & Fonagy, 2012). Based on the results of this study, Taubner, Gablonski, Sevecke, and Volkert (in preparation) developed a manual for mentalization based treatment for adolescents with conduct disorders (MBT-CD). This manual represents the foundation for a future study on the efficacy of MBT-A for this specific disorder in young people. The present case report demonstrates the application of specific MBT interventions, as well as the therapeutic course over one year, in a 16-year-old girl who fulfilled all criteria of a conduct disorder. During the course of treatment, the de-escalating, relationship-oriented therapeutic approach proved to be a great strength of MBT-A, especially for patients with conduct disorders. The clinical picture, as well as the psychological assessment, showed positive progress over the course of treatment. Despite frequent escalations, forced placements due to acute endangerment of self and others, and a precarious situation concerning the patient's place of residence towards the end of therapy, MBT-A treatment enabled the patient to continually use the evolved mentalizing capabilities as a resource.

  3. Scene matching based on non-linear pre-processing on reference image and sensed image

    Institute of Scientific and Technical Information of China (English)

    Zhong Sheng; Zhang Tianxu; Sang Nong

    2005-01-01

    To solve the heterogeneous image scene matching problem, a non-linear pre-processing method applied to the original images before intensity-based correlation is proposed. The results show that the probability of a correct match is raised greatly. Especially for low-S/N image pairs, the effect is even more remarkable.
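The intensity-based correlation step above can be sketched with normalized cross-correlation (NCC). The record does not describe its non-linear pre-processing, so the sign-of-deviation transform below is an illustrative stand-in, not the paper's method.

```python
# NCC scene-matching sketch: slide a template over a sensed image and pick the
# offset with the highest normalized cross-correlation, after an illustrative
# non-linear pre-processing step (sign of deviation from the patch mean).

def preprocess(patch):
    mean = sum(patch) / len(patch)
    return [1.0 if p > mean else -1.0 for p in patch]

def ncc(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def match(image, width, template, twidth):
    theight = len(template) // twidth
    height = len(image) // width
    best, best_pos = -2.0, None
    for oy in range(height - theight + 1):
        for ox in range(width - twidth + 1):
            patch = [image[(oy + j) * width + (ox + i)]
                     for j in range(theight) for i in range(twidth)]
            score = ncc(preprocess(patch), preprocess(template))
            if score > best:
                best, best_pos = score, (ox, oy)
    return best_pos

# An 8x8 image with a bright 3x3 blob whose top-left corner is at (4, 2).
img = [0] * 64
for j in range(3):
    for i in range(3):
        img[(2 + j) * 8 + (4 + i)] = 200
tmpl = [img[(2 + j) * 8 + (4 + i)] for j in range(4) for i in range(4)]
print(match(img, 8, tmpl, 4))  # (4, 2)
```

The point of the non-linearity is that the correlation then compares structure rather than raw intensities, which is what makes matching across heterogeneous sensors feasible.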

  4. Comparing Four Touch-Based Interaction Techniques for an Image-Based Audience Response System

    NARCIS (Netherlands)

    Jorritsma, Wiard; Prins, Jonatan T.; van Ooijen, Peter M. A.

    2015-01-01

    This study aimed to determine the most appropriate touch-based interaction technique for I2Vote, an image-based audience response system for radiology education in which users need to accurately mark a target on a medical image. Four plausible techniques were identified: land-on, take-off,

  5. Graphene-based ultrasonic detector for photoacoustic imaging

    Science.gov (United States)

    Yang, Fan; Song, Wei; Zhang, Chonglei; Fang, Hui; Min, Changjun; Yuan, Xiaocong

    2018-03-01

    Taking advantage of optical absorption imaging contrast, photoacoustic imaging technology is able to map the volumetric distribution of the optical absorption properties within biological tissues. Unfortunately, traditional piezoceramics-based transducers used in most photoacoustic imaging setups have an inadequate frequency response, resulting in both poor depth resolution and inaccurate quantification of the optical absorption information. Instead of a piezoelectric ultrasonic transducer, we develop a graphene-based optical sensor for detecting photoacoustic pressure. The refractive index in the coupling medium is modulated by the photoacoustic pressure perturbation, which varies the polarization-sensitive optical absorption of the graphene. As a result, photoacoustic detection is realized by recording the reflectance intensity difference of polarized light. The graphene-based detector possesses an estimated noise-equivalent pressure (NEP) sensitivity of 550 Pa over a 20-MHz bandwidth with a nearly linear pressure response from 11.0 kPa to 53.0 kPa. Further, a graphene-based photoacoustic microscope is built and non-invasively, label-freely reveals the microvascular anatomy in mouse ears.

  6. An improved non-blind image deblurring method based on FoEs

    Science.gov (United States)

    Zhu, Qidan; Sun, Lei

    2013-03-01

    Traditional non-blind image deblurring algorithms usually use maximum a posteriori (MAP) estimation. MAP estimates involving natural image priors can reduce ringing effectively in contrast to maximum likelihood (ML) estimates, but they have been found lacking in terms of restoration performance. To address this issue, we replace the traditional MAP objective with MAP plus a KL penalty. We develop an image reconstruction algorithm that minimizes the KL divergence between the reference distribution and the prior distribution. The approximate KL penalty restrains the over-smoothing caused by MAP. We use three groups of images and Harris corner detection to evaluate our method. The experimental results show that our non-blind image restoration algorithm can effectively reduce the ringing effect and exhibits state-of-the-art deblurring results.

  7. Vision communications based on LED array and imaging sensor

    Science.gov (United States)

    Yoo, Jong-Ho; Jung, Sung-Yoon

    2012-11-01

    In this paper, we propose a brand new communication concept, called "vision communication", based on an LED array and an image sensor. The system consists of an LED array as the transmitter and a digital device that includes an image sensor, such as a CCD or CMOS sensor, as the receiver. In order to transmit data, the proposed communication scheme simultaneously uses digital image processing and optical wireless communication techniques. Therefore, a cognitive communication scheme is possible with the help of the recognition techniques used in vision systems. To increase the data rate, our scheme can use an LED array consisting of several multi-spectral LEDs. Because each LED can emit a multi-spectral optical signal in the visible, infrared and ultraviolet bands, the data rate can be increased in a manner similar to the WDM and MIMO techniques used in traditional optical and wireless communications. In addition, this multi-spectral capability also makes it possible to avoid optical noise in the communication environment. In our vision communication scheme, the data packet is composed of sync data and information data. The sync data is used to detect the transmitter area and calibrate the distorted image snapshots obtained by the image sensor. By matching the optical rate of the LED array to the frame rate (frames per second) of the image sensor, we can decode the information data included in each image snapshot based on image processing and optical wireless communication techniques. Through experiments on a practical test-bed system, we confirm the feasibility of the proposed vision communications based on an LED array and an image sensor.

  8. Image registration assessment in radiotherapy image guidance based on control chart monitoring.

    Science.gov (United States)

    Xia, Wenyao; Breen, Stephen L

    2018-04-01

    Image guidance with cone beam computed tomography in radiotherapy can guarantee the precision and accuracy of patient positioning prior to treatment delivery. During the image guidance process, operators must take great effort to evaluate the image guidance quality before correcting a patient's position. This work proposes an image registration assessment method based on control chart monitoring to reduce the effort taken by the operator. Using the control chart plotted from the daily registration scores of each patient, the proposed method can quickly detect both alignment errors and image quality inconsistency. Therefore, the proposed method provides a clear guideline for operators to identify unacceptable image quality and unacceptable image registration with minimal effort. Experimental results on a clinical database of 10 patients undergoing prostate radiotherapy demonstrate that the proposed method can quickly identify out-of-control signals and find the special causes of out-of-control registration events.
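The control-chart monitoring described above can be sketched as flagging daily scores that fall outside the mean plus or minus three sigma limits of a baseline period. The baseline length and the 3-sigma rule below are standard control-chart conventions, assumed here rather than taken from the paper.

```python
# Control-chart sketch for daily registration scores: points beyond the
# baseline mean +/- 3 sigma are flagged as out-of-control signals.

def control_limits(baseline):
    n = len(baseline)
    mean = sum(baseline) / n
    var = sum((x - mean) ** 2 for x in baseline) / (n - 1)
    sigma = var ** 0.5
    return mean - 3 * sigma, mean + 3 * sigma

def out_of_control(scores, baseline):
    lo, hi = control_limits(baseline)
    return [i for i, s in enumerate(scores) if s < lo or s > hi]

baseline = [0.82, 0.80, 0.81, 0.79, 0.83, 0.81, 0.80, 0.82]
daily = [0.81, 0.80, 0.45, 0.82]   # day 2 shows a gross misregistration
print(out_of_control(daily, baseline))  # [2]
```

A flagged day tells the operator to look for a special cause (patient motion, poor image quality) rather than treating the deviation as normal variation.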

  9. Image Blocking Encryption Algorithm Based on Laser Chaos Synchronization

    Directory of Open Access Journals (Sweden)

    Shu-Ying Wang

    2016-01-01

    In view of digital image transmission security, a novel image encryption scheme based on laser chaos synchronization and the Arnold cat map is proposed. A parameter generated from the pixel values of the plain image influences the secret key. Sequences of the drive system and the response system are pretreated by the same method to form a block-wise encryption scheme for the plain image. Finally, pixel positions are scrambled by a general Arnold transformation. In the decryption process, the chaotic synchronization accuracy is fully considered and the relationship between the synchronization effect and decryption is analyzed; the scheme has high precision, high efficiency, simplicity, flexibility, and good controllability. The experimental results show that the image encryption algorithm has high security and good anti-jamming performance.
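The Arnold cat map used for the pixel-position scrambling stage is concrete enough to sketch: (x, y) maps to (x + y, x + 2y) mod N, which is a bijection on an N x N grid, and its inverse unscrambles exactly. This shows only the scrambling stage, not the chaos-synchronization key generation.

```python
# Arnold cat map sketch: scramble pixel positions on an N x N grid and recover
# them exactly with the inverse map (the matrix [[1,1],[1,2]] has det 1).

def arnold(positions, n):
    return [((x + y) % n, (x + 2 * y) % n) for (x, y) in positions]

def arnold_inverse(positions, n):
    return [((2 * x - y) % n, (y - x) % n) for (x, y) in positions]

n = 8
grid = [(x, y) for x in range(n) for y in range(n)]
scrambled = arnold(grid, n)

print(len(set(scrambled)) == n * n)          # True: a permutation
print(arnold_inverse(scrambled, n) == grid)  # True: exact recovery
```

Because the map is area-preserving and invertible, repeated application scrambles positions thoroughly while decryption remains lossless.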

  10. Reconfigurable pipelined sensing for image-based control

    NARCIS (Netherlands)

    Medina, R.; Stuijk, S.; Goswami, D.; Basten, T.

    2016-01-01

    Image-based control systems are becoming common in domains such as robotics, healthcare and industrial automation. Coping with the long sample period caused by the latency of the image processing algorithm is an open challenge. Modern multi-core platforms make it possible to address this challenge by pipelining

  11. Image dissimilarity-based quantification of lung disease from CT

    DEFF Research Database (Denmark)

    Sørensen, Lauge; Loog, Marco; Lo, Pechin Chien Pau

    2010-01-01

    In this paper, we propose to classify medical images using dissimilarities computed between collections of regions of interest. The images are mapped into a dissimilarity space using an image dissimilarity measure, and a standard vector space-based classifier is applied in this space.
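The dissimilarity-space idea above can be sketched directly: each image becomes the vector of its dissimilarities to a set of prototype images, and an ordinary vector-space classifier runs on those vectors. The 4-bin toy histograms, the L1 measure, and the 1-nearest-neighbour classifier below are illustrative assumptions; the paper uses region-of-interest collections and its own dissimilarity measure.

```python
# Dissimilarity-space classification sketch.

def dissimilarity(a, b):
    """L1 distance between two (already normalized) intensity histograms."""
    return sum(abs(x - y) for x, y in zip(a, b))

def to_dissim_space(image, prototypes):
    return [dissimilarity(image, p) for p in prototypes]

def nearest_label(vec, train_vecs, labels):
    best = min(range(len(train_vecs)),
               key=lambda i: sum((x - y) ** 2
                                 for x, y in zip(vec, train_vecs[i])))
    return labels[best]

# Two toy classes of "images" (4-bin histograms): low- vs high-intensity.
train = [[0.9, 0.1, 0.0, 0.0], [0.8, 0.2, 0.0, 0.0],
         [0.0, 0.0, 0.2, 0.8], [0.0, 0.1, 0.2, 0.7]]
labels = ["emphysema-like", "emphysema-like", "healthy-like", "healthy-like"]
prototypes = train[:3]                       # a few training images as prototypes

train_vecs = [to_dissim_space(t, prototypes) for t in train]
query = to_dissim_space([0.85, 0.15, 0.0, 0.0], prototypes)
print(nearest_label(query, train_vecs, labels))  # emphysema-like
```

The appeal of the construction is that any dissimilarity measure, even a non-metric one, yields an ordinary feature vector that standard classifiers can consume.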

  12. An Image Encryption Method Based on Bit Plane Hiding Technology

    Institute of Scientific and Technical Information of China (English)

    LIU Bin; LI Zhitang; TU Hao

    2006-01-01

    A novel image hiding method based on the correlation analysis of bit planes is described in this paper. First, based on the correlation analysis, different bit planes of a secret image are hidden in different bit planes of several different open images. Then a new hiding image is acquired by a nested "exclusive-OR" operation on the images obtained in the first step. Finally, by employing an image fusion technique, the final hiding result is achieved. The experimental results show that the proposed method is effective.
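The plane-embedding step above can be sketched as storing one bit plane of the secret image in a bit plane of a cover image. Which planes are paired is decided by the paper's correlation analysis; the MSB-into-LSB pairing below is a simplifying assumption, and the nested XOR and fusion stages are omitted.

```python
# Bit-plane hiding sketch: one secret bit per cover pixel, in the LSB.

def embed(cover, secret_plane):
    """Store one secret bit in the least significant bit of each cover pixel."""
    return [(c & ~1) | b for c, b in zip(cover, secret_plane)]

def extract(stego):
    return [p & 1 for p in stego]

cover = [120, 33, 250, 7, 64, 199]
secret = [1, 0, 1, 1, 0, 1]          # one bit plane of the secret image
stego = embed(cover, secret)

print(extract(stego) == secret)                            # True
print(all(abs(s - c) <= 1 for s, c in zip(stego, cover)))  # True
```

The second check shows why LSB planes are attractive carriers: each cover pixel changes by at most one intensity level, so the stego image is visually indistinguishable from the cover.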

  13. A kernel-based multi-feature image representation for histopathology image classification

    International Nuclear Information System (INIS)

    Moreno J; Caicedo J; Gonzalez F

    2010-01-01

    This paper presents a novel strategy for building a high-dimensional feature space to represent histopathology image contents. Histogram features, related to colors, textures and edges, are combined together in a unique image representation space using kernel functions. This feature space is further enhanced by the application of latent semantic analysis, to model hidden relationships among visual patterns. All that information is included in the new image representation space. Then, support vector machine classifiers are used to assign semantic labels to images. Processing and classification algorithms operate on top of kernel functions, so that the structure of the feature space is completely controlled using similarity measures and a dual representation. The proposed approach has shown a successful performance in a classification task using a dataset with 1,502 real histopathology images in 18 different classes. The results show that our approach for histological image classification obtains an improved average performance of 20.6% when compared to a conventional classification approach based on SVM directly applied to the original kernel.
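Combining per-feature kernels into one kernel, as described above, can be sketched in a few lines. The histogram-intersection kernel and the feature names and weights below are illustrative assumptions, not the paper's configuration.

```python
# Multi-feature kernel-combination sketch: per-feature kernels are combined
# linearly; any kernel classifier can then run on the combined Gram values.

def hist_intersection(a, b):
    return sum(min(x, y) for x, y in zip(a, b))

def combined_kernel(feats_a, feats_b, weights):
    """feats_* : dict feature-name -> histogram; linear kernel combination."""
    return sum(w * hist_intersection(feats_a[name], feats_b[name])
               for name, w in weights.items())

img1 = {"color": [0.5, 0.5, 0.0], "edge": [0.2, 0.8]}
img2 = {"color": [0.4, 0.3, 0.3], "edge": [0.6, 0.4]}
weights = {"color": 0.7, "edge": 0.3}

k12 = combined_kernel(img1, img2, weights)
k21 = combined_kernel(img2, img1, weights)
print(k12 == k21)  # True: the combined kernel stays symmetric
```

A non-negative weighted sum of valid kernels is itself a valid kernel, which is what lets an SVM operate on the combined similarity without ever constructing the high-dimensional feature space explicitly.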

  14. A KERNEL-BASED MULTI-FEATURE IMAGE REPRESENTATION FOR HISTOPATHOLOGY IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    J Carlos Moreno

    2010-09-01

    This paper presents a novel strategy for building a high-dimensional feature space to represent histopathology image contents. Histogram features, related to colors, textures and edges, are combined together in a unique image representation space using kernel functions. This feature space is further enhanced by the application of Latent Semantic Analysis, to model hidden relationships among visual patterns. All that information is included in the new image representation space. Then, Support Vector Machine classifiers are used to assign semantic labels to images. Processing and classification algorithms operate on top of kernel functions, so that the structure of the feature space is completely controlled using similarity measures and a dual representation. The proposed approach has shown a successful performance in a classification task using a dataset with 1,502 real histopathology images in 18 different classes. The results show that our approach for histological image classification obtains an improved average performance of 20.6% when compared to a conventional classification approach based on SVM directly applied to the original kernel.

  15. New clathrin-based nanoplatforms for magnetic resonance imaging.

    Directory of Open Access Journals (Sweden)

    Gordana D Vitaliano

    Magnetic resonance imaging (MRI) has high spatial resolution but low sensitivity for visualization of molecular targets in the central nervous system (CNS). Our goal was to develop a new MRI method with the potential for non-invasive molecular brain imaging. We herein introduce new bio-nanotechnology approaches for designing CNS contrast media based on the ubiquitous clathrin cell protein. The first approach utilizes three-legged clathrin triskelia modified to carry 81 gadolinium chelates. The second approach uses clathrin cages self-assembled from triskelia and designed to carry 432 gadolinium chelates. Clathrin triskelia and cages were characterized by size, structure, protein concentration, and chelate and gadolinium contents. Relaxivity was evaluated at 0.47 T. A series of studies was conducted to ascertain whether fluorescent-tagged clathrin nanoplatforms could cross the blood-brain barrier (BBB) unaided following intranasal, intravenous, and intraperitoneal routes of administration. Clathrin nanoparticles can be constituted as triskelia (18.5 nm in size) and as cages assembled from them (55 nm). The mean chelate:clathrin heavy chain molar ratio was 27.04±4.8:1 for triskelia and 4.2±1.04:1 for cages. Triskelia had an ionic relaxivity of 16 mM^-1 s^-1 and a molecular relaxivity of 1,166 mM^-1 s^-1, while cages had an ionic relaxivity of 81 mM^-1 s^-1 and a molecular relaxivity of 31,512 mM^-1 s^-1. Thus, cages exhibited 20 times higher ionic relaxivity and 8,000-fold greater molecular relaxivity than gadopentetate dimeglumine. Clathrin nanoplatforms modified with fluorescent tags were able to cross or bypass the BBB without enhancements following intravenous, intraperitoneal and intranasal administration in rats. Use of clathrin triskelia and cages as carriers of CNS contrast media represents a new approach. This new biocompatible protein-based nanotechnology demonstrated suitable physicochemical properties to warrant further in vivo imaging and

  16. NONINVASIVE OPTICAL IMAGING OF STAPHYLOCOCCUS AUREUS INFECTION IN VIVO USING AN ANTIMICROBIAL PEPTIDE FRAGMENT BASED NEAR-INFRARED FLUORESCENT PROBES

    Directory of Open Access Journals (Sweden)

    CUICUI LIU

    2013-07-01

    The diagnosis of bacterial infections remains a major challenge in medicine. Optical imaging of bacterial infection in living animals is usually conducted with genetic reporters such as light-emitting enzymes or fluorescent proteins. However, there are many circumstances where genetic reporters are not applicable, and there is an urgent need for exogenous synthetic probes that can selectively target bacteria. Optical imaging of bacteria in vivo is much less developed than methods such as radioimaging and MRI. Furthermore, near-infrared (NIR) dyes with emission wavelengths in the region of 650–900 nm can propagate through two or more centimeters of tissue and may enable deeper tissue imaging if sensitive detection techniques are employed. Here we constructed a near-infrared fluorescent imaging probe based on the antimicrobial peptide fragment UBI29-41. The probe is composed of UBI29-41 conjugated to the near-infrared dye ICG-Der-02. UBI29-41 is a cationic antimicrobial peptide that targets the anionic surfaces of bacterial cells. The probe allows detection of Staphylococcus aureus infection (5 × 10^7 cells) in a mouse local infection model using whole-animal near-infrared fluorescence imaging. Furthermore, we demonstrate that the UBI29-41-based imaging probe can selectively accumulate within bacteria. The significantly higher accumulation at sites of bacterial infection suggests that the UBI29-41-based imaging probe may be a promising imaging agent to detect bacterial infections.

  17. Comic image understanding based on polygon detection

    Science.gov (United States)

    Li, Luyuan; Wang, Yongtao; Tang, Zhi; Liu, Dong

    2013-01-01

    Comic image understanding aims to automatically decompose scanned comic page images into storyboards and then identify their reading order, which is the key technique to produce digital comic documents that are suitable for reading on mobile devices. In this paper, we propose a novel comic image understanding method based on polygon detection. First, we segment a comic page image into storyboards by finding the polygonal enclosing box of each storyboard. Then, each storyboard can be represented by a polygon, and the reading order is determined by analyzing the relative geometric relationship between each pair of polygons. The proposed method is tested on 2,000 comic images from ten printed comic series, and the experimental results demonstrate that it works well on different types of comic images.

  18. Electronically conductive perovskite-based oxide nanoparticles and films for optical sensing applications

    Science.gov (United States)

    Ohodnicki, Jr., Paul R; Schultz, Andrew M

    2015-04-28

    The disclosure relates to a method of detecting a change in a chemical composition by contacting an electronically conducting perovskite-based metal oxide material with a monitored stream, illuminating the electronically conducting perovskite-based metal oxide with incident light, collecting exiting light, monitoring an optical signal based on a comparison of the incident light and the exiting light, and detecting a shift in the optical signal. The electronically conducting perovskite-based metal oxide has a perovskite-based crystal structure and an electronic conductivity of at least 10^-1 S/cm, where parameters are specified at the gas stream temperature. The electronically conducting perovskite-based metal oxide has an empirical formula AxByO3-δ, where A is at least a first element at the A-site, B is at least a second element at the B-site, and where 0.8perovskite-based oxides include but are not limited to La1-xSrxCoO3, La1-xSrxMnO3, LaCrO3, LaNiO3, La1-xSrxMn1-yCryO3, SrFeO3, SrVO3, La-doped SrTiO3, Nb-doped SrTiO3, and SrTiO3-δ.

  19. Gold nanorod-incorporated gelatin-based conductive hydrogels for engineering cardiac tissue constructs.

    Science.gov (United States)

    Navaei, Ali; Saini, Harpinder; Christenson, Wayne; Sullivan, Ryan Tanner; Ros, Robert; Nikkhah, Mehdi

    2016-09-01

    The development of advanced biomaterials is a crucial step to enhance the efficacy of tissue engineering strategies for treatment of myocardial infarction. Specific characteristics of biomaterials, including electrical conductivity, mechanical robustness and structural integrity, need to be further enhanced to promote the functionalities of cardiac cells. In this work, we fabricated UV-crosslinkable gold nanorod (GNR)-incorporated gelatin methacrylate (GelMA) hybrid hydrogels with enhanced material and biological properties for cardiac tissue engineering. Embedded GNRs promoted the electrical conductivity and mechanical stiffness of the hydrogel matrix. Cardiomyocytes seeded on GelMA-GNR hybrid hydrogels exhibited excellent cell retention, viability, and metabolic activity. The increased cell adhesion resulted in an abundance of locally organized F-actin fibers, leading to the formation of an integrated tissue layer on the GNR-embedded hydrogels. Immunostained images of integrin β-1 confirmed improved cell-matrix interaction on the hybrid hydrogels. Notably, a homogeneous distribution of cardiac-specific markers (sarcomeric α-actinin and connexin 43) was observed on GelMA-GNR hydrogels as a function of GNR concentration. Furthermore, the GelMA-GNR hybrids supported synchronous tissue-level beating of cardiomyocytes. Similar observations were also made in a calcium transient assay, which demonstrated the rhythmic contraction of the cardiomyocytes on GelMA-GNR hydrogels as compared to pure GelMA. Thus, the findings of this study clearly demonstrate that functional cardiac patches with superior electrical and mechanical properties can be developed using nanoengineered GelMA-GNR hybrid hydrogels. In this work, we developed gold nanorod (GNR)-incorporated gelatin-based hydrogels with suitable electrical conductivity and mechanical stiffness for engineering functional cardiac tissue constructs (e.g. cardiac patches). The synthesized conductive hybrid hydrogels properly

  20. Histological image classification using biologically interpretable shape-based features

    International Nuclear Information System (INIS)

    Kothari, Sonal; Phan, John H; Young, Andrew N; Wang, May D

    2013-01-01

    Automatic cancer diagnostic systems based on histological image classification are important for improving therapeutic decisions. Previous studies propose textural and morphological features for such systems. These features capture patterns in histological images that are useful for both cancer grading and subtyping. However, because many of these features lack a clear biological interpretation, pathologists may be reluctant to adopt these features for clinical diagnosis. We examine the utility of biologically interpretable shape-based features for classification of histological renal tumor images. Using Fourier shape descriptors, we extract shape-based features that capture the distribution of stain-enhanced cellular and tissue structures in each image and evaluate these features using a multi-class prediction model. We compare the predictive performance of the shape-based diagnostic model to that of traditional models, i.e., using textural, morphological and topological features. The shape-based model, with an average accuracy of 77%, outperforms or complements traditional models. We identify the most informative shapes for each renal tumor subtype from the top-selected features. Results suggest that these shapes are not only accurate diagnostic features, but also correlate with known biological characteristics of renal tumors. Shape-based analysis of histological renal tumor images accurately classifies disease subtypes and reveals biologically insightful discriminatory features. This method for shape-based analysis can be extended to other histological datasets to aid pathologists in diagnostic and therapeutic decisions
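The Fourier shape descriptors mentioned above can be sketched directly: a closed boundary is treated as a complex sequence z = x + iy, and the magnitudes of its DFT coefficients (ignoring the DC term and normalizing by the first harmonic) give a translation-, rotation- and scale-insensitive shape signature. The pure-Python DFT and the toy square boundary are illustrative only.

```python
# Fourier shape-descriptor sketch on a closed boundary.
import cmath

def dft(z):
    n = len(z)
    return [sum(z[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n))
            for j in range(n)]

def shape_descriptor(boundary):
    z = [complex(x, y) for (x, y) in boundary]
    coeffs = dft(z)
    scale = abs(coeffs[1]) or 1.0
    return [abs(c) / scale for c in coeffs[2:]]   # drop DC and normalizer

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
shifted = [(x + 5, y - 3) for (x, y) in square]   # translated copy

d1 = shape_descriptor(square)
d2 = shape_descriptor(shifted)
print(all(abs(a - b) < 1e-9 for a, b in zip(d1, d2)))  # True
```

Translation affects only the DC coefficient and rotation only the phases, so dropping the DC term and keeping magnitudes yields the invariances that make such descriptors usable as diagnostic features.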

  1. An image adaptive, wavelet-based watermarking of digital images

    Science.gov (United States)

    Agreste, Santa; Andaloro, Guido; Prestipino, Daniela; Puccio, Luigia

    2007-12-01

    In digital management, multimedia content and data can easily be used in an illegal way--being copied, modified and distributed again. Copyright protection, intellectual and material rights protection for authors, owners, buyers and distributors, and the authenticity of content are crucial factors in solving an urgent and real problem. In such a scenario, digital watermarking techniques are emerging as a valid solution. In this paper, we describe an algorithm--called WM2.0--for an invisible watermark: private, strong, wavelet-based and developed for digital image protection and authenticity. Using the discrete wavelet transform (DWT) is motivated by its good time-frequency features and its good match with human visual system directives. These two combined elements are important in building an invisible and robust watermark. WM2.0 works on a dual scheme: watermark embedding and watermark detection. The watermark is embedded into the high-frequency DWT components of a specific sub-image and is calculated in correlation with the image features and statistical properties. Watermark detection applies a re-synchronization between the original and watermarked image. The correlation between the watermarked DWT coefficients and the watermark signal is calculated according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the algorithm to be resistant against geometric, filtering and StirMark attacks with a low false-alarm rate.
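The embed-then-correlate scheme above can be sketched with a one-level Haar transform: a pseudo-random watermark is added to the detail (high-frequency) coefficients, and detection correlates the suspect coefficients with the watermark. All parameters below (strength, seed, one-dimensional signal) are illustrative assumptions, not WM2.0's actual values.

```python
# Wavelet-domain watermarking sketch: embed in Haar detail coefficients,
# detect by correlation.
import random

def haar(signal):
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def inverse_haar(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def embed(signal, seed=7, strength=0.8):
    rng = random.Random(seed)
    wm = [rng.choice((-1.0, 1.0)) for _ in range(len(signal) // 2)]
    approx, detail = haar(signal)
    marked = [d + strength * w for d, w in zip(detail, wm)]
    return inverse_haar(approx, marked), wm

def detect(signal, wm):
    _, detail = haar(signal)
    return sum(d * w for d, w in zip(detail, wm)) / len(wm)

img_row = [float((i * 37) % 19) for i in range(32)]  # stand-in pixel row
marked, wm = embed(img_row)
print(detect(marked, wm) > detect(img_row, wm))  # True
```

Embedding in detail coefficients keeps the visible low-frequency content untouched, which is why the watermark stays invisible while remaining detectable by correlation.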

  2. Fuzzy Logic-Based Histogram Equalization for Image Contrast Enhancement

    Directory of Open Access Journals (Sweden)

    V. Magudeeswaran

    2013-01-01

    Fuzzy logic-based histogram equalization (FHE) is proposed for image contrast enhancement. The FHE consists of two stages. First, a fuzzy histogram is computed based on fuzzy set theory to handle the inexactness of gray level values in a better way than classical crisp histograms. In the second stage, the fuzzy histogram is divided into two subhistograms based on the median value of the original image, and the two are equalized independently to preserve image brightness. The qualitative and quantitative performance of the proposed FHE algorithm is evaluated using two well-known metrics, average information content (AIC) and the natural image quality evaluator (NIQE) index, for various images. From the qualitative and quantitative measures, it is interesting to see that the proposed method provides optimum results by giving better contrast enhancement and preserving the local information of the original image. Experimental results show that the proposed method can effectively and significantly eliminate the washed-out appearance and adverse artifacts induced by several existing methods. The proposed method has been tested on several images and gives better visual quality than the conventional methods.
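The second stage described above (median-split sub-histogram equalization) can be sketched with a crisp histogram in place of the paper's fuzzy one: pixels below and above the median are equalized independently into the lower and upper halves of the gray range, which is what preserves overall brightness.

```python
# Median-split histogram equalization sketch (crisp-histogram stand-in for
# the fuzzy first stage of FHE).

def equalize_range(pixels, levels, lo, hi):
    """Histogram-equalize `pixels` (values drawn from `levels`) into [lo, hi]."""
    hist = {v: 0 for v in levels}
    for p in pixels:
        hist[p] += 1
    total, cum, mapping = len(pixels), 0, {}
    for v in levels:
        cum += hist[v]
        mapping[v] = lo + round((hi - lo) * cum / total)
    return mapping

def median_split_equalize(image, max_level=255):
    med = sorted(image)[len(image) // 2]
    low = sorted({v for v in image if v <= med})
    high = sorted({v for v in image if v > med})
    m = equalize_range([p for p in image if p <= med], low, 0, med)
    if high:
        m.update(equalize_range([p for p in image if p > med],
                                high, med + 1, max_level))
    return [m[p] for p in image]

img = [10, 10, 12, 12, 12, 40, 200, 200, 220, 240]
out = median_split_equalize(img)
print(all(0 <= p <= 255 for p in out))  # True
```

Because dark pixels can only map into [0, median] and bright pixels into [median+1, 255], the mean brightness stays near the original, unlike global equalization which can wash the image out.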

  3. LINE-BASED MULTI-IMAGE MATCHING FOR FAÇADE RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    T. A. Teo

    2012-07-01

    This research integrates existing LOD 2 building models and multiple close-range images for façade structural line extraction. The major tasks are orientation determination and multiple-image matching. In the orientation determination, Speeded Up Robust Features (SURF) is applied to extract tie points automatically. Then, tie points and control points are combined for block adjustment. An object-based multi-image matching is proposed to extract the façade structural lines. The 2D lines in image space are extracted by a Canny operator followed by a Hough transform. The role of the LOD 2 building models is to correct the tilt displacement of images from different views. The walls of the LOD 2 model are also used to generate hypothesis planes for similarity measurement. Finally, the average normalized cross-correlation is calculated to obtain the best location in object space. The test images were acquired by a non-metric camera, a Nikon D2X; the total number of images is 33. The experimental results indicate that the accuracy of the orientation determination is about 1 pixel from 2,515 tie points and 4 control points. They also indicate that line-based matching is more flexible than point-based matching.

  4. Advanced Camera Image Cropping Approach for CNN-Based End-to-End Controls on Sustainable Computing

    Directory of Open Access Journals (Sweden)

    Yunsick Sung

    2018-03-01

    Recent research on deep learning has been applied to a diversity of fields. In particular, numerous studies have been conducted on self-driving vehicles using end-to-end approaches based on images captured by a single camera. End-to-end controls learn the output vectors of output devices directly from the input vectors of available input devices. In other words, an end-to-end approach learns not by analyzing the meaning of input vectors, but by extracting optimal output vectors based on input vectors. Generally, when end-to-end control is applied to self-driving vehicles, the steering wheel and pedals are controlled autonomously by learning from the images captured by a camera. However, high-resolution images captured from a car cannot be directly used as inputs to convolutional neural networks (CNNs) owing to memory limitations; the image size needs to be reduced efficiently. Therefore, it is necessary to extract features from captured images automatically and to generate input images by merging the parts of the images that contain the extracted features. This paper proposes a learning method for end-to-end control that generates input images for CNNs by extracting road parts from input images, identifying the edges of the extracted road parts, and merging the parts of the images that contain the detected edges. In addition, a CNN model for end-to-end control is introduced. Experiments involving The Open Racing Car Simulator (TORCS), a sustainable computing environment for cars, confirmed the effectiveness of the proposed method for self-driving by comparing the accumulated difference in the steering-wheel angle in the images generated by it with those of resized images containing the entire captured area and cropped images containing only a part of the captured area. The results showed that the proposed method reduced the accumulated difference by 0.839% and 0.850% compared to those yielded by the resized images and cropped images

  5. Effective Five Directional Partial Derivatives-Based Image Smoothing and a Parallel Structure Design.

    Science.gov (United States)

    Choongsang Cho; Sangkeun Lee

    2016-04-01

    Image smoothing has been used for image segmentation, image reconstruction, object classification, and 3D content generation. Several smoothing approaches have been used at the pre-processing step to retain the critical edge, while removing noise and small details. However, they have limited performance, especially in removing small details and smoothing discrete regions. Therefore, to provide fast and accurate smoothing, we propose an effective scheme that uses a weighted combination of the gradient, Laplacian, and diagonal derivatives of a smoothed image. In addition, to reduce computational complexity, we designed and implemented a parallel processing structure for the proposed scheme on a graphics processing unit (GPU). For an objective evaluation of the smoothing performance, the images were linearly quantized into several layers to generate experimental images, and the quantized images were smoothed using several methods for reconstructing the smoothly changed shape and intensity of the original image. Experimental results showed that the proposed scheme has higher objective scores and better successful smoothing performance than similar schemes, while preserving and removing critical and trivial details, respectively. For computational complexity, the proposed smoothing scheme running on a GPU provided 18 and 16 times lower complexity than the proposed smoothing scheme running on a CPU and the L0-based smoothing scheme, respectively. In addition, a simple noise reduction test was conducted to show the characteristics of the proposed approach; it showed that the presented algorithm outperforms the state-of-the-art algorithms by more than 5.4 dB. Therefore, we believe that the proposed scheme can be a useful tool for efficient image smoothing.
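
    One iteration of smoothing by a weighted combination of axis-aligned and diagonal second derivatives, in the spirit of the five-directional scheme above, can be sketched as follows. The stencils and weights here are illustrative assumptions, not the authors' parameters.

    ```python
    import numpy as np

    def smooth_step(img, w_lap=0.15, w_diag=0.05):
        """One diffusion-like update: add a weighted mix of the
        axis-aligned Laplacian and the diagonal second derivative."""
        p = np.pad(img, 1, mode='edge')
        # 4-neighbour Laplacian (up + down + left + right - 4*center)
        lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img
        # diagonal counterpart (the four corner neighbours)
        diag = p[:-2, :-2] + p[:-2, 2:] + p[2:, :-2] + p[2:, 2:] - 4 * img
        return img + w_lap * lap + w_diag * diag

    img = np.zeros((8, 8)); img[4, 4] = 1.0   # a single bright "detail" pixel
    out = smooth_step(img)
    print(round(out[4, 4], 2))                # 0.2: the spike is strongly attenuated
    ```

    Iterating this update spreads small details into their surroundings, which is the removal behaviour the record attributes to the combined-derivative scheme.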

  6. IMAGE ANALYSIS BASED ON EDGE DETECTION TECHNIQUES

    Institute of Scientific and Technical Information of China (English)

    纳瑟; 刘重庆

    2002-01-01

    A method that incorporates an edge detection technique, Markov Random Fields (MRF), watershed segmentation and merging techniques is presented for performing image segmentation and edge detection tasks. It first applies edge detection to obtain a Difference In Strength (DIS) map. An initial segmentation is obtained using K-means clustering and the minimum-distance criterion. The region process is then modeled by an MRF to obtain an image that contains distinct intensity regions. The gradient values are calculated and the watershed technique is applied. The DIS value is computed for each pixel to identify all the edges (weak or strong) in the image, yielding the DIS map. This serves as prior knowledge about likely region boundaries for the next step (MRF), which produces an image containing both edge and region information. In the MRF model, the gray level l at pixel location i in an image X depends on the gray levels of neighboring pixels. The segmentation results are improved by the watershed algorithm. After all pixels of the segmented regions are processed, a map of primitive regions with edges is generated. The edge map is then refined by a merging process based on averaged intensity mean values. Common edge detectors applied to the MRF-segmented image are used for comparison. The segmentation and edge detection result is one closed boundary per actual region in the image.
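
    The record does not give the exact DIS formula, so as a hypothetical illustration only, per-pixel edge strength can be graded by the intensity range over a small neighbourhood; strong edges give a large range, flat regions give zero.

    ```python
    import numpy as np

    def dis_map(img):
        """Crude stand-in for a Difference-In-Strength map: for each
        pixel, the intensity range over its 3x3 neighbourhood. The
        paper's actual DIS definition may differ."""
        p = np.pad(img, 1, mode='edge')
        h, w = img.shape
        stack = np.stack([p[i:i + h, j:j + w]
                          for i in range(3) for j in range(3)])
        return stack.max(axis=0) - stack.min(axis=0)

    img = np.zeros((5, 5)); img[:, 3:] = 1.0   # vertical step edge
    d = dis_map(img)
    print(d[2, 2], d[2, 0])                    # strong response at the edge, none in the flat area
    ```

    A thresholded version of such a map would then distinguish the "weak" from the "strong" edges mentioned above.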

  7. Adaptive radiotherapy based on contrast enhanced cone beam CT imaging

    International Nuclear Information System (INIS)

    Soevik, Aaste; Skogmo, Hege K.; Roedal, Jan; Lervaag, Christoffer; Eilertsen, Karsten; Malinen, Eirik

    2010-01-01

    Cone beam CT (CBCT) imaging has become an integral part of radiation therapy, with images typically used for offline or online patient setup corrections based on bony anatomy co-registration. Ideally, the co-registration should be based on tumor localization. However, soft tissue contrast in CBCT images may be limited. In the present work, contrast enhanced CBCT (CECBCT) images were used for tumor visualization and treatment adaptation. Material and methods. A spontaneous canine maxillary tumor was subjected to repeated cone beam CT imaging during fractionated radiotherapy (10 fractions in total). At five of the treatment fractions, CECBCT images, employing an iodinated contrast agent, were acquired, as well as pre-contrast CBCT images. The tumor was clearly visible in post-contrast minus pre-contrast subtraction images, and these contrast images were used to delineate gross tumor volumes. IMRT dose plans were subsequently generated. Four different strategies were explored: 1) fully adapted planning based on each CECBCT image series, 2) planning based on images acquired at the first treatment fraction and patient repositioning following bony anatomy co-registration, 3) as for 2), but with patient repositioning based on co-registering contrast images, and 4) a strategy with no patient repositioning or treatment adaptation. Equivalent uniform dose (EUD) and tumor control probability (TCP) calculations were used to estimate treatment outcome for each strategy. Results. Similar translation vectors were found when bony anatomy and contrast enhancement co-registration were compared. Strategy 1 gave EUDs closest to the prescription dose and the highest TCP. Strategies 2 and 3 gave EUDs and TCPs close to those of strategy 1, with strategy 3 being slightly better than strategy 2. Even greater benefits from strategies 1 and 3 are expected with increasing tumor movement or deformation during treatment. The non-adaptive strategy 4 was clearly inferior to all three adaptive strategies.
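
    The EUD used to compare the strategies has a standard closed form, EUD = (sum_i v_i * D_i^a)^(1/a) over dose bins D_i with volume fractions v_i. A small sketch; the volume-effect parameter a = -10 is a typical tumour value, not a value taken from this study.

    ```python
    import numpy as np

    def eud(dose_bins, vol_fracs, a=-10.0):
        """Generalized equivalent uniform dose over a DVH:
        EUD = (sum_i v_i * D_i**a)**(1/a)."""
        dose_bins = np.asarray(dose_bins, float)
        vol_fracs = np.asarray(vol_fracs, float)
        return float(np.sum(vol_fracs * dose_bins ** a) ** (1.0 / a))

    # sanity check: a uniform dose yields an EUD equal to that dose
    print(round(eud([60.0, 60.0], [0.5, 0.5]), 3))   # 60.0
    # a cold spot pulls the tumour EUD (a < 0) down sharply
    print(round(eud([60.0, 40.0], [0.9, 0.1]), 1))
    ```

    For negative a (tumours), the EUD is dominated by the coldest part of the dose distribution, which is why the non-adaptive strategy scores poorly when the target drifts out of the planned field.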

  8. Design of CMOS imaging system based on FPGA

    Science.gov (United States)

    Hu, Bo; Chen, Xiaolai

    2017-10-01

    In order to meet the needs of engineering applications for a high dynamic range CMOS camera under the rolling shutter mode, a complete imaging system is designed based on the CMOS imaging sensor NSC1105. The paper adopts CMOS+ADC+FPGA+Camera Link as the processing architecture and introduces the design and implementation of the hardware system. The camera software system, which consists of a CMOS timing drive module, an image acquisition module and a transmission control module, is designed in Verilog and runs on a Xilinx FPGA. Signals were simulated with ISim, the ISE 14.6 simulator. The imaging experimental results show that the system exhibits a 1280×1024 pixel resolution, a frame rate of 25 fps, and a dynamic range of more than 120 dB. The imaging quality of the system satisfies the requirement of the index.

  9. An Integrated Dictionary-Learning Entropy-Based Medical Image Fusion Framework

    Directory of Open Access Journals (Sweden)

    Guanqiu Qi

    2017-10-01

    Full Text Available Image fusion is widely used in different areas and can integrate complementary and relevant information of source images captured by multiple sensors into a unitary synthetic image. Medical image fusion, as an important image fusion application, can extract the details of multiple images from different imaging modalities and combine them into an image that contains complete and non-redundant information for increasing the accuracy of medical diagnosis and assessment. The quality of the fused image directly affects medical diagnosis and assessment. However, existing solutions have some drawbacks in contrast, sharpness, brightness, blur and details. This paper proposes an integrated dictionary-learning and entropy-based medical image-fusion framework that consists of three steps. First, the input image information is decomposed into low-frequency and high-frequency components by using a Gaussian filter. Second, low-frequency components are fused by weighted average algorithm and high-frequency components are fused by the dictionary-learning based algorithm. In the dictionary-learning process of high-frequency components, an entropy-based algorithm is used for informative blocks selection. Third, the fused low-frequency and high-frequency components are combined to obtain the final fusion results. The results and analyses of comparative experiments demonstrate that the proposed medical image fusion framework has better performance than existing solutions.
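
    Step 1 of the framework above (Gaussian low/high frequency decomposition) and the weighted-average fusion of low frequencies can be sketched as follows. The max-magnitude rule used here for the high-frequency parts is only a stand-in for the paper's dictionary-learning step, which is far more involved.

    ```python
    import numpy as np

    def gaussian_kernel(sigma=1.0, radius=2):
        x = np.arange(-radius, radius + 1)
        g = np.exp(-x**2 / (2 * sigma**2))
        return g / g.sum()

    def decompose(img, sigma=1.0):
        """Split an image into low- and high-frequency components
        with a separable Gaussian filter (step 1 of the framework)."""
        g = gaussian_kernel(sigma)
        low = np.apply_along_axis(lambda r: np.convolve(r, g, 'same'), 1, img)
        low = np.apply_along_axis(lambda c: np.convolve(c, g, 'same'), 0, low)
        return low, img - low

    def fuse(img_a, img_b):
        """Toy fusion: average the low-frequency parts; keep the
        larger-magnitude high-frequency coefficient at each pixel."""
        la, ha = decompose(img_a)
        lb, hb = decompose(img_b)
        high = np.where(np.abs(ha) >= np.abs(hb), ha, hb)
        return 0.5 * (la + lb) + high

    a = np.random.rand(16, 16)
    fused = fuse(a, a)
    print(np.allclose(fused, a))   # fusing an image with itself returns it unchanged
    ```

    The entropy-based block selection of the actual method would additionally rank high-frequency blocks by information content before the dictionary-learning stage.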

  10. Wavelet-based de-noising algorithm for images acquired with parallel magnetic resonance imaging (MRI)

    International Nuclear Information System (INIS)

    Delakis, Ioannis; Hammad, Omer; Kitney, Richard I

    2007-01-01

    Wavelet-based de-noising has been shown to improve image signal-to-noise ratio in magnetic resonance imaging (MRI) while maintaining spatial resolution. Wavelet-based de-noising techniques typically implemented in MRI require that noise displays uniform spatial distribution. However, images acquired with parallel MRI have spatially varying noise levels. In this work, a new algorithm for filtering images with parallel MRI is presented. The proposed algorithm extracts the edges from the original image and then generates a noise map from the wavelet coefficients at finer scales. The noise map is zeroed at locations where edges have been detected and directional analysis is also used to calculate noise in regions of low-contrast edges that may not have been detected. The new methodology was applied on phantom and brain images and compared with other applicable de-noising techniques. The performance of the proposed algorithm was shown to be comparable with other techniques in central areas of the images, where noise levels are high. In addition, finer details and edges were maintained in peripheral areas, where noise levels are low. The proposed methodology is fully automated and can be applied on final reconstructed images without requiring sensitivity profiles or noise matrices of the receiver coils, therefore making it suitable for implementation in a clinical MRI setting.

  11. Example-Based Image Colorization Using Locality Consistent Sparse Representation.

    Science.gov (United States)

    Bo Li; Fuchen Zhao; Zhuo Su; Xiangguo Liang; Yu-Kun Lai; Rosin, Paul L

    2017-11-01

    Image colorization aims to produce a natural looking color image from a given gray-scale image, which remains a challenging problem. In this paper, we propose a novel example-based image colorization method exploiting a new locality consistent sparse representation. Given a single reference color image, our method automatically colorizes the target gray-scale image by sparse pursuit. For efficiency and robustness, our method operates at the superpixel level. We extract low-level intensity features, mid-level texture features, and high-level semantic features for each superpixel, which are then concatenated to form its descriptor. The collection of feature vectors for all the superpixels from the reference image composes the dictionary. We formulate colorization of target superpixels as a dictionary-based sparse reconstruction problem. Inspired by the observation that superpixels with similar spatial location and/or feature representation are likely to match spatially close regions from the reference image, we further introduce a locality promoting regularization term into the energy formulation, which substantially improves the matching consistency and subsequent colorization results. Target superpixels are colorized based on the chrominance information from the dominant reference superpixels. Finally, to further improve coherence while preserving sharpness, we develop a new edge-preserving filter for chrominance channels with the guidance from the target gray-scale image. To the best of our knowledge, this is the first work on sparse pursuit image colorization from single reference images. Experimental results demonstrate that our colorization method outperforms the state-of-the-art methods, both visually and quantitatively using a user study.

  12. Preliminary Development of Conductivity based Test Method for Industrial Radiography Film Developer Solution

    International Nuclear Information System (INIS)

    Zainuddin, N.S.; Manah, N.S.A.; Khairul Anuar Mohd Salleh; Noorhazleena Azaman

    2015-01-01

    The strength of industrial radiography film developer solution is one of the most important aspects in radiography film processing. The developer solution reacts with the exposed film to visualize the latent image through a chemical-film reaction. As the developer is repeatedly used, its strength decreases until a point where it cannot yield the required film optical density value. This work attempts to investigate the developer solution strength through its conductivity. Obtained data are cross-correlated with the required industrial radiography optical density range. In the experiment, the conductivity of the developer solution decreased as the number of films processed increased, until the desired optical density of the film could no longer be achieved. The conductivity of the developer was measured and recorded at intervals of six films developed, and the optical density of every film was recorded to analyze its change as the conductivity decreased. The procedure suggests that as the conductivity decreases, the optical density of the film decreases. Ultimately, the strength level of the developer solution can be determined. (author)

  13. Improved thermal conductivity of Ag decorated carbon nanotubes water based nanofluids

    Energy Technology Data Exchange (ETDEWEB)

    Farbod, Mansoor, E-mail: farbod_m@scu.ac.ir; Ahangarpour, Ameneh

    2016-12-16

    The effect of Ag decoration of carbon nanotubes on thermal conductivity enhancement of Ag decorated MWCNTs water based nanofluids has been investigated. The pristine and functionalized MWCNTs were decorated with Ag nanoparticles by mass ratios of 1%, 2% and 4% and used to prepare water based nanofluids with 0.1 vol.%. An enhancement of 1–20.4 percent in thermal conductivity was observed. It was found that the decoration of functionalized MWCNTs can increase the thermal conductivity about 0.16–8.02 percent compared to the undecorated ones. The maximum enhancement of 20.4% was measured for the sample containing 4 wt.% Ag at 40 °C. - Highlights: • MWCNTs were decorated with Ag nanoparticles by the mass ratios of 1, 2 and 4%. • The decorated CNTs were used to prepare water based nanofluids with 0.1 Vol.%. • 1–20.4% increase was observed in thermal conductivity (TC) compared to pure water. • Ag decorated CNTs increased TC of nanofluid up to 8% compared to CNTs nanofluid.

  14. Radionuclide-Based Cancer Imaging Targeting the Carcinoembryonic Antigen

    Directory of Open Access Journals (Sweden)

    Hao Hong

    2008-01-01

    Full Text Available Carcinoembryonic antigen (CEA, highly expressed in many cancer types, is an important target for cancer diagnosis and therapy. Radionuclide-based imaging techniques (gamma camera, single photon emission computed tomography [SPECT] and positron emission tomography [PET]) have been extensively explored for CEA-targeted cancer imaging both preclinically and clinically. Briefly, these studies can be divided into three major categories: antibody-based, antibody fragment-based and pretargeted imaging. Radiolabeled anti-CEA antibodies, reported the earliest among the three categories, typically gave suboptimal tumor contrast due to the prolonged circulation life time of intact antibodies. Subsequently, a number of engineered anti-CEA antibody fragments (e.g. Fab’, scFv, minibody, diabody and scFv-Fc) have been labeled with a variety of radioisotopes for CEA imaging, many of which have entered clinical investigation. CEA-Scan (a 99mTc-labeled anti-CEA Fab’ fragment) has already been approved by the United States Food and Drug Administration for cancer imaging. Meanwhile, pretargeting strategies have also been developed for CEA imaging which can give much better tumor contrast than the other two methods, if the system is designed properly. In this review article, we will summarize the current state-of-the-art of radionuclide-based cancer imaging targeting CEA. Generally, isotopes with short half-lives (e.g. 18F and 99mTc) are more suitable for labeling small engineered antibody fragments, while isotopes with longer half-lives (e.g. 123I and 111In) are needed for antibody labeling to match its relatively long circulation half-life. With further improvement in tumor targeting efficacy and radiolabeling strategies, novel CEA-targeted agents may play an important role in cancer patient management, paving the way to “personalized medicine”.

  15. Evaluation of the Anisotropic Radiative Conductivity of a Low-Density Carbon Fiber Material from Realistic Microscale Imaging

    Science.gov (United States)

    Nouri, Nima; Panerai, Francesco; Tagavi, Kaveh A.; Mansour, Nagi N.; Martin, Alexandre

    2015-01-01

    The radiative heat transfer inside a low-density carbon fiber insulator is analyzed using a three-dimensional direct simulation model. A robust procedure is presented for the numerical calculation of the geometric configuration factor to compute the radiative energy exchange processes among the small discretized surface areas of the fibrous material. The methodology is applied to a polygonal mesh of a fibrous insulator obtained from three-dimensional microscale imaging of the real material. The anisotropic values of the radiative conductivity are calculated for that geometry. The results yield both directional and thermal dependence of the radiative conductivity.
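
    The geometric configuration factor computed between the small discretized surface areas has a standard differential form, dF = cos(theta1) * cos(theta2) * dA2 / (pi * r^2). A minimal sketch, assuming unit surface normals; patch positions and areas are illustrative.

    ```python
    import numpy as np

    def patch_config_factor(p1, n1, p2, n2, dA2):
        """Differential configuration factor from patch 1 to patch 2:
        dF = cos(theta1) * cos(theta2) * dA2 / (pi * r**2),
        with n1, n2 unit normals of the two patches."""
        r = np.asarray(p2, float) - np.asarray(p1, float)
        d = np.linalg.norm(r)
        cos1 = np.dot(n1, r) / d
        cos2 = np.dot(n2, -r) / d
        if cos1 <= 0 or cos2 <= 0:      # patches facing away: no radiative exchange
            return 0.0
        return cos1 * cos2 * dA2 / (np.pi * d * d)

    # two small parallel patches, unit distance apart, facing each other
    f = patch_config_factor([0, 0, 0], [0, 0, 1], [0, 0, 1], [0, 0, -1], dA2=0.01)
    print(round(f, 6))   # 0.003183, i.e. 0.01 / pi
    ```

    Summing such factors over every visible facet pair of the fiber mesh (with occlusion tests) yields the exchange matrix from which the directional radiative conductivity is derived.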

  16. Image Transform Based on the Distribution of Representative Colors for Color Deficient

    Science.gov (United States)

    Ohata, Fukashi; Kudo, Hiroaki; Matsumoto, Tetsuya; Takeuchi, Yoshinori; Ohnishi, Noboru

    This paper proposes the method to convert digital image containing distinguishing difficulty sets of colors into the image with high visibility. We set up four criteria, automatically processing by a computer, retaining continuity in color space, not making images into lower visible for people with normal color vision, and not making images not originally having distinguishing difficulty sets of colors into lower visible. We conducted the psychological experiment. We obtained the result that the visibility of a converted image had been improved at 60% for 40 images, and we confirmed the main criterion of the continuity in color space was kept.

  17. Water Extraction in High Resolution Remote Sensing Image Based on Hierarchical Spectrum and Shape Features

    International Nuclear Information System (INIS)

    Li, Bangyu; Zhang, Hui; Xu, Fanjiang

    2014-01-01

    This paper addresses the problem of water extraction from high resolution remote sensing images (including R, G, B, and NIR channels), which has drawn considerable attention in recent years. Previous work on water extraction mainly faced two difficulties. 1) It is difficult to obtain an accurate position of the water boundary when using low resolution images. 2) Like all other image based object classification problems, the phenomena of ''different objects same image'' or ''different images same object'' affect water extraction. Shadow of elevated objects (e.g. buildings, bridges, towers and trees) scattered in the remote sensing image is a typical noise object for water extraction. In many cases, it is difficult to discriminate between water and shadow in a remote sensing image, especially in the urban region. We propose a water extraction method with two hierarchies: the statistical feature of spectral characteristics based on image segmentation, and the shape feature based on shadow removal. In the first hierarchy, the Statistical Region Merging (SRM) algorithm is adopted for image segmentation. The SRM includes two key steps: sorting adjacent regions according to a pre-ascertained sort function, and merging adjacent regions based on a pre-ascertained merging predicate. The sorting step is done only once during the whole processing, without considering changes caused by merging, which may produce imprecise results. Therefore, we modify the SRM with dynamic sort processing, which repeats the sorting step when merging causes large changes in adjacent regions. To achieve robust segmentation, we apply the merging with six features (the four remote sensing image bands, the Normalized Difference Water Index (NDWI), and the Normalized Saturation-value Difference Index (NSVDI)). All these features contribute to segmenting the image into object regions. NDWI and NSVDI discriminate between water and
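
    One of the six segmentation features, NDWI, is a standard band ratio, (G - NIR) / (G + NIR); water reflects green light but absorbs near-infrared, so NDWI is positive over water and negative over vegetation or shadow-free land. A minimal sketch with illustrative reflectance values:

    ```python
    import numpy as np

    def ndwi(green, nir):
        """Normalized Difference Water Index:
        NDWI = (G - NIR) / (G + NIR); the tiny epsilon avoids 0/0."""
        green = np.asarray(green, float)
        nir = np.asarray(nir, float)
        return (green - nir) / (green + nir + 1e-12)

    # water-like pixel (high green, low NIR) vs vegetation-like pixel
    print(round(float(ndwi(0.4, 0.1)), 2), round(float(ndwi(0.2, 0.5)), 2))
    ```

    Applied per pixel over the G and NIR bands, this yields a feature plane that the region-merging step can use alongside the raw bands and NSVDI.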

  18. Physics-Based Image Segmentation Using First Order Statistical Properties and Genetic Algorithm for Inductive Thermography Imaging.

    Science.gov (United States)

    Gao, Bin; Li, Xiaoqing; Woo, Wai Lok; Tian, Gui Yun

    2018-05-01

    Thermographic inspection has been widely applied to non-destructive testing and evaluation with the capabilities of rapid, contactless, and large surface area detection. Image segmentation is considered essential for identifying and sizing defects. To attain a high-level performance, specific physics-based models that describe defects generation and enable the precise extraction of target region are of crucial importance. In this paper, an effective genetic first-order statistical image segmentation algorithm is proposed for quantitative crack detection. The proposed method automatically extracts valuable spatial-temporal patterns from unsupervised feature extraction algorithm and avoids a range of issues associated with human intervention in laborious manual selection of specific thermal video frames for processing. An internal genetic functionality is built into the proposed algorithm to automatically control the segmentation threshold to render enhanced accuracy in sizing the cracks. Eddy current pulsed thermography will be implemented as a platform to demonstrate surface crack detection. Experimental tests and comparisons have been conducted to verify the efficacy of the proposed method. In addition, a global quantitative assessment index F-score has been adopted to objectively evaluate the performance of different segmentation algorithms.
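
    The F-score index used above for objective assessment is computed from the pixel-level confusion counts of a segmentation against ground truth. A minimal sketch; the counts are illustrative.

    ```python
    def f_score(tp, fp, fn, beta=1.0):
        """F-measure from true positives, false positives, false negatives:
        F = (1 + beta^2) * P * R / (beta^2 * P + R)."""
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        b2 = beta * beta
        return (1 + b2) * precision * recall / (b2 * precision + recall)

    # e.g. 80 crack pixels found, 20 spurious, 20 missed: P = R = 0.8
    print(round(f_score(tp=80, fp=20, fn=20), 3))   # 0.8
    ```

    With beta = 1 this is the harmonic mean of precision and recall, so a segmentation is penalized equally for over- and under-sizing the crack.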

  19. New calibration technique for KCD-based megavoltage imaging

    Science.gov (United States)

    Samant, Sanjiv S.; Zheng, Wei; DiBianca, Frank A.; Zeman, Herbert D.; Laughter, Joseph S.

    1999-05-01

    In megavoltage imaging, current commercial electronic portal imaging devices (EPIDs), despite having the advantage of immediate digital imaging over film, suffer from poor image contrast and spatial resolution. The feasibility of using a kinestatic charge detector (KCD) as an EPID to provide superior image contrast and spatial resolution for portal imaging has already been demonstrated in a previous paper. The KCD system had the additional advantage of requiring an extremely low dose per acquired image, allowing for superior imaging to be reconstructed from a single linac pulse per image pixel. The KCD based images utilized a dose two orders of magnitude less than that for EPIDs and film. Compared with the current commercial EPIDs and film, the prototype KCD system exhibited promising image qualities, despite being handicapped by the use of a relatively simple image calibration technique, and by the performance limits of medical linacs on the maximum linac pulse frequency and energy flux per pulse delivered. This image calibration technique fixed relative image pixel values based on a linear interpolation of extrema provided by an air-water calibration, and accounted only for channel-to-channel variations. The counterpart of this for area detectors is the standard flat fielding method. A comprehensive calibration protocol has now been developed. The new technique additionally corrects for geometric distortions due to variations in the scan velocity, and timing artifacts caused by mis-synchronization between the linear accelerator and the data acquisition system (DAS). The role of variations in energy flux (2 - 3%) on imaging is demonstrated to be not significant for the images considered. The methodology is presented, and the results are discussed for simulated images. It also allows for significant improvements in the signal-to-noise ratio (SNR) by increasing the dose using multiple images without having to increase the linac pulse frequency or energy flux per pulse.
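
    The simple air-water calibration described above amounts to a per-channel linear interpolation between two reference readings, mapping each channel's water reading to 0 and its air reading to 1. A hedged sketch with illustrative numbers:

    ```python
    import numpy as np

    def air_water_calibrate(raw, air, water):
        """Per-channel linear calibration between two extrema:
        a channel's 'water' reading maps to 0.0 and its 'air'
        reading to 1.0, removing channel-to-channel variations."""
        raw, air, water = (np.asarray(v, float) for v in (raw, air, water))
        return (raw - water) / (air - water)

    # two detector channels with different gains and offsets
    raw = np.array([0.55, 0.62])
    air = np.array([1.1, 1.2])
    water = np.array([0.1, 0.2])
    print(air_water_calibrate(raw, air, water))   # [0.45 0.42]
    ```

    The comprehensive protocol of this record goes beyond this: it also corrects scan-velocity distortions and linac/DAS timing artifacts, which a purely per-channel mapping like this cannot capture.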

  20. Pathfinder: multiresolution region-based searching of pathology images using IRM.

    OpenAIRE

    Wang, J. Z.

    2000-01-01

    The fast growth of digitized pathology slides has created great challenges in research on image database retrieval. The prevalent retrieval technique involves human-supplied text annotations to describe slide contents. These pathology images typically have very high resolution, making it difficult to search based on image content. In this paper, we present Pathfinder, an efficient multiresolution region-based searching system for high-resolution pathology image libraries. The system uses wave...

  1. Integration of piezo-capacitive and piezo-electric nanoweb based pressure sensors for imaging of static and dynamic pressure distribution.

    Science.gov (United States)

    Jeong, Y J; Oh, T I; Woo, E J; Kim, K J

    2017-07-01

    Recently, highly flexible and soft pressure distribution imaging sensors have been in great demand for tactile sensing, gait analysis, ubiquitous life-care based on activity recognition, and therapeutics. In this study, we integrate the piezo-capacitive and piezo-electric nanowebs with the conductive fabric sheets for detecting static and dynamic pressure distributions on a large sensing area. Electrical impedance tomography (EIT) and electric source imaging are applied for reconstructing pressure distribution images from measured current-voltage data on the boundary of the hybrid fabric sensor. We evaluated the piezo-capacitive nanoweb sensor, the piezo-electric nanoweb sensor, and the hybrid fabric sensor. The results show the feasibility of static and dynamic pressure distribution imaging from the boundary measurements of the fabric sensors.

  2. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    Science.gov (United States)

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant property, the whole 3D modeling pipeline can be performed fully automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Imaged objects included a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum.

  3. Task-Driven Dictionary Learning Based on Mutual Information for Medical Image Classification.

    Science.gov (United States)

    Diamant, Idit; Klang, Eyal; Amitai, Michal; Konen, Eli; Goldberger, Jacob; Greenspan, Hayit

    2017-06-01

    We present a novel variant of the bag-of-visual-words (BoVW) method for automated medical image classification. Our approach improves the BoVW model by learning a task-driven dictionary of the most relevant visual words per task using a mutual information-based criterion. Additionally, we generate relevance maps to visualize and localize the decision of the automatic classification algorithm. These maps demonstrate how the algorithm works and show the spatial layout of the most relevant words. We applied our algorithm to three different tasks: chest x-ray pathology identification (of four pathologies: cardiomegaly, enlarged mediastinum, right consolidation, and left consolidation), liver lesion classification into four categories in computed tomography (CT) images, and benign/malignant classification of clusters of microcalcifications (MCs) in breast mammograms. Validation was conducted on three datasets: 443 chest x-rays, 118 portal phase CT images of liver lesions, and 260 mammography MCs. The proposed method improves the classical BoVW method for all tested applications. For chest x-ray, an area under the curve of 0.876 was obtained for enlarged mediastinum identification, compared to 0.855 using the classical BoVW (with p-value 0.01). For MC classification, a significant improvement of 4% was achieved using our new approach (with p-value = 0.03). For liver lesion classification, improvements of 6% in sensitivity and 2% in specificity were obtained (with p-value 0.001). We demonstrated that classification based on an informative selected set of words results in significant improvement. Our new BoVW approach shows promising results in clinically important domains. Additionally, it can discover relevant parts of images for the task at hand without explicit annotations for training data. This can provide computer-aided support for medical experts in challenging image analysis tasks.
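
    The mutual-information criterion for ranking visual words can be sketched on a binary word-presence variable against the class label. This toy 2x2-contingency version is an illustration of the idea, not the authors' exact estimator.

    ```python
    import numpy as np

    def mutual_information(word_counts, labels):
        """Mutual information (in bits) between 'word present in image'
        and the image's class label, estimated from co-occurrence."""
        x = np.asarray(word_counts) > 0          # word present?
        y = np.asarray(labels).astype(bool)      # class label
        mi = 0.0
        for xv in (False, True):
            for yv in (False, True):
                pxy = np.mean((x == xv) & (y == yv))
                px, py = np.mean(x == xv), np.mean(y == yv)
                if pxy > 0:
                    mi += pxy * np.log2(pxy / (px * py))
        return mi

    # a word that appears in exactly the class-1 images carries 1 bit
    print(round(mutual_information([0, 0, 3, 5], [0, 0, 1, 1]), 3))   # 1.0
    # a word present in every image carries no information
    print(round(mutual_information([1, 2, 3, 5], [0, 0, 1, 1]), 3))   # 0.0
    ```

    Keeping the top-ranked words under such a score yields the task-driven dictionary; the discarded low-MI words are exactly those whose presence says nothing about the label.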

  4. Characterization of lens based photoacoustic imaging system

    Directory of Open Access Journals (Sweden)

    Kalloor Joseph Francis

    2017-12-01

    Full Text Available Some of the challenges in translating photoacoustic (PA) imaging to clinical applications include limited view of the target tissue, low signal to noise ratio and the high cost of developing real-time systems. Acoustic lens based PA imaging systems, also known as PA cameras, are a potential alternative to conventional imaging systems in these scenarios. The 3D focusing action of the lens enables real-time C-scan imaging with a 2D transducer array. In this paper, we model the underlying physics in a PA camera in the mathematical framework of an imaging system and derive a closed form expression for the point spread function (PSF). Experimental verification follows, including the details on how to design and fabricate the lens inexpensively. The system PSF is evaluated over a 3D volume that can be imaged by this PA camera. Its utility is demonstrated by imaging phantom and an ex vivo human prostate tissue sample.

  5. Characterization of lens based photoacoustic imaging system.

    Science.gov (United States)

    Francis, Kalloor Joseph; Chinni, Bhargava; Channappayya, Sumohana S; Pachamuthu, Rajalakshmi; Dogra, Vikram S; Rao, Navalgund

    2017-12-01

    Some of the challenges in translating photoacoustic (PA) imaging to clinical applications include limited view of the target tissue, low signal to noise ratio and the high cost of developing real-time systems. Acoustic lens based PA imaging systems, also known as PA cameras, are a potential alternative to conventional imaging systems in these scenarios. The 3D focusing action of the lens enables real-time C-scan imaging with a 2D transducer array. In this paper, we model the underlying physics in a PA camera in the mathematical framework of an imaging system and derive a closed form expression for the point spread function (PSF). Experimental verification follows, including the details on how to design and fabricate the lens inexpensively. The system PSF is evaluated over a 3D volume that can be imaged by this PA camera. Its utility is demonstrated by imaging phantom and an ex vivo human prostate tissue sample.

  6. MIDA: A Multimodal Imaging-Based Detailed Anatomical Model of the Human Head and Neck.

    Directory of Open Access Journals (Sweden)

    Maria Ida Iacono

    Full Text Available Computational modeling and simulations are increasingly being used to complement experimental testing for the analysis of the safety and efficacy of medical devices. Multiple voxel- and surface-based whole- and partial-body models have been proposed in the literature, typically with a spatial resolution in the range of 1-2 mm and with 10-50 different tissue types resolved. We have developed a multimodal imaging-based detailed anatomical model of the human head and neck, named "MIDA". The model was obtained by integrating three different magnetic resonance imaging (MRI) modalities, the parameters of which were tailored to enhance the signals of specific tissues: (i) structural T1- and T2-weighted MRIs, plus a heavily T2-weighted MRI slab with high nerve contrast optimized to enhance the structures of the ear and eye; (ii) magnetic resonance angiography (MRA) data to image the vasculature; and (iii) diffusion tensor imaging (DTI) to obtain information on anisotropy and fiber orientation. The unique multimodal high-resolution approach allowed resolving 153 structures, including several distinct muscles, bones and skull layers, arteries and veins, and nerves, as well as salivary glands. The model also offers a detailed characterization of the eyes, ears, and deep brain structures. A special automatic atlas-based segmentation procedure was adopted to include a detailed map of the nuclei of the thalamus and midbrain in the head model. The suitability of the model for simulations involving different numerical methods, discretization approaches, and DTI-based tensorial electrical conductivity was examined in a case study in which the electric field was generated by transcranial alternating current stimulation. The voxel- and surface-based versions of the models are freely available to the scientific community.

  7. Image Recommendation Algorithm Using Feature-Based Collaborative Filtering

    Science.gov (United States)

    Kim, Deok-Hwan

    As the multimedia contents market continues its rapid expansion, the amount of image content used in mobile phone services, digital libraries, and catalog services is increasing remarkably. In spite of this rapid growth, users experience high levels of frustration when searching for a desired image. Even though new images are profitable to service providers, traditional collaborative filtering methods cannot recommend them. To solve this problem, in this paper, we propose a feature-based collaborative filtering (FBCF) method that reflects the user's most recent preferences by representing his purchase sequence in the visual feature space. The proposed approach represents the images that have been purchased in the past as feature clusters in the multi-dimensional feature space and then selects neighbors by using an inter-cluster distance function between their feature clusters. Various experiments using real image data demonstrate that the proposed approach provides higher quality recommendations and better performance than typical collaborative filtering and content-based filtering techniques.
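The neighbor-selection idea above — summarize each user's purchase history as feature clusters, then compare users with an inter-cluster distance — can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the toy 2-D "visual features", the crude k-means, and the symmetric average closest-centroid distance are all assumptions.

```python
import numpy as np

def cluster_centroids(purchases, n_clusters=2, iters=10, seed=0):
    """Crude k-means over a user's purchased-image feature vectors."""
    rng = np.random.default_rng(seed)
    X = np.asarray(purchases, dtype=float)
    k = min(n_clusters, len(X))
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers

def inter_cluster_distance(ca, cb):
    """Symmetric average closest-centroid distance between two users' clusters."""
    d = np.linalg.norm(ca[:, None] - cb[None], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# toy 2-D "visual features" for three users' purchase histories
target = [[0.1, 0.2], [0.15, 0.25], [0.9, 0.8]]
users = {"u1": [[0.12, 0.22], [0.88, 0.79]], "u2": [[0.5, 0.5], [0.45, 0.55]]}
dists = {u: inter_cluster_distance(cluster_centroids(target),
                                   cluster_centroids(p)) for u, p in users.items()}
nearest = min(dists, key=dists.get)
print(nearest)  # u1's purchase clusters lie closest to the target user's
```

The nearest neighbor by this measure ("u1" here) would then supply the candidate images for recommendation.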

  8. Image annotation based on positive-negative instances learning

    Science.gov (United States)

    Zhang, Kai; Hu, Jiwei; Liu, Quan; Lou, Ping

    2017-07-01

    Automatic image annotation is a challenging task in computer vision; its main purpose is to manage the massive number of images on the Internet and to assist intelligent retrieval. This paper designs a new image annotation model based on a visual bag of words, using low-level features such as color and texture information as well as mid-level features such as SIFT, and combines the pic2pic, label2pic, and label2label correlations to measure the degree of correlation between labels and images. We aim to prune the specific features for each single label and formalize the annotation task as a learning process based on positive-negative instances learning. Experiments are performed on the Corel5K dataset and provide quite promising results when compared with other existing methods.

  9. SU-F-I-08: CT Image Ring Artifact Reduction Based On Prior Image

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, C; Qi, H; Chen, Z; Wu, S; Xu, Y; Zhou, L [Southern Medical University, Guangzhou, Guangdong (China)

    2016-06-15

    Purpose: In a computed tomography (CT) system, CT images are reconstructed with ring artifacts when some adjacent detector bins do not work. The ring artifacts severely degrade CT image quality. We present a CT ring artifact reduction method based on projection data correction, which aims to estimate the missing projection data accurately and thus remove the ring artifacts from CT images. Methods: The method consists of ten steps: 1) identification of the abnormal pixel lines in the projection sinogram; 2) linear interpolation within the pixel lines of the projection sinogram; 3) FBP reconstruction using the interpolated projection data; 4) filtering of the FBP image using a mean filter; 5) forward projection of the filtered FBP image; 6) subtraction of the forwarded projection from the original projection; 7) linear interpolation of the abnormal pixel line area in the subtraction projection; 8) addition of the interpolated subtraction projection to the forwarded projection; 9) FBP reconstruction using the corrected projection data; 10) return to step 4 until the pre-set iteration number is reached. The method is validated on simulated and real data to restore missing projection data and reconstruct ring artifact-free CT images. Results: We have studied the impact of the number of dead detector bins on the accuracy of the missing data estimation in the projection sinogram. For the simulated case with a 256 by 256 Shepp-Logan phantom, three iterations are sufficient to restore the projection data and reconstruct ring artifact-free images when the dead-bin ratio is under 30%. The dead-bin-induced artifacts are substantially reduced. More iterations are needed to reconstruct satisfactory images as the dead-bin ratio increases. Similar results were found for a real head phantom case. Conclusion: A practical CT image ring artifact correction scheme based on projection data is developed. This method can produce ring artifact-free CT images feasibly and effectively.
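Steps 1-2 of the pipeline above (flagging abnormal detector bins in the sinogram and linearly interpolating across them) can be sketched in isolation. The synthetic sinogram, the column-mean detection rule, and the thresholds here are illustrative assumptions, not the authors' exact criteria; their full method additionally iterates FBP reconstruction, mean filtering, and forward projection.

```python
import numpy as np

def interpolate_dead_bins(sino, dead):
    """Linearly interpolate dead detector columns of a sinogram
    (rows = projection angles, columns = detector bins)."""
    out = sino.astype(float).copy()
    bins = np.arange(sino.shape[1])
    for r in range(sino.shape[0]):
        out[r, dead] = np.interp(bins[dead], bins[~dead], sino[r, ~dead])
    return out

# toy sinogram: smooth detector profile, two dead bins reading zero
angles, nbins = 6, 16
true = 1.5 + np.sin(np.linspace(0, np.pi, nbins))[None] * np.ones((angles, 1))
dead = np.zeros(nbins, bool); dead[[5, 9]] = True
observed = true.copy(); observed[:, dead] = 0.0

# step 1: flag abnormal bins (column mean far below the typical level)
col_mean = observed.mean(axis=0)
flagged = col_mean < 0.5 * np.median(col_mean)
# step 2: fill the flagged bins by linear interpolation
fixed = interpolate_dead_bins(observed, flagged)
print(np.abs(fixed - true).max() < 0.05)  # -> True
```

On this smooth toy profile a single interpolation pass already restores the dead bins closely; the paper's iterative loop handles the realistic case where the profile is not smooth.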

  10. SU-F-I-08: CT Image Ring Artifact Reduction Based On Prior Image

    International Nuclear Information System (INIS)

    Yuan, C; Qi, H; Chen, Z; Wu, S; Xu, Y; Zhou, L

    2016-01-01

    Purpose: In a computed tomography (CT) system, CT images are reconstructed with ring artifacts when some adjacent detector bins do not work. The ring artifacts severely degrade CT image quality. We present a CT ring artifact reduction method based on projection data correction, which aims to estimate the missing projection data accurately and thus remove the ring artifacts from CT images. Methods: The method consists of ten steps: 1) identification of the abnormal pixel lines in the projection sinogram; 2) linear interpolation within the pixel lines of the projection sinogram; 3) FBP reconstruction using the interpolated projection data; 4) filtering of the FBP image using a mean filter; 5) forward projection of the filtered FBP image; 6) subtraction of the forwarded projection from the original projection; 7) linear interpolation of the abnormal pixel line area in the subtraction projection; 8) addition of the interpolated subtraction projection to the forwarded projection; 9) FBP reconstruction using the corrected projection data; 10) return to step 4 until the pre-set iteration number is reached. The method is validated on simulated and real data to restore missing projection data and reconstruct ring artifact-free CT images. Results: We have studied the impact of the number of dead detector bins on the accuracy of the missing data estimation in the projection sinogram. For the simulated case with a 256 by 256 Shepp-Logan phantom, three iterations are sufficient to restore the projection data and reconstruct ring artifact-free images when the dead-bin ratio is under 30%. The dead-bin-induced artifacts are substantially reduced. More iterations are needed to reconstruct satisfactory images as the dead-bin ratio increases. Similar results were found for a real head phantom case. Conclusion: A practical CT image ring artifact correction scheme based on projection data is developed. This method can produce ring artifact-free CT images feasibly and effectively.

  11. Blind compressed sensing image reconstruction based on alternating direction method

    Science.gov (United States)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    To address the problem of reconstructing the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on the existing blind compressed sensing theory, the optimal solution is obtained by an alternating minimization method. The proposed method addresses the difficulty of explicitly specifying the sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. The method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with stronger adaptability. The experimental results show that the proposed image reconstruction algorithm based on blind compressed sensing can recover high-quality image signals under under-sampling conditions.
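The alternating scheme described above — fix the dictionary and solve a sparse-coding step, then fix the codes and update the dictionary — can be sketched on synthetic data. The ISTA inner loop, the l1 weight `lam`, and the iteration counts are illustrative assumptions; the paper's exact alternating direction formulation may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic data: Y = D_true @ A_true with a sparse coefficient matrix
n, k, m = 20, 8, 200                      # signal dim, atoms, samples
D_true = rng.standard_normal((n, k))
A_true = rng.standard_normal((k, m)) * (rng.random((k, m)) < 0.2)
Y = D_true @ A_true

D = rng.standard_normal((n, k))           # unknown sparse basis: random init
D /= np.linalg.norm(D, axis=0)
A = np.zeros((k, m))
lam = 0.05                                # l1 penalty weight (assumed)

for _ in range(30):
    # step 1: sparse coding with D fixed (a few ISTA iterations)
    L = np.linalg.norm(D, 2) ** 2         # Lipschitz constant of the gradient
    for _ in range(20):
        A = A - D.T @ (D @ A - Y) / L
        A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)
    # step 2: dictionary update with A fixed (least squares + renormalize)
    D = Y @ np.linalg.pinv(A)
    D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12

rel_err = np.linalg.norm(Y - D @ A) / np.linalg.norm(Y)
print(round(float(rel_err), 3))
```

With both factors unknown, the relative reconstruction error drops to a small value even though the true dictionary was never supplied, which is the essence of the blind formulation.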

  12. Discriminative Projection Selection Based Face Image Hashing

    Science.gov (United States)

    Karabat, Cagatay; Erdogan, Hakan

    Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.

  13. RMB identification based on polarization parameters inversion imaging

    Science.gov (United States)

    Liu, Guoyan; Gao, Kun; Liu, Xuefeng; Ni, Guoqiang

    2016-10-01

    Counterfeit money threatens social order, and conventional anti-counterfeiting technology is too dated to reliably verify authenticity. The intrinsic difference between genuine and counterfeit notes lies in their paper tissue. This paper introduces a new technique for examining RMB notes: polarization parameter indirect microscopic imaging. A conventional reflection microscope serves as the basic optical system, into which a polarization-modulation mechanism is inserted. Near-field structural characteristics are delivered through the coupling of the optical wave with the material. From the physics of this coupling and conduction, the changes in the optical wave parameters are calculated to obtain the image intensity curves. By analyzing the near-field polarization parameters at the nanoscale, an indirect polarization parameter image of the paper-tissue fibers is finally computed in order to verify authenticity.

  14. A Reliable Image Watermarking Scheme Based on Redistributed Image Normalization and SVD

    Directory of Open Access Journals (Sweden)

    Musrrat Ali

    2016-01-01

    Full Text Available Digital image watermarking is the process of concealing secret information in a digital image to protect its rightful ownership. Most of the existing block-based singular value decomposition (SVD) digital watermarking schemes are not robust to geometric distortions, such as rotation by an integer multiple of ninety degrees and image flipping, which change the locations of the pixels but do not change their intensities. Also, these schemes use a constant scaling factor, giving the same weight to coefficients of different magnitudes, which results in visible distortion in some regions of the watermarked image. Therefore, to overcome these problems, this paper proposes a novel image watermarking scheme that incorporates the concepts of redistributed image normalization and a variable scaling factor depending on the magnitude of the coefficient to be embedded. Furthermore, to enhance security and robustness, the watermark is shuffled using a piecewise linear chaotic map before embedding. To investigate the robustness of the scheme, several attacks are applied to seriously distort the watermarked image. Empirical analysis of the results demonstrates the efficiency of the proposed scheme.
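The core embedding idea — modify a block's singular values with a scaling factor proportional to the coefficient's magnitude rather than a constant — can be sketched as below. This is a simplified, non-blind illustration (extraction compares against the original block); the factor `alpha`, the block size, and the largest-singular-value rule are assumptions, and the paper's redistributed image normalization and chaotic watermark shuffling are not reproduced.

```python
import numpy as np

def embed_block(block, wbit, alpha=0.05):
    """Embed one watermark bit into a block's largest singular value.
    The shift is alpha * s[0], i.e. proportional to the coefficient's
    own magnitude (variable scaling, not a constant factor)."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    s = s.copy()
    s[0] += alpha * s[0] * (1 if wbit else -1)
    return U @ np.diag(s) @ Vt

def extract_block(block, ref_block):
    """Recover the bit by comparing largest singular values (non-blind)."""
    s = np.linalg.svd(block, compute_uv=False)[0]
    s_ref = np.linalg.svd(ref_block, compute_uv=False)[0]
    return int(s > s_ref)

rng = np.random.default_rng(0)
img_block = rng.random((8, 8)) * 255       # one 8x8 host-image block
bits = [1, 0, 1, 1]
marked = [embed_block(img_block, b) for b in bits]
recovered = [extract_block(m, img_block) for m in marked]
print(recovered)  # -> [1, 0, 1, 1]
```

Because the perturbation scales with the singular value itself, strong coefficients absorb a proportionally sized change, which is the motivation for the variable scaling factor in the paper.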

  15. Anatomical based registration of multi-sector x-ray images for panorama reconstruction

    Science.gov (United States)

    Ben-Zikri, Yehuda Kfir; Mendez, Stacy; Linte, Cristian A.

    2017-03-01

    Accurate measurement of long limb alignment is an essential stage of the pre-operative planning of realignment surgery. This alignment is quantified according to the hip-knee-ankle (HKA) angle of the mechanical axis of the lower extremity and is measured based on a full-length weight-bearing X-ray or standard computed radiography (CR) image of the patient in standing position. Due to the limited field-of-view of the traditionally employed digital X-ray imaging systems, several sector images are required to capture the posture of a standing individual. These sector images then need to be "stitched" together to reconstruct the standing posture. To eliminate the user-induced variability and time constraints associated with the traditional manual "stitching" protocol, we have created an image processing application to automate the stitching process when no reliable external markers are available in the images, by relying only on the most reliable anatomical content of the image. The application starts with a rough segmentation of the tibia, and the sector images are then registered by evaluating the Dice coefficient between the edges of the corresponding bones along the medial edge. The identified translations are then used to register the original sector images into the standing panorama image. To test the robustness of our method, we randomly selected 40 datasets from a varied database consisting of nearly 100 patient X-ray images acquired for patient screening as part of a multi-site clinical trial. The resulting horizontal and vertical translation values from the automated registration were compared to the homologous translations recorded during the manual panorama generation conducted by a knowledgeable X-ray imaging technician. The mean and standard deviation of the differences for the horizontal translation parameters were -0.27 +/- 1.14 mm and 0.31 +/- 1.86 mm for the left and right tibia, respectively. The vertical translation differences for the left and
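The registration step described above — score candidate translations by the Dice coefficient between binary bone-edge masks — can be sketched for a single (vertical) direction. The exhaustive integer-shift search and the toy edge masks are illustrative assumptions, not the application's full 2-D procedure.

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-9)

def best_vertical_shift(edge_a, edge_b, max_shift=10):
    """Exhaustively score integer vertical shifts of edge_b against
    edge_a and return the one maximizing the Dice coefficient."""
    best, best_s = -1.0, 0
    for s in range(-max_shift, max_shift + 1):
        d = dice(edge_a, np.roll(edge_b, s, axis=0))
        if d > best:
            best, best_s = d, s
    return best_s

# toy "tibia edge" masks: the same vertical segment, offset by 4 rows
a = np.zeros((40, 20), bool); a[10:30, 8] = True
b = np.zeros((40, 20), bool); b[14:34, 8] = True
print(best_vertical_shift(a, b))  # -> -4 (b must move up 4 rows)
```

The recovered shift is then applied to the original sector image, which mirrors how the identified translations are used to assemble the panorama.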

  16. Defogging of road images using gain coefficient-based trilateral filter

    Science.gov (United States)

    Singh, Dilbag; Kumar, Vijay

    2018-01-01

    Poor weather conditions are responsible for many road accidents year in and year out. Poor weather conditions, such as fog, degrade the visibility of objects, making it difficult for drivers to identify vehicles in a foggy environment. Dark channel prior (DCP)-based defogging techniques have been found to be an efficient way to remove fog from road images. However, they produce poor results when image objects are inherently similar to the airlight and no shadow is cast on them. To eliminate this problem, a modified restoration model based on the DCP is developed to remove fog from road images. The transmission map is also refined by developing a gain coefficient-based trilateral filter. Thus, the proposed technique has the ability to remove fog from road images in an effective manner. The proposed technique is compared with seven well-known defogging techniques on two benchmark foggy image datasets and five real-world foggy images. The experimental results demonstrate that the proposed approach is able to remove different types of fog from roadside images as well as significantly improve image visibility. They also reveal that the restored images have little or no artifacts.
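The dark channel prior that the method builds on can be sketched as follows: the dark channel is a local minimum over the color channels and a spatial patch, and the transmission estimate is t = 1 - omega * dark(I / A). The synthetic hazy image, the patch size, and the omega value are illustrative assumptions; the paper's modified restoration model and gain coefficient-based trilateral filter are not reproduced here.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Minimum over the color channels followed by a local minimum filter."""
    return minimum_filter(img.min(axis=2), size=patch)

def transmission(img, airlight, patch=15, omega=0.95):
    """DCP transmission estimate: t = 1 - omega * dark(I / A)."""
    return 1.0 - omega * dark_channel(img / airlight[None, None, :], patch)

rng = np.random.default_rng(0)
clear = rng.random((32, 32, 3)) * 0.6        # haze-free toy scene
A = np.array([0.9, 0.9, 0.9])                # airlight
t_true = 0.7                                 # uniform true transmission
hazy = clear * t_true + A * (1 - t_true)     # standard haze imaging model

t_est = transmission(hazy, A)
print(abs(float(np.median(t_est)) - t_true) < 0.05)  # -> True
```

Refining this raw transmission map (here, with a trilateral filter) before inverting the haze model is precisely where the paper's contribution sits.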

  17. Verifying the hypothesis of disconnection syndrome in patients with conduction aphasia using diffusion tensor imaging

    Institute of Scientific and Technical Information of China (English)

    Yanqin Guo; Jing Xu; Yindong Yang

    2007-01-01

    BACKGROUND: Disconnection theory holds that conduction aphasia is caused by disconnection of the anterior and posterior language areas, i.e., a lesion of the arcuate fasciculus. OBJECTIVE: To verify the disconnection theory of the repetition disorder in conduction aphasia by comparing the characteristics of diffusion tensor imaging between healthy persons and patients with conduction aphasia. DESIGN: Case-control observation. SETTING: Department of Neurology, Hongqi Hospital Affiliated to Mudanjiang Medical College. PARTICIPANTS: Five male patients with conduction aphasia due to cerebral infarction involving the arcuate fasciculus, with a mean age of (43±2) years, who were hospitalized in the Department of Neurology, Hongqi Hospital Affiliated to Mudanjiang Medical College from February 2004 to February 2005, were involved in this experiment. All involved patients were confirmed as having cerebral infarction by skull CT and MRI and met the diagnostic criteria revised at the 4th Cerebrovascular Conference in 1995. They were examined with the Aphasia Battery of Chinese (ABC) edited by Surong Gao. The repetition results were disproportionately poorer than auditory comprehension and were consistent with the pattern of conduction aphasia. Another 5 healthy males, with a mean age of (43±1) years, who were physicians receiving further training in the Department of Neurology, Beijing Tiantan Hospital, were also involved in this experiment. Informed consent for the examined items was obtained from all subjects. METHODS: All subjects underwent handedness assessment using the handedness criteria formulated by the Department of Neurology, First Hospital Affiliated to Beijing Medical University. The arcuate fasciculus of the involved patients and healthy controls was analyzed with diffusion tensor imaging (DTI) and divided into 3 parts (anterior, middle and posterior segments) for determining the FA value (the mean value was obtained after three measurements), and a comparison of FA value was

  18. Defining the value of magnetic resonance imaging in prostate brachytherapy using time-driven activity-based costing.

    Science.gov (United States)

    Thaker, Nikhil G; Orio, Peter F; Potters, Louis

    Magnetic resonance imaging (MRI) simulation and planning for prostate brachytherapy (PBT) may deliver potential clinical benefits but at an unknown cost to the provider and healthcare system. Time-driven activity-based costing (TDABC) is an innovative bottom-up costing tool in healthcare that can be used to measure the actual consumption of resources required over the full cycle of care. TDABC analysis was conducted to compare patient-level costs for an MRI-based versus traditional PBT workflow. TDABC cost was only 1% higher for the MRI-based workflow, and utilization of MRI allowed for cost shifting from other imaging modalities, such as CT and ultrasound, to MRI during the PBT process. Future initiatives will be required to follow the costs of care over longer periods of time to determine if improvements in outcomes and toxicities with an MRI-based approach lead to lower resource utilization and spending over the long-term. Understanding provider costs will become important as healthcare reform transitions to value-based purchasing and other alternative payment models. Copyright © 2016 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  19. Feature Selection for Object-Based Classification of High-Resolution Remote Sensing Images Based on the Combination of a Genetic Algorithm and Tabu Search

    Directory of Open Access Journals (Sweden)

    Lei Shi

    2018-01-01

    Full Text Available In object-based image analysis of high-resolution images, the number of features can reach hundreds, so it is necessary to perform feature reduction prior to classification. In this paper, a feature selection method based on the combination of a genetic algorithm (GA and tabu search (TS is presented. The proposed GATS method aims to reduce the premature convergence of the GA by the use of TS. A prematurity index is first defined to judge the convergence situation during the search. When premature convergence does take place, an improved mutation operator is executed, in which TS is performed on individuals with higher fitness values. As for the other individuals with lower fitness values, mutation with a higher probability is carried out. Experiments using the proposed GATS feature selection method and three other methods, a standard GA, the multistart TS method, and ReliefF, were conducted on WorldView-2 and QuickBird images. The experimental results showed that the proposed method outperforms the other methods in terms of the final classification accuracy.

  20. Feature Selection for Object-Based Classification of High-Resolution Remote Sensing Images Based on the Combination of a Genetic Algorithm and Tabu Search

    Science.gov (United States)

    Shi, Lei; Wan, Youchuan; Gao, Xianjun

    2018-01-01

    In object-based image analysis of high-resolution images, the number of features can reach hundreds, so it is necessary to perform feature reduction prior to classification. In this paper, a feature selection method based on the combination of a genetic algorithm (GA) and tabu search (TS) is presented. The proposed GATS method aims to reduce the premature convergence of the GA by the use of TS. A prematurity index is first defined to judge the convergence situation during the search. When premature convergence does take place, an improved mutation operator is executed, in which TS is performed on individuals with higher fitness values. As for the other individuals with lower fitness values, mutation with a higher probability is carried out. Experiments using the proposed GATS feature selection method and three other methods, a standard GA, the multistart TS method, and ReliefF, were conducted on WorldView-2 and QuickBird images. The experimental results showed that the proposed method outperforms the other methods in terms of the final classification accuracy. PMID:29581721
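The GA/TS interplay the papers describe — detect premature convergence with a prematurity index, then apply tabu search to the fittest individuals and heavier mutation to the rest — can be sketched on a toy feature-selection objective. The fitness proxy, the mean/best prematurity index, the tabu-list length, and all rates below are illustrative assumptions, not the authors' operators.

```python
import numpy as np

rng = np.random.default_rng(0)
n, pop_size = 16, 12
target = rng.random(n) < 0.5            # hypothetical optimal feature subset

def fit(ind):                           # stand-in for classification accuracy
    return (ind == target).mean()

def flip(ind, j):
    out = ind.copy()
    out[j] = ~out[j]
    return out

pop = rng.random((pop_size, n)) < 0.5
tabu = []                               # recently flipped bit positions

for gen in range(40):
    scores = np.array([fit(p) for p in pop])
    premature = scores.mean() > 0.95 * scores.max()   # prematurity index
    order = np.argsort(scores)[::-1]
    nxt = [pop[order[0]].copy()]                      # keep the elite
    while len(nxt) < pop_size:
        a, b = pop[order[rng.integers(0, 4, size=2)]] # parents from the top 4
        cut = int(rng.integers(1, n))
        child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
        rate = 0.3 if premature else 0.05             # heavier mutation when premature
        child = np.logical_xor(child, rng.random(n) < rate)
        nxt.append(child)
    pop = np.array(nxt)
    if premature:                       # tabu search on the best individual:
        moves = [(fit(flip(pop[0], j)), j)            # best non-tabu single flip
                 for j in range(n) if j not in tabu]
        f_new, j = max(moves)
        if f_new >= fit(pop[0]):
            pop[0] = flip(pop[0], j)
            tabu = (tabu + [j])[-4:]                  # short tabu memory

best = max(fit(p) for p in pop)
print(best)
```

Elitism keeps the best score monotone, so the tabu flips taken during premature phases steadily push the best individual toward the optimal subset.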

  1. Proton Conductivity and Operational Features Of PBI-Based Membranes

    DEFF Research Database (Denmark)

    Qingfeng, Li; Jensen, Jens Oluf; Precht Noyé, Pernille

    2005-01-01

    As an approach to high-temperature operation of PEMFCs, acid-doped PBI membranes are under active development. The membrane exhibits high proton conductivity at low water contents at temperatures up to 200°C. Mechanisms of proton conduction for the membranes have been proposed. Based on the membranes, fuel cell tests have been demonstrated. Operating features of the PBI cell include no humidification, high CO tolerance, better heat utilization, and possible integration with fuel processing units. Issues for further development are also discussed.

  2. Choroidal vasculature characteristics based choroid segmentation for enhanced depth imaging optical coherence tomography images

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Qiang; Niu, Sijie [School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094 (China); Yuan, Songtao; Fan, Wen, E-mail: fanwen1029@163.com; Liu, Qinghuai [Department of Ophthalmology, The First Affiliated Hospital with Nanjing Medical University, Nanjing 210029 (China)

    2016-04-15

    Purpose: In clinical research, it is important to measure choroidal thickness when eyes are affected by various diseases. The main purpose is to automatically segment the choroid in enhanced depth imaging optical coherence tomography (EDI-OCT) images with five B-scans averaged. Methods: The authors present an automated choroid segmentation method based on choroidal vasculature characteristics for EDI-OCT images with five B-scans averaged. By considering that the large vessels of Haller’s layer neighbor the choroid-sclera junction (CSJ), the authors measured the intensity ascending distance and a maximum intensity image in the axial direction from a smoothed and normalized EDI-OCT image. Then, based on the generated choroidal vessel image, the authors constructed the CSJ cost and constrained the CSJ search neighborhood. Finally, graph search with smoothness constraints was utilized to obtain the CSJ boundary. Results: Experimental results with 49 images from 10 eyes of 8 normal persons and 270 images from 57 eyes of 44 patients with several stages of diabetic retinopathy and age-related macular degeneration demonstrate that the proposed method can accurately segment the choroid in EDI-OCT images with five B-scans averaged. The mean choroid thickness difference and overlap ratio between the proposed method and manual segmentation drawn by experts were −11.43 μm and 86.29%, respectively. Conclusions: Good performance was achieved for normal and pathologic eyes, which shows that the method is effective for automated choroid segmentation of EDI-OCT images with five B-scans averaged.

  3. Choroidal vasculature characteristics based choroid segmentation for enhanced depth imaging optical coherence tomography images

    International Nuclear Information System (INIS)

    Chen, Qiang; Niu, Sijie; Yuan, Songtao; Fan, Wen; Liu, Qinghuai

    2016-01-01

    Purpose: In clinical research, it is important to measure choroidal thickness when eyes are affected by various diseases. The main purpose is to automatically segment the choroid in enhanced depth imaging optical coherence tomography (EDI-OCT) images with five B-scans averaged. Methods: The authors present an automated choroid segmentation method based on choroidal vasculature characteristics for EDI-OCT images with five B-scans averaged. By considering that the large vessels of Haller’s layer neighbor the choroid-sclera junction (CSJ), the authors measured the intensity ascending distance and a maximum intensity image in the axial direction from a smoothed and normalized EDI-OCT image. Then, based on the generated choroidal vessel image, the authors constructed the CSJ cost and constrained the CSJ search neighborhood. Finally, graph search with smoothness constraints was utilized to obtain the CSJ boundary. Results: Experimental results with 49 images from 10 eyes of 8 normal persons and 270 images from 57 eyes of 44 patients with several stages of diabetic retinopathy and age-related macular degeneration demonstrate that the proposed method can accurately segment the choroid in EDI-OCT images with five B-scans averaged. The mean choroid thickness difference and overlap ratio between the proposed method and manual segmentation drawn by experts were −11.43 μm and 86.29%, respectively. Conclusions: Good performance was achieved for normal and pathologic eyes, which shows that the method is effective for automated choroid segmentation of EDI-OCT images with five B-scans averaged.

  4. Characterization of conductive nanobiomaterials derived from viral assemblies by low-voltage STEM imaging and Raman scattering

    International Nuclear Information System (INIS)

    Plascencia-Villa, Germán; Bahena, Daniel; José-Yacamán, Miguel; Carreño-Fuentes, Liliana; Palomares, Laura A; Ramírez, Octavio T

    2014-01-01

    New technologies require the development of novel nanomaterials that need to be fully characterized to achieve their potential. High-resolution low-voltage scanning transmission electron microscopy (STEM) has proven to be a very powerful technique in nanotechnology, but its use for the characterization of nanobiomaterials has been limited. Rotavirus VP6 self-assembles into nanotubular assemblies that possess an intrinsic affinity for Au ions. This property was exploited to produce hybrid nanobiomaterials by the in situ functionalization of recombinant VP6 nanotubes with gold nanoparticles. In this work, Raman spectroscopy and advanced analytical electron microscopy imaging with spherical aberration-corrected (Cs) STEM and nanodiffraction at low voltage and dose were employed to characterize the nanobiomaterials. STEM imaging revealed the precise structure and arrangement of the protein templates, as well as the nanostructure and atomic arrangement of the gold nanoparticles, with sub-Angstrom spatial resolution while avoiding radiation damage. The imaging was coupled with backscattered electron imaging, ultra-high-resolution scanning electron microscopy, and x-ray spectroscopy. The hybrid nanobiomaterials obtained showed unique properties as bioelectronic conductive devices and exhibited enhanced Raman scattering due to their precise arrangement into superlattices, demonstrating the utility of viral assemblies as functional, integrative, self-assembled nanomaterials for novel applications. (paper)

  5. Fluorescence based molecular in vivo imaging

    International Nuclear Information System (INIS)

    Ebert, Bernd

    2008-01-01

    Molecular imaging is a modern research area that allows the in vivo study of the kinetics of molecular biological processes using appropriate probes and visualization methods. Apart from the injection of contrast media, the methodology may be regarded as non-invasive. To image in vivo molecular processes as accurately as possible, the probes used should not strongly perturb the biological system. Contrast media, as an important part of molecular imaging, can contribute significantly to the understanding of molecular processes and to the development of tailored diagnostics and therapy. For more than 15 years, PTB has been developing optical imaging systems that may be used for fluorescence-based visualization of tissue phantoms and small animal models, for the localization of tumors and their precursors, and for the early recognition of inflammatory processes in clinical trials. Cellular changes occur in many diseases; thus, molecular imaging might be important for the early diagnosis of chronic inflammatory diseases. Fluorescent dyes can be used as unspecific or specific contrast media, which allow enhanced detection sensitivity.

  6. FUZZY BASED CONTRAST STRETCHING FOR MEDICAL IMAGE ENHANCEMENT

    Directory of Open Access Journals (Sweden)

    T.C. Raja Kumar

    2011-07-01

    Full Text Available Contrast stretching is an important part of medical image processing applications. Contrast is the difference in intensity between adjacent pixels. Fuzzy statistical values are analyzed, producing better results in the spatial domain of the input image. The histogram mapping produces a resultant image with less impulsive noise and a smooth appearance. The probabilities of the gray values are generated, and the fuzzy set is determined from the positions of the input image pixels. The results indicate the good performance of the proposed fuzzy-based stretching. The inverse transform of the real values is mapped with the input image to generate the fuzzy statistics. This approach gives flexible image enhancement for medical images in the presence of noise.
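One common realization of fuzzy contrast stretching is the intensification (INT) operator: fuzzify gray levels into memberships, steepen the membership curve around 0.5, and defuzzify back to gray levels. The sketch below assumes this classic operator; the paper's particular fuzzy statistics and histogram mapping may differ.

```python
import numpy as np

def fuzzy_stretch(img):
    """Contrast stretching via the fuzzy intensification (INT) operator:
    fuzzify gray levels to memberships, intensify, defuzzify."""
    g = img.astype(float)
    gmin, gmax = g.min(), g.max()
    mu = (g - gmin) / (gmax - gmin + 1e-9)                    # fuzzification
    mu_int = np.where(mu <= 0.5,                              # INT operator:
                      2 * mu ** 2,                            # compress below 0.5,
                      1 - 2 * (1 - mu) ** 2)                  # expand above 0.5
    return (gmin + mu_int * (gmax - gmin)).astype(img.dtype)  # defuzzification

# toy low-contrast "medical" image: values clustered around mid-gray
rng = np.random.default_rng(0)
img = (120 + 20 * rng.standard_normal((64, 64))).clip(0, 255).astype(np.uint8)
out = fuzzy_stretch(img)
print(out.std() > img.std())  # intensity spread (contrast) increased
```

Because the INT operator has slope 2 at membership 0.5, mid-gray values are pushed apart, which is exactly the stretching effect described for low-contrast medical images.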

  7. Machine-Learning-Based Future Received Signal Strength Prediction Using Depth Images for mmWave Communications

    OpenAIRE

    Okamoto, Hironao; Nishio, Takayuki; Nakashima, Kota; Koda, Yusuke; Yamamoto, Koji; Morikura, Masahiro; Asai, Yusuke; Miyatake, Ryo

    2018-01-01

    This paper discusses a machine-learning (ML)-based future received signal strength (RSS) prediction scheme using depth camera images for millimeter-wave (mmWave) networks. The scheme provides future RSS predictions for any mmWave link within the camera's view, including links on which nodes are not transmitting frames. This enables network controllers to conduct network operations before line-of-sight path blockages degrade the RSS. Using ML techniques, the prediction scheme automatically...

  8. Region-based multifocus image fusion for the precise acquisition of Pap smear images.

    Science.gov (United States)

    Tello-Mijares, Santiago; Bescós, Jesús

    2018-05-01

    A multifocus image fusion method that obtains a single focused image from a sequence of microscopic high-magnification Papanicolaou (Pap smear) images is presented. These images, each captured at a different position of the microscope lens, frequently show partially focused cells or parts of cells, which makes them impractical for the direct application of image analysis techniques. The proposed method obtains a focused image with high preservation of the original pixel information while keeping fusion artifacts barely visible. The method starts by identifying the best-focused image of the sequence; then, it performs a mean-shift segmentation over this image; the focus level of the segmented regions is evaluated in all the images of the sequence, and the best-focused regions are merged into a single combined image; finally, this image is processed with an adaptive artifact-removal step. The combination of a region-oriented approach, instead of block-based approaches, and a minimal modification of the values of focused pixels in the original images achieves a highly contrasted image with no visible artifacts, which makes this method especially convenient for the medical imaging domain. The proposed method is compared with several state-of-the-art alternatives over a representative dataset. The experimental results show that our proposal obtains the best and most stable quality indicators. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
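    A minimal sketch of the region-oriented selection step, using the variance of a discrete Laplacian as the focus measure over a given label map (the segmentation, focus measure, and artifact removal of the actual paper may differ):

```python
import numpy as np

def laplacian(img):
    # 4-neighbour discrete Laplacian (wrap-around borders, for brevity)
    return (np.roll(img, 1, 0) + np.roll(img, -1, 0)
            + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)

def fuse_regions(stack, labels):
    """For each labelled region, copy pixels from the stack image whose
    Laplacian variance (focus measure) inside that region is highest."""
    laps = [laplacian(im.astype(float)) for im in stack]
    fused = np.zeros_like(stack[0], dtype=float)
    for r in np.unique(labels):
        mask = labels == r
        best = max(range(len(stack)), key=lambda i: laps[i][mask].var())
        fused[mask] = stack[best][mask]
    return fused
```

    A sharply focused region has strong high-frequency content, so its Laplacian variance is high; a defocused copy of the same region is smoother and scores lower.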

  9. Brain medical image diagnosis based on corners with importance-values.

    Science.gov (United States)

    Gao, Linlin; Pan, Haiwei; Li, Qing; Xie, Xiaoqin; Zhang, Zhiqiang; Han, Jinming; Zhai, Xiao

    2017-11-21

    Brain disorders are one of the top causes of human death. Generally, neurologists analyze brain medical images for diagnosis. In the image analysis field, corners are one of the most important features, which makes corner detection and matching studies essential. However, existing corner detection studies do not consider the domain information of the brain. This leads to many useless corners and the loss of significant information. Regarding corner matching, the uncertainty and structure of the brain are not employed in existing methods. Moreover, most corner matching studies are used for 3D image registration; they are inapplicable to 2D brain image diagnosis because of the different mechanisms. To address these problems, we propose a novel corner-based brain medical image classification method. Specifically, we automatically extract multilayer texture images (MTIs), which embody diagnostic information from neurologists. Moreover, we present a corner matching method utilizing the uncertainty and structure of brain medical images and a bipartite graph model. Finally, we propose a similarity calculation method for diagnosis. Brain CT and MRI image sets are utilized to evaluate the proposed method. First, classifiers are trained with N-fold cross-validation to produce the best θ and K. Then, independent brain image sets are tested to evaluate the classifiers. Moreover, the classifiers are also compared with advanced brain image classification studies. For the brain CT image set, the proposed classifier outperforms the comparison methods by at least 8% on accuracy and 2.4% on F1-score. Regarding the brain MRI image set, the proposed classifier is superior to the comparison methods by more than 7.3% on accuracy and 4.9% on F1-score. Results also demonstrate that the proposed method is robust to different intensity ranges of brain medical images. In this study, we develop a robust corner-based brain medical image classifier. Specifically, we propose a corner detection

  10. Initial Investigation of Software-Based Bone-Suppressed Imaging

    International Nuclear Information System (INIS)

    Park, Eunpyeong; Youn, Hanbean; Kim, Ho Kyung

    2015-01-01

    Chest radiography is the most widely used imaging modality in medicine. However, the diagnostic performance of chest radiography is deteriorated by the anatomical background of the patient. Dual-energy imaging (DEI) has therefore recently emerged and has demonstrated improved performance. However, typical DEI requires two or more projections, hence causing additional patient dose. The motion artifact is another concern in DEI. In this study, we investigate DEI-like bone-suppressed imaging based on the post-processing of a single radiograph. To obtain bone-only images, we use the artificial neural network (ANN) method with an error-backpropagation-based machine learning approach. The computational load of the learning process of the ANN is too heavy for a practical implementation because we use the gradient descent method for the error backpropagation. We will use a more advanced error propagation method for the learning process.

  11. Knowledge-based analysis and understanding of 3D medical images

    International Nuclear Information System (INIS)

    Dhawan, A.P.; Juvvadi, S.

    1988-01-01

    The anatomical three-dimensional (3D) medical imaging modalities, such as X-ray CT and MRI, have been well recognized in diagnostic radiology for several years, while the nuclear medicine modalities, such as PET, have just started making a strong impact through functional imaging. Though PET images provide functional information about the human organs, they are hard to interpret because of the lack of anatomical information. The authors' objective is to develop a knowledge-based biomedical image analysis system which can interpret anatomical images (such as CT). The anatomical information thus obtained can then be used in analyzing PET images of the same patient. This will not only help in interpreting PET images but will also provide a means of studying the correlation between anatomical and functional imaging. This paper presents the preliminary results of the knowledge-based biomedical image analysis system for interpreting CT images of the chest.

  12. Distinguishing Adolescents With Conduct Disorder From Typically Developing Youngsters Based on Pattern Classification of Brain Structural MRI

    Directory of Open Access Journals (Sweden)

    Jianing Zhang

    2018-04-01

    Full Text Available Background: Conduct disorder (CD) is a mental disorder diagnosed in childhood or adolescence that presents with antisocial behaviors and is associated with structural alterations in the brain. However, whether these structural alterations can distinguish CD from healthy controls (HCs) remains unknown. Here, we quantified these structural differences and explored the classification ability of these quantitative features based on machine learning (ML). Materials and Methods: High-resolution 3D structural magnetic resonance imaging (sMRI) was acquired from 60 CD subjects and 60 age-matched HCs. Voxel-based morphometry (VBM) was used to assess regional gray matter (GM) volume differences. The significantly different regional GM volumes were then extracted as features and input into three ML classifiers: logistic regression, random forest, and support vector machine (SVM). We trained and tested these ML models for classifying CD from HCs using fivefold cross-validation (CV). Results: Eight brain regions with abnormal GM volumes were detected, mainly distributed in the frontal lobe, parietal lobe, anterior cingulate, cerebellum posterior lobe, lingual gyrus, and insula areas. We found that these ML models achieved comparable classification performance, with accuracy of 77.9–80.4%, specificity of 73.3–80.4%, sensitivity of 75.4–87.5%, and area under the receiver operating characteristic curve (AUC) of 0.76–0.80. Conclusion: Based on sMRI and ML, the regional GM volumes may be used as potential imaging biomarkers for stable and accurate classification of CD.
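    The fivefold cross-validation loop at the core of such a pipeline can be sketched as follows. The synthetic, well-separated 8-feature "GM volume" data and the nearest-centroid classifier are illustrative stand-ins only (the paper used VBM-derived features with logistic regression, random forest, and SVM):

```python
import numpy as np

rng = np.random.default_rng(0)
# 60 "CD" and 60 "HC" subjects, 8 regional GM-volume features each
# (synthetic values purely for illustration).
X = np.vstack([rng.normal(0.0, 0.1, (60, 8)),
               rng.normal(1.0, 0.1, (60, 8))])
y = np.array([0] * 60 + [1] * 60)

def fivefold_accuracy(X, y, k=5):
    """Mean accuracy of a nearest-centroid classifier over k CV folds."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # nearest-centroid: classify by distance to each class mean
        c0 = X[train][y[train] == 0].mean(axis=0)
        c1 = X[train][y[train] == 1].mean(axis=0)
        pred = (np.linalg.norm(X[test] - c1, axis=1)
                < np.linalg.norm(X[test] - c0, axis=1)).astype(int)
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))
```

    On this cleanly separable toy data the CV accuracy is 1.0; on real GM-volume features the paper reports 77.9–80.4%.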

  13. A Novel Image Stream Cipher Based On Dynamic Substitution

    OpenAIRE

    Elsharkawi, A.; El-Sagheer, R. M.; Akah, H.; Taha, H.

    2016-01-01

    Recently, many chaos-based stream cipher algorithms have been developed. A traditional chaos stream cipher XORs a secure random number sequence generated from chaotic maps (e.g., the logistic map, Bernoulli map, or tent map) with the original image to get the encrypted image. This type of stream cipher seems to be vulnerable to chosen-plaintext attacks. This paper introduces a new stream cipher algorithm based on a dynamic substitution box. The new algorithm uses one substitution b...

  14. Chaos-based partial image encryption scheme based on linear fractional and lifting wavelet transforms

    Science.gov (United States)

    Belazi, Akram; Abd El-Latif, Ahmed A.; Diaconu, Adrian-Viorel; Rhouma, Rhouma; Belghith, Safya

    2017-01-01

    In this paper, a new chaos-based partial image encryption scheme is proposed, based on Substitution-boxes (S-boxes) constructed by a chaotic system and a Linear Fractional Transform (LFT). It encrypts only the requisite parts of the sensitive information in the Lifting-Wavelet Transform (LWT) frequency domain, based on a hybrid of chaotic maps and a new S-box. In the proposed encryption scheme, the characteristics of confusion and diffusion are accomplished in three phases: block permutation, substitution, and diffusion. Dynamic keys are used instead of the fixed keys used in other approaches to control the encryption process and hinder attacks. The new S-box was constructed by mixing a chaotic map and the LFT to ensure high confidentiality in the inner encryption of the proposed approach. In addition, the hybrid combination of the S-box and chaotic systems strengthened the overall encryption performance and enlarged the key space required to resist brute-force attacks. Extensive experiments were conducted to evaluate the security and efficiency of the proposed approach. In comparison with previous schemes, the proposed cryptosystem showed high performance and great potential for cryptographic applications.
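    For context, the "traditional" chaotic stream cipher that such schemes improve upon simply XORs a logistic-map keystream with the plaintext; a minimal sketch (the key parameters x0 and r below are illustrative, not from the paper):

```python
def logistic_keystream(x0, r, n, burn=100):
    """Byte keystream from the logistic map x <- r*x*(1-x)."""
    x = x0
    for _ in range(burn):            # discard the transient
        x = r * x * (1.0 - x)
    ks = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        ks.append(int(x * 256) % 256)
    return ks

def xor_cipher(data, x0=0.3141, r=3.99):
    """Encrypts and decrypts (XOR is its own inverse)."""
    ks = logistic_keystream(x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))
```

    Because the keystream depends only on the fixed key (x0, r), identical plaintexts always produce identical ciphertexts, which is exactly the weakness against chosen-plaintext attacks that motivates dynamic keys and dynamic S-boxes.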

  15. Comparison of Subset-Based Local and Finite Element-Based Global Digital Image Correlation

    KAUST Repository

    Pan, Bing; Wang, B.; Lubineau, Gilles; Moussawi, Ali

    2015-01-01

    Digital image correlation (DIC) techniques require an image matching algorithm to register the same physical points represented in different images. Subset-based local DIC and finite element-based (FE-based) global DIC are the two primary image matching methods that have been extensively investigated and regularly used in the field of experimental mechanics. Due to its straightforward implementation and high efficiency, subset-based local DIC has been used in almost all commercial DIC packages. However, it is argued by some researchers that FE-based global DIC offers better accuracy because of the enforced continuity between element nodes. We propose a detailed performance comparison between these different DIC algorithms both in terms of measurement accuracy and computational efficiency. Then, by measuring displacements of the same calculation points using the same calculation algorithms (e.g., correlation criterion, initial guess estimation, subpixel interpolation, optimization algorithm and convergence conditions) and identical calculation parameters (e.g., subset or element size), the performances of subset-based local DIC and two FE-based global DIC approaches are carefully compared in terms of measurement error and computational efficiency using both numerical tests and real experiments. A detailed examination of the experimental results reveals that, when subset (element) size is not very small and the local deformation within a subset (element) can be well approximated by the shape function used, standard subset-based local DIC approach not only provides better results in measured displacements, but also demonstrates much higher computation efficiency. However, several special merits of FE-based global DIC approaches are indicated.
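    The core of subset-based local DIC is matching a reference subset against the target image by maximizing a correlation criterion; a minimal integer-pixel sketch using the zero-normalized cross-correlation (ZNCC) criterion (production DIC codes add shape functions, subpixel interpolation, and iterative optimization):

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-size subsets."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def match_subset(ref, tgt, top_left, size, search=5):
    """Integer-pixel displacement of one subset via exhaustive ZNCC search."""
    y0, x0 = top_left
    sub = ref[y0:y0 + size, x0:x0 + size]
    best_c, best_d = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if 0 <= y and 0 <= x and y + size <= tgt.shape[0] and x + size <= tgt.shape[1]:
                c = zncc(sub, tgt[y:y + size, x:x + size])
                if c > best_c:
                    best_c, best_d = c, (dy, dx)
    return best_d
```

    Repeating this independently at each calculation point is what makes the local approach straightforward and fast; the FE-based global approach instead solves for all nodal displacements at once with continuity enforced between elements.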

  17. Parallel content-based sub-image retrieval using hierarchical searching.

    Science.gov (United States)

    Yang, Lin; Qi, Xin; Xing, Fuyong; Kurc, Tahsin; Saltz, Joel; Foran, David J

    2014-04-01

    The capacity to systematically search through large image collections and ensembles and detect regions exhibiting similar morphological characteristics is central to pathology diagnosis. Unfortunately, the primary methods used to search digitized whole-slide histopathology specimens are slow and prone to inter- and intra-observer variability. The central objective of this research was to design, develop, and evaluate a content-based image retrieval system to assist doctors in quick and reliable content-based comparative search of similar prostate image patches. Given a representative image patch (sub-image), the algorithm returns a ranked ensemble of image patches from the entire whole-slide histology section that exhibit the most similar morphologic characteristics. This is accomplished by first performing a hierarchical search based on a newly developed hierarchical annular histogram (HAH). The set of candidates is then further refined in the second stage of processing by computing a color histogram from eight equally divided segments within each square annular bin defined in the original HAH. A demand-driven master-worker parallelization approach is employed to speed up the search procedure. Using this strategy, the query patch is broadcast to all worker processes. Each worker process is dynamically assigned an image by the master process to search for and return a ranked list of similar patches in that image. The algorithm was tested using digitized hematoxylin and eosin (H&E) stained prostate cancer specimens. We have achieved excellent image retrieval performance: the recall rate within the top 40 ranked retrieved image patches is ∼90%. Both the testing data and source code can be downloaded from http://pleiad.umdnj.edu/CBII/Bioinformatics/.
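    The first-stage descriptor can be approximated as follows: concentric square rings around the patch centre, each contributing one normalized intensity histogram. This is a simplified reading of the hierarchical annular histogram; the ring and bin counts are illustrative:

```python
import numpy as np

def annular_histogram(patch, rings=4, bins=8):
    """Concatenated per-ring intensity histograms of a square patch."""
    h, w = patch.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.indices(patch.shape)
    d = np.maximum(np.abs(yy - cy), np.abs(xx - cx))  # Chebyshev distance -> square rings
    ring = np.minimum((d / (d.max() + 1e-9) * rings).astype(int), rings - 1)
    feats = []
    for r in range(rings):
        hist, _ = np.histogram(patch[ring == r], bins=bins, range=(0, 256))
        feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)
```

    Candidate patches are then ranked by a distance (e.g. L1) between their descriptors, and the second stage refines the top candidates with finer per-segment color histograms.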

  18. Predicting standard-dose PET image from low-dose PET and multimodal MR images using mapping-based sparse representation

    International Nuclear Information System (INIS)

    Wang, Yan; Zhou, Jiliu; Zhang, Pei; An, Le; Ma, Guangkai; Kang, Jiayin; Shi, Feng; Shen, Dinggang; Wu, Xi; Lalush, David S; Lin, Weili

    2016-01-01

    Positron emission tomography (PET) has been widely used in clinical diagnosis for diseases and disorders. To obtain high-quality PET images requires a standard-dose radionuclide (tracer) injection into the human body, which inevitably increases risk of radiation exposure. One possible solution to this problem is to predict the standard-dose PET image from its low-dose counterpart and its corresponding multimodal magnetic resonance (MR) images. Inspired by the success of patch-based sparse representation (SR) in super-resolution image reconstruction, we propose a mapping-based SR (m-SR) framework for standard-dose PET image prediction. Compared with the conventional patch-based SR, our method uses a mapping strategy to ensure that the sparse coefficients, estimated from the multimodal MR images and low-dose PET image, can be applied directly to the prediction of standard-dose PET image. As the mapping between multimodal MR images (or low-dose PET image) and standard-dose PET images can be particularly complex, one step of mapping is often insufficient. To this end, an incremental refinement framework is therefore proposed. Specifically, the predicted standard-dose PET image is further mapped to the target standard-dose PET image, and then the SR is performed again to predict a new standard-dose PET image. This procedure can be repeated for prediction refinement of the iterations. Also, a patch selection based dictionary construction method is further used to speed up the prediction process. The proposed method is validated on a human brain dataset. The experimental results show that our method can outperform benchmark methods in both qualitative and quantitative measures. (paper)

  19. Entropy-Based Block Processing for Satellite Image Registration

    Directory of Open Access Journals (Sweden)

    Ikhyun Lee

    2012-11-01

    Full Text Available Image registration is an important task in many computer vision applications such as fusion systems, 3D shape recovery, and earth observation. Registering satellite images is particularly challenging and time-consuming due to limited resources and large image sizes. In such scenarios, state-of-the-art image registration methods such as the scale-invariant feature transform (SIFT) may not be suitable due to their high processing time. In this paper, we propose an algorithm based on block processing via entropy to register satellite images. The performance of the proposed method is evaluated using different real images. The comparative analysis shows that it not only reduces the processing time but also enhances the accuracy.
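    The block selection that drives such a method can be sketched as follows: compute the Shannon entropy of each image block and keep only high-entropy (information-rich) blocks for feature matching. The block size, bin count, and threshold here are illustrative:

```python
import numpy as np

def block_entropy(block, bins=32):
    """Shannon entropy (bits) of the intensity histogram of one block."""
    hist, _ = np.histogram(block, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def high_entropy_blocks(img, block=16, thresh=3.0):
    """Top-left corners of blocks informative enough to match on."""
    h, w = img.shape
    return [(y, x)
            for y in range(0, h - block + 1, block)
            for x in range(0, w - block + 1, block)
            if block_entropy(img[y:y + block, x:x + block]) > thresh]
```

    Flat regions (sea, cloud, shadow) score near zero entropy and are skipped, so the expensive matching step runs on far fewer blocks than a dense SIFT pass would.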

  20. Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance

    Science.gov (United States)

    2017-01-01

    This paper puts forward a novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image while preserving brightness and details well, compared with some other methods based on histogram equalization (HE). Firstly, the histogram of the input image is divided into four segments based on the mean and variance of the luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via the concatenation of the processed subhistograms. Lastly, the normalization method is applied to the intensity levels, and the processed image is integrated with the input image. 100 benchmark images from a public image database named CVG-UGR-Database are used for comparison with other state-of-the-art methods. The experimental results show that the algorithm can not only enhance image information effectively but also preserve the brightness and details of the original image well. PMID:29403529
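    The segmentation-and-equalization core can be sketched as follows: split the intensity range at m - s, m, and m + s (m: mean, s: standard deviation of luminance) and equalize each sub-histogram within its own range. The paper's bin-modification and final integration steps are omitted here:

```python
import numpy as np

def mvsihe_core(img):
    """Four-segment sub-histogram equalization driven by mean and variance."""
    m, s = img.mean(), img.std()
    edges = [img.min(), m - s, m, m + s, img.max() + 1]
    out = np.empty(img.shape, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (img >= lo) & (img < hi)
        if not mask.any():
            continue
        vals = img[mask]
        levels, counts = np.unique(vals, return_counts=True)
        cdf = np.cumsum(counts) / counts.sum()
        # equalize within [lo, hi-1] so segments never overlap
        lut = dict(zip(levels, lo + cdf * (hi - 1 - lo)))
        out[mask] = np.array([lut[v] for v in vals])
    return out
```

    Equalizing each segment within its own sub-range, rather than over the full dynamic range, is what preserves the overall brightness while still stretching local contrast.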

  3. Effectiveness of mindfulness-based cognitive therapy on clinical syndrome and body image in women with bulimia nervosa

    Directory of Open Access Journals (Sweden)

    Marjan Moradi

    2017-08-01

    Full Text Available Introduction: The purpose of the present research was to investigate the effectiveness of mindfulness-based cognitive therapy on clinical syndrome and body image in women with bulimia nervosa disorder. Materials and Methods: This is a quasi-experimental study with pre-test, post-test, and control group. The study population consisted of all women who were referred to two nutrition and diet therapy clinics in Mashhad between February and May 2015, among whom 30 women meeting the inclusion and exclusion criteria were selected as the sample using convenience sampling. The 30 participants were randomly assigned to two 15-person groups. The first group received mindfulness-based cognitive therapy, and the second group was the control group, which was placed on a waiting list. The Binge Eating questionnaire (Gormally, 1982), Fisher's image inventory, and the Depression Anxiety Stress Scale (DASS-21) were used to collect data. Data analysis was conducted using analysis of covariance in SPSS. Results: Based on the test results, mindfulness-based cognitive therapy significantly reduced depression (P

  4. Optical image encryption scheme with multiple light paths based on compressive ghost imaging

    Science.gov (United States)

    Zhu, Jinan; Yang, Xiulun; Meng, Xiangfeng; Wang, Yurong; Yin, Yongkai; Sun, Xiaowen; Dong, Guoyan

    2018-02-01

    An optical image encryption method with multiple light paths is proposed based on compressive ghost imaging. In the encryption process, M random phase-only masks (POMs) are generated by means of a logistic map algorithm, and these masks are then uploaded to a spatial light modulator (SLM). The collimated laser light is divided into several beams by beam splitters as it passes through the SLM, and the light beams illuminate the secret images, which have beforehand been converted into sparse images by discrete wavelet transform. Thus, the secret images are simultaneously encrypted into intensity vectors by ghost imaging. The distances between the SLM and the secret images vary and, together with the original POMs and the logistic map coefficient, serve as the main keys in the decryption process. In the proposed method, the storage space can be significantly decreased and the security of the system can be improved. The feasibility, security, and robustness of the method are further analysed through computer simulations.

  5. Task-based statistical image reconstruction for high-quality cone-beam CT

    Science.gov (United States)

    Dang, Hao; Webster Stayman, J.; Xu, Jennifer; Zbijewski, Wojciech; Sisniega, Alejandro; Mow, Michael; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.

    2017-11-01

    Task-based analysis of medical imaging performance underlies many ongoing efforts in the development of new imaging systems. In statistical image reconstruction, regularization is often formulated in terms that encourage smoothness and/or sharpness (e.g. a linear, quadratic, or Huber penalty) but without explicit formulation of the task. We propose an alternative regularization approach in which a spatially varying penalty is determined that maximizes task-based imaging performance at every location in a 3D image. We apply the method to model-based image reconstruction (MBIR, viz. penalized weighted least-squares, PWLS) in cone-beam CT (CBCT) of the head, focusing on the task of detecting a small, low-contrast intracranial hemorrhage (ICH), and we test the performance of the algorithm in the context of a recently developed CBCT prototype for point-of-care imaging of brain injury. Theoretical predictions of local spatial resolution and noise are computed via an optimization in which the regularization (specifically, the quadratic penalty strength) is allowed to vary throughout the image to maximize the local task-based detectability index (d′). Simulation studies and test-bench experiments were performed using an anthropomorphic head phantom. Three PWLS implementations were tested: a conventional (constant) penalty; a certainty-based penalty derived to enforce a constant point-spread function (PSF); and the task-based penalty derived to maximize local detectability at each location. Conventional (constant) regularization exhibited a fairly strong degree of spatial variation in d′, and the certainty-based method achieved a uniform PSF, but each exhibited a reduction in detectability compared to the task-based method, which improved detectability by up to ~15%. The improvement was strongest in areas of high attenuation (skull base), where the conventional and certainty-based methods tended to over-smooth the data. The task-driven reconstruction method presents a

  6. A simple method for detecting tumor in T2-weighted MRI brain images. An image-based analysis

    International Nuclear Information System (INIS)

    Lau, Phooi-Yee; Ozawa, Shinji

    2006-01-01

    The objective of this paper is to present a decision support system that uses a computer-based procedure to detect tumor blocks or lesions in digitized medical images. The authors developed a simple method with low computational effort to detect tumors in T2-weighted magnetic resonance imaging (MRI) brain images, focusing on the connection between spatial pixel values and tumor properties from four different perspectives: cases having minuscule differences between two images, using a fixed block-based method; tumor shape and size, using the edge and binary images; tumor properties based on texture values, using the spatial pixel intensity distribution controlled by a global discriminate value; and the occurrence of content-specific tumor pixels, for thresholded images. Measurements were performed on the following medical datasets: images at different time intervals, and images of different brain diseases on single and multiple slices. Experimental results revealed that our proposed technique incurred an overall error smaller than those of other proposed methods. In particular, the proposed method reduced false-alarm and missed-alarm errors, which demonstrates the effectiveness of our proposed technique. In this paper, we also present a prototype system, known as PCB, to evaluate the performance of the proposed methods by actual experiments, comparing detection accuracy and system performance. (author)
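    The first of the four perspectives, the fixed block-based comparison of two images, can be sketched as a per-block mean absolute difference, with blocks exceeding a global discriminate value flagged as candidates (the block size and threshold below are illustrative, not the paper's values):

```python
import numpy as np

def block_differences(img_a, img_b, block=8):
    """Mean absolute difference of corresponding fixed blocks."""
    h, w = img_a.shape
    return [((y, x),
             float(np.abs(img_a[y:y + block, x:x + block].astype(float)
                          - img_b[y:y + block, x:x + block]).mean()))
            for y in range(0, h - block + 1, block)
            for x in range(0, w - block + 1, block)]

def flag_blocks(img_a, img_b, block=8, thresh=10.0):
    """Blocks whose difference exceeds a global discriminate value."""
    return [pos for pos, d in block_differences(img_a, img_b, block) if d > thresh]
```

    Comparing, say, two scans of the same patient taken at different times, only the blocks where tissue has changed rise above the threshold.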

  7. COMPREHENSIVE COMPARISON OF TWO IMAGE-BASED POINT CLOUDS FROM AERIAL PHOTOS WITH AIRBORNE LIDAR FOR LARGE-SCALE MAPPING

    Directory of Open Access Journals (Sweden)

    E. Widyaningrum

    2017-09-01

    Full Text Available The integration of computer vision and photogrammetry to generate three-dimensional (3D) information from images has contributed to a wider use of point clouds for mapping purposes. Large-scale topographic map production requires 3D data with high precision and accuracy to represent the real conditions of the earth surface. Apart from LiDAR point clouds, image-based matching is also believed to have the ability to generate reliable and detailed point clouds from multiple-view images. In order to examine and analyze the possible fusion of LiDAR and image-based matching for large-scale detailed mapping purposes, point clouds are generated by Semi-Global Matching (SGM) and by Structure from Motion (SfM). In order to conduct a comprehensive and fair comparison, this study uses aerial photos and LiDAR data that were acquired at the same time. Qualitative and quantitative assessments have been applied to evaluate the LiDAR and image-matching point cloud data in terms of visualization, geometric accuracy, and classification results. The comparison results conclude that LiDAR is the best data source for large-scale mapping.

  8. Evaluation of imaging protocol for ECT based on CS image reconstruction algorithm

    International Nuclear Information System (INIS)

    Zhou Xiaolin; Yun Mingkai; Cao Xuexiang; Liu Shuangquan; Wang Lu; Huang Xianchao; Wei Long

    2014-01-01

    Single-photon emission computerized tomography and positron emission tomography are essential medical imaging tools, for which the number of sampling angles and the scan time should be carefully chosen to give a good compromise between image quality and radiopharmaceutical dose. In this study, the image quality of different acquisition protocols was evaluated by varying the number of angles and the count number per angle with Monte Carlo simulation data. It was shown that, when similar imaging counts were used, the acquisition counts mattered more than the number of sampling angles in emission computerized tomography. To further reduce the activity requirement and the scan duration, an iterative image reconstruction algorithm for limited-view and low-dose tomography based on compressed sensing theory has been developed. Total variation regularization was added to the reconstruction process to improve the signal-to-noise ratio and reduce artifacts caused by the limited-angle sampling. Maximization of the likelihood of the estimated image given the measured data and minimization of the total variation of the image are implemented alternately. By using this advanced algorithm, the reconstruction process is able to achieve image quality matching or exceeding that of normal scans with only half of the injected radiopharmaceutical dose. (authors)
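    The total-variation half of that alternation can be illustrated on a 1-D signal with a plain subgradient step. This is a toy stand-in for intuition only; the paper alternates TV minimization with maximum-likelihood EM updates on projection data:

```python
import numpy as np

def tv_step(x, y, lam=0.5, step=0.1):
    """One subgradient step on 0.5*||x - y||^2 + lam * TV(x)."""
    d = np.sign(np.diff(x))          # sign of forward differences
    g_tv = np.zeros_like(x)
    g_tv[1:] += d                    # d/dx_i of |x_i - x_{i-1}|
    g_tv[:-1] -= d                   # d/dx_i of |x_{i+1} - x_i|
    return x - step * ((x - y) + lam * g_tv)

def tv_denoise(y, iters=200, **kw):
    x = y.astype(float).copy()
    for _ in range(iters):
        x = tv_step(x, y, **kw)
    return x

def tv(x):
    return float(np.abs(np.diff(x)).sum())
```

    The fidelity term keeps the estimate close to the data while the TV term suppresses isolated spikes (noise and streak-like artifacts) yet largely preserves genuine edges, which is why TV suits limited-angle reconstruction.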

  9. Hyperspectral image compressing using wavelet-based method

    Science.gov (United States)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

    Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands, so each object present in the image can be identified from its spectral response. However, this kind of imaging produces a huge amount of data, which requires transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation effect on the object identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explored the spectral cross-correlation between different bands and proposed an adaptive band selection method to obtain the spectral bands that contain most of the information of the acquired hyperspectral data cube. The proposed method mainly consists of three steps: first, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the correlation matrix of the hyperspectral images between different bands; then a wavelet-based algorithm is applied to each subspace; finally, the PCA method is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested by using the ISODATA classification method.
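The first step (decomposing the cube into subspaces from the inter-band correlation matrix) can be sketched as below: bands are grouped together while adjacent-band correlation stays high, and a new subspace starts when it drops. The synthetic cube and the 0.9 threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def group_bands(cube, threshold=0.9):
    """cube: (bands, H, W). Return lists of band indices, one per subspace,
    splitting wherever correlation with the previous band falls below the
    threshold."""
    flat = cube.reshape(cube.shape[0], -1)
    groups, current = [], [0]
    for b in range(1, flat.shape[0]):
        r = np.corrcoef(flat[b - 1], flat[b])[0, 1]
        if r < threshold:
            groups.append(current)
            current = []
        current.append(b)
    groups.append(current)
    return groups

rng = np.random.default_rng(1)
base_a, base_b = rng.random((8, 8)), rng.random((8, 8))
# bands 0-2 resemble base_a, bands 3-5 resemble base_b
cube = np.stack([base_a + 0.01 * rng.random((8, 8)) for _ in range(3)]
                + [base_b + 0.01 * rng.random((8, 8)) for _ in range(3)])
print(group_bands(cube))  # → [[0, 1, 2], [3, 4, 5]]
```

In the full method, each group would then be wavelet-transformed and reduced with PCA.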

  10. Influence of Conductive and Semi-Conductive Nanoparticles on the Dielectric Response of Natural Ester-Based Nanofluid Insulation

    Directory of Open Access Journals (Sweden)

    M. Z. H. Makmud

    2018-02-01

    Full Text Available Nowadays, studies of alternative liquid insulation in high voltage apparatus have become increasingly important due to higher concerns regarding safety, sustainable resources and environmental friendliness. To fulfil this demand, natural ester has been extensively studied, and it may become a viable replacement for mineral oil in power transformers. In addition, the incorporation of nanoparticles has been remarkably effective in improving the characteristics of insulating oil. Although much extensive research has been carried out, there is no general agreement on how the dielectric response of the base oil is influenced by the addition of different amounts and conductivity types of nanoparticles. Therefore, in this work, a natural ester-based nanofluid was prepared by a two-step method using iron oxide (Fe2O3) and titanium dioxide (TiO2) as the conductive and semi-conductive nanoparticles, respectively. The concentration of each nanoparticle type was varied at 0.01, 0.1 and 1.0 g/L. The nanofluid samples were characterised by visual inspection, morphology and the dynamic light scattering (DLS) method before the dielectric response measurements were carried out for frequency-dependent spectroscopy (FDS), current-voltage (I-V) characteristics, and dielectric breakdown (BD) strength. The results show that the dielectric spectra and I-V curves of the iron oxide-based nanofluid increase with increasing iron oxide nanoparticle loading, while titanium dioxide exhibits a decreasing response. The dielectric BD strength is enhanced for both types of nanoparticles at 0.01 g/L concentration. However, increasing the amount of nanoparticles to 0.1 and 1.0 g/L led to a contrary dielectric BD response. Thus, the results indicate that the augmentation of conductive nanoparticles in the suspension can lead to overlapping mechanisms, which reduces the BD strength compared to the pristine material during electron injection under high electric fields.

  11. Novel Fingertip Image-Based Heart Rate Detection Methods for a Smartphone

    Directory of Open Access Journals (Sweden)

    Rifat Zaman

    2017-02-01

    Full Text Available We hypothesize that our fingertip image-based heart rate detection methods using a smartphone reliably detect the heart rhythm and rate of subjects. We propose fingertip curve line movement-based and fingertip image intensity-based detection methods, which both use the movement of successive fingertip images obtained from smartphone cameras. To investigate the performance of the proposed methods, the heart rhythm and rate they produce are compared to those of the conventional method, which is based on average image pixel intensity. Using a smartphone, we collected 120 s of pulsatile time series data from each recruited subject. The results show that the proposed fingertip curve line movement-based method detects heart rate with a maximum deviation of 0.0832 Hz and 0.124 Hz using time- and frequency-domain based estimation, respectively, compared to the conventional method. Moreover, the proposed fingertip image intensity-based method detects heart rate with a maximum deviation of 0.125 Hz and 0.03 Hz using time- and frequency-domain based estimation, respectively.
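At its simplest, the frequency-domain estimation mentioned above reduces to locating the dominant FFT peak of the mean-intensity time series. A minimal sketch, with a synthetic 120 s signal standing in for real fingertip video data:

```python
import numpy as np

def estimate_heart_rate(intensity, fs):
    """Estimate heart rate (Hz) from a fingertip mean-intensity series
    by locating the dominant FFT peak inside a plausible pulse band."""
    x = intensity - np.mean(intensity)            # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= 0.7) & (freqs <= 3.5)        # ~42-210 beats per minute
    return freqs[band][np.argmax(spectrum[band])]

# synthetic 120 s recording at 30 frames/s with a 1.2 Hz (72 bpm) pulse
rng = np.random.default_rng(0)
fs = 30.0
t = np.arange(0, 120, 1.0 / fs)
signal = 128 + 5 * np.sin(2 * np.pi * 1.2 * t) \
         + 0.5 * rng.standard_normal(t.size)
print(round(float(estimate_heart_rate(signal, fs)), 2))  # → 1.2
```

A 120 s window gives a frequency resolution of about 1/120 Hz, comfortably finer than the sub-0.13 Hz deviations reported in the abstract.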

  12. Fast image acquisition and processing on a TV camera-based portal imaging system

    International Nuclear Information System (INIS)

    Baier, K.; Meyer, J.

    2005-01-01

    The present paper describes the fast acquisition and processing of portal images directly from a TV camera-based portal imaging device (Siemens Beamview Plus™). This approach employs not only hardware and software included in the standard package installed by the manufacturer (in particular the frame grabber card and the Matrox™ Intellicam interpreter software), but also a software tool developed in-house for further processing and analysis of the images. The technical details are presented, including the source code for the Matrox™ interpreter script that enables the image capturing process. With this method it is possible to obtain raw images directly from the frame grabber card at an acquisition rate of 15 images per second. The original configuration by the manufacturer allows the acquisition of only a few images over the course of a treatment session. The approach has a wide range of applications, such as quality assurance (QA) of the radiation beam, real-time imaging, real-time verification of intensity-modulated radiation therapy (IMRT) fields, and generation of movies of the radiation field (fluoroscopy mode). (orig.)

  13. Illumination compensation in ground based hyperspectral imaging

    Science.gov (United States)

    Wendel, Alexander; Underwood, James

    2017-07-01

    Hyperspectral imaging has emerged as an important tool for analysing vegetation data in agricultural applications. Recently, low altitude and ground based hyperspectral imaging solutions have come to the fore, providing very high resolution data for mapping and studying large areas of crops in detail. However, these platforms introduce a unique set of challenges that need to be overcome to ensure consistent, accurate and timely acquisition of data. One particular problem is dealing with changes in environmental illumination while operating with natural light under cloud cover, which can have considerable effects on spectral shape. In the past this has commonly been addressed by imaging known reference targets at the time of data acquisition, by direct measurement of irradiance, or by atmospheric modelling. While capturing a reference panel continuously or very frequently allows accurate compensation for illumination changes, this is often not practical with ground based platforms, and impossible in aerial applications. This paper examines the use of an autonomous unmanned ground vehicle (UGV) to gather high resolution hyperspectral imaging data of crops under natural illumination. A process of illumination compensation is performed to extract the inherent reflectance properties of the crops, despite variable illumination. This work adapts a previously developed subspace model approach to reflectance and illumination recovery. Though tested on a ground vehicle in this paper, it is applicable to low altitude unmanned aerial hyperspectral imagery as well. The method uses occasional observations of reference panel training data from within the same or other datasets, which enables a practical field protocol that minimises in-field manual labour. This paper tests the new approach, comparing it against traditional methods, and presents several illumination compensation protocols for high volume ground based data collection based on the results.

  14. A novel multiphoton microscopy images segmentation method based on superpixel and watershed.

    Science.gov (United States)

    Wu, Weilin; Lin, Jinyong; Wang, Shu; Li, Yan; Liu, Mingyu; Liu, Gaoqiang; Cai, Jianyong; Chen, Guannan; Chen, Rong

    2017-04-01

    Multiphoton microscopy (MPM) imaging technique based on two-photon excited fluorescence (TPEF) and second harmonic generation (SHG) shows fantastic performance for biological imaging. The automatic segmentation of cellular architectural properties for biomedical diagnosis based on MPM images is still a challenging issue. A novel multiphoton microscopy image segmentation method based on superpixels and watershed (MSW) is presented here to provide good segmentation results for MPM images. The proposed method uses SLIC superpixels instead of pixels to analyze MPM images for the first time. The superpixel segmentation, based on a new distance metric combining spatial, CIE Lab color space and phase congruency features, divides the images into patches that keep the details of the cell boundaries. The superpixels are then used to reconstruct new images by taking the average value of each superpixel as the intensity level of its pixels. Finally, the marker-controlled watershed is utilized to segment the cell boundaries from the reconstructed images. Experimental results show that cellular boundaries can be extracted from MPM images by MSW with higher accuracy and robustness. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Simple and robust image-based autofocusing for digital microscopy.

    Science.gov (United States)

    Yazdanfar, Siavash; Kenny, Kevin B; Tasimi, Krenar; Corwin, Alex D; Dixon, Elizabeth L; Filkins, Robert J

    2008-06-09

    A simple image-based autofocusing scheme for digital microscopy is demonstrated that uses as few as two intermediate images to bring the sample into focus. The algorithm is adapted to a commercial inverted microscope and used to automate brightfield and fluorescence imaging of histopathology tissue sections.
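An image-based autofocus scheme of this kind needs a scalar sharpness score to rank candidate images. A common choice (an assumption here, since the abstract does not name its metric) is the variance of a discrete Laplacian response; a toy sketch comparing a sharp image against a simulated defocused copy:

```python
import numpy as np

def focus_measure(img):
    """Sharpness score: variance of a 5-point discrete Laplacian.
    Higher values indicate a better-focused image."""
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def blur(img):
    """Simple 3x3 box blur used here to simulate defocus."""
    out = np.zeros_like(img[1:-1, 1:-1])
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out = out + img[dy:dy + out.shape[0], dx:dx + out.shape[1]]
    return out / 9.0

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))       # stand-in for an in-focus frame
defocused = blur(sharp)            # stand-in for an out-of-focus frame
scores = [focus_measure(im) for im in (sharp, defocused)]
print(int(np.argmax(scores)))      # → 0 (the sharp image wins)
```

An autofocus loop would evaluate this score on a few intermediate images and move the stage toward the maximum.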

  16. MISTICA: Minimum Spanning Tree-Based Coarse Image Alignment for Microscopy Image Sequences.

    Science.gov (United States)

    Ray, Nilanjan; McArdle, Sara; Ley, Klaus; Acton, Scott T

    2016-11-01

    Registration of an in vivo microscopy image sequence is necessary in many significant studies, including studies of atherosclerosis in large arteries and the heart. Significant cardiac and respiratory motion of the living subject, occasional spells of focal-plane changes, drift in the field of view, and long image sequences are the principal roadblocks. The first step in such a registration process is the removal of translational and rotational motion; next, a deformable registration can be performed. The focus of our study here is to remove the translational and/or rigid-body motion, which we refer to as coarse alignment. The existing techniques for coarse alignment are unable to accommodate long sequences that often contain periods of poor-quality images (as quantified by a suitable perceptual measure). Many existing methods require the user to select an anchor image to which other images are registered. We propose a novel method for coarse image sequence alignment based on minimum weighted spanning trees (MISTICA) that overcomes these difficulties. The principal idea behind MISTICA is to reorder the images in shorter sequences, to demote nonconforming or poor-quality images in the registration process, and to mitigate error propagation. The anchor image is selected automatically, making MISTICA completely automated. MISTICA is computationally efficient. It has a single tuning parameter that determines graph width, which can also be eliminated by way of additional computation. MISTICA outperforms existing alignment methods when applied to microscopy image sequences of mouse arteries.
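The spanning-tree idea at the heart of MISTICA can be illustrated with a plain Prim's algorithm over a pairwise image-dissimilarity matrix. The 4×4 matrix below is made-up example data; real use would derive it from an image-similarity measure between frames.

```python
# Minimal Prim's algorithm: grow a minimum spanning tree over "images"
# (nodes) connected by dissimilarity weights, so that registration can
# proceed along cheap edges instead of against a single fixed anchor.
def minimum_spanning_tree(dist):
    n = len(dist)
    in_tree = {0}                 # start from an arbitrary node
    edges = []
    while len(in_tree) < n:
        # cheapest edge leaving the current tree
        u, v = min(((i, j) for i in in_tree for j in range(n)
                    if j not in in_tree),
                   key=lambda e: dist[e[0]][e[1]])
        edges.append((u, v))
        in_tree.add(v)
    return edges

dist = [[0, 2, 9, 4],             # made-up pairwise dissimilarities
        [2, 0, 3, 8],
        [9, 3, 0, 1],
        [4, 8, 1, 0]]
tree = minimum_spanning_tree(dist)
total = sum(dist[u][v] for u, v in tree)
print(tree, total)                # → [(0, 1), (1, 2), (2, 3)] 6
```

In MISTICA the tree additionally demotes poor-quality frames by assigning them large edge weights, so they end up as leaves rather than anchors.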

  17. Web Based Distributed Coastal Image Analysis System, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This project develops Web based distributed image analysis system processing the Moderate Resolution Imaging Spectroradiometer (MODIS) data to provide decision...

  18. Lossless Image Compression Based on Multiple-Tables Arithmetic Coding

    Directory of Open Access Journals (Sweden)

    Rung-Ching Chen

    2009-01-01

    Full Text Available This paper presents a lossless image compression method based on multiple-tables arithmetic coding (MTAC) to encode a gray-level image f. First, the MTAC method employs a median edge detector (MED) to reduce the entropy rate of f, since the gray levels of two adjacent pixels in an image are usually similar. A base-switching transformation approach is then used to reduce the spatial redundancy of the image, as the gray levels of some pixels in an image are more common than those of others. Finally, the arithmetic encoding method is applied to reduce the coding redundancy of the image. To promote high performance of the arithmetic encoding method, the MTAC method first classifies the data and then encodes each cluster of data using a distinct code table. The experimental results show that, in most cases, the MTAC method uses storage space more efficiently than lossless JPEG2000 does.
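The median edge detector (MED) predictor used in the first step has a standard closed form (it is the predictor of LOCO-I/JPEG-LS). The toy sketch below computes prediction residuals for a tiny image; the 3×3 pixel values are invented for illustration.

```python
# MED predictor: predict pixel x from its causal neighbours
# a (left), b (above) and c (above-left), then keep only residuals.
def med_predict(a, b, c):
    if c >= max(a, b):
        return min(a, b)     # edge detected above or to the left
    if c <= min(a, b):
        return max(a, b)
    return a + b - c         # smooth region: planar prediction

def residuals(img):
    """Prediction residuals for all pixels with a full causal context."""
    return [[img[y][x] - med_predict(img[y][x - 1],
                                     img[y - 1][x],
                                     img[y - 1][x - 1])
             for x in range(1, len(img[0]))]
            for y in range(1, len(img))]

img = [[10, 12, 13],
       [11, 12, 14],
       [11, 13, 15]]
print(residuals(img))  # → [[0, 1], [1, 1]]
```

The small residuals have a much lower entropy rate than the raw gray levels, which is what makes the subsequent arithmetic coding effective.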

  19. Digital Image Encryption Algorithm Design Based on Genetic Hyperchaos

    Directory of Open Access Journals (Sweden)

    Jian Wang

    2016-01-01

    Full Text Available In view of the fact that present chaotic image encryption algorithms based on scrambling (diffusion) are vulnerable to chosen-plaintext (chosen-ciphertext) attacks in the process of pixel-position scrambling, we put forward an image encryption algorithm based on a genetic hyperchaotic system. By introducing plaintext feedback into the scrambling process, the algorithm makes the scrambling effect depend on both the initial chaos sequence and the plaintext itself, realizing an organic fusion of the image features and the encryption algorithm. By introducing a plaintext feedback mechanism into the diffusion process, it improves plaintext sensitivity and the algorithm's resistance to chosen-plaintext and chosen-ciphertext attacks, while also making full use of the characteristics of the image information. Finally, experimental simulation and theoretical analysis show that our proposed algorithm not only effectively resists chosen-plaintext (chosen-ciphertext) attacks, statistical attacks, and information entropy attacks but also effectively improves the efficiency of image encryption, making it a relatively secure and effective way of image communication.
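The feedback idea in the diffusion stage can be sketched with a logistic map driving the keystream and each ciphertext byte chained into the next. This is a deliberately simplified illustration of the feedback mechanism, not the paper's genetic hyperchaotic cipher, and the parameters r and x0 are arbitrary choices.

```python
# Toy diffusion pass with ciphertext feedback: every output byte depends
# on the chaotic keystream AND on all previously encrypted pixels, so a
# one-pixel change in the plaintext propagates through the ciphertext.
def diffuse(pixels, x0=0.3141, r=3.99):
    x, out, prev = x0, [], 0
    for p in pixels:
        x = r * x * (1 - x)                  # logistic-map iteration
        k = int(x * 256) % 256               # keystream byte
        c = (p + k + prev) % 256             # feedback: ciphertext chains
        out.append(c)
        prev = c
    return out

def undiffuse(cipher, x0=0.3141, r=3.99):
    x, out, prev = x0, [], 0
    for c in cipher:
        x = r * x * (1 - x)
        k = int(x * 256) % 256
        out.append((c - k - prev) % 256)     # invert the chained update
        prev = c
    return out

plain = [10, 200, 30, 30, 30]
enc = diffuse(plain)
print(undiffuse(enc) == plain)  # → True: the diffusion is invertible
```

A real cipher would combine this with the position-scrambling stage and derive x0 and r from a key rather than hard-coding them.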

  20. Impact of Computed Tomography Image Quality on Image-Guided Radiation Therapy Based on Soft Tissue Registration

    International Nuclear Information System (INIS)

    Morrow, Natalya V.; Lawton, Colleen A.; Qi, X. Sharon; Li, X. Allen

    2012-01-01

    Purpose: In image-guided radiation therapy (IGRT), different computed tomography (CT) modalities with varying image quality are being used to correct for interfractional variations in patient set-up and anatomy changes, thereby reducing clinical target volume to the planning target volume (CTV-to-PTV) margins. We explore how CT image quality affects patient repositioning and CTV-to-PTV margins in soft tissue registration-based IGRT for prostate cancer patients. Methods and Materials: Four CT-based IGRT modalities used for prostate RT were considered in this study: MV fan beam CT (MVFBCT) (Tomotherapy), MV cone beam CT (MVCBCT) (MVision; Siemens), kV fan beam CT (kVFBCT) (CTVision, Siemens), and kV cone beam CT (kVCBCT) (Synergy; Elekta). Daily shifts were determined by manual registration to achieve the best soft tissue agreement. Effect of image quality on patient repositioning was determined by statistical analysis of daily shifts for 136 patients (34 per modality). Inter- and intraobserver variability of soft tissue registration was evaluated based on the registration of a representative scan for each CT modality with its corresponding planning scan. Results: Superior image quality with the kVFBCT resulted in reduced uncertainty in soft tissue registration during IGRT compared with other image modalities for IGRT. The largest interobserver variations of soft tissue registration were 1.1 mm, 2.5 mm, 2.6 mm, and 3.2 mm for kVFBCT, kVCBCT, MVFBCT, and MVCBCT, respectively. Conclusions: Image quality adversely affects the reproducibility of soft tissue-based registration for IGRT and necessitates a careful consideration of residual uncertainties in determining different CTV-to-PTV margins for IGRT using different image modalities.

  1. Impact of Computed Tomography Image Quality on Image-Guided Radiation Therapy Based on Soft Tissue Registration

    Energy Technology Data Exchange (ETDEWEB)

    Morrow, Natalya V.; Lawton, Colleen A. [Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin (United States); Qi, X. Sharon [Department of Radiation Oncology, University of Colorado Denver, Denver, Colorado (United States); Li, X. Allen, E-mail: ali@mcw.edu [Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin (United States)

    2012-04-01

    Purpose: In image-guided radiation therapy (IGRT), different computed tomography (CT) modalities with varying image quality are being used to correct for interfractional variations in patient set-up and anatomy changes, thereby reducing clinical target volume to the planning target volume (CTV-to-PTV) margins. We explore how CT image quality affects patient repositioning and CTV-to-PTV margins in soft tissue registration-based IGRT for prostate cancer patients. Methods and Materials: Four CT-based IGRT modalities used for prostate RT were considered in this study: MV fan beam CT (MVFBCT) (Tomotherapy), MV cone beam CT (MVCBCT) (MVision; Siemens), kV fan beam CT (kVFBCT) (CTVision, Siemens), and kV cone beam CT (kVCBCT) (Synergy; Elekta). Daily shifts were determined by manual registration to achieve the best soft tissue agreement. Effect of image quality on patient repositioning was determined by statistical analysis of daily shifts for 136 patients (34 per modality). Inter- and intraobserver variability of soft tissue registration was evaluated based on the registration of a representative scan for each CT modality with its corresponding planning scan. Results: Superior image quality with the kVFBCT resulted in reduced uncertainty in soft tissue registration during IGRT compared with other image modalities for IGRT. The largest interobserver variations of soft tissue registration were 1.1 mm, 2.5 mm, 2.6 mm, and 3.2 mm for kVFBCT, kVCBCT, MVFBCT, and MVCBCT, respectively. Conclusions: Image quality adversely affects the reproducibility of soft tissue-based registration for IGRT and necessitates a careful consideration of residual uncertainties in determining different CTV-to-PTV margins for IGRT using different image modalities.

  2. Space-based infrared sensors of space target imaging effect analysis

    Science.gov (United States)

    Dai, Huayu; Zhang, Yasheng; Zhou, Haijun; Zhao, Shuang

    2018-02-01

    The target identification problem is one of the core problems of a ballistic missile defense system, and infrared imaging simulation is an important means of target detection and recognition. This paper first established a space-based infrared sensor imaging model of a ballistic target as a point source above the planet's atmosphere; then, from the two aspects of space-based sensor camera parameters and target characteristics, it simulated the infrared imaging effect of the ballistic target and analyzed the imaging effects of camera line-of-sight jitter, camera system noise, and different wavebands on the target.

  3. Optimization of an Image-Based Talking Head System

    Directory of Open Access Journals (Sweden)

    Kang Liu

    2009-01-01

    Full Text Available This paper presents an image-based talking head system, which includes two parts: analysis and synthesis. The audiovisual analysis part creates a face model of a recorded human subject, which is composed of a personalized 3D mask as well as a large database of mouth images and their related information. The synthesis part generates natural looking facial animations from phonetic transcripts of text. A critical issue of the synthesis is unit selection, which selects and concatenates appropriate mouth images from the database such that they match the spoken words of the talking head. Selection is based on lip synchronization and the similarity of consecutive images. The unit selection is refined in this paper, and Pareto optimization is used to train it. Experimental results of subjective tests show that most people cannot distinguish our facial animations from real videos.

  4. A framework of region-based dynamic image fusion

    Institute of Scientific and Technical Information of China (English)

    WANG Zhong-hua; QIN Zheng; LIU Yu

    2007-01-01

    A new framework of region-based dynamic image fusion is proposed. First, the technique of target detection is applied to dynamic images (image sequences) to segment images into different target and background regions. Then different fusion rules are employed in different regions so that the target information is preserved as much as possible. In addition, a steerable non-separable wavelet frame transform is used in the process of multi-resolution analysis, so the system achieves the favorable characteristics of orientation selectivity and shift invariance. Compared with other image fusion methods, experimental results showed that the proposed method has better capabilities of target recognition and preserves clear background information.

  5. Hydraulic and thermal conduction phenomena in soils at the particle-scale: Towards realistic FEM simulations

    International Nuclear Information System (INIS)

    Narsilio, G A; Yun, T S; Kress, J; Evans, T M

    2010-01-01

    This paper summarizes a method to characterize conduction properties in soils at the particle scale. The method sets the basis for an alternative way to estimate conduction parameters such as thermal conductivity and hydraulic conductivity, with potential application to hard-to-obtain samples, where traditional experimental testing on large-enough specimens becomes much more expensive. The technique is exemplified using 3D synthetic grain packings generated with discrete element methods, from which 3D granular images are constructed. The images are then imported into finite element analyses to solve the corresponding governing partial differential equations of hydraulic and thermal conduction. High performance computing is employed to meet the demanding 3D numerical calculations over the complex geometrical domains. The effects of void ratio and inter-particle contacts on hydraulic and thermal conduction are explored. Laboratory measurements support the numerically obtained results and validate the viability of the new methods used herein. The integration of imaging with rigorous numerical simulations at the pore scale also enables fundamental observation of the particle-scale mechanisms behind macro-scale manifestations.

  6. Bayer image parallel decoding based on GPU

    Science.gov (United States)

    Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua

    2012-11-01

    In a photoelectrical tracking system, the Bayer image is traditionally decompressed on the CPU. However, this is too slow when the images become large, for example, 2K×2K×16 bit. In order to accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA Graphics Processing Units (GPUs) that support the CUDA architecture. The decoding procedure can be divided into three parts: the first is a serial part, the second is a task-parallel part, and the last is a data-parallel part including inverse quantization, the inverse discrete wavelet transform (IDWT), and image post-processing. To reduce the execution time, the task-parallel part is optimized with OpenMP techniques. The data-parallel part gains efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared memory access optimization, coalesced memory access optimization and texture memory optimization. In particular, the IDWT can be significantly sped up by rewriting the 2D (two-dimensional) serial IDWT as a 1D parallel IDWT. In experiments with a 1K×1K×16 bit Bayer image, the data-parallel part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed. The experimental results show that it achieves a 3 to 5 times speed increase compared to the serial CPU method.
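The key structural fact exploited by the 1D-parallel rewrite is that the 2D IDWT factors into independent 1D passes over rows and columns, each of which can run in parallel. This can be seen with a one-level Haar transform; a toy NumPy sketch of the separability, not the paper's GPU kernel:

```python
import numpy as np

def haar_1d(x):
    a = (x[0::2] + x[1::2]) / 2.0      # approximation coefficients
    d = (x[0::2] - x[1::2]) / 2.0      # detail coefficients
    return np.concatenate([a, d])

def ihaar_1d(y):
    n = y.size // 2
    a, d = y[:n], y[n:]
    x = np.empty(y.size)
    x[0::2], x[1::2] = a + d, a - d    # exact inverse of haar_1d
    return x

def haar_2d(img):
    """Forward 2D transform: independent 1D passes over rows, then columns."""
    rows = np.apply_along_axis(haar_1d, 1, img)
    return np.apply_along_axis(haar_1d, 0, rows)

def ihaar_2d(coef):
    """Inverse 2D transform: undo columns, then rows (reverse order)."""
    cols = np.apply_along_axis(ihaar_1d, 0, coef)
    return np.apply_along_axis(ihaar_1d, 1, cols)

img = np.arange(16, dtype=float).reshape(4, 4)
print(np.allclose(ihaar_2d(haar_2d(img)), img))  # → True: perfect reconstruction
```

On a GPU, each row (and then each column) pass maps naturally onto one thread block, which is exactly the parallelism the abstract describes.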

  7. A New Images Hiding Scheme Based on Chaotic Sequences

    Institute of Scientific and Technical Information of China (English)

    LIU Nian-sheng; GUO Dong-hui; WU Bo-xi; Parr G

    2005-01-01

    We propose a data hiding technique for still images. This technique is based on chaotic sequences in the transform domain of the covert image. We use different chaotic random sequences, multiplied by multiple sensitive images respectively, to spread the spectrum of the sensitive images. The multiple sensitive images are hidden in a covert image as a form of noise. The results of theoretical analysis and computer simulation show that the new hiding technique has better properties, with high security, imperceptibility and capacity for hidden information, in comparison with conventional schemes such as LSB (Least Significant Bit).
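A minimal sketch of the spread-spectrum hiding idea: a bipolar chip sequence generated by a logistic map spreads each secret bit over many cover samples at low amplitude, and correlating against the same sequence recovers the bits. All parameters are illustrative assumptions, and the sketch is non-blind (extraction uses the cover), unlike a full transform-domain scheme.

```python
# Chaotic spread-spectrum hiding over a 1D "cover signal".
def logistic_chips(n, x0=0.6, r=3.99):
    """Bipolar (+1/-1) chip sequence from a logistic map."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(1.0 if x > 0.5 else -1.0)
    return seq

def embed(cover, bits, chips_per_bit=64, alpha=2.0):
    chips = logistic_chips(len(bits) * chips_per_bit)
    return [c + alpha * chips[i] * (1 if bits[i // chips_per_bit] else -1)
            for i, c in enumerate(cover)]

def extract(stego, cover, n_bits, chips_per_bit=64):
    chips = logistic_chips(n_bits * chips_per_bit)
    bits = []
    for b in range(n_bits):
        # correlate the residual with the chip sequence for this bit
        s = sum((stego[i] - cover[i]) * chips[i]
                for i in range(b * chips_per_bit, (b + 1) * chips_per_bit))
        bits.append(1 if s > 0 else 0)
    return bits

cover = [(i * 37) % 251 for i in range(256)]      # stand-in cover signal
bits = [1, 0, 1, 1]
stego = embed(cover, bits)
print(extract(stego, cover, len(bits)) == bits)   # → True
```

Because the chip sequence is key-dependent (via x0 and r), an attacker without the key sees only low-amplitude pseudo-noise, which is the security argument the abstract makes against plain LSB embedding.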

  8. Research of image retrieval technology based on color feature

    Science.gov (United States)

    Fu, Yanjun; Jiang, Guangyu; Chen, Fengying

    2009-10-01

    Recently, with the development of communication and computer technology and the improvement of storage technology and the capability of digital imaging equipment, more image resources are available to us than ever, and thus a way to locate the proper image quickly and accurately is wanted. The early method was to set up keywords for searching in a database, but this becomes very difficult as the number of pictures to search grows. In order to overcome the limitations of the traditional searching method, content-based image retrieval technology arose; it is now a hot research subject, and color image retrieval is an important part of it. Color is the most important feature for color image retrieval. Three key questions on how to make use of the color characteristic are discussed in the paper: the representation of color, the extraction of color features, and the measurement of similarity based on color. On this basis, the extraction of the color histogram feature is discussed in particular. Considering the advantages and disadvantages of the overall histogram and the partition histogram, a new method based on the partition-overall histogram is proposed. Its basic idea is to divide the image space according to a certain strategy and then calculate the color histogram of each block as the color feature of that block. Users choose the blocks that contain important spatial information and assign them weights. The system calculates the distance between the corresponding user-chosen blocks; the remaining blocks are merged into partial overall histograms, whose distance is also calculated. All the distances are then accumulated as the real distance between the two pictures. The partition-overall histogram comprehensively utilizes the advantages of the two methods above: choosing blocks makes the feature contain more spatial information, which can improve retrieval performance.
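The distance computation described above (per-block distances for user-chosen blocks, plus a merged overall histogram for the rest) can be sketched as follows. The block grid, bin count, toy grayscale images, and uniform weighting are assumptions for illustration; the paper works with color histograms and user-assigned weights.

```python
import numpy as np

def block_histograms(img, blocks=2, bins=8):
    """Normalised intensity histogram for each cell of a blocks×blocks grid."""
    h, w = img.shape
    bh, bw = h // blocks, w // blocks
    hists = []
    for by in range(blocks):
        for bx in range(blocks):
            patch = img[by * bh:(by + 1) * bh, bx * bw:(bx + 1) * bw]
            hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
            hists.append(hist / patch.size)
    return hists

def partition_overall_distance(a, b, chosen=(0,)):
    ha, hb = block_histograms(a), block_histograms(b)
    # per-block L1 distance for the user-chosen (spatially important) blocks
    d = sum(np.abs(ha[i] - hb[i]).sum() for i in chosen)
    # merge the remaining blocks into one overall histogram each
    rest = [i for i in range(len(ha)) if i not in chosen]
    oa = sum(ha[i] for i in rest) / len(rest)
    ob = sum(hb[i] for i in rest) / len(rest)
    return d + np.abs(oa - ob).sum()

rng = np.random.default_rng(2)
img1 = rng.integers(0, 256, (32, 32))
img2 = img1.copy()
img3 = rng.integers(0, 256, (32, 32))
print(partition_overall_distance(img1, img2),
      partition_overall_distance(img1, img3) > 0.0)
```

Identical images yield distance zero, while block 0 retains spatial sensitivity that a single global histogram would lose.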

  9. Improvements and artifact analysis in conductivity images using multiple internal electrodes

    International Nuclear Information System (INIS)

    Farooq, Adnan; McEwan, Alistair Lee; Woo, Eung Je; Oh, Tong In; Tehrani, Joubin Nasehi

    2014-01-01

    Electrical impedance tomography is an attractive functional imaging method. It is currently limited in resolution and sensitivity due to the complexity of the inverse problem and the safety limits on introducing current. Recently, internal electrodes have been proposed for some clinical situations such as intensive care or RF ablation. This paper addresses the question of how much benefit one or more internal electrodes provide, since their use is invasive. Internal electrodes should be able to reduce the effect of insulating boundaries such as fat and bone and provide improved internal sensitivity. We found there was a measurable benefit with increased numbers of internal electrodes in saline tanks of cylindrical and complex shape with up to two insulating boundary gel layers modeling fat and muscle. The internal electrodes provide increased sensitivity to internal changes, thereby increasing the amplitude response and improving resolution. However, they also present the additional challenge of increased sensitivity to position and modeling errors. In comparison with previous work that used point sources for the internal electrodes, we found that it is important to use a detailed mesh of the internal electrodes, with these voxels assigned the conductivity of the internal electrode and its associated holder. A study of different internal electrode materials found that it is optimal to use a conductivity similar to the background. In the tank with a complex shape, the additional internal electrodes provided more robustness in a ventilation model of the lungs via air-filled balloons. (paper)

  10. Graphene based metamaterials for terahertz cloaking and subwavelength imaging

    Science.gov (United States)

    Forouzmand, Seyedali

    Graphene is a two-dimensional carbon crystal that has become one of the most controversial topics of research in the last few years. The intense interest in graphene stems from recent demonstrations of its potentially revolutionary electromagnetic applications -- including negative refraction, subdiffraction imaging, and even invisibility -- which have suggested a wide range of new devices for communications, sensing, and biomedicine. In addition, it has been shown that graphene is amenable to unique patterning schemes such as cutting, bending, folding, and fusion that are predicted to lead to interesting properties. One recently proposed application of graphene is in engineering the scattering properties of objects, which may be leveraged in applications such as radar-cross-section management and stealth, where it may be required to make one object look like another or to render an object completely invisible. We present the analytical formulation for the analysis of electromagnetic interaction with a finite conducting wedge covered with a cylindrically shaped nanostructured graphene metasurface, resulting in the cancellation of the dominant scattering mode for all incident and all observation angles. Following this idea, the cylindrical graphene metasurface is utilized for cloaking of several concentric finite conducting wedges. In addition, a wedge-shaped metasurface is proposed as an alternative approach for cloaking of finite wedges. The resolution of conventional imaging lenses is restricted by the natural diffraction limit. Artificially engineered metamaterials now offer the possibility of creating a superlens that overcomes this restriction. We demonstrate that a wire medium (WM) slab loaded with graphene sheets enables the enhancement of the near field for subwavelength imaging at terahertz (THz) frequencies. The analysis is based on the nonlocal homogenization model for WM with an additional boundary condition.

  11. Content Based Radiographic Images Indexing and Retrieval Using Pattern Orientation Histogram

    Directory of Open Access Journals (Sweden)

    Abolfazl Lakdashti

    2008-06-01

    Full Text Available Introduction: Content Based Image Retrieval (CBIR) is a method of image searching and retrieval in a database. In medical applications, CBIR is a tool used by physicians to compare previous and current medical images associated with patients' pathological conditions. As the volume of pictorial information stored in medical image databases continues to grow, efficient image indexing and retrieval is increasingly becoming a necessity. Materials and Methods: This paper presents a new content based radiographic image retrieval approach based on a histogram of pattern orientations, namely the pattern orientation histogram (POH). POH represents the spatial distribution of five different pattern orientations: vertical, horizontal, diagonal down/left, diagonal down/right and non-orientation. In this method, a given image is first divided into image-blocks and the frequency of each type of pattern is determined in each image-block. Then, local pattern histograms for each of these image-blocks are computed. Results: The method was compared to two well-known texture-based image retrieval methods: Tamura and Edge Histogram Descriptors (EHD) in the MPEG-7 standard. Experimental results based on the 10000-image IRMA radiography dataset demonstrate that POH provides better precision and recall rates compared to Tamura and EHD. For some images, the recall and precision rates obtained by POH are, respectively, 48% and 18% better than the best of the two above-mentioned methods. Discussion and Conclusion: Since we exploit the absolute location of the pattern in the image as well as its global composition, the proposed matching method can retrieve semantically similar medical images.
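
    The block-wise orientation counting described above can be sketched in a few lines. This is a hedged illustration rather than the authors' implementation: the five orientation classes are approximated here from gradient directions (binned at 45° intervals, with low-gradient pixels counted as non-oriented), whereas the paper derives them from local pixel patterns; the block size and flatness threshold are arbitrary choices.

```python
import numpy as np

def pattern_orientation_histogram(img, block=16, mag_thresh=10.0):
    """Per-block histograms of 5 orientation classes:
    0=horizontal, 1=diagonal, 2=vertical, 3=anti-diagonal, 4=non-oriented."""
    img = img.astype(float)
    gy, gx = np.gradient(img)                      # simple gradient stand-in
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0   # orientation in [0, 180)
    bins = (((ang + 22.5) // 45).astype(int)) % 4  # quantize to 4 directions
    bins[mag < mag_thresh] = 4                     # flat pixels: non-oriented
    h, w = img.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            blk = bins[y:y + block, x:x + block]
            feats.append(np.bincount(blk.ravel(), minlength=5) / blk.size)
    return np.concatenate(feats)                   # concatenated local histograms
```

    Concatenating the local histograms (rather than pooling them globally) is what preserves the absolute location of patterns that the conclusion emphasizes.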

  12. Physics-based deformable organisms for medical image analysis

    Science.gov (United States)

    Hamarneh, Ghassan; McIntosh, Chris

    2005-04-01

    Previously, "Deformable organisms" were introduced as a novel paradigm for medical image analysis that uses artificial life modelling concepts. Deformable organisms were designed to complement the classical bottom-up deformable models methodologies (geometrical and physical layers), with top-down intelligent deformation control mechanisms (behavioral and cognitive layers). However, a true physical layer was absent and in order to complete medical image segmentation tasks, deformable organisms relied on pure geometry-based shape deformations guided by sensory data, prior structural knowledge, and expert-generated schedules of behaviors. In this paper we introduce the use of physics-based shape deformations within the deformable organisms framework yielding additional robustness by allowing intuitive real-time user guidance and interaction when necessary. We present the results of applying our physics-based deformable organisms, with an underlying dynamic spring-mass mesh model, to segmenting and labelling the corpus callosum in 2D midsagittal magnetic resonance images.
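
    The underlying dynamic spring-mass mesh can be illustrated with a minimal explicit-Euler time step. This is a generic sketch, not the authors' model: the stiffness, damping, and step-size constants are placeholders, and the real system couples such physics with the behavioral and cognitive layers and with user interaction.

```python
import numpy as np

def spring_mass_step(pos, vel, edges, rest, k=1.0, damp=0.05, dt=0.01):
    """One damped explicit-Euler step of a 2D spring-mass mesh.
    pos, vel: (N, 2) arrays; edges: list of (i, j) index pairs;
    rest: rest length per edge."""
    pos = np.asarray(pos, float)
    vel = np.asarray(vel, float)
    force = np.zeros_like(pos)
    for (i, j), L0 in zip(edges, rest):
        d = pos[j] - pos[i]
        L = np.linalg.norm(d)
        f = k * (L - L0) * d / max(L, 1e-12)  # Hooke force along the edge
        force[i] += f
        force[j] -= f
    vel = (1.0 - damp) * vel + dt * force     # damping keeps the mesh stable
    return pos + dt * vel, vel
```

    External forces from image data (sensory input) or user drags would simply be added to `force` before the velocity update, which is what makes this formulation attractive for real-time guidance.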

  13. Real-time Image Processing for Microscopy-based Label-free Imaging Flow Cytometry in a Microfluidic Chip.

    Science.gov (United States)

    Heo, Young Jin; Lee, Donghyeon; Kang, Junsu; Lee, Keondo; Chung, Wan Kyun

    2017-09-14

    Imaging flow cytometry (IFC) is an emerging technology that acquires single-cell images at high throughput for analysis of a cell population. The rich information that comes from the high sensitivity and spatial resolution of single-cell microscopic images is beneficial for single-cell analysis in various biological applications. In this paper, we present a fast image-processing pipeline (R-MOD: Real-time Moving Object Detector) based on deep learning for high-throughput microscopy-based label-free IFC in a microfluidic chip. The R-MOD pipeline acquires all single-cell images of cells in flow and identifies the acquired images in real time with minimal hardware, consisting of a microscope and a high-speed camera. Experiments show that R-MOD is fast and accurate (500 fps and 93.3% mAP) and is expected to be a powerful tool for biomedical and clinical applications.

  14. Towards a framework for agent-based image analysis of remote-sensing data.

    Science.gov (United States)

    Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera

    2015-04-03

    Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects' properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets have a high potential for transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which has been investigated extensively in software engineering. The aims of such an integration are (a) autonomously adapting rule sets and (b) image objects that can adapt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA).

  15. Supervised learning of tools for content-based search of image databases

    Science.gov (United States)

    Delanoy, Richard L.

    1996-03-01

    A computer environment, called the Toolkit for Image Mining (TIM), is being developed with the goal of enabling users with diverse interests and varied computer skills to create search tools for content-based image retrieval and other pattern matching tasks. Search tools are generated using a simple paradigm of supervised learning that is based on the user pointing at mistakes of classification made by the current search tool. As mistakes are identified, a learning algorithm uses the identified mistakes to build up a model of the user's intentions, construct a new search tool, apply the search tool to a test image, display the match results as feedback to the user, and accept new inputs from the user. Search tools are constructed in the form of functional templates, which are generalized matched filters capable of knowledge-based image processing. The ability of this system to learn the user's intentions from experience contrasts with other existing approaches to content-based image retrieval that base searches on the characteristics of a single input example or on a predefined and semantically-constrained textual query. Currently, TIM is capable of learning spectral and textural patterns, but should be adaptable to the learning of shapes, as well. Possible applications of TIM include not only content-based image retrieval, but also quantitative image analysis, the generation of metadata for annotating images, data prioritization or data reduction in bandwidth-limited situations, and the construction of components for larger, more complex computer vision algorithms.
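
    The point-at-mistakes learning loop can be caricatured with a toy classifier. Everything below is an assumption for illustration: TIM actually builds functional templates (generalized matched filters), whereas this sketch simply stores the flagged mistakes as exemplars and classifies by nearest class mean.

```python
import numpy as np

class MistakeDrivenSearchTool:
    """Toy stand-in for TIM's supervised learning: each flagged mistake
    becomes an exemplar, and the search tool is a nearest-class-mean rule."""

    def __init__(self):
        self.pos, self.neg = [], []

    def flag_mistake(self, features, should_match):
        # The user points at a misclassified item; keep its feature vector.
        (self.pos if should_match else self.neg).append(np.asarray(features, float))

    def classify(self, features):
        if not self.pos or not self.neg:
            return bool(self.pos)  # degenerate case: no counterexamples yet
        f = np.asarray(features, float)
        dp = np.linalg.norm(f - np.mean(self.pos, axis=0))
        dn = np.linalg.norm(f - np.mean(self.neg, axis=0))
        return bool(dp < dn)
```

    The feedback loop described in the abstract alternates `classify` over a test image with `flag_mistake` calls on whatever the user points at.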

  16. Image processing system design for microcantilever-based optical readout infrared arrays

    Science.gov (United States)

    Tong, Qiang; Dong, Liquan; Zhao, Yuejin; Gong, Cheng; Liu, Xiaohua; Yu, Xiaomei; Yang, Lei; Liu, Weiyu

    2012-12-01

    Compared with traditional infrared imaging technology, the new optical-readout uncooled infrared imaging technology based on MEMS has many advantages, such as low cost, small size, and simple fabrication. In addition, theory predicts that the technology offers high thermal detection sensitivity, so it has very broad application prospects in the field of high-performance infrared detection. This paper mainly focuses on an image capturing and processing system for this new optical-readout uncooled infrared imaging technology based on MEMS. The image capturing and processing system consists of software and hardware. We build the image processing core hardware platform on TI's high-performance DSP chip, the TMS320DM642, and then design the image capturing board around the MT9P031, Micron's high-frame-rate, low-power-consumption CMOS sensor. Finally, we use Intel's LXT971A network transceiver to design the network output board. The software system is built on the real-time operating system DSP/BIOS. We design the video capture driver program based on TI's class-mini driver, and the network output program based on the NDK kit, for image capturing, processing and transmission. Experiments show that the system offers high capture resolution and fast processing speed. The network transmission speed is up to 100 Mbps.

  17. Content Based Image Matching for Planetary Science

    Science.gov (United States)

    Deans, M. C.; Meyer, C.

    2006-12-01

    Planetary missions generate large volumes of data. With the MER rovers still functioning on Mars, PDS contains over 7200 released images from the Microscopic Imagers alone. These data products are only searchable by keys such as the Sol, spacecraft clock, or rover motion counter index, with little connection to the semantic content of the images. We have developed a method for matching images based on the visual textures in images. For every image in a database, a series of filters compute the image response to localized frequencies and orientations. Filter responses are turned into a low dimensional descriptor vector, generating a 37 dimensional fingerprint. For images such as the MER MI, this represents a compression ratio of 99.9965% (the fingerprint is approximately 0.0035% the size of the original image). At query time, fingerprints are quickly matched to find images with similar appearance. Image databases containing several thousand images are preprocessed offline in a matter of hours. Image matches from the database are found in a matter of seconds. We have demonstrated this image matching technique using three sources of data. The first database consists of 7200 images from the MER Microscopic Imager. The second database consists of 3500 images from the Narrow Angle Mars Orbital Camera (MOC-NA), which were cropped into 1024×1024 sub-images for consistency. The third database consists of 7500 scanned archival photos from the Apollo Metric Camera. Example query results from all three data sources are shown. We have also carried out user tests to evaluate matching performance by hand labeling results. User tests verify approximately 20% false positive rate for the top 14 results for MOC NA and MER MI data. This means typically 10 to 12 results out of 14 match the query image sufficiently. This represents a powerful search tool for databases of thousands of images where the a priori match probability for an image might be less than 1%. 
Qualitatively, correct
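
    The fingerprint-and-match idea above can be sketched as follows. This is an illustrative stand-in, not the authors' 37-dimensional descriptor: oriented filter responses are approximated with directional pixel differences at a few scales (giving 24 dimensions here), and query matching is a brute-force nearest-neighbour search over the stored fingerprints.

```python
import numpy as np

def texture_fingerprint(img, scales=(1, 2, 4)):
    """Low-dimensional texture descriptor: mean |response| and std of
    directional difference filters at several scales and 4 orientations."""
    img = img.astype(float)
    shifts = [(0, 1), (1, 0), (1, 1), (1, -1)]  # 4 orientations
    vec = []
    for s in scales:
        for dy, dx in shifts:
            resp = img - np.roll(np.roll(img, dy * s, axis=0), dx * s, axis=1)
            vec.extend([np.abs(resp).mean(), resp.std()])
    return np.array(vec)  # 3 scales x 4 orientations x 2 stats = 24-D

def query(db_fingerprints, query_fp, k=5):
    """Return indices of the k database images closest to the query."""
    dist = np.linalg.norm(db_fingerprints - query_fp, axis=1)
    return np.argsort(dist)[:k]
```

    As in the abstract, the expensive filtering runs once per image offline; a query then reduces to distance computations over short vectors, which is why matches return in seconds even for databases of thousands of images.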

  18. Image based Monte Carlo modeling for computational phantom

    International Nuclear Information System (INIS)

    Cheng, M.; Wang, W.; Zhao, K.; Fan, Y.; Long, P.; Wu, Y.

    2013-01-01

    Full text of the publication follows. The evaluation of the effects of ionizing radiation and the risk of radiation exposure to the human body has become one of the most important issues in the radiation protection and radiotherapy fields, helping to avoid unnecessary radiation and decrease harm to the human body. In order to accurately evaluate the dose to the human body, it is necessary to construct more realistic computational phantoms. However, manual description and verification of models for Monte Carlo (MC) simulation are very tedious, error-prone and time-consuming. In addition, it is difficult to locate and fix geometry errors, and difficult to describe material information and assign it to cells. MCAM (CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport Simulation) was developed as an interface program to achieve both CAD- and image-based automatic modeling. The advanced version (Version 6) of MCAM can achieve automatic conversion from CT/segmented sectioned images to computational phantoms such as MCNP models. The image-based automatic modeling program (MCAM 6.0) has been tested with several medical images and sectioned images, and it has been applied in the construction of Rad-HUMAN. Following manual segmentation and 3D reconstruction, a whole-body computational phantom of a Chinese adult female, called Rad-HUMAN, was created using MCAM 6.0 from sectioned images of a Chinese visible human dataset. Rad-HUMAN contains 46 organs/tissues, which faithfully represent the average anatomical characteristics of the Chinese female. The dose conversion coefficients (Dt/Ka) from kerma free-in-air to absorbed dose of Rad-HUMAN were calculated. Rad-HUMAN can be applied to predict and evaluate dose distributions in a Treatment Planning System (TPS), as well as radiation exposure of the human body in radiation protection. (authors)

  19. Quantum image pseudocolor coding based on the density-stratified method

    Science.gov (United States)

    Jiang, Nan; Wu, Wenya; Wang, Luo; Zhao, Na

    2015-05-01

    Pseudocolor processing is a branch of image enhancement. It dyes grayscale images into color images to make them more appealing or to highlight certain parts of the image. This paper proposes a quantum image pseudocolor coding scheme based on the density-stratified method, which defines a colormap and maps density values from gray to color in parallel according to that colormap. Firstly, two data structures are reviewed or proposed: the quantum image representation GQIR and the quantum colormap representation QCR. Then, the quantum density-stratified algorithm is presented. Based on these, the quantum realization in the form of circuits is given. The main advantages of the quantum version of pseudocolor processing over the classical approach are that it needs less memory and can speed up the computation. Two kinds of examples help to describe the scheme further. Finally, future work is discussed.
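
    A classical analogue of the density-stratified mapping may help fix ideas. The quantum scheme performs this lookup on GQIR/QCR registers in parallel; the sketch below only shows the stratification rule itself, with an arbitrary example colormap.

```python
import numpy as np

def density_stratified_pseudocolor(gray, colormap):
    """Cut the 8-bit gray range into len(colormap) equal strata and
    dye every pixel with its stratum's RGB color."""
    n = len(colormap)
    # astype(int) avoids uint8 overflow when multiplying by n
    strata = np.clip((gray.astype(int) * n) // 256, 0, n - 1)
    return np.asarray(colormap, dtype=np.uint8)[strata]
```

    With a 4-entry colormap, for example, gray levels 0-63 fall into the first stratum, 64-127 into the second, and so on; the quantum circuit realizes exactly this stratum lookup, but for all pixels in superposition.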

  20. Electrospun poly(lactic acid) based conducting nanofibrous networks

    International Nuclear Information System (INIS)

    Patra, S N; Bhattacharyya, D; Ray, S; Easteal, A J

    2009-01-01

    Multi-functionalised micro/nanostructures of conducting polymers in neat or blended forms have received much attention because of their unique properties and technological applications in electrical, magnetic and biomedical devices. Biopolymer-based conducting fibrous mats are of special interest for tissue engineering because they not only physically support tissue growth but also are electrically conductive, and thus are able to stimulate specific cell functions or trigger cell responses. They are effective for carrying current in biological environments and can thus be considered for delivering local electrical stimuli at the site of damaged tissue to promote wound healing. Electrospinning is an established way to process polymer solutions or melts into continuous fibres with diameter often in the nanometre range. This process primarily depends on a number of parameters, including the type of polymer, solution viscosity, polarity and surface tension of the solvent, electric field strength and the distance between the spinneret and the collector. The present research has included polyaniline (PANi) as the conducting polymer and poly(L-lactic acid) (PLLA) as the biopolymer. Dodecylbenzene sulphonic acid (DBSA) doped PANi and PLLA have been dissolved in a common solvent (mixtures of chloroform and dimethyl formamide (DMF)), and the solutions successfully electrospun. DMF enhanced the dielectric constant of the solvent, and tetra butyl ammonium bromide (TBAB) was used as an additive to increase the conductivity of the solution. DBSA-doped PANi/PLLA mat exhibits an almost bead-free network of nanofibres that have extraordinarily smooth surface and diameters in the range 75 to 100 nm.

  1. Magnetic resonance imaging investigation of the bone conduction implant – a pilot study at 1.5 Tesla

    Directory of Open Access Journals (Sweden)

    Fredén Jansson KJ

    2015-10-01

    Full Text Available Karl-Johan Fredén Jansson,1 Bo Håkansson,1 Sabine Reinfeldt,1 Cristina Rigato,1 Måns Eeg-Olofsson2 1Department of Signals and Systems, Chalmers University of Technology, 2Department of Otorhinolaryngology Head and Neck Surgery, Sahlgrenska University Hospital, The Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden Purpose: The objective of this pilot study was to investigate whether an active bone conduction implant (BCI) used in an ongoing clinical study withstands magnetic resonance imaging (MRI) at 1.5 Tesla. In particular, the MRI effects on maximum power output (MPO), total harmonic distortion (THD), and demagnetization were investigated. Implant activation and image artifacts were also evaluated. Methods and materials: One implant was placed on the head of a test person at the position corresponding to the normal position of an implanted BCI, applied with a static pressure using a bandage, and scanned in a 1.5 Tesla MRI scanner. Scanning was performed both with and without the implant, in three orthogonal planes, and for one spin-echo and one gradient-echo pulse sequence. Implant functionality was verified in between the scans using an audio processor programmed to generate a sequence of tones when attached to the implant. Objective verification was also carried out by measuring MPO and THD on a skull simulator, as well as retention force, before and after MRI. Results: It was found that the exposure to 1.5 Tesla MRI had only a minor effect on the MPO, ie, it decreased over all frequencies by an average of 1.1±2.1 dB. The THD remained unchanged above 300 Hz and was increased only at lower frequencies. The retention magnet was demagnetized by 5%. The maximum image artifacts reached a distance of 9 and 10 cm from the implant in the coronal plane for the spin-echo and the gradient-echo sequences, respectively. The test person reported no MRI-induced sound from the implant. Conclusion: This pilot study indicates that the present BCI

  2. Reversible light-controlled conductance switching of azobenzene-based metal/polymer nanocomposites

    International Nuclear Information System (INIS)

    Pakula, Christina; Zaporojtchenko, Vladimir; Strunskus, Thomas; Faupel, Franz; Zargarani, Dordaneh; Herges, Rainer

    2010-01-01

    We present a new concept of light-controlled conductance switching based on metal/polymer nanocomposites with dissolved chromophores that do not have intrinsic current switching ability. Photoswitchable metal/PMMA nanocomposites were prepared by physical vapor deposition of Au and Pt clusters, respectively, onto spin-coated thin poly(methylmethacrylate) films doped with azo-dye molecules. High dye concentrations were achieved by functionalizing the azo groups with tails and branches, thus enhancing solubility. The composites show completely reversible optical switching of the absorption bands upon alternating irradiation with UV and blue light. We also demonstrate reversible light-controlled conductance switching. This is attributed to changes in the metal cluster separation upon isomerization based on model experiments where analogous conductance changes were induced by swelling of the composite films in organic vapors and by tensile stress.

  3. Preparation and Properties of Silver Nanowire-Based Transparent Conductive Composite Films

    Science.gov (United States)

    Tian, Ji-Li; Zhang, Hua-Yu; Wang, Hai-Jun

    2016-06-01

    Silver nanowire-based transparent conductive composite films with different structures were successfully prepared using various methods, including liquid polyol, magnetron sputtering and spin coating. The experimental results revealed that the optical transmittance of all the different structural composite films decreased slightly (1-3%) compared to the pure films. However, the electrical conductivity of all composite films improved greatly. Under the condition that the optical transmittance was greater than 78% over the wavelength range of 400-800 nm, the AgNW/PVA/AgNW film became a conductor, while the AZO/AgNW/AZO film and the ITO/AgNW/ITO film showed 88.9% and 94% reductions in sheet resistance, respectively, compared with the pure films. In addition, applying suitable mechanical pressure can further improve the conductivity of AgNW-based composite films.

  4. Computer-aided diagnosis workstation and database system for chest diagnosis based on multi-helical CT images

    Science.gov (United States)

    Satoh, Hitoshi; Niki, Noboru; Mori, Kiyoshi; Eguchi, Kenji; Kaneko, Masahiro; Kakinuma, Ryutarou; Moriyama, Noriyuki; Ohmatsu, Hironobu; Masuda, Hideo; Machida, Suguru; Sasagawa, Michizou

    2006-03-01

    Multi-helical CT scanners have remarkably increased the speed at which chest CT images can be acquired for mass screening. Mass screening based on multi-helical CT images requires a considerable number of images to be read, and it is this time-consuming step that makes the use of helical CT for mass screening impractical at present. To overcome this problem, we have provided diagnostic assistance to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images, and a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification. We have also developed an electronic medical recording system and a prototype internet system for community health in two or more regions, using a Virtual Private Network router together with biometric fingerprint and face authentication systems for the safety of medical information. Based on these diagnostic assistance methods, we have now developed a new computer-aided workstation and database that can display suspected lesions three-dimensionally in a short time. This paper describes basic studies that have been conducted to evaluate this new system. The results indicate that our computer-aided diagnosis workstation and network system can increase diagnostic speed, diagnostic accuracy, and the safety of medical information.

  5. Monte Carlo simulation of grating-based neutron phase contrast imaging at CPHS

    International Nuclear Information System (INIS)

    Zhang Ran; Chen Zhiqiang; Huang Zhifeng; Xiao Yongshun; Wang Xuewu; Wie Jie; Loong, C.-K.

    2011-01-01

    Since the launch of the Compact Pulsed Hadron Source (CPHS) project of Tsinghua University in 2009, work has begun on the design and engineering of an imaging/radiography instrument for the neutron source provided by CPHS. The instrument will perform basic tasks such as transmission imaging and computerized tomography. Additionally, we include in the design the utilization of coded-aperture and grating-based phase contrast methodology, as well as the options of prompt gamma-ray analysis and neutron-energy-selective imaging. Previously, we had implemented the hardware and data-analysis software for grating-based X-ray phase contrast imaging. Here, we investigate Geant4-based Monte Carlo simulations of neutron refraction phenomena and then model the grating-based neutron phase contrast imaging system according to the classic-optics-based method. The simulated results of retrieving the phase-shift gradient information by a five-step phase-stepping approach indicate the feasibility of grating-based neutron phase contrast imaging as an option for the cold neutron imaging instrument at the CPHS.

  6. Homotopy Based Reconstruction from Acoustic Images

    DEFF Research Database (Denmark)

    Sharma, Ojaswa

    of the inherent arrangement. The problem of reconstruction from arbitrary cross sections is a generic problem and is also shown to be solved here using the mathematical tool of continuous deformations. As part of a complete processing, segmentation using level set methods is explored for acoustic images and fast...... GPU (Graphics Processing Unit) based methods are suggested for a streaming computation on large volumes of data. Validation of results for acoustic images is not straightforward due to unavailability of ground truth. Accuracy figures for the suggested methods are provided using phantom object...

  7. Conductance of graphene based normal-superconductor junction with double magnetic barriers

    Science.gov (United States)

    Abdollahipour, B.; Mohebalipour, A.; Maleki, M. A.

    2018-05-01

    We study the conductance of a graphene-based normal metal-superconductor junction with two magnetic barriers. The magnetic barriers are induced by two applied magnetic fields with the same magnitudes and opposite directions, accompanied by an applied electrostatic potential. We solve the Dirac-Bogoliubov-de Gennes (DBdG) equation to calculate the conductance of the junction. We find that applying the magnetic field leads to suppression of the Andreev reflection and the conductance for all energies. On the other hand, we observe a crossover from oscillatory to tunneling behavior of the conductance as a function of the applied potential with increasing magnetic field.

  8. Sampling in image space for vision based SLAM

    NARCIS (Netherlands)

    Booij, O.; Zivkovic, Z.; Kröse, B.

    2008-01-01

    Loop closing in vision based SLAM applications is a difficult task. Comparing new image data with all previous image data acquired for the map is practically impossible because of the high computational costs. This problem is part of the bigger problem to acquire local geometric constraints from

  9. FUSION SEGMENTATION METHOD BASED ON FUZZY THEORY FOR COLOR IMAGES

    Directory of Open Access Journals (Sweden)

    J. Zhao

    2017-09-01

    Full Text Available The image segmentation method based on the two-dimensional histogram segments an image according to thresholds on the intensity of the target pixel and the average intensity of its neighborhood. This method is essentially a hard-decision method. Due to the uncertainties when labeling the pixels around the threshold, the hard-decision method can easily produce wrong segmentation results. Therefore, a fusion segmentation method based on fuzzy theory is proposed in this paper. We use membership functions to model the uncertainties on each color channel of the color image, and then segment the color image according to fuzzy reasoning. The experimental results show that the proposed method obtains better segmentation results on both natural scene images and optical remote sensing images compared with the traditional thresholding method. The fusion method in this paper can provide new ideas for information extraction from optical remote sensing images and polarization SAR images.
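
    A minimal sketch of the fuzzy-fusion idea, under stated assumptions: the paper does not specify the sigmoid membership used here, and averaging the per-channel memberships before defuzzifying at 0.5 is one illustrative fusion rule among many.

```python
import numpy as np

def fuzzy_threshold_segment(img_rgb, thresholds, width=10.0):
    """Soft thresholding per channel: a sigmoid membership grades how
    strongly each pixel belongs to the foreground near the threshold,
    instead of a hard cut; channel memberships are fused by averaging
    and the fused map is defuzzified at 0.5."""
    img = img_rgb.astype(float)
    memberships = [1.0 / (1.0 + np.exp(-(img[..., c] - t) / width))
                   for c, t in enumerate(thresholds)]
    fused = np.mean(memberships, axis=0)  # fusion across color channels
    return fused > 0.5                    # binary segmentation mask
```

    Pixels far from the thresholds get memberships near 0 or 1 and behave as in hard thresholding; the difference shows up only for ambiguous pixels near the cut, where evidence from the three channels is pooled before deciding.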

  10. PCANet-Based Structural Representation for Nonrigid Multimodal Medical Image Registration

    Directory of Open Access Journals (Sweden)

    Xingxing Zhu

    2018-05-01

    Full Text Available Nonrigid multimodal image registration remains a challenging task in medical image processing and analysis. Structural representation (SR)-based registration methods have attracted much attention recently. However, the existing SR methods cannot provide satisfactory registration accuracy due to the utilization of hand-designed features for structural representation. To address this problem, a structural representation method based on an improved version of the simple deep learning network PCANet is proposed for medical image registration. In the proposed method, PCANet is first trained on numerous medical images to learn the convolution kernels for the network. Then, a pair of input medical images to be registered is processed by the learned PCANet. The features extracted by the various layers of the PCANet are fused to produce multilevel features, and structural representation images are constructed for the two input images through a nonlinear transformation of these multilevel features. The Euclidean distance between the structural representation images is calculated and used as the similarity metric. The objective function defined by this similarity metric is optimized by the L-BFGS method to obtain the parameters of the free-form deformation (FFD) model. Extensive experiments on simulated and real multimodal image datasets show that, compared with state-of-the-art registration methods such as the modality-independent neighborhood descriptor (MIND), normalized mutual information (NMI), the Weber local descriptor (WLD), and the sum of squared differences on entropy images (ESSD), the proposed method provides better registration performance in terms of target registration error (TRE) and subjective human vision.

  11. Image mosaicking based on feature points using color-invariant values

    Science.gov (United States)

    Lee, Dong-Chang; Kwon, Oh-Seol; Ko, Kyung-Woo; Lee, Ho-Young; Ha, Yeong-Ho

    2008-02-01

    In the field of computer vision, image mosaicking is achieved using image features, such as textures, colors, and shapes between corresponding images, or local descriptors representing neighborhoods of feature points extracted from corresponding images. However, image mosaicking based on feature points has attracted more recent attention due to the simplicity of the geometric transformation, regardless of distortion and differences in intensity generated by camera motion in consecutive images. Yet, since most feature-point matching algorithms extract feature points using gray values, identifying corresponding points becomes difficult in the case of changing illumination and images with a similar intensity. Accordingly, to solve these problems, this paper proposes a method of image mosaicking based on feature points using the color information of images. Essentially, the digital values acquired from a real digital color camera are converted to the values of a virtual camera with distinct narrow bands. Values based on the surface reflectance and invariant to the chromaticity of various illuminations are then derived from the virtual camera values and defined as color-invariant values, which remain stable under changing illumination. The validity of these color-invariant values is verified in a test using a Macbeth Color-Checker under simulated illuminations. The test also compares the proposed method using the color-invariant values with the conventional SIFT algorithm. The accuracy of the matching between the feature points extracted using the proposed method is increased, while image mosaicking using color information is also achieved.
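
    The invariance idea can be demonstrated with a simpler stand-in. The paper derives its invariants through a virtual narrow-band camera model and surface reflectance; the sketch below instead uses log band ratios, which are likewise unchanged when all channels are scaled by a common illuminant factor.

```python
import numpy as np

def color_invariant(img_rgb, eps=1e-6):
    """Two illumination-invariant channels per pixel: log(R/G) and
    log(B/G). Scaling every channel by the same illuminant factor s
    leaves log(sR/sG) = log(R/G) unchanged."""
    r, g, b = (np.maximum(img_rgb[..., i].astype(float), eps) for i in range(3))
    return np.stack([np.log(r / g), np.log(b / g)], axis=-1)
```

    Feature points matched on such values, rather than on raw gray levels, stay comparable when the illumination changes between the consecutive frames being mosaicked, which is the failure mode the abstract identifies for gray-value matching.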

  12. Construction of 3D MR image-based computer models of pathologic hearts, augmented with histology and optical fluorescence imaging to characterize action potential propagation.

    Science.gov (United States)

    Pop, Mihaela; Sermesant, Maxime; Liu, Garry; Relan, Jatin; Mansi, Tommaso; Soong, Alan; Peyrat, Jean-Marc; Truong, Michael V; Fefer, Paul; McVeigh, Elliot R; Delingette, Herve; Dick, Alexander J; Ayache, Nicholas; Wright, Graham A

    2012-02-01

    Cardiac computer models can help us understand and predict the propagation of excitation waves (i.e., the action potential, AP) in healthy and pathologic hearts. Our broad aim is to develop accurate 3D MR image-based computer models of electrophysiology in large hearts (translatable to clinical applications) and to validate them experimentally. The specific goals of this paper were to match the models with maps of the propagation of the optical AP on the epicardial surface using large porcine hearts with scars, estimating several parameters relevant to macroscopic reaction-diffusion electrophysiological models. We used voltage-sensitive dyes to image the AP in large porcine hearts with scars (three specimens had chronic myocardial infarcts, and three had acute radiofrequency (RF) ablation scars). We first analyzed the main AP wave characteristics: duration (APD) and propagation under controlled pacing locations and frequencies, as recorded from 2D optical images. We further built 3D MR image-based computer models that incorporate information derived from the optical measurements as well as morphologic MRI data (i.e., myocardial anatomy, fiber directions and scar definition). The scar morphology from the MR images was validated against corresponding whole-mount histology. We also compared the measured 3D isochronal maps of depolarization to simulated isochrones (the latter replicating precisely the experimental conditions), performing model customization and 3D volumetric adjustments of the local conductivity. Our results demonstrated that the mean APD in the border zone (BZ) of the infarct scars was reduced by ~13% (compared to ~318 ms measured in the normal zone, NZ), but APD did not change significantly in the thin BZ of the ablation scars. A generic value for the velocity ratio (1:2.7) in healthy myocardial tissue was derived from the measured values of transverse and longitudinal conduction velocities relative to the fiber direction (22 cm/s and 60 cm/s, respectively). The model customization and 3D volumetric
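
    The reported velocity ratio follows directly from the measured conduction velocities, and (under the common reaction-diffusion assumption that conduction velocity scales with the square root of the diffusion coefficient) implies the corresponding diffusivity anisotropy; the second step is a standard modeling relation, not a number stated in the abstract.

```python
# Measured epicardial conduction velocities from the optical maps (cm/s)
v_transverse, v_longitudinal = 22.0, 60.0

# Velocity anisotropy: 60/22 ~ 2.7, i.e. the 1:2.7 ratio quoted above
ratio = v_longitudinal / v_transverse

# In reaction-diffusion models, conduction velocity scales roughly with
# sqrt(D), so the diffusivity (conductivity) anisotropy is ratio squared.
diffusivity_anisotropy = ratio ** 2  # ~7.4
```

    This is the kind of per-region parameter (here isotropic vs. fiber-aligned conductivity) that the 3D volumetric adjustment step tunes against the measured isochrones.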

  13. COMPARISON AND EVALUATION OF CLUSTER BASED IMAGE SEGMENTATION TECHNIQUES

    OpenAIRE

    Hetangi D. Mehta*, Daxa Vekariya, Pratixa Badelia

    2017-01-01

    Image segmentation is the partitioning of an image into distinct groups. Numerous algorithms using different approaches have been proposed for image segmentation. A major challenge in segmentation evaluation comes from the fundamental conflict between generality and objectivity. A review is done of the different types of clustering methods used for image segmentation. Also, a methodology is proposed to classify and quantify different clustering algorithms based on their consistency in different...
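    The clustering approach to segmentation surveyed above can be illustrated with a minimal k-means sketch that clusters pixels by grey-level intensity; this is a generic illustration, not the authors' evaluation methodology, and the function name and parameters are assumptions:

```python
import numpy as np

def kmeans_segment(image, k=3, n_iters=10):
    """Segment a greyscale image by k-means clustering of pixel intensities."""
    pixels = image.reshape(-1, 1).astype(float)
    # Initialise centres spread evenly over the intensity range
    centres = np.linspace(pixels.min(), pixels.max(), k).reshape(-1, 1)
    for _ in range(n_iters):
        # Assign each pixel to its nearest centre
        labels = np.abs(pixels - centres.T).argmin(axis=1)
        # Re-estimate each centre as the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centres[j] = pixels[labels == j].mean()
    return labels.reshape(image.shape)
```

    Comparing such algorithms then reduces to comparing the label maps they produce on the same images, e.g. for consistency across runs or initialisations.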

  14. Application of content-based image compression to telepathology

    Science.gov (United States)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.
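    The content-based principle described above, preserving diagnostic regions faithfully while compressing the background aggressively, can be sketched with per-region uniform quantisation. This is a simplified illustration of the idea, not the authors' wavelet-based pipeline; `roi_mask`, the bit depths, and the assumption that the image is scaled to [0, 1] are all illustrative:

```python
import numpy as np

def content_based_compress(image, roi_mask, roi_bits=8, bg_bits=3):
    """Quantise the diagnostic region finely and the background coarsely.

    image: greyscale array scaled to [0, 1]; roi_mask: boolean array marking
    diagnostically important pixels (assumed to come from a prior segmentation).
    """
    out = np.empty_like(image, dtype=float)
    for mask, bits in ((roi_mask, roi_bits), (~roi_mask, bg_bits)):
        levels = 2 ** bits - 1                  # number of quantisation steps
        out[mask] = np.round(image[mask] * levels) / levels
    return out
```

    The background then compresses to far fewer distinct symbols under any entropy coder, while the region of interest keeps near-lossless fidelity.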

  15. Comprehensive quantification of signal-to-noise ratio and g-factor for image-based and k-space-based parallel imaging reconstructions.

    Science.gov (United States)

    Robson, Philip M; Grant, Aaron K; Madhuranthakam, Ananth J; Lattanzi, Riccardo; Sodickson, Daniel K; McKenzie, Charles A

    2008-10-01

    Parallel imaging reconstructions result in spatially varying noise amplification characterized by the g-factor, precluding conventional measurements of noise from the final image. A simple Monte Carlo-based method is proposed for all linear image reconstruction algorithms, which allows measurement of signal-to-noise ratio and g-factor and is demonstrated for SENSE and GRAPPA reconstructions for accelerated acquisitions that have not previously been amenable to such assessment. Only a simple "prescan" measurement of noise amplitude and correlation in the phased-array receiver, and a single accelerated image acquisition are required, allowing robust assessment of signal-to-noise ratio and g-factor. The "pseudo multiple replica" method has been rigorously validated in phantoms and in vivo, showing excellent agreement with true multiple replica and analytical methods. This method is universally applicable to the parallel imaging reconstruction techniques used in clinical applications and will allow pixel-by-pixel image noise measurements for all parallel imaging strategies, allowing quantitative comparison between arbitrary k-space trajectories, image reconstruction, or noise conditioning techniques. (c) 2008 Wiley-Liss, Inc.
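    The pseudo multiple replica procedure described above can be sketched as follows: colour synthetic white noise with the prescan coil covariance, add it to the measured k-space, reconstruct many replicas, and read the pixel-wise noise standard deviation off the replica stack. The `recon` callable stands in for any linear reconstruction (e.g. SENSE or GRAPPA); all names and shapes here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def pseudo_replica_snr(kspace, recon, noise_cov, n_replicas=100):
    """Pixel-wise SNR via the pseudo multiple replica method.

    kspace: (ncoils, ny, nx) measured data; recon: any linear reconstruction
    mapping k-space to an image; noise_cov: (ncoils, ncoils) coil noise
    covariance from a prescan noise measurement.
    """
    L = np.linalg.cholesky(noise_cov)   # colours white noise like the receiver chain
    replicas = []
    for _ in range(n_replicas):
        white = (rng.standard_normal(kspace.shape)
                 + 1j * rng.standard_normal(kspace.shape)) / np.sqrt(2)
        coloured = np.einsum('ij,j...->i...', L, white)
        replicas.append(recon(kspace + coloured))
    replicas = np.stack(replicas)
    noise_map = replicas.std(axis=0)    # spatially varying noise amplitude
    snr_map = np.abs(recon(kspace)) / noise_map
    return snr_map, noise_map
```

    Dividing the accelerated noise map by that of an equivalent unaccelerated reconstruction (scaled by the square root of the acceleration factor) would then yield a g-factor map.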

  16. Visualization of single-wall carbon nanotube (SWNT) networks in conductive polystyrene nanocomposites by charge contrast imaging

    International Nuclear Information System (INIS)

    Loos, Joachim; Alexeev, Alexander; Grossiord, Nadia; Koning, Cor E.; Regev, Oren

    2005-01-01

    The morphology of conductive nanocomposites consisting of a low concentration of single-wall carbon nanotubes (SWNT) and polystyrene (PS) has been studied using atomic force microscopy (AFM), transmission electron microscopy (TEM) and, in particular, scanning electron microscopy (SEM). Application of charge contrast imaging in SEM allows visualization of the overall SWNT dispersion within the polymer matrix as well as the identification of individual or bundled SWNTs at high resolution. The contrast mechanism involved will be discussed. In conductive nanocomposites the SWNTs are homogeneously dispersed within the polymer matrix and form a network. Besides fairly straight SWNTs, strongly bent SWNTs have been observed. However, for samples with SWNT concentrations below the percolation threshold, the common overall charging behavior of an insulating material is observed, preventing detailed morphological investigation of the sample.

  17. A UNIX-based prototype biomedical virtual image processor

    International Nuclear Information System (INIS)

    Fahy, J.B.; Kim, Y.

    1987-01-01

    The authors have developed a multiprocess virtual image processor for the IBM PC/AT, in order to maximize image processing software portability for biomedical applications. An interprocess communication scheme, based on two-way metacode exchange, has been developed and verified for this purpose. Application programs call a device-independent image processing library, which transfers commands over a shared data bridge to one or more Autonomous Virtual Image Processors (AVIP). Each AVIP runs as a separate process in the UNIX operating system, and implements the device-independent functions on the image processor to which it corresponds. Application programs can control multiple image processors at a time, can change the image processor configuration in use at any time, and are completely portable among image processors for which an AVIP has been implemented. Run-time speeds have been found to be acceptable for higher-level functions, although rather slow for lower-level functions, owing to the overhead associated with sending commands and data over the shared data bridge.

  18. Neural network based multiscale image restoration approach

    Science.gov (United States)

    de Castro, Ana Paula A.; da Silva, José D. S.

    2007-02-01

    This paper describes a neural network based multiscale image restoration approach. Multilayer perceptrons are trained with artificial images of degraded gray level circles, in an attempt to make the neural network learn the inherent spatial relations of the degraded pixels. The present approach simulates the degradation by a low-pass Gaussian blurring operation and the addition of noise to the pixels at pre-established rates. The training process considers the degraded image as input and the non-degraded image as output for the supervised learning process. The neural network thus performs an inverse operation by recovering a quasi non-degraded image in the least-squares sense. The main difference from existing approaches is that the spatial relations are taken from different scales, thus providing relational spatial data to the neural network. The approach is an attempt to come up with a simple method that leads to an optimum solution to the problem. The multiscale operation is simulated by considering different window sizes around a pixel. In the generalization phase the neural network is exposed to indoor, outdoor, and satellite degraded images following the same steps used for the artificial circle images.
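    A minimal sketch of the degradation model and multiscale windowing described above, assuming a separable Gaussian blur plus additive noise; the resulting per-pixel feature vectors would serve as inputs to the multilayer perceptron, with the corresponding clean pixel value as the training target (window sizes and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def degrade(image, sigma=1.5, noise_std=0.05):
    """Low-pass Gaussian blur plus additive noise, mimicking the training setup."""
    k = gaussian_kernel(sigma, radius=int(3 * sigma))
    # Separable blur: convolve rows, then columns
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, image)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, blurred)
    return blurred + rng.normal(0.0, noise_std, image.shape)

def multiscale_features(image, y, x, window_sizes=(3, 5, 9)):
    """Concatenate the neighbourhoods of (y, x) at several window sizes."""
    pad = max(window_sizes) // 2
    padded = np.pad(image, pad, mode='reflect')
    feats = []
    for w in window_sizes:
        half = w // 2
        patch = padded[y + pad - half: y + pad + half + 1,
                       x + pad - half: x + pad + half + 1]
        feats.append(patch.ravel())
    return np.concatenate(feats)    # MLP input vector for this pixel
```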

  19. A Joint Land Cover Mapping and Image Registration Algorithm Based on a Markov Random Field Model

    Directory of Open Access Journals (Sweden)

    Apisit Eiumnoh

    2013-10-01

    Full Text Available Traditionally, image registration of multi-modal and multi-temporal images is performed satisfactorily before land cover mapping. However, since multi-modal and multi-temporal images are likely to be obtained from different satellite platforms and/or acquired at different times, perfect alignment is very difficult to achieve. As a result, a proper land cover mapping algorithm must be able to correct registration errors as well as perform an accurate classification. In this paper, we propose a joint classification and registration technique based on a Markov random field (MRF model to simultaneously align two or more images and obtain a land cover map (LCM of the scene. The expectation maximization (EM algorithm is employed to solve the joint image classification and registration problem by iteratively estimating the map parameters and approximate posterior probabilities. Then, the maximum a posteriori (MAP criterion is used to produce an optimum land cover map. We conducted experiments on a set of four simulated images and one pair of remotely sensed images to investigate the effectiveness and robustness of the proposed algorithm. Our results show that, with proper selection of a critical MRF parameter, the resulting LCMs derived from an unregistered image pair can achieve an accuracy that is as high as when images are perfectly aligned. Furthermore, the registration error can be greatly reduced.
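    The MAP labelling step under an MRF prior can be illustrated in a deliberately simplified form that omits the registration component and the EM parameter estimation: iterated conditional modes (ICM) under a Potts smoothness prior, where `unary` would come from the estimated class likelihoods and `beta` plays the role of the critical MRF parameter:

```python
import numpy as np

def icm_map(unary, beta=1.0, n_iters=5):
    """Approximate MAP labelling under a Potts MRF prior via ICM.

    unary: (H, W, K) per-class negative log-likelihoods for each pixel.
    beta: Potts smoothness weight. Returns an (H, W) label map.
    """
    H, W, K = unary.shape
    labels = unary.argmin(axis=2)              # initialise with ML labels
    for _ in range(n_iters):
        for y in range(H):
            for x in range(W):
                cost = unary[y, x].copy()
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        # Potts prior: penalise disagreeing with each neighbour
                        cost += beta * (np.arange(K) != labels[ny, nx])
                labels[y, x] = cost.argmin()
    return labels
```

    Larger `beta` favours smoother land cover maps; the paper's finding that the result is sensitive to a critical MRF parameter corresponds to tuning this smoothness weight.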

  20. Biased discriminant euclidean embedding for content-based image retrieval.

    Science.gov (United States)

    Bian, Wei; Tao, Dacheng

    2010-02-01

    With many potential multimedia applications, content-based image retrieval (CBIR) has recently gained more attention for image management and web search. A wide variety of relevance feedback (RF) algorithms have been developed in recent years to improve the performance of CBIR systems. These RF algorithms capture users' preferences and bridge the semantic gap. However, there is still considerable room to improve RF performance, because the popular RF algorithms ignore the manifold structure of image low-level visual features. In this paper, we propose the biased discriminative Euclidean embedding (BDEE), which parameterises samples in the original high-dimensional ambient space to discover the intrinsic coordinates of image low-level visual features. BDEE precisely models both the intraclass geometry and interclass discrimination and avoids the undersampled problem. To consider unlabelled samples, a manifold regularization-based term is introduced and combined with BDEE to form the semi-supervised BDEE, or semi-BDEE for short. To justify the effectiveness of the proposed BDEE and semi-BDEE, we compare them against conventional RF algorithms and show a significant improvement in terms of accuracy and stability on a subset of the Corel image gallery.