WorldWideScience

Sample records for diagnostic imaging algorithm

  1. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  2. A damage diagnostic imaging algorithm based on the quantitative comparison of Lamb wave signals

    Science.gov (United States)

    Wang, Dong; Ye, Lin; Lu, Ye; Li, Fucai

    2010-06-01

    With the objective of improving the temperature stability of the quantitative comparison of Lamb wave signals captured in different states, a damage diagnostic imaging algorithm integrated with Shannon-entropy-based interrogation was proposed. It was evaluated experimentally by identifying surface damage in a stiffener-reinforced CF/EP quasi-isotropic woven laminate. The variations in Shannon entropy of the reference (without damage) and present (with damage) signals from individual sensing paths were calibrated as damage signatures and utilized to estimate the probability of the presence of damage in the monitoring area enclosed by an active sensor network. The effects of temperature change on calibration of the damage signatures and estimation of the probability values for the presence of damage were investigated using a set of desynchronized signals. The results demonstrate that the Shannon-entropy-based damage diagnostic imaging algorithm with improved robustness in the presence of temperature change has the capability of providing accurate identification of damage in actual environments.
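
The entropy-based damage index and probability-style imaging described above can be sketched in code. This is a minimal illustration, not the authors' implementation: the histogram-based entropy estimate, the relative damage index, and the elliptical spatial weighting (with shape parameter `beta`, in the style of RAPID-type probability imaging) are all assumptions introduced here for illustration.

```python
import numpy as np

def shannon_entropy(signal, bins=64):
    """Shannon entropy of a signal's normalized amplitude histogram."""
    hist, _ = np.histogram(signal, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def damage_index(ref, cur):
    """Relative change in Shannon entropy between baseline and current signals."""
    h_ref = shannon_entropy(ref)
    return abs(shannon_entropy(cur) - h_ref) / h_ref

def probability_image(paths, grid_x, grid_y, beta=1.05):
    """Probability map: each actuator-sensor path spreads its damage index
    over an elliptical spatial weight covering the monitored area.
    paths: list of ((ax, ay), (sx, sy), damage_index) tuples."""
    img = np.zeros((grid_y.size, grid_x.size))
    X, Y = np.meshgrid(grid_x, grid_y)
    for (ax, ay), (sx, sy), di in paths:
        d_as = np.hypot(ax - sx, ay - sy)  # actuator-sensor distance
        # relative distance of each pixel P: (d(P,A) + d(P,S)) / d(A,S)
        r = (np.hypot(X - ax, Y - ay) + np.hypot(X - sx, Y - sy)) / d_as
        w = np.where(r < beta, (beta - r) / (beta - 1.0), 0.0)
        img += di * w
    return img
```

Summing the weighted damage indices over all sensing paths of the active network yields the probability estimate for each pixel of the monitored region.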

  3. Image microarrays derived from tissue microarrays (IMA-TMA): New resource for computer-aided diagnostic algorithm development

    Directory of Open Access Journals (Sweden)

    Jennifer A Hipp

    2012-01-01

    Full Text Available Background: Conventional tissue microarrays (TMAs) consist of cores of tissue inserted into a recipient paraffin block such that a tissue section on a single glass slide can contain numerous patient samples in a spatially structured pattern. Scanning TMAs into digital slides for subsequent analysis by computer-aided diagnostic (CAD) algorithms offers the possibility of evaluating candidate algorithms against a near-complete repertoire of variable disease morphologies. This parallel interrogation approach simplifies the evaluation, validation, and comparison of such candidate algorithms. A recently developed digital tool, digital core (dCORE), and image microarray maker (iMAM) enable the capture of uniformly sized and resolution-matched images, representing key morphologic features and fields of view, aggregated into a single monolithic digital image file in an array format, which we define as an image microarray (IMA). We further define the TMA-IMA construct as IMA-based images derived from whole slide images of TMAs themselves. Methods: Here we describe the first combined use of the previously described dCORE and iMAM tools, toward the goal of generating a higher-order image construct, with multiple TMA cores from multiple distinct conventional TMAs assembled as a single digital image montage. This image construct served as the basis for a massively parallel image analysis exercise, based on the use of the previously described spatially invariant vector quantization (SIVQ) algorithm. Results: Multicase, multifield TMA-IMAs of follicular lymphoma and follicular hyperplasia were separately rendered, using the aforementioned tools. Each of these two IMAs contained a distinct spectrum of morphologic heterogeneity with respect to both tingible body macrophage (TBM) appearance and apoptotic body morphology. 
SIVQ-based pattern matching, with ring vectors selected to screen for either tingible body macrophages or apoptotic

  4. Image microarrays derived from tissue microarrays (IMA-TMA): New resource for computer-aided diagnostic algorithm development.

    Science.gov (United States)

    Hipp, Jennifer A; Hipp, Jason D; Lim, Megan; Sharma, Gaurav; Smith, Lauren B; Hewitt, Stephen M; Balis, Ulysses G J

    2012-01-01

    Conventional tissue microarrays (TMAs) consist of cores of tissue inserted into a recipient paraffin block such that a tissue section on a single glass slide can contain numerous patient samples in a spatially structured pattern. Scanning TMAs into digital slides for subsequent analysis by computer-aided diagnostic (CAD) algorithms offers the possibility of evaluating candidate algorithms against a near-complete repertoire of variable disease morphologies. This parallel interrogation approach simplifies the evaluation, validation, and comparison of such candidate algorithms. A recently developed digital tool, digital core (dCORE), and image microarray maker (iMAM) enable the capture of uniformly sized and resolution-matched images, representing key morphologic features and fields of view, aggregated into a single monolithic digital image file in an array format, which we define as an image microarray (IMA). We further define the TMA-IMA construct as IMA-based images derived from whole slide images of TMAs themselves. Here we describe the first combined use of the previously described dCORE and iMAM tools, toward the goal of generating a higher-order image construct, with multiple TMA cores from multiple distinct conventional TMAs assembled as a single digital image montage. This image construct served as the basis for a massively parallel image analysis exercise, based on the use of the previously described spatially invariant vector quantization (SIVQ) algorithm. Multicase, multifield TMA-IMAs of follicular lymphoma and follicular hyperplasia were separately rendered, using the aforementioned tools. Each of these two IMAs contained a distinct spectrum of morphologic heterogeneity with respect to both tingible body macrophage (TBM) appearance and apoptotic body morphology. 
SIVQ-based pattern matching, with ring vectors selected to screen for either tingible body macrophages or apoptotic bodies, was subsequently carried out on the

  5. The diagnostic efficiency of ultrasound guided imaging algorithm in evaluation of patients with hematuria

    Energy Technology Data Exchange (ETDEWEB)

    Unsal, Alparslan, E-mail: alparslanunsal@yahoo.com [Adnan Menderes University, Faculty of Medicine, Department of Radiology, 09100 Aydin (Turkey); Caliskan, Eda Kazak [Adnan Menderes University, Faculty of Medicine, Department of Radiology, 09100 Aydin (Turkey); Erol, Haluk [Adnan Menderes University, Faculty of Medicine, Department of Urology, 09100 Aydin (Turkey); Karaman, Can Zafer [Adnan Menderes University, Faculty of Medicine, Department of Radiology, 09100 Aydin (Turkey)

    2011-07-15

    Purpose: To assess the efficiency of an ultrasound-guided imaging algorithm, in which intravenous urography (IVU) or computed tomography urography (CTU) is selected on the basis of ultrasonographic (US) findings, in the radiological management of hematuria. Materials and methods: One hundred and forty-one patients with hematuria were prospectively evaluated. Group 1 included 106 cases with a normal or nearly normal US result, who were then examined with IVU. Group 2 was composed of the remaining 35 cases with any urinary tract abnormality, who were directed to CTU. Radiological results were compared with the clinical diagnosis. Results: Ultrasonography and IVU results were congruent in 97 cases in group 1. In the remaining 9 patients, eight simple cysts were detected with US and 1 non-obstructing ureter stone was detected with IVU. The only discordant case in the clinical comparison was found to have urinary bladder cancer on conventional cystoscopy. Ultrasonography and CTU results were congruent in 30 cases. In the remaining 5 patients, additional lesions were detected with CTU (3 ureter stones, 1 ureter TCC, 1 advanced RCC). The results of the US + CTU combination were all concordant with the clinical diagnosis. Except for 1 case, radio-clinical agreement was achieved. Conclusion: Cross-sectional imaging modalities are preferred in the evaluation of hematuria. CTU is the method of choice; however, its limitations preclude its use as a first-line or screening test. Ultrasonography is now accepted as a first-line imaging modality, given its increased sensitivity in mass detection compared with IVU. The US-guided imaging algorithm can be used effectively in the radiological approach to hematuria.

  6. [Molecular diagnostics and imaging].

    Science.gov (United States)

    Fink, Christian; Fisseler-Eckhoff, Annette; Huss, Ralf; Nestle, Ursula

    2009-01-01

    Molecular diagnostic methods and biological imaging techniques can make a major contribution to tailoring patients' treatment needs with regard to medical, ethical and pharmaco-economic aspects. Modern diagnostic methods are already being used to help identify different sub-groups of patients with thoracic tumours who are most likely to benefit significantly from a particular type of treatment. This contribution looks at the most recent developments that have been made in the field of thoracic tumour diagnosis and analyses the pros and cons of new molecular and other imaging techniques in day-to-day clinical practice.

  7. Whole-body MR imaging versus sequential multimodal diagnostic algorithm for staging patients with rectal cancer. Cost analysis

    Energy Technology Data Exchange (ETDEWEB)

    Huppertz, A. [Charite Universitaetsklinikum Berlin, Campus Mitte (Germany). Dept. of Radiology; Charite Universitaetsklinikum Berlin (Germany). Imaging Science Inst.; Schmidt, M.; Schoeffski, O. [Erlangen-Nuernberg Univ. (Germany). Inst. for Health Management; Wagner, M.; Asbach, P.; Maurer, M.H. [Charite Universitaetsklinikum Berlin, Campus Mitte (Germany). Dept. of Radiology; Puettcher, O. [Vivantes Klinikum im Friedrichshain, Berlin (Germany). Dept. of Radiology; Strassburg, J. [Vivantes Klinikum im Friedrichshain, Berlin (Germany). Dept. of Surgery; Stoeckmann, F. [Vivantes Klinikum im Friedrichshain, Berlin (Germany). Dept. of Gastroenterology

    2010-09-15

    Purpose: To compare the direct costs of two diagnostic algorithms for pretherapeutic TNM staging of rectal cancer. Materials and Methods: In a study including 33 patients (mean age: 62.5 years), the direct fixed and variable costs of a sequential multimodal algorithm (rectoscopy, endoscopic and abdominal ultrasound, chest X-ray, thoracic/abdominal CT in the case of positive findings on abdominal ultrasound or chest X-ray) were compared to those of a novel algorithm of rectoscopy followed by MRI using a whole-body scanner. MRI included T2w sequences of the rectum, 3D T1w sequences of the liver and chest after bolus injection of gadoxetic acid, and delayed phases of the liver. The personnel work times, material items, and work processes were tracked to the nearest minute by interviewing those responsible for the process (surgeon, gastroenterologist, two radiologists). The costs of labor and materials were determined from personnel reimbursement data and hospital accounting records. Fixed costs were determined from vendor pricing. Results: The mean MRI time was 55 min. CT was performed in 19/33 patients (57%), causing an additional day of hospitalization (costs: 374 Euro). The costs for equipment and material were higher for MRI than for the sequential algorithm (equipment: 116 vs. 30 Euro; material: 159 vs. 60 Euro per patient). The personnel costs were markedly lower for MRI (436 vs. 732 Euro per patient). Altogether, the absolute cost advantage of MRI was 31.3% (711 vs. 1035 Euro for the sequential algorithm). Conclusion: Substantial savings are achievable with the use of whole-body MRI for the preoperative TNM staging of patients with rectal cancer. (orig.)

  8. [The diagnostic algorithm in twin pregnancy].

    Science.gov (United States)

    Ropacka-Lesiak, Mariola; Szaflik, Krzysztof; Breborowicz, Grzegorz H

    2015-03-01

    This paper presents the diagnostic algorithm in twin pregnancy. The most important sonographic parameters in the assessment of twins have been discussed. Moreover, the most significant complications of twin pregnancy as well as diagnostic possibilities and management, have been also presented and defined.

  9. [Diagnostic imaging of lying].

    Science.gov (United States)

    Lass, Piotr; Sławek, Jarosław; Sitek, Emilia; Szurowska, Edyta; Zimmermann, Agnieszka

    2013-01-01

    Functional diagnostic imaging has been applied in neuropsychology for more than two decades. Nowadays, functional magnetic resonance imaging (fMRI) seems to be the most important technique. Brain imaging of lying has been performed and discussed since 2001. There are proposals to use fMRI for forensic purposes, as well as commercially, e.g. testing the loyalty of employees, especially because of the limitations of the traditional polygraph in some cases. In the USA, fMRI is performed for truthfulness/lying assessment by at least two commercial companies. Those applications are a matter of heated debate among practitioners, lawyers and specialists in ethics. Opponents of the use of fMRI for forensic purposes point to the lack of common agreement on it, its lack of wide recognition, and insufficient standardisation. Therefore it cannot yet serve as forensic proof. However, considering the development of MRI and the high failure rate of traditional polygraphy, forensic applications of MRI seem highly probable in the future.

  10. Imaging Techniques for Microwave Diagnostics

    Energy Technology Data Exchange (ETDEWEB)

    Donne, T. [FOM-Institute for Plasma Physics Rijnhuizen, Trilateral Euregio Cluster, PO Box 1207, 3430 BE Nieuwegein (Netherlands); Luhmann Jr, N.C. [University of California, Davis, CA 95616 (United States); Park, H.K. [POSTECH, Pohang, Gyeongbuk 790-784 (Korea, Republic of); Tobias, B.

    2011-07-01

    Advances in microwave technology have made it possible to develop a new generation of microwave imaging diagnostics for measuring the parameters of magnetic fusion devices. The most prominent of these diagnostics is electron cyclotron emission imaging (ECE-I). After the first generation of ECE-I diagnostics, utilized at the TEXT-U, RTP and TEXTOR tokamaks and the LHD stellarator, new systems have recently come into operation on ASDEX-UG and DIII-D, soon to be followed by a system on KSTAR. The DIII-D and KSTAR systems feature dual imaging arrays that observe different parts of the plasma. The ECE-I diagnostic yields two-dimensional movies of the electron temperature in the plasma and has already given new insights into the physics of sawtooth oscillations, tearing modes and edge localized modes. Microwave imaging reflectometry (MIR) is used on LHD to measure electron density fluctuations. A pilot MIR system has been tested at TEXTOR and, based on the promising results, a new system is now under design for KSTAR. The system at TEXTOR was used to measure the plasma rotation velocity. The systems at KSTAR and LHD are or will be used for measuring the profile of the electron density fluctuations in the plasma. Other microwave imaging diagnostics are phase imaging interferometry and imaging microwave scattering. The emphasis in this paper is largely on ECE-I. First, an overview of the advances in microwave technology is given, followed by a description of a typical ECE-I system along with some typical experimental results. The use of imaging techniques in other types of microwave diagnostics is also briefly reviewed. This document is composed of the slides of the presentation. (authors)

  11. The ear: Diagnostic imaging

    Energy Technology Data Exchange (ETDEWEB)

    Vignaud, J.; Jardin, C.; Rosen, L.

    1986-01-01

    This is an English translation of volume 17-1 of Traite de radiodiagnostic and represents a reasonably complete documentation of the diseases of the temporal bone that have imaging manifestations. The book begins with chapters on embryology, anatomy and radiographic anatomy; it continues with blood supply and an overview of temporal bone pathology. Subsequent chapters cover malformations, trauma, infections, tumors, postoperative changes, glomus tumors, vertebrobasilar insufficiency, and facial nerve canal lesions. A final chapter demonstrates and discusses magnetic resonance images of the ear and cerebellopontine angle.

  12. Boosting of Image Denoising Algorithms

    OpenAIRE

    Romano, Yaniv; Elad, Michael

    2015-01-01

    In this paper we propose a generic recursive algorithm for improving image denoising methods. Given the initial denoised image, we suggest repeating the following "SOS" procedure: (i) (S)trengthen the signal by adding the previous denoised image to the degraded input image, (ii) (O)perate the denoising method on the strengthened image, and (iii) (S)ubtract the previous denoised image from the restored signal-strengthened outcome. The convergence of this process is studied for the K-SVD image ...
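
The three-step SOS procedure is simple to state in code. The sketch below is a hedged illustration, not the authors' implementation: a crude box filter stands in for the denoiser (the paper studies K-SVD, among others), and the relaxation factor and iteration count are arbitrary choices; whether boosting converges depends on the denoiser plugged in.

```python
import numpy as np

def box_denoise(img, k=5):
    """Stand-in denoiser: simple k x k box filter (any denoiser works here)."""
    img = np.asarray(img, dtype=float)
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def sos_boost(noisy, denoise, iters=3, rho=1.0):
    """SOS boosting: (S)trengthen, (O)perate, (S)ubtract."""
    x = denoise(noisy)                       # initial denoised estimate
    for _ in range(iters):
        strengthened = noisy + rho * x       # (S)trengthen the signal
        x = denoise(strengthened) - rho * x  # (O)perate, then (S)ubtract
    return x
```

With a linear filter this recursion can be analyzed in closed form; with a nonlinear denoiser the same three lines apply unchanged, which is the appeal of the scheme.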

  13. Diagnostic imaging in thyrotoxicosis.

    Science.gov (United States)

    Summaria, V; Salvatori, M; Rufini, V; Mirk, P; Garganese, M C; Romani, M

    1999-01-01

    In thyrotoxicosis, imaging (mainly scintigraphy, color Doppler sonography and the radioiodine uptake test) is used in the differential diagnosis as well as in the morphofunctional evaluation of the thyroid before and after therapy (mainly pharmacological or with radioiodine). The radioiodine uptake test differentiates high-uptake thyrotoxicosis (Graves' disease, toxic nodular goiter) from low-uptake thyrotoxicosis (subacute or silent thyroiditis, ectopic thyrotoxicosis, iodine-induced hyperthyroidism). In Graves' disease, scintigraphy shows thyroid enlargement with intense homogeneous tracer uptake; rarely, nodules with no uptake are present. On color Doppler sonography, apart from enlargement, typical findings are: diffuse structural hypoechogenicity (at times with echoic nodules), parenchymal hypervascularization ("thyroid inferno"), and high systolic velocities (PSV > 70-100 cm/sec) in the inferior thyroid arteries. Scintigraphy is the only method able to demonstrate an autonomously functioning thyroid nodule and stage it (in association with clinical findings and TSH, FT3, FT4 determination) as toxic, non-toxic (or pretoxic) or compensated, depending on whether there is inhibition of extranodular tissue. A scintigraphically "hot" nodule appears hypervascularized on color Doppler sonography (especially in the toxic or pretoxic phase) with high PSV (> 50-70 cm/sec) in the ipsilateral inferior thyroid artery. The most reliable parameters in the evaluation of therapeutic efficacy are: decrease in thyroid (Graves' disease) or nodular (autonomously functioning nodule) volume; decreased radioiodine uptake (Graves' disease); functional recovery of suppressed parenchyma (autonomously functioning nodule); and decreased PSV in the inferior thyroid arteries.

  14. Speckle imaging algorithms for planetary imaging

    Energy Technology Data Exchange (ETDEWEB)

    Johansson, E. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    I will discuss the speckle imaging algorithms used to process images of the impact sites of the collision of comet Shoemaker-Levy 9 with Jupiter. The algorithms use a phase retrieval process based on the average bispectrum of the speckle image data. High resolution images are produced by estimating the Fourier magnitude and Fourier phase of the image separately, then combining them and inverse transforming to achieve the final result. I will show raw speckle image data and high-resolution image reconstructions from our recent experiment at Lick Observatory.
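
The bispectrum-based phase retrieval step can be illustrated on a 1-D signal. This is a simplified sketch, not the Lick Observatory pipeline: it recovers the Fourier phase of a single noise-free signal from its bispectrum along the v = 1 line, whereas the real algorithm averages the bispectrum over many speckle frames before unwrapping the phase.

```python
import numpy as np

def recover_phase_from_bispectrum(signal):
    """Recover Fourier phases (up to an arbitrary linear term) from the
    bispectrum B(u, v) = F(u) F(v) conj(F(u+v)), using the recursion
    phi(u+v) = phi(u) + phi(v) - angle(B(u, v)) along v = 1."""
    F = np.fft.fft(signal)
    n = len(signal)
    phi = np.zeros(n)
    # phi[0] = phi[1] = 0 fixes the arbitrary constant and linear phase terms
    for k in range(2, n):
        beta = np.angle(F[1] * F[k - 1] * np.conj(F[k]))  # bispectrum phase B(1, k-1)
        phi[k] = phi[1] + phi[k - 1] - beta
    return phi
```

Combining the recovered phase with a separately estimated Fourier magnitude and inverse transforming gives the reconstructed image, which is the division of labor the abstract describes.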

  15. Diagnostic imaging in bovine orthopedics.

    Science.gov (United States)

    Kofler, Johann; Geissbühler, Urs; Steiner, Adrian

    2014-03-01

    Although a radiographic unit is not standard equipment for bovine practitioners in hospital or field situations, ultrasound machines with 7.5-MHz linear transducers have been used in bovine reproduction for many years, and are eminently suitable for evaluation of orthopedic disorders. The goal of this article is to encourage veterinarians to use radiology and ultrasonography for the evaluation of bovine orthopedic disorders. These diagnostic imaging techniques improve the likelihood of a definitive diagnosis in every bovine patient but especially in highly valuable cattle, whose owners demand increasingly more diagnostic and surgical interventions that require high-level specialized techniques.

  16. Monogenic Diabetes: A Diagnostic Algorithm for Clinicians

    Directory of Open Access Journals (Sweden)

    Richard W. Carroll

    2013-09-01

    Full Text Available Monogenic forms of beta cell diabetes account for approximately 1%–2% of all cases of diabetes, yet remain underdiagnosed. Overlapping clinical features with common forms of diabetes make diagnosis challenging. A genetic diagnosis of monogenic diabetes in many cases alters therapy, affects prognosis, enables genetic counseling, and has implications for cascade screening of extended family members. We describe those types of monogenic beta cell diabetes which are recognisable by distinct clinical features and have implications for altered management; the cost-effectiveness of making a genetic diagnosis in this setting; the use of complementary diagnostic tests to increase the yield among the vast majority of patients, who will have more common types of diabetes, summarised in a clinical algorithm; and the vital role of cascade genetic testing to enhance case finding.

  17. Canine Hip Dysplasia: Diagnostic Imaging.

    Science.gov (United States)

    Butler, J Ryan; Gambino, Jennifer

    2017-07-01

    Diagnostic imaging is the principal method used to screen for and diagnose hip dysplasia in the canine patient. Multiple techniques are available, each having advantages, disadvantages, and limitations. Hip-extended radiography is the most widely used method and is best suited as a screening tool and for assessment of osteoarthritis. Distraction radiographic methods such as the PennHIP method allow for improved detection of laxity and improved ability to predict future osteoarthritis development. More advanced techniques such as MRI, although expensive and not widely available, may improve patient screening and allow for improved assessment of cartilage health.

  18. Active imaging for monitoring and technical diagnostics

    Directory of Open Access Journals (Sweden)

    Marek Piszczek

    2014-08-01

    Full Text Available The article presents the results of ongoing work in the field of active imaging. The term "active" refers both to image acquisition methods (so-called spatio-temporal framing) and to active visualization methods applying augmented reality. Results of applying HMD and 6DoF modules, as well as an experimental laser photography device, are also given. The device, developed at the IOE WAT, works by methods of spatio-temporal framing. In terms of image acquisition, active imaging involves illumination of the observed scene; in terms of information visualization, it directly concerns human-machine interaction. The results show the possibility of using the described techniques in, among others, rescue operations (fire brigade), security of mass events (police), protection of critical infrastructure, and broadly understood diagnostic problems. The examples presented in the article show a wide range of possible uses of these methods in both observation and measurement. They are relatively innovative solutions and require the elaboration of a series of hardware and algorithmic issues. However, already at this stage it is clear that active acquisition and visualization methods show high potential for this type of information solution. Keywords: active imaging, augmented reality, digital image processing

  19. Recent Advancements in Microwave Imaging Plasma Diagnostics

    Energy Technology Data Exchange (ETDEWEB)

    H. Park; C.C. Chang; B.H. Deng; C.W. Domier; A.J.H. Donni; K. Kawahata; C. Liang; X.P. Liang; H.J. Lu; N.C. Luhmann, Jr.; A. Mase; H. Matsuura; E. Mazzucato; A. Miura; K. Mizuno; T. Munsat; K. and Y. Nagayama; M.J. van de Pol; J. Wang; Z.G. Xia; W-K. Zhang

    2002-03-26

    Significant advances in microwave and millimeter wave technology over the past decade have enabled the development of a new generation of imaging diagnostics for current and envisioned magnetic fusion devices. Prominent among these are revolutionary microwave electron cyclotron emission imaging (ECEI), microwave phase imaging interferometers, imaging microwave scattering and microwave imaging reflectometer (MIR) systems for imaging electron temperature and electron density fluctuations (both turbulent and coherent) and profiles (including transport barriers) on toroidal devices such as tokamaks, spherical tori, and stellarators. The diagnostic technology is reviewed, and typical diagnostic systems are analyzed. Representative experimental results obtained with these novel diagnostic systems are also presented.

  20. Ant Colony Clustering Algorithm and Improved Markov Random Fusion Algorithm in Image Segmentation of Brain Images

    Directory of Open Access Journals (Sweden)

    Guohua Zou

    2016-12-01

    Full Text Available New medical imaging technology, such as computed tomography (CT) and magnetic resonance imaging (MRI), has been widely used in all aspects of medical diagnosis. The purpose of these imaging techniques is to obtain various qualitative and quantitative data about the patient comprehensively and accurately, and to provide correct digital information for diagnosis, treatment planning and evaluation after surgery. MR imaging has a clear diagnostic advantage for brain diseases. However, as the requirements on brain image definition and quantitative analysis keep increasing, better segmentation of MR brain images is necessary. The FCM (fuzzy C-means) algorithm is widely applied in image segmentation, but it has some shortcomings, such as long computation time and poor anti-noise capability. In this paper, the ant colony algorithm is first used to determine the cluster centers, and their number, for the FCM algorithm, so as to improve its running speed. An improved Markov random field model is then used to refine the algorithm, so that its anti-noise ability is improved. Experimental results show that the algorithm put forward in this paper has obvious advantages in image segmentation speed and segmentation quality.
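
The FCM iteration at the core of the proposed method can be sketched as follows. This is a plain FCM baseline, not the authors' implementation: the paper seeds the cluster centers (and their number) with an ant colony algorithm and adds a Markov random field refinement, both omitted here; the random initialization below is a stand-in.

```python
import numpy as np

def fcm(X, n_clusters, m=2.0, iters=100, seed=0):
    """Fuzzy C-means: alternate membership and centroid updates.
    X: (n_samples, n_features). Returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships sum to 1
    for _ in range(iters):
        Um = U ** m                              # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                    # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True) # standard FCM membership update
    return centers, U
```

For image segmentation, X would hold the pixel intensities (or feature vectors), and each pixel is assigned to the cluster with the largest membership.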

  1. Differential diagnostic algorithm for diseases manifested with heart murmurs syndrome.

    Science.gov (United States)

    Naumov, Leonid B

    2009-08-01

    Diagnostic interpretation of heart murmurs on auscultation is accompanied by frequent errors, creating serious clinical, pedagogical, organizational and social problems. The standard nosological principle of describing clinical information, from the diagnosis (a disease name) to the description of symptoms and signs, contradicts real clinical practice, which proceeds from revealing symptoms, through differential diagnostics, to establishing a diagnosis. The differential diagnostic algorithm developed by the author is based on the opposite, syndromic principle of thinking, from the signs to the diagnosis. It fully corresponds to the practical purposes of reliable diagnostics of 35 diseases manifested by heart murmurs on auscultation.

  2. Image Compression Algorithms Using Dct

    Directory of Open Access Journals (Sweden)

    Er. Abhishek Kaushik

    2014-04-01

    Full Text Available Image compression is the application of data compression to digital images. The discrete cosine transform (DCT) is a technique for converting a signal into elementary frequency components and is widely used in image compression. Here we develop some simple functions to compute the DCT and to compress images. An image compression algorithm was implemented in MATLAB code and modified to perform better when implemented in a hardware description language. The IMAP and IMAQ blocks of MATLAB were used to analyse and study the results of image compression using the DCT, with varying numbers of coefficients used to show the resulting image and the error image relative to the original. Image compression is studied using the 2-D discrete cosine transform: the original image is transformed in 8-by-8 blocks and then inverse transformed in 8-by-8 blocks to create the reconstructed image, with the inverse DCT performed using a subset of the DCT coefficients. The error image (the difference between the original and reconstructed image) is displayed, and the error for each image is calculated over the various numbers of DCT coefficients selected by the user, reported as the mean square error (MSE) to quantify the accuracy and compression of the resulting image.
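
The block-DCT experiment described above can be sketched outside MATLAB. The following is an illustrative Python version under stated assumptions: an orthonormal DCT-II matrix is built directly, image dimensions are assumed to be multiples of 8, and coefficients are selected by magnitude per block rather than by a user-chosen mask.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def compress_blocks(img, keep=10):
    """8x8 block DCT: keep the `keep` largest-magnitude coefficients per
    block, inverse-transform, and report MSE against the original."""
    C = dct_matrix(8)
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            block = img[i:i+8, j:j+8].astype(float)
            coeff = C @ block @ C.T                 # forward 2-D DCT
            thresh = np.sort(np.abs(coeff).ravel())[-keep]
            coeff[np.abs(coeff) < thresh] = 0.0     # discard small coefficients
            out[i:i+8, j:j+8] = C.T @ coeff @ C     # inverse 2-D DCT
    mse = float(np.mean((img - out) ** 2))
    return out, mse
```

Because the transform is orthonormal, keeping all 64 coefficients reconstructs the block exactly, and the MSE grows monotonically as fewer coefficients are kept, which is exactly the curve the abstract describes measuring.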

  3. Advanced Imaging Algorithms for Radiation Imaging Systems

    Energy Technology Data Exchange (ETDEWEB)

    Marleau, Peter [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

    The intent of the proposed work, in collaboration with the University of Michigan, is to develop the algorithms that will bring the analysis from qualitative images to quantitative attributes of objects containing SNM. The first step to achieving this is to develop an in-depth understanding of the intrinsic errors associated with the deconvolution and MLEM algorithms. A significant new effort will be undertaken to relate the image data to a posited three-dimensional model of geometric primitives that can be adjusted to get the best fit. In this way, parameters of the model such as sizes, shapes, and masses can be extracted for both radioactive and non-radioactive materials. This model-based algorithm will need the integrated response of a hypothesized configuration of material to be calculated many times. As such, both the MLEM and the model-based algorithm require significant increases in calculation speed in order to converge to solutions in practical amounts of time.

  4. Listless zerotree image compression algorithm

    Science.gov (United States)

    Lian, Jing; Wang, Ke

    2006-09-01

    In this paper, an improved zerotree structure and a new coding procedure are adopted, which improve the quality of the reconstructed images. Moreover, the lists in SPIHT are replaced by flag maps, and a lifting scheme is adopted to realize the wavelet transform, which lowers the memory requirements and speeds up the coding process. Experimental results show that the algorithm is more effective and efficient than SPIHT.

  5. OPTIMIZATION OF DIAGNOSTIC IMAGING IN BREAST CANCER

    Directory of Open Access Journals (Sweden)

    S. A. Velichko

    2015-01-01

    Full Text Available The paper presents the results of breast imaging for 47,200 women. Breast cancer was detected in 862 (1.9%) patients, fibroadenoma in 1267 (2.7%) patients, and isolated breast cysts in 1162 (2.4%) patients. Different types of fibrocystic breast disease (adenosis, diffuse fibrocystic changes, local fibrosis and others) were observed in 60.1% of women. Problems in visualizing breast cancer on mammography against the appearance of fibrocystic mastopathy (sclerosing adenosis, fibrous bands along the ducts) have been analyzed. Data on the development of diagnostic algorithms, including modern techniques for ultrasound and interventional radiology aimed at detecting early breast cancer, have been presented.

  6. Image Compression using GSOM Algorithm

    Directory of Open Access Journals (Sweden)

    SHABBIR AHMAD

    2015-10-01

    Conventional techniques for data compression include Huffman coding, the Shannon-Fano method, LZ77 and run-length encoding. A traditional approach to reducing the large amount of image data is to discard some redundancy, accepting some noise in the reconstruction. We present a neural-network-based growing self-organizing map (GSOM) technique as a reliable and efficient way to achieve vector quantization, a typical application of which is image compression. Moreover, Kohonen networks realize a topology-preserving mapping between an input and an output space. This feature can be used to build new compression schemes that obtain a better compression rate than classical methods such as JPEG without reducing image quality. Experimental results show that the proposed algorithm improves the compression ratio for BMP, JPG and TIFF files.
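    The record gives no implementation details, so the sketch below only illustrates the underlying idea: Kohonen-style vector quantization with a plain (non-growing) 1-D SOM whose trained units form the codebook. The "image blocks" are invented toy vectors.

```python
import random

def train_som(data, n_units=4, epochs=60, lr0=0.5):
    """Train a 1-D self-organizing map whose units serve as a VQ codebook."""
    random.seed(0)
    dim = len(data[0])
    units = [[random.random() for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)              # learning rate decays to 0
        neigh = 0.5 if epoch < epochs // 3 else 0.0  # neighbourhood shrinks to BMU only
        for x in data:
            # Best matching unit: nearest codebook entry to the input vector.
            bmu = min(range(n_units),
                      key=lambda k: sum((units[k][d] - x[d]) ** 2 for d in range(dim)))
            for k in range(n_units):
                h = 1.0 if k == bmu else (neigh if abs(k - bmu) == 1 else 0.0)
                for d in range(dim):
                    units[k][d] += lr * h * (x[d] - units[k][d])
    return units

def quantize(x, units):
    """Code a vector as the index of its nearest codebook entry."""
    return min(range(len(units)),
               key=lambda k: sum((units[k][d] - x[d]) ** 2 for d in range(len(x))))

blocks = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],   # toy "dark" image blocks
          [1.0, 1.0], [0.9, 1.0], [1.0, 0.9]]   # toy "bright" image blocks
codebook = train_som(blocks)
```

    Compression comes from transmitting only the codebook plus one small index per block; a GSOM additionally grows new units where quantization error is high.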

  7. Methodology, models and algorithms in thermographic diagnostics

    CERN Document Server

    Živčák, Jozef; Madarász, Ladislav; Rudas, Imre J

    2013-01-01

    This book presents the methodology and techniques of thermographic applications, with a focus primarily on medical thermography implemented for parametrizing the diagnostics of the human body. The first part of the book describes the basics of infrared thermography, the possibilities of thermographic diagnostics and the physical nature of thermography. The second half includes tools of intelligent engineering applied to the solving of selected applications and projects. Thermographic diagnostics was applied to the problems of paraplegia and tetraplegia and carpal tunnel syndrome (CTS). The results of the research activities were created with the cooperation of the four projects within the Ministry of Education, Science, Research and Sport of the Slovak Republic entitled Digital control of complex systems with two degrees of freedom, Progressive methods of education in the area of control and modeling of complex object oriented systems on aircraft turbocompressor engines, Center for research of control of te...

  8. Spaceborne SAR Imaging Algorithm for Coherence Optimized.

    Directory of Open Access Journals (Sweden)

    Zhiwei Qiu

    This paper proposes a SAR imaging algorithm with maximum coherence, based on existing SAR imaging algorithms. The basic idea of SAR imaging processing is that the output signal attains its maximum signal-to-noise ratio (SNR) when optimal imaging parameters are used. A traditional imaging algorithm can achieve the best focusing effect but introduces decoherence in the subsequent interferometric processing. The algorithm proposed in this paper applies consistent imaging parameters to the SAR echoes during focusing. Although the SNR of the output signal is slightly reduced, coherence is largely preserved, and an interferogram of high quality is finally obtained. Two scenes of Envisat ASAR data over Zhangbei are employed to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application.

  9. Image Compression Using Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    Ryan Rey M. Daga

    2012-09-01

    Image compression techniques are important and useful in data storage and image transmission through the Internet. These techniques eliminate redundant information in an image, which minimizes its physical storage requirement. Numerous image compression algorithms have been developed, but the resulting compression is still suboptimal. The harmony search algorithm (HSA), a meta-heuristic optimization algorithm inspired by the music improvisation process of musicians, was applied as the underlying algorithm for image compression. Experimental results show that it is feasible to use the harmony search algorithm for image compression. The HSA-based image compression technique was able to compress colored and grayscale images with minimal visual information loss.
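    The improvisation loop of harmony search is compact enough to sketch. The sketch below minimizes a simple test function rather than an image-compression objective; the parameter values (harmony memory size, HMCR, PAR, bandwidth) are typical textbook choices, not those of the paper.

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
    """Minimise f over the box 'bounds' with a basic harmony search."""
    random.seed(1)
    dim = len(bounds)
    memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:            # memory consideration
                v = random.choice(memory)[d]
                if random.random() < par:         # pitch adjustment
                    v = min(hi, max(lo, v + random.uniform(-bw, bw)))
            else:                                 # random selection
                v = random.uniform(lo, hi)
            new.append(v)
        # Replace the worst harmony if the improvised one is better.
        worst = max(range(hms), key=lambda i: f(memory[i]))
        if f(new) < f(memory[worst]):
            memory[worst] = new
    return min(memory, key=f)

# Demo on a 2-D sphere function; an image codec would instead score, e.g.,
# reconstruction error of a candidate codebook or transform parameters.
best = harmony_search(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 2)
```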

  10. Benchmarking Diagnostic Algorithms on an Electrical Power System Testbed

    Science.gov (United States)

    Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Wright, Stephanie

    2009-01-01

    Diagnostic algorithms (DAs) are key to enabling automated health management. These algorithms are designed to detect and isolate anomalies of either a component or the whole system based on observations received from sensors. In recent years a wide range of algorithms, both model-based and data-driven, have been developed to increase autonomy and improve system reliability and affordability. However, the lack of support to perform systematic benchmarking of these algorithms continues to create barriers for effective development and deployment of diagnostic technologies. In this paper, we present our efforts to benchmark a set of DAs on a common platform using a framework that was developed to evaluate and compare various performance metrics for diagnostic technologies. The diagnosed system is an electrical power system, namely the Advanced Diagnostics and Prognostics Testbed (ADAPT) developed and located at the NASA Ames Research Center. The paper presents the fundamentals of the benchmarking framework, the ADAPT system, description of faults and data sets, the metrics used for evaluation, and an in-depth analysis of benchmarking results obtained from testing ten diagnostic algorithms on the ADAPT electrical power system testbed.
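    Benchmarking frameworks of this kind score each DA's output against fault-injection ground truth. Below is a simplified sketch of such scoring with invented fault labels and a reduced metric set; the actual framework evaluates a richer set of metrics, including detection time.

```python
def score_runs(truth, diagnosis):
    """Compare per-scenario diagnoses against ground-truth fault labels."""
    faults = sum(1 for t in truth if t != "nominal")
    nominal = len(truth) - faults
    detected = sum(1 for t, d in zip(truth, diagnosis)
                   if t != "nominal" and d != "nominal")
    isolated = sum(1 for t, d in zip(truth, diagnosis)
                   if t != "nominal" and d == t)
    false_alarms = sum(1 for t, d in zip(truth, diagnosis)
                       if t == "nominal" and d != "nominal")
    return {
        "detection_rate": detected / faults if faults else 1.0,       # any fault flagged
        "false_alarm_rate": false_alarms / nominal if nominal else 0.0,
        "isolation_accuracy": isolated / faults if faults else 1.0,   # exact fault named
    }
```

    Running every DA over the same scenario set and tabulating these numbers is what makes the comparison in the paper systematic rather than anecdotal.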

  11. Radiogenomic imaging: linking diagnostic imaging and molecular diagnostics

    Institute of Scientific and Technical Information of China (English)

    Goyen, Mathias

    2014-01-01

    Radiogenomic imaging refers to the correlation between cancer imaging features and gene expression and is one of the most promising areas within science and medicine. High-throughput biological techniques have reshaped the perspective of biomedical research allowing for fast and efficient assessment of the entire molecular topography of a cell’s physiology providing new insights into human cancers. The use of non-invasive imaging tools for gene expression profiling of solid tumors could serve as a means for linking specific imaging features with specific gene expression patterns thereby allowing for more accurate diagnosis and prognosis and obviating the need for high-risk invasive biopsy procedures. This review focuses on the medical imaging part as one of the main drivers for the development of radiogenomic imaging.

  12. The accuracy of the Edinburgh diplopia diagnostic algorithm.

    Science.gov (United States)

    Butler, L; Yap, T; Wright, M

    2016-06-01

    Purpose: To assess the diagnostic accuracy of the Edinburgh diplopia diagnostic algorithm. Methods: This was a prospective study. Details of consecutive patients referred to ophthalmology clinics at Falkirk Community Hospital and Princess Alexandra Eye Pavilion, Edinburgh, with double vision were collected by the clinician first seeing the patient and passed to the investigators. The investigators then assessed the patient using the algorithm. An assessment of the degree of concordance between the 'algorithm-assisted' diagnosis and the 'gold standard' diagnosis, made by a consultant ophthalmologist, was then carried out. The accuracy of the pre-algorithm diagnosis made by the referrer was also noted. Results: All patients referred with diplopia were eligible for inclusion. Fifty-one patients were assessed; six were excluded. The pre-algorithm accuracy of referrers was 24% (10/41). The algorithm-assisted diagnosis was correct 82% (37/45) of the time. It correctly diagnosed: cranial nerve (CN) III palsy in 6/6, CN IV palsy in 7/8, CN VI palsy in 12/12, internuclear ophthalmoplegia in 4/4, restrictive myopathy in 4/4, media opacity in 1/1, and blurred vision in 3/3. The algorithm-assisted diagnosis was wrong in 18% (8/45) of the patients. Conclusions: The baseline diagnostic accuracy of non-ophthalmologists rose from 24 to 82% when patients were assessed using the algorithm. The improvement in diagnostic accuracy resulting from the use of the algorithm would, hopefully, result in more accurate triage of patients with diplopia referred to the hospital eye service. We hope we have demonstrated its potential as a learning tool for inexperienced clinicians.

  13. Proposed diagnostic algorithm for patients with suspected mastocytosis

    DEFF Research Database (Denmark)

    Valent, P; Escribano, L; Broesby-Olsen, S

    2014-01-01

    a diagnostic challenge. In the light of this unmet need, we developed a diagnostic algorithm for patients with suspected mastocytosis. In adult patients with typical lesions of mastocytosis in the skin, a bone marrow (BM) biopsy should be considered, regardless of the basal serum tryptase concentration...... is usually not required, even if the tryptase level is increased. Although validation is required, it can be expected that the algorithm proposed herein will facilitate the management of patients with suspected mastocytosis and help avoid unnecessary referrals and investigations....

  14. Towards Robust Image Matching Algorithms

    Science.gov (United States)

    Parsons, Timothy J.

    1984-12-01

    The rapid advance in digital electronics during recent years has enabled the real-time hardware implementation of many basic image processing techniques, and these methods are finding increasing use in both commercial and military applications where a superiority to existing systems can be demonstrated. The potential superiority of an entirely passive, automatic image processing based navigation system over less accurate, active radar-based navigation systems such as TERCOM is evident. By placing a sensor on board an aircraft or missile, together with the appropriate processing power and enough memory to store a reference image or a map of the planned route, large-scale features extracted from the scene available to the sensor can be compared with the same features stored in memory. The difference between the aircraft's actual position and its desired position can then be evaluated and the appropriate navigational correction undertaken. This paper summarizes work carried out at British Aerospace Hatfield to investigate various classes of algorithms and solutions which would render a robust image matching system viable for such an automatic system flying at low level with a thermal I.R. sensor.

  15. The neutron imaging diagnostic at NIF (invited).

    Science.gov (United States)

    Merrill, F E; Bower, D; Buckles, R; Clark, D D; Danly, C R; Drury, O B; Dzenitis, J M; Fatherley, V E; Fittinghoff, D N; Gallegos, R; Grim, G P; Guler, N; Loomis, E N; Lutz, S; Malone, R M; Martinson, D D; Mares, D; Morley, D J; Morgan, G L; Oertel, J A; Tregillis, I L; Volegov, P L; Weiss, P B; Wilde, C H; Wilson, D C

    2012-10-01

    A neutron imaging diagnostic has recently been commissioned at the National Ignition Facility (NIF). This new system is an important diagnostic tool for inertial fusion studies at the NIF for measuring the size and shape of the burning DT plasma during the ignition stage of Inertial Confinement Fusion (ICF) implosions. The imaging technique utilizes a pinhole neutron aperture, placed between the neutron source and a neutron detector. The detection system measures the two-dimensional distribution of neutrons passing through the pinhole. This diagnostic has been designed to collect two images at two times. The long flight path for this diagnostic, 28 m, results in a chromatic separation of the neutrons, allowing the independently timed images to measure the source distribution for two neutron energies. Typically the first image measures the distribution of the 14 MeV neutrons and the second that of the 6-12 MeV neutrons. The combination of these two images has provided data on the size and shape of the burning plasma within the compressed capsule, as well as a measure of the quantity and spatial distribution of the cold fuel surrounding this core.

  16. Clinics in diagnostic imaging (175)

    Science.gov (United States)

    Krishnan, Vijay; Lim, Tze Chwan; Ho, Francis Cho Hao; Peh, Wilfred CG

    2017-01-01

    A 54-year-old man presented with change in behaviour, nocturnal enuresis, abnormal limb movement and headache of one week’s duration. The diagnosis of butterfly glioma (glioblastoma multiforme) was made based on imaging characteristics and was further confirmed by biopsy findings. As the corpus callosum is usually resistant to infiltration by tumours, a mass that involves and crosses the corpus callosum is suggestive of an aggressive neoplasm. Other neoplastic and non-neoplastic conditions that may involve the corpus callosum and mimic a butterfly glioma, as well as associated imaging features, are discussed. PMID:28361164

  17. Image Classification through integrated K- Means Algorithm

    Directory of Open Access Journals (Sweden)

    Balasubramanian Subbiah

    2012-03-01

    Image classification has a significant role in the field of medical diagnosis as well as mining analysis, and in recent years has even been used for cancer diagnosis. Clustering analysis is a valuable and useful tool for image classification and object diagnosis. A variety of clustering algorithms are available, and this remains a topic of interest in the image processing field. However, these clustering algorithms have difficulty meeting requirements for quality, automation and robustness. In this paper, we propose two clustering algorithm combinations that integrate the K-means algorithm and can tackle some of these problems. A comparison study is made between the two novel combination algorithms. The experimental results demonstrate that the proposed algorithms are very effective in producing the desired clusters of the given data sets as well as in diagnosis. These algorithms are useful for image classification as well as for extraction of objects.
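    As a baseline for the combinations discussed, plain k-means on pixel intensities can be sketched in a few lines (toy 1-D intensity data; the paper's integrated variants are not reproduced here):

```python
import random

def kmeans(points, k, iters=20):
    """Plain k-means: assign each point to its nearest centre, then re-centre."""
    random.seed(2)
    centers = random.sample(points, k)
    dim = len(points[0])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((p[d] - centers[c][d]) ** 2 for d in range(dim)))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old centre if a cluster empties
                centers[i] = [sum(p[d] for p in cl) / len(cl) for d in range(dim)]
    return centers

# Toy "image": 1-D pixel intensities from a dark and a bright region.
pixels = [[12], [14], [16], [200], [202], [204]]
centers = kmeans(pixels, k=2)
```

    Segmentation then amounts to replacing each pixel by its cluster label; the combinations in the paper wrap extra machinery around this core step.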

  18. Objective measurement for image defogging algorithms

    Institute of Scientific and Technical Information of China (English)

    郭璠; 唐琎; 蔡自兴

    2014-01-01

    Since there is a lack of methodology for assessing the performance of defogging algorithms and the existing assessment methods have some limitations, three new methods for assessing defogging algorithms are proposed. The first uses a synthetic foggy image, simulated by an image degradation model, to assess the defogging algorithm in a full-reference way; the absolute difference is computed between the synthetic image with and without fog. The other two assess the defogging algorithm in a no-reference way, either by computing the fog density of the gray-level image or by constructing an assessment system for the color image from human visual perception. For these methods, an assessment function is defined so that algorithm performance can be evaluated from the function value. In a comparison of defogging algorithms, the experimental results demonstrate the effectiveness and reliability of the proposed methods.
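    The full-reference method can be illustrated directly: synthesize fog with the standard degradation model I = J*t + A*(1 - t), then score a defogging result by its mean absolute difference from the fog-free image. Transmission t and airlight A are assumed known here, which is exactly what makes the synthetic setting useful for assessment.

```python
def degrade(img, t=0.6, airlight=255.0):
    """Foggy image from the degradation model I(x) = J(x) * t + A * (1 - t)."""
    return [[p * t + airlight * (1 - t) for p in row] for row in img]

def restore(foggy_img, t=0.6, airlight=255.0):
    """A 'perfect' defogger that inverts the model using the known t and A."""
    return [[(p - airlight * (1 - t)) / t for p in row] for row in foggy_img]

def mean_abs_diff(img_a, img_b):
    """Full-reference score: mean absolute pixel difference (0 = identical)."""
    pairs = [(a, b) for ra, rb in zip(img_a, img_b) for a, b in zip(ra, rb)]
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

clean = [[0.0, 100.0], [200.0, 50.0]]   # toy fog-free image
foggy = degrade(clean)
```

    A real defogging algorithm would be scored by `mean_abs_diff(defogged, clean)`; the ideal inversion above just shows the metric hitting its floor.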

  19. Clinics in diagnostic imaging (171)

    Science.gov (United States)

    Ooi, Su Kai Gideon; Tan, Tien Jin; Ngu, James Chi Yong

    2016-01-01

    A 46-year-old Chinese woman with a history of cholecystectomy and appendicectomy presented to the emergency department with symptoms of intestinal obstruction. Physical examination revealed central abdominal tenderness but no clinical features of peritonism. Plain radiography of the abdomen revealed a grossly distended large bowel loop with the long axis extending from the right lower abdomen toward the epigastrium, and an intraluminal air-fluid level. These findings were suspicious for an acute caecal volvulus, which was confirmed on subsequent contrast-enhanced computed tomography (CT) of the abdomen and pelvis. CT demonstrated an abnormal positional relationship between the superior mesenteric vein and artery, indicative of an underlying intestinal malrotation. This case highlights the utility of preoperative imaging in establishing the diagnosis of an uncommon cause of bowel obstruction. It also shows the importance of recognising the characteristic imaging features early, so as to ensure appropriate and expedient management, thus reducing patient morbidity arising from complications. PMID:27872936

  20. Diagnostic imaging of lipoma arborescens

    Energy Technology Data Exchange (ETDEWEB)

    Martin, S.; Hernandez, L.; Romero, J.; Lafuente, J.; Poza, A.I.; Ruiz, P. [Servicio de Radiodiagnostico, Hospital General Universitario Gregorio Maranon, c/Dr. Esquerdo, 46, E-28007 Madrid (Spain); Jimeno, M. [Servicio de Anatomia Patologica, Hospital General Universitario Gregorio Maranon, c/Dr. Esquerdo, 46, E-28007 Madrid (Spain)

    1998-06-01

    Objective. The imaging characteristics of lipoma arborescens using plain radiographs, computed tomography (CT), and magnetic resonance imaging (MRI) are described. Design and patients. Five patients with a diagnosis of lipoma arborescens are presented. Three had monoarticular involvement of the knee joint. In the remaining two patients both knees and both hips, respectively, were affected. All patients were examined using plain radiographs and MRI. CT was employed in two cases. Results and conclusions. A conclusive diagnosis with exclusion of other synovial pathologies having similar clinical and radiological behaviour can be achieved on the basis of the MRI characteristics of lipoma arborescens. The aetiology of lipoma arborescens remains unknown, but its association with previous pathology of the affected joints in all our patients supports the theory of a non-neoplastic reactive process involving the synovial membrane. (orig.) With 5 figs., 18 refs.

  1. Fuzzy logic-based diagnostic algorithm for implantable cardioverter defibrillators.

    Science.gov (United States)

    Bárdossy, András; Blinowska, Aleksandra; Kuzmicz, Wieslaw; Ollitrault, Jacky; Lewandowski, Michał; Przybylski, Andrzej; Jaworski, Zbigniew

    2014-02-01

    The paper presents a diagnostic algorithm for classifying cardiac tachyarrhythmias for implantable cardioverter defibrillators (ICDs). The main aim was to develop an algorithm that could reduce the rate of occurrence of inappropriate therapies, which are often observed in existing ICDs. To achieve low energy consumption, which is a critical factor for implantable medical devices, very low computational complexity of the algorithm was crucial. The study describes and validates such an algorithm and estimates its clinical value. The algorithm was based on heart rate variability (HRV) analysis. The input data for our algorithm were the RR-interval (I), as extracted from the raw intracardiac electrogram (EGM), and two other features of HRV called here onset (ONS) and instability (INST). Six diagnostic categories were considered: ventricular fibrillation (VF), ventricular tachycardia (VT), sinus tachycardia (ST), detection artifacts and irregularities (including extrasystoles) (DAI), atrial tachyarrhythmias (ATF) and no tachycardia (i.e. normal sinus rhythm) (NT). The initial set of fuzzy rules based on the distributions of I, ONS and INST in the six categories was optimized by means of a software tool for automatic rule assessment using simulated annealing. A training data set with 74 EGM recordings was used during optimization, and the algorithm was validated with a validation data set of 58 EGM recordings. Real-life recordings stored in defibrillator memories were used. Additionally, the algorithm was tested on two sets of recordings from the PhysioBank databases: the MIT-BIH Arrhythmia Database and the MIT-BIH Supraventricular Arrhythmia Database. A custom CMOS integrated circuit implementing the diagnostic algorithm was designed in order to estimate the power consumption. A dedicated Web site, which provides public online access to the algorithm, has been created and is available for testing. The total number of events in our training and validation sets was 132.
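    A fuzzy rule base over HRV features can be sketched with triangular membership functions. The category names follow the paper, but this sketch classifies on the mean RR interval alone and every membership boundary below is invented for illustration; the real algorithm uses I, ONS and INST with rules tuned by simulated annealing, and the DAI and ATF categories need those extra features.

```python
# Toy fuzzy-logic rhythm classifier; all boundaries are illustrative only.

def tri(x, a, b, c):
    """Triangular fuzzy membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify(rr_ms):
    """Pick the category with the highest membership for a mean RR interval (ms)."""
    memberships = {
        "VF": tri(rr_ms, 100, 200, 320),    # ventricular fibrillation: very short RR
        "VT": tri(rr_ms, 250, 350, 480),    # ventricular tachycardia
        "ST": tri(rr_ms, 400, 500, 620),    # sinus tachycardia
        "NT": tri(rr_ms, 550, 800, 1600),   # normal sinus rhythm
    }
    return max(memberships, key=memberships.get)
```

    Evaluating a handful of piecewise-linear memberships per beat is what keeps the computational cost, and hence the power budget, low.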

  2. ALGORITHM FOR IMAGE MIXING AND ENCRYPTION

    Directory of Open Access Journals (Sweden)

    Ayman M. Abdalla

    2013-04-01

    This new algorithm mixes two or more images of different types and sizes by employing a shuffling procedure combined with S-box substitution to perform lossless image encryption. It combines a stream cipher with a block cipher, at the byte level, in mixing the images. When the algorithm was implemented, empirical analysis using test images of different types and sizes showed that it is effective and resistant to attacks.
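    The shuffle-plus-S-box idea can be sketched as follows, applied here to a single byte stream rather than to the mixing of several images; the keyed permutation and S-box construction are illustrative choices, not the paper's scheme.

```python
import random

def make_sbox(seed):
    """Keyed byte substitution box and its inverse (illustrative construction)."""
    rnd = random.Random(seed)
    sbox = list(range(256))
    rnd.shuffle(sbox)
    inv = [0] * 256
    for i, v in enumerate(sbox):
        inv[v] = i
    return sbox, inv

def encrypt(data, key):
    """Shuffle byte positions with a keyed permutation, then substitute via the S-box."""
    rnd = random.Random(key)
    perm = list(range(len(data)))
    rnd.shuffle(perm)
    sbox, _ = make_sbox(key ^ 0xABCD)
    return bytes(sbox[data[i]] for i in perm)

def decrypt(cipher, key, n):
    """Invert the substitution, then undo the permutation: lossless recovery."""
    rnd = random.Random(key)
    perm = list(range(n))
    rnd.shuffle(perm)
    _, inv = make_sbox(key ^ 0xABCD)
    out = [0] * n
    for j, i in enumerate(perm):
        out[i] = inv[cipher[j]]
    return bytes(out)
```

    Both steps are bijections on the byte level, which is what makes the overall encryption lossless.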

  3. Imaging diagnostics in ovarian cancer

    DEFF Research Database (Denmark)

    Fog, Sigrid Marie Kasper; Dueholm, Margit; Marinovskij, Edvard

    2017-01-01

    OBJECTIVE: To analyze the ability of magnetic resonance imaging (MRI) and systematic evaluation at surgery to predict optimal cytoreduction in primary advanced ovarian cancer and to develop a preoperative scoring system for cancer staging. STUDY DESIGN: Preoperative MRI and standard laparotomy were performed in 99 women with either ovarian or primary peritoneal cancer. Using univariate and multivariate logistic regression analysis of a systematic description of the tumor in nine abdominal compartments obtained by MRI and during surgery plus clinical parameters, a scoring system was designed. ... MRI is able to assess ovarian cancer with peritoneal carcinomatosis with satisfactory concordance with laparotomic findings. This scoring system could be useful as a clinical guideline and should be evaluated and developed further in larger studies.

  4. ALGORITHMIC IMAGING APPROACH TO IMPOTENCE

    Directory of Open Access Journals (Sweden)

    Mahyar Ghafoori

    2012-05-01

    Impotence is a common problem that has a great impact on quality of life. Clinical evaluation usually can exclude endocrinologic imbalance, neurogenic dysfunction, and psychological problems as the etiology. A patient who fails to get an erection after injected vasoactive medications probably has hemodynamic impotence. Dynamic studies that include imaging techniques are now available to discriminate between arterial and venous pathology. Doppler ultrasound with color flow and spectral analysis, dynamic infusion corpus cavernosometry and cavernosography, and selective internal pudendal arteriography are outpatient diagnostic procedures that will differentiate, image and quantify the abnormalities in patients with hemodynamic impotence. Not all tests are needed in every patient. Each of these examinations is preceded by the intracavernosal injection of vasoactive medication. Papaverine hydrochloride, phentolamine mesylate, or prostaglandin E1 will overcome normal sympathetic tone and produce an erection by smooth muscle relaxation and arterial dilatation in a normal patient. Color-flow Doppler and spectral analysis will show the cavernosal arteries and can identify the hemodynamic effects of stricture or occlusion. Peak systolic velocity is measured; normal ranges are well established. Spectral analysis also is used to predict the presence of venous disease. Sizable venous leaks in the dorsal penile vein are readily imaged. While the technique may not adequately identify low-grade venous pathology, it will identify the size and location of the fibrous plaque formation associated with Peyronie's disease. Cavernosography or cavernosometry is a separate procedure that will quantitate the severity of venous incompetence as well as specifically identify the various avenues of systemic venous return that must be localized if venous occlusive therapy is chosen. In this study, the peak arterial systolic occlusion pressure is quantified during

  5. Marketing considerations for diagnostic imaging centers.

    Science.gov (United States)

    McCue, P

    1987-10-01

    Diagnostic imaging centers seek every possible advantage to maintain a successful practice in the face of competition from hospitals and other freestanding operators. Several radiologists and business managers involved in existing or planned centers discuss their marketing strategies, modality choices, organizational structure, and other issues pertinent to the start-up and operation of a viable free-standing operation.

  6. Comparative Study of Image Denoising Algorithms in Digital Image Processing

    Directory of Open Access Journals (Sweden)

    Aarti

    2014-05-01

    This paper presents a basic scheme for understanding the fundamentals of digital image processing and image denoising algorithms. There are three basic categories of operations in image processing, i.e. image rectification and restoration, enhancement, and information extraction. Image denoising is a basic problem in digital image processing; the main task is to make the image free from noise. Salt-and-pepper (impulse) noise, additive white Gaussian noise and blurring are the kinds of degradation that occur during transmission and capture. Several algorithms exist for denoising the image.
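    For the salt-and-pepper noise mentioned above, the standard remedy is a median filter; a minimal sketch over a list-of-lists grayscale image:

```python
def median_filter(img, ksize=3):
    """Median filter with edge clamping; robust against impulse (salt-and-pepper) noise."""
    h, w = len(img), len(img[0])
    r = ksize // 2
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            # Gather the neighbourhood, clipped at the image borders.
            window = [img[yy][xx]
                      for yy in range(max(0, y - r), min(h, y + r + 1))
                      for xx in range(max(0, x - r), min(w, x + r + 1))]
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out
```

    The median discards isolated outlier pixels entirely, whereas a mean filter would smear them into their neighbours; Gaussian noise, by contrast, is usually handled with averaging or transform-domain methods.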


  8. An efficient algorithm for color image segmentation

    Directory of Open Access Journals (Sweden)

    Shikha Yadav

    2016-09-01

    In the field of image processing, image segmentation plays an important role, focusing on splitting the whole image into segments. An important segmentation goal is a representation of the image that can be more easily analysed and conveys more information. Partitioning an image is usually realized by region-based, boundary-based or edge-based methods. In this work a hybrid approach is followed that combines improved bee colony optimization and tabu search for color image segmentation. The results produced by this hybrid approach are compared with non-sorted particle swarm optimization, the non-sorted genetic algorithm and improved bee colony optimization. Results show that the hybrid algorithm performs better than, or comparably to, the other population-based algorithms. The algorithm was implemented in MATLAB.

  9. [Algorithm of the diagnostics of trauma and degenerative diseases of the spine].

    Science.gov (United States)

    Shchedrenok, V V; Sebelev, K I; Anikeev, N V; Tiul'kin, O N; Kaurova, T A; Moguchaia, O V

    2011-01-01

    Clinical and radiological data were compared in 583 patients with trauma and degenerative diseases of the spine. The clinico-diagnostic complex included radiography of the spine (survey and functional views), magnetic resonance imaging, and computerized helical tomography of the spine with spondylometric measurements. Normal values of the cross-sectional area of the vertebral artery canal at the level of the C3-C6 vertebrae and of the volume of the intervertebral canal at different levels are presented for men and women. An algorithm of radiological diagnostics in pathology of the spine is proposed.

  10. Quantum Image Encryption Algorithm Based on Quantum Image XOR Operations

    Science.gov (United States)

    Gong, Li-Hua; He, Xiang-Tao; Cheng, Shan; Hua, Tian-Xiang; Zhou, Nan-Run

    2016-07-01

    A novel encryption algorithm for quantum images based on quantum image XOR operations is designed. The quantum image XOR operations are designed by using the hyper-chaotic sequences generated with Chen's hyper-chaotic system to control the controlled-NOT operation, which is used to encode gray-level information. The initial conditions of Chen's hyper-chaotic system are the keys, which guarantee the security of the proposed quantum image encryption algorithm. Numerical simulations and theoretical analyses demonstrate that the proposed quantum image encryption algorithm has a larger key space, higher key sensitivity, stronger resistance to statistical analysis and lower computational complexity than its classical counterparts.
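    The classical core of such schemes is XOR-ing the image data with a chaos-derived keystream, with the map's initial condition acting as the key. The sketch below substitutes a simple logistic map for Chen's hyper-chaotic system and operates on ordinary bytes rather than quantum states, purely to illustrate the keystream idea.

```python
def logistic_keystream(x0, r, n):
    """Byte keystream from a logistic map (classical stand-in for the chaotic source)."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)            # chaotic iteration on (0, 1)
        out.append(int(x * 256) & 0xFF)
    return out

def xor_cipher(data, x0=0.3456, r=3.99):
    """XOR data with the keystream; XOR is an involution, so this also decrypts."""
    ks = logistic_keystream(x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))
```

    Because the map is sensitive to initial conditions, even a tiny change in `x0` soon yields an entirely different keystream, which is the source of the key sensitivity the abstract highlights.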

  11. An algorithm for encryption of secret images into meaningful images

    Science.gov (United States)

    Kanso, A.; Ghebleh, M.

    2017-03-01

    Image encryption algorithms typically transform a plain image into a noise-like cipher image, whose appearance is an indication of encrypted content. Bao and Zhou [Image encryption: Generating visually meaningful encrypted images, Information Sciences 324, 2015] propose encrypting the plain image into a visually meaningful cover image. This improves security by masking existence of encrypted content. Following their approach, we propose a lossless visually meaningful image encryption scheme which improves Bao and Zhou's algorithm by making the encrypted content, i.e. distortions to the cover image, more difficult to detect. Empirical results are presented to show high quality of the resulting images and high security of the proposed algorithm. Competence of the proposed scheme is further demonstrated by means of comparison with Bao and Zhou's scheme.
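    One simple way to make encrypted content visually meaningful is to embed the cipher bits into the least-significant bits of a cover image, as sketched below with a flat pixel list. This is only a toy embedding; Bao and Zhou's scheme and the improvement above use more elaborate, harder-to-detect transforms.

```python
def embed(cover, payload_bits):
    """Write payload bits into the least-significant bits of the first pixels."""
    out = cover[:]
    for i, bit in enumerate(payload_bits):
        out[i] = (out[i] & ~1) | bit   # clear the LSB, then set it to the payload bit
    return out

def extract(stego, n_bits):
    """Read the hidden bits back out of the stego image."""
    return [p & 1 for p in stego[:n_bits]]
```

    Each pixel changes by at most one gray level, so the cover image still looks ordinary; detecting the payload statistically is exactly the attack the improved scheme is designed to resist.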

  12. A Robust Automated Cataract Detection Algorithm Using Diagnostic Opinion Based Parameter Thresholding for Telemedicine Application

    Directory of Open Access Journals (Sweden)

    Shashwat Pathak

    2016-09-01

    This paper proposes and evaluates an algorithm to automatically detect cataracts from color images of adult human subjects. Currently, available methods for cataract detection are based on the use of either a fundus camera or a digital single-lens reflex (DSLR) camera, both of which are very expensive. The main motive behind this work is to develop an inexpensive, robust and convenient algorithm which, in conjunction with suitable devices, will be able to diagnose the presence of cataract from true color images of an eye. An algorithm is proposed for cataract screening based on texture features: uniformity, intensity and standard deviation. These features are first computed and mapped with the diagnostic opinion of an eye expert to define the basic thresholds of the screening system, and then tested on real subjects in an eye clinic. Finally, a tele-ophthalmology model using the proposed system is suggested, which confirms the telemedicine application of the proposed system.
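    A screening rule over the three texture features can be sketched as follows. The threshold values and their directions here are placeholders (a cataractous lens region is assumed brighter and more uniform), whereas the paper calibrates the thresholds against expert diagnostic opinion.

```python
def texture_features(gray):
    """Uniformity (histogram energy), mean intensity and standard deviation."""
    pixels = [p for row in gray for p in row]   # integer pixels in 0..255
    n = len(pixels)
    mean = sum(pixels) / n
    std = (sum((p - mean) ** 2 for p in pixels) / n) ** 0.5
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    uniformity = sum((h / n) ** 2 for h in hist)
    return uniformity, mean, std

def screen(gray, u_thr=0.5, i_thr=120, s_thr=30):
    """Flag a possible cataract: a bright, uniform, low-variance lens region."""
    u, m, s = texture_features(gray)
    return u > u_thr and m > i_thr and s < s_thr
```

    In a telemedicine setting, only the three feature values (or the single boolean) would need to be transmitted for triage, not the full image.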

  13. Diagnostic imaging of craniopharyngioma; Diagnostyka obrazowa czaszkogardlakow

    Energy Technology Data Exchange (ETDEWEB)

    Gradzki, J.; Nowak, S.; Paprzycki, W. [Akademia Medyczna, Poznan (Poland)

    1993-12-31

    Forty patients with surgically and histologically confirmed craniopharyngioma were examined. CT and plain skull radiography were performed in all of these patients; MRI was used to examine 4 patients. The plain radiographs were compared with CT, which permits detection of the tumor cyst. The efficacy of the above-mentioned diagnostic techniques was compared with the surgical findings. (author). 7 refs, 5 figs, 2 tabs.

  14. AN EFFICIENT BTC IMAGE COMPRESSION ALGORITHM

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Block truncation coding (BTC) is a simple and fast image compression technique suitable for real-time image transmission, and it has high channel-error resisting capability and good reconstructed image quality. The main shortcoming of the original BTC algorithm is the high bit rate (normally 2 bits/pixel). In order to reduce the bit rate, an efficient BTC image compression algorithm is presented in this paper. In the proposed algorithm, a simple look-up-table method is presented for coding the higher mean and the lower mean of a block without any extra distortion, and a prediction technique is introduced to reduce the number of bits used to code the bit plane, with some extra distortion. The test results prove the effectiveness of the proposed algorithm.
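For reference, the baseline 2 bits/pixel BTC that the paper improves upon can be sketched as follows (the classic moment-preserving variant: a bit plane plus two reconstruction levels per block; the paper's look-up-table coding of the two means and its bit-plane prediction are not reproduced here):

```python
import numpy as np

def btc_block(block):
    """Classic BTC on one block: keep the bit plane (pixel >= mean) plus two
    levels chosen so the block mean and variance are preserved."""
    b = block.astype(float)
    n = b.size
    m, s = b.mean(), b.std()
    plane = b >= m
    q = int(plane.sum())                          # pixels at/above the mean
    if q in (0, n):                               # flat block: one level
        lo = hi = m
    else:
        lo = m - s * np.sqrt(q / (n - q))         # level for pixels below mean
        hi = m + s * np.sqrt((n - q) / q)         # level for pixels >= mean
    out = np.where(plane, hi, lo)
    return np.clip(np.rint(out), 0, 255).astype(block.dtype)

def btc(img, bs=4):
    """Apply BTC block-by-block to a grayscale image whose sides are
    multiples of bs."""
    out = np.empty_like(img)
    for i in range(0, img.shape[0], bs):
        for j in range(0, img.shape[1], bs):
            out[i:i+bs, j:j+bs] = btc_block(img[i:i+bs, j:j+bs])
    return out
```

Storing one bit per pixel plus the two levels per 4x4 block is what yields the quoted 2 bits/pixel rate.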

  15. CT imaging in acute pulmonary embolism: diagnostic strategies

    Energy Technology Data Exchange (ETDEWEB)

    Wildberger, Joachim E.; Mahnken, Andreas H.; Das, Marco; Guenther, Rolf W. [University of Technology (RWTH), Department of Diagnostic Radiology, University Hospital, Aachen (Germany); Kuettner, Axel [Eberhard Karls University, Department of Diagnostic Radiology, Tuebingen (Germany); Lell, Michael [Friedrich Alexander University, Department of Diagnostic Radiology, Erlangen (Germany)

    2005-05-01

    Computed tomography pulmonary angiography (CTA) has increasingly become accepted as a widely available, safe, cost-effective, and accurate method for a quick and comprehensive diagnosis of acute pulmonary embolism (PE). Pulmonary catheter angiography is still considered the gold standard and final imaging method in many diagnostic algorithms. However, spiral CTA has become established as the first imaging test in clinical routine due to its high negative predictive value for clinically relevant PE. Despite the direct visualization of clot material, depiction of cardiac and pulmonary function in combination with the quantification of pulmonary obstruction helps to grade the severity of PE for further risk stratification and to monitor the effect of thrombolytic therapy. Because PE and deep venous thrombosis are two different aspects of the same disease, additional indirect CT venography may be a valuable addition to the initial diagnostic algorithm - if this was positive for PE - and demonstration of the extent and localization of deep venous thrombosis has an impact on clinical management. Additional and alternate diagnoses add to the usefulness of this method. Using advanced multislice spiral CT technology, some practitioners have advocated CTA as the sole imaging tool for routine clinical assessment in suspected acute PE. This will simplify standards of practice in the near future. (orig.)

  16. Image compression algorithm using wavelet transform

    Science.gov (United States)

    Cadena, Luis; Cadena, Franklin; Simonov, Konstantin; Zotin, Alexander; Okhotnikov, Grigory

    2016-09-01

    Within the framework of multi-resolution analysis, a study of an image compression algorithm using the Haar wavelet has been performed. We have studied the dependence of the image quality on the compression ratio. Also, the variation of the compression level of the studied images has been obtained. It is shown that a compression ratio in the range of 8-10 is optimal for environmental monitoring. Under these conditions the compression level is in the range of 1.7-4.2, depending on the type of image. It is shown that the algorithm used is more convenient and has more advantages than WinRAR. The Haar wavelet algorithm has improved the method of signal and image processing.
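A minimal single-level version of such a Haar-based compressor can be sketched as follows; the study's multi-resolution analysis uses deeper decompositions and quantization, and the `keep` fraction below is an assumed coefficient-retention parameter:

```python
import numpy as np

def haar2d(x):
    """One level of the orthonormal 2D Haar transform: average/difference
    along rows, then along columns (even-sized input assumed)."""
    def step(a):                                   # transform last axis
        lo = (a[..., ::2] + a[..., 1::2]) / np.sqrt(2)
        hi = (a[..., ::2] - a[..., 1::2]) / np.sqrt(2)
        return np.concatenate([lo, hi], axis=-1)
    return step(step(x).swapaxes(0, 1)).swapaxes(0, 1)

def ihaar2d(c):
    """Exact inverse of haar2d."""
    def istep(a):
        h = a.shape[-1] // 2
        lo, hi = a[..., :h], a[..., h:]
        out = np.empty_like(a)
        out[..., ::2] = (lo + hi) / np.sqrt(2)
        out[..., 1::2] = (lo - hi) / np.sqrt(2)
        return out
    return istep(istep(c.swapaxes(0, 1)).swapaxes(0, 1))

def compress(img, keep=0.25):
    """Zero all but the largest-magnitude fraction `keep` of coefficients,
    then reconstruct; a toy stand-in for coefficient quantization."""
    c = haar2d(img.astype(float))
    thr = np.quantile(np.abs(c), 1 - keep)
    return ihaar2d(np.where(np.abs(c) >= thr, c, 0))
```

With `keep=1.0` the round trip is lossless; lowering `keep` trades quality for compression, which is the trade-off the study measures.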

  17. Image Series Segmentation and Improved MC Algorithm

    Institute of Scientific and Technical Information of China (English)

    WAN Wei-bing; SHI Peng-fei

    2008-01-01

    A semiautomatic segmentation method based on active contours is proposed for computed tomography (CT) image series. First, to obtain an initial contour, one image slice is segmented exactly by the C-V method based on the Mumford-Shah model. Next, the computer segments the neighboring slices automatically, one by one, using the snake model. During segmentation of the image slices, the former slice's boundary, used as the next slice's initial contour, may cross the next slice's real boundary and never return to the right position. To avoid the contour skipping over, the distance variance between two slices is evaluated against a threshold, which decides whether to reinitialize. Moreover, a new improved marching cubes (MC) algorithm based on the segmentation boundaries of the 2D image series is given for 3D image reconstruction. Compared with the standard method, the proposed algorithm reduces detection time and needs less memory. The effectiveness and capabilities of the algorithm were illustrated by experimental results.

  18. User-friendly imaging algorithms for interferometry

    Science.gov (United States)

    Young, John; Thiébaut, Éric; Duvert, Gilles; Vannier, Martin; Garcia, Paulo; Mella, Guillaume

    2016-08-01

    OPTICON currently supports a Joint Research Activity (JRA) dedicated to providing easy to use image reconstruction algorithms for optical/IR interferometric data. This JRA aims to provide state-of-the-art image reconstruction methods with a common interface and comprehensive documentation to the community. These tools will provide the capability to compare the results of using different settings and algorithms in a consistent and unified way. The JRA is also providing tutorials and sample datasets to introduce the principles of image reconstruction and illustrate how to use the software products. We describe the design of the imaging tools, in particular the interface between the graphical user interface and the image reconstruction algorithms, and summarise the current status of their implementation.

  19. [Osteoarthrosis: implementation of current diagnostic and therapeutic algorithms].

    Science.gov (United States)

    Meza-Reyes, Gilberto; Aldrete-Velasco, Jorge; Espinosa-Morales, Rolando; Torres-Roldán, Fernando; Díaz-Borjón, Alejandro; Robles-San Román, Manuel

    2017-01-01

    Among the different clinical presentations of osteoarthritis, gonarthrosis and coxarthrosis exhibit the highest prevalence. In this paper, the characteristics of osteoarthritis and the different scales for assessment and classification of this pathology are presented, together with an overview of the current evidence on diagnostic and treatment algorithms for osteoarthritis, with emphasis on the knee and hip, as these are the most frequently affected sites; a rational procedure for monitoring patients with osteoarthritis, based on characteristic symptoms and the severity of the condition, is also set out. Finally, reference is made to the therapeutic benefits of the recent introduction of viscosupplementation with Hylan GF-20.

  20. Image enhancement of digital periapical radiographs according to diagnostic tasks

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jin Woo; Han, Won Jeong; Kim, Eun Kyung [Dept. of Oral and Maxillofacial Radiology, Dankook University College of Dentistry, Cheonan (Korea, Republic of)

    2014-03-15

    This study was performed to investigate the effect of image enhancement of periapical radiographs according to the diagnostic task. Eighty digital intraoral radiographs were obtained from patients and classified into four groups according to the diagnostic tasks of dental caries, periodontal diseases, periapical lesions, and endodontic files. All images were enhanced differently using five processing techniques. Three radiologists blindly compared the subjective image quality of the original images and the processed images using a 5-point scale. There were significant differences between the image quality of the processed images and that of the original images (P<0.01) in all the diagnostic task groups. The processing techniques showed significantly different efficacy according to the diagnostic task (P<0.01). Image enhancement affects the image quality differently depending on the diagnostic task, and the use of optimal parameters is important for each diagnostic task.

  1. Infrared imaging diagnostics for INTF ion beam

    Science.gov (United States)

    Sudhir, D.; Bandyopadhyay, M.; Pandey, R.; Joshi, J.; Yadav, A.; Rotti, C.; Bhuyan, M.; Bansal, G.; Soni, J.; Tyagi, H.; Pandya, K.; Chakraborty, A.

    2015-04-01

    In India, a testing facility named INTF [1] (Indian test facility) is being built at the Institute for Plasma Research to characterize the ITER Diagnostic Neutral Beam (DNB). INTF is expected to deliver a 60 A negative hydrogen ion beam current at an energy of 100 keV. The beam will be operated with 5 Hz modulation having a 3 s ON/20 s OFF duty cycle. To characterize the beam parameters, several diagnostics are at different stages of design and development. One of them will be a beam dump, made of carbon fiber composite (CFC) plates placed perpendicular to the beam direction at a distance of approximately 1 m. The beam dump needs to handle ~6 MW of beam power with a peak power density of ~38.5 MW/m2. The diagnostic is based on thermal (infrared, IR) imaging of the footprint of the 1280 beamlets falling on the beam dump, using four IR cameras viewing the rear side of the dump. The beam dump will be able to measure beam uniformity and beamlet divergence. It may also give information on the relative variation of negative ion stripping losses between beam pulses. The design of this CFC-based beam dump needs to address several physics and engineering issues, including some specific inputs from manufacturers. The manuscript describes an overview of the diagnostic system and its design methodology, highlighting those issues and the present status of its development.

  2. Empirical Evaluation of Diagnostic Algorithm Performance Using a Generic Framework

    Directory of Open Access Journals (Sweden)

    Arjan van Gemund

    2010-01-01

    Full Text Available A variety of rule-based, model-based and data-driven techniques have been proposed for the detection and isolation of faults in physical systems. However, there have been few efforts to comparatively analyze the performance of these approaches on the same system under identical conditions. One reason for this was the lack of a standard framework to perform this comparison. In this paper we introduce a framework, called DXF, that provides a common language to represent the system description, sensor data and the fault diagnosis results; a run-time architecture to execute the diagnosis algorithms under identical conditions and collect the diagnosis results; and an evaluation component that can compute performance metrics from the diagnosis results to compare the algorithms. We have used DXF to perform an empirical evaluation of 13 diagnostic algorithms on a hardware testbed (ADAPT) at NASA Ames Research Center and on a set of synthetic circuits typically used as benchmarks in the model-based diagnosis community. Based on these empirical data we analyze the performance of each algorithm and suggest directions for future development.

  3. Diagnostic Accuracy Comparison of Artificial Immune Algorithms for Primary Headaches

    Directory of Open Access Journals (Sweden)

    Ufuk Çelik

    2015-01-01

    Full Text Available The present study evaluated the diagnostic accuracy of immune system algorithms with the aim of classifying the primary types of headache that are not related to any organic etiology. They are divided into four types: migraine, tension, cluster, and other primary headaches. With this main objective in mind, three different neurologists entered the medical records of 850 patients into our web-based expert system hosted on our project web site. In the evaluation process, Artificial Immune Systems (AIS) were used as the classification algorithms. The AIS are classification algorithms inspired by the biological immune system mechanism, which involves significant and distinct capabilities. These algorithms simulate the specialties of the immune system, such as discrimination, learning, and the memorizing process, in order to be used for classification, optimization, or pattern recognition. According to the results, the accuracy of the classifiers used in this study ranged from 95% to 99%, except for one that yielded 71% accuracy.

  4. Least significant qubit algorithm for quantum images

    Science.gov (United States)

    Sang, Jianzhi; Wang, Shen; Li, Qiong

    2016-11-01

    To study the feasibility of the classical image least significant bit (LSB) information hiding algorithm on a quantum computer, a least significant qubit (LSQb) information hiding algorithm for quantum images is proposed. In this paper, we focus on a novel quantum representation for color digital images (NCQI). Firstly, by designing a three-qubit comparator and unitary operators, the reasonability and feasibility of LSQb based on NCQI are presented. Then, the concrete LSQb information hiding algorithm is proposed, which can embed the secret qubits into the least significant qubits of the RGB channels of the quantum cover image. A quantum circuit of the LSQb information hiding algorithm is also illustrated. Furthermore, the secret extraction algorithm and circuit are illustrated through the use of controlled-swap gates. The two merits of our algorithm are: (1) it is absolutely blind and (2) when extracting the secret binary qubits, it does not need any quantum measurement operation or any other help from a classical computer. Finally, simulation and comparative analysis show the performance of our algorithm.

  5. Algorithms in radiology and medical imaging.

    Science.gov (United States)

    Athanasoulis, C A; Lee, A K

    1987-08-01

    As a tool in clinical decision making, algorithms deserve careful consideration. The potential use or abuse of algorithms in rationing health care renders such consideration essential. In radiology and medical imaging, algorithms have been applied as teaching tools in the conference room setting. These teaching decision trees, however, may not be applicable in the clinical situation. If an algorithmic approach to clinical radiology is pursued, several issues should be considered. Specifically, the application, design, designer, economics, and universality of the algorithms must be addressed. As an alternative to the wide dissemination of clinical algorithms, the authors propose the development of consensus opinions among specialists and the promulgation of the principle of radiologist-consultant-decision maker. A decision team is preferable to a decision tree.

  6. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R

    2010-01-01

    A cookbook of algorithms for common image processing applications. Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists.

  7. Ionosphere correction algorithm for spaceborne SAR imaging

    Institute of Scientific and Technical Information of China (English)

    Lin Yang; Mengdao Xing; Guangcai Sun

    2016-01-01

    For spaceborne synthetic aperture radar (SAR) imaging, the dispersive ionosphere has significant effects on the propagation of the low frequency (especially P-band) radar signal. The ionospheric effects can be a significant source of phase error in the radar signal, which causes a degradation of the image quality in a spaceborne SAR imaging system. The background ionospheric effects on spaceborne SAR are analyzed through modeling and simulation, and a qualitative and quantitative analysis based on the spatio-temporal variability of the ionosphere is given. A novel ionosphere correction algorithm (ICA) is proposed to deal with the ionospheric effects on the low frequency spaceborne SAR radar signal. With the proposed algorithm, the degradation of the image quality caused by the ionosphere is corrected. The simulation results show the effectiveness of the proposed algorithm.

  8. An Efficient Image Steganographic Algorithm

    OpenAIRE

    M. Shobana

    2015-01-01

    Steganography plays a key part in secret (digital) communication. The practice of embedding a secret image in a cover image is termed Image-Image Steganography. Most commonly, images are split into red, green and blue layers, but here, for the purpose of concealment, the cover image's layers are split into Cyan, Magenta, Yellow and Key (black). On the basis of these four color layers, three algorithms have been designed and examined.

  9. Fast SAR Imaging Algorithm for FLGPR

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A fast SAR imaging algorithm for near-field subsurface forward-looking ground penetrating radar (FLGPR) is presented. By using a nonstationary convolution filter, the refocused image spectrum can be reconstructed directly from the backscattered signal spectrum of the target area. The experimental results show the proposed method can achieve fast image refocusing. It also has higher computational efficiency than the phase-shift migration approach and the delay-and-sum (DAS) approach.
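The delay-and-sum reference method mentioned above can be sketched for a monostatic linear array as a toy illustration; the array geometry, sampling rate, and propagation velocity below are assumptions, and the paper's faster nonstationary-filter reconstruction is not reproduced:

```python
import numpy as np

def das_image(traces, xs, fs, c, grid_x, grid_z):
    """Delay-and-sum refocusing: for every image pixel, sum each antenna's
    trace at the sample matching the round-trip travel time
    antenna -> pixel -> antenna (monostatic geometry assumed)."""
    img = np.zeros((len(grid_z), len(grid_x)))
    n = traces.shape[1]
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            for a, xa in enumerate(xs):
                r = np.hypot(x - xa, z)            # antenna-to-pixel range
                s = int(round(2 * r / c * fs))     # round-trip sample index
                if s < n:
                    img[iz, ix] += traces[a, s]
    return img
```

DAS cost scales with pixels times antennas, which is why spectral-domain reconstruction (as in the paper) is attractive for large scenes.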

  10. Image Recovery Algorithm Based on Learned Dictionary

    Directory of Open Access Journals (Sweden)

    Xinghui Zhu

    2014-01-01

    Full Text Available We propose a recovery scheme for image deblurring. The scheme is set in the framework of sparse representation and makes three main contributions. Firstly, considering the sparse property of natural images, nonlocal overcomplete dictionaries are learned for image patches in our scheme. Then, we code the patches in each nonlocal cluster with the corresponding learned dictionary to recover the whole latent image. In addition, for some practical applications, we also propose a method to estimate the blur kernel, making the algorithm usable in blind image recovery. The experimental results demonstrate that the proposed scheme is competitive with some current state-of-the-art methods.

  11. Dermoscopic Image Segmentation using Machine Learning Algorithm

    Directory of Open Access Journals (Sweden)

    L. P. Suresh

    2011-01-01

    Full Text Available Problem statement: Malignant melanoma is the most frequent type of skin cancer. Its incidence has been rapidly increasing over the last few decades. Medical image segmentation is the most essential and crucial process for facilitating the characterization and visualization of the structure of interest in medical images. Approach: This study explains the task of segmenting skin lesions in dermoscopy images using intelligent systems such as fuzzy and neural network clustering techniques for the early diagnosis of malignant melanoma. The various intelligent-system-based clustering techniques used are the Fuzzy C Means Algorithm (FCM), Possibilistic C Means Algorithm (PCM), Hierarchical C Means Algorithm (HCM), C-means-based Fuzzy Hopfield Neural Network, Adaline Neural Network, and Regression Neural Network. Results: The segmented images are compared with the ground truth image using parameters such as False Positive Error (FPE), False Negative Error (FNE), coefficient of similarity, and spatial overlap, and their performance is evaluated. Conclusion: The experimental results show that the Hierarchical C Means (fuzzy) algorithm provides better segmentation than the other clustering algorithms (Fuzzy C Means, Possibilistic C Means, Adaline Neural Network, FHNN and GRNN). Thus the Hierarchical C Means approach can handle the uncertainties that exist in the data efficiently and is useful for lesion segmentation in a computer-aided diagnosis system to assist the clinical diagnosis of dermatologists.
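Of the clustering techniques listed, the Fuzzy C Means step can be sketched on raw pixel intensities as follows (a generic 1-D FCM with the textbook update rules; the study applies such clustering to full dermoscopy images, not to a toy intensity vector):

```python
import numpy as np

def fcm(pixels, c=2, m=2.0, iters=50, seed=0):
    """Fuzzy C Means on a 1-D array of pixel intensities: alternate
    membership and centroid updates for a fixed number of iterations."""
    rng = np.random.default_rng(seed)
    x = pixels.astype(float).reshape(-1, 1)
    u = rng.random((len(x), c))                    # random initial memberships
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0).reshape(-1, 1)   # (c, 1)
        d = np.abs(x - centers.T) + 1e-12          # (n, c) distances
        u = d ** (-2.0 / (m - 1.0))                # u_ik proportional to d^(-2/(m-1))
        u /= u.sum(axis=1, keepdims=True)
    return centers.ravel(), u.argmax(axis=1)
```

Taking the argmax of the final membership matrix hardens the fuzzy partition into the segmentation labels that the comparison metrics are computed on.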

  12. Usefulness of diagnostic imaging in primary hyperparathyroidism

    Energy Technology Data Exchange (ETDEWEB)

    Sekiyama, Kazuya; Akakura, Koichiro; Mikami, Kazuo; Mizoguchi, Ken-ichi; Tobe, Toyofusa; Nakano, Koichi; Numata, Tsutomu; Konno, Akiyoshi; Ito, Haruo [Chiba Univ. (Japan). Graduate School of Medicine

    2003-01-01

    In patients with primary hyperparathyroidism, prevention of urinary stone recurrence can be achieved by surgical removal of the enlarged parathyroid gland. To ensure the efficacy of surgery for primary hyperparathyroidism, preoperative localization of the enlarged gland is important. In the present study, usefulness of diagnostic imaging for localization of the enlarged gland was investigated in primary hyperparathyroidism. We retrospectively examined the findings of imaging studies and clinical records in 79 patients (97 glands) who underwent surgical treatment for primary hyperparathyroidism at Chiba University Hospital between 1976 and 2000. The detection rates of accurate localization were investigated for imaging techniques, such as ultrasonography (US), computed tomography (CT), magnetic resonance imaging (MRI) thallium-201 and technetium-99m pertechnetate (Tl-Tc) subtraction scintigraphy and {sup 99m}Tc-methoxyisobutylisonitrile (MIBI) scintigraphy, and analysed in relation to the size and weight of the gland and pathological diagnosis. The detection rates by US, CT, MRI, Tl-Tc subtraction scintigraphy and MIBI scintigraphy were 70%, 67%, 73%, 38% and 78%, respectively. The overall detection rate changed from 50% to 88% before and after 1987. The detection rate of MIBI scintigraphy was superior to Tl-Tc subtraction scintigraphy. In primary hyperparathyroidism, improvement of accurate localization of an enlarged parathyroid gland was demonstrated along with recent advances in imaging techniques including MIBI scintigraphy. (author)

  13. An Efficient Image Steganographic Algorithm

    Directory of Open Access Journals (Sweden)

    M Shobana

    2015-12-01

    Full Text Available Steganography plays a key part in secret (digital) communication. The practice of embedding a secret image in a cover image is termed Image-Image Steganography. Most commonly, images are split into red, green and blue layers, but here, for the purpose of concealment, the cover image's layers are split into Cyan, Magenta, Yellow and Key (black). On the basis of these four color layers, three algorithms have been designed and examined in an efficient manner. In these algorithms, pixel intensities play a critical role in determining the embedding capacity. To assess the quality of the resulting image, its PSNR and MSE have been computed.
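The basic embedding step behind such schemes, together with the MSE/PSNR quality metrics the paper reports, can be sketched on a single channel as below. The paper works on CMYK layers and chooses the embedding depth from pixel intensities; that logic is not reproduced here, only plain single-bit LSB embedding:

```python
import numpy as np

def embed_lsb(cover, secret_bits):
    """Write a flat 0/1 bit array into the least significant bits of a
    cover channel, in row-major order (returns a new array)."""
    flat = cover.flatten()                         # flatten() copies
    n = len(secret_bits)
    flat[:n] = (flat[:n] & 0xFE) | secret_bits     # clear LSB, set payload bit
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    """Read the embedded bits back out of the stego channel."""
    return stego.flatten()[:n_bits] & 1

def mse(a, b):
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    e = mse(a, b)
    return float('inf') if e == 0 else 10 * np.log10(255.0 ** 2 / e)
```

Because each embedded bit changes a pixel by at most 1 gray level, the MSE stays tiny and the PSNR of the stego image stays high, which is the quality criterion the paper evaluates.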

  14. HEREDITARY CONNECTIVE TISSUE DISORDERS: NOMENCLATURE AND DIAGNOSTIC ALGORITHM

    Directory of Open Access Journals (Sweden)

    A. V. Klemenov

    2015-01-01

    Full Text Available Hereditary connective tissue disorders (HCTDs) are a genetically and clinically diverse group of diseases, which encompasses common congenital disorders of fibrous connective tissue structures. Out of the whole variety of the clinical manifestations of HCTDs, only differentiated monogenic syndromes with agreed guidelines for their diagnosis have long been the focus of the medical community's attention. Many unclassified forms of the pathology (dysplasia phenotypes) have been disregarded when assessing a person's prognosis and defining treatment policy. With no clear definition of HCTDs or an approved diagnostic algorithm for them, it is difficult to study their real prevalence in the population, to compare literature data, and to constructively discuss various scientific and practical aspects of this disease. Efforts to systematize individual clinical types of HCTD and to formulate their diagnostic criteria are set forth in the national guidelines of the All-Russian Research Society Expert Committee, approved in 2009 and revised in 2012. The paper gives current views on the nomenclature of HCTDs and considers diagnostic criteria for both classified monogenic syndromes (Marfan's syndrome, Ehlers–Danlos' syndrome, MASS phenotype, primary mitral valve prolapse, joint hypermobility syndrome) and unclassified dysplasia phenotypes (MASS-like phenotype, marfanoid appearance, Ehlers–Danlos-like phenotype, benign joint hypermobility syndrome, unclassified phenotype). The above abnormalities are presented as a continuous list drawn up in decreasing order of the degree of their clinical manifestations and prognostic value (the phenotypic continuum described by M.J. Glesby and R.E. Pyeritz): from monogenic syndromes through dysplasia phenotypes to an unclassified phenotype. Emphasis is laid on the difficulties of clinical HCTD identification associated with the lack of specificity of the external and visceral markers of connective tissue asthenia and with the certain

  15. Evaluation of image uniformity in diagnostic magnetic resonance images

    Energy Technology Data Exchange (ETDEWEB)

    Ogura, Akio [Kyoto City Hospital (Japan); Inoue, Hiroshi; Higashida, Mitsuji; Yamazaki, Masaru; Uto, Tomoyuki

    1997-12-01

    Image uniformity refers to the ability of the MR imaging system to produce a constant signal response throughout the scanned volume when the object being imaged has homogeneous MR characteristics. To facilitate the determination of image uniformity in diagnostic magnetic resonance images, reports such as the NEMA Standard and the AAPM report have been issued. However, these methods of evaluation are impractical in cases such as day-to-day quality control of the machine or comparisons between different MR systems, because they are affected by the signal-to-noise ratio (SNR) and have problems displaying the locations of nonuniformity. Therefore, we present a new method for evaluating uniformity, called the test segment method. The influence of SNR on the NEMA test and the segment method was examined. In addition, the results of the two methods were compared under certain nonuniformity conditions. The results showed that the segment method was not affected by SNR and provided a good display of nonuniformity. (author)
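For context, the NEMA-style percent integral uniformity that such evaluations build on can be computed as below. This is a simplified sketch over a single region of interest; the paper's test segment method partitions the image further, and that partitioning is not reproduced here:

```python
import numpy as np

def integral_uniformity(roi):
    """Percent integral uniformity in the NEMA MS-3 style:
    PIU = 100 * (1 - (Smax - Smin) / (Smax + Smin)),
    computed over a region of interest of the phantom image."""
    s = roi.astype(float)
    return 100.0 * (1.0 - (s.max() - s.min()) / (s.max() + s.min()))
```

A perfectly uniform response scores 100%; the score drops as the signal spread within the ROI grows, and because it depends only on the extreme pixel values, noise (SNR) directly perturbs it, which is the weakness the abstract points at.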

  16. Image fusion algorithm using nonsubsampled contourlet transform

    Science.gov (United States)

    Xiao, Yang; Cao, Zhiguo; Wang, Kai; Xu, Zhengxiang

    2007-11-01

    In this paper, a pixel-level image fusion algorithm based on the Nonsubsampled Contourlet Transform (NSCT) is proposed. Compared with the Contourlet Transform, the NSCT is redundant, shift-invariant, and more suitable for image fusion. Each image from different sensors can be decomposed by multi-scale NSCT into a low frequency image and a series of high frequency images of different directions. The low and high frequency images are fused based on local-contrast enhancement and definition, respectively. Finally, the fused image is reconstructed from the fused low and high frequency images. Experiments demonstrate that the NSCT preserves edges significantly and that the fusion rule based on region segmentation performs well in local-contrast enhancement.

  17. Efficient x-ray image enhancement algorithm using image fusion.

    Science.gov (United States)

    Shen, Kuan; Wen, Yumei; Cai, Yufang

    2009-01-01

    Multiresolution analysis (MRA) plays an important role in the image and signal processing fields, as it can extract information at different scales. Image fusion is the process of combining two or more images into a single image that extracts features from the source images and provides more information than any one image. The research presented in this article is aimed at the development of an automated image enhancement system for digital radiography (DR) images, which can clearly display all the defects in one image without introducing blocking artifacts. Based on the characteristics of the collected radiographic signals, in the proposed scheme subsections of the signal are mapped to the 0-255 gray scale to form several gray images, and these images are then fused to form a new enhanced image. This article focuses on comparing the discriminating power of several multiresolution image decomposition methods, using the contrast pyramid, wavelet, and ridgelet respectively. The algorithms are extensively tested and the results are compared with standard image enhancement algorithms. Tests indicate that the fused images present a more detailed representation of the x-ray image. Detection, recognition, and search tasks may therefore benefit from this.
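The signal-subsection mapping described here can be sketched as follows. The window bounds and the plain-average fusion rule below are placeholders: the article fuses the windowed renderings with contrast pyramid, wavelet, and ridgelet decompositions, not a per-pixel average:

```python
import numpy as np

def window_to_gray(signal, lo, hi):
    """Linearly map the sub-range [lo, hi] of the raw detector signal to
    the 0-255 gray scale, clipping values outside the window."""
    s = np.clip(signal.astype(float), lo, hi)
    return np.rint((s - lo) / (hi - lo) * 255).astype(np.uint8)

def fuse_average(images):
    """Placeholder fusion rule: per-pixel average of the windowed
    renderings (stand-in for the article's multiresolution fusion)."""
    return np.rint(np.mean(np.stack(images).astype(float), axis=0)).astype(np.uint8)
```

Each window turns one sub-range of the raw signal into a full-contrast gray image, so defects living in different intensity ranges each get a rendering in which they are visible; fusion then combines these into one image.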

  18. Image Mosaicing Algorithm for Rolled Fingerprint Construction

    Institute of Scientific and Technical Information of China (English)

    贺迪; 荣钢; 周杰

    2002-01-01

    Fingerprint identification is one of the most important biometric authentication methods. However, current devices for recording digital fingerprints can only capture plain-touch fingerprints. Rolled fingerprints have much more information for recognition, so a method is needed to construct a rolled fingerprint from a series of plain-touch fingerprints. This paper presents a novel algorithm for image mosaicing for real time rolled fingerprint construction in which the images are assembled with corrections to create a smooth, non-fragmented rolled fingerprint in real time. Experimental results demonstrate its effectiveness by comparing it with other conventional algorithms.

  19. Using an algorithmic approach to secondary amenorrhea: Avoiding diagnostic error.

    Science.gov (United States)

    Roberts-Wilson, Tiffany K; Spencer, Jessica B; Fantz, Corinne R

    2013-08-23

    Secondary amenorrhea in women of reproductive age may be an indication of an undiagnosed, chronic condition, and appropriate treatment depends on accurate diagnosis of the underlying etiology. A thorough clinical assessment and a few common laboratory tests can easily identify the most frequent causes of secondary amenorrhea. However, once these have been ruled out, the more uncommon pathophysiologies can be difficult to diagnose due to similarities in presentation, and appropriate laboratory testing and interpretation become critical. In these cases, misdiagnosis is unfortunately common and often the result of poor laboratory utilization in the form of a failure to employ indicated tests, the use of obsolete tests, or erroneous interpretation in the face of interfering factors or co-morbidities. Consequently, the algorithmic approach to laboratory evaluation in the context of secondary amenorrhea described in this review can minimize the risk of diagnostic error as well as decrease test volume, cost, and time to diagnosis.

  20. Algorithm for Fast Registration of Radar Images

    Directory of Open Access Journals (Sweden)

    Subrata Rakshit

    2002-07-01

    Full Text Available Radar imagery provides all-weather, 24 h coverage, making it ideal for critical defence applications. In some applications, multiple images acquired of an area need to be registered for further processing. Such situations arise in battlefield surveillance based on satellite imagery. The registration has to be done between an earlier (reference) image and a new (live) image. For automated surveillance, registration is a prerequisite for change detection. Speed is essential due to the large volumes of data involved and the need for quick responses. The registration transformation is quite simple, being mainly a global translation. (Scale and rotation corrections can be applied based on known camera parameters.) The challenge lies in the fact that radar images are not as feature-rich as optical images and the image content variation can be as high as 90 per cent. Even though the change on the ground may not be drastic, seasonal variations can significantly alter the radar signatures of ground, vegetation, and water bodies. This necessitates a novel approach different from the techniques developed for optical images. An algorithm has been developed that leads to fast registration of radar images, even in the presence of specular noise and significant scene content variation. The key features of this approach are adaptability to sensor/terrain types, the ability to handle large content variations, and false positive rejection. The present work shows that this algorithm allows for various cost-performance trade-offs, making it suitable for a wide variety of applications. The algorithm, in various cost-performance configurations, is tested on a set of ERS images. Results of these tests have been reported, indicating the performance of the algorithm for various cost-performance trade-offs.
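For the translation-dominant case described, a standard phase-correlation estimator recovers the global shift directly from the Fourier phase. The sketch below is the generic technique, not the paper's algorithm, and omits its adaptability and false-positive-rejection machinery:

```python
import numpy as np

def phase_correlation(ref, live):
    """Estimate the global (dy, dx) translation of `live` relative to
    `ref` from the peak of the inverse FFT of the normalized
    cross-power spectrum."""
    F = np.fft.fft2(ref)
    G = np.fft.fft2(live)
    R = G * np.conj(F)
    R /= np.abs(R) + 1e-12                 # keep phase information only
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # unwrap shifts larger than half the image size to negative offsets
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

Normalizing away the magnitude spectrum makes the estimate depend on phase alone, which gives a sharp correlation peak even when the two images differ strongly in content, a useful property given the seasonal signature changes the abstract describes.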

  1. Performance evaluation of image segmentation algorithms on microscopic image data.

    Science.gov (United States)

    Beneš, Miroslav; Zitová, Barbara

    2015-01-01

    In our paper, we present a performance evaluation of image segmentation algorithms on microscopic image data. In spite of the existence of many algorithms for image data partitioning, there is still no universal 'best' method. Moreover, images of microscopic samples can vary in character and quality, which can negatively influence the performance of image segmentation algorithms. Thus, the issue of selecting a suitable method for a given set of image data is of great interest. We carried out a large number of experiments with a variety of segmentation methods to evaluate the behaviour of individual approaches on the testing set of microscopic images (cross-section images taken in three different modalities from the field of art restoration). The segmentation results were assessed by several indices used for measuring the output quality of image segmentation algorithms. In the end, the benefit of a segmentation combination approach is studied, and the applicability of the achieved results to another representative of the microscopic data category, biological samples, is shown.

  2. Optimization of diagnostic imaging use in patients with acute abdominal pain (OPTIMA: Design and rationale

    Directory of Open Access Journals (Sweden)

    Bossuyt Patrick MM

    2007-08-01

    Full Text Available Abstract Background The acute abdomen is a frequent entity at the Emergency Department (ED), which usually needs rapid and accurate diagnostic work-up. Diagnostic work-up with imaging can consist of plain X-ray, ultrasonography (US), computed tomography (CT) and even diagnostic laparoscopy. However, no evidence-based guidelines exist in the current literature. The actual diagnostic work-up of a patient with acute abdominal pain presenting to the ED varies greatly between hospitals and physicians. The OPTIMA study was designed to provide the evidence base for constructing an optimal diagnostic imaging guideline for patients with acute abdominal pain at the ED. Methods/design One thousand consecutive patients with abdominal pain > 2 hours and ... Discussion This study aims to provide the evidence base for the development of a diagnostic algorithm that can act as a guideline for ED physicians to evaluate patients with acute abdominal pain.

  3. Novel permutation measures for image encryption algorithms

    Science.gov (United States)

    Abd-El-Hafiz, Salwa K.; AbdElHaleem, Sherif H.; Radwan, Ahmed G.

    2016-10-01

    This paper proposes two measures for the evaluation of permutation techniques used in image encryption. First, a general mathematical framework for describing the permutation phase used in image encryption is presented. Using this framework, six different permutation techniques, based on chaotic and non-chaotic generators, are described. The two new measures are then introduced to evaluate the effectiveness of permutation techniques. These measures are (1) the Percentage of Adjacent Pixels Count (PAPC) and (2) the Distance Between Adjacent Pixels (DBAP). The proposed measures are used to evaluate and compare the six permutation techniques in different scenarios. The permutation techniques are applied to several standard images and the resulting scrambled images are analyzed. Moreover, the new measures are used to compare the permutation algorithms on different matrix sizes irrespective of the actual parameters used in each algorithm. The analysis results show that the proposed measures are good indicators of the effectiveness of the permutation technique.
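    The paper defines PAPC and DBAP precisely; the abstract does not, so the sketch below implements only one plausible reading of a DBAP-style score (mean displacement between the images of horizontally adjacent source pixels), purely for illustration. The function name and the exact definition are assumptions, not the paper's:

    ```python
    import numpy as np

    def distance_between_adjacent_pixels(perm, shape):
        """Plausible DBAP-style score: for each pair of horizontally adjacent
        source pixels, measure how far apart the permutation sends them.
        `perm` maps flat source indices to flat destination indices;
        larger mean distances indicate stronger scrambling."""
        h, w = shape
        dest = np.asarray(perm).reshape(h, w)   # destination index of each source pixel
        dy = dest // w                          # destination row of each source pixel
        dx = dest % w                           # destination column of each source pixel
        # distances between the images of horizontally adjacent source pixels
        d = np.hypot(dy[:, 1:] - dy[:, :-1], dx[:, 1:] - dx[:, :-1])
        return d.mean()
    ```

    Under this reading, the identity permutation scores exactly 1.0 (neighbours stay neighbours), while a good scrambling permutation scores close to the mean distance between two random points in the image.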

  4. Dynamic Data Updating Algorithm for Image Superresolution Reconstruction

    Institute of Scientific and Technical Information of China (English)

    TAN Bing; XU Qing; ZHANG Yan; XING Shuai

    2006-01-01

    A dynamic data updating algorithm for image superresolution is proposed. On the basis of Delaunay triangulation and its local updating property, this algorithm can update the changed region directly under circumstances in which only a part of the source images has changed. Owing to its high efficiency and adaptability, this algorithm can serve as a fast algorithm for image superresolution reconstruction.

  5. An edge detection algorithm for imaging ladar

    Institute of Scientific and Technical Information of China (English)

    Qi Wang(王骐); Ziqin Li(李自勤); Qi Li(李琦); Jianfeng Sun(孙剑峰); Juncheng Fu(傅俊诚)

    2003-01-01

    In this paper, a morphological filter based on parametric edge detection is presented and applied to imaging ladar images with speckle noise. This algorithm and the Laplacian of Gaussian (LOG) operator are compared on edge detection. The experimental results indicate the superior performance of this kind of edge detection.

  6. Image segmentation using an improved differential algorithm

    Science.gov (United States)

    Gao, Hao; Shi, Yujiao; Wu, Dongmei

    2014-10-01

    Among all the existing segmentation techniques, thresholding is one of the most popular due to its simplicity, robustness, and accuracy (e.g. the maximum entropy method, Otsu's method, and K-means clustering). However, the computation time of these algorithms grows exponentially with the number of thresholds due to their exhaustive searching strategy. As a population-based optimization algorithm, the differential algorithm (DE) uses a population of potential solutions and decision-making processes. It has shown considerable success in solving complex optimization problems within a reasonable time limit. Thus, applying this method to segmentation should be a good choice, owing to its fast computation. In this paper, we first propose a new differential algorithm with a balance strategy, which seeks a balance between the exploration of new regions and the exploitation of the already sampled regions. Then, we apply the new DE to the traditional Otsu method to shorten the computation time. Experimental results of the new algorithm on a variety of images show that, compared with the EA-based thresholding methods, the proposed DE algorithm gives more effective and efficient results. It also shortens the computation time of the traditional Otsu method.
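    As context for the exhaustive-search cost the abstract mentions, single-threshold Otsu (maximizing between-class variance over all 256 candidate thresholds) can be sketched as below. It is this search, repeated over all combinations of thresholds in the multilevel case, that the DE-based method is designed to shorten; the sketch is illustrative and not the authors' code:

    ```python
    import numpy as np

    def otsu_threshold(image):
        """Exhaustive single-threshold Otsu for an 8-bit image:
        choose the threshold that maximizes between-class variance."""
        hist = np.bincount(image.ravel(), minlength=256).astype(float)
        p = hist / hist.sum()                       # normalized histogram
        best_t, best_var = 0, -1.0
        for t in range(1, 256):
            w0, w1 = p[:t].sum(), p[t:].sum()       # class probabilities
            if w0 == 0 or w1 == 0:
                continue
            mu0 = (np.arange(t) * p[:t]).sum() / w0         # class means
            mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
            var = w0 * w1 * (mu0 - mu1) ** 2        # between-class variance
            if var > best_var:
                best_t, best_var = t, var
        return best_t
    ```

    With k thresholds the candidate set grows to roughly 256 choose k, which is why a population-based search such as DE, evaluating the same variance objective on far fewer candidates, pays off.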

  7. An algorithm of image segmentation for overlapping grain image

    Institute of Scientific and Technical Information of China (English)

    WANG Zhi; JIN Guang; SUN Xiao-wei

    2005-01-01

    Aiming at the measurement of the granularity of nonmetal grains, an algorithm for image segmentation and parameter calculation of microscopic overlapping-grain images was studied. This algorithm derives new attributes of the graph sequence from the discrete attributes of the graph, thereby obtaining the geometrical characteristics of the input graph, and recombines a new graph sequence suited to image segmentation. The concept of denoting an image edge with a "twin-point" is put forward: based on the geometrical characteristics of points, the image edge is transformed into a serial edge, and on the recombined serial image edge the segmentation twin-points are searched for, based on the direction-vector definition of a line and some additional restriction conditions; thus image segmentation is accomplished. The serial image edge is transformed into the twin-point pattern to realize calculation of the area and granularity of nonmetal grains. This algorithm avoids the uncertainty in selecting the structuring element faced by methods based on mathematical morphology, and realizes image segmentation and parameter calculation without changing the grains' own statistical characteristics.

  8. Experimental Study of Fractal Image Compression Algorithm

    Directory of Open Access Journals (Sweden)

    Chetan R. Dudhagara

    2012-08-01

    Full Text Available Image compression applications have been increasing in recent years. Fractal compression is a lossy compression method for digital images, based on fractals. The method is best suited for textures and natural images, relying on the fact that parts of an image often resemble other parts of the same image. Fractal algorithms convert these parts into mathematical data called "fractal codes", which are used to recreate the encoded image. Fractal encoding is a mathematical process used to encode bitmaps containing a real-world image as a set of mathematical data that describes the fractal properties of the image; it relies on the fact that all natural, and most artificial, objects contain redundant information in the form of similar, repeating patterns called fractals. In this paper, a study of fractal-based image compression with fixed-size partitioning is made, analyzed for performance, and compared with JPEG, a standard frequency-domain image compression method. Sample images are used to perform compression and decompression. Performance metrics such as compression ratio, compression time and decompression time are measured and compared with the JPEG cases. The phenomenon of resolution/scale independence is also studied and described with examples.

  9. Image completion algorithm based on texture synthesis

    Institute of Scientific and Technical Information of China (English)

    Zhang Hongying; Peng Qicong; Wu Yadong

    2007-01-01

    A new algorithm is proposed for completing the missing parts caused by the removal of foreground or background elements from an image of natural scenery in a visually plausible way. The major contributions of the proposed algorithm are: (1) for most natural images there is a strong orientation of texture or color distribution, so a method is introduced to compute the main direction of the texture and complete the image by limiting the search to one direction, carrying out image completion quite fast; (2) there exists a synthesis ordering for image completion, and the searching order of the patches is defined to ensure that the regions with more known information and the structures are completed before other regions are filled in; (3) to improve the visual effect of texture synthesis, an adaptive scheme is presented to determine the size of the template window for capturing the features of various scales. A number of examples are given to demonstrate the effectiveness of the proposed algorithm.

  10. LSB Based Quantum Image Steganography Algorithm

    Science.gov (United States)

    Jiang, Nan; Zhao, Na; Wang, Luo

    2016-01-01

    Quantum steganography is the technique which hides a secret message into quantum covers such as quantum images. In this paper, two blind LSB steganography algorithms in the form of quantum circuits are proposed based on the novel enhanced quantum representation (NEQR) for quantum images. One algorithm is plain LSB which uses the message bits to substitute for the pixels' LSB directly. The other is block LSB which embeds a message bit into a number of pixels that belong to one image block. The extracting circuits can regain the secret message only according to the stego cover. Analysis and simulation-based experimental results demonstrate that the invisibility is good, and the balance between the capacity and the robustness can be adjusted according to the needs of applications.
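    The quantum circuits themselves are beyond a short sketch, but the plain-LSB embedding rule they implement has a direct classical analogue, shown below for intuition (the helper names are illustrative, not from the paper):

    ```python
    import numpy as np

    def embed_lsb(cover, bits):
        """Plain LSB embedding (classical analogue of the quantum scheme):
        each message bit replaces the least significant bit of one pixel."""
        stego = cover.ravel().copy()
        bits = np.asarray(bits, dtype=np.uint8)
        stego[: bits.size] = (stego[: bits.size] & 0xFE) | bits
        return stego.reshape(cover.shape)

    def extract_lsb(stego, n_bits):
        """Blind extraction: the message is read from the stego image alone."""
        return (stego.ravel()[:n_bits] & 1).astype(np.uint8)
    ```

    Because only the least significant bit of each carrier pixel changes, no pixel value moves by more than 1, which is the classical counterpart of the invisibility property the abstract reports.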

  11. Digital image processing an algorithmic approach with Matlab

    CERN Document Server

    Qidwai, Uvais

    2009-01-01

    Introduction to Image Processing and the MATLAB Environment: Introduction; Digital Image Definitions: Theoretical Account; Image Properties; MATLAB; Algorithmic Account; MATLAB Code. Image Acquisition, Types, and File I/O: Image Acquisition; Image Types and File I/O; Basics of Color Images; Other Color Spaces; Algorithmic Account; MATLAB Code. Image Arithmetic: Introduction; Operator Basics; Theoretical Treatment; Algorithmic Treatment; Coding Examples. Affine and Logical Operations, Distortions, and Noise in Images: Introduction; Affine Operations; Logical Operators; Noise in Images; Distortions in Images; Algorithmic Account

  12. Diagnostic imaging in patients with retinitis pigmentosa.

    Science.gov (United States)

    Mitamura, Yoshinori; Mitamura-Aizawa, Sayaka; Nagasawa, Toshihiko; Katome, Takashi; Eguchi, Hiroshi; Naito, Takeshi

    2012-01-01

    Retinitis pigmentosa (RP) is a progressive inherited retinal disease, and patients with RP have reduced visual function caused by a degeneration of the photoreceptors and retinal pigment epithelium (RPE). At the end stage of RP, the degeneration of the photoreceptors in the fovea reduces central vision, and RP is one of the main causes of acquired blindness in developed countries. Therefore, morphological and functional assessments of the photoreceptors in the macular area can be useful in estimating the residual retinal function in RP patients. Optical coherence tomography (OCT) is a well-established method of examining the retinal architecture in situ. The photoreceptor inner/outer segment (IS/OS) junction is observed as a distinct, highly reflective line by OCT. The presence of the IS/OS junction in the OCT images is essential for normal visual function. Fundus autofluorescence (FAF) results from the accumulation of lipofuscin in the RPE cells and has been used to investigate RPE and retinal function. More than one-half of RP patients have an abnormally high-density parafoveal FAF ring (AF ring). The AF ring represents the border between functional and dysfunctional retina. In this review, we shall summarize recent progress on diagnostic imaging in eyes with RP.

  13. Rex shunt preoperative imaging: diagnostic capability of imaging modalities.

    Directory of Open Access Journals (Sweden)

    Sharon W Kwan

    Full Text Available The purpose of this study was to evaluate the diagnostic capability of imaging modalities used for preoperative mesenteric-left portal bypass ("Rex shunt" planning. Twenty patients with extrahepatic portal vein thrombosis underwent 57 preoperative planning abdominal imaging studies. Two readers retrospectively reviewed these studies for an ability to confidently determine left portal vein (PV patency, superior mesenteric vein (SMV patency, and intrahepatic left and right PV contiguity. In this study, computed tomographic arterial portography allowed for confident characterization of left PV patency, SMV patency and left and right PV continuity in 100% of the examinations. Single phase contrast-enhanced CT, multi-phase contrast-enhanced CT, multiphase contrast-enhanced MRI, and transarterial portography answered all key diagnostic questions in 33%, 30%, 0% and 8% of the examinations, respectively. In conclusion, of the variety of imaging modalities that have been employed for Rex shunt preoperative planning, computed tomographic arterial portography most reliably allows for assessment of left PV patency, SMV patency, and left and right PV contiguity in a single study.

  14. Chaos-Based Multipurpose Image Watermarking Algorithm

    Institute of Scientific and Technical Information of China (English)

    ZHU Congxu; LIAO Xuefeng; LI Zhihua

    2006-01-01

    To achieve the goals of image content authentication and copyright protection simultaneously, this paper presents a novel image dual-watermarking method based on a chaotic map. First, the host image is split into many non-overlapping small blocks, and the block-wise discrete cosine transform (DCT) is computed. Second, the robust watermarks, shuffled by the chaotic sequences, are embedded in the DC coefficients of the blocks to achieve the goal of copyright protection. The semi-fragile watermarks, generated by the chaotic map, are embedded in the AC coefficients of the blocks to achieve the aim of image authentication. Both of them can be extracted without the original image. Simulation results demonstrate the effectiveness of our algorithm in terms of robustness and fragility.
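    The abstract specifies where the robust bits go (the block DC coefficients) but not how they are embedded. Since a block's DCT DC coefficient is proportional to its mean, one common way to realize such an embedding is quantization-index modulation of the block mean, sketched here as an illustration rather than the paper's scheme (function names, the quantization step `q`, and the QIM rule are all assumptions):

    ```python
    import numpy as np

    def embed_dc_watermark(image, bits, q=16):
        """Embed one robust bit per 8x8 block by quantizing the block mean,
        which is proportional to the block's DCT DC coefficient.
        Assumes image dimensions are multiples of 8."""
        out = image.astype(float).copy()
        h, w = image.shape
        blocks = [(r, c) for r in range(0, h, 8) for c in range(0, w, 8)]
        for (r, c), bit in zip(blocks, bits):
            block = out[r:r + 8, c:c + 8]
            m = block.mean()
            # quantization-index modulation: even bins encode 0, odd bins encode 1
            target = (2 * np.floor(m / (2 * q)) + bit) * q + q / 2
            block += target - m          # shift the whole block to hit the target mean
        return out

    def extract_dc_watermark(image, n_bits, q=16):
        """Recover the bits from the quantized block means (no original needed)."""
        h, w = image.shape
        blocks = [(r, c) for r in range(0, h, 8) for c in range(0, w, 8)]
        return [int(np.floor(image[r:r + 8, c:c + 8].mean() / q)) % 2
                for r, c in blocks[:n_bits]]
    ```

    The bin half-width q/2 is what buys robustness: any distortion that moves a block mean by less than q/2 leaves the extracted bit unchanged.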

  15. A modified algorithm for SAR parallel imaging

    Institute of Scientific and Technical Information of China (English)

    HU Ju-rong; WANG Fei; CAO Ning; LU Hao

    2009-01-01

    Synthetic aperture radar can provide two-dimensional images by converting the acquired echoed SAR signal into target coordinates and reflectivity. With the advancement of sophisticated SAR signal processing, more and more SAR imaging methods have been proposed for synthetic aperture radar that works in the near field, where the Fresnel approximation is not appropriate. Time-domain correlation is a digital reconstruction method based on processing the synthetic aperture radar data in the two-dimensional frequency domain via Fourier transform. It reconstructs the SAR image via simple correlation, without any need for approximation or interpolation. However, its high computational cost for correlation makes it unsuitable for real-time imaging. To reduce the computational burden, a modified time-domain correlation algorithm is given in this paper. It can also take full advantage of parallel computation in the imaging processor. Its practical implementation is proposed and preliminary simulation results are presented. Simulation results show that the proposed algorithm is a computationally efficient way of implementing the reconstruction in real-time SAR image processing.

  16. A Multiresolution Image Completion Algorithm for Compressing Digital Color Images

    Directory of Open Access Journals (Sweden)

    R. Gomathi

    2014-01-01

    Full Text Available This paper introduces a new framework for image coding that uses an image inpainting method. In the proposed algorithm, the input image is subjected to image analysis to remove some of the portions purposefully. At the same time, edges are extracted from the input image and passed to the decoder in compressed form. The edges transmitted to the decoder act as assistant information and help the inpainting process fill the missing regions at the decoder. Texture synthesis and a new shearlet inpainting scheme based on the theory of the p-Laplacian operator are proposed for image restoration at the decoder. Shearlets have been mathematically proven to represent distributed discontinuities such as edges better than traditional wavelets and are a suitable tool for edge characterization. This novel shearlet p-Laplacian inpainting model can effectively reduce the staircase effect of the Total Variation (TV) inpainting model while still keeping edges as well as the TV model does. In the proposed scheme, a neural network is employed to enhance the compression ratio for image coding. Test results are compared with the JPEG 2000 and H.264 intra-coding algorithms. The results show that the proposed algorithm works well.

  17. Systematic Analysis of Painful Total Knee Prosthesis, a Diagnostic Algorithm

    Directory of Open Access Journals (Sweden)

    Oliver Djahani

    2013-12-01

    Full Text Available   Remaining pain after total knee arthroplasty (TKA) is a common observation in about 20% of postoperative patients, and about 60% of these knees require early revision surgery within five years. Obvious causes of this pain can be identified simply with clinical examinations and standard radiographs. However, unexplained painful TKA still remains a challenge for the surgeon. The management should include a multidisciplinary approach to the patient's pain as well as addressing the underlying etiology. There are a number of extrinsic (tendinopathy, hip, ankle, spine, CRPS and so on) and intrinsic (infection, instability, malalignment, wear and so on) causes of painful knee replacement. On average, diagnosis takes more than 12 months, patients become very dissatisfied, and some of them even develop psychological problems. Hence, a systematic diagnostic algorithm might be helpful. This review article aims to act as a guide to the evaluation of patients with painful TKA, described in 10 different steps. Furthermore, the preliminary results of a series of 100 consecutive cases will be discussed. Revision surgery was performed only in those cases with a clear failure mechanism.

  18. Imaging diagnostics of the foot; Bildgebende Diagnostik des Fusses

    Energy Technology Data Exchange (ETDEWEB)

    Szeimies, Ulrike; Staebler, Axel [Radiologie in Muenchen-Harlaching, Muenchen (Germany); Walther, Markus (eds.) [Schoen-Klinik Muenchen-Harlaching, Muenchen (Germany). Zentrum fuer Fuss- und Sprunggelenkchirurgie

    2012-11-01

    The book on imaging diagnostics of the foot contains the following chapters: (1) Imaging techniques. (2) Clinical diagnostics. (3) Ankle joint and hind foot. (4) Metatarsus. (5) Forefoot. (6) Pathology of plantar soft tissue. (7) Nervous system diseases. (8) Diseases without specific anatomic localization. (9) System diseases including the foot. (10) Tumor-like lesions. (11) Normal variants.

  19. Endmember extraction algorithms from hyperspectral images

    Directory of Open Access Journals (Sweden)

    M. C. Cantero

    2006-06-01

    Full Text Available During the last years, several high-resolution sensors have been developed for hyperspectral remote sensing applications. Some of these sensors are already available on space-borne devices. Space-borne sensors are currently acquiring a continual stream of hyperspectral data, and new efficient unsupervised algorithms are required to analyze the great amount of data produced by these instruments. The identification of image endmembers is a crucial task in hyperspectral data exploitation. Once the individual endmembers have been identified, several methods can be used to map their spatial distribution, associations and abundances. This paper reviews the Pixel Purity Index (PPI), N-FINDR, and Automatic Morphological Endmember Extraction (AMEE) algorithms developed to accomplish the task of finding appropriate image endmembers, by applying them to real hyperspectral data. In order to compare the performance of these methods, a metric based on the Root Mean Square Error (RMSE) between the estimated and reference abundance maps is used.
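    The comparison metric is simple enough to state exactly; a direct sketch of the RMSE between an estimated and a reference abundance map (the function name is ours):

    ```python
    import numpy as np

    def abundance_rmse(estimated, reference):
        """Root-mean-square error between an estimated abundance map and a
        reference map, the comparison metric used in the review."""
        e = np.asarray(estimated, dtype=float)
        r = np.asarray(reference, dtype=float)
        return float(np.sqrt(np.mean((e - r) ** 2)))
    ```

    A perfect endmember extraction yields identical abundance maps and an RMSE of 0; larger values indicate poorer unmixing.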

  20. Review: Image Encryption Using Chaos Based algorithms

    Directory of Open Access Journals (Sweden)

    Er. Ankita Gaur

    2014-03-01

    Full Text Available Due to the development of network technology and multimedia applications, every minute thousands of messages (text, images, audio, video) are created and transmitted over wireless networks. Improper delivery of a message may lead to the leakage of important information, so encryption is used to provide security. In the last few years, a variety of image encryption algorithms based on chaotic systems have been proposed to protect images from unauthorized access. A 1-D chaotic system using the logistic map has weak security and a small key space, and because of the floating-point representation of pixel values some data loss occurs, making proper decryption of the image impossible. In this paper, different chaotic maps such as the Arnold cat map, sine map, logistic map, and tent map have been studied.
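    As a concrete illustration of the kind of scheme the review discusses, a minimal 1-D logistic-map cipher (diffusion only, with no permutation stage) can be sketched as follows. The parameters `x0` and `r` stand in for the key; this is a toy sketch whose floating-point sensitivity and small key space are exactly the weaknesses the review describes:

    ```python
    import numpy as np

    def logistic_keystream(x0, r, n, burn_in=100):
        """Iterate the logistic map x <- r*x*(1-x) and quantize each state
        to a byte. The initial value x0 and parameter r act as the key."""
        x = x0
        for _ in range(burn_in):              # discard the transient
            x = r * x * (1.0 - x)
        ks = np.empty(n, dtype=np.uint8)
        for i in range(n):
            x = r * x * (1.0 - x)
            ks[i] = int(x * 256) & 0xFF
        return ks

    def xor_cipher(image, x0=0.3141, r=3.9999):
        """Diffusion-only sketch: XOR pixels with the chaotic keystream.
        Encryption and decryption are the same operation."""
        ks = logistic_keystream(x0, r, image.size)
        return (image.ravel() ^ ks).reshape(image.shape)
    ```

    Practical chaos-based ciphers combine such a diffusion stage with a permutation stage (e.g. an Arnold cat map) and use keys with much larger effective spaces.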

  1. 2014 KLCSG-NCC Korea Practice Guidelines for the management of hepatocellular carcinoma: HCC diagnostic algorithm.

    Science.gov (United States)

    Lee, Jeong Min; Park, Joong-Won; Choi, Byung Ihn

    2014-01-01

    Hepatocellular carcinoma (HCC) is the fifth most commonly occurring cancer in Korea and typically has a poor prognosis with a 5-year survival rate of only 28.6%. Therefore, it is of paramount importance to achieve the earliest possible diagnosis of HCC and to recommend the most up-to-date optimal treatment strategy in order to increase the survival rate of patients who develop this disease. After the establishment of the Korean Liver Cancer Study Group (KLCSG) and the National Cancer Center (NCC), Korea jointly produced for the first time the Clinical Practice Guidelines for HCC in 2003, revised them in 2009, and published the newest revision of the guidelines in 2014, including changes in the diagnostic criteria of HCC and incorporating the most recent medical advances over the past 5 years. In this review, we will address the noninvasive diagnostic criteria and diagnostic algorithm of HCC included in the newly established KLCSG-NCC guidelines in 2014, and review the differences in the criteria for a diagnosis of HCC between the KLCSG-NCC guidelines and the most recent imaging guidelines endorsed by the European Organisation for Research and Treatment of Cancer (EORTC), the Liver Imaging Reporting and Data System (LI-RADS), the Organ Procurement and Transplantation Network (OPTN) system, the Asian Pacific Association for the Study of the Liver (APASL) and the Japan Society of Hepatology (JSH).

  2. Thermoacoustic imaging and spectroscopy for enhanced cancer diagnostics

    Science.gov (United States)

    Bauer, Daniel Ryan

    Early detection of cancer is paramount for improved patient survival. This dissertation presents work developing imaging techniques to improve cancer diagnostics and detection utilizing light- and microwave-induced thermoacoustic imaging. In the second chapter, the well-established pre-clinical mouse window chamber model is interfaced with simultaneously acquired high-resolution pulse echo (PE) ultrasound and photoacoustic (PA) imaging. Co-registered PE and PA imaging, coupled with developed image segmentation algorithms, are used to quantitatively track and monitor the size, shape, heterogeneity, and neovasculature of the tumor microenvironment during a month-long study. Average volumetric growth was 5.35 mm3/day, which correlated well with two-dimensional results from fluorescent imaging (R = 0.97, p ... imaging is also employed to probe the assumed oxygenation status of the tumor vasculature. The window chamber model combined with high-resolution PE and PA imaging could form a powerful testbed for characterizing cancers and evaluating new contrast and therapeutic agents. The third chapter utilizes a clinical ultrasound array to facilitate fast volumetric spectroscopic PA imaging to detect and discriminate endogenous absorbers (i.e. oxy/deoxygenated hemoglobin) as well as exogenous PA contrast agents (i.e. gold nanorods, fluorophores). In vivo spatiotemporal tracking of administered gold nanorods is presented, with the contrast agent augmenting the PA signal by 18 dB. Furthermore, through the use of spectral unmixing algorithms, the relative concentrations of multiple endogenous and exogenous co-localized absorbers were reconstructed in tumor-bearing mice. The concentration of Alexaflour647 was calculated to increase nearly 20 dB in the center of a prostate tumor after a tail-vein injection of the contrast agent. Additionally, after direct subcutaneous injections of two different gold nanorods into a breast tumor, the concentration of each nanoparticle was discriminated.

  3. Restoration algorithms for imaging through atmospheric turbulence

    Science.gov (United States)

    2017-02-18

    ... imaging-based super-resolution algorithm, as well as our current work on the simplification of the Fried kernel for deconvolution purposes. ... (with the imagemagick library) and saved as individual PNG sequences. Since the Matlab software is widely used by the community, we also provide each ... sequence using the GIMP software (this procedure is summarized in Figure 2). The dynamic sequences are also provided with their respective ...

  4. A Novel Approach to Fast Image Filtering Algorithm of Infrared Images based on Intro Sort Algorithm

    CERN Document Server

    Gupta, Kapil Kumar; Niranjan, Jitendra Kumar

    2012-01-01

    In this study we investigate a fast image filtering algorithm based on the introsort algorithm and fast noise reduction of infrared images. The main feature of the proposed approach is that no prior knowledge of the noise is required. It is developed based on the Stefan-Boltzmann law and the Fourier law. We also investigate a fast noise reduction approach that has the advantage of a lower computation load. In addition, it can retain edges, details and text information even if the size of the window increases. Introsort begins with quicksort and switches to heapsort when the recursion depth exceeds a level based on the number of elements being sorted. This approach has the advantage of fast noise reduction by reducing the comparison time. It also significantly speeds up the noise reduction process and can be applied to real-time image processing. This approach will extend the applications of infrared images to medicine and video conferencing.
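    The abstract gives no pseudocode. A generic sorting-based window filter (here a median filter, with the library sort standing in for the introsort hybrid) illustrates where the comparison-time savings would apply; the function name and window size are our assumptions:

    ```python
    import numpy as np

    def median_filter(image, k=3):
        """Median filtering sketch: each output pixel is the median of its
        k x k neighbourhood. Sorting the window is the step where an
        introsort-style hybrid cuts comparison time; NumPy's sort is used
        here as a stand-in."""
        pad = k // 2
        padded = np.pad(image, pad, mode='edge')   # replicate borders
        out = np.empty_like(image)
        h, w = image.shape
        for i in range(h):
            for j in range(w):
                window = padded[i:i + k, j:j + k].ravel()
                out[i, j] = np.sort(window)[window.size // 2]
        return out
    ```

    Because the median discards outliers rather than averaging them in, isolated hot pixels (a typical infrared noise pattern) are removed while step edges survive, matching the edge-retention claim in the abstract.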

  5. GPUs benchmarking in subpixel image registration algorithm

    Science.gov (United States)

    Sanz-Sabater, Martin; Picazo-Bueno, Jose Angel; Micó, Vicente; Ferrerira, Carlos; Granero, Luis; Garcia, Javier

    2015-05-01

    Image registration techniques are used in different scientific fields, such as medical imaging or optical metrology. The most straightforward way to calculate the shift between two images is to use cross correlation, taking the highest value of the correlation image. The shift resolution is given in whole pixels, which may not be enough for certain applications. Better results can be achieved by interpolating both images, up to the desired resolution, and applying the same technique described before, but the memory needed by the system is significantly higher. To avoid this memory consumption we are implementing a subpixel shifting method based on the FFT. With the original images, subpixel shifting can be achieved by multiplying their discrete Fourier transforms by a linear phase with different slopes. This method is highly time-consuming because checking each candidate shift requires new calculations. The algorithm, being highly parallelizable, is very suitable for high-performance computing systems. GPU (Graphics Processing Unit) accelerated computing became very popular more than ten years ago because GPUs offer hundreds of computational cores on a reasonably cheap card. In our case, we register the shift between two images, making a first approach by FFT-based correlation and then refining to subpixel resolution using the technique described before. We consider it a 'brute force' method. We will present a benchmark of the algorithm consisting of a first approach at pixel resolution followed by subpixel refinement, decreasing the shifting step in every loop to achieve a high resolution in a few steps. This program will be executed on three different computers. At the end, we will present the results of the computation with different kinds of CPUs and GPUs, checking the accuracy of the method and the time consumed on each computer, and discussing the advantages and disadvantages of the use of GPUs.
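    The linear-phase trick the authors describe can be sketched directly in NumPy; for integer shifts it reproduces a circular `np.roll` exactly, and the same phase ramp accepts fractional `dy`, `dx`. This is a sketch of the general technique, not the benchmarked code:

    ```python
    import numpy as np

    def fourier_shift(image, dy, dx):
        """Shift an image by (dy, dx) pixels (possibly fractional) by
        multiplying its DFT by a linear phase ramp (circular boundary)."""
        h, w = image.shape
        ky = np.fft.fftfreq(h)[:, None]    # cycles per sample along rows
        kx = np.fft.fftfreq(w)[None, :]    # cycles per sample along columns
        ramp = np.exp(-2j * np.pi * (ky * dy + kx * dx))
        return np.fft.ifft2(np.fft.fft2(image) * ramp).real
    ```

    In the brute-force scheme the abstract describes, each candidate subpixel shift costs one such ramp multiplication plus a correlation evaluation, which is why the search parallelizes so naturally across GPU cores.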

  6. Probability scores and diagnostic algorithms in pulmonary embolism: are they followed in clinical practice?

    Science.gov (United States)

    Sanjuán, Pilar; Rodríguez-Núñez, Nuria; Rábade, Carlos; Lama, Adriana; Ferreiro, Lucía; González-Barcala, Francisco Javier; Alvarez-Dobaño, José Manuel; Toubes, María Elena; Golpe, Antonio; Valdés, Luis

    2014-05-01

    Clinical probability scores (CPS) determine the pre-test probability of pulmonary embolism (PE) and assess the need for the tests required in these patients. Our objective is to investigate whether PE is diagnosed according to clinical practice guidelines. Retrospective study of clinically suspected PE in the emergency department between January 2010 and December 2012. A D-dimer value ≥ 500 ng/ml was considered positive. PE was diagnosed on the basis of multislice computed tomography angiography and, to a lesser extent, other imaging techniques. The CPS used was the revised Geneva scoring system. There were 3,924 cases of suspected PE (56% female). Diagnosis was determined in 360 patients (9.2%) and the incidence was 30.6 cases per 100,000 inhabitants/year. The sensitivity and negative predictive value of the D-dimer test were 98.7% and 99.2%, respectively. CPS was calculated in only 24 cases (0.6%) and diagnostic algorithms were not followed in 2,125 patients (54.2%): in 682 (17.4%) because clinical probability could not be estimated, and in 482 (37.6%), 852 (46.4%) and 109 (87.9%) with low, intermediate and high clinical probability, respectively, because the diagnostic algorithms for these probabilities were not applied. CPS are rarely calculated in the diagnosis of PE and the diagnostic algorithm is rarely used in clinical practice. This may result in procedures with potentially significant side effects being performed unnecessarily, or in a high risk of underdiagnosis. Copyright © 2013 SEPAR. Published by Elsevier Espana. All rights reserved.

  7. Computationally efficient algorithm for multifocus image reconstruction

    Science.gov (United States)

    Eltoukhy, Helmy A.; Kavusi, Sam

    2003-05-01

    A method for synthesizing enhanced depth of field digital still camera pictures using multiple differently focused images is presented. This technique exploits only spatial image gradients in the initial decision process. The image gradient as a focus measure has been shown to be experimentally valid and theoretically sound under weak assumptions with respect to unimodality and monotonicity. Subsequent majority filtering corroborates decisions with those of neighboring pixels, while the use of soft decisions enables smooth transitions across region boundaries. Furthermore, these last two steps add algorithmic robustness for coping with both sensor noise and optics-related effects, such as misregistration or optical flow, and minor intensity fluctuations. The dependence of these optical effects on several optical parameters is analyzed and potential remedies that can allay their impact with regard to the technique's limitations are discussed. Several examples of image synthesis using the algorithm are presented. Finally, leveraging the increasing functionality and emerging processing capabilities of digital still cameras, the method is shown to entail modest hardware requirements and is implementable using a parallel or general purpose processor.
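The three stages of the pipeline described above, a gradient focus measure, majority filtering of the per-pixel decisions, and soft-decision blending, might be sketched as follows. This is our own simplified formulation of the general approach, not the authors' implementation.

```python
import numpy as np

def gradient_energy(img):
    # spatial gradient magnitude as a per-pixel focus measure
    gy, gx = np.gradient(img.astype(float))
    return gy**2 + gx**2

def majority_filter(mask, radius=1):
    # corroborate each pixel's decision with its neighbours (box vote);
    # the fractional vote doubles as a soft decision in [0, 1]
    n, m = mask.shape
    padded = np.pad(mask.astype(float), radius, mode='edge')
    votes = np.zeros((n, m))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            votes += padded[radius+dy:radius+dy+n, radius+dx:radius+dx+m]
    return votes / (2 * radius + 1) ** 2

def fuse(img_a, img_b):
    # pick, per pixel, the image that is locally sharper; blend softly so
    # transitions across region boundaries stay smooth
    w = majority_filter(gradient_energy(img_a) > gradient_energy(img_b))
    return w * img_a + (1 - w) * img_b
```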

  8. Focal congenital hyperinsulinism managed by medical treatment: a diagnostic algorithm based on molecular genetic screening.

    Science.gov (United States)

    Maiorana, Arianna; Barbetti, Fabrizio; Boiani, Arianna; Rufini, Vittoria; Pizzoferro, Milena; Francalanci, Paola; Faletra, Flavio; Nichols, Colin G; Grimaldi, Chiara; de Ville de Goyet, Jean; Rahier, Jacques; Henquin, Jean-Claude; Dionisi-Vici, Carlo

    2014-11-01

Congenital hyperinsulinism (CHI) requires rapid diagnosis and treatment to avoid irreversible neurological sequelae due to hypoglycaemia. Aetiological diagnosis is instrumental in directing the appropriate therapy. Current diagnostic algorithms provide a complete set of diagnostic tools, including (i) biochemical assays, (ii) genetic facilities and (iii) state-of-the-art imaging. They consider the response to a therapeutic diazoxide trial an early, crucial step before proceeding (or not) to specific genetic testing and eventually imaging, aimed at distinguishing diffuse vs focal CHI. However, interpretation of the diazoxide test is not trivial and varies between research groups, which may lead to inappropriate decisions. The objective of this report is to propose a new algorithm in which early genetic screening, rather than the diazoxide trial, dictates subsequent clinical decisions. Two CHI patients were weaned from parenteral glucose infusion and glucagon after starting diazoxide. Either no hypoglycaemia was registered during 72-h continuous glucose monitoring (CGMS), or hypoglycaemic episodes were present for no longer than 3% of the 72 h. Normoglycaemia was maintained for several years by low-to-medium-dose diazoxide combined with frequent carbohydrate feeds. We identified monoallelic, paternally inherited mutations in KATP channel genes, and 18F-DOPA PET-CT revealed a focal lesion that was surgically resected, resulting in complete remission of hypoglycaemia. Although rare, some patients with focal lesions may be responsive to diazoxide. As a consequence, we propose an algorithm that is based not on a 'formal' diazoxide response but on genetic testing, in which patients carrying paternally inherited ABCC8 or KCNJ11 mutations should always undergo 18F-DOPA PET-CT. © 2014 John Wiley & Sons Ltd.

  9. Modifications to the synthetic aperture microwave imaging diagnostic

    Science.gov (United States)

    Brunner, K. J.; Chorley, J. C.; Dipper, N. A.; Naylor, G.; Sharples, R. M.; Taylor, G.; Thomas, D. A.; Vann, R. G. L.

    2016-11-01

The synthetic aperture microwave imaging diagnostic has been operating on the MAST experiment since 2011. It has provided the first 2D images of B-X-O mode conversion windows and shown the feasibility of conducting 2D Doppler back-scattering (DBS) experiments. The diagnostic relies heavily on field programmable gate arrays to conduct its work. Recent successes and newly gained experience with the diagnostic have led us to modify it. The enhancements will enable pitch angle profile measurements, O- and X-mode separation, and the continuous acquisition of 2D DBS data. The diagnostic has also been installed on NSTX-U and has been acquiring data since May 2016.

  10. Segmentation of Medical Image using Clustering and Watershed Algorithms

    OpenAIRE

    M. C.J. Christ; R.M.S Parvathi

    2011-01-01

Problem statement: Segmentation plays an important role in medical imaging. Segmentation of an image is the division or separation of the image into dissimilar regions of similar attribute. In this study we propose a methodology that integrates a clustering algorithm and the marker-controlled watershed segmentation algorithm for medical image segmentation. The use of the conventional watershed algorithm for medical image analysis is pervasive because of its advantages, such as always being able to...

  11. Optimization of diagnostic imaging use in patients with acute abdominal pain (OPTIMA): Design and rationale

    Science.gov (United States)

    Laméris, Wytze; van Randen, Adrienne; Dijkgraaf, Marcel GW; Bossuyt, Patrick MM; Stoker, Jaap; Boermeester, Marja A

    2007-01-01

Background The acute abdomen is a frequent entity at the Emergency Department (ED), which usually needs rapid and accurate diagnostic work-up. Diagnostic work-up with imaging can consist of plain X-ray, ultrasonography (US), computed tomography (CT) and even diagnostic laparoscopy. However, no evidence-based guidelines exist in the current literature, and the actual diagnostic work-up of a patient with acute abdominal pain presenting to the ED varies greatly between hospitals and physicians. The OPTIMA study was designed to provide the evidence base for constructing an optimal diagnostic imaging guideline for patients with acute abdominal pain at the ED. Methods/design One thousand consecutive patients with abdominal pain > 2 hours and < 5 days will be enrolled in this multicentre trial. After clinical history, physical and laboratory examination, all patients will undergo a diagnostic imaging protocol consisting of plain X-ray (upright chest and supine abdomen), US and CT. The reference standard will be a post hoc assignment of the final diagnosis by an expert panel. The focus of the analysis will be on the added value of the imaging modalities over history and clinical examination, relative to the incremental costs. Discussion This study aims to provide the evidence base for the development of a diagnostic algorithm that can act as a guideline for ED physicians to evaluate patients with acute abdominal pain. PMID:17683592

  12. Relaxation Algorithm of Piecing-Error for Sub-Images

    Institute of Scientific and Technical Information of China (English)

    LI Yueping; TANG Pushan

    2001-01-01

During automatic image recognition or automatic reverse design of ICs, one often encounters the problem that sub-images must be pieced together into a whole image. The traditional piecing algorithm for sub-images accumulates a large error. In this paper, a relaxation algorithm for the piecing error of sub-images is presented. It eliminates the accumulated error of the traditional algorithm and greatly improves the quality of the pieced image. Starting from an initial pieced image, the centre and angle of every sub-image are continuously adjusted to lessen the error between adjacent sub-images, so that the quality of the pieced image improves. The presented results indicate that the proposed algorithm dramatically decreases the error while the quality of the ultimate pieced image remains acceptable. The time complexity of this algorithm is O(n ln n).
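The core idea, replacing one-pass chaining (which accumulates error) with repeated local adjustment of each sub-image against all of its neighbours, can be illustrated on a simplified model in which each sub-image has a scalar position and we are given measured offsets between grid neighbours. This is our own toy formulation; the paper additionally adjusts rotation angles.

```python
def relax_positions(rows, cols, dx_h, dx_v, iters=200):
    """Gauss-Seidel relaxation of sub-image positions on a rows x cols grid.

    dx_h[r][c]: measured offset from sub-image (r, c) to (r, c+1)
    dx_v[r][c]: measured offset from sub-image (r, c) to (r+1, c)
    Returns positions with sub-image (0, 0) anchored at 0.
    """
    p = [[0.0] * cols for _ in range(rows)]
    for _ in range(iters):
        for r in range(rows):
            for c in range(cols):
                if r == 0 and c == 0:
                    continue  # anchor fixes the global translation
                est, n = 0.0, 0
                # average the position predicted by every available neighbour,
                # so errors spread out instead of accumulating along one path
                if c > 0:        est += p[r][c-1] + dx_h[r][c-1]; n += 1
                if c < cols - 1: est += p[r][c+1] - dx_h[r][c];   n += 1
                if r > 0:        est += p[r-1][c] + dx_v[r-1][c]; n += 1
                if r < rows - 1: est += p[r+1][c] - dx_v[r][c];   n += 1
                p[r][c] = est / n
    return p
```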

  13. Phase-contrast enhanced mammography: A new diagnostic tool for breast imaging

    Energy Technology Data Exchange (ETDEWEB)

    Wang Zhentian; Thuering, Thomas; David, Christian; Roessl, Ewald; Trippel, Mafalda; Kubik-Huch, Rahel A.; Singer, Gad; Hohl, Michael K.; Hauser, Nik; Stampanoni, Marco [Swiss Light Source, Paul Scherrer Institut, 5232 Villigen (Switzerland); Laboratory for Micro and Nanotechnology, Paul Scherrer Institut, 5232 Villigen (Switzerland); Philips Technologie GmbH, Roentgenstrasse 24, 22335 Hamburg (Germany); Institute of Pathology, Kantonsspital Baden, 5404 Baden (Switzerland); Department of Radiology, Kantonsspital Baden, 5404 Baden (Switzerland); Institute of Pathology, Kantonsspital Baden, 5404 Baden (Switzerland); Department of Gynecology and Obstetrics, Interdisciplinary Breast Center Baden, Kantonsspital Baden, 5404 Baden (Switzerland); Swiss Light Source, Paul Scherrer Institut, 5232 Villigen, Switzerland and Institute for Biomedical Engineering, University and ETH Zuerich, 8092 Zuerich (Switzerland)

    2012-07-31

    Phase contrast and scattering-based X-ray imaging can potentially revolutionize the radiological approach to breast imaging by providing additional and complementary information to conventional, absorption-based methods. We investigated native, non-fixed whole breast samples using a grating interferometer with an X-ray tube-based configuration. Our approach simultaneously recorded absorption, differential phase contrast and small-angle scattering signals. The results show that this novel technique - combined with a dedicated image fusion algorithm - has the potential to deliver enhanced breast imaging with complementary information for an improved diagnostic process.

  14. Cost-effectiveness modelling in diagnostic imaging: a stepwise approach

    NARCIS (Netherlands)

    Sailer, A.M.; Zwam, W.H. van; Wildberger, J.E.; Grutters, J.P.C.

    2015-01-01

Diagnostic imaging (DI) is the fastest growing sector in medical expenditures and takes a central role in medical decision-making. The increasing number of various and new imaging technologies induces a growing demand for cost-effectiveness analysis (CEA) in imaging technology assessment. In this ar...

  15. Image Searching within Another Image Using Image Matching and Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Mehmet Karakoc

    2015-10-01

Full Text Available The main focus of this work is to realize image searching within another image in an efficient way. Image searching within another image is accomplished through the integrated use of image matching techniques and searching algorithms. Artificial neural networks, along with various image features such as average color value, color standard deviation, correlation and edge parameters, are used for image matching, whereas genetic algorithms are used for image searching. In this paper, an integrated method based on smart searching algorithms, quick image matching methods and parallel programming techniques is proposed and implemented. The proposed method was tested on several low- and high-resolution reference and template images. Results reveal that the proposed method can successfully match images and significantly reduce the total search time.
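The combination can be sketched as follows. This is our own minimal illustration, with cheap mean/variance features plus a raw-pixel check as the matching score and a basic elitist genetic algorithm (crossover, mutation, random immigrants) as the searcher; it is not the authors' neural-network matcher or their parallel implementation.

```python
import random

def patch_stats(img, x, y, w, h):
    # cheap matching features of a patch: mean value and variance
    vals = [img[j][i] for j in range(y, y + h) for i in range(x, x + w)]
    m = sum(vals) / len(vals)
    return m, sum((v - m) ** 2 for v in vals) / len(vals)

def fitness(img, tpl, x, y):
    # higher is better: negative distance between feature vectors, plus a
    # raw-pixel term so the global optimum is the true template location
    h, w = len(tpl), len(tpl[0])
    ms, vs = patch_stats(img, x, y, w, h)
    mt, vt = patch_stats(tpl, 0, 0, w, h)
    d = sum((img[y + j][x + i] - tpl[j][i]) ** 2
            for j in range(h) for i in range(w))
    return -((ms - mt) ** 2 + (vs - vt) ** 2) - d

def ga_search(img, tpl, pop=30, gens=100, seed=1):
    rng = random.Random(seed)
    H, W, h, w = len(img), len(img[0]), len(tpl), len(tpl[0])
    xmax, ymax = W - w, H - h
    def rand_ind():
        return (rng.randint(0, xmax), rng.randint(0, ymax))
    popn = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda p: fitness(img, tpl, *p), reverse=True)
        parents = popn[:pop // 2]                    # elitist selection
        children = [rand_ind() for _ in range(6)]    # random immigrants
        while len(children) < pop - len(parents):
            (x1, y1), (x2, y2) = rng.sample(parents, 2)
            x, y = rng.choice((x1, x2)), rng.choice((y1, y2))  # crossover
            if rng.random() < 0.4:                              # mutation
                x = min(xmax, max(0, x + rng.randint(-2, 2)))
                y = min(ymax, max(0, y + rng.randint(-2, 2)))
            children.append((x, y))
        popn = parents + children
    return max(popn, key=lambda p: fitness(img, tpl, *p))
```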

  16. Diagnostic Method of Diabetes Based on Support Vector Machine and Tongue Images.

    Science.gov (United States)

    Zhang, Jianfeng; Xu, Jiatuo; Hu, Xiaojuan; Chen, Qingguang; Tu, Liping; Huang, Jingbin; Cui, Ji

    2017-01-01

Objective. The purpose of this research is to develop a diagnostic method for diabetes based on standardized tongue images using a support vector machine (SVM). Methods. Tongue images of 296 diabetic subjects and 531 nondiabetic subjects were collected by the TDA-1 digital tongue instrument. Tongue body and tongue coating were separated by the division-merging method and the chrominance-threshold method. With extracted color and texture features of the tongue image as input variables, the diagnostic model of diabetes with SVM was trained. After optimizing the combination of SVM kernel parameters and input variables, the influences of the combinations on the model were analyzed. Results. After normalizing the parameters of tongue images, the accuracy rate of diabetes prediction increased from 77.83% to 78.77%. The accuracy rate and area under the curve (AUC) were not reduced after reducing the dimensions of the tongue features with principal component analysis (PCA), while substantially saving training time. During the training for selecting SVM parameters by genetic algorithm (GA), the cross-validation accuracy rate grew from about 72% to 83.06%. Finally, we compare with several state-of-the-art algorithms, and experimental results show that our algorithm has the best predictive accuracy. Conclusions. The diagnostic method of diabetes on the basis of tongue images in Traditional Chinese Medicine (TCM) is of great value, indicating the feasibility of digitalized tongue diagnosis.
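The shape of such a pipeline, dimensionality reduction with PCA followed by an SVM classifier, can be sketched as below. This is a generic illustration on synthetic feature vectors, with a minimal linear SVM trained by subgradient descent standing in for the paper's kernel SVM and GA-based parameter search.

```python
import numpy as np

def pca_fit(X, k):
    # principal components from the SVD of the centred data matrix
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def pca_transform(X, mu, comps):
    return (X - mu) @ comps.T

def svm_train(X, y, lam=0.01, lr=0.1, epochs=200):
    # linear SVM: full-batch subgradient descent on the hinge loss, y in {-1, +1}
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        viol = y * (X @ w + b) < 1                 # margin violators
        gw = lam * w - (y[viol][:, None] * X[viol]).sum(axis=0) / n
        gb = -y[viol].sum() / n
        w, b = w - lr * gw, b - lr * gb
    return w, b

def svm_predict(X, w, b):
    return np.sign(X @ w + b)
```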

  17. Diagnostic Method of Diabetes Based on Support Vector Machine and Tongue Images

    Science.gov (United States)

    Hu, Xiaojuan; Chen, Qingguang; Tu, Liping; Huang, Jingbin; Cui, Ji

    2017-01-01

Objective. The purpose of this research is to develop a diagnostic method for diabetes based on standardized tongue images using a support vector machine (SVM). Methods. Tongue images of 296 diabetic subjects and 531 nondiabetic subjects were collected by the TDA-1 digital tongue instrument. Tongue body and tongue coating were separated by the division-merging method and the chrominance-threshold method. With extracted color and texture features of the tongue image as input variables, the diagnostic model of diabetes with SVM was trained. After optimizing the combination of SVM kernel parameters and input variables, the influences of the combinations on the model were analyzed. Results. After normalizing the parameters of tongue images, the accuracy rate of diabetes prediction increased from 77.83% to 78.77%. The accuracy rate and area under the curve (AUC) were not reduced after reducing the dimensions of the tongue features with principal component analysis (PCA), while substantially saving training time. During the training for selecting SVM parameters by genetic algorithm (GA), the cross-validation accuracy rate grew from about 72% to 83.06%. Finally, we compare with several state-of-the-art algorithms, and experimental results show that our algorithm has the best predictive accuracy. Conclusions. The diagnostic method of diabetes on the basis of tongue images in Traditional Chinese Medicine (TCM) is of great value, indicating the feasibility of digitalized tongue diagnosis. PMID:28133611

  18. Diagnostic Method of Diabetes Based on Support Vector Machine and Tongue Images

    Directory of Open Access Journals (Sweden)

    Jianfeng Zhang

    2017-01-01

Full Text Available Objective. The purpose of this research is to develop a diagnostic method for diabetes based on standardized tongue images using a support vector machine (SVM). Methods. Tongue images of 296 diabetic subjects and 531 nondiabetic subjects were collected by the TDA-1 digital tongue instrument. Tongue body and tongue coating were separated by the division-merging method and the chrominance-threshold method. With extracted color and texture features of the tongue image as input variables, the diagnostic model of diabetes with SVM was trained. After optimizing the combination of SVM kernel parameters and input variables, the influences of the combinations on the model were analyzed. Results. After normalizing the parameters of tongue images, the accuracy rate of diabetes prediction increased from 77.83% to 78.77%. The accuracy rate and area under the curve (AUC) were not reduced after reducing the dimensions of the tongue features with principal component analysis (PCA), while substantially saving training time. During the training for selecting SVM parameters by genetic algorithm (GA), the cross-validation accuracy rate grew from about 72% to 83.06%. Finally, we compare with several state-of-the-art algorithms, and experimental results show that our algorithm has the best predictive accuracy. Conclusions. The diagnostic method of diabetes on the basis of tongue images in Traditional Chinese Medicine (TCM) is of great value, indicating the feasibility of digitalized tongue diagnosis.

  19. A new semi-fragile watermarking algorithm for image authentication

    Institute of Scientific and Technical Information of China (English)

    HAN De-zhi; HU Yu-ping

    2005-01-01

This paper presents a new semi-fragile watermarking algorithm for image authentication. It extracts image features from the low-frequency domain to generate two watermarks: one for classifying intentional content modification and the other for indicating the modified location. The algorithm provides an effective mechanism for image authentication: watermark generation and embedding use the image itself, and authentication of the received image needs no information about the original image or watermark. The algorithm increases watermark security and prevents watermark forgery. Experimental results show that the algorithm can distinguish intentional content modification from incidental tampering, and can also indicate the location where a modification takes place.

  20. The Watershed Algorithm for Image Segmentation

    Institute of Scientific and Technical Information of China (English)

    OU Yan; LIN Nan

    2007-01-01

This article introduces the watershed algorithm for image segmentation and illustrates the segmentation process by implementing the algorithm. By comparing it with three related algorithms, the article reveals both the advantages and the drawbacks of the watershed approach.
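A compact way to see how watershed flooding works is a marker-controlled, priority-queue implementation: basins grow outward from labelled seeds in order of increasing "elevation" (typically a gradient-magnitude image). The sketch below is a pure-Python illustration of the general algorithm, not the specific implementation compared in the article.

```python
import heapq

def watershed(height, markers):
    """Marker-controlled watershed on a 2D grid (pure-Python sketch).

    height:  2D list of elevation values (e.g. gradient magnitude)
    markers: 2D list, 0 for unlabelled pixels, >0 for seed labels
    Floods from the seeds in order of increasing elevation; each pixel takes
    the label of the basin that reaches it first.
    """
    rows, cols = len(height), len(height[0])
    labels = [row[:] for row in markers]
    heap = []
    for r in range(rows):
        for c in range(cols):
            if markers[r][c]:
                heapq.heappush(heap, (height[r][c], r, c))
    while heap:
        _, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not labels[nr][nc]:
                labels[nr][nc] = labels[r][c]   # claim pixel for this basin
                heapq.heappush(heap, (height[nr][nc], nr, nc))
    return labels
```

On a flat image with a single high ridge, two seeds on opposite sides flood their own low regions and meet at the ridge, which is where the segmentation boundary forms.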

  2. Image Processing Algorithms – A Comprehensive Study

    Directory of Open Access Journals (Sweden)

    Mahesh Prasanna K

    2014-06-01

Full Text Available Digital image processing is an ever-expanding and dynamic area with applications reaching into our everyday life, such as medicine, space exploration, surveillance, authentication, automated industry inspection and many more areas. These applications involve different processes like image enhancement and object detection [1]. Implementing such applications on a general-purpose computer can be easier, but not very time-efficient, due to additional constraints on memory and other peripheral devices. Application-specific hardware implementation offers much greater speed than a software implementation. With advances in VLSI (Very Large Scale Integration) technology, hardware implementation has become an attractive alternative. Implementing complex computation tasks on hardware, and exploiting parallelism and pipelining in algorithms, yields significant reductions in execution time [2].

  3. Improved Bat Algorithm Applied to Multilevel Image Thresholding

    Directory of Open Access Journals (Sweden)

    Adis Alihodzic

    2014-01-01

Full Text Available Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the computational time required for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adapted one of the latest swarm intelligence algorithms, the bat algorithm, to the multilevel image thresholding problem. Tests on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm by adding elements from differential evolution and from the artificial bee colony algorithm. The proposed improved bat algorithm proved better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed.
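As a concrete picture of the underlying setup (not the authors' improved variant), a stripped-down bat algorithm maximizing Otsu's between-class variance over candidate threshold vectors might look like this; the loudness and pulse-rate schedules of the full bat algorithm are omitted for brevity.

```python
import random

def between_class_variance(hist, thresholds):
    # Otsu's criterion generalized to multiple thresholds on a histogram
    total = sum(hist)
    mu_total = sum(i * h for i, h in enumerate(hist)) / total
    bounds = [0] + sorted(thresholds) + [len(hist)]
    var = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = sum(hist[lo:hi])
        if w == 0:
            continue
        mu = sum(i * hist[i] for i in range(lo, hi)) / w
        var += (w / total) * (mu - mu_total) ** 2
    return var

def bat_thresholds(hist, k=2, n_bats=20, iters=100, seed=3):
    # simplified bat algorithm: each bat is a k-vector of thresholds that is
    # pulled toward the current best with a random frequency, with an
    # occasional random walk around the best solution
    rng = random.Random(seed)
    L = len(hist)
    bats = [sorted(rng.sample(range(1, L), k)) for _ in range(n_bats)]
    vel = [[0.0] * k for _ in range(n_bats)]
    best = max(bats, key=lambda t: between_class_variance(hist, t))
    for _ in range(iters):
        for i, bat in enumerate(bats):
            f = rng.random()  # random frequency
            vel[i] = [v + f * (b - g) for v, b, g in zip(vel[i], bat, best)]
            cand = [x - v for x, v in zip(bat, vel[i])]
            if rng.random() < 0.5:  # local random walk around the best
                cand = [g + rng.gauss(0, 1) for g in best]
            cand = sorted(min(L - 1, max(1, int(round(c)))) for c in cand)
            if between_class_variance(hist, cand) > between_class_variance(hist, bat):
                bats[i] = cand  # greedy acceptance
        best = max(bats + [best], key=lambda t: between_class_variance(hist, t))
    return best
```

The point of the metaheuristic is visible in the cost: each iteration evaluates only `n_bats` candidates, while exhaustive search over k thresholds grows combinatorially with the number of grey levels.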

  4. Inverse synthetic aperture radar imaging principles, algorithms and applications

    CERN Document Server

Chen, Victor C

    2014-01-01

Inverse Synthetic Aperture Radar Imaging: Principles, Algorithms and Applications is based on the latest research on ISAR imaging of moving targets and non-cooperative target recognition (NCTR). With a focus on advances and applications, this book provides readers with a working knowledge of various algorithms for ISAR imaging of targets and their implementation in MATLAB. These MATLAB algorithms will prove useful for visualizing and manipulating simulated ISAR images.

  5. Optimisation of patient protection and image quality in diagnostic ...

    African Journals Online (AJOL)

Optimisation of patient protection and image quality in diagnostic radiology. ... The study leads to the introduction of the concept of plan-do-check-act on QC results ... (QA) programme and continues to collect data for establishment of DRLs.

  6. MEDICAL IMAGE SEGMENTATION BASED ON A MODIFIED LEVEL SET ALGORITHM

    Institute of Scientific and Technical Information of China (English)

    Yang Yong; Lin Pan; Zheng Chongxun; Gu Jianwen

    2005-01-01

Objective To present a novel modified level set algorithm for medical image segmentation. Methods The algorithm is developed by substituting the speed function of the level set algorithm with the region and gradient information of the image, instead of the conventional gradient information alone. The new algorithm has been tested on a series of medical images of different modalities. Results We present various examples and also evaluate and compare the performance of our method with the classical level set method on weak boundaries and noisy images. Conclusion Experimental results show the proposed algorithm is effective and robust.

  7. [X-ray endoscopic semiotics and diagnostic algorithm of radiation studies of preneoplastic gastric mucosa changes].

    Science.gov (United States)

    Akberov, R F; Gorshkov, A N

    1997-01-01

The X-ray endoscopic semiotics of precancerous gastric mucosal changes (epithelial dysplasia, intestinal epithelial rearrangement) was examined using the results of 1,574 gastric examinations. A diagnostic algorithm was developed for radiation studies in the diagnosis of the above pathology.

  8. PARAMETRIC EVALUATION ON THE PERFORMANCE OF VARIOUS IMAGE COMPRESSION ALGORITHMS

    OpenAIRE

    V. Sutha Jebakumari; P. Arockia Jansi Rani

    2011-01-01

Wavelet analysis plays a vital role in signal processing, especially in image compression. In this paper, various compression algorithms like block truncation coding, EZW and SPIHT are studied and analyzed; their algorithmic ideas and steps are given. The parameters of all these algorithms are analyzed, and the best parameter for each compression algorithm is determined.

  9. PARAMETRIC EVALUATION ON THE PERFORMANCE OF VARIOUS IMAGE COMPRESSION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    V. Sutha Jebakumari

    2011-05-01

Full Text Available Wavelet analysis plays a vital role in signal processing, especially in image compression. In this paper, various compression algorithms like block truncation coding, EZW and SPIHT are studied and analyzed; their algorithmic ideas and steps are given. The parameters of all these algorithms are analyzed, and the best parameter for each compression algorithm is determined.

  10. Steganography Algorithm to Hide Secret Message inside an Image

    CERN Document Server

    Ibrahim, Rosziati

    2011-01-01

In this paper, the authors propose a new algorithm to hide data inside an image using a steganography technique. The proposed algorithm uses binary codes and pixels inside an image. The file is zipped before being converted to binary codes, to maximize the amount of data that can be stored inside the image. By applying the proposed algorithm, a system called Steganography Imaging System (SIS) is developed. The system is then tested to assess the viability of the proposed algorithm. Various sizes of data are stored inside the images, and the PSNR (peak signal-to-noise ratio) is captured for each of the images tested. The stego images retain high PSNR values, hence the new steganography algorithm is very efficient at hiding data inside an image.
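The scheme described (compress the data, convert it to a bit stream, store the bits in pixel least-significant bits) can be sketched as follows. This is a generic LSB illustration, not the SIS implementation: we use zlib in place of the paper's zip file, a flat list of 8-bit grey pixels in place of a real image, and a hypothetical 4-byte length prefix so the extractor knows where the payload ends.

```python
import zlib

def embed(pixels, message):
    # compress the message, then store its bits, length-prefixed, in the
    # least significant bit of successive pixels
    payload = zlib.compress(message)
    data = len(payload).to_bytes(4, 'big') + payload
    bits = []
    for byte in data:
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))  # MSB first
    if len(bits) > len(pixels):
        raise ValueError('cover image too small for this payload')
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit   # each pixel changes by at most 1
    return stego

def extract(pixels):
    bits = [p & 1 for p in pixels]
    def take(n_bytes, bit_offset):
        out = bytearray()
        for i in range(n_bytes):
            byte = 0
            for b in bits[bit_offset + 8 * i: bit_offset + 8 * i + 8]:
                byte = (byte << 1) | b
            out.append(byte)
        return bytes(out)
    length = int.from_bytes(take(4, 0), 'big')
    return zlib.decompress(take(length, 32))
```

Because only the least significant bit of each pixel is touched, the stego image differs from the cover by at most one grey level per pixel, which is why the PSNR stays high.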

  11. Clutter discrimination algorithm simulation in pulse laser radar imaging

    Science.gov (United States)

    Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; Su, Xuan; Zhu, Fule

    2015-10-01

Pulse laser radar imaging performance is greatly influenced by different kinds of clutter. Various algorithms have been developed to mitigate clutter; however, estimating the performance of a new algorithm is difficult. Here, a simulation model for evaluating clutter discrimination algorithms is presented. The model consists of laser pulse emission, clutter jamming, laser pulse reception and target image production. Additionally, a hardware platform was set up to gather clutter data reflected by ground and trees; the logged data serve as the clutter-jamming input to the simulation model. The hardware platform includes a laser diode, a laser detector and a high-sample-rate data logging circuit. The laser diode transmits short laser pulses (40 ns FWHM) at a 12.5 kHz pulse rate and a 905 nm wavelength. An analog-to-digital converter chip integrated in the sampling circuit works at 250 megasamples per second. Together, the simulation model and the hardware platform constitute a clutter discrimination algorithm simulation system. Using this system, after analyzing the logged clutter data, a new compound pulse detection algorithm was developed that combines a matched filter algorithm with constant fraction discrimination (CFD). First, the laser echo pulse signal is processed by the matched filter; CFD is then applied. Finally, clutter jamming from ground and trees is discriminated and the target image is produced. Laser radar images were simulated using the CFD algorithm, the matched filter algorithm and the new algorithm, respectively. Simulation results demonstrate that the new algorithm best mitigates clutter reflected by ground and trees when imaging the target.
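The compound detector described, matched filtering followed by constant fraction discrimination, can be sketched in a few lines. This is an illustrative pure-Python version of the general technique, not the authors' hardware implementation; a classical analogue CFD uses a delayed-and-inverted sum with a zero crossing, reduced here to the common digital form of a fractional-threshold crossing on the pulse's own peak.

```python
def matched_filter(signal, template):
    # correlate the received waveform with the known pulse shape, which
    # maximizes SNR for a pulse of that shape in white noise
    n, m = len(signal), len(template)
    return [sum(signal[i + j] * template[j] for j in range(m))
            for i in range(n - m + 1)]

def cfd_index(pulse, fraction=0.5):
    # constant fraction discrimination: the timing point where the pulse
    # first crosses a fixed fraction of its own peak, so the timing is
    # independent of pulse amplitude
    thr = fraction * max(pulse)
    for i, v in enumerate(pulse):
        if v >= thr:
            return i
    return None

def detect(signal, template, fraction=0.5):
    # compound detector: matched filter first, then CFD on the output
    return cfd_index(matched_filter(signal, template), fraction)
```

Because the CFD threshold scales with the peak, echoes of different amplitudes yield the same timing index, which is the property that makes the compound scheme robust to return-strength variations.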

  12. Efficient predictive algorithms for image compression

    CERN Document Server

    Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla

    2017-01-01

    This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...

  13. A survey of medical diagnostic imaging technologies

    Energy Technology Data Exchange (ETDEWEB)

    Heese, V.; Gmuer, N.; Thomlinson, W.

    1991-10-01

    The fields of medical imaging and medical imaging instrumentation are increasingly important. The state-of-the-art continues to advance at a very rapid pace. In fact, various medical imaging modalities are under development at the National Synchrotron Light Source (such as MECT and Transvenous Angiography.) It is important to understand how these techniques compare with today's more conventional imaging modalities. The purpose of this report is to provide some basic information about the various medical imaging technologies currently in use and their potential developments as a basis for this comparison. This report is by no means an in-depth study of the physics and instrumentation of the various imaging modalities; instead, it is an attempt to provide an explanation of the physical bases of these techniques and their principal clinical and research capabilities.

  15. A Replication of the Autism Diagnostic Observation Schedule (ADOS) Revised Algorithms

    Science.gov (United States)

    Gotham, Katherine; Risi, Susan; Dawson, Geraldine; Tager-Flusberg, Helen; Joseph, Robert; Carter, Alice; Hepburn, Susan; McMahon, William; Rodier, Patricia; Hyman, Susan L.; Sigman, Marian; Rogers, Sally; Landa, Rebecca; Spence, M. Anne; Osann, Kathryn; Flodman, Pamela; Volkmar, Fred; Hollander, Eric; Buxbaum, Joseph; Pickles, Andrew; Lord, Catherine

    2008-01-01

A study replicated the module comparability and predictive ability of the revised algorithms of the Autism Diagnostic Observation Schedule (ADOS) in an independent dataset of children with autism. Results indicated that the revised ADOS algorithms improved module comparability and predictive validity for autistic children compared with the earlier…

  17. Array antenna diagnostics with the 3D reconstruction algorithm

    DEFF Research Database (Denmark)

    Cappellin, Cecilia; Meincke, Peter; Pivnenko, Sergey;

    2012-01-01

    The 3D reconstruction algorithm is applied to a slotted waveguide array measured at the DTU-ESA Spherical Near-Field Antenna Test Facility. One slot of the array is covered by conductive tape and an error is present in the array excitation. Results show the accuracy obtainable by the 3D reconstruction algorithm. Considerations on the measurement sampling, the obtainable spatial resolution, and the possibility of taking full advantage of the reconstruction geometry are provided.

  18. Algorithms for Image Analysis and Combination of Pattern Classifiers with Application to Medical Diagnosis

    CERN Document Server

    Georgiou, Harris

    2009-01-01

    Medical Informatics and the application of modern signal processing to assist the diagnostic process in medical imaging is one of the more recent and active research areas today. This thesis addresses a variety of issues related to the general problem of medical image analysis, specifically in mammography, and presents a series of algorithms and design approaches for all the intermediate levels of a modern system for computer-aided diagnosis (CAD). The diagnostic problem is analyzed with a systematic approach, first defining the imaging characteristics and features that are relevant to probable pathology in mammograms. Next, these features are quantified and fused into new, integrated radiological systems that exhibit embedded digital signal processing, in order to improve the final result and minimize the radiological dose for the patient. At a higher level, special algorithms are designed for detecting and encoding these clinically interesting imaging features, in order to be used as input to ...

  19. Improving night sky star image processing algorithm for star sensors.

    Science.gov (United States)

    Arbabmir, Mohammad Vali; Mohammadi, Seyyed Mohammad; Salahshour, Sadegh; Somayehee, Farshad

    2014-04-01

    In this paper, the night sky star image processing algorithm, consisting of image preprocessing, star pattern recognition, and centroiding steps, is improved. It is shown that the proposed noise reduction approach can preserve more necessary information than other frequently used approaches. It is also shown that the proposed thresholding method unlike commonly used techniques can properly perform image binarization, especially in images with uneven illumination. Moreover, the higher performance rate and lower average centroiding estimation error of near 0.045 for 400 simulated images compared to other algorithms show the high capability of the proposed night sky star image processing algorithm.
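
The centroiding step described above can be sketched in a few lines. This is a generic intensity-weighted centroid with a fixed global threshold, not the paper's improved thresholding or noise-reduction scheme; the synthetic Gaussian star and the function name `weighted_centroid` are illustrative assumptions.

```python
import numpy as np

def weighted_centroid(image, threshold):
    """Sub-pixel centroid of a star spot: subtract a global threshold,
    then take the intensity-weighted mean of the remaining pixels."""
    img = np.where(image > threshold, image - threshold, 0.0)
    total = img.sum()
    if total == 0.0:
        return None  # no pixels above threshold
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / total, (cols * img).sum() / total

# Synthetic Gaussian star centred at row 12.3, column 7.8
y, x = np.indices((24, 24))
star = np.exp(-((y - 12.3) ** 2 + (x - 7.8) ** 2) / 4.0)
r, c = weighted_centroid(star, 0.05)
```

Subtracting the threshold before weighting suppresses the background bias that a plain weighted mean would pick up from residual noise.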

  20. Spatially adaptive regularized iterative high-resolution image reconstruction algorithm

    Science.gov (United States)

    Lim, Won Bae; Park, Min K.; Kang, Moon Gi

    2000-12-01

    High resolution images are often required in applications such as remote sensing, frame freeze in video, military and medical imaging. Digital image sensor arrays, which are used for image acquisition in many imaging systems, are not dense enough to prevent aliasing, so the acquired images will be degraded by aliasing effects. To prevent aliasing without loss of resolution, a dense detector array is required. But it may be very costly or unavailable, thus, many imaging systems are designed to allow some level of aliasing during image acquisition. The purpose of our work is to reconstruct an unaliased high resolution image from the acquired aliased image sequence. In this paper, we propose a spatially adaptive regularized iterative high resolution image reconstruction algorithm for blurred, noisy and down-sampled image sequences. The proposed approach is based on a Constrained Least Squares (CLS) high resolution reconstruction algorithm, with spatially adaptive regularization operators and parameters. These regularization terms are shown to improve the reconstructed image quality by forcing smoothness, while preserving edges in the reconstructed high resolution image. Accurate sub-pixel motion registration is the key of the success of the high resolution image reconstruction algorithm. However, sub-pixel motion registration may have some level of registration error. Therefore, a reconstruction algorithm which is robust against the registration error is required. The registration algorithm uses a gradient based sub-pixel motion estimator which provides shift information for each of the recorded frames. The proposed algorithm is based on a technique of high resolution image reconstruction, and it solves spatially adaptive regularized constrained least square minimization functionals. In this paper, we show that the reconstruction algorithm gives dramatic improvements in the resolution of the reconstructed image and is effective in handling the aliased information. 
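
The constrained-least-squares reconstruction idea can be illustrated on a toy 1-D problem: a signal is blurred, down-sampled and corrupted with noise, then recovered by solving a Tikhonov-regularized normal equation. This is a minimal, single-frame sketch under assumed operators (circular 3-tap blur, factor-2 decimation, second-difference smoothness prior, hand-picked lambda); it omits the paper's motion registration and spatial adaptivity.

```python
import numpy as np

n = 32                                   # high-resolution length
x_true = np.zeros(n)
x_true[10:22] = 1.0                      # simple box signal

# Observation model: circular 3-tap blur H followed by 2x down-sampling D
H = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):
        H[i, j % n] = 1.0 / 3.0
D = np.eye(n)[::2]
A = D @ H

rng = np.random.default_rng(0)
y = A @ x_true + 0.01 * rng.standard_normal(n // 2)

# Smoothness prior: second-difference operator L
L = (np.diag(np.full(n, -2.0))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))

# Constrained least squares: minimize ||y - A x||^2 + lam * ||L x||^2
lam = 0.05
x_hat = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ y)
```

The regularization term fills in the directions the down-sampled data leave unobserved, trading a little edge smoothing for stability.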

  1. Application of Parallel Algorithm Approach for Performance Optimization of Oil Paint Image Filter Algorithm

    Directory of Open Access Journals (Sweden)

    Siddhartha Mukherjee

    2014-04-01

    Full Text Available This paper gives a detailed study of the performance of an image filter algorithm with various parameters applied to an image in the RGB model. There are various popular image filters which consume a large amount of computing resources for processing. The oil paint image filter is one of the most interesting of these, and also one of the most performance-hungry. The current research tries to improve the oil paint image filter algorithm by using the parallel pattern library. With increasing kernel size, the processing time of the oil paint image filter algorithm increases exponentially. In various blogs and forums, questions about a faster oil paint filter have been asked repeatedly.
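
For reference, the classic oil-paint effect the abstract refers to bins neighbourhood intensities and replaces each pixel by the mean of the most populated bin. Below is a plain, sequential grayscale sketch (the paper's contribution is parallelizing this per-pixel loop); the `radius` and `levels` defaults are illustrative.

```python
import numpy as np

def oil_paint(gray, radius=2, levels=8):
    """Sequential oil-paint filter for a grayscale uint8 image.

    Each output pixel is the mean intensity of the most populated
    intensity bin inside the (2*radius+1)^2 neighbourhood.
    """
    h, w = gray.shape
    out = np.zeros((h, w))
    bins = np.minimum((gray.astype(int) * levels) // 256, levels - 1)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            nb = bins[i0:i1, j0:j1].ravel()
            nv = gray[i0:i1, j0:j1].astype(float).ravel()
            counts = np.bincount(nb, minlength=levels)
            top = counts.argmax()          # most frequent intensity bin
            out[i, j] = nv[nb == top].mean()
    return out

flat = np.full((12, 12), 100, dtype=np.uint8)
painted = oil_paint(flat)
```

Each pixel's work is independent of every other pixel's, which is exactly why the per-pixel loop parallelizes well.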

  2. Radiation exposure from diagnostic imaging among patients with gastrointestinal disorders.

    LENUS (Irish Health Repository)

    Desmond, Alan N

    2012-03-01

    There are concerns about levels of radiation exposure among patients who undergo diagnostic imaging for inflammatory bowel disease (IBD), compared with other gastrointestinal (GI) disorders. We quantified imaging studies and estimated the cumulative effective dose (CED) of radiation received by patients with organic and functional GI disorders. We also identified factors and diagnoses associated with high CEDs.

  3. A Hybrid Neural Network-Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics

    Science.gov (United States)

    Kobayashi, Takahisa; Simon, Donald L.

    2001-01-01

    In this paper, a model-based diagnostic method, which utilizes Neural Networks and Genetic Algorithms, is investigated. Neural networks are applied to estimate the engine internal health, and Genetic Algorithms are applied for sensor bias detection and estimation. This hybrid approach takes advantage of the nonlinear estimation capability provided by neural networks while improving the robustness to measurement uncertainty through the application of Genetic Algorithms. The hybrid diagnostic technique also has the ability to rank multiple potential solutions for a given set of anomalous sensor measurements in order to reduce false alarms and missed detections. The performance of the hybrid diagnostic technique is evaluated through some case studies derived from a turbofan engine simulation. The results show this approach is promising for reliable diagnostics of aircraft engines.

  4. Using qualitative research to inform development of a diagnostic algorithm for UTI in children.

    Science.gov (United States)

    de Salis, Isabel; Whiting, Penny; Sterne, Jonathan A C; Hay, Alastair D

    2013-06-01

    Diagnostic and prognostic algorithms can help reduce clinical uncertainty. The selection of candidate symptoms and signs to be measured in case report forms (CRFs) for potential inclusion in diagnostic algorithms needs to be comprehensive, clearly formulated and relevant for end users. The aim was to investigate whether qualitative methods could assist in designing CRFs in research developing diagnostic algorithms; specifically, whether qualitative methods could have assisted in designing the CRF for the Health Technology Assessment funded Diagnosis of Urinary Tract infection in Young children (DUTY) study, which will develop a diagnostic algorithm to improve recognition of urinary tract infection (UTI) in young children. Qualitative methods were applied using semi-structured interviews of 30 UK doctors and nurses working with young children in primary care and a Children's Emergency Department. We elicited features that clinicians believed useful in diagnosing UTI and compared these, for presence or absence and terminology, with the DUTY CRF. Despite much agreement between clinicians' accounts and the DUTY CRFs, we identified a small number of potentially important symptoms and signs not included in the CRF, and some included items that could have been reworded to improve understanding and final data analysis. This study uniquely demonstrates the role of qualitative methods in the design and content of CRFs used for developing diagnostic (and prognostic) algorithms. Research groups developing such algorithms should consider using qualitative methods to inform the selection and wording of candidate symptoms and signs.

  5. Mathematical (diagnostic) algorithms in the digitization of oral histopathology: The new frontier in histopathological diagnosis

    Directory of Open Access Journals (Sweden)

    Abhishek Banerjee

    2015-01-01

    Full Text Available The technological progress in the digitalization of a complete histological glass slide has opened a new door in tissue-based diagnosis. Automated slide diagnosis can be made possible by the use of mathematical algorithms which are formulated by binary codes or values. These algorithms (diagnostic algorithms) include both object-based (object features, structures) and pixel-based (texture measures) approaches. The intra- and inter-observer errors inherent in the visual diagnosis of a histopathological slide are largely replaced by the use of diagnostic algorithms, leading to a standardized and reproducible diagnosis. The present paper reviews the advances in digital histopathology, especially those related to the use of mathematical (diagnostic) algorithms in the field of oral histopathology. The literature was reviewed for data relating to the use of algorithms utilized in the construction of computational software with special applications in oral histopathological diagnosis. The data were analyzed, and the types and end targets of the algorithms were tabulated. The advantages, specificities and reproducibility of the software, its shortcomings and its comparison with traditional methods of histopathological diagnosis were evaluated. Algorithms help in automated slide diagnosis by creating software with possibly reduced errors and bias, and with a high degree of specificity, sensitivity, and reproducibility. Akin to the identification of thumbprints and faces, software for histopathological diagnosis will in the near future be an important part of histopathological diagnosis.

  6. Diagnostic imaging analysis of the impacted mesiodens

    Energy Technology Data Exchange (ETDEWEB)

    Noh, Jeong Jun; Choi, Bo Ram; Jeong, Hwan Seok; Huh, Kyung Hoe; Yi, Won Jin; Heo, Min Suk; Lee, Sam Sun; Choi, Soon Chul [School of Dentistry, Seoul National University, Seoul (Korea, Republic of)

    2010-06-15

    The research was performed to predict the three-dimensional relationship between the impacted mesiodens and the maxillary central incisors, and the proximity to anatomic structures, by comparing their panoramic images with the CT images. Among the patients visiting Seoul National University Dental Hospital from April 2003 to July 2007, those with mesiodens were selected (154 mesiodens of 120 patients). The numbers, shapes, orientation and positional relationship of mesiodens with maxillary central incisors were investigated in the panoramic images. The proximity to the anatomical structures and complications were investigated in the CT images as well. The sex ratio (M : F) was 2.28 : 1 and the mean number of mesiodens per patient was 1.28. Conical shape was 84.4% and inverted orientation was 51.9%. There were more cases of anatomical structure encroachment, especially on the nasal floor and nasopalatine duct, when the mesiodens was not superimposed with the central incisor. There were, however, many cases of nasopalatine duct encroachment when the mesiodens was superimposed with the apical 1/3 of the central incisor (52.6%). Delayed eruption (55.6%), crown rotation (66.7%) and crown resorption (100%) were observed when the mesiodens was superimposed with the crown of the central incisor. It is possible to predict the three-dimensional relationship between the impacted mesiodens and the maxillary central incisors in the panoramic images, but further details should be confirmed by the CT images when necessary.

  7. DIAGNOSTIC IMAGING OF THE THROWING ATHLETE’S SHOULDER

    OpenAIRE

    Malone, Terry; Hazle, Charles

    2013-01-01

    The diagnostic capabilities of advanced imaging have increasingly enabled clinicians to delineate between structural alterations and injuries more efficiently than ever before. These impressive gains have unfortunately begun to foster a reliance on imaging at the expense of quality in the clinical examination. Ideally, imaging of the shoulder complex is performed to confirm the provisional diagnosis developed from the history and clinical exam, rather than to create such. This clinical commentar...

  8. MUTUAL IMAGE TRANSFORMATION ALGORITHMS FOR VISUAL INFORMATION PROCESSING AND RETRIEVAL

    Directory of Open Access Journals (Sweden)

    G. A. Kukharev

    2017-01-01

    Full Text Available Subject of Research. The paper deals with methods and algorithms for mutual transformation of related pairs of images in order to enhance the capabilities of cross-modal multimedia retrieval (CMMR) technologies. We have thoroughly studied the problem of mutual transformation of face images of various kinds (e.g. photos and drawn pictures). This problem is widely represented in practice. Research in this area is based on existing datasets. The algorithms we have proposed in this paper can be applied to arbitrary pairs of related images due to the unified mathematical specification. Method. We have presented three image transformation algorithms. The first one is based on principal component analysis and the Karhunen-Loève transform (1DPCA/1DKLT). Unlike the existing solution, it does not use the training set during the transformation process. The second algorithm assumes generation of an image population. The third algorithm performs the transformation based on two-dimensional principal component analysis and the Karhunen-Loève transform (2DPCA/2DKLT). Main Results. The experiments on image transformation and population generation have revealed the main features of each algorithm. The first algorithm allows construction of an accurate and stable model of transition between two given sets of images. The second algorithm can be used to add new images to existing bases, and the third algorithm is capable of performing the transformation outside the training dataset. Practical Relevance. Taking into account the qualities of the proposed algorithms, we have provided recommendations concerning their application. Possible scenarios include construction of a transition model for related pairs of images, mutual transformation of the images inside and outside the dataset, as well as population generation in order to increase the representativeness of existing datasets. Thus, the proposed algorithms can be used to improve the reliability of face recognition performed on images
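
A heavily simplified sketch of the transition-model idea: paired training vectors are projected onto their principal subspaces and a linear map between the two coefficient spaces is fit by least squares. The random data, subspace dimension and function names are illustrative assumptions, not the paper's 1DPCA/1DKLT procedure.

```python
import numpy as np

def fit_pca(data, k):
    """Mean and top-k principal axes (rows) of row-vector data."""
    mu = data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mu, full_matrices=False)
    return mu, vt[:k]

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 16))        # source "images", flattened
Y = X @ rng.standard_normal((16, 16))    # paired target "images"

mu_x, Px = fit_pca(X, 8)
mu_y, Py = fit_pca(Y, 8)
A_coef = (X - mu_x) @ Px.T               # source PCA coefficients
B_coef = (Y - mu_y) @ Py.T               # target PCA coefficients
M = np.linalg.lstsq(A_coef, B_coef, rcond=None)[0]

def transform(v):
    """Map one source vector into the target modality."""
    return ((v - mu_x) @ Px.T @ M) @ Py + mu_y

pred = np.array([transform(v) for v in X])
```

Working in the coefficient spaces keeps the learned map small (k x k) regardless of the image resolution.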

  9. Diagnostic Imaging for Dental Implant Therapy

    Directory of Open Access Journals (Sweden)

    Aishwarya Nagarajan

    2014-01-01

    Full Text Available A dental implant is a device made of alloplastic (foreign) material implanted into the jaw bone beneath the mucosal layer to support a fixed or removable dental prosthesis. Dental implants are gaining immense popularity and wide acceptance because they not only replace lost teeth but also provide permanent restorations that do not interfere with oral function or speech or compromise the self-esteem of a patient. Appropriate treatment planning for replacement of lost teeth is required, and imaging plays a pivotal role in ensuring a satisfactory outcome. The development of pre-surgical imaging techniques and surgical templates helps the dentist place the implants with relative ease. This article focuses on the various types of imaging modalities that have a pivotal role in implant therapy.

  10. Performance analysis of Non Linear Filtering Algorithms for underwater images

    CERN Document Server

    Padmavathi, Dr G; Kumar, Mr M Muthu; Thakur, Suresh Kumar

    2009-01-01

    Image filtering algorithms are applied to images to remove the different types of noise that are either present in the image during capture or injected into the image during transmission. Underwater images when captured usually have Gaussian noise, speckle noise and salt-and-pepper noise. In this work, five different image filtering algorithms are compared for the three different noise types. The performances of the filters are compared using the Peak Signal to Noise Ratio (PSNR) and Mean Square Error (MSE). The modified spatial median filter gives desirable results in terms of the above two parameters for the three different noise types. Forty underwater images are taken for the study.
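
The two comparison metrics used above are standard and easy to state precisely; a minimal sketch:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return float(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB (higher means less distortion)."""
    m = mse(a, b)
    return float("inf") if m == 0.0 else 10.0 * np.log10(peak * peak / m)

a = np.zeros((4, 4))
b = np.full((4, 4), 10.0)
error = mse(a, b)        # 100.0
quality = psnr(a, b)     # about 28.1 dB for 8-bit peak 255
```

For 8-bit images `peak` is 255; identical images give infinite PSNR, which is why PSNR is reported only for distorted outputs.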

  11. Automated Photogrammetric Image Matching with Sift Algorithm and Delaunay Triangulation

    DEFF Research Database (Denmark)

    Karagiannis, Georgios; Antón Castro, Francesc/François; Mioc, Darka

    2016-01-01

    An algorithm for image matching of multi-sensor and multi-temporal satellite images is developed. The method is based on the SIFT feature detector proposed by Lowe in (Lowe, 1999). First, SIFT feature points are detected independently in two images (reference and sensed image). Then the Delaunay triangulations of each feature set for each image are computed. The isomorphism of the Delaunay triangulations is determined to guarantee the quality of the image matching. The algorithm is implemented in Matlab and tested on World-View 2, SPOT6 and TerraSAR-X image patches.

  12. An improved SIFT algorithm based on KFDA in image registration

    Science.gov (United States)

    Chen, Peng; Yang, Lijuan; Huo, Jinfeng

    2016-03-01

    As a kind of stable feature matching algorithm, SIFT has been widely used in many fields. In order to further improve the robustness of the SIFT algorithm, an improved SIFT algorithm with Kernel Discriminant Analysis (KFDA-SIFT) is presented for image registration. The algorithm applies KFDA to the SIFT descriptors to form the feature extraction matrix, uses the new descriptors to conduct the feature matching, and finally uses RANSAC to deal with the matches for further purification. The experiments show that the presented algorithm is robust to image changes in scale, illumination, perspective, expression and tiny pose, with higher matching accuracy.

  13. Multiple-image encryption algorithm based on mixed image element and permutation

    Science.gov (United States)

    Zhang, Xiaoqiang; Wang, Xuesong

    2017-05-01

    To improve encryption efficiency and facilitate the secure transmission of multiple digital images, by defining the pure image element and mixed image element, this paper presents a new multiple-image encryption (MIE) algorithm based on the mixed image element and permutation, which can simultaneously encrypt any number of images. Firstly, segment the original images into pure image elements; secondly, scramble all the pure image elements with the permutation generated by the piecewise linear chaotic map (PWLCM) system; thirdly, combine mixed image elements into scrambled images; finally, diffuse the content of mixed image elements by performing the exclusive OR (XOR) operation among scrambled images and the chaotic image generated by another PWLCM system. The comparison with two similar algorithms is made. Experimental results and algorithm analyses show that the proposed MIE algorithm is very simple and efficient, which is suitable for practical image encryption.
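
A toy sketch of the scramble-and-diffuse structure described above, assuming the standard piecewise linear chaotic map: one PWLCM sequence orders the pixels of all images at once (permutation), and a second sequence is quantized into an XOR keystream (diffusion). The key values, the quantization, and scrambling single pixels rather than the paper's image elements are all illustrative simplifications.

```python
import numpy as np

def pwlcm_sequence(x0, p, n):
    """Iterate the piecewise linear chaotic map (PWLCM) n times."""
    out = np.empty(n)
    x = x0
    for i in range(n):
        x = x if x < 0.5 else 1.0 - x          # map is symmetric about 0.5
        x = x / p if x < p else (x - p) / (0.5 - p)
        out[i] = x
    return out

KEY = (0.37, 0.29, 0.61, 0.41)   # (perm seed, perm p, stream seed, stream p)

def keystream(n, seed, p):
    """Quantize a chaotic sequence into an 8-bit XOR keystream."""
    return (np.floor(pwlcm_sequence(seed, p, n) * 256) % 256).astype(np.uint8)

def encrypt(images, key=KEY):
    """Permute the pixels of all images jointly, then XOR-diffuse them."""
    stack = np.stack(images).astype(np.uint8)
    flat = stack.ravel()
    perm = np.argsort(pwlcm_sequence(key[0], key[1], flat.size))
    return (flat[perm] ^ keystream(flat.size, key[2], key[3])).reshape(stack.shape)

def decrypt(cipher, key=KEY):
    flat = cipher.ravel()
    perm = np.argsort(pwlcm_sequence(key[0], key[1], flat.size))
    plain = np.empty_like(flat)
    plain[perm] = flat ^ keystream(flat.size, key[2], key[3])
    return plain.reshape(cipher.shape)

rng = np.random.default_rng(2)
imgs = [rng.integers(0, 256, (8, 8), dtype=np.uint8) for _ in range(3)]
restored = decrypt(encrypt(imgs))
```

Because the receiver regenerates both the permutation and the keystream from the key alone, nothing beyond the cipher images needs to be transmitted.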

  14. A new parallel algorithm for image matching based on entropy

    Institute of Scientific and Technical Information of China (English)

    董开坤; 胡铭曾

    2001-01-01

    Presents a new parallel image matching algorithm based on the concept of an entropy feature vector and suitable for SIMD computers. In comparison with other algorithms, it has the following advantages: (1) The spatial information of an image is appropriately introduced into the definition of image entropy. (2) A large number of multiplication operations are eliminated, thus the algorithm is sped up. (3) The shortcoming of having to do global calculation in the first instance is overcome. The paper concludes that the algorithm has very good locality and is suitable for parallel processing.
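
One way to realize an "entropy feature vector with spatial information" is to compute per-block histogram entropies, as in this sketch; the grid and bin counts are illustrative assumptions, and the paper's SIMD-parallel matching scheme is not reproduced.

```python
import numpy as np

def block_entropy_vector(img, grid=4, bins=16):
    """Entropy feature vector: the Shannon entropy of the intensity
    histogram of each block in a grid x grid partition, so the vector
    retains some spatial information about the image."""
    feats = []
    for rows in np.array_split(np.arange(img.shape[0]), grid):
        for cols in np.array_split(np.arange(img.shape[1]), grid):
            block = img[np.ix_(rows, cols)]
            p, _ = np.histogram(block, bins=bins, range=(0, 256))
            p = p[p > 0] / block.size           # drop empty bins, normalize
            feats.append(float(-(p * np.log2(p)).sum()))
    return np.array(feats)

flat_img = np.full((16, 16), 7, dtype=np.uint8)
v_flat = block_entropy_vector(flat_img)          # all-zero entropies
rng = np.random.default_rng(3)
v_noisy = block_entropy_vector(rng.integers(0, 256, (16, 16), dtype=np.uint8))
```

Two images can then be compared by, for example, the Euclidean distance between their entropy vectors, and each block's entropy can be computed independently on a parallel machine.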

  15. Image segmentation by using the localized subspace iteration algorithm

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    An image segmentation algorithm called "segmentation based on the localized subspace iterations" (SLSI) is proposed in this paper. The basic idea is to combine the strategies in the Ncut algorithm by Shi and Malik in 2000 and the LSI by E, Li and Lu in 2007. The LSI is applied to solve an eigenvalue problem associated with the affinity matrix of an image, which makes the overall algorithm linearly scaled. The choices of the partition number, the supports and weight functions in SLSI are discussed. Numerical experiments for real images show the applicability of the algorithm.

  16. A Review of Spaceborne SAR Algorithm for Image Formation

    Directory of Open Access Journals (Sweden)

    Li Chun-sheng

    2013-03-01

    Full Text Available This paper first reviews the history and trends in the development of spaceborne Synthetic Aperture Radar (SAR) satellite technology in American and European countries, and introduces basic information about the launched satellites and future satellite plans. The paper then summarizes and classifies the imaging algorithms of spaceborne SAR satellites, analyzes the advantages and disadvantages of each algorithm, and presents the scope and application status of each. It then elaborates on trends in SAR imaging algorithms, mainly introducing algorithms based on compressive sensing theory and new imaging modes, with simulation results also illustrated. Finally, the paper summarizes the development directions of spaceborne SAR imaging algorithms.

  17. Plenoptic Imaging for Three-Dimensional Particle Field Diagnostics.

    Energy Technology Data Exchange (ETDEWEB)

    Guildenbecher, Daniel Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hall, Elise Munz [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-06-01

    Plenoptic imaging is a promising emerging technology for single-camera, 3D diagnostics of particle fields. In this work, recent developments towards quantitative measurements of particle size, positions, and velocities are discussed. First, the technique is proven viable with measurements of the particle field generated by the impact of a water drop on a thin film of water. Next, well-controlled experiments are used to verify diagnostic uncertainty. Finally, an example is presented of 3D plenoptic imaging of a laboratory-scale, explosively generated fragment field.

  18. Comparison of Model-Based Segmentation Algorithms for Color Images.

    Science.gov (United States)

    1987-03-01

    image. Hunt and Kubler [Ref. 3] found that for image restoration, Karhunen-Loève transformation followed by single-channel image processing worked...Algorithm for Segmentation of Multichannel Images. M.S. Thesis, Naval Postgraduate School, Monterey, California, December 1993. 3. Hunt, B.R., Kubler, O.

  19. Automated spectral imaging for clinical diagnostics

    Science.gov (United States)

    Breneman, John; Heffelfinger, David M.; Pettipiece, Ken; Tsai, Chris; Eden, Peter; Greene, Richard A.; Sorensen, Karen J.; Stubblebine, Will; Witney, Frank

    1998-04-01

    Bio-Rad Laboratories supplies imaging equipment for many applications in the life sciences. As part of our effort to offer more flexibility to the investigator, we are developing a microscope-based imaging spectrometer for the automated detection and analysis of either conventionally or fluorescently labeled samples. Immediate applications will include the use of fluorescence in situ hybridization (FISH) technology. The field of cytogenetics has benefited greatly from the increased sensitivity of FISH, which produces simplified analysis of complex chromosomal rearrangements. FISH methods for identification lend themselves to automation more easily than the current cytogenetics industry standard of G-banding; however, the methods are complementary. Several technologies have been demonstrated successfully for analyzing the signals from labeled samples, including filter exchanging and interferometry. The detection system lends itself to other fluorescent applications including the display of labeled tissue sections, DNA chips, capillary electrophoresis or any other system using color as an event marker. Enhanced displays of conventionally stained specimens will also be possible.

  20. Diagnostic imaging of shoulder rotator cuff lesions

    OpenAIRE

    Nogueira-Barbosa Marcello Henrique; Volpon José Batista; Elias Jr Jorge; Muccillo Gerson

    2002-01-01

    Shoulder rotator cuff tendon tears were evaluated with ultrasonography (US) and magnetic resonance imaging (MRI). Surgical or arthroscopic correlation was available in 25 cases. Overall costs were also considered. Shoulder impingement syndrome diagnosis was made on a clinical basis. Surgery or arthroscopy was considered after conservative treatment had failed for 6 months, or when rotator cuff repair was indicated. Ultrasound was performed in 22 patients and MRI in 17 of the 25 patients. Sensi...

  1. Reduction of the inappropriate ICD therapies by implementing a new fuzzy logic-based diagnostic algorithm.

    Science.gov (United States)

    Lewandowski, Michał; Przybylski, Andrzej; Kuźmicz, Wiesław; Szwed, Hanna

    2013-09-01

    The aim of the study was to analyze the value of a completely new fuzzy logic-based detection algorithm (FA) in comparison with the arrhythmia classification algorithms used in existing ICDs, in order to demonstrate whether the rate of inappropriate therapies can be reduced. On the basis of an RR-interval database containing arrhythmia events and control recordings from ICD memory, a diagnostic algorithm was developed and tested by a computer program. This algorithm uses the same input signals as existing ICDs: the RR interval as the primary input variable and two variables derived from it, onset and stability. However, it uses 15 fuzzy rules instead of the fixed thresholds used in existing devices. The algorithm considers 6 diagnostic categories: (1) VF (ventricular fibrillation), (2) VT (ventricular tachycardia), (3) ST (sinus tachycardia), (4) DAI (artifacts and heart rhythm irregularities including extrasystoles and T-wave oversensing, TWOS), (5) ATF (atrial and supraventricular tachycardia or fibrillation), and (6) NT (sinus rhythm). This algorithm was tested on 172 RR recordings from different ICDs in the follow-up of 135 patients. All diagnostic categories of the algorithm were present in the analyzed recordings: VF (n = 35), VT (n = 48), ST (n = 14), DAI (n = 32), ATF (n = 18), NT (n = 25). Thirty-eight patients (31.4%) in the studied group received inappropriate ICD therapies. In all these cases the final diagnosis of the algorithm was correct (19 cases of artifacts, 11 of atrial fibrillation and 8 of ST), and implementation of the fuzzy-rules algorithm would have withheld the unnecessary therapies. The incidence of inappropriate therapies, 3 vs. 38 (the proposed algorithm vs. the ICD diagnosis, respectively), differed significantly. The fuzzy logic-based algorithm seems promising, and its implementation could diminish inappropriate ICD therapies. We found the FA useful for the correct diagnosis of sinus tachycardia, atrial fibrillation and artifacts in comparison with the tested ICDs.
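
A toy illustration of replacing fixed thresholds with fuzzy rules: triangular membership functions over the mean rate and an RR-stability score are combined with min/max inference. The four categories, inputs and thresholds here are invented for illustration only; the paper's detector uses 15 rules over RR interval, onset and stability across six categories.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify(rate_bpm, stability):
    """Toy fuzzy rhythm classifier.

    rate_bpm:  mean ventricular rate derived from RR intervals.
    stability: 0..1 score, 1 = perfectly regular RR intervals.
    """
    rules = {
        "NT": min(tri(rate_bpm, 40, 70, 110), stability),
        "ST": min(tri(rate_bpm, 90, 130, 170), stability),
        "VT": min(tri(rate_bpm, 150, 190, 240), stability),
        "VF": min(tri(rate_bpm, 200, 280, 400), 1.0 - stability),
    }
    # Winner-takes-all defuzzification: the rule with the highest firing strength
    return max(rules, key=rules.get)
```

Near a category boundary several rules fire with graded strength instead of a hard cut-off, which is the behaviour the abstract credits for fewer misclassifications.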

  2. An Automatic Image Inpainting Algorithm Based on FCM

    Directory of Open Access Journals (Sweden)

    Jiansheng Liu

    2014-01-01

    Full Text Available There are many existing image inpainting algorithms in which the repaired area must be manually determined by users. Aiming at this drawback of the traditional image inpainting algorithms, this paper proposes an automatic image inpainting algorithm which automatically identifies the repaired area by the fuzzy C-means (FCM) algorithm. The FCM algorithm classifies the image pixels into a number of categories according to the similarity principle, so that similar pixels are clustered into the same category as far as possible. According to the given gray values of the pixels to be inpainted, the nearest category is calculated and taken as the inpainting area, and the inpainting area is then restored by the TV model to realize automatic image inpainting.
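
The clustering step can be sketched with textbook fuzzy C-means on pixel gray values; the TV restoration step is omitted, and the quantile initialization and parameter choices are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def fcm(values, k=2, m=2.0, iters=30):
    """Textbook fuzzy C-means on 1-D samples (e.g. pixel gray values).

    Returns (cluster centers, membership matrix of shape (n, k)).
    Centers are initialized from data quantiles for determinism.
    """
    v = np.asarray(values, dtype=float)
    centers = np.quantile(v, np.linspace(0.05, 0.95, k))
    u = np.full((v.size, k), 1.0 / k)
    for _ in range(iters):
        d = np.abs(v[:, None] - centers[None, :]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))            # inverse-distance memberships
        u /= u.sum(axis=1, keepdims=True)
        um = u ** m
        centers = (um * v[:, None]).sum(axis=0) / um.sum(axis=0)
    return centers, u

# Two well-separated intensity groups: background ~0, foreground ~100
vals = np.concatenate([np.zeros(30), np.full(30, 100.0)])
centers, memberships = fcm(vals, k=2)
```

A pixel whose gray value gives its highest membership to the cluster nearest the provided inpainting gray level would then be assigned to the area to be repaired.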

  3. Impact of an intra-cycle motion correction algorithm on overall evaluability and diagnostic accuracy of computed tomography coronary angiography

    Energy Technology Data Exchange (ETDEWEB)

    Pontone, Gianluca; Bertella, Erika; Baggiano, Andrea; Mushtaq, Saima; Loguercio, Monica; Segurini, Chiara; Conte, Edoardo; Beltrama, Virginia; Annoni, Andrea; Formenti, Alberto; Petulla, Maria; Trabattoni, Daniela; Pepi, Mauro [Centro Cardiologico Monzino, IRCCS, Milan (Italy); Andreini, Daniele; Montorsi, Piero; Bartorelli, Antonio L. [Centro Cardiologico Monzino, IRCCS, Milan (Italy); University of Milan, Department of Cardiovascular Sciences and Community Health, Milan (Italy); Guaricci, Andrea I. [University of Foggia, Department of Cardiology, Foggia (Italy)

    2016-01-15

    The aim of this study was to evaluate the impact of a novel intra-cycle motion correction algorithm (MCA) on the overall evaluability and diagnostic accuracy of cardiac computed tomography coronary angiography (CCT). From a cohort of 900 consecutive patients referred for CCT for suspected coronary artery disease (CAD), we enrolled 160 (18%) patients (mean age 65.3 ± 11.7 years, 101 male) with at least one coronary segment classified as non-evaluable due to motion artefacts. The CCT data sets were evaluated using a standard reconstruction algorithm (SRA) and the MCA, and compared in terms of subjective image quality, evaluability and diagnostic accuracy. The mean heart rate during the examination was 68.3 ± 9.4 bpm. The MCA showed a higher Likert score (3.1 ± 0.9 vs. 2.5 ± 1.1, p < 0.001) and evaluability (94% vs. 79%, p < 0.001) than the SRA. In a 45-patient subgroup studied by clinically indicated invasive coronary angiography, specificity, positive predictive value and accuracy were higher with the MCA than the SRA in segment-based and vessel-based models, respectively (87% vs. 73%, 50% vs. 34%, 85% vs. 73%, p < 0.001, and 62% vs. 28%, 66% vs. 51% and 75% vs. 57%, p < 0.001). In a patient-based model, the MCA showed higher accuracy than the SRA (93% vs. 76%, p < 0.05). The MCA can significantly improve subjective image quality, overall evaluability and diagnostic accuracy of CCT. (orig.)
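
The per-segment accuracy measures quoted above follow directly from the 2 × 2 confusion counts; a minimal sketch with made-up counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic accuracy measures from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Illustrative per-segment counts only
m = diagnostic_metrics(tp=80, fp=20, tn=90, fn=10)
```

Fewer motion-degraded false positives raises fp-dependent measures (specificity, PPV, accuracy) first, which matches the pattern of improvements the abstract reports.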

  4. A review of diagnostic imaging of snakes and lizards.

    Science.gov (United States)

    Banzato, T; Hellebuyck, T; Van Caelenberg, A; Saunders, J H; Zotti, A

    2013-07-13

Snakes and lizards are considered 'stoic' animals and often show only non-specific signs of illness. Consequently, diagnostic imaging, along with clinical examination and laboratory tests, is gaining importance in making a final diagnosis and establishing a correct therapy. The large number of captive snake and lizard species commonly kept as pets, together with the high inter- and intraspecific morphological variability that is innate in these animals, make the analysis of diagnostic images challenging for the veterinary practitioner. Moreover, a thorough knowledge of the anatomy, physiology and pathology of the species that are the object of clinical investigation is mandatory for the correct interpretation of diagnostic images. Despite the large amount of clinical and scientific work carried out in the past two decades, the radiographic features of snakes and lizards have not undergone systematic description, and therefore veterinarians often have to rely mostly on anatomical studies rather than radiological literature. The aim of this paper is to review the most commonly used diagnostic imaging modalities, as well as to provide an overview of the available international original studies and scientific reviews describing the normal and pathological imaging features in snakes and lizards.

  5. AN IMPROVED FUZZY CLUSTERING ALGORITHM FOR MICROARRAY IMAGE SPOTS SEGMENTATION

    Directory of Open Access Journals (Sweden)

    V.G. Biju

    2015-11-01

Full Text Available An automatic cDNA microarray image processing using an improved fuzzy clustering algorithm is presented in this paper. The spot segmentation algorithm proposed uses the gridding technique developed by the authors earlier for finding the co-ordinates of each spot in an image. Automatic cropping of spots from the microarray image is done using these co-ordinates. The present paper proposes an improved fuzzy clustering algorithm, Possibility fuzzy local information c-means (PFLICM), to segment the spot foreground (FG) from background (BG). PFLICM improves the fuzzy local information c-means (FLICM) algorithm by incorporating the typicality of a pixel along with gray level information and local spatial information. The performance of the algorithm is validated using a set of simulated cDNA microarray images added with different levels of AWGN noise. The strength of the algorithm is tested by computing parameters such as the Segmentation matching factor (SMF), Probability of error (pe), Discrepancy distance (D) and Normalized mean square error (NMSE). The SMF value obtained for the PFLICM algorithm shows an improvement of 0.9 % and 0.7 % for high-noise and low-noise microarray images respectively compared to the FLICM algorithm. The PFLICM algorithm is also applied on real microarray images and gene expression values are computed.
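The abstract names PFLICM but gives no implementation details. As a rough, hypothetical sketch of the classical fuzzy c-means baseline that FLICM and PFLICM extend (the typicality and local-spatial terms are not reproduced here, and all names, the toy image and the parameters below are illustrative), foreground/background spot separation on pixel intensities can look like this:

```python
import numpy as np

def fcm_intensity(pixels, c=2, m=2.0, iters=50, seed=0):
    """Standard fuzzy c-means on a 1-D array of pixel intensities.

    Returns (centers, memberships). PFLICM extends this baseline with
    local spatial information and typicality terms (not shown here).
    """
    rng = np.random.default_rng(seed)
    x = pixels.astype(float).ravel()
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                       # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)    # fuzzily weighted cluster centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1))            # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u

# Toy "spot" image: bright foreground disc on a dark noisy background.
img = np.zeros((32, 32))
yy, xx = np.mgrid[:32, :32]
img[(yy - 16) ** 2 + (xx - 16) ** 2 < 64] = 200.0
img += np.random.default_rng(1).normal(0.0, 5.0, img.shape)

centers, u = fcm_intensity(img, c=2)
# Foreground mask: pixels whose strongest membership is the brighter center.
fg = u.argmax(axis=0).reshape(img.shape) == centers.argmax()
```

With two clusters, the pixels whose highest membership belongs to the brighter center form the foreground spot mask.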

  6. Non-invasive diagnostic imaging of colorectal liver metastases

    Institute of Scientific and Technical Information of China (English)

Pier Paolo Mainenti; Federica Romano; Laura Pizzuti; Sabrina Segreto; Giovanni Storto; Lorenzo Mannelli; Massimo Imbriaco; Luigi Camera; Simone Maurea

    2015-01-01

Colorectal cancer is one of the few malignant tumors in which synchronous or metachronous liver metastases [colorectal liver metastases (CRLMs)] may be treated with surgery. It has been demonstrated that resection of CRLMs improves the long-term prognosis. On the other hand, patients with un-resectable CRLMs may benefit from chemotherapy alone or in addition to liver-directed therapies. The choice of the most appropriate therapeutic management of CRLMs depends mostly on the diagnostic imaging. Nowadays, multiple non-invasive imaging modalities are available and these have a pivotal role in the workup of patients with CRLMs. Although extensive research has been performed with regard to the diagnostic performance of ultrasonography, computed tomography, positron emission tomography and magnetic resonance for the detection of CRLMs, the optimal imaging strategies for staging and follow-up are still to be established. This is largely due to the progressive technological and pharmacological advances which are constantly improving the accuracy of each imaging modality. This review describes the non-invasive imaging approaches to CRLMs, reporting the technical features, the clinical indications, the advantages and the potential limitations of each modality, as well as including some information on the development of new imaging modalities, the role of new contrast media and the feasibility of using parametric image analysis as a diagnostic marker of the presence of CRLMs.

  7. Non-invasive diagnostic imaging of colorectal liver metastases.

    Science.gov (United States)

    Mainenti, Pier Paolo; Romano, Federica; Pizzuti, Laura; Segreto, Sabrina; Storto, Giovanni; Mannelli, Lorenzo; Imbriaco, Massimo; Camera, Luigi; Maurea, Simone

    2015-07-28

Colorectal cancer is one of the few malignant tumors in which synchronous or metachronous liver metastases [colorectal liver metastases (CRLMs)] may be treated with surgery. It has been demonstrated that resection of CRLMs improves the long-term prognosis. On the other hand, patients with un-resectable CRLMs may benefit from chemotherapy alone or in addition to liver-directed therapies. The choice of the most appropriate therapeutic management of CRLMs depends mostly on the diagnostic imaging. Nowadays, multiple non-invasive imaging modalities are available and these have a pivotal role in the workup of patients with CRLMs. Although extensive research has been performed with regard to the diagnostic performance of ultrasonography, computed tomography, positron emission tomography and magnetic resonance for the detection of CRLMs, the optimal imaging strategies for staging and follow-up are still to be established. This is largely due to the progressive technological and pharmacological advances which are constantly improving the accuracy of each imaging modality. This review describes the non-invasive imaging approaches to CRLMs, reporting the technical features, the clinical indications, the advantages and the potential limitations of each modality, as well as including some information on the development of new imaging modalities, the role of new contrast media and the feasibility of using parametric image analysis as a diagnostic marker of the presence of CRLMs.

  8. DIAGNOSTIC IMAGING OF THE THROWING ATHLETE’S SHOULDER

    Science.gov (United States)

    Hazle, Charles

    2013-01-01

The diagnostic capabilities of advanced imaging have increasingly enabled clinicians to delineate between structural alterations and injuries more efficiently than ever before. These impressive gains have unfortunately begun to foster a reliance on imaging at the expense of quality in the clinical examination. Ideally, imaging of the shoulder complex is performed to confirm the provisional diagnosis developed from the history and clinical exam, rather than to establish it. This clinical commentary will provide the framework for both basic and advanced uses of imaging, as well as discussion of evolving modalities. Level of Evidence: 5 PMID:24175143

  9. High performance deformable image registration algorithms for manycore processors

    CERN Document Server

    Shackleford, James; Sharp, Gregory

    2013-01-01

High Performance Deformable Image Registration Algorithms for Manycore Processors develops highly data-parallel image registration algorithms suitable for use on modern multi-core architectures, including graphics processing units (GPUs). Focusing on deformable registration, we show how to develop data-parallel versions of the registration algorithm suitable for execution on the GPU. Image registration is the process of aligning two or more images into a common coordinate frame and is a fundamental step to be able to compare or fuse data obtained from different sensor measurements.

  10. A Sparse Bayesian Learning Algorithm for Longitudinal Image Data.

    Science.gov (United States)

    Sabuncu, Mert R

    2015-10-01

    Longitudinal imaging studies, where serial (multiple) scans are collected on each individual, are becoming increasingly widespread. The field of machine learning has in general neglected the longitudinal design, since many algorithms are built on the assumption that each datapoint is an independent sample. Thus, the application of general purpose machine learning tools to longitudinal image data can be sub-optimal. Here, we present a novel machine learning algorithm designed to handle longitudinal image datasets. Our approach builds on a sparse Bayesian image-based prediction algorithm. Our empirical results demonstrate that the proposed method can offer a significant boost in prediction performance with longitudinal clinical data.

  11. Fast image matching algorithm based on projection characteristics

    Science.gov (United States)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional information of the image into one-dimensional form, and then matches and identifies through one-dimensional correlation; moreover, because normalization is performed, matching remains correct even when the image brightness or signal amplitude increases in proportion. Experimental results show that the projection-based image registration method proposed in this article can greatly improve matching speed while preserving matching accuracy.
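The abstract describes the method only in prose. A minimal sketch of the idea, with all names and the toy data invented for illustration, is: project the image and the template to 1-D row and column profiles, then find each offset by 1-D normalized correlation, which is unaffected by a proportional change in brightness.

```python
import numpy as np

def projection_match(image, template):
    """Find the (row, col) offset of `template` in `image` by matching
    1-D projection profiles instead of doing full 2-D correlation."""
    def best_offset(sig, tpl):
        scores = []
        for off in range(sig.size - tpl.size + 1):
            win = sig[off:off + tpl.size]
            denom = np.linalg.norm(win) * np.linalg.norm(tpl)
            # Normalized 1-D correlation: robust to a proportional
            # change in brightness or signal amplitude.
            scores.append(win @ tpl / denom if denom else 0.0)
        return int(np.argmax(scores))

    img, tpl = image.astype(float), template.astype(float)
    row = best_offset(img.sum(axis=1), tpl.sum(axis=1))  # row projections
    col = best_offset(img.sum(axis=0), tpl.sum(axis=0))  # column projections
    return row, col

# Toy check: embed the template at (row=5, col=9) and recover its offset.
tpl = np.random.default_rng(0).random((8, 8)) * 100.0
img = np.zeros((32, 32))
img[5:13, 9:17] = tpl
print(projection_match(img, tpl))  # → (5, 9)
```

Because both correlations are normalized, `projection_match(img * 2.5, tpl)` returns the same offset, illustrating the brightness-scaling invariance the abstract claims.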

  12. The Noise Clinic: a Blind Image Denoising Algorithm

    Directory of Open Access Journals (Sweden)

    Marc Lebrun

    2015-01-01

Full Text Available This paper describes the complete implementation of a blind image denoising algorithm that takes any digital image as input. In a first step the algorithm estimates a Signal and Frequency Dependent (SFD) noise model. In a second step, the image is denoised by a multiscale adaptation of the Non-local Bayes denoising method. We focus here on a careful analysis of the denoising step and present a detailed discussion of the influence of its parameters. Extensive commented tests of the blind denoising algorithm are presented, on real JPEG images and scans of old photographs.
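The Noise Clinic estimates a full signal- and frequency-dependent model and then applies multiscale Non-local Bayes, which is far more elaborate than can be shown here. Purely as an illustration of the first step's idea (not the paper's actual estimator; the high-pass stencil and bin count are assumptions), a crude signal-dependent noise estimate can be obtained from the robust spread of a high-pass residual within each intensity bin:

```python
import numpy as np

def signal_dependent_noise(img, bins=8):
    """Estimate noise standard deviation as a function of intensity level.

    A 5-point Laplacian stencil suppresses smooth image content; dividing
    by sqrt(20) makes the residual of i.i.d. noise keep the noise's own
    std (variance of 4c - 4 neighbours is 20 sigma^2). The robust MAD
    spread of the residual per intensity bin then approximates sigma(level).
    """
    lap = (4 * img[1:-1, 1:-1] - img[:-2, 1:-1] - img[2:, 1:-1]
           - img[1:-1, :-2] - img[1:-1, 2:]) / np.sqrt(20.0)
    level = img[1:-1, 1:-1]
    edges = np.linspace(level.min(), level.max() + 1e-9, bins + 1)
    sigmas = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        r = lap[(level >= lo) & (level < hi)]
        sigmas.append(1.4826 * np.median(np.abs(r)) if r.size else 0.0)
    return np.array(sigmas)

# Ramp image plus white noise of std 5: every bin should report about 5.
ramp = np.add.outer(np.zeros(64), np.linspace(0.0, 200.0, 64))
noisy = ramp + np.random.default_rng(0).normal(0.0, 5.0, ramp.shape)
sigmas = signal_dependent_noise(noisy)
```

For signal-dependent (e.g. Poisson-like) noise, the per-bin estimates would rise with intensity, which is the dependence the SFD model captures.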

  13. Advances in the diagnostic imaging of pheochromocytomas

    Directory of Open Access Journals (Sweden)

    Forssell-Aronsson E

    2011-05-01

Full Text Available Eva Forssell-Aronsson1, Emil Schüler1, Håkan Ahlman2; 1Department of Radiation Physics, 2Department of Surgery, Lundberg Laboratory of Cancer Research, Institute of Clinical Sciences, Sahlgrenska Academy at the University of Gothenburg, Sahlgrenska University Hospital, Gothenburg, Sweden. Abstract: Pheochromocytomas (PCs) and paragangliomas (PGLs) are routinely localized by computed tomography (CT), magnetic resonance imaging (MRI), and metaiodobenzylguanidine (MIBG) scintigraphy. CT can identify tumors with high sensitivity but rather low specificity. MRI has higher sensitivity and specificity than CT and is superior for detecting extra-adrenal disease. Radioiodinated MIBG scintigraphy has been used for more than 30 years and is based on MIBG uptake via the norepinephrine transporter on the cell membrane. The technique is very useful for whole-body studies in case of multiple primary tumors or metastases. Tumors with sole production of dopamine usually cannot be visualized with MIBG and may require positron emission tomographic (PET) studies with 18F-labeled radiotracers. Somatostatin receptor scintigraphy (SRS) using the radiolabeled somatostatin analog octreotide (based on the expression of the somatostatin receptors 2 and 5 by the tumor) can demonstrate PGL or metastases not visualized by MIBG. In this article, we review the use of MIBG scintigraphy to diagnose PC/PGL and compare its sensitivity and specificity with that of CT and MRI. We also describe the recent SRS and PET techniques and review the latest results of clinical studies comparing these imaging modalities. Future perspectives of functional imaging modalities for PC/PGL are finally presented. Keywords: MIBG, scintigraphy, pheochromocytoma, paraganglioma, PET

  14. Three-dimensional imaging reconstruction algorithm of gated-viewing laser imaging with compressive sensing.

    Science.gov (United States)

    Li, Li; Xiao, Wei; Jian, Weijian

    2014-11-20

Three-dimensional (3D) laser imaging combined with compressive sensing (CS) has the advantages of lower power consumption and fewer imaging sensors; however, it imposes a heavy computational burden on the subsequent processing devices. In this paper we propose a fast 3D imaging reconstruction algorithm to deal with time-slice images sampled by single-pixel detectors. The algorithm performs 3D imaging reconstruction before CS recovery, thus saving much of the runtime of CS recovery. Several experiments were conducted to verify the performance of the algorithm. Simulation results demonstrate that the proposed algorithm outperforms an existing algorithm in terms of efficiency.

  15. Convergence of iterative image reconstruction algorithms for Digital Breast Tomosynthesis

    DEFF Research Database (Denmark)

    Sidky, Emil; Jørgensen, Jakob Heide; Pan, Xiaochuan

    2012-01-01

Most iterative image reconstruction algorithms are based on some form of optimization, such as minimization of a data-fidelity term plus an image regularizing penalty term. While achieving the solution of these optimization problems may not directly be clinically relevant, accurate optimization solutions can aid in iterative image reconstruction algorithm design. This issue is particularly acute for iterative image reconstruction in Digital Breast Tomosynthesis (DBT), where the corresponding data model is particularly poorly conditioned. The impact of this poor conditioning is that iterative ... (Math. Imag. Vol. 40, pgs 120-145) and apply it to iterative image reconstruction in DBT.

  16. 76 FR 77834 - Scientific Information Request on Intravascular Diagnostic and Imaging Medical Devices

    Science.gov (United States)

    2011-12-14

    ... Intravascular Diagnostic and Imaging Medical Devices AGENCY: Agency for Healthcare Research and Quality (AHRQ... intravascular diagnostic and imaging medical devices, including: Fractional Flow Reserve (FFR), Coronary Flow... Resonance Imaging (MRI), Elastrography, and Thermography. Scientific information is being solicited to...

  17. Diagnostic imaging of acute pulmonary embolism.

    Science.gov (United States)

    Christiansen, F

    1997-01-01

The common strategy of combining clinical information, lung scintigraphy and pulmonary angiography in the diagnosis of acute pulmonary embolism (PE) has many limitations in clinical use. The major causes are that pulmonary angiography and lung scintigraphy are not universally available, and that pulmonary angiography is very expensive. The purpose of this thesis was to analyse different aspects of validity in regard to lung scintigraphy, pulmonary angiography, spiral CT, and ultrasound of the legs, with the subsequent intention of discussing new diagnostic strategies. Observer variations in lung scintigraphy interpretation when applying the PIOPED criteria were tested in 2 studies with 2 and 3 observers respectively, and expressed as kappa values. The ability to improve agreement in lung scintigraphy interpretation was tested by training 2 observers from different hospitals. The impact of 3 observers' variations in lung scintigraphy interpretation, when compared to pulmonary angiography, was tested by comparing the ROC areas of the observers. The value of combining subjectively derived numerical probabilities and the PIOPED categorical probabilities in lung scintigraphy reporting was compared to using the PIOPED categorization only, again by comparing ROC areas. The sensitivity and specificity of detecting an embolic source in the deep veins of the legs by ultrasound, as a sign of PE when lung scintigraphy is inconclusive, were tested by comparison with pulmonary angiography. The sensitivity and specificity of spiral CT were likewise tested by comparison with pulmonary angiography. The inter- and intra-observer kappa values were in the moderate and fair range. It was not possible to achieve better kappa values after training. Although observer variations were substantial, the accuracy did not differ significantly between the 3 observers.
Incorporating subjectively derived probabilities into lung scan reporting could not reduce

  18. Diagnostic value of imaging in infective endocarditis: a systematic review.

    Science.gov (United States)

    Gomes, Anna; Glaudemans, Andor W J M; Touw, Daan J; van Melle, Joost P; Willems, Tineke P; Maass, Alexander H; Natour, Ehsan; Prakken, Niek H J; Borra, Ronald J H; van Geel, Peter Paul; Slart, Riemer H J A; van Assen, Sander; Sinha, Bhanu

    2017-01-01

    Sensitivity and specificity of the modified Duke criteria for native valve endocarditis are both suboptimal, at approximately 80%. Diagnostic accuracy for intracardiac prosthetic material-related infection is even lower. Non-invasive imaging modalities could potentially improve diagnosis of infective endocarditis; however, their diagnostic value is unclear. We did a systematic literature review to critically appraise the evidence for the diagnostic performance of these imaging modalities, according to PRISMA and GRADE criteria. We searched PubMed, Embase, and Cochrane databases. 31 studies were included that presented original data on the performance of electrocardiogram (ECG)-gated multidetector CT angiography (MDCTA), ECG-gated MRI, (18)F-fluorodeoxyglucose ((18)F-FDG) PET/CT, and leucocyte scintigraphy in diagnosis of native valve endocarditis, intracardiac prosthetic material-related infection, and extracardiac foci in adults. We consistently found positive albeit weak evidence for the diagnostic benefit of (18)F-FDG PET/CT and MDCTA. We conclude that additional imaging techniques should be considered if infective endocarditis is suspected. We propose an evidence-based diagnostic work-up for infective endocarditis including these non-invasive techniques. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. A beamforming algorithm for bistatic SAR image formation.

    Energy Technology Data Exchange (ETDEWEB)

    Yocky, David Alan; Wahl, Daniel Eugene; Jakowatz, Charles V., Jr.

    2010-03-01

Beamforming is a methodology for collection-mode-independent SAR image formation. It is essentially equivalent to backprojection. The authors have in previous papers developed this idea and discussed the advantages and disadvantages of the approach to monostatic SAR image formation vis-à-vis the more standard and time-tested polar formatting algorithm (PFA). In this paper we show that beamforming for bistatic SAR imaging leads again to a very simple image formation algorithm that requires a minimal number of lines of code and that allows the image to be directly formed onto a three-dimensional surface model, thus automatically creating an orthorectified image. The same disadvantage of beamforming applied to monostatic SAR imaging applies to the bistatic case, however, in that the execution time for the beamforming algorithm is quite long compared to that of PFA. Fast versions of beamforming do exist to help alleviate this issue. Results of image reconstructions from phase history data are presented.

  20. A beamforming algorithm for bistatic SAR image formation

    Science.gov (United States)

    Jakowatz, Charles V., Jr.; Wahl, Daniel E.; Yocky, David A.

    2010-04-01

    Beamforming is a methodology for collection-mode-independent SAR image formation. It is essentially equivalent to backprojection. The authors have in previous papers developed this idea and discussed the advantages and disadvantages of the approach to monostatic SAR image formation vis-à-vis the more standard and time-tested polar formatting algorithm (PFA). In this paper we show that beamforming for bistatic SAR imaging leads again to a very simple image formation algorithm that requires a minimal number of lines of code and that allows the image to be directly formed onto a three-dimensional surface model, thus automatically creating an orthorectified image. The same disadvantage of beamforming applied to monostatic SAR imaging applies to the bistatic case, however, in that the execution time for the beamforming algorithm is quite long compared to that of PFA. Fast versions of beamforming do exist to help alleviate this issue. Results of image reconstructions from phase history data are presented.
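Since both records describe the same algorithm, a compact sketch may help make it concrete: for each pixel of a chosen (possibly 3-D) surface grid, sum the range-compressed pulse samples taken at that pixel's bistatic range (transmitter-to-pixel plus pixel-to-receiver) after removing the carrier phase. The geometry, carrier frequency, and point-target simulation below are illustrative assumptions, not the papers' data:

```python
import numpy as np

C = 3e8  # speed of light, m/s

def bistatic_backprojection(pulses, tx_pos, rx_pos, ranges, grid_pts, fc):
    """Beamforming (backprojection) image formation for bistatic SAR.

    pulses   : (n_pulse, n_range) range-compressed complex data
    tx_pos   : (n_pulse, 3) transmitter position per pulse
    rx_pos   : (n_pulse, 3) receiver position per pulse
    ranges   : (n_range,) bistatic-range axis of the compressed data
    grid_pts : (n_pix, 3) surface points to image onto (orthorectified output)
    """
    img = np.zeros(grid_pts.shape[0], dtype=complex)
    for p in range(pulses.shape[0]):
        # Bistatic range: transmitter -> pixel -> receiver.
        r = (np.linalg.norm(grid_pts - tx_pos[p], axis=1)
             + np.linalg.norm(grid_pts - rx_pos[p], axis=1))
        samp = (np.interp(r, ranges, pulses[p].real)
                + 1j * np.interp(r, ranges, pulses[p].imag))
        img += samp * np.exp(2j * np.pi * fc * r / C)  # re-phase, then sum
    return np.abs(img)

# Point target at the origin, straight-line transmitter, fixed receiver.
fc = 1.0e9
tx = np.stack([np.linspace(-50, 50, 41),
               np.full(41, -100.0), np.full(41, 50.0)], axis=1)
rx = np.tile(np.array([0.0, -100.0, 0.0]), (41, 1))
ranges = np.arange(200.0, 240.0, 0.5)
r_t = np.linalg.norm(tx, axis=1) + 100.0        # true bistatic range per pulse
pulses = (np.sinc((ranges[None, :] - r_t[:, None]) / 0.5)
          * np.exp(-2j * np.pi * fc * r_t[:, None] / C))
grid = np.array([[0.0, 0.0, 0.0], [6.0, 0.0, 0.0],
                 [0.0, 6.0, 0.0], [3.0, 3.0, 0.0]])
img = bistatic_backprojection(pulses, tx, rx, ranges, grid, fc)
```

The contributions add coherently only at the true target position, which is why the peak of `img` falls on the first grid point; the per-pulse loop is also what makes plain backprojection slow compared with PFA, as the abstract notes.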

  1. CS-based fast ultrasound imaging with improved FISTA algorithm

    Science.gov (United States)

    Lin, Jie; He, Yugao; Shi, Guangming; Han, Tingyu

    2015-08-01

In an ultrasound imaging system, wave emission and data acquisition are time consuming, which can be addressed by adopting the plane wave as the transmitted signal and the compressed sensing (CS) theory for data acquisition and image reconstruction. To overcome the very high computational complexity caused by introducing CS into ultrasound imaging, in this paper we propose an improvement of the fast iterative shrinkage-thresholding algorithm (FISTA) to achieve fast reconstruction of the ultrasound image, in which the step-size parameter is modified at each iteration. Further, a GPU strategy is designed for the proposed algorithm to guarantee real-time implementation of imaging. Simulation results show that the GPU-based image reconstruction algorithm can achieve fast ultrasound imaging without degrading image quality.
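The abstract does not specify the modified step-size rule, so the sketch below shows only the textbook FISTA iteration with the fixed step 1/L, applied to a generic synthetic sparse-recovery problem standing in for CS ultrasound reconstruction (all data and parameters are invented for illustration):

```python
import numpy as np

def fista(A, b, lam, iters=200):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Uses the fixed step 1/L with L = ||A||_2^2 (the textbook choice);
    the paper's variant instead adapts the step size per iteration.
    """
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    soft = lambda v, th: np.sign(v) * np.maximum(np.abs(v) - th, 0.0)
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        x_new = soft(y - grad / L, lam / L)  # proximal gradient step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + (t - 1) / t_new * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

# Recover a sparse "image" vector from compressive measurements.
rng = np.random.default_rng(0)
A = rng.normal(size=(60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[[10, 50, 120]] = [3.0, -2.0, 4.0]
b = A @ x_true
x_hat = fista(A, b, lam=0.01)
```

The momentum step over the plain iterative shrinkage update is what gives FISTA its O(1/k^2) convergence rate, and the step-size choice inside the loop is exactly the parameter the paper reports tuning.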

  2. Blind Source Separation Algorithms for PSF Subtraction from Direct Imaging

    Science.gov (United States)

Shapiro, Jacob; Ranganathan, Nikhil; Savransky, Dmitry; Ruffio, Jean-Baptiste; Macintosh, Bruce; GPIES Team

    2017-01-01

The principal difficulty with detecting planets via direct imaging is that the target signal is similar in magnitude to, or fainter than, the noise sources in the image. To compensate for this, several methods exist to subtract the PSF of the host star and other confounding noise sources. One of the most effective methods is Karhunen-Loève Image Processing (KLIP). The core algorithm within KLIP is Principal Component Analysis, which is a member of a class of algorithms called Blind Source Separation (BSS). We examine three other BSS algorithms that may potentially also be used for PSF subtraction: Independent Component Analysis, Stationary Subspace Analysis, and Common Spatial Pattern Filtering. The underlying principles of each of the algorithms are discussed, as well as the processing steps needed to achieve PSF subtraction. The algorithms are examined both as primary PSF subtraction techniques and as additional postprocessing steps used with KLIP. These algorithms have been used on data from the Gemini Planet Imager, analyzing images of β Pic b. To build a reference library, both Angular Differential Imaging and Spectral Differential Imaging were used. For comparison with KLIP, three major metrics are examined: computation time, signal-to-noise ratio, and astrometric and photometric biases in different image regimes (e.g., speckle-dominated compared to Poisson-noise dominated). Preliminary results indicate that these BSS algorithms improve performance when used as an enhancement for KLIP, and that they can achieve similar SNR when used as the primary method of PSF subtraction.
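To make the PCA/KLIP baseline mentioned in the abstract concrete, here is a minimal, synthetic PCA-style PSF subtraction sketch (not GPI pipeline code; the reference library and "planet" below are fabricated): the science frame is projected onto the leading principal components of a reference library and the reconstruction is subtracted, which removes the correlated stellar pattern while leaving most of the planet signal.

```python
import numpy as np

def pca_psf_subtract(science, references, k=5):
    """Subtract the stellar PSF from `science` using a PCA (KLIP-style) model.

    science    : (n_pix,) flattened science frame
    references : (n_ref, n_pix) reference library (e.g. built via ADI/SDI)
    Projects the science frame onto the first k principal components of
    the reference set and subtracts that reconstruction.
    """
    mean = references.mean(axis=0)
    R = references - mean
    # Principal components via SVD of the mean-subtracted library.
    _, _, Vt = np.linalg.svd(R, full_matrices=False)
    Z = Vt[:k]                          # (k, n_pix) eigen-images
    s = science - mean
    model = Z.T @ (Z @ s)               # projection onto the KL basis
    return s - model

# Synthetic demo: 20 reference frames spanned by 3 speckle patterns.
rng = np.random.default_rng(2)
B = rng.normal(size=(3, 100))                       # stellar speckle patterns
refs = rng.normal(size=(20, 3)) @ B + 0.01 * rng.normal(size=(20, 100))
sci = np.array([1.0, -0.5, 2.0]) @ B                # same patterns, new weights
sci[7] += 0.5                                       # faint planet at pixel 7
resid = pca_psf_subtract(sci, refs, k=3)
```

After subtraction the residual is dominated by the planet pixel; choosing `k` trades stellar-pattern suppression against planet self-subtraction, which is the bias the abstract's astrometric/photometric metrics quantify.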

  3. Cerebrovascular diagnostics - Imaging; Zerebrale Gefaessdiagnostik - Bildgebung

    Energy Technology Data Exchange (ETDEWEB)

    Roth, C. [Universitaetsklinikum des Saarlandes, Klinik fuer Diagnostische und Interventionelle Neuroradiologie, Homburg (Germany)

    2012-12-15

    Imaging of the cerebral vasculature relies mostly on computed tomography angiography (CTA), magnetic resonance angiography (MRA) and digital subtraction angiography (DSA). Although DSA is still the gold standard, many questions can be answered with CTA and/or MRA thanks to recent technological advances. The following article describes the advantages and disadvantages of these techniques with regard to different questions. Basic principles regarding the different techniques are explained. (orig.) [German] Die Bildgebung der zerebralen Gefaesse stuetzt sich im Wesentlichen auf die CT-Angiographie (CTA), MR-Angiographie (MRA) und die digitale Subtraktionsangiographie (DSA). Obwohl die DSA nach wie vor als Goldstandard gilt, lassen sich durch die technischen Neuerungen der Schnittbilddiagnostik viele Fragestellungen mithilfe von CTA und MR-A beantworten. Im nachfolgenden Artikel werden im Hinblick auf verschiedene Fragestellungen Vor- und Nachteile der einzelnen Verfahren aufgefuehrt sowie Grundlagen zu den einzelnen Techniken erlaeutert. (orig.)

  4. Diagnostic imaging advances in murine models of colitis.

    Science.gov (United States)

    Brückner, Markus; Lenz, Philipp; Mücke, Marcus M; Gohar, Faekah; Willeke, Peter; Domagk, Dirk; Bettenworth, Dominik

    2016-01-21

Inflammatory bowel diseases (IBD) such as Crohn's disease and ulcerative colitis are chronic-remittent inflammatory disorders of the gastrointestinal tract that still evoke challenging clinical diagnostic and therapeutic situations. Murine models of experimental colitis are a vital component of research into human IBD, concerning questions of its complex pathogenesis or the evaluation of potential new drugs. To the present day, monitoring the course of colitis with classical parameters, such as histological tissue alterations or analysis of mucosal cytokine/chemokine expression, often requires euthanasia of the animals. Thanks to recent advances, revolutionary non-invasive imaging techniques for in vivo murine colitis diagnostics are increasingly available. These novel and emerging imaging techniques not only allow direct visualization of intestinal inflammation, but also enable molecular imaging and targeting of specific alterations of the inflamed murine mucosa. For the first time, in vivo imaging techniques allow longitudinal examinations and evaluation of intra-individual therapeutic response. This review discusses the latest developments in the different fields of ultrasound, molecularly targeted contrast agent ultrasound, fluorescence endoscopy, and confocal laser endomicroscopy, as well as tomographic imaging with magnetic resonance imaging, computed tomography and fluorescence-mediated tomography, discussing their individual limitations and potential future diagnostic applications in the management of human patients with IBD.

  5. Pixel Intensity Clustering Algorithm for Multilevel Image Segmentation

    Directory of Open Access Journals (Sweden)

    Oludayo O. Olugbara

    2015-01-01

Full Text Available Image segmentation is an important problem that has received significant attention in the literature. Over the last few decades, many algorithms have been developed to solve the image segmentation problem; prominent among these are the thresholding algorithms. However, the computational time complexity of thresholding increases exponentially with the number of desired thresholds. A wealth of alternative algorithms, notably those based on particle swarm optimization and evolutionary metaheuristics, has been proposed to tackle the intrinsic challenges of thresholding. In addition, clustering-based algorithms have been developed as multidimensional extensions of thresholding. While these algorithms have demonstrated successful results for fewer thresholds, their computational costs for a large number of thresholds are still a limiting factor. We propose a new clustering algorithm based on linear partitioning of the pixel intensity set and a between-cluster variance criterion function for multilevel image segmentation. The results of testing the proposed algorithm on real images from the Berkeley Segmentation Dataset and Benchmark show that the algorithm is comparable with state-of-the-art multilevel segmentation algorithms and consistently produces high-quality results. The attractive properties of the algorithm are its simplicity, generalization to a large number of clusters, and computational cost effectiveness.
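The between-cluster variance criterion the abstract names is the same one classical Otsu thresholding maximizes; as a self-contained single-threshold illustration (the paper's algorithm generalizes this to many clusters, which is not reproduced here), the criterion can be computed directly from the histogram:

```python
import numpy as np

def otsu_threshold(pixels, levels=256):
    """Single-threshold segmentation by maximizing between-cluster variance,
    the criterion function the clustering algorithm above also optimizes."""
    hist, edges = np.histogram(pixels, bins=levels)
    p = hist / hist.sum()
    w = np.cumsum(p)                         # class-0 probability per cut
    mu = np.cumsum(p * np.arange(levels))    # class-0 partial mean (bin units)
    mu_t = mu[-1]                            # total mean
    with np.errstate(divide="ignore", invalid="ignore"):
        # Between-class variance: (mu_T*w - mu)^2 / (w * (1 - w)).
        sigma_b = (mu_t * w - mu) ** 2 / (w * (1 - w))
    sigma_b[~np.isfinite(sigma_b)] = 0.0     # empty-class cuts score zero
    k = int(np.argmax(sigma_b))
    return edges[k + 1]                      # threshold at the best bin edge

# Bimodal toy data: two well-separated intensity populations.
rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(60, 8, 3000), rng.normal(180, 8, 3000)])
t = otsu_threshold(data)                     # lands between the two modes
```

Evaluating this criterion for every candidate cut is what becomes exponentially expensive for multiple thresholds, which is the cost the proposed clustering formulation avoids.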

  6. Diagnostic Medical Imaging in Pediatric Patients and Subsequent Cancer Risk.

    Science.gov (United States)

    Mulvihill, David J; Jhawar, Sachin; Kostis, John B; Goyal, Sharad

    2017-06-20

    The use of diagnostic medical imaging is becoming increasingly more commonplace in the pediatric setting. However, many medical imaging modalities expose pediatric patients to ionizing radiation, which has been shown to increase the risk of cancer development in later life. This review article provides a comprehensive overview of the available data regarding the risk of cancer development following exposure to ionizing radiation from diagnostic medical imaging. Attention is paid to modalities such as computed tomography scans and fluoroscopic procedures that can expose children to radiation doses orders of magnitude higher than standard diagnostic x-rays. Ongoing studies that seek to more precisely determine the relationship of diagnostic medical radiation in children and subsequent cancer development are discussed, as well as modern strategies to better quantify this risk. Finally, as cardiovascular imaging and intervention contribute substantially to medical radiation exposure, we discuss strategies to enhance radiation safety in these areas. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  7. Early diagnostic method for sepsis based on neutrophil MR imaging

    Directory of Open Access Journals (Sweden)

    Shanhua Han

    2015-06-01

Conclusion: Mouse and human neutrophils could be more effectively labelled by Mannan-coated SPIONs in vitro than by Feridex. Sepsis-analog neutrophils labelled by Mannan-coated SPIONs could be efficiently detected on MR images, which may serve as an early diagnostic method for sepsis.

  8. Predicting diagnostic error in radiology via eye-tracking and image analytics: Preliminary investigation in mammography

    Energy Technology Data Exchange (ETDEWEB)

    Voisin, Sophie; Tourassi, Georgia D. [Biomedical Science and Engineering Center, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Pinto, Frank [School of Engineering, Science, and Technology, Virginia State University, Petersburg, Virginia 23806 (United States); Morin-Ducote, Garnetta; Hudson, Kathleen B. [Department of Radiology, University of Tennessee Medical Center at Knoxville, Knoxville, Tennessee 37920 (United States)

    2013-10-15

Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists’ gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels.Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from four Radiology residents and two breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BI-RADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated.Results: Machine learning can be used to predict diagnostic error by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model [area under the ROC curve (AUC) = 0.792 ± 0.030]. Personalized user modeling was far more accurate for the more experienced readers (AUC = 0.837 ± 0.029) than for the less experienced ones (AUC = 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features.Conclusions: Diagnostic errors in mammography can be predicted to a good extent by leveraging the radiologists’ gaze behavior and image content.
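As a generic illustration of the feature-level fusion the Methods describe (the actual study's features, learning algorithms, and data are not reproduced; every feature name and number below is invented), gaze and image features can be concatenated and fed to a simple classifier:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def train_logistic(X, y, lr=0.1, epochs=500):
    """Minimal batch-gradient-descent logistic regression with a bias term."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        w -= lr * Xb.T @ (sigmoid(Xb @ w) - y) / len(y)
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return sigmoid(Xb @ w)

# Hypothetical reader-error data: two gaze features (e.g. dwell time,
# fixation count) and one image feature (e.g. texture contrast),
# merged by simple concatenation before training.
rng = np.random.default_rng(4)
gaze = rng.normal(size=(200, 2))
image_feats = rng.normal(size=(200, 1))
X = np.hstack([gaze, image_feats])           # feature-level fusion
score = X @ np.array([1.5, -1.0, 2.0]) + 0.3 * rng.normal(size=200)
y = (score > 0).astype(float)                # synthetic "error" labels
w = train_logistic(X, y)
acc = float(np.mean((predict(w, X) > 0.5) == y))
```

In the study's setting, the model weights (or a feature-selection pass over them) would indicate the relative contribution of gaze versus image features, and per-reader training corresponds to the personalized models discussed in the Results.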

  9. Predicting diagnostic error in Radiology via eye-tracking and image analytics: Application in mammography

    Energy Technology Data Exchange (ETDEWEB)

    Voisin, Sophie [ORNL]; Pinto, Frank M [ORNL]; Morin-Ducote, Garnetta [University of Tennessee, Knoxville (UTK)]; Hudson, Kathy [University of Tennessee, Knoxville (UTK)]; Tourassi, Georgia [ORNL]

    2013-01-01

    Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists’ gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels. Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from 4 radiology residents and 2 breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BI-RADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated. Results: Diagnostic error can be predicted reliably by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model (AUC = 0.79). Personalized user modeling was far more accurate for the more experienced readers (average AUC of 0.837 ± 0.029) than for the less experienced ones (average AUC of 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features. Conclusions: Diagnostic errors in mammography can be predicted reliably by leveraging the radiologists’ gaze behavior and image content.

  10. An improved dehazing algorithm of aerial high-definition image

    Science.gov (United States)

    Jiang, Wentao; Ji, Ming; Huang, Xiying; Wang, Chao; Yang, Yizhou; Li, Tao; Wang, Jiaoying; Zhang, Ying

    2016-01-01

    For unmanned aerial vehicle (UAV) images, the sensor cannot acquire high-quality images in fog and haze weather. To solve this problem, an improved dehazing algorithm for aerial high-definition images is proposed. Based on the dark channel prior model, the new algorithm first extracts edges from the crude estimated transmission map and expands the extracted edges. According to the expanded edges, the algorithm then sets a threshold value to divide the crude estimated transmission map into different areas and applies guided filtering separately to each area to compute the optimized transmission map. The experimental results demonstrate that the performance of the proposed algorithm is substantially the same as that of the algorithm based on the dark channel prior and guided filtering, while its average computation time is around 40% of the latter's; the detection ability for UAV images in fog and haze weather is thereby improved effectively.
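The dark channel prior that this record builds on can be sketched in a few lines. This is a generic, illustrative implementation of the prior and the crude transmission estimate only; the paper's edge-guided area splitting and per-area guided filtering are not reproduced, and the patch size, omega, and airlight values below are arbitrary choices.

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel minimum over color channels and a local patch (the dark channel prior)."""
    h, w, _ = img.shape
    mins = img.min(axis=2)                      # minimum over RGB
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_transmission(img, atmosphere, omega=0.95, patch=3):
    """Crude transmission map t = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(img / atmosphere, patch)

# Toy hazy image: bright haze everywhere except a dark object in one corner.
img = np.full((8, 8, 3), 0.8)
img[:3, :3] = 0.1
A = np.array([0.9, 0.9, 0.9])                  # airlight, assumed known here
t = estimate_transmission(img, A)
print(t.min(), t.max())
```

The dark object yields a high transmission estimate (little haze in front of it), while the bright hazy background yields a low one; the paper's contribution is in how this crude map is subsequently refined region by region.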

  11. The optimal algorithm for Multi-source RS image fusion.

    Science.gov (United States)

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

    In order to address the issue that fusion rules cannot be self-adaptively adjusted by available fusion methods according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of genetic algorithms with the advantages of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observation operator. It then designs the objective function as a weighted sum of evaluation indices and optimizes the objective function by employing GSDA so as to obtain a higher-resolution RS image. The main points of the text are summarized as follows. • The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion. • The article presents the GSDA algorithm for self-adaptive adjustment of the fusion rules. • The text puts forward the model operator and the observation operator as the fusion scheme for RS images based on GSDA. The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.

  12. Phyllodes tumor: diagnostic imaging and histopathology findings.

    Science.gov (United States)

    Venter, Alina Cristiana; Roşca, Elena; Daina, Lucia Georgeta; Muţiu, Gabriela; Pirte, Adriana Nicoleta; Rahotă, Daniela

    2015-01-01

    Phyllodes tumors are rare breast tumors, accounting for less than 1% of all primary tumors of the breast. Histologically, phyllodes tumors can be divided into benign (60%), borderline (20%) and malignant (20%). The mammography examination was performed with a Giotto 3D Images digital mammography system; the ultrasound examination was performed with a GE Logiq P6 device, and histological confirmation was obtained after surgery or following biopsy. The nine patients who presented with clinically palpable nodules were divided into two groups: Group I comprised the six patients with benign histological results, and Group II those with borderline and malignant histological results. Mammography, performed in 77.7% of cases, revealed a well-circumscribed round, oval or lobulated opacity. Ultrasound examination was performed in all patients. Mammography and ultrasound have limitations in differentiating between benign lesions and phyllodes tumors. In the nine analyzed cases, mammographic and ultrasound examinations did not allow differentiation into the three groups of phyllodes tumor. Histopathological examination is considered the gold standard for their diagnosis. Correlations between mammographic and microscopic aspects were inconclusive for determining the degree of differentiation; ultrasound changes, however, could be correlated with the histopathological aspects.

  13. Diagnostic imaging of shoulder rotator cuff lesions

    Directory of Open Access Journals (Sweden)

    Nogueira-Barbosa Marcello Henrique

    2002-01-01

    Full Text Available Shoulder rotator cuff tendon tears were evaluated with ultrasonography (US) and magnetic resonance imaging (MRI). Surgical or arthroscopic correlation was available in 25 cases. Overall costs were also considered. The diagnosis of shoulder impingement syndrome was made on a clinical basis. Surgery or arthroscopy was considered after conservative treatment had failed for 6 months, or when rotator cuff repair was indicated. Ultrasound was performed in 22 patients and MRI in 17 of the 25 patients. Sensitivity, specificity and accuracy were 80%, 100% and 90.9% for US and 90%, 100% and 94.12% for MRI, respectively. In 16 cases both US and MRI were obtained, and in this subgroup the statistical correlation was excellent (p < 0.001). We concluded that both methods are reliable for the evaluation of full-thickness rotator cuff tears. Since US is less expensive, it could be considered the screening method when rotator cuff integrity is the main question and when well-trained radiologists and high-resolution equipment are available.

  14. 3D ultrasound imaging for prosthesis fabrication and diagnostic imaging

    Energy Technology Data Exchange (ETDEWEB)

    Morimoto, A.K.; Bow, W.J.; Strong, D.S. [and others]

    1995-06-01

    The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer-aided design (CAD) software can use this data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data -- volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry, with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data was acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.

  15. Regularized image reconstruction algorithms for dual-isotope myocardial perfusion SPECT (MPS) imaging using a cross-tracer prior.

    Science.gov (United States)

    He, Xin; Cheng, Lishui; Fessler, Jeffrey A; Frey, Eric C

    2011-06-01

    In simultaneous dual-isotope myocardial perfusion SPECT (MPS) imaging, data are simultaneously acquired to determine the distributions of two radioactive isotopes. The goal of this work was to develop penalized maximum likelihood (PML) algorithms for a novel cross-tracer prior that exploits the fact that the two images reconstructed from simultaneous dual-isotope MPS projection data are perfectly registered in space. We first formulated the simultaneous dual-isotope MPS reconstruction problem as a joint estimation problem. A cross-tracer prior that couples voxel values in both images was then proposed. We developed an iterative algorithm, based on separable surrogate functions, that reconstructs the MPS images and converges to the maximum a posteriori solution for this prior. To accelerate convergence, we developed a fast algorithm for the cross-tracer prior based on the complete-data OS-EM (COSEM) framework. The proposed algorithm was compared qualitatively and quantitatively to a single-tracer version of the prior that did not include the cross-tracer term. Quantitative evaluations included comparisons of mean and standard deviation images as well as assessment of image fidelity using the mean square error. We also evaluated the cross-tracer prior using a three-class observer study with respect to the three-class MPS diagnostic task, i.e., classifying patients as having either no defect, reversible defects, or fixed defects. For this study, a comparison with conventional ordered-subsets expectation maximization (OS-EM) reconstruction with postfiltering was performed. The comparisons to the single-tracer prior demonstrated similar resolution for areas of the image with large intensity changes and reduced noise in uniform regions. The cross-tracer prior was also superior to the single-tracer version in terms of restoring image fidelity. Results of the three-class observer study showed that the proposed cross-tracer prior and the convergent algorithms improved the

  16. Autofluorescence-based diagnostic UV imaging of tissues and cells

    Science.gov (United States)

    Renkoski, Timothy E.

    Cancer is the second leading cause of death in the United States, and its early diagnosis is critical to improving treatment options and patient outcomes. In autofluorescence (AF) imaging, light of controlled wavelengths is projected onto tissue, absorbed by specific molecules, and re-emitted at longer wavelengths. Images of re-emitted light are used together with spectral information to infer tissue functional information and diagnosis. This dissertation describes AF imaging studies of three different organs using data collected from fresh human surgical specimens. In the ovary study, illumination was at 365 nm, and images were captured at 8 emission wavelengths. Measurements from a multispectral imaging system and fiber optic probe were used to map tissue diagnosis at every image pixel. For the colon and pancreas studies, instrumentation was developed extending AF imaging capability to sub-300 nm excitation. Images excited in the deep UV revealed tryptophan and protein content which are believed to change with disease state. Several excitation wavelength bands from 280 nm to 440 nm were investigated. Microscopic AF images collected in the pancreas study included both cultured and primary cells. Several findings are reported. A method of transforming fiber optic probe spectra for direct comparison with imager spectra was devised. Normalization of AF data by green reflectance data was found useful in correcting hemoglobin absorption. Ratio images, both AF and reflectance, were formulated to highlight growths in the colon. Novel tryptophan AF images were found less useful for colon diagnostics than the new ratio techniques. Microscopic tryptophan AF images produce useful visualization of cellular protein content, but their diagnostic value requires further study.

  17. Efficient iterative image reconstruction algorithm for dedicated breast CT

    Science.gov (United States)

    Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan

    2016-03-01

    Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contain high levels of noise. Iterative image reconstruction (IIR) algorithms may be well suited for the system since they can potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. IIR algorithms are also computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and the positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method come from having the final image produced by a linear combination of two separately reconstructed images - one containing gray level information and the other with enhanced high frequency components. Both images result from a few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters, both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at the University of California, Davis.
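The final blending step the abstract describes — a linear combination of a gray-level image and a high-frequency-enhanced image, governed by two parameters — can be illustrated schematically. The box-filter split below is only a cheap stand-in for the two separate IIR reconstructions, and alpha/beta are illustrative values for the two tunable parameters.

```python
import numpy as np

def smooth(img, k=3):
    """Box-filter smoothing as a stand-in for the low-noise, gray-level reconstruction."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for di in range(k):
        for dj in range(k):
            out += p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (k * k)

# Toy "reconstruction": a disk phantom plus noise.
rng = np.random.default_rng(1)
yy, xx = np.mgrid[:32, :32]
phantom = ((xx - 16) ** 2 + (yy - 16) ** 2 < 64).astype(float)
recon = phantom + 0.1 * rng.normal(size=phantom.shape)

gray = smooth(recon)              # gray-level component (noise suppressed)
highfreq = recon - gray           # enhanced high-frequency component
alpha, beta = 1.0, 0.5            # the two tunable parameters of the final blend
final = alpha * gray + beta * highfreq
print(final.shape)
```

Raising beta sharpens edges at the cost of noise; raising alpha alone favors smoothness, which mirrors the well-defined parameter/image-quality trade-off the abstract claims.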

  18. High Fidelity Imaging Algorithm for the Undique Imaging Monte Carlo Simulator

    Directory of Open Access Journals (Sweden)

    Tremblay Grégoire

    2016-01-01

    Full Text Available The Undique imaging Monte Carlo simulator (Undique hereafter) was developed to reproduce the behavior of 3D imaging devices. This paper describes its high fidelity imaging algorithm.

  19. An Improved FCM Medical Image Segmentation Algorithm Based on MMTD

    Directory of Open Access Journals (Sweden)

    Ningning Zhou

    2014-01-01

    Full Text Available Image segmentation plays an important role in medical image processing. Fuzzy c-means (FCM) is one of the popular clustering algorithms for medical image segmentation. But FCM is highly vulnerable to noise because it does not consider spatial information in image segmentation. This paper introduces the medium mathematics system, which is employed to process fuzzy information for image segmentation. It establishes a medium similarity measure based on the measure of medium truth degree (MMTD) and uses the correlation between a pixel and its neighbors to define the medium membership function. An improved FCM medical image segmentation algorithm based on MMTD, which takes spatial features into account, is proposed in this paper. The experimental results show that the proposed algorithm is more robust to noise than the standard FCM, with more certainty and less fuzziness. This will lead to practical and effective applications in medical image segmentation.
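For reference, the standard FCM baseline that the paper improves on looks like this in outline. The MMTD-based medium membership function is not reproduced here; this sketch uses the ordinary Euclidean membership update on pixel intensities, which is exactly the spatially blind behavior the abstract criticizes.

```python
import numpy as np

def fcm(intensities, n_clusters=2, m=2.0, n_iter=50, seed=0):
    """Standard fuzzy c-means on a 1-D array of pixel intensities."""
    rng = np.random.default_rng(seed)
    x = intensities.reshape(-1, 1).astype(float)
    u = rng.random((x.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)            # fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]   # membership-weighted means
        d = np.abs(x - centers.T) + 1e-9                 # distances to cluster centers
        inv = d ** (-2.0 / (m - 1))
        u = inv / inv.sum(axis=1, keepdims=True)         # membership update
    return u, centers.ravel()

# Toy bimodal "image": two tissue classes plus noise.
rng = np.random.default_rng(2)
img = np.concatenate([rng.normal(50, 5, 500), rng.normal(150, 5, 500)])
u, centers = fcm(img)
print(sorted(centers.round(1)))
```

A spatially aware variant such as the paper's would additionally let each pixel's membership be influenced by the memberships of its neighbors, damping the effect of isolated noisy pixels.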

  20. Investigation into diagnostic agreement using automated computer-assisted histopathology pattern recognition image analysis

    Directory of Open Access Journals (Sweden)

    Joshua D Webster

    2012-01-01

    Full Text Available The extent to which histopathology pattern recognition image analysis (PRIA agrees with microscopic assessment has not been established. Thus, a commercial PRIA platform was evaluated in two applications using whole-slide images. Substantial agreement, lacking significant constant or proportional errors, between PRIA and manual morphometric image segmentation was obtained for pulmonary metastatic cancer areas (Passing/Bablok regression. Bland-Altman analysis indicated heteroscedastic measurements and tendency toward increasing variance with increasing tumor burden, but no significant trend in mean bias. The average between-methods percent tumor content difference was -0.64. Analysis of between-methods measurement differences relative to the percent tumor magnitude revealed that method disagreement had an impact primarily in the smallest measurements (tumor burden 0.988, indicating high reproducibility for both methods, yet PRIA reproducibility was superior (C.V.: PRIA = 7.4, manual = 17.1. Evaluation of PRIA on morphologically complex teratomas led to diagnostic agreement with pathologist assessments of pluripotency on subsets of teratomas. Accommodation of the diversity of teratoma histologic features frequently resulted in detrimental trade-offs, increasing PRIA error elsewhere in images. PRIA error was nonrandom and influenced by variations in histomorphology. File-size limitations encountered while training algorithms and consequences of spectral image processing dominance contributed to diagnostic inaccuracies experienced for some teratomas. PRIA appeared better suited for tissues with limited phenotypic diversity. Technical improvements may enhance diagnostic agreement, and consistent pathologist input will benefit further development and application of PRIA.

  1. Image fusion based on expectation maximization algorithm and steerable pyramid

    Institute of Scientific and Technical Information of China (English)

    Gang Liu(刘刚); Zhongliang Jing(敬忠良); Shaoyuan Sun(孙韶媛); Jianxun Li(李建勋); Zhenhua Li(李振华); Henry Leung

    2004-01-01

    In this paper, a novel image fusion method based on the expectation maximization (EM) algorithm and steerable pyramid is proposed. The registered images are first decomposed by using the steerable pyramid. The EM algorithm is used to fuse the image components in the low frequency band. The selection method involving the informative importance measure is applied to those in the high frequency band. The final fused image is then computed by taking the inverse transform on the composite coefficient representations. Experimental results show that the proposed method outperforms conventional image fusion methods.
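The overall fuse-by-bands structure — combine low-frequency components with one rule and high-frequency components with another, then invert the transform — can be sketched with a simple low/high split. In this illustrative reduction, a box blur stands in for the steerable pyramid, plain averaging stands in for the EM-based low-band fusion, and a max-absolute rule plays the role of the informative-importance selection.

```python
import numpy as np

def blur(img, k=5):
    """Separable-equivalent box blur standing in for the pyramid's low-pass stage."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for di in range(k):
        for dj in range(k):
            out += p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (k * k)

def fuse(a, b):
    """Average the low-frequency bands; keep the larger-magnitude high-frequency detail."""
    low_a, low_b = blur(a), blur(b)
    high_a, high_b = a - low_a, b - low_b
    low = 0.5 * (low_a + low_b)
    high = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
    return low + high          # "inverse transform" of the two-band split

# Toy inputs: the same scene, each copy degraded (defocused) on a different half.
rng = np.random.default_rng(3)
base = rng.random((16, 16))
a = base.copy(); a[:, :8] = blur(a)[:, :8]     # left half defocused
b = base.copy(); b[:, 8:] = blur(b)[:, 8:]     # right half defocused
fused = fuse(a, b)
print(fused.shape)
```

The fused result recovers detail from whichever input is sharp at each location, which is the qualitative behavior the multiresolution fusion in the paper formalizes.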

  2. A SAR IMAGE REGISTRATION METHOD BASED ON SIFT ALGORITHM

    Directory of Open Access Journals (Sweden)

    W. Lu

    2017-09-01

    Full Text Available In order to improve the stability and rapidity of synthetic aperture radar (SAR) image matching, an effective method was presented. Firstly, adaptive smoothing filtering based on Wallis filtering was employed for image denoising, to avoid amplifying noise in the subsequent processing. Secondly, feature points were extracted by a simplified SIFT algorithm. Finally, the exact matching of the images was achieved with these points. Compared with the existing methods, it not only maintains the richness of features but also reduces the noise of the image. The simulation results show that the proposed algorithm can achieve a better matching effect.

  3. A SAR Image Registration Method Based on SIFT Algorithm

    Science.gov (United States)

    Lu, W.; Yue, X.; Zhao, Y.; Han, C.

    2017-09-01

    In order to improve the stability and rapidity of synthetic aperture radar (SAR) image matching, an effective method was presented. Firstly, adaptive smoothing filtering based on Wallis filtering was employed for image denoising, to avoid amplifying noise in the subsequent processing. Secondly, feature points were extracted by a simplified SIFT algorithm. Finally, the exact matching of the images was achieved with these points. Compared with the existing methods, it not only maintains the richness of features but also reduces the noise of the image. The simulation results show that the proposed algorithm can achieve a better matching effect.
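The matching stage of a SIFT-style registration pipeline typically pairs descriptors with Lowe's ratio test: a match is accepted only when the nearest descriptor is clearly closer than the second nearest. A minimal sketch, assuming the descriptors have already been extracted by whatever detector is in use:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Greedy nearest-neighbor matching with Lowe's ratio test."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # distances to all candidates
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:      # unambiguous match only
            matches.append((i, best))
    return matches

# Toy descriptors: desc_b is a noisy, shuffled copy of desc_a,
# emulating the same features re-detected in a second SAR image.
rng = np.random.default_rng(4)
desc_a = rng.random((20, 128))
perm = rng.permutation(20)
desc_b = desc_a[perm] + 0.01 * rng.normal(size=(20, 128))
matches = match_descriptors(desc_a, desc_b)
correct = sum(perm[j] == i for i, j in matches)
print(f"{correct}/{len(matches)} correct matches")
```

From such point correspondences, the registration transform between the two images can then be estimated, e.g. by least squares with outlier rejection.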

  4. A dehazing algorithm with multiple simultaneously captured images

    Science.gov (United States)

    López-Martínez, José L.; Kober, Vitaly; Escalante-Torres, Manuel

    2016-09-01

    Recently, many efficient methods have been developed for dehazing from a single observed image. Such dehazing algorithms estimate scene depth and then compute the thickness of the haze. However, since the problem is ill-posed, the restored image often contains artificial colors and overstretched contrast. In this work, we use multiple captures of hazed images from three cameras to solve the dehazing problem. The new dehazing method with three simultaneous images is based on the solution of explicit linear systems of equations derived from the optimization of an objective function. The performance of the proposed method is compared with that of a common dehazing algorithm in terms of the quality of image restoration.

  5. Image Encryption Algorithm Based on Chaotic Economic Model

    Directory of Open Access Journals (Sweden)

    S. S. Askar

    2015-01-01

    Full Text Available In the literature, chaotic economic systems have received much attention because of their complex dynamic behaviors, such as bifurcation and chaos. Recently, a few studies on the use of these systems in cryptographic algorithms have been conducted. In this paper, a new image encryption algorithm based on a chaotic economic map is proposed. An implementation of the proposed algorithm on a plain image based on the chaotic map is performed. The obtained results show that the proposed algorithm can successfully encrypt and decrypt images with the same security keys. The security analysis is encouraging and shows that the encrypted images have good information entropy and very low correlation coefficients, and that the distribution of the gray values of the encrypted image has random-like behavior.
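A minimal sketch of chaos-based image encryption: iterate a chaotic map from a secret initial condition to generate a keystream, and XOR it with the pixels. The logistic map below is a stand-in for the paper's chaotic economic map, whose exact form the abstract does not give; the initial condition and map parameter play the role of the security keys.

```python
import numpy as np

def keystream(length, x0=0.3779, r=3.99):
    """Byte keystream from iterating the chaotic logistic map x -> r*x*(1-x)."""
    x, out = x0, np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1 - x)
        out[i] = int(x * 256) % 256
    return out

def encrypt(img, x0=0.3779):
    """XOR every pixel with the chaotic keystream (keyed by x0)."""
    flat = img.astype(np.uint8).ravel()
    return (flat ^ keystream(flat.size, x0)).reshape(img.shape)

decrypt = encrypt  # XOR with the same keystream inverts the operation

rng = np.random.default_rng(5)
img = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
cipher = encrypt(img)
print(np.array_equal(decrypt(cipher), img))
```

The sensitivity of chaotic maps to the initial condition is what gives such schemes their key sensitivity: a slightly different `x0` produces an entirely different keystream.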

  6. Cryptanalysis of an image encryption algorithm based on DNA encoding

    Science.gov (United States)

    Akhavan, A.; Samsudin, A.; Akhshani, A.

    2017-10-01

    Recently, an image encryption algorithm based on DNA encoding and Elliptic Curve Cryptography (ECC) was proposed. This paper investigates the security of the DNA-based image encryption algorithm and its resistance against chosen-plaintext attack. The results of the analysis demonstrate that the security of the algorithm mainly relies on one static shuffling step with a simple confusion operation. In this study, a practical plain-image recovery method is proposed, and it is shown that images encrypted with the same key can easily be recovered using the suggested cryptanalysis method with as few as two chosen plain images. A strategy to improve the security of the algorithm is also presented.
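The structural weakness the paper exploits — a static shuffle plus a simple confusion step reused across images — can be demonstrated on a toy cipher with exactly two chosen plaintexts, mirroring the abstract's claim. The cipher below is a hypothetical stand-in with that structure, not the actual DNA/ECC scheme.

```python
import numpy as np

N = 16  # pixels in the toy image (4x4); uint16 so pixel values can hold indices

# Victim cipher: a fixed secret permutation followed by a fixed XOR keystream,
# mimicking the "static shuffling + simple confusion" structure under attack.
rng = np.random.default_rng(6)
perm = rng.permutation(N)
ks = rng.integers(0, 1 << 16, size=N, dtype=np.uint16)

def encrypt(img):
    return img.ravel()[perm] ^ ks

# Chosen plaintext 1: all zeros -> the ciphertext is exactly the keystream.
recovered_ks = encrypt(np.zeros(N, dtype=np.uint16))
# Chosen plaintext 2: pixel value = pixel index -> exposes the permutation.
probe = np.arange(N, dtype=np.uint16)
recovered_perm = (encrypt(probe) ^ recovered_ks).astype(np.int64)

def attack_decrypt(cipher):
    flat = cipher ^ recovered_ks     # strip the confusion
    out = np.empty(N, dtype=np.uint16)
    out[recovered_perm] = flat       # undo the shuffle
    return out

secret = rng.integers(0, 1 << 16, size=N, dtype=np.uint16)
print(np.array_equal(attack_decrypt(encrypt(secret)), secret))
```

Once the keystream and permutation are recovered from the two chosen images, every other image encrypted under the same key decrypts immediately, which is why key-dependent, plaintext-dependent shuffling is the usual remedy.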

  7. Color Image Segmentation via Improved K-Means Algorithm

    Directory of Open Access Journals (Sweden)

    Ajay Kumar

    2016-03-01

    Full Text Available Data clustering techniques are often used to segment real-world images. Unsupervised image segmentation algorithms based on clustering suffer from random initialization. There is a need for an efficient and effective image segmentation algorithm that can be used in computer vision, object recognition, image recognition, or compression. To address these problems, the authors present a density-based initialization scheme to segment color images. In the kernel-density-based clustering technique, the data sample is mapped to a high-dimensional space for effective data classification. The Gaussian kernel is used for density estimation and for mapping the sample image into a high-dimensional color space. The proposed initialization scheme for the k-means clustering algorithm can homogeneously segment an image into regions of interest while avoiding the dead-center and trapped-center (local minima) phenomena. The experimental results indicate that the proposed approach is more effective than other existing clustering-based image segmentation algorithms. In the proposed approach, the Berkeley image database has been used for comparative analysis with recent clustering-based image segmentation algorithms such as k-means++, k-medoids and k-modes.
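The density-based seeding idea — start k-means from modes of a Gaussian kernel density estimate instead of random points — can be sketched as follows. The bandwidth, the mode-suppression radius, and the toy color samples are all illustrative choices, not the paper's settings.

```python
import numpy as np

def density_seeds(points, k, bandwidth=10.0):
    """Pick k initial centers at local density maxima under a Gaussian kernel."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    density = np.exp(-d2 / (2 * bandwidth ** 2)).sum(axis=1)
    seeds, available = [], np.ones(len(points), dtype=bool)
    for _ in range(k):
        idx = np.argmax(np.where(available, density, -np.inf))
        seeds.append(points[idx])
        # suppress neighbors of the chosen seed so the next seed lies in another mode
        available &= d2[idx] > (2 * bandwidth) ** 2
    return np.array(seeds)

def kmeans(points, centers, n_iter=20):
    """Plain Lloyd iterations from the given initial centers."""
    for _ in range(n_iter):
        labels = np.argmin(((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        centers = np.array([points[labels == c].mean(axis=0) for c in range(len(centers))])
    return centers, labels

# Toy color samples: two well-separated clusters in a 3-channel color space.
rng = np.random.default_rng(7)
pts = np.vstack([rng.normal(40, 4, (100, 3)), rng.normal(200, 4, (100, 3))])
centers, labels = kmeans(pts, density_seeds(pts, 2))
print(sorted(centers[:, 0].round()))
```

Because each seed starts inside a distinct density mode, no cluster begins empty ("dead center") and the iterations are unlikely to collapse two modes into one local minimum, which is precisely the failure mode of random initialization.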

  8. Design and Implementation of Image Encryption Algorithm Using Chaos

    Directory of Open Access Journals (Sweden)

    Sandhya Rani M.H.

    2014-06-01

    Full Text Available Images are widely used in diverse areas such as medicine, the military, science, engineering, art, advertising, entertainment, education and training, increasing the use of digital techniques for transmitting and storing images. Maintaining the confidentiality and integrity of images has therefore become a major concern, which makes encryption necessary. The pixel values of neighbouring pixels in a plain image are strongly correlated. The proposed algorithm breaks this correlation and increases the entropy. Correlation is reduced by changing the pixel positions, which is called confusion. The histogram is equalized by changing the pixel values, which is called diffusion. The proposed encryption algorithm is based on chaos theory. The plain image is divided into blocks, and three levels of shuffling are then performed using different chaotic maps. In the first level the pixels within each block are shuffled. In the second level the blocks are shuffled, and in the third level all the pixels in the image are shuffled. Finally, the shuffled image is diffused using a chaotic sequence generated with symmetric keys, to produce the ciphered image for transmission. The experimental results demonstrate that the proposed algorithm can be used successfully to encrypt/decrypt images with the secret keys. The analysis of the algorithm also shows that it provides a large key space and high key sensitivity. The encrypted image has a good encryption effect, high information entropy and a low correlation coefficient.

  9. Diagnostic imaging in psychiatry; Bildgebende Verfahren in der Psychiatrie

    Energy Technology Data Exchange (ETDEWEB)

    Stoppe, G.; Hentschel, F.; Munz, D.L. (eds.)

    2000-07-01

    The textbook presents an exhaustive survey of diagnostic imaging methods available for clinical evaluation of the entire range of significant psychiatric symptoms via imaging of the anatomy and functions of the brain. The chapters discuss: the methods and their efficient use for given diagnostic objectives; image analysis; description and interpretation of findings with respect to the clinical symptoms; morphological and functional correlates of findings. The book is intended to help psychiatrists and neurologists as well as doctors in the radiology and nuclear medicine departments. (orig./CB) [Translated from German] The development of modern imaging affords fascinating insights into the anatomy and functions of the brain and their changes in psychiatric disorders. The methodology of the examination procedures and the findings for all important psychiatric disease entities are described systematically and comprehensively in this book: targeted and efficient use of the procedures; image analysis and description of findings; assessment of the findings and their relation to the clinical picture; morphological and functional correlates of the findings. Psychiatrists and neurologists are addressed, as are radiologists and nuclear medicine physicians. (orig.)

  10. Techniques for Radar Imaging Based on MUSIC Algorithm

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    First, the radar target scattering-center model and the MUSIC algorithm are analyzed in this paper. Guidance on efficiently setting the parameters of the MUSIC algorithm is obtained from experiments on a large amount of simulated radar data. Then, based on measured data from two kinds of aircraft targets acquired with a fully polarized, high-range-resolution radar system, the author mainly investigates the practical use of the MUSIC algorithm in radar imaging, and two-dimensional radar images are generated for the two targets measured in a compact range. In the end, a conclusion is drawn about the relation between radar target scattering properties and the imaging results.
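The core of MUSIC — project candidate steering vectors onto the noise subspace of the sample covariance matrix and take peaks of the inverse projection — can be sketched for a uniform linear array. This one-dimensional direction-finding toy is only a stand-in for the paper's two-dimensional radar imaging; the array geometry, snapshot count, and noise level are illustrative.

```python
import numpy as np

def music_spectrum(snapshots, n_sources, angles_deg, d=0.5):
    """MUSIC pseudospectrum for a uniform linear array (element spacing d wavelengths)."""
    n_elem = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    eigval, eigvec = np.linalg.eigh(R)                        # ascending eigenvalues
    En = eigvec[:, : n_elem - n_sources]                      # noise subspace
    spectrum = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d * np.arange(n_elem) * np.sin(th))  # steering vector
        proj = En.conj().T @ a
        spectrum.append(1.0 / np.real(proj.conj() @ proj))    # peaks where a ⟂ noise subspace
    return np.array(spectrum)

# Toy scene: one scatterer at +20 degrees observed by an 8-element array.
rng = np.random.default_rng(8)
n_elem, n_snap, true_deg = 8, 200, 20.0
a_true = np.exp(2j * np.pi * 0.5 * np.arange(n_elem) * np.sin(np.deg2rad(true_deg)))
sig = rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap)
noise = 0.1 * (rng.normal(size=(n_elem, n_snap)) + 1j * rng.normal(size=(n_elem, n_snap)))
snapshots = np.outer(a_true, sig) + noise
angles = np.arange(-90, 91)
p = music_spectrum(snapshots, 1, angles)
print(angles[np.argmax(p)])
```

The key parameter choices the record alludes to are visible here: the assumed number of sources fixes the noise-subspace dimension, and the snapshot count controls how well the sample covariance approximates the true one.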

  11. A CT Image Segmentation Algorithm Based on Level Set Method

    Institute of Scientific and Technical Information of China (English)

    QU Jing-yi; SHI Hao-shan

    2006-01-01

    Level set methods are robust and efficient numerical tools for resolving curve evolution in image segmentation. This paper proposes a new image segmentation algorithm based on the Mumford-Shah model. The method is applied to CT images, and the experimental results demonstrate its efficiency and accuracy.

  12. Motion tracking in infrared imaging for quantitative medical diagnostic applications

    Science.gov (United States)

    Cheng, Tze-Yuan; Herman, Cila

    2014-01-01

    In medical applications, infrared (IR) thermography is used to detect and examine the thermal signature of skin abnormalities by quantitatively analyzing skin temperature under steady-state conditions or its evolution over time, captured in an image sequence. However, during the image acquisition period, involuntary movements of the patient are unavoidable, and such movements undermine the accuracy of temperature measurement for any particular location on the skin. In this study, a tracking approach using a template-based algorithm is proposed to follow the involuntary motion of the subject in the IR image sequence. The motion tracking allows a temperature evolution to be associated with each spatial location on the body while the body moves relative to the image frame. The affine transformation model is adopted to estimate the motion parameters of the template image. The Lucas-Kanade algorithm is applied to search for the optimized parameters of the affine transformation. A weighting mask is incorporated into the algorithm to ensure tracking robustness. To evaluate the feasibility of the tracking approach, two sets of IR image sequences with random in-plane motion were tested in our experiments. A steady-state (no heating or cooling) IR image sequence, in which the skin temperature is in equilibrium with the environment, was considered first. A thermal-recovery IR image sequence, acquired while the skin recovered from 60 s of cooling, was the second case analyzed. With proper selection of the template image, along with template updates, satisfactory tracking results were obtained for both IR image sequences. The achieved tracking accuracies are promising in terms of satisfying the demands imposed by clinical applications of IR thermography.
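The Lucas-Kanade estimation step the abstract describes can be illustrated in its simplest form: pure translation instead of the full affine model, solved by Gauss-Newton iterations on the sum-of-squared-differences error between the template and the warped image. The synthetic Gaussian-blob "IR frame", window sizes, and offsets are all illustrative.

```python
import numpy as np

def lucas_kanade_translation(template, image, p0, n_iter=40):
    """Estimate the (row, col) position of `template` in `image` by Gauss-Newton
    on the SSD error (a pure-translation reduction of the affine LK tracker)."""
    h, w = template.shape
    p = np.array(p0, dtype=float)                 # current top-left estimate
    gy, gx = np.gradient(image.astype(float))     # image gradients (rows, cols)
    for _ in range(n_iter):
        ys, xs = np.arange(h) + p[0], np.arange(w) + p[1]
        warped = bilinear(image, ys, xs)
        J = np.stack([bilinear(gy, ys, xs).ravel(),
                      bilinear(gx, ys, xs).ravel()], axis=1)
        dp, *_ = np.linalg.lstsq(J, (template - warped).ravel(), rcond=None)
        p += dp
        if np.abs(dp).max() < 1e-6:
            break
    return p

def bilinear(img, ys, xs):
    """Sample img on the (ys x xs) grid with bilinear interpolation."""
    y, x = np.meshgrid(ys, xs, indexing='ij')
    y0 = np.clip(np.floor(y).astype(int), 0, img.shape[0] - 2)
    x0 = np.clip(np.floor(x).astype(int), 0, img.shape[1] - 2)
    fy, fx = y - y0, x - x0
    return (img[y0, x0] * (1 - fy) * (1 - fx) + img[y0 + 1, x0] * fy * (1 - fx)
            + img[y0, x0 + 1] * (1 - fy) * fx + img[y0 + 1, x0 + 1] * fy * fx)

# Toy frame: a smooth blob; the template is a crop at a known sub-pixel location.
yy, xx = np.mgrid[:64, :64].astype(float)
frame = np.exp(-((yy - 30) ** 2 + (xx - 34) ** 2) / 120.0)
template = bilinear(frame, np.arange(40) + 12.5, np.arange(40) + 9.25)
est = lucas_kanade_translation(template, frame, p0=(10.0, 11.0))
print(est.round(2))
```

The full tracker replaces the 2-parameter translation with the 6-parameter affine warp (and, per the abstract, a weighting mask in the least-squares step), but the Gauss-Newton structure is the same.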

  13. Application aspects of advanced antenna diagnostics with the 3D reconstruction algorithm

    DEFF Research Database (Denmark)

    Cappellin, Cecilia; Pivnenko, Sergey

    2015-01-01

    This paper focuses on two important applications of the 3D reconstruction algorithm of the commercial software DIATOOL for antenna diagnostics. The first one is the accurate and detailed identification of array malfunctioning, thanks to the available enhanced spatial resolution of the reconstructed...

  14. Requesting wrist radiographs in emergency department triage: developing a training program and diagnostic algorithm.

    Science.gov (United States)

    Streppa, Joanna; Schneidman, Valerie; Biron, Alain D

    2014-01-01

    Crowding is extremely problematic in Canada, as emergency department (ED) utilization is considerably higher than in any other country. Consequently, an increase has been noted in waiting times for patients who present with injuries of lesser acuity, such as wrist injuries. Wrist fractures are the most commonly broken bone in patients younger than 65 years. Many nurses employed within EDs request wrist radiographs for patients who present with wrist complaints as a norm within their working practice. Significant potential advantages can ensue if EDs adopt a triage nurse-requested radiographic protocol; patients can benefit from a significant time saving of 36% in ED length of stay when nurses initiate radiographs in triage (M. Lindley-Jones & B. J. Finlayson, 2000). In addition, the literature suggests that increased rates of patient and staff satisfaction may be achieved without compromising the quality of the radiographic request or the quality of service (W. Parris, S. McCarthy, A. M. Kelly, & S. Richardson, 1997). Studies have shown that nurses are capable of requesting appropriate radiographs on the basis of a preset protocol. As there is no standardized set of rules for assessing patients presenting with suspected wrist fractures, a training program and a diagnostic algorithm were developed to prepare emergency nurses to appropriately request wrist radiographs. The triage nurse-specific training program includes the following topics: wrist anatomy and physiology, commonly occurring wrist injuries, mechanisms of injury, physical assessment techniques, and types of radiographic images required. The triage nurse algorithm includes the clinical decision-making process. Providing triage nurses with up-to-date evidence-based educational material not only allowed triage nurses to independently assess and request wrist radiographs for patients with potential wrist fractures but also strengthened the link between competent nursing care and better patient

  15. Image standards in Tissue-Based Diagnosis (Diagnostic Surgical Pathology)

    Directory of Open Access Journals (Sweden)

    Vollmer Ekkehard

    2008-04-01

    Full Text Available Abstract Background Progress in automated image analysis, virtual microscopy, hospital information systems, and interdisciplinary data exchange requires image standards to be applied in tissue-based diagnosis. Aims To describe the theoretical background, practical experiences and comparable solutions in other medical fields to promote image standards applicable for diagnostic pathology. Theory and experiences Images used in tissue-based diagnosis present with pathology-specific characteristics. It seems appropriate to discuss their characteristics and potential standardization in relation to the levels of hierarchy in which they appear. All levels can be divided into legal, medical, and technological properties. Standards applied to the first level include regulations or aims to be fulfilled. In legal properties, they have to regulate features of privacy, image documentation, transmission, and presentation; in medical properties, features of disease-image combination, human diagnostics, automated information extraction, archive retrieval and access; and in technological properties, features of image acquisition, display, formats, transfer speed, safety, and system dynamics. The next lower second level has to implement the prescriptions of the upper one, i.e. describe how they are implemented. Legal aspects should demand secure encryption for privacy of all patient-related data, image archives that include all images used for diagnostics for a period of 10 years at minimum, accurate annotations of dates and viewing, and precise hardware and software information. Medical aspects should demand standardized patients' files such as DICOM 3 or HL7, including history and previous examinations, information on image display hardware and software, on image resolution and fields of view, on the relation between sizes of biological objects and image sizes, and on access to archives and retrieval. Technological aspects should deal with image

  16. Image standards in tissue-based diagnosis (diagnostic surgical pathology).

    Science.gov (United States)

    Kayser, Klaus; Görtler, Jürgen; Goldmann, Torsten; Vollmer, Ekkehard; Hufnagl, Peter; Kayser, Gian

    2008-04-18

    Progress in automated image analysis, virtual microscopy, hospital information systems, and interdisciplinary data exchange requires image standards to be applied in tissue-based diagnosis. To describe the theoretical background, practical experiences and comparable solutions in other medical fields to promote image standards applicable for diagnostic pathology. THEORY AND EXPERIENCES: Images used in tissue-based diagnosis present with pathology-specific characteristics. It seems appropriate to discuss their characteristics and potential standardization in relation to the levels of hierarchy in which they appear. All levels can be divided into legal, medical, and technological properties. Standards applied to the first level include regulations or aims to be fulfilled. In legal properties, they have to regulate features of privacy, image documentation, transmission, and presentation; in medical properties, features of disease-image combination, human diagnostics, automated information extraction, archive retrieval and access; and in technological properties features of image acquisition, display, formats, transfer speed, safety, and system dynamics. The next lower second level has to implement the prescriptions of the upper one, i.e. describe how they are implemented. Legal aspects should demand secure encryption for privacy of all patient-related data, image archives that include all images used for diagnostics for a period of 10 years at minimum, accurate annotations of dates and viewing, and precise hardware and software information. Medical aspects should demand standardized patients' files such as DICOM 3 or HL7, including history and previous examinations, information on image display hardware and software, on image resolution and fields of view, on the relation between sizes of biological objects and image sizes, and on access to archives and retrieval. Technological aspects should deal with image acquisition systems (resolution, colour temperature, focus

  17. Hybrid Neural-Network: Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics Developed and Demonstrated

    Science.gov (United States)

    Kobayashi, Takahisa; Simon, Donald L.

    2002-01-01

    As part of the NASA Aviation Safety Program, a unique model-based diagnostics method that employs neural networks and genetic algorithms for aircraft engine performance diagnostics has been developed and demonstrated at the NASA Glenn Research Center against a nonlinear gas turbine engine model. Neural networks are applied to estimate the internal health condition of the engine, and genetic algorithms are used for sensor fault detection, isolation, and quantification. This hybrid architecture combines the excellent nonlinear estimation capabilities of neural networks with the capability to rank the likelihood of various faults given a specific sensor suite signature. The method requires a significantly smaller data training set than a neural network approach alone does, and it performs the combined engine health monitoring objectives of performance diagnostics and sensor fault detection and isolation in the presence of nominal and degraded engine health conditions.
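
    The NASA method couples neural-network health estimation with genetic-algorithm fault isolation against a full engine model, none of which is reproducible from this abstract. As a toy illustration of the genetic-algorithm half only, the sketch below (all names and values hypothetical) evolves an estimate of a single sensor-bias magnitude by minimizing the squared residual against a model output.

```python
import numpy as np

def ga_estimate_bias(measured, model_output, lo=0.0, hi=5.0,
                     pop_size=30, generations=60, seed=1):
    """Toy genetic algorithm: find the sensor-bias magnitude b that best
    explains `measured` = `model_output` + b (least-squares fitness)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, pop_size)

    def fitness(b):                    # negative residual: higher is better
        return -np.sum((measured - (model_output + b)) ** 2)

    best = pop[np.argmax([fitness(b) for b in pop])]
    for _ in range(generations):
        scores = np.array([fitness(b) for b in pop])
        gen_best = pop[np.argmax(scores)]
        if fitness(gen_best) > fitness(best):
            best = gen_best
        # tournament selection between random pairs
        idx = rng.integers(0, pop_size, (pop_size, 2))
        parents = np.where(scores[idx[:, 0]] > scores[idx[:, 1]],
                           pop[idx[:, 0]], pop[idx[:, 1]])
        # blend crossover + Gaussian mutation, clipped to the search range
        mates = rng.permutation(parents)
        alpha = rng.random(pop_size)
        pop = alpha * parents + (1 - alpha) * mates
        pop = np.clip(pop + rng.normal(0, 0.1, pop_size), lo, hi)
        pop[0] = best                  # elitism: never lose the best-so-far
    return best

true_bias = 2.0
model_output = np.linspace(100.0, 110.0, 20)   # "healthy engine" sensor model
measured = model_output + true_bias            # faulted sensor reading
estimate = ga_estimate_bias(measured, model_output)
```

    In the actual architecture the hypotheses ranked by the GA are fault signatures across a whole sensor suite, not a single scalar bias.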

  18. Quantum Image Steganography and Steganalysis Based On LSQu-Blocks Image Information Concealing Algorithm

    Science.gov (United States)

    A. AL-Salhi, Yahya E.; Lu, Songfeng

    2016-08-01

    Quantum steganography can solve some problems that are considered inefficient in image information concealing, and research on quantum image information concealing has been widely pursued in recent years. Quantum image information concealing can be categorized into quantum image digital blocking, quantum image steganography, anonymity, and other branches. Least significant bit (LSB) information concealing plays a vital role in the classical world because many image information concealing algorithms are designed based on it. Firstly, based on the novel enhanced quantum representation (NEQR), an image uniform-blocks clustering algorithm around the least significant Qu-block (LSQB) information concealing algorithm for quantum image steganography is presented. Secondly, a clustering algorithm is proposed to optimize the concealment of important data. Finally, we used the Con-Steg algorithm to conceal the clustered image blocks. Information concealing located in the Fourier domain of an image can achieve the security of image information; thus, we further discuss the Fourier-domain LSQu-block information concealing algorithm for quantum images based on quantum Fourier transforms. In our algorithms, the corresponding unitary transformations are designed to realize the aim of concealing the secret information in the least significant Qu-block representing the color of the quantum cover image. Finally, the procedures for extracting the secret information are illustrated. The quantum image LSQu-block information concealing algorithm can be applied in many fields according to different needs.
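
    The quantum LSQu-block scheme itself cannot be sketched without the NEQR machinery, but the classical LSB concealment it builds on is easy to show. A minimal, hypothetical round-trip (embed then extract) might look like:

```python
import numpy as np

def lsb_embed(cover, bits):
    """Hide a bit sequence in the least significant bits of pixel values."""
    flat = cover.flatten()                       # flatten() returns a copy
    bits = np.asarray(bits, dtype=flat.dtype)
    flat[:bits.size] = (flat[:bits.size] & 254) | bits   # clear LSB, set bit
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    """Read the hidden bits back out of the stego image."""
    return [int(b) for b in stego.flatten()[:n_bits] & 1]

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (8, 8), dtype=np.uint8)
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = lsb_embed(cover, secret)
recovered = lsb_extract(stego, len(secret))
```

    Each pixel changes by at most one grey level, which is why LSB embedding is visually imperceptible; the quantum version performs the analogous bit flip with unitary transformations.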

  19. SAR Image Segmentation Based On Hybrid PSOGSA Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Amandeep Kaur

    2014-09-01

    Full Text Available Image segmentation is useful in many applications: it can identify the regions of interest in a scene or annotate the data. Existing segmentation algorithms can be categorized into region-based segmentation, data clustering, and edge-based segmentation. Region-based segmentation includes the seeded and unseeded region growing algorithms, JSEG, and the fast scanning algorithm. Due to the presence of speckle noise, segmentation of Synthetic Aperture Radar (SAR) images is still a challenging problem. We propose a fast SAR image segmentation method based on the hybrid Particle Swarm Optimization-Gravitational Search Algorithm (PSO-GSA). In this method, threshold estimation is regarded as a search procedure that looks for an appropriate value in a continuous grayscale interval; hence, the PSO-GSA algorithm is employed to search for the optimal threshold. Experimental results indicate that our method is superior to GA-based, AFS-based, and ABC-based methods in terms of segmentation accuracy, segmentation time, and thresholding.
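
    The hybrid PSO-GSA of the record is not reproduced here; the sketch below uses plain particle swarm optimization to search a continuous grayscale interval for the threshold maximizing Otsu's between-class variance, which illustrates the "threshold estimation as search" idea (all parameters are illustrative assumptions).

```python
import numpy as np

def otsu_variance(pixels, t):
    """Between-class variance for threshold t (Otsu's criterion)."""
    lo, hi = pixels[pixels <= t], pixels[pixels > t]
    if lo.size == 0 or hi.size == 0:
        return 0.0
    w0, w1 = lo.size / pixels.size, hi.size / pixels.size
    return w0 * w1 * (lo.mean() - hi.mean()) ** 2

def pso_threshold(pixels, n=20, iters=50, seed=0):
    """Plain particle swarm search for the grey level maximizing Otsu's criterion."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, 255, n)
    vel = np.zeros(n)
    pbest = pos.copy()
    pscore = np.array([otsu_variance(pixels, t) for t in pos])
    gbest = pbest[np.argmax(pscore)]
    for _ in range(iters):
        r1, r2 = rng.random(n), rng.random(n)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, 255)
        score = np.array([otsu_variance(pixels, t) for t in pos])
        improved = score > pscore
        pbest[improved], pscore[improved] = pos[improved], score[improved]
        gbest = pbest[np.argmax(pscore)]
    return gbest

# Bimodal "image": dark background around 60, bright objects around 190.
rng = np.random.default_rng(1)
pixels = np.clip(np.concatenate([rng.normal(60, 10, 4000),
                                 rng.normal(190, 12, 1000)]), 0, 255)
threshold = pso_threshold(pixels)
```

    The GSA component of the hybrid would add gravity-inspired attraction between particles; the swarm-search structure above is unchanged.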

  20. Imaging the pregnant patient for nonobstetric conditions: algorithms and radiation dose considerations.

    Science.gov (United States)

    Patel, Shital J; Reede, Deborah L; Katz, Douglas S; Subramaniam, Raja; Amorosa, Judith K

    2007-01-01

    Use of diagnostic imaging studies for evaluation of pregnant patients with medical conditions not related to pregnancy poses a persistent and recurring dilemma. Although a theoretical risk of carcinogenesis exists, there are no known risks for development of congenital malformations or mental retardation in a fetus exposed to ionizing radiation at the levels typically used for diagnostic imaging. An understanding of the effects of ionizing radiation on the fetus at different gestational stages and the estimated exposure dose received by the fetus from various imaging modalities facilitates appropriate choices for diagnostic imaging of pregnant patients with nonobstetric conditions. Other aspects of imaging besides radiation (ie, contrast agents) also carry potential for fetal injury and must be taken into consideration. Imaging algorithms based on a review of the current literature have been developed for specific nonobstetric conditions: pulmonary embolism, acute appendicitis, urolithiasis, biliary disease, and trauma. Imaging modalities that do not use ionizing radiation (ie, ultrasonography and magnetic resonance imaging) are preferred for pregnant patients. If ionizing radiation is used, one must adhere to the principle of using a dose that is as low as reasonably achievable after a discussion of risks versus benefits with the patient.

  1. Use of hyperspectral imaging technology to develop a diagnostic support system for gastric cancer

    Science.gov (United States)

    Goto, Atsushi; Nishikawa, Jun; Kiyotoki, Shu; Nakamura, Munetaka; Nishimura, Junichi; Okamoto, Takeshi; Ogihara, Hiroyuki; Fujita, Yusuke; Hamamoto, Yoshihiko; Sakaida, Isao

    2015-01-01

    Hyperspectral imaging (HSI) is a new technology that obtains spectroscopic information and renders it in image form. This study examined the difference in the spectral reflectance (SR) of gastric tumors and normal mucosa recorded with a hyperspectral camera equipped with HSI technology and attempted to determine the specific wavelength that is useful for the diagnosis of gastric cancer. A total of 104 gastric tumors removed by endoscopic submucosal dissection from 96 patients at Yamaguchi University Hospital were recorded using a hyperspectral camera. We determined the optimal wavelength and the cut-off value for differentiating tumors from normal mucosa to establish a diagnostic algorithm. We also attempted to highlight tumors by image processing using the hyperspectral camera's analysis software. A wavelength of 770 nm and a cut-off value of 1/4 the corrected SR were selected as the respective optimal wavelength and cut-off values. The rates of sensitivity, specificity, and accuracy of the algorithm's diagnostic capability were 71%, 98%, and 85%, respectively. It was possible to enhance tumors by image processing at the 770-nm wavelength. HSI can be used to measure the SR in gastric tumors and to differentiate between tumorous and normal mucosa.

  2. Ultrasonic particle image velocimetry for improved flow gradient imaging: algorithms, methodology and validation.

    Science.gov (United States)

    Niu, Lili; Qian, Ming; Wan, Kun; Yu, Wentao; Jin, Qiaofeng; Ling, Tao; Gao, Shen; Zheng, Hairong

    2010-04-01

    This paper presents a new algorithm for ultrasonic particle image velocimetry (Echo PIV) that improves flow velocity measurement accuracy and efficiency in regions with high velocity gradients. The conventional Echo PIV algorithm has been modified by incorporating a multiple-iteration algorithm, a sub-pixel method, a filter and interpolation method, and a spurious vector elimination algorithm. The new algorithm's performance is assessed by analyzing simulated images with known displacements, and ultrasonic B-mode images of in vitro laminar pipe flow, rotational flow, and in vivo rat carotid arterial flow. Results on the simulated images show that the new algorithm produces much smaller bias from the known displacements. For laminar flow, the new algorithm results in 1.1% deviation from the analytically derived value, versus 8.8% for the conventional algorithm. The vector quality evaluation for the rotational flow imaging shows that the new algorithm produces better velocity vectors. For in vivo rat carotid arterial flow imaging, the results from the new algorithm deviate on average by 6.6% from the Doppler-measured peak velocities, compared with 15% for the conventional algorithm. The new Echo PIV algorithm effectively improves measurement accuracy when imaging flow fields with high velocity gradients.
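
    The core of any PIV algorithm, conventional or modified, is locating the cross-correlation peak between interrogation windows and refining it to sub-pixel accuracy. A minimal sketch (FFT cross-correlation plus three-point parabolic interpolation; not the authors' full algorithm) might look like:

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the displacement of window B relative to window A by FFT
    cross-correlation, refined to sub-pixel accuracy with a 1-D parabolic fit."""
    c = np.fft.ifft2(np.fft.fft2(win_b) * np.conj(np.fft.fft2(win_a))).real
    peak = np.unravel_index(np.argmax(c), c.shape)
    shift = []
    for axis, p in enumerate(peak):
        n = c.shape[axis]
        # three correlation samples straddling the peak along this axis
        if axis == 0:
            y0, y1, y2 = c[(p - 1) % n, peak[1]], c[p, peak[1]], c[(p + 1) % n, peak[1]]
        else:
            y0, y1, y2 = c[peak[0], (p - 1) % n], c[peak[0], p], c[peak[0], (p + 1) % n]
        denom = y0 - 2 * y1 + y2
        frac = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
        d = p + frac
        shift.append(d - n if d > n / 2 else d)   # wrap to signed displacement
    return tuple(shift)

rng = np.random.default_rng(2)
win_a = rng.random((32, 32))
win_b = np.roll(win_a, (3, -2), axis=(0, 1))   # particles moved 3 down, 2 left
dy, dx = piv_displacement(win_a, win_b)
```

    The paper's iterative refinement would shrink and re-centre the interrogation windows around this first estimate to resolve steep velocity gradients.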

  3. A Constructive Algorithm for Feedforward Neural Networks for Medical Diagnostic Reasoning

    CERN Document Server

    Siddiquee, Abu Bakar; Kamruzzaman, S M

    2010-01-01

    This research searches for alternatives for the resolution of complex medical diagnoses where human knowledge should be apprehended in a general fashion. Successful application examples show that human diagnostic capabilities are significantly worse than those of the neural diagnostic system. Our research describes a constructive neural network algorithm with backpropagation, offering an approach for the incremental construction of near-minimal neural network architectures for pattern classification. The algorithm starts with a minimal number of hidden units in the single hidden layer; additional units are added to the hidden layer one at a time to improve the accuracy of the network and to obtain an optimal size of the neural network. Our algorithm was tested on several benchmark classification problems, including Cancer1, Heart, and Diabetes, with good generalization ability.
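
    The incremental-construction idea can be sketched as follows. Note that this simplified version retrains the whole network from scratch at each size rather than preserving previously trained units as the paper's algorithm does, and all hyperparameters are illustrative.

```python
import numpy as np

def train(X, y, n_hidden, epochs=2000, lr=1.0, seed=0):
    """Train a 1-hidden-layer sigmoid network with plain batch backprop;
    returns the final mean squared error on the training set."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 1, (X.shape[1], n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 1, (n_hidden, 1));          b2 = np.zeros(1)
    sig = lambda z: 1 / (1 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)
        out = sig(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)
    return float(np.mean((sig(sig(X @ W1 + b1) @ W2 + b2) - y) ** 2))

def constructive_fit(X, y, tol=0.01, max_hidden=5):
    """Grow the hidden layer one unit at a time until the error is acceptable."""
    for n_hidden in range(1, max_hidden + 1):
        err = train(X, y, n_hidden, seed=n_hidden)
        if err <= tol:
            break
    return n_hidden, err

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR: needs >1 hidden unit
n_hidden, err = constructive_fit(X, y)
```

    The stopping rule (error threshold, unit cap) is what yields the near-minimal architectures the record describes.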

  4. Computers in Diagnostic Nuclear Medicine Imaging - A Review

    Directory of Open Access Journals (Sweden)

    K. K. Kapoor

    1989-07-01

    Full Text Available Digital computers are becoming increasingly popular for a variety of purposes in nuclear medicine. They are particularly useful in the areas of nuclear imaging and gamma camera image processing, radionuclide inventory, and patient record keeping. By far the most important use of the digital computer is in array processors, which are commonly available with emission computed tomography systems for fast reconstruction of images in transverse, coronal, and sagittal views, particularly when the data to be handled are enormous and involve filtration and correction processes. The addition of array processors to computer systems has helped clinicians improve diagnostic nuclear medicine imaging capability. This paper briefly reviews the role of computers in the field of nuclear medicine imaging.

  5. A FAST CONVERGING SPARSE RECONSTRUCTION ALGORITHM IN GHOST IMAGING

    Institute of Scientific and Technical Information of China (English)

    Li Enrong; Chen Mingliang; Gong Wenlin; Wang Hui; Han Shensheng

    2012-01-01

    A fast converging sparse reconstruction algorithm in ghost imaging is presented. It utilizes total variation regularization, and its formulation is based on the Karush-Kuhn-Tucker (KKT) theorem in the theory of convex optimization. Tests using experimental data show that, compared with the Gradient Projection for Sparse Reconstruction (GPSR) algorithm, the proposed algorithm yields better results with less computation work.

  6. Greylevel Difference Classification Algorithm in Fractal Image Compression

    Institute of Scientific and Technical Information of China (English)

    陈毅松; 卢坚; 孙正兴; 张福炎

    2002-01-01

    This paper proposes the notion of a greylevel difference classification algorithm in fractal image compression. Then an example of the greylevel difference classification algorithm is given as an improvement of the quadrant greylevel and variance classification in the quadtree-based encoding algorithm. The algorithm incorporates the frequency feature in spatial analysis using the notion of average quadrant greylevel difference, leading to an enhancement in terms of encoding time, PSNR value and compression ratio.
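
    The abstract does not give the exact definition of the average quadrant greylevel difference; a plausible sketch of such a block-classification feature (an assumed form, not necessarily the paper's) is:

```python
import numpy as np

def quadrant_greylevel_difference(block):
    """Average absolute deviation of the four quadrant mean grey levels
    from the block mean: a feature for classifying blocks before the
    fractal domain-block search."""
    h, w = block.shape
    quads = [block[:h // 2, :w // 2], block[:h // 2, w // 2:],
             block[h // 2:, :w // 2], block[h // 2:, w // 2:]]
    means = np.array([q.mean() for q in quads])
    return float(np.mean(np.abs(means - block.mean())))

# 4x4 block whose quadrants have mean grey levels 0, 10, 20, 30.
block = np.array([[0, 0, 10, 10],
                  [0, 0, 10, 10],
                  [20, 20, 30, 30],
                  [20, 20, 30, 30]], dtype=float)
feature = quadrant_greylevel_difference(block)
```

    Grouping range and domain blocks by such a feature prunes the pairwise search that dominates fractal encoding time.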

  7. New Algorithms and Sparse Regularization for Synthetic Aperture Radar Imaging

    Science.gov (United States)

    2015-10-26

    AFRL-AFOSR-VA-TR-2015-0343. Final performance report (14-06-2014 to 14-06-2015): New Algorithms and Sparse Regularization for Synthetic Aperture Radar Imaging, Laurent Demanet, Massachusetts Institute of Technology. ... method must fail -- at the target detection task. The analysis identifies the algorithms that perform well, and those that don't, even in the case of

  8. Optimizing Patient-centered Communication and Multidisciplinary Care Coordination in Emergency Diagnostic Imaging: A Research Agenda.

    Science.gov (United States)

    Sabbatini, Amber K; Merck, Lisa H; Froemming, Adam T; Vaughan, William; Brown, Michael D; Hess, Erik P; Applegate, Kimberly E; Comfere, Nneka I

    2015-12-01

    Patient-centered emergency diagnostic imaging relies on efficient communication and multispecialty care coordination to ensure optimal imaging utilization. The construct of the emergency diagnostic imaging care coordination cycle with three main phases (pretest, test, and posttest) provides a useful framework to evaluate care coordination in patient-centered emergency diagnostic imaging. This article summarizes findings reached during the patient-centered outcomes session of the 2015 Academic Emergency Medicine consensus conference "Diagnostic Imaging in the Emergency Department: A Research Agenda to Optimize Utilization." The primary objective was to develop a research agenda focused on 1) defining component parts of the emergency diagnostic imaging care coordination process, 2) identifying gaps in communication that affect emergency diagnostic imaging, and 3) defining optimal methods of communication and multidisciplinary care coordination that ensure patient-centered emergency diagnostic imaging. Prioritized research questions provided the framework to define a research agenda for multidisciplinary care coordination in emergency diagnostic imaging.

  9. Image processing algorithm acceleration using reconfigurable macro processor model

    Institute of Scientific and Technical Information of China (English)

    孙广富; 陈华明; 卢焕章

    2004-01-01

    The concept and advantages of reconfigurable technology are introduced. A processor architecture, the reconfigurable macro processor (RMP) model based on an FPGA array and a DSP, is put forward and has been implemented. Two image algorithms are developed: template-based automatic target recognition and zone labeling. One estimates the motion direction in the infrared image background; the other is a line picking-up algorithm based on image zone labeling and the phase grouping technique. Each is a kind of "hardware" function that can be called by the DSP in a high-level algorithm, i.e., a hardware algorithm of the DSP. The results of experiments show that the reconfigurable computing technology based on the RMP is an ideal means of accelerating high-speed image processing tasks. High real-time performance is obtained in our two applications on the RMP.

  10. Image Combination Analysis in SPECAN Algorithm of Spaceborne SAR

    Institute of Scientific and Technical Information of China (English)

    臧铁飞; 李方慧; 龙腾

    2003-01-01

    An analysis of image combination in the SPECAN algorithm is presented in the time-frequency domain in detail, and a new image combination method is proposed. For four-look processing, one sub-aperture of data in every three sub-apertures is processed in this combination method. Continual sub-aperture processing in the SPECAN algorithm is realized, and the processing efficiency can be dramatically increased. A new parameter is also put forward to measure the efficiency of SAR image processing. Finally, raw RADARSAT data are used to test the method, and the result proves that this method is feasible for use in the SPECAN algorithm of spaceborne SAR and can improve processing efficiency. The SPECAN algorithm with this method can be used in quick-look imaging.

  11. Target Image Matching Algorithm Based on Binocular CCD Ranging

    Directory of Open Access Journals (Sweden)

    Dongming Li

    2014-01-01

    Full Text Available This paper proposes a subpixel-level target image matching algorithm for binocular CCD ranging, based on the principle of binocular CCD ranging. Firstly, we introduce the ranging principle of the binocular ranging system and deduce a binocular parallax formula. Secondly, we derive the algorithm, which combines an improved cross-correlation matching algorithm with a cubic surface fitting algorithm for matching target images, achieving subpixel-level matching for binocular CCD ranging images. Lastly, through experiment we analyzed and verified actual CCD ranging images, then analyzed the errors of the experimental results and corrected the formula for calculating system errors. Experimental results showed that the actual measurement accuracy for a target within 3 km was higher than 0.52%, which meets the accuracy requirements of high-precision binocular ranging.
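
    The binocular parallax formula the authors deduce is not spelled out in the abstract, but the textbook relation for a rectified stereo pair is Z = f·B/d. A minimal sketch (symbols and units assumed, not taken from the paper):

```python
def binocular_depth(focal_px, baseline_m, disparity_px):
    """Classic binocular parallax relation: depth Z = f * B / d,
    with focal length f in pixels, baseline B in metres,
    and disparity d in pixels between the two CCD images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 800 px focal length, 0.5 m baseline, 4 px disparity.
z = binocular_depth(800, 0.5, 4)
```

    Because Z varies as 1/d, a sub-pixel disparity error dominates the range error at long distances, which is why the paper pursues sub-pixel matching.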

  12. Fingerprint Image Segmentation Algorithm Based on Contourlet Transform Technology

    Directory of Open Access Journals (Sweden)

    Guanghua Zhang

    2016-09-01

    Full Text Available This paper briefly introduces two classic algorithms for fingerprint image processing: the wavelet-domain soft-threshold denoising algorithm and the fingerprint image enhancement algorithm based on the Gabor function. The Contourlet transform has good texture sensitivity and can be used to reinforce the segmentation of the fingerprint image. The method proposed in this paper attains the final fingerprint segmentation image by applying a modified denoising to the high-frequency coefficients after Contourlet decomposition, highlighting the fingerprint ridge lines through modulus maxima detection, and finally connecting broken fingerprint lines using a directional value filter. It can attain richer direction information than methods based on the wavelet transform and Gabor function and can make the positioning of detailed features more accurate, although its ridges should be more coherent. Experiments have shown that this algorithm is clearly superior in fingerprint feature detection.

  13. Medical Images Watermarking Algorithm Based on Improved DCT

    Directory of Open Access Journals (Sweden)

    Yv-fan SHANG

    2013-12-01

    Full Text Available Targeting the persistent security problems of digital information management systems in modern medical systems, this paper presents a robust watermarking algorithm for medical images based on the Arnold transformation and the DCT. The algorithm first deploys scrambling technology to encrypt the watermark information and then combines it with the visual feature vector of the image to generate a binary logic series through a hash function. The sequence is taken as a key and stored with a third party to establish ownership of the original image. Having no need for artificial selection of a region of interest, no capacity constraint, and no participation of the original medical image, this kind of watermark extraction solves security and speed problems in watermark embedding and extraction. The simulation results also show that the algorithm is simple in operation and excellent in robustness and invisibility. In a word, it is more practical compared with other algorithms.
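
    The Arnold transformation used for scrambling is a standard area-preserving map on a square image; a minimal sketch of scrambling and its inverse (iteration count and image size are illustrative) is:

```python
import numpy as np

def arnold(img, iterations=1):
    """Arnold cat-map scrambling of a square N x N image:
    (x', y') = (x + y, x + 2y) mod N."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

def arnold_inverse(img, iterations=1):
    """Inverse map (x, y) = (2x' - y', -x' + y') mod N undoes one Arnold step."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(2 * x - y) % n, (y - x) % n] = out[x, y]
        out = nxt
    return out

rng = np.random.default_rng(3)
watermark = rng.integers(0, 2, (8, 8))   # binary watermark pattern
scrambled = arnold(watermark, iterations=5)
restored = arnold_inverse(scrambled, iterations=5)
```

    In the watermarking context, the iteration count acts as part of the key: without it, the scrambled watermark cannot be unscrambled.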

  14. Healthcare provider and patient perspectives on diagnostic imaging investigations

    Science.gov (United States)

    Bergh, Anne-Marie; Hoffmann, Willem A.

    2015-01-01

    Background: Much has been written about the patient-centred approach in doctor–patient consultations. Little is known about interactions and communication processes regarding healthcare providers’ and patients’ perspectives on expectations and experiences of diagnostic imaging investigations within the medical encounter. Patients journey through the health system from the point of referral to the imaging investigation itself and then to the post-imaging consultation. Aim and setting: To explore healthcare provider and patient perspectives on interaction and communication processes during diagnostic imaging investigations as part of their clinical journey through a healthcare complex. Methods: A qualitative study was conducted, with two phases of data collection. Twenty-four patients were conveniently selected at a public district hospital complex and were followed throughout their journey in the hospital system, from admission to discharge. The second phase entailed focus group interviews conducted with providers in the district hospital and adjacent academic hospital (medical officers and family physicians, nurses, radiographers, radiology consultants and registrars). Results: Two main themes guided our analysis: (1) provider perspectives; and (2) patient dispositions and reactions. Golden threads that cut across these themes are interactions and communication processes in the context of expectations, experiences of the imaging investigations and the outcomes thereof. Conclusion: Insights from this study provide a better understanding of the complexity of the processes and interactions between providers and patients during the imaging investigations conducted as part of their clinical pathway. The interactions and communication processes are provider–patient centred when a referral for a diagnostic imaging investigation is included. PMID:26245604

  15. An Efficient Algorithm for Image Enhancement

    Directory of Open Access Journals (Sweden)

    Manglesh Khandelwal

    2011-02-01

    Full Text Available In the digital image processing field, enhancement and noise removal are critical issues. We propose a new algorithm to enhance color images corrupted by Gaussian noise using fuzzy logic, which describes uncertain features of images, together with a modification of the median filter. The performance of the proposed technique has been evaluated and compared to the existing mean and median filters.
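
    The fuzzy-logic weighting of the proposal is not specified in the abstract; the plain median filter it modifies can be sketched as follows (the reflective padding and 3x3 window are assumptions):

```python
import numpy as np

def median_filter(img, k=3):
    """Plain k x k median filter; edge pixels handled by reflective padding.
    The fuzzy weighting described in the record is omitted in this sketch."""
    pad = k // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

img = np.full((5, 5), 100.0)
img[2, 2] = 255.0            # isolated impulse ("salt") pixel
clean = median_filter(img)
```

    The median removes the impulse entirely because every 3x3 window contains at most one outlier; a fuzzy modification would instead blend the median with the original value according to how "noisy" the pixel appears.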

  16. A new modified fast fractal image compression algorithm

    DEFF Research Database (Denmark)

    Salarian, Mehdi; Nadernejad, Ehsan; MiarNaimi, Hossein

    2013-01-01

    In this paper, a new fractal image compression algorithm is proposed, in which the time of the encoding process is considerably reduced. The algorithm exploits a domain pool reduction approach, along with the use of innovative predefined values for contrast scaling factor, S, instead of searching...

  17. Adaptive image contrast enhancement algorithm for point-based rendering

    Science.gov (United States)

    Xu, Shaoping; Liu, Xiaoping P.

    2015-03-01

    Surgical simulation is a major application in computer graphics and virtual reality, and most of the existing work indicates that interactive real-time cutting simulation of soft tissue is a fundamental but challenging research problem in virtual surgery simulation systems. More specifically, it is difficult to achieve a fast enough graphic update rate (at least 30 Hz) on commodity PC hardware by utilizing traditional triangle-based rendering algorithms. In recent years, point-based rendering (PBR) has been shown to offer the potential to outperform the traditional triangle-based rendering in speed when it is applied to highly complex soft tissue cutting models. Nevertheless, the PBR algorithms are still limited in visual quality due to inherent contrast distortion. We propose an adaptive image contrast enhancement algorithm as a postprocessing module for PBR, providing high visual rendering quality as well as acceptable rendering efficiency. Our approach is based on a perceptible image quality technique with automatic parameter selection, resulting in a visual quality comparable to existing conventional PBR algorithms. Experimental results show that our adaptive image contrast enhancement algorithm produces encouraging results both visually and numerically compared to representative algorithms, and experiments conducted on the latest hardware demonstrate that the proposed PBR framework with the postprocessing module is superior to the conventional PBR algorithm and that the proposed contrast enhancement algorithm can be utilized in (or compatible with) various variants of the conventional PBR algorithm.
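
    The paper's perceptual contrast-enhancement postprocessing is more elaborate than can be reconstructed from the abstract; a minimal stand-in illustrating the idea of contrast enhancement as a postprocessing step is a percentile-based linear stretch (the percentile cutoffs are illustrative):

```python
import numpy as np

def percentile_stretch(img, low_pct=2, high_pct=98):
    """Linear contrast stretch: map the [low_pct, high_pct] percentile range
    of the input onto the full [0, 1] display range, clipping the tails."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    if hi <= lo:
        return np.zeros_like(img, dtype=float)
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

rng = np.random.default_rng(4)
# Low-contrast rendering: values squeezed into a narrow band around 0.5,
# mimicking the washed-out output of a point-based renderer.
flat = 0.45 + 0.1 * rng.random((64, 64))
stretched = percentile_stretch(flat)
```

    An adaptive variant, as the paper describes, would choose the stretch parameters automatically per image (or per region) from a perceptual quality measure rather than from fixed percentiles.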

  18. Combining a thermal-imaging diagnostic with an existing imaging VISAR diagnostic at the National Ignition Facility (NIF)

    Science.gov (United States)

    Malone, Robert M.; Celeste, John R.; Celliers, Peter M.; Frogget, Brent C.; Guyton, Robert L.; Kaufman, Morris I.; Lee, Tony L.; MacGowan, Brian J.; Ng, Edmund W.; Reinbachs, Imants P.; Robinson, Ronald B.; Seppala, Lynn G.; Tunnell, Thomas W.; Watts, Phillip W.

    2005-08-01

    Optical diagnostics are currently being designed to analyze high-energy density physics experiments at the National Ignition Facility (NIF). Two independent line-imaging Velocity Interferometer System for Any Reflector (VISAR) interferometers have been fielded to measure shock velocities, breakout times, and emission of targets having sizes of 1-5 mm. An 8-inch-diameter, fused silica triplet lens collects light at f/3 inside the 30-foot-diameter NIF vacuum chamber. VISAR recordings use a 659.5-nm probe laser. By adding a specially coated beam splitter to the interferometer table, light at wavelengths from 540 to 645 nm is split into a thermal-imaging diagnostic. Because fused silica lenses are used in the first triplet relay, the intermediate image planes for different wavelengths separate by considerable distances. A corrector lens on the interferometer table reunites these separated wavelength planes to provide a good image. Thermal imaging collects light at f/5 from a 2-mm object placed at Target Chamber Center (TCC). Streak cameras perform VISAR and thermal-imaging recording. All optical lenses are on kinematic mounts so that pointing accuracy of the optical axis may be checked. Counter-propagating laser beams (orange and red) are used to align both diagnostics. The red alignment laser is selected to be at the 50 percent reflection point of the beam splitter. This alignment laser is introduced at the recording streak cameras for both diagnostics and passes through this special beam splitter on its way into the NIF vacuum chamber.

  19. Combining a thermal-imaging diagnostic with an existing imaging VISAR diagnostic at the National Ignition Facility (NIF)

    Energy Technology Data Exchange (ETDEWEB)

    Robert M. Malone; John R. Celeste; Peter M. Celliers; Brent C. Frogget; Robert L. Guyton; Morris I. Kaufman; Tony L. Lee; Brian J. MacGowan; Edmund W. Ng; Imants P. Reinbachs; Ronald B. Robinson; Lynn G. Seppala; Tom W. Tunnell; Phillip W. Watts

    2005-01-01

    Optical diagnostics are currently being designed to analyze high-energy density physics experiments at the National Ignition Facility (NIF). Two independent line-imaging Velocity Interferometer System for Any Reflector (VISAR) interferometers have been fielded to measure shock velocities, breakout times, and emission of targets having sizes of 1–5 mm. An 8-inch-diameter, fused silica triplet lens collects light at f/3 inside the 30-foot-diameter NIF vacuum chamber. VISAR recordings use a 659.5-nm probe laser. By adding a specially coated beam splitter to the interferometer table, light at wavelengths from 540 to 645 nm is split into a thermal-imaging diagnostic. Because fused silica lenses are used in the first triplet relay, the intermediate image planes for different wavelengths separate by considerable distances. A corrector lens on the interferometer table reunites these separated wavelength planes to provide a good image. Thermal imaging collects light at f/5 from a 2-mm object placed at Target Chamber Center (TCC). Streak cameras perform VISAR and thermal-imaging recording. All optical lenses are on kinematic mounts so that pointing accuracy of the optical axis may be checked. Counter-propagating laser beams (orange and red) are used to align both diagnostics. The red alignment laser is selected to be at the 50 percent reflection point of the beam splitter. This alignment laser is introduced at the recording streak cameras for both diagnostics and passes through this special beam splitter on its way into the NIF vacuum chamber.

  20. Combining a thermal-imaging diagnostic with an existing imaging VISAR diagnostic at the National Ignition Facility (NIF)

    Energy Technology Data Exchange (ETDEWEB)

    Malone, R; Celeste, J; Celliers, P; Frogget, B; Guyton, R L; Kaufman, M; Lee, T; MacGowan, B; Ng, E W; Reinbachs, I P; Robinson, R B; Seppala, L; Tunnell, T W; Watts, P

    2005-07-07

    Optical diagnostics are currently being designed to analyze high-energy density physics experiments at the National Ignition Facility (NIF). Two independent line-imaging Velocity Interferometer System for Any Reflector (VISAR) interferometers have been fielded to measure shock velocities, breakout times, and emission of targets having sizes of 1-5 mm. An 8-inch-diameter, fused silica triplet lens collects light at f/3 inside the 30-foot-diameter NIF vacuum chamber. VISAR recordings use a 659.5-nm probe laser. By adding a specially coated beam splitter to the interferometer table, light at wavelengths from 540 to 645 nm is split into a thermal-imaging diagnostic. Because fused silica lenses are used in the first triplet relay, the intermediate image planes for different wavelengths separate by considerable distances. A corrector lens on the interferometer table reunites these separated wavelength planes to provide a good image. Thermal imaging collects light at f/5 from a 2-mm object placed at Target Chamber Center (TCC). Streak cameras perform VISAR and thermal-imaging recording. All optical lenses are on kinematic mounts so that pointing accuracy of the optical axis may be checked. Counter-propagating laser beams (orange and red) are used to align both diagnostics. The red alignment laser is selected to be at the 50 percent reflection point of the beam splitter. This alignment laser is introduced at the recording streak cameras for both diagnostics and passes through this special beam splitter on its way into the NIF vacuum chamber.

  1. Confirmation of Thermal Images and Vibration Signals for Intelligent Machine Fault Diagnostics

    Directory of Open Access Journals (Sweden)

    Achmad Widodo

    2012-01-01

    Full Text Available This paper deals with a maintenance technique for industrial machinery using the artificial neural network known as the self-organizing map (SOM). The aim of this work is to develop an intelligent maintenance system for machinery based on an alternative approach, namely, thermal images instead of vibration signals. SOM is selected due to its simplicity and is categorized as an unsupervised algorithm. Following the SOM training, machine fault diagnostics is performed by using the pattern recognition technique of machine conditions. The data used in this work are thermal images and vibration signals, which were acquired from a machine fault simulator (MFS). It is a reliable tool able to simulate several faulty machine conditions such as unbalance, misalignment, looseness, and rolling element bearing faults (outer race, inner race, ball, and cage defects). Data acquisition was conducted simultaneously by an infrared thermography camera and vibration sensors installed in the MFS. The experimental data are presented as thermal images and vibration signals in the time domain. Feature extraction was carried out to obtain salient features sensitive to machine conditions from the thermal images and vibration signals. These features are then used to train the SOM for the intelligent machine diagnostics process. The results show that SOM can perform intelligent fault diagnostics with plausible accuracy.
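
A minimal SOM of the kind the abstract describes can be written in a few lines of NumPy. The grid size, learning-rate/neighborhood schedules, and the toy two-cluster "features" in the usage test are assumptions, not the paper's configuration.

```python
import numpy as np

def train_som(X, grid=(4, 4), iters=500, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal SOM: feature vectors (e.g. extracted from thermal
    images or vibration signals) are mapped onto a small neuron grid;
    after training, distinct machine conditions tend to occupy
    distinct map regions. A toy stand-in, not the paper's setup."""
    rng = np.random.default_rng(seed)
    gy, gx = grid
    W = rng.random((gy * gx, X.shape[1]))
    coords = np.array([(i, j) for i in range(gy) for j in range(gx)], float)
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x) ** 2).sum(1))        # best matching unit
        lr = lr0 * (1 - t / iters)                    # decaying learning rate
        sigma = sigma0 * (1 - t / iters) + 1e-3       # shrinking neighborhood
        d2 = ((coords - coords[bmu]) ** 2).sum(1)
        h = np.exp(-d2 / (2 * sigma ** 2))            # neighborhood weights
        W += lr * h[:, None] * (x - W)
    return W

def bmu_of(W, x):
    """Index of the map unit closest to feature vector x."""
    return int(np.argmin(((W - x) ** 2).sum(1)))
```

After training on labeled features, fault diagnosis amounts to finding the best matching unit of a new feature vector and reading off the condition label of the map region it falls in.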

  2. A color correction algorithm for noisy multi-view images

    Institute of Scientific and Technical Information of China (English)

    Feng Shao; Gangyi Jiang; Mei Yu; Ken Chen

    2007-01-01

    A novel color correction algorithm for noisy multi-view images is presented. The key idea is to use the improved Karhunen-Loeve (K-L) transform to obtain a correction matrix that eliminates the noise effect to the fullest extent. Noise variance estimation is first performed in the algorithm. In the end, a wavelet transform is applied to denoise the corrected image. Experimental results show that, compared with the traditional correction method, a well-performed correction result is achieved using the proposed method, and the visual effect of the denoised corrected image is almost consistent with the ideal corrected image.
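
The K-L (PCA) correction idea can be sketched as a covariance-matching transform between two views: whiten the source colors in their own K-L basis, then re-color them in the reference's basis. This is a plain K-L color transfer, not the paper's improved, noise-aware variant.

```python
import numpy as np

def kl_color_correct(src, ref):
    """Map src pixel colors so their mean and covariance match ref,
    via the Karhunen-Loeve (eigen) bases of both color distributions.
    Illustrative stand-in for a K-L-derived correction matrix."""
    s = src.reshape(-1, 3).astype(float)
    r = ref.reshape(-1, 3).astype(float)
    ms, mr = s.mean(0), r.mean(0)
    ws, Vs = np.linalg.eigh(np.cov(s.T))   # source K-L basis
    wr, Vr = np.linalg.eigh(np.cov(r.T))   # reference K-L basis
    ws = np.maximum(ws, 1e-12)             # guard tiny eigenvalues
    # whiten in the source basis, re-color in the reference basis
    T = Vr @ np.diag(np.sqrt(wr)) @ np.diag(1 / np.sqrt(ws)) @ Vs.T
    out = (s - ms) @ T.T + mr
    return out.reshape(src.shape)
```

By construction the corrected image's color mean and covariance equal the reference's; the paper's contribution is making the matrix robust when both views are noisy.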

  3. Relevance Feedback Algorithm Based on Collaborative Filtering in Image Retrieval

    Directory of Open Access Journals (Sweden)

    Yan Sun

    2010-12-01

    Full Text Available Content-based image retrieval is a very dynamic field of study, and within it, how to improve retrieval speed and retrieval accuracy is a hot issue. Retrieval performance can be improved by applying relevance feedback to image retrieval and introducing human participation into the retrieval process. However, in many existing image retrieval methods, relevance-feedback information is not fully saved and used, and their accuracy and flexibility are relatively poor. Based on this, collaborative filtering technology was combined with relevance feedback in this study, and an improved relevance feedback algorithm based on collaborative filtering was proposed. In the method, collaborative filtering was used not only to predict the semantic relevance between images in the database and the retrieval samples, but also to analyze feedback log files in image retrieval, which allows the historical relevance-feedback data to be fully used by the image retrieval system and further improves the efficiency of feedback. The improved algorithm has been tested on a content-based image retrieval database, and its performance has been analyzed and compared with existing algorithms. The experimental results showed that, compared with traditional feedback algorithms, this method can obviously improve the efficiency of relevance feedback and effectively promote the recall and precision of image retrieval.
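
A memory-based sketch of the collaborative-filtering prediction step might look like the following: treat the feedback log as a sessions-by-images matrix and let similar past sessions vote on unrated images. The log layout and cosine weighting are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def predict_relevance(log, query_idx):
    """Predict image relevance for one query session from a feedback
    log (rows: past sessions, cols: images, entries +1/-1/0 for
    relevant/irrelevant/unrated), using cosine similarity between the
    query row and past rows -- a minimal memory-based CF sketch."""
    q = log[query_idx]
    norms = np.linalg.norm(log, axis=1) * (np.linalg.norm(q) + 1e-12)
    sim = (log @ q) / np.maximum(norms, 1e-12)
    sim[query_idx] = 0.0              # exclude the session itself
    scores = sim @ log                # similarity-weighted vote
    scores[q != 0] = q[q != 0]        # keep explicit feedback as-is
    return scores
```

Images the current session never marked inherit scores from sessions with similar feedback patterns, which is how the historical log can improve a new round of retrieval.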

  4. Naturalness preserved enhancement algorithm for non-uniform illumination images.

    Science.gov (United States)

    Wang, Shuhang; Zheng, Jin; Hu, Hai-Miao; Li, Bo

    2013-09-01

    Image enhancement plays an important role in image processing and analysis. Among various enhancement algorithms, Retinex-based algorithms can efficiently enhance details and have been widely adopted. Since Retinex-based algorithms regard illumination removal as a default preference and fail to limit the range of reflectance, the naturalness of non-uniform illumination images cannot be effectively preserved. However, naturalness is essential for image enhancement to achieve pleasing perceptual quality. In order to preserve naturalness while enhancing details, we propose an enhancement algorithm for non-uniform illumination images. In general, this paper makes the following three major contributions. First, a lightness-order-error measure is proposed to assess naturalness preservation objectively. Second, a bright-pass filter is proposed to decompose an image into reflectance and illumination, which, respectively, determine the details and the naturalness of the image. Third, we propose a bi-log transformation, which is utilized to map the illumination to make a balance between details and naturalness. Experimental results demonstrate that the proposed algorithm can not only enhance the details but also preserve the naturalness for non-uniform illumination images.
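
The decompose-then-remap pipeline can be illustrated with a crude stand-in: estimate illumination as a smoothed local maximum (echoing the bright-pass idea, so reflectance stays in [0, 1]) and compress it with an assumed bi-log-style curve. Neither function matches the paper's exact filters; both are illustrative.

```python
import numpy as np

def decompose(img, k=5):
    """Split a grayscale image in [0,1] into illumination and
    reflectance; illumination is a local maximum over a k x k window,
    which bounds reflectance to [0, 1] (crude bright-pass stand-in)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    h, w = img.shape
    illum = np.empty_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            illum[i, j] = p[i:i + k, j:j + k].max()
    illum = np.maximum(illum, 1e-6)
    return illum, img / illum

def bilog(illum, beta=10.0):
    """Bi-log-style mapping compressing the illumination range to
    balance detail and naturalness (assumed functional form)."""
    return np.log1p(beta * illum) / np.log1p(beta)
```

The enhanced image would then be the product of the reflectance and the remapped illumination, so shadows are lifted without flattening the global lighting order.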

  5. A New Hybrid Watermarking Algorithm for Images in Frequency Domain

    Directory of Open Access Journals (Sweden)

    AhmadReza Naghsh-Nilchi

    2008-03-01

    Full Text Available In recent years, digital watermarking has become a popular technique for protecting the copyright of digital images by hiding secret information in them. The goal of this paper is to develop a hybrid watermarking algorithm. The algorithm uses DCT and DWT coefficients to embed the watermark, and the extraction procedure is blind. The proposed approach is robust to a variety of signal distortions, such as JPEG compression, image cropping, and scaling.
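
A common DCT-domain embedding that such hybrid schemes build on encodes one bit per block in the relative order of two mid-band coefficients, which keeps extraction blind. This sketch covers only the DCT half (the paper's hybrid also uses DWT coefficients); the chosen coefficient pair and strength `alpha` are illustrative.

```python
import numpy as np

def dct_mat(n):
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)[:, None]
    M = np.cos(np.pi * (2 * np.arange(n)[None, :] + 1) * k / (2 * n))
    M[0] *= np.sqrt(1 / n)
    M[1:] *= np.sqrt(2 / n)
    return M

def embed_bit(block, bit, alpha=8.0):
    """Embed one watermark bit in a square block by enforcing the
    sign order of two mid-band DCT coefficients (bit == 1 means
    C[2,3] > C[3,2]) and widening the margin by alpha."""
    D = dct_mat(block.shape[0])
    C = D @ block @ D.T
    a, b = C[2, 3], C[3, 2]
    if bit != (a > b):                       # wrong order: swap
        C[2, 3], C[3, 2] = b, a
    C[2, 3] += alpha if bit else -alpha      # robustness margin
    return D.T @ C @ D                       # inverse DCT

def extract_bit(block):
    """Blind extraction: only the coefficient order is needed."""
    D = dct_mat(block.shape[0])
    C = D @ block @ D.T
    return C[2, 3] > C[3, 2]
```

A full watermarking pass would tile the image into 8x8 blocks and spread the watermark bits across them, with the DWT side adding resilience to cropping and scaling.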

  6. Color Image Segmentation Method Based on Improved Spectral Clustering Algorithm

    OpenAIRE

    Dong Qin

    2014-01-01

    Addressing the high sparsity of image data and the problem of determining the number of clusters, we propose a color image segmentation algorithm that combines semi-supervised machine learning technology with spectral graph theory. Through the study of related theories and methods of spectral clustering algorithms, we introduce the concept of information entropy to design a method that automatically optimizes the scale parameter value. So it avoids the unstab...

  7. A NUFFT Based Step-frequency Chirp Signal High Resolution Imaging Algorithm and Target Recognition Algorithm

    Directory of Open Access Journals (Sweden)

    Xiang Yin

    2015-12-01

    Full Text Available Radar Automatic Target Recognition (RATR) is a key technique to be broken through in the future development of intelligent weapon systems. Compared with 2-D SAR image target recognition, High Resolution Range Profile (HRRP) target recognition has the advantages of low data dimension, low requirements on the radar system's computation and storage ability, and a relatively simple imaging algorithm. HRRP imaging is the first and key process in target recognition; its speed and imaging quality directly influence the real-time capability and accuracy of target recognition. In this paper a new HRRP imaging algorithm based on the NUFFT is proposed, and the derivation of its mathematical expressions is given for both the echo simulation process and the imaging process. In the meantime, by analyzing the computational complexity of each step, we compare the computational complexity of four different imaging algorithms and simulate the imaging and target recognition of two targets. Theoretical analysis and simulation both show that the proposed algorithm's computational complexity is reduced to varying degrees compared with the others, so it can be effectively used in target recognition.
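
The baseline HRRP route that a NUFFT variant generalizes is a plain IFFT over the stepped-frequency samples: one complex echo sample per frequency step, transformed to range. The scatterer ranges and radar parameters below are illustrative, chosen so the scatterers land on exact range bins.

```python
import numpy as np

def hrrp(echo, n_fft=None):
    """Form a high-resolution range profile from stepped-frequency
    chirp echoes by IFFT over the frequency steps (the uniform-FFT
    baseline that NUFFT-based imaging generalizes)."""
    return np.abs(np.fft.ifft(echo, n=n_fft))

# simulate two point scatterers over N frequency steps
c, f0, df, N = 3e8, 9e9, 2e6, 64
R = np.array([18.75, 37.5])                     # scatterer ranges (m)
f = f0 + df * np.arange(N)
# round-trip phase: exp(-j 4*pi*f*R/c), summed over scatterers
echo = np.exp(-1j * 4 * np.pi * f[:, None] * R[None, :] / c).sum(1)
profile = hrrp(echo)
# range bin k = 2*N*df*R/c; resolution c/(2*N*df),
# unambiguous window c/(2*df) = 75 m
```

Here the two scatterers fall exactly on bins 16 and 32; off-grid ranges leak across bins, which is one motivation for the non-uniform (NUFFT) formulation.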

  8. Digital Image Encryption Algorithm Design Based on Genetic Hyperchaos

    Directory of Open Access Journals (Sweden)

    Jian Wang

    2016-01-01

    Full Text Available In view of the fact that present chaotic image encryption algorithms based on scrambling-diffusion are vulnerable to chosen-plaintext (chosen-ciphertext) attacks in the pixel position scrambling process, we put forward an image encryption algorithm based on a genetic hyperchaotic system. By introducing plaintext feedback into the scrambling process, the algorithm makes the scrambling effect depend on both the initial chaos sequence and the plaintext itself, achieving an organic fusion of the image's features with the encryption algorithm. By introducing a plaintext feedback mechanism into the diffusion process, it improves the plaintext sensitivity and the algorithm's resistance to chosen-plaintext and chosen-ciphertext attacks. At the same time, it makes full use of the characteristics of the image information. Finally, experimental simulation and theoretical analysis show that our proposed algorithm can not only effectively resist chosen-plaintext (chosen-ciphertext) attacks, statistical attacks, and information entropy attacks but also effectively improve the efficiency of image encryption, making it a relatively secure and effective approach to image communication.
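
Chaotic schemes of this family are built on keystreams generated by maps such as the logistic map. The sketch below shows only that generic building block (a logistic keystream XORed with the pixel bytes), not the paper's genetic hyperchaotic system or its plaintext-feedback scrambling; the seed and control parameter are illustrative.

```python
import numpy as np

def logistic_keystream(n, x0=0.3456, mu=3.99):
    """Byte keystream from the logistic map x <- mu*x*(1-x), in its
    chaotic regime; the first 200 iterates are discarded as a
    transient (common practice, parameters illustrative)."""
    xs = np.empty(n)
    x = x0
    for i in range(200 + n):
        x = mu * x * (1 - x)
        if i >= 200:
            xs[i - 200] = x
    return np.floor(xs * 256).astype(np.uint8)

def encrypt(img_bytes, x0=0.3456):
    """XOR the flattened uint8 pixels with the chaotic keystream;
    since XOR is an involution, the same call also decrypts."""
    ks = logistic_keystream(img_bytes.size, x0)
    return img_bytes ^ ks
```

The attacks discussed in the abstract target exactly this kind of plaintext-independent keystream, which is why the paper couples the scrambling and diffusion stages to the plaintext itself.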

  9. Abdomen disease diagnosis in CT images using flexiscale curvelet transform and improved genetic algorithm.

    Science.gov (United States)

    Sethi, Gaurav; Saini, B S

    2015-12-01

    This paper presents an abdomen disease diagnostic system based on the flexi-scale curvelet transform, which uses different optimal scales for extracting features from computed tomography (CT) images. To optimize the scale of the flexi-scale curvelet transform, we propose an improved genetic algorithm. The conventional genetic algorithm assumes that fit parents will likely produce the healthiest offspring, which leads to the least-fit parents accumulating at the bottom of the population, reducing the fitness of subsequent populations and delaying the optimal solution search. In our improved genetic algorithm, combining the chromosomes of a low-fitness and a high-fitness individual increases the probability of producing high-fitness offspring. Thereby, all of the least-fit parent chromosomes are combined with high-fitness parents to produce offspring for the next population. In this way, the leftover weak chromosomes cannot damage the fitness of subsequent populations. To further facilitate the search for the optimal solution, our improved genetic algorithm adopts modified elitism. The proposed method was applied to 120 CT abdominal images: 30 images each of normal subjects, cysts, tumors, and stones. The features extracted by the flexi-scale curvelet transform were more discriminative than those of conventional methods, demonstrating the potential of our method as a diagnostic tool for abdomen diseases.
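
The pairing idea can be sketched directly: sort the population by fitness and mate the worst individual with the best, the second worst with the second best, and so on. Mutation and the modified elitism are omitted here, and the one-point crossover is an assumed choice, not necessarily the paper's operator.

```python
import numpy as np

def one_point(a, b, rng):
    """One-point crossover producing two children."""
    cut = rng.integers(1, a.size)
    return [np.concatenate([a[:cut], b[cut:]]),
            np.concatenate([b[:cut], a[cut:]])]

def next_generation(pop, fitness, rng):
    """Pair the least-fit parents with the fittest ones (worst with
    best, 2nd worst with 2nd best, ...) so every weak chromosome
    recombines with a strong one -- the pairing idea described in the
    abstract, without mutation or the modified elitism."""
    order = np.argsort(fitness)                  # ascending fitness
    half = len(pop) // 2
    children = []
    for i, j in zip(order[:half], order[::-1][:half]):
        children.extend(one_point(pop[i], pop[j], rng))
    return np.array(children[:len(pop)])
```

Compared with fitness-proportional mating, this guarantees that no offspring has two weak parents, which is the mechanism the authors credit for keeping later generations fit.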

  10. Imaging VISAR diagnostic for the National Ignition Facility (NIF)

    Science.gov (United States)

    Malone, Robert M.; Bower, John R.; Bradley, David K.; Capelle, Gene A.; Celeste, John R.; Celliers, Peter M.; Collins, Gilbert W.; Eckart, Mark J.; Eggert, Jon H.; Frogget, Brent C.; Guyton, Robert L.; Hicks, Damien G.; Kaufman, Morris I.; MacGowan, Brian J.; Montelongo, Samuel; Ng, Edmund W.; Robinson, Ronald B.; Tunnell, Thomas W.; Watts, Phillip W.; Zapata, Paul G.

    2005-03-01

    The National Ignition Facility (NIF) requires diagnostics to analyze high-energy density physics experiments. A VISAR (Velocity Interferometry System for Any Reflector) diagnostic has been designed to measure shock velocities, shock breakout times, and shock emission of targets with sizes from 1 to 5 mm. An 8-inch-diameter fused silica triplet lens collects light at f/3 inside the 30-foot-diameter vacuum chamber. The optical relay sends the image out an equatorial port, through a 2-inch-thick vacuum window, and into two interferometers. A 60-kW VISAR probe laser operates at 659.5 nm with variable pulse width. Special coatings on the mirrors and cutoff filters are used to reject the NIF drive laser wavelengths and to pass a band of wavelengths for VISAR, passive shock breakout light, or thermal imaging light (bypassing the interferometers). The first triplet can be no closer than 500 mm from the target chamber center and is protected from debris by a blast window that is replaced after every event. The front end of the optical relay can be temporarily removed from the equatorial port, allowing other experimenters to use that port. A unique resolution pattern has been designed to validate the VISAR diagnostic before each use. All optical lenses are on kinematic mounts so that the pointing accuracy of the optical axis can be checked. Seven CCD cameras monitor the diagnostic alignment.

  11. Imaging VISAR diagnostic for the National Ignition Facility (NIF)

    Energy Technology Data Exchange (ETDEWEB)

    Malone, R M; Bower, J R; Bradley, D K; Capelle, G A; Celeste, J R; Celliers, P M; Collins, G W; Eckart, M J; Eggert, J H; Frogget, B C; Guyton, R L; Hicks, D G; Kaufman, M I; MacGowan, B J; Montelongo, S; Ng, E W; Robinson, R B; Tunnell, T W; Watts, P W; Zapata, P G

    2004-08-30

    The National Ignition Facility (NIF) requires diagnostics to analyze high-energy density physics experiments. A VISAR (Velocity Interferometry System for Any Reflector) diagnostic has been designed to measure shock velocities, shock breakout times, and shock emission of targets with sizes from 1 to 5 mm. An 8-inch-diameter fused silica triplet lens collects light at f/3 inside the 30-foot-diameter vacuum chamber. The optical relay sends the image out an equatorial port, through a 2-inch-thick vacuum window, and into two interferometers. A 60-kW VISAR probe laser operates at 659.5 nm with variable pulse width. Special coatings on the mirrors and cutoff filters are used to reject the NIF drive laser wavelengths and to pass a band of wavelengths for VISAR, passive shock breakout light, or thermal imaging light (bypassing the interferometers). The first triplet can be no closer than 500 mm from the target chamber center and is protected from debris by a blast window that is replaced after every event. The front end of the optical relay can be temporarily removed from the equatorial port, allowing other experimenters to use that port. A unique resolution pattern has been designed to validate the VISAR diagnostic before each use. All optical lenses are on kinematic mounts so that the pointing accuracy of the optical axis can be checked. Seven CCD cameras monitor the diagnostic alignment.

  12. Multiobjective image recognition algorithm in the fully automatic die bonder

    Institute of Scientific and Technical Information of China (English)

    JIANG Kai; CHEN Hai-xia; YUAN Sen-miao

    2006-01-01

    It is a very important task to automatically fix the number of dies in the image recognition system of a fully automatic die bonder. A multiobjective image recognition algorithm based on a clustering Genetic Algorithm (GA) is proposed in this paper. In the evolutionary process of the GA, a clustering method is provided that utilizes information from the template and the fitness landscape of the current population. The whole population is grouped into different niches by the clustering method. Experimental results demonstrated that the number of target images could be determined by the algorithm automatically, and multiple targets could be recognized at a time. As a result, the time consumed by one image recognition is shortened, the performance of the image recognition system is improved, and the automation of the system is fulfilled.

  13. New Autism Diagnostic Interview-Revised Algorithms for Toddlers and Young Preschoolers from 12 to 47 Months of Age

    Science.gov (United States)

    Kim, So Hyun; Lord, Catherine

    2012-01-01

    Autism Diagnostic Interview-Revised (Rutter et al. in "Autism diagnostic interview-revised." Western Psychological Services, Los Angeles, 2003) diagnostic algorithms specific to toddlers and young preschoolers were created using 829 assessments of children aged from 12 to 47 months with ASD, nonspectrum disorders, and typical development. The…

  15. Structured diagnostic imaging in patients with multiple trauma; Strukturierte radiologische Diagnostik beim Polytrauma

    Energy Technology Data Exchange (ETDEWEB)

    Linsenmaier, U.; Rieger, J.; Rock, C.; Pfeifer, K.J.; Reiser, M. [Institut fuer Klinische Radiologie, Klinikum der Universitaet Muenchen, Innenstadt (Germany); Kanz, K.G. [Chirurgische Klinik, Klinikum der Universitaet Muenchen, Innenstadt (Germany)

    2002-07-01

    Purpose: Development of a concept for structured diagnostic imaging in patients with multiple trauma. Material and methods: Evaluation of data from a prospective trial with over 2400 documented patients with multiple trauma. All diagnostic and therapeutic steps, primary and secondary death, and the 90-day lethality were documented. Structured diagnostic imaging of patients with multiple injuries requires the integration of an experienced radiologist into an interdisciplinary trauma team consisting of anesthesia, radiology, and trauma surgery. Radiology itself requires standardized concepts for equipment, personnel, and logistics to perform diagnostic imaging with constant quality in 24-hour coverage. Results: This paper describes criteria for the initiation of shock room or emergency room treatment, strategies for documentation, and interdisciplinary algorithms for early clinical care that coordinate diagnostic imaging and therapeutic procedures following standardized guidelines. Diagnostic imaging consists of basic diagnosis, the radiological ABC rule, radiological follow-up, and structured organ diagnosis using CT. Radiological trauma scoring allows improved quality control of the diagnosis and therapy of patients with multiple injuries. Conclusion: Structured diagnostic imaging of patients with multiple injuries leads to a standardization of diagnosis and therapy and ensures constant process quality. (orig.) [Translated from the German original: Purpose: Development of a structured concept for the radiological diagnosis of polytrauma patients. Methods: The data evaluation was based on a prospective interdisciplinary polytrauma study with over 2400 patients. All diagnostic and therapeutic steps were recorded together with their time and any complications that occurred; primary or secondary death and the 90-day lethality were documented. The structured radiological diagnosis of patients with multiple injuries requires the integration of an experienced radiologist in

  16. A novel image encryption algorithm based on DNA subsequence operation.

    Science.gov (United States)

    Zhang, Qiang; Xue, Xianglian; Wei, Xiaopeng

    2012-01-01

    We present a novel image encryption algorithm based on DNA subsequence operations. Unlike traditional DNA encryption methods, our algorithm does not use complex biological operations; it simply combines the idea of DNA subsequence operations (such as elongation, truncation, and deletion) with the logistic chaotic map to scramble the locations and values of the image's pixels. The experimental results and security analysis show that the proposed algorithm is easy to implement, achieves a good encryption effect, has a large key space and strong sensitivity to the secret key, and can resist exhaustive and statistical attacks.
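
The flavor of a DNA subsequence operation can be sketched with a standard 2-bit base encoding plus a circular-shift "subsequence" operation. In the real algorithm the logistic map would choose the positions and amounts, and further operations (elongation, truncation, deletion) would apply; this sketch omits the chaotic part entirely and is illustrative only.

```python
BASES = "ACGT"

def to_dna(data):
    """Encode bytes as DNA symbols, 2 bits per base
    (A=00, C=01, G=10, T=11 -- one standard encoding rule)."""
    out = []
    for b in data:
        for shift in (6, 4, 2, 0):
            out.append(BASES[(b >> shift) & 3])
    return "".join(out)

def from_dna(s):
    """Inverse of to_dna: 4 bases back to one byte."""
    vals = [BASES.index(c) for c in s]
    by = bytearray()
    for i in range(0, len(vals), 4):
        v = vals[i:i + 4]
        by.append(v[0] << 6 | v[1] << 4 | v[2] << 2 | v[3])
    return bytes(by)

def rotate_subseq(s, start, length, k):
    """'Subsequence operation': circularly shift a DNA substring by
    k bases (a stand-in for the elongation/truncation ops; in the
    paper the logistic map would pick start, length, and k)."""
    sub = s[start:start + length]
    k %= len(sub)
    return s[:start] + sub[k:] + sub[:k] + s[start + length:]
```

Because every operation is a bijection on the DNA string, decryption just applies the inverse operations in reverse order with the same chaotic key sequence.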

  17. A Novel Image Encryption Algorithm Based on DNA Subsequence Operation

    Science.gov (United States)

    Zhang, Qiang; Xue, Xianglian; Wei, Xiaopeng

    2012-01-01

    We present a novel image encryption algorithm based on DNA subsequence operations. Unlike traditional DNA encryption methods, our algorithm does not use complex biological operations; it simply combines the idea of DNA subsequence operations (such as elongation, truncation, and deletion) with the logistic chaotic map to scramble the locations and values of the image's pixels. The experimental results and security analysis show that the proposed algorithm is easy to implement, achieves a good encryption effect, has a large key space and strong sensitivity to the secret key, and can resist exhaustive and statistical attacks. PMID:23093912

  18. A Novel Image Encryption Algorithm Based on DNA Subsequence Operation

    Directory of Open Access Journals (Sweden)

    Qiang Zhang

    2012-01-01

    Full Text Available We present a novel image encryption algorithm based on DNA subsequence operations. Unlike traditional DNA encryption methods, our algorithm does not use complex biological operations; it simply combines the idea of DNA subsequence operations (such as elongation, truncation, and deletion) with the logistic chaotic map to scramble the locations and values of the image's pixels. The experimental results and security analysis show that the proposed algorithm is easy to implement, achieves a good encryption effect, has a large key space and strong sensitivity to the secret key, and can resist exhaustive and statistical attacks.

  19. Algorithms for digital image processing in diabetic retinopathy.

    Science.gov (United States)

    Winder, R J; Morrow, P J; McRitchie, I N; Bailie, J R; Hart, P M

    2009-12-01

    This work examined recent literature on digital image processing in the field of diabetic retinopathy. Algorithms were categorized into 5 steps (preprocessing; localization and segmentation of the optic disk; segmentation of the retinal vasculature; localization of the macula and fovea; localization and segmentation of retinopathy). The variety of outcome measures, the use of a gold standard or ground truth, data sample sizes, and the use of image databases are discussed. It is intended that our classification of algorithms into a small number of categories, definition of terms, and discussion of evolving techniques will provide guidance to algorithm designers for diabetic retinopathy.

  20. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    Energy Technology Data Exchange (ETDEWEB)

    Murari, A.; Barana, O. [Consorzio RFX Associazione EURATOM ENEA per la Fusione, Corso Stati Uniti 4, Padua (Italy); Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F. [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon (United Kingdom); Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D. [Association EURATOM-CEA, CEA Cadarache, 13 - Saint-Paul-lez-Durance (France); Albanese, R. [Assoc. Euratom-ENEA-CREATE, Univ. Mediterranea RC (Italy); Arena, P.; Bruno, M. [Assoc. Euratom-ENEA-CREATE, Univ.di Catania (Italy); Ambrosino, G.; Ariola, M. [Assoc. Euratom-ENEA-CREATE, Univ. Napoli Federico Napoli (Italy); Crisanti, F. [Associazone EURATOM ENEA sulla Fusione, C.R. Frascati (Italy); Luna, E. de la; Sanchez, J. [Associacion EURATOM CIEMAT para Fusion, Madrid (Spain)

    2004-07-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with internal transport barriers (ITBs). Since computationally heavy elaboration of measurements is often required, significant attention was devoted to non-algorithmic methods such as Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)

  1. Vertigo in childhood: proposal for a diagnostic algorithm based upon clinical experience.

    Science.gov (United States)

    Casani, A P; Dallan, I; Navari, E; Sellari Franceschini, S; Cerchiai, N

    2015-06-01

    The aim of this paper is to analyse, after clinical experience with a series of patients with established diagnoses and a review of the literature, all relevant anamnestic features in order to build a simple diagnostic algorithm for vertigo in childhood. This study is a retrospective chart review. A series of 37 children underwent complete clinical and instrumental vestibular examination. Only neurological disorders or genetic diseases represented exclusion criteria. All diagnoses were reviewed after applying the most recent diagnostic guidelines. In our experience, the most common aetiology for dizziness is vestibular migraine (38%), followed by acute labyrinthitis/neuritis (16%) and somatoform vertigo (16%). Benign paroxysmal vertigo was diagnosed in 4 patients (11%) and paroxysmal torticollis was diagnosed in a 1-year-old child. In 8% (3 patients) of cases, the dizziness had a post-traumatic origin: 1 canalolithiasis of the posterior semicircular canal and 2 labyrinthine concussions, respectively. Menière's disease was diagnosed in 2 cases. A bilateral vestibular failure of unknown origin caused chronic dizziness in 1 patient. In conclusion, this algorithm could represent a good tool for guiding clinical suspicion toward the correct diagnostic assessment in dizzy children in whom no neurological findings are detectable. The algorithm has just a few simple steps, based mainly on two aspects to be investigated early: the temporal features of the vertigo and the presence of hearing impairment. A different algorithm has been proposed for cases in which a traumatic origin is suspected.
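
The two early branch points the authors highlight (temporal pattern of vertigo, presence of hearing impairment), with post-traumatic cases split off first, can be written as a toy decision function. The branching below only illustrates the shape of such an algorithm; the category labels follow the abstract, and this is in no way a clinical tool.

```python
def vertigo_triage(post_traumatic, episodic, hearing_impairment):
    """Toy sketch of a two-branch-point diagnostic flow: traumatic
    origin first, then temporal pattern, then hearing impairment.
    Illustrative only -- not the paper's validated algorithm."""
    if post_traumatic:
        return "post-traumatic workup (canalolithiasis / labyrinthine concussion)"
    if episodic:
        if hearing_impairment:
            return "suspect Meniere's disease"
        return "suspect vestibular migraine or benign paroxysmal vertigo"
    if hearing_impairment:
        return "suspect labyrinthitis"
    return "suspect vestibular neuritis or somatoform vertigo"
```

The point of the sketch is that a handful of early, easily collected anamnestic features already partitions the diagnostic space into the categories reported in the series.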

  2. Algorithms for Image Analysis and Combination of Pattern Classifiers with Application to Medical Diagnosis

    Science.gov (United States)

    Georgiou, Harris

    2009-10-01

    Medical Informatics and the application of modern signal processing in the assistance of the diagnostic process in medical imaging is one of the more recent and active research areas today. This thesis addresses a variety of issues related to the general problem of medical image analysis, specifically in mammography, and presents a series of algorithms and design approaches for all the intermediate levels of a modern system for computer-aided diagnosis (CAD). The diagnostic problem is analyzed with a systematic approach, first defining the imaging characteristics and features that are relevant to probable pathology in mammograms. Next, these features are quantified and fused into new, integrated radiological systems that exhibit embedded digital signal processing, in order to improve the final result and minimize the radiological dose for the patient. At a higher level, special algorithms are designed for detecting and encoding these clinically interesting imaging features, in order to be used as input to advanced pattern classifiers and machine learning models. Finally, these approaches are extended in multi-classifier models under the scope of Game Theory and optimum collective decision, in order to produce efficient solutions for combining classifiers with minimum computational costs for advanced diagnostic systems. The material covered in this thesis is related to a total of 18 published papers, 6 in scientific journals and 12 in international conferences.

  3. Parkinson's disease: diagnostic utility of volumetric imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Wei-Che; Chen, Meng-Hsiang [Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, Department of Diagnostic Radiology, Kaohsiung (China); Chou, Kun-Hsien [National Yang-Ming University, Brain Research Center, Taipei (China); Lee, Pei-Lin [National Yang-Ming University, Department of Biomedical Imaging and Radiological Sciences, Taipei (China); Tsai, Nai-Wen; Lu, Cheng-Hsien [Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, Department of Neurology, Kaohsiung (China); Chen, Hsiu-Ling [Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, Department of Diagnostic Radiology, Kaohsiung (China); National Yang-Ming University, Department of Biomedical Imaging and Radiological Sciences, Taipei (China); Hsu, Ai-Ling [National Taiwan University, Institute of Biomedical Electronics and Bioinformatics, Taipei (China); Huang, Yung-Cheng [Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, Department of Nuclear Medicine, Kaohsiung (China); Lin, Ching-Po [National Yang-Ming University, Brain Research Center, Taipei (China); National Yang-Ming University, Department of Biomedical Imaging and Radiological Sciences, Taipei (China)

    2017-04-15

    This paper aims to examine the effectiveness of structural imaging as an aid in the diagnosis of Parkinson's disease (PD). High-resolution T{sub 1}-weighted magnetic resonance imaging was performed in 72 patients with idiopathic PD (mean age, 61.08 years) and 73 healthy subjects (mean age, 58.96 years). The whole brain was parcellated into 95 regions of interest using composite anatomical atlases, and region volumes were calculated. Three diagnostic classifiers were constructed using binary multiple logistic regression modeling: the (i) basal ganglion prior classifier, (ii) data-driven classifier, and (iii) basal ganglion prior/data-driven hybrid classifier. Leave-one-out cross validation was used to obtain an unbiased evaluation of the predictive accuracy of the imaging features. Pearson's correlation analysis was further performed to correlate outcome measurement using the best PD classifier with disease severity. Smaller volumes in susceptible regions were diagnostic for Parkinson's disease. Compared with the other two classifiers, the basal ganglion prior/data-driven hybrid classifier had the highest diagnostic reliability, with a sensitivity of 74%, specificity of 75%, and accuracy of 74%. Furthermore, outcome measurement using this classifier was associated with disease severity. Brain structural volumetric analysis with multiple logistic regression modeling can be a complementary tool for diagnosing PD. (orig.)
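The modeling-and-validation loop described here (binary logistic regression over regional volumes, evaluated with leave-one-out cross validation) can be sketched in a few lines of numpy. The synthetic "volumes" and the plain gradient-ascent fit are illustrative assumptions; the study itself used 95 atlas regions and real MRI measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the study's data: 40 subjects x 3 standardized regional
# volumes; "patients" (y = 1) have smaller volumes in two susceptible regions.
n = 40
y = np.repeat([0, 1], n // 2)
X = rng.normal(0.0, 1.0, size=(n, 3))
X[y == 1, :2] -= 2.0

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient-ascent fit of a binary logistic regression model."""
    Xb = np.hstack([np.ones((len(X), 1)), X])        # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)            # log-likelihood gradient
    return w

def predict(w, x):
    return 1.0 / (1.0 + np.exp(-(w[0] + x @ w[1:]))) >= 0.5

# Leave-one-out cross validation: train on n-1 subjects, test the held-out one.
correct = sum(
    predict(fit_logistic(np.delete(X, i, 0), np.delete(y, i)), X[i]) == y[i]
    for i in range(n)
)
print(f"LOO accuracy: {correct / n:.2f}")
```

Leave-one-out is attractive at this sample size because every subject serves as a test case exactly once while nearly the full cohort is used for each fit.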

  4. 3D image registration using a fast noniterative algorithm.

    Science.gov (United States)

    Zhilkin, P; Alexander, M E

    2000-11-01

    This note describes the implementation of a three-dimensional (3D) registration algorithm, generalizing a previous 2D version [Alexander, Int J Imaging Systems and Technology 1999;10:242-57]. The algorithm solves an integrated form of the linearized image matching equation over a set of 3D rectangular sub-volumes ('patches') in the image domain. This integrated form avoids numerical instabilities due to differentiation of a noisy image over a lattice, and in addition renders the algorithm robust to noise. Registration is implemented by first convolving the unregistered images with a set of computationally fast [O(N)] filters, providing four bandpass images for each input image, and integrating the image matching equation over the given patch. Each filter and each patch together provide an independent set of constraints on the displacement field, derived by solving a set of linear regression equations. Furthermore, the filters are implemented at a variety of spatial scales, enabling registration parameters at one scale to be used as an input approximation for deriving refined values of those parameters at a finer scale of resolution. This hierarchical procedure is necessary to avoid false matches. Both downsampled and oversampled (undecimated) filtering are implemented. Although the former is computationally fast, it lacks the translation invariance of the latter. Oversampling is required for the accurate interpolation that is used in intermediate stages of the algorithm to reconstruct the partially registered image from the unregistered image. However, downsampling is useful, and computationally efficient, for preliminary stages of registration when large mismatches are present. The 3D registration algorithm was implemented using a 12-parameter affine model for the displacement: u(x) = Ax + b. Linear interpolation was used throughout.
Accuracy and timing results for registering various multislice images, obtained by scanning a melon and human volunteers in various
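The displacement model quoted at the end of the abstract, u(x) = Ax + b, has 12 free parameters in 3D (nine in A, three in b). A minimal numpy sketch of evaluating it, with arbitrary illustrative parameter values:

```python
import numpy as np

A = np.array([[0.02, 0.00, 0.01],
              [0.00, 0.03, 0.00],
              [0.01, 0.00, 0.02]])   # small linear part (illustrative values)
b = np.array([1.0, -0.5, 0.25])      # translation part

def displacement(x):
    """Affine displacement field u(x) = A x + b."""
    return A @ x + b

def warp(x):
    """Map an unregistered coordinate to its registered location x + u(x)."""
    return x + displacement(x)

x = np.array([10.0, 20.0, 30.0])
print(warp(x))
```

With 9 + 3 = 12 parameters this single linear model captures translation, rotation, scaling and shear, which is why it is a common first stage before finer nonrigid refinement.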

  5. Robust Algorithm for Face Detection in Color Images

    Directory of Open Access Journals (Sweden)

    Hlaing Htake Khaung Tin

    2012-03-01

    Full Text Available A robust algorithm is presented for frontal face detection in color images. Face detection is an important task in facial analysis systems in order to have a priori localized faces in a given image. Applications such as face tracking, facial expression recognition, gesture recognition, etc., for example, have a pre-requisite that a face is already located in the given image or the image sequence. Facial features such as eyes, nose and mouth are automatically detected based on properties of the associated image regions. On detecting a mouth, a nose and two eyes, a face verification step based on eigenface theory is applied to a normalized search space in the image relative to the distance between the eye feature points. The experiments were carried out on test images taken from the internet and various other randomly selected sources. The algorithm has also been tested in practice with a webcam, giving (near) real-time performance and good extraction results.

  6. Manifold learning based registration algorithms applied to multimodal images.

    Science.gov (United States)

    Azampour, Mohammad Farid; Ghaffari, Aboozar; Hamidinekoo, Azam; Fatemizadeh, Emad

    2014-01-01

    Manifold learning algorithms are proposed for use in image processing based on their ability to preserve data structures while reducing the dimension, exposing the data structure in the lower dimension. Multi-modal images have the same structure and can be registered together as mono-modal images if only structural information is shown. As a result, manifold learning is able to transform multi-modal images into mono-modal ones and subsequently perform the registration using mono-modal methods. Based on this application, in this paper novel similarity measures are proposed for multi-modal images, in which Laplacian eigenmaps are employed as the manifold learning algorithm and are tested against rigid registration of PET/MR images. Results show the feasibility of using manifold learning as a way of calculating the similarity between multi-modal images.
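A toy Laplacian-eigenmaps computation, the manifold learning step named above: build a heat-kernel affinity graph, form the graph Laplacian, and use the eigenvector with the smallest nonzero eigenvalue as a one-dimensional embedding. Registration would compare such embeddings across modalities instead of raw intensities; the data and kernel width here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
pts = np.sort(rng.uniform(0, 10, 20))          # points along a 1-D manifold

# Heat-kernel affinities between all point pairs.
d2 = (pts[:, None] - pts[None, :]) ** 2
W = np.exp(-d2 / 2.0)
np.fill_diagonal(W, 0.0)

L = np.diag(W.sum(1)) - W                      # unnormalized graph Laplacian
vals, vecs = np.linalg.eigh(L)                 # eigenvalues in ascending order
embedding = vecs[:, 1]                         # skip the constant eigenvector

# The embedding orders the points along the underlying manifold (up to sign).
order = np.argsort(embedding)
print(order[0], order[-1])
```

Because the embedding depends only on the affinity structure, two images of the same anatomy acquired with different modalities can map to comparable embeddings, which is what makes a mono-modal similarity measure applicable afterwards.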

  7. An Image Processing Algorithm Based On FMAT

    Science.gov (United States)

    Wang, Lui; Pal, Sankar K.

    1995-01-01

    Information deleted in ways minimizing adverse effects on reconstructed images. New grey-scale generalization of medial axis transformation (MAT), called FMAT (short for Fuzzy MAT), proposed. Formulated by making natural extension to fuzzy-set theory of all definitions and conditions (e.g., characteristic function of disk, subset condition of disk, and redundancy checking) used in defining MAT of crisp set. Does not need image to have any kind of a priori segmentation, and allows medial axis (and skeleton) to be fuzzy subset of input image. Resulting FMAT (consisting of maximal fuzzy disks) capable of reconstructing exactly original image.

  8. A Color Image Edge Detection Algorithm Based on Color Difference

    Science.gov (United States)

    Zhuo, Li; Hu, Xiaochen; Jiang, Liying; Zhang, Jing

    2016-12-01

    Although image edge detection algorithms have been widely applied in image processing, the existing algorithms still face two important problems. On one hand, to restrain the interference of noise, smoothing filters are generally exploited in the existing algorithms, resulting in loss of significant edges. On the other hand, since the existing algorithms are sensitive to noise, many noisy edges are usually detected, which will disturb the subsequent processing. Therefore, a color image edge detection algorithm based on color difference is proposed in this paper. Firstly, a new operation called color separation is defined in this paper, which can reflect the information of color difference. Then, for the neighborhood of each pixel, color separations are calculated in four different directions to detect the edges. Experimental results on natural and synthetic images show that the proposed algorithm can remove a large number of noisy edges and be robust to the smoothing filters. Furthermore, the proposed edge detection algorithm is applied in road foreground segmentation and shadow removal, which achieves good performances.
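The four-direction neighborhood comparison described above can be sketched with numpy. The specific "color separation" operation is not defined in the record, so plain Euclidean RGB distance is used here as an assumed stand-in, and the threshold is arbitrary:

```python
import numpy as np

def color_diff_edges(img, thresh=60.0):
    """Mark a pixel as edge if its largest directional color difference is big."""
    h, w, _ = img.shape
    f = img.astype(float)
    best = np.zeros((h, w))
    for dy, dx in [(0, 1), (1, 0), (1, 1), (1, -1)]:        # four directions
        shifted = np.roll(f, (-dy, -dx), axis=(0, 1))       # neighbor at (y+dy, x+dx)
        diff = np.sqrt(((f - shifted) ** 2).sum(axis=2))    # per-pixel color distance
        best = np.maximum(best, diff)
    return best > thresh

# Synthetic image: red left half, blue right half -> one vertical edge.
img = np.zeros((8, 8, 3), dtype=np.uint8)
img[:, :4, 0] = 255
img[:, 4:, 2] = 255
edges = color_diff_edges(img)
print(edges[:, 3].all())   # -> True
```

Operating on the color difference directly, rather than on a pre-smoothed grayscale image, is what lets this family of methods keep edges that smoothing filters would wash out.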

  9. The Verification of Hybrid Image Deformation algorithm for PIV

    Directory of Open Access Journals (Sweden)

    Novotný Jan

    2016-06-01

    Full Text Available The aim of this paper was to test a newly designed algorithm for more accurate calculation of the image displacement of seeding particles when taking measurement using the Particle Image Velocimetry method. The proposed algorithm is based on modification of a classical iterative approach using a three-point subpixel interpolation and method using relative deformation of individual areas for accurate detection of signal peak position. The first part briefly describes the tested algorithm together with the results of the performed synthetic tests. The other part describes the measurement setup and the overall layout of the experiment. Subsequently, a comparison of results of the classical iterative scheme and our designed algorithm is carried out. The conclusion discusses the benefits of the tested algorithm, its advantages and disadvantages.

  10. Brain MR image segmentation improved algorithm based on probability

    Science.gov (United States)

    Liao, Hengxu; Liu, Gang; Guo, Xiantang

    2017-08-01

    The local weight voting algorithm is a mainstream segmentation algorithm. It takes full account of the influence of the image likelihood and of the prior probabilities of labels on the segmentation results. But this method can still be improved, since its essence is to select the label with the maximum probability. If the probability of a label is 70%, it may be acceptable in mathematics, but in the actual segmentation it may be wrong. So we use the matrix completion algorithm as a supplement. When the probability of the former is larger, the result of the former algorithm is adopted; when the probability of the latter is larger, the result of the latter algorithm is adopted. This is equivalent to adding an automatic algorithm-selection switch that can theoretically ensure that the accuracy of the proposed algorithm is superior to the local weight voting algorithm. At the same time, we propose an improved matrix completion algorithm based on an enumeration method. In addition, this paper also uses a multi-parameter registration model to reduce the influence of registration on the segmentation. The experimental results show that the accuracy of the algorithm is better than that of common segmentation algorithms.
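The "automatic algorithm-selection switch" can be sketched as weighted label fusion with a confidence threshold: when the winning label's fused probability is too low, a fallback estimate (standing in for the matrix-completion result) is used instead. Weights, threshold and labels below are illustrative assumptions:

```python
def fuse_labels(votes, weights, fallback, thresh=0.8):
    """Weighted label voting with a low-confidence fallback.

    votes: list of candidate labels (e.g. one per registered atlas).
    weights: matching list of positive vote weights.
    fallback: label to use when the winner's probability is below thresh.
    """
    total = sum(weights)
    score = {}
    for lab, w in zip(votes, weights):
        score[lab] = score.get(lab, 0.0) + w / total
    winner = max(score, key=score.get)
    # Confident enough -> keep the voted label; otherwise switch algorithms.
    return winner if score[winner] >= thresh else fallback

print(fuse_labels(["GM", "GM", "WM"], [1.0, 1.0, 1.0], fallback="WM"))  # low confidence
print(fuse_labels(["GM", "GM", "GM"], [1.0, 1.0, 1.0], fallback="WM"))  # unanimous
```

The first call falls through to the fallback because the winning label only reaches 2/3 probability, mirroring the paper's argument that a 70% vote is not trustworthy enough on its own.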

  11. Dose and diagnostic image quality in digital tomosynthesis imaging of facial bones in pediatrics

    Science.gov (United States)

    King, J. M.; Hickling, S.; Elbakri, I. A.; Reed, M.; Wrogemann, J.

    2011-03-01

    The purpose of this study was to evaluate the use of digital tomosynthesis (DT) for pediatric facial bone imaging. We compared the eye lens dose and diagnostic image quality of DT facial bone exams relative to digital radiography (DR) and computed tomography (CT), and investigated whether we could modify our current DT imaging protocol to reduce patient dose while maintaining sufficient diagnostic image quality. We measured the dose to the eye lens for all three modalities using high-sensitivity thermoluminescent dosimeters (TLDs) and an anthropomorphic skull phantom. To assess the diagnostic image quality of DT compared to the corresponding DR and CT images, we performed an observer study where the visibility of anatomical structures in the DT phantom images was rated on a four-point scale. We then acquired DT images at lower doses and had radiologists indicate whether the visibility of each structure was adequate for diagnostic purposes. For typical facial bone exams, we measured eye lens doses of 0.1-0.4 mGy for DR, 0.3-3.7 mGy for DT, and 26 mGy for CT. In general, facial bone structures were visualized better with DT than with DR, and the majority of structures were visualized well enough to avoid the need for CT. DT imaging provides high quality diagnostic images of the facial bones while delivering significantly lower doses to the lens of the eye compared to CT. In addition, we found that by adjusting the imaging parameters, the DT effective dose can be reduced by up to 50% while maintaining sufficient image quality.

  12. Reconstruction Algorithms in Undersampled AFM Imaging

    DEFF Research Database (Denmark)

    Arildsen, Thomas; Oxvig, Christian Schou; Pedersen, Patrick Steffen

    2016-01-01

    This paper provides a study of spatial undersampling in atomic force microscopy (AFM) imaging followed by different image reconstruction techniques based on sparse approximation as well as interpolation. The main reason for using undersampling is that it reduces the path length and thereby the s...

  13. An Evolutionary Algorithm for Enhanced Magnetic Resonance Imaging Classification

    Directory of Open Access Journals (Sweden)

    T.S. Murunya

    2014-11-01

    Full Text Available This study presents an image classification method for retrieval of images from a multi-varied MRI database. With the development of sophisticated medical imaging technology to assist doctors in diagnosis, medical image databases contain a huge number of digital images. Magnetic Resonance Imaging (MRI) is a widely used imaging technique which picks up signals from the body's magnetic particles spinning to a magnetic tune and, through a computer, converts the scanned data into pictures of internal organs. Image processing techniques are required to analyze medical images and retrieve them from the database. The proposed framework extracts features using Moment Invariants (MI) and the Wavelet Packet Tree (WPT). Extracted features are reduced using Correlation-based Feature Selection (CFS), and a CFS with cuckoo search algorithm is proposed. Naïve Bayes and K-Nearest Neighbor (KNN) classifiers classify the selected features. The National Biomedical Imaging Archive (NBIA) dataset, including colon, brain and chest images, is used to evaluate the framework.
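The moment-invariant (MI) features mentioned above can be illustrated with plain numpy: central moments are normalized and combined into Hu's first invariant, phi1 = eta20 + eta02, which is unchanged when the object is translated within the image. The toy binary "organ" is an assumption for demonstration:

```python
import numpy as np

def hu_phi1(img):
    """Hu's first moment invariant phi1 = eta20 + eta02 of a grayscale image."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    cy, cx = (ys * img).sum() / m00, (xs * img).sum() / m00   # centroid
    mu20 = ((xs - cx) ** 2 * img).sum()                       # central moments
    mu02 = ((ys - cy) ** 2 * img).sum()
    norm = m00 ** 2            # eta_pq = mu_pq / m00^(1 + (p+q)/2), here p+q=2
    return (mu20 + mu02) / norm

img = np.zeros((32, 32))
img[5:10, 5:12] = 1.0                            # a small rectangular "organ"
shifted = np.roll(img, (10, 7), axis=(0, 1))     # same shape, translated
print(abs(hu_phi1(img) - hu_phi1(shifted)))
```

Translation invariance (and, for the full set of Hu invariants, scale and rotation invariance) is what makes such features useful descriptors for retrieval across differently positioned scans.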

  14. An overview on recent radiation transport algorithm development for optical tomography imaging

    Energy Technology Data Exchange (ETDEWEB)

    Charette, Andre [Groupe de Recherche en Ingenierie des Procedes et Systemes, Universite du Quebec a Chicoutimi, Chicoutimi, QC, G7H 2B1 (Canada)], E-mail: Andre_Charette@uqac.ca; Boulanger, Joan [Laboratoire des Turbines a Gaz, Institut pour la Recherche Aerospatiale-Conseil National de Recherche du Canada, Ottawa, ON, K1A 0R6 (Canada); Kim, Hyun K [Department of Biomedical Engineering, Columbia University, New York, NY 10027 (United States)

    2008-11-15

    Optical tomography belongs to the promising set of non-invasive methods for probing semi-transparent media, covering a wide range of fields. Nowadays, it is mainly driven by medical imaging in search of new, less aggressive and affordable diagnostic means. This paper aims at presenting the most recent research accomplished in the authors' laboratories as well as that of collaborative institutions concerning the development of imaging algorithms. Modelling of light transport is no longer the difficult question it used to be. Research is now focused on data treatment and reconstruction. Since the turn of the century, the rapid expansion of low-cost computing has permitted the development of enhanced imaging algorithms with great potential. Some of these developments are already on the verge of clinical application. This paper presents these developments and also provides some insights into still unresolved challenges. Intrinsic difficulties are identified and promising directions for solutions are discussed.

  15. A hybrid algorithm for speckle noise reduction of ultrasound images.

    Science.gov (United States)

    Singh, Karamjeet; Ranade, Sukhjeet Kaur; Singh, Chandan

    2017-09-01

    Medical images are contaminated by multiplicative speckle noise, which significantly reduces the contrast of ultrasound images and creates a negative effect on various image interpretation tasks. In this paper, we propose a hybrid denoising approach which combines local and nonlocal information in an efficient manner. The proposed hybrid algorithm consists of three stages: in the first stage, local statistics in the form of a guided filter are used to make an initial reduction of the speckle noise. Then, an improved speckle reducing bilateral filter (SRBF) is developed to further reduce the speckle noise in the medical images. Finally, to reconstruct the diffused edges we use an efficient post-processing technique which jointly considers the advantages of both the bilateral and the nonlocal means (NLM) filters for effective attenuation of speckle noise. The performance of the proposed hybrid algorithm is evaluated on synthetic, simulated and real ultrasound images. The experiments conducted on various test images demonstrate that our proposed hybrid approach outperforms various traditional speckle reduction approaches, including the recently proposed NLM and optimized Bayesian-based NLM. The results of various quantitative and qualitative measures, and visual inspection of denoised synthetic and real ultrasound images, demonstrate that the proposed hybrid algorithm has strong denoising capability and is able to preserve fine image details, such as the edge of a lesion, better than previously developed methods for speckle noise reduction. The denoising and edge-preserving capability of the hybrid algorithm is far better than that of existing traditional and recently proposed speckle reduction (SR) filters. The success of the proposed algorithm would help lay the foundation for hybrid algorithms for denoising ultrasound images. Copyright © 2017 Elsevier B.V. All rights reserved.
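The multiplicative noise model that motivates all of these filters (observed = clean x speckle) can be demonstrated with a deliberately simple log-domain mean filter: taking logarithms turns the multiplicative noise into additive noise that ordinary smoothing can attenuate. This is an illustrative single-stage stand-in, not the paper's three-stage guided/SRBF/NLM pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)

clean = np.full((64, 64), 100.0)
clean[20:44, 20:44] = 200.0                     # a bright square "lesion"
speckle = rng.gamma(shape=10.0, scale=0.1, size=clean.shape)   # mean-1 noise
noisy = clean * speckle                         # multiplicative degradation

# Log transform makes the noise additive; a 3x3 mean filter smooths it;
# exponentiation maps the result back to the intensity domain.
log_img = np.log(noisy)
pad = np.pad(log_img, 1, mode="edge")
smooth = np.zeros_like(log_img)
for dy in (0, 1, 2):
    for dx in (0, 1, 2):
        smooth += pad[dy:dy + 64, dx:dx + 64]
denoised = np.exp(smooth / 9.0)

err = lambda a: np.abs(a - clean).mean()
print(f"mean abs error: noisy {err(noisy):.1f}, denoised {err(denoised):.1f}")
```

The mean filter blurs the lesion boundary, which is exactly the loss of fine detail that the edge-preserving bilateral and NLM stages of the paper are designed to avoid.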

  16. Tuning, Diagnostics & Data Preparation for Generalized Linear Models Supervised Algorithm in Data Mining Technologies

    Directory of Open Access Journals (Sweden)

    Sachin Bhaskar

    2015-07-01

    Full Text Available Data mining techniques are the result of a long process of research and product development. Data Mining searches large amounts of data to find trends and patterns that go beyond simple analysis. Complex mathematical algorithms are used to segment the data and to evaluate the probability of future events. Each Data Mining model is produced by a specific algorithm, and some Data Mining problems are best solved by using more than one algorithm. Data Mining technologies can be used through Oracle. The Generalized Linear Models (GLM) algorithm is used in the Regression and Classification Oracle Data Mining functions. GLM is one of the popular statistical techniques for linear modelling, and Oracle Data Mining implements it for regression and binary classification. GLM provides row diagnostics as well as model statistics and extensive coefficient statistics, and it also supports confidence bounds. This paper outlines and analyses the GLM algorithm, as a guide to understanding the tuning, diagnostics and data preparation process and the importance of the Regression and Classification supervised Oracle Data Mining functions, which are utilized in marketing, time series prediction, financial forecasting, overall business planning, trend analysis, environmental modelling, biomedical and drug response modelling, etc.
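What a GLM for binary classification actually computes can be sketched with numpy: logistic regression fitted by iteratively reweighted least squares (IRLS), whose Fisher information matrix also yields the coefficient standard errors behind the "extensive coefficient statistics" and confidence bounds mentioned above. The data are synthetic and this is not Oracle Data Mining's implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)
true_w = np.array([-0.5, 2.0])                    # intercept, slope
p = 1 / (1 + np.exp(-(true_w[0] + true_w[1] * x)))
y = (rng.uniform(size=n) < p).astype(float)

X = np.column_stack([np.ones(n), x])
w = np.zeros(2)
for _ in range(25):                               # IRLS / Newton iterations
    mu = 1 / (1 + np.exp(-X @ w))                 # fitted probabilities
    Wd = mu * (1 - mu)                            # binomial variance function
    H = X.T @ (X * Wd[:, None])                   # Fisher information matrix
    w = w + np.linalg.solve(H, X.T @ (y - mu))

se = np.sqrt(np.diag(np.linalg.inv(H)))           # coefficient standard errors
lo, hi = w - 1.96 * se, w + 1.96 * se             # ~95% confidence bounds
print(f"slope {w[1]:.2f}, 95% CI [{lo[1]:.2f}, {hi[1]:.2f}]")
```

The same Fisher information matrix drives both the fit and the inference, which is why GLM implementations can report coefficient statistics essentially for free.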

  17. Reduction of beam hardening artifacts in cone-beam CT imaging via SMART-RECON algorithm

    Science.gov (United States)

    Li, Yinsheng; Garrett, John; Chen, Guang-Hong

    2016-03-01

    When an automatic exposure control is introduced in C-arm cone beam CT data acquisition, the spectral inconsistencies between acquired projection data are exacerbated. As a result, conventional water/bone correction schemes are not as effective as in conventional diagnostic x-ray CT acquisitions with a fixed tube potential. In this paper, a new method was proposed to reconstruct several images with different degrees of spectral consistency and thus different levels of beam hardening artifacts. The new method relies neither on prior knowledge of the x-ray beam spectrum nor on prior compositional information of the imaging object. Numerical simulations were used to validate the algorithm.

  18. FACT. New image parameters based on the watershed-algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Linhoff, Lena; Bruegge, Kai Arno; Buss, Jens [TU Dortmund (Germany). Experimentelle Physik 5b; Collaboration: FACT-Collaboration

    2016-07-01

    FACT, the First G-APD Cherenkov Telescope, is the first imaging atmospheric Cherenkov telescope to use Geiger-mode avalanche photodiodes (G-APDs) as photosensors. The raw data produced by this telescope are processed in an analysis chain, which leads to a classification of the primary particle that induces a shower and to an estimation of its energy. One important step in this analysis chain is the parameter extraction from shower images. By applying a watershed algorithm to the camera image, new parameters are computed. Perceiving the brightness of a pixel as height, a set of pixels can be seen as a 'landscape' with hills and valleys. A watershed algorithm groups all pixels belonging to the same hill into one cluster. From the resulting segmented image, one can derive new parameters for later analysis steps, e.g. the number of clusters, their shape and the photon charge they contain. For FACT data, the FellWalker algorithm was chosen from the class of watershed algorithms because it was designed to work on discrete distributions, in this case the pixels of a camera image. The FellWalker algorithm is implemented in FACT-tools, which provides the low-level analysis framework for FACT. This talk will focus on the computation of new, FellWalker-based image parameters, which can be used for the gamma-hadron separation. Additionally, their distributions for real and Monte Carlo data are compared.
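The hill-climbing picture described above can be turned into a toy FellWalker-style clustering: every pixel walks to its brightest neighbor until it reaches a local maximum, and pixels sharing a summit form one cluster. The real FellWalker algorithm adds noise thresholds and cluster merging that this sketch omits:

```python
import numpy as np

def fellwalker_like(img):
    """Cluster pixels by walking uphill to a shared local brightness maximum."""
    h, w = img.shape
    labels = -np.ones((h, w), dtype=int)
    peaks = {}                               # summit coordinate -> cluster id
    for y in range(h):
        for x in range(w):
            cy, cx = y, x
            while True:                      # walk to the brightest neighbor
                nbrs = [(cy + dy, cx + dx)
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                        if 0 <= cy + dy < h and 0 <= cx + dx < w]
                by, bx = max(nbrs, key=lambda p: img[p])
                if img[by, bx] <= img[cy, cx]:
                    break                    # reached a local maximum
                cy, cx = by, bx
            labels[y, x] = peaks.setdefault((cy, cx), len(peaks))
    return labels

# Two Gaussian "hills" of brightness -> two clusters.
yy, xx = np.mgrid[:16, :16]
img = np.exp(-((yy - 4) ** 2 + (xx - 4) ** 2) / 8.0) \
    + np.exp(-((yy - 11) ** 2 + (xx - 11) ** 2) / 8.0)
labels = fellwalker_like(img)
print(len(np.unique(labels)))
```

Per-cluster quantities such as pixel count, shape moments or summed photon charge then fall out directly from the label image.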

  19. A New Method for Medical Image Clustering Using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Akbar Shahrzad Khashandarag

    2013-01-01

    Full Text Available Segmentation is applied to medical images when image brightness weakens, making it difficult to distinguish tissue borders. The exact segmentation of medical images is thus an essential process in recognizing and curing an illness, and the purpose of clustering in medical images is the recognition of damaged areas in tissues. Different techniques have been introduced for clustering in different fields such as engineering, medicine and data mining. However, there is no standard clustering technique that presents ideal results for all imaging applications. In this paper, a new method combining a genetic algorithm and the k-means algorithm is presented for clustering medical images. In this combined technique, a variable string length genetic algorithm (VGA) is used for the determination of the optimal cluster centers. The proposed algorithm has been compared with the k-means clustering algorithm. The advantage of the proposed method is its accuracy in selecting the optimal cluster centers compared with the above-mentioned technique.
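A scaled-down sketch of the combined approach: a genetic algorithm searches directly for cluster centers (here two centers for one-dimensional intensities), with fitness equal to the negative within-cluster squared error that k-means also minimizes. Population size, mutation scale and generation count are arbitrary illustrative choices, and the fixed-length encoding is a simplification of the paper's variable string length GA:

```python
import numpy as np

rng = np.random.default_rng(4)
# Two intensity clusters standing in for two tissue classes.
data = np.concatenate([rng.normal(2, 0.3, 50), rng.normal(8, 0.3, 50)])

def sse(centers):
    """Within-cluster sum of squared errors, the quantity k-means minimizes."""
    d = np.abs(data[:, None] - centers[None, :])
    return (d.min(axis=1) ** 2).sum()

pop = rng.uniform(0, 10, size=(30, 2))            # 30 candidate center pairs
for _ in range(120):
    fitness = np.array([-sse(ind) for ind in pop])
    elite = pop[np.argsort(fitness)[-15:]]        # keep the fittest half
    children = elite + rng.normal(0, 0.3, elite.shape)   # Gaussian mutation
    pop = np.vstack([elite, children])

winner = np.sort(pop[np.argmin([sse(ind) for ind in pop])])
print(winner)
```

Unlike Lloyd-style k-means iterations, the population search is not tied to one initialization, which is the accuracy advantage the paper claims for GA-selected centers.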

  20. Binary Image Watermarking Algorithm Using Matrix of Complexity Index

    Institute of Scientific and Technical Information of China (English)

    ZHANG Fan; ZHANG Jun-liang; SHEN Xia-jiong

    2008-01-01

    A new watermarking algorithm for binary images is proposed. The complexity index of pixels is presented to reflect the degree of change around a pixel and to evaluate how far a pixel can be modified. Firstly, in a small image block, the complexity index of "jumping-change" is calculated in the vertical and horizontal directions. Secondly, the matrix of the complexity index is calculated by integrating the complexity indices of pixels in the two directions. Finally, the matrix of the complexity index is used to embed the watermark in binary images. Experimental results show that the proposed algorithm has a good performance.
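One plausible reading of the "jumping-change" index is transition counting: for a small binary block, count 0/1 jumps along each row and each column and sum them, so uniform blocks score low and busy blocks score high. The exact definition in the paper may differ; this is an assumption for illustration:

```python
import numpy as np

def complexity_index(block):
    """Sum of 0/1 transitions along rows (horizontal) and columns (vertical)."""
    horiz = np.abs(np.diff(block, axis=1)).sum()   # jumps within each row
    vert = np.abs(np.diff(block, axis=0)).sum()    # jumps within each column
    return horiz + vert

flat = np.zeros((4, 4), dtype=int)                 # uniform block: no jumps
checker = np.indices((4, 4)).sum(axis=0) % 2       # checkerboard: maximal jumps
print(complexity_index(flat), complexity_index(checker))   # -> 0 24
```

In a watermarking scheme, high-index blocks are the natural place to flip pixels, since a change inside an already busy region is far less visible than one in a flat region.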

  1. Strategic planning for radiology: opening an outpatient diagnostic imaging center.

    Science.gov (United States)

    Leepson, Evan

    2003-01-01

    Launching a new diagnostic imaging center involves very specific requirements and roadmaps, including five major areas of change that have a direct impact on planning: (1) imaging and communication technology; (2) finances and reimbursement; (3) the ownership structure of imaging entities; (4) critical workforce shortages; and (5) the movement of imaging outside radiology. First, planning must focus on the strategic level of any organization, whether it is a multi-national corporation or a six-person radiology group. Think of all organizations as a triangle with three horizontal levels: strategic, managerial and operational. The strategic level of decision-making is at the top of the triangle, and here is where planning must take place. For strategic planning to work, there must be focused time and energy spent on this activity, usually away from the reading room and imaging center. There are five planning strategies, which must have the explicit goal of developing and growing the imaging center: (1) clinical and quality issues; (2) governance and administration; (3) technology; (4) relationships; and (5) marketing and business development. The best way to plan and implement these strategies is to create work groups of radiologists, technologists, and administrative and support staff. Once the group agrees on the strategy and tactic, it takes responsibility for implementation. Embarking on the launch of a new outpatient diagnostic imaging center is no small undertaking, and anyone who has struggled with such an endeavor can readily attest to the associated challenges and benefits. Success depends on many things, and one of the most important factors relates to the amount of time and the quality of effort spent on strategic planning at the outset. Neglecting or skimping on this phase may lead to unforeseen obstacles that could potentially derail the project.

  2. Automated Photogrammetric Image Matching with Sift Algorithm and Delaunay Triangulation

    Science.gov (United States)

    Karagiannis, Georgios; Antón Castro, Francesc; Mioc, Darka

    2016-06-01

    An algorithm for image matching of multi-sensor and multi-temporal satellite images is developed. The method is based on the SIFT feature detector proposed by Lowe in (Lowe, 1999). First, SIFT feature points are detected independently in two images (reference and sensed image). The features detected are invariant to image rotations, translations, scaling and also to changes in illumination, brightness and 3-dimensional viewpoint. Afterwards, each feature of the reference image is matched with one in the sensed image if, and only if, the distance between them multiplied by a threshold is shorter than the distances between the point and all the other points in the sensed image. Then, the matched features are used to compute the parameters of the homography that transforms the coordinate system of the sensed image to the coordinate system of the reference image. The Delaunay triangulations of each feature set for each image are computed. The isomorphism of the Delaunay triangulations is determined to guarantee the quality of the image matching. The algorithm is implemented in Matlab and tested on World-View 2, SPOT6 and TerraSAR-X image patches.
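The matching rule quoted above (a feature matches only if its nearest-neighbor distance, scaled by a threshold, still beats the distance to every other candidate) is essentially Lowe's ratio test. A small numpy sketch on toy two-dimensional descriptors; real SIFT descriptors are 128-dimensional, and the threshold value here is an arbitrary choice:

```python
import numpy as np

def ratio_match(ref, sensed, threshold=1.5):
    """Match each reference descriptor to an unambiguous nearest neighbor."""
    matches = []
    for i, d in enumerate(ref):
        dists = np.linalg.norm(sensed - d, axis=1)
        order = np.argsort(dists)
        best, second = dists[order[0]], dists[order[1]]
        if best * threshold < second:        # unambiguously closest candidate
            matches.append((i, int(order[0])))
    return matches

ref = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 0.0]])
sensed = np.array([[0.1, 0.0], [5.0, 5.1], [9.05, 0.0], [9.0, 0.06]])
print(ratio_match(ref, sensed))   # the third feature is rejected as ambiguous
```

Only the surviving matches would then feed the homography estimation, with the Delaunay-isomorphism check acting as a further structural filter on the result.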

  3. Color Image Inpainting By an Improved Criminisi Algorithm

    Directory of Open Access Journals (Sweden)

    He Yu-Ting

    2017-01-01

    Full Text Available Due to an incorrect filling order and a fixed patch size, the traditional exemplar-based image inpainting algorithm tends to cause structure fracture, erroneous texture extension and so on. This paper therefore proposes an improved Criminisi algorithm for color image inpainting, with adaptive adjustment based on gradient variation. Firstly, to overcome the discontinuity of the edge structure caused by an incorrect filling order, the curvature of isophotes is used to constrain the filling order. Secondly, in order to reduce the step effect in richly textured regions, the sample patch size is adaptively adjusted according to the variation of the local gradient. Finally, a local search method is used to find the best matching patch. The experimental results show that the proposed algorithm improves PSNR by 1-3 dB and obtains better results on different types of images.

  4. Feature Based Correspondence: A Comparative Study on Image Matching Algorithms

    Directory of Open Access Journals (Sweden)

    Munim Tanvir

    2016-03-01

    Full Text Available Image matching and recognition are the crux of computer vision and have a major part to play in everyday lives. From industrial robots to surveillance cameras, from autonomous vehicles to medical imaging and from missile guidance to space exploration vehicles computer vision and hence image matching is embedded in our lives. This communication presents a comparative study on the prevalent matching algorithms, addressing their restrictions and providing a criterion to define the level of efficiency likely to be expected from an algorithm. The study includes the feature detection and matching techniques used by these prevalent algorithms to allow a deeper insight. The chief aim of the study is to deliver a source of comprehensive reference for the researchers involved in image matching, regardless of specific applications.

  5. An image encryption algorithm utilizing julia sets and hilbert curves.

    Science.gov (United States)

    Sun, Yuanyuan; Chen, Lina; Xu, Rudan; Kong, Ruiqing

    2014-01-01

    Image encryption is an important and effective technique to protect image security. In this paper, a novel image encryption algorithm combining Julia sets and Hilbert curves is proposed. The algorithm utilizes Julia sets' parameters to generate a random sequence as the initial keys and gets the final encryption keys by scrambling the initial keys through the Hilbert curve. The final cipher image is obtained by modulo arithmetic and diffuse operation. In this method, it needs only a few parameters for the key generation, which greatly reduces the storage space. Moreover, because of the Julia sets' properties, such as infiniteness and chaotic characteristics, the keys have high sensitivity even to a tiny perturbation. The experimental results indicate that the algorithm has large key space, good statistical property, high sensitivity for the keys, and effective resistance to the chosen-plaintext attack.
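A hedged illustration of the general idea of deriving a keystream from a Julia-set orbit: the byte-folding rule and parameters below are invented for this sketch, and the paper's actual key generation, Hilbert-curve scrambling, and modulo/diffusion operations are more elaborate.

```python
def julia_keystream(c, z0, n):
    """Derive n key bytes from the orbit of z -> z*z + c.

    Hypothetical construction for illustration only; tiny changes in c or z0
    change the orbit, which is the key-sensitivity property the paper relies on."""
    z, out = z0, []
    for _ in range(n):
        z = z * z + c
        if abs(z) > 2.0:      # re-seed if the orbit escapes toward infinity
            z = z0
        out.append(int(abs(z.real) * 1e6 + abs(z.imag) * 1e6) % 256)
    return out

def xor_cipher(data, stream):
    # XOR is its own inverse, so the same call encrypts and decrypts
    return bytes(d ^ s for d, s in zip(data, stream))

key = julia_keystream(complex(-0.5, 0.05), complex(0.1, 0.1), 16)
cipher = xor_cipher(b"pixel data chunk", key)
print(xor_cipher(cipher, key))  # → b'pixel data chunk'
```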

  6. Remote Sensing Image Resolution Enlargement Algorithm Based on Wavelet Transformation

    Directory of Open Access Journals (Sweden)

    Samiul Azam

    2014-05-01

Full Text Available In this paper, we present a new image resolution enhancement algorithm based on cycle spinning and stationary wavelet subband padding. The proposed algorithm uses the stationary wavelet transform (SWT) to decompose the low resolution (LR) image into frequency subbands. These subbands are interpolated using either bicubic or Lanczos interpolation, and the interpolated subbands are passed through an inverse SWT to generate an intermediate high resolution (HR) image. Finally, cycle spinning (CS) is applied to this intermediate high resolution image to reduce blocking artifacts, followed by a traditional Laplacian sharpening filter to make the generated high resolution image sharper. The technique has been tested on several satellite images. Experimental results show that it outperforms the conventional and state-of-the-art techniques in terms of peak signal-to-noise ratio, root mean square error and entropy, as well as visual quality.
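The decompose-interpolate-reconstruct pipeline can be sketched in 1-D with a level-1 stationary Haar transform; linear interpolation stands in for the paper's bicubic/Lanczos step, and cycle spinning and sharpening are omitted:

```python
def haar_swt1(x):
    """Level-1 stationary (undecimated) Haar transform, circular boundary."""
    n = len(x)
    approx = [(x[i] + x[(i + 1) % n]) / 2 for i in range(n)]
    detail = [(x[i] - x[(i + 1) % n]) / 2 for i in range(n)]
    return approx, detail

def haar_iswt1(approx, detail):
    # x[i] = a[i] + d[i] reconstructs the input exactly
    return [a + d for a, d in zip(approx, detail)]

def upscale2(band):
    """Stand-in interpolation: double the length by inserting midpoints."""
    out = []
    for i, v in enumerate(band):
        nxt = band[(i + 1) % len(band)]
        out += [v, (v + nxt) / 2]
    return out

lr = [10.0, 12.0, 20.0, 16.0]
a, d = haar_swt1(lr)
hr = haar_iswt1(upscale2(a), upscale2(d))   # interpolate subbands, then inverse SWT
print(hr)  # → [10.0, 11.0, 12.0, 16.0, 20.0, 18.0, 16.0, 13.0]
```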

  7. Anisotropic conductivity imaging with MREIT using equipotential projection algorithm.

    Science.gov (United States)

    Değirmenci, Evren; Eyüboğlu, B Murat

    2007-12-21

    Magnetic resonance electrical impedance tomography (MREIT) combines magnetic flux or current density measurements obtained by magnetic resonance imaging (MRI) and surface potential measurements to reconstruct images of true conductivity with high spatial resolution. Most of the biological tissues have anisotropic conductivity; therefore, anisotropy should be taken into account in conductivity image reconstruction. Almost all of the MREIT reconstruction algorithms proposed to date assume isotropic conductivity distribution. In this study, a novel MREIT image reconstruction algorithm is proposed to image anisotropic conductivity. Relative anisotropic conductivity values are reconstructed iteratively, using only current density measurements without any potential measurement. In order to obtain true conductivity values, only either one potential or conductivity measurement is sufficient to determine a scaling factor. The proposed technique is evaluated on simulated data for isotropic and anisotropic conductivity distributions, with and without measurement noise. Simulation results show that the images of both anisotropic and isotropic conductivity distributions can be reconstructed successfully.

  8. A Novel Image Fusion Algorithm for Visible and PMMW Images based on Clustering and NSCT

    OpenAIRE

    Xiong Jintao; Xie Weichao; Yang Jianyu; Fu Yanlong; Hu Kuan; Zhong Zhibin

    2016-01-01

Aiming at the fusion of visible and Passive Millimeter Wave (PMMW) images, a novel algorithm based on clustering and NSCT (Nonsubsampled Contourlet Transform) is proposed. It takes advantage of the particular ability of PMMW images to reveal metal targets and uses a clustering algorithm on the PMMW image to extract the potential target regions. In the process of fusion, NSCT is applied to both input images, and then the decomposition coefficients on different scale are combined using differ...
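The clustering step that isolates bright metal-target regions can be illustrated with plain 1-D k-means on pixel intensities; this is a toy stand-in for the paper's clustering, and the NSCT fusion stage is not shown.

```python
def kmeans1d(values, k=2, iters=20):
    """Lloyd's k-means on scalar intensities (k=2 here: background vs. target)."""
    centers = [min(values), max(values)][:k]   # simple initialisation, k <= 2
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            j = min(range(len(centers)), key=lambda c: abs(v - centers[c]))
            groups[j].append(v)
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers

pmmw = [12, 15, 11, 200, 210, 14, 205, 13]   # bright values play the metal target
centers = sorted(kmeans1d(pmmw))
target = [v for v in pmmw if abs(v - centers[1]) < abs(v - centers[0])]
print(target)  # → [200, 210, 205]
```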

  9. Image quality evaluation of iterative CT reconstruction algorithms: a perspective from spatial domain noise texture measures

    Science.gov (United States)

    Pachon, Jan H.; Yadava, Girijesh; Pal, Debashish; Hsieh, Jiang

    2012-03-01

Non-linear iterative reconstruction (IR) algorithms have shown promising improvements in image quality at reduced dose levels. However, IR images may be perceived as having a different image noise texture than traditional filtered back projection (FBP) reconstructions. Standard linear-systems-based image quality metrics are limited in characterizing such textural differences and the non-linear image-quality vs. dose trade-off behavior, and hence in predicting the potential impact of such texture differences on diagnostic tasks. In an attempt to objectively characterize and measure dose-dependent image noise texture and the statistical properties of IR and FBP images, we have investigated higher-order moments and Haralick's Gray Level Co-occurrence Matrix (GLCM) texture features on phantom images reconstructed by an iterative and a traditional FBP method. In this study, the first 4 central moments and multiple Haralick GLCM texture features in 4 directions, at 6 different ROI sizes and four dose levels, were computed. For resolution, noise and texture trade-off analysis, spatial frequency domain NPS and contrast-dependent MTF were also computed. Preliminary results indicate that higher-order moments, along with spatial domain measures of energy, contrast, correlation, homogeneity, and entropy, consistently capture the textural differences between FBP and IR as dose changes. These metrics may be useful in describing the perceptual differences in randomness, coarseness, contrast, and smoothness of images reconstructed by non-linear algorithms.
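The GLCM-derived measures named above (contrast, homogeneity, energy, entropy) are computed from a normalized co-occurrence matrix; a minimal sketch on a tiny toy image, for one horizontal offset:

```python
from collections import Counter
import math

def glcm(img, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    pairs = Counter()
    h, w = len(img), len(img[0])
    for y in range(h - dy):
        for x in range(w - dx):
            pairs[(img[y][x], img[y + dy][x + dx])] += 1
    total = sum(pairs.values())
    return {k: v / total for k, v in pairs.items()}

def haralick(p):
    """Four of Haralick's classic GLCM features."""
    contrast = sum(q * (i - j) ** 2 for (i, j), q in p.items())
    homogeneity = sum(q / (1 + (i - j) ** 2) for (i, j), q in p.items())
    energy = sum(q * q for q in p.values())
    entropy = -sum(q * math.log2(q) for q in p.values() if q > 0)
    return contrast, homogeneity, energy, entropy

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
print(haralick(glcm(img)))
```

A real texture analysis would average these features over several offsets and directions, as the study does for 4 directions.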

  10. The diagnostic accuracy of MR imaging in osteoid osteoma

    Energy Technology Data Exchange (ETDEWEB)

    Davies, Mark; Cassar-Pullicino, Victor N.; McCall, Iain W.; Tyrrell, Prudencia N.M. [Department of Radiology, The Robert Jones and Agnes Hunt Orthopaedic Hospital, Oswestry, Shropshire, SY10 7AG (United Kingdom); Davies, Mark A. [The MRI Centre, Royal Orthopaedic Hospital, Birmingham (United Kingdom)

    2002-10-01

To analyse the MR imaging appearances of a large series of osteoid osteomas, to assess the ability of MR imaging to detect the tumour, and to identify potential reasons for misdiagnosis. Design and patients: The MR imaging findings of 43 patients with osteoid osteoma were reviewed retrospectively and then compared with other imaging modalities to assess the accuracy of MR localisation and interpretation. Results: The potential for a missed diagnosis was 35% based solely on the MR investigations. This included six tumours which were not seen and nine which were poorly visualised. The major determinants of the diagnostic accuracy of MR imaging were the MR technique, skeletal location, and preliminary radiographic appearances. There was a wide spectrum of MR signal appearances of the lesion. The tumour was identified in 65% of sequences performed in the axial plane. The nidus was present in only one slice of the optimal sequence in 27 patients. Reactive bone changes were present in 33 and soft tissue changes in 37 patients. Conclusion: Reliance on MR imaging alone may lead to misdiagnosis. As the osteoid osteoma may be difficult to identify and the MR features easily misinterpreted, optimisation of MR technique is crucial in reducing the risk of missing the diagnosis. Unexplained areas of bone marrow oedema in particular require further imaging (scintigraphy and CT) to exclude an osteoid osteoma. (orig.)

  11. [Diagnostic-therapeutic Algorithm in a Blunt Injury of the Thorax.].

    Science.gov (United States)

    Vyhnánek, F; Fanta, J; Lisý, P; Vojtísek, O; Cáp, F

    2000-01-01

Based on a group of 22 patients operated on for a blunt injury of the thorax, a diagnostic-therapeutic algorithm for the treatment of severe thoracic trauma was evaluated. Acute thoracotomy or laparotomy was performed in 17 patients, and in 5 of them thoracotomy was indicated only after some time interval. In the patients with acute surgery the indications were rupture of the diaphragm, massive haemothorax due to lung laceration or bleeding from the thoracic wall, rupture of a bronchus, and associated injury of intra-abdominal parenchymal organs. Delayed thoracotomy was performed for thoracic empyema, post-injury paresis of the diaphragm, and residual haematoma in the lung parenchyma. Key words: blunt injury of thorax, diagnostic-therapeutic algorithm, indication for acute or postponed operation.

  12. IMPLANT-ASSOCIATED PATHOLOGY: AN ALGORITHM FOR IDENTIFYING PARTICLES IN HISTOPATHOLOGIC SYNOVIALIS/SLIM DIAGNOSTICS

    Directory of Open Access Journals (Sweden)

    V. Krenn

    2014-01-01

Full Text Available In histopathologic SLIM diagnostics (synovial-like interface membrane, SLIM), particle identification plays an important role alongside the diagnosis of periprosthetic infection. Differences in particle pathogenesis and the variability of materials used in endoprosthetics produce a particle heterogeneity that hampers diagnostic identification. For this reason, a histopathological particle algorithm has been developed. With minimal methodical complexity, this algorithm offers a guide to identifying prosthesis-material particles. Light-microscopic morphological and enzyme-histochemical characteristics as well as polarization-optical properties are assessed, and particles are defined by size (microparticles, macroparticles and supra-macroparticles) and definitively characterized according to a dichotomous principle. Based on these criteria, identification and validation of the particles was carried out in 120 joint endoprosthesis pathology cases. A histopathological particle score (HPS) is proposed that summarizes the most important information concerning particle identification in the SLIM for the orthopedist, materials scientist and histopathologist.

  13. Simulated annealing spectral clustering algorithm for image segmentation

    Institute of Scientific and Technical Information of China (English)

Yifang Yang; Yuping Wang

    2014-01-01

The similarity measure is crucial to the performance of spectral clustering. The Gaussian kernel function based on the Euclidean distance is usually adopted as the similarity measure. However, the Euclidean distance measure cannot fully reveal the complex distribution data, and the result of spectral clustering is very sensitive to the scaling parameter. To solve these problems, a new manifold distance measure and a novel simulated annealing spectral clustering (SASC) algorithm based on the manifold distance measure are proposed. The simulated annealing based on genetic algorithm (SAGA), characterized by its rapid convergence to the global optimum, is used to cluster the sample points in the spectral mapping space. The proposed algorithm can not only reflect local and global consistency better, but also reduce the sensitivity of spectral clustering to the kernel parameter, which improves the algorithm's clustering performance. To efficiently apply the algorithm to image segmentation, the Nyström method is used to reduce the computation complexity. Experimental results show that compared with traditional clustering algorithms and those popular spectral clustering algorithms, the proposed algorithm can achieve better clustering performances on several synthetic datasets, texture images and real images.
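The simulated-annealing component can be illustrated with a toy annealer over cluster labels on 1-D points; this is a stand-in for the paper's SAGA search in the spectral embedding space, and the manifold distance and Nyström steps are omitted.

```python
import math
import random

def within_ss(points, labels, k):
    """Sum of squared distances to each cluster's mean (the cost to minimise)."""
    cost = 0.0
    for c in range(k):
        members = [p for p, l in zip(points, labels) if l == c]
        if members:
            mu = sum(members) / len(members)
            cost += sum((p - mu) ** 2 for p in members)
    return cost

def sa_cluster(points, k, steps=5000, t0=1.0, cooling=0.999, seed=0):
    rng = random.Random(seed)
    labels = [rng.randrange(k) for _ in points]
    cost, t = within_ss(points, labels, k), t0
    for _ in range(steps):
        i, new = rng.randrange(len(points)), rng.randrange(k)
        old = labels[i]
        labels[i] = new
        c2 = within_ss(points, labels, k)
        # accept improvements always; accept worse moves with Boltzmann probability
        if c2 <= cost or rng.random() < math.exp((cost - c2) / t):
            cost = c2
        else:
            labels[i] = old
        t *= cooling
    return labels, cost

pts = [0.0, 0.2, 0.1, 5.0, 5.3, 4.9]
labels, cost = sa_cluster(pts, 2)
print(labels, round(cost, 3))
```

As the temperature decays, the annealer settles into a low-cost partition separating the two point groups.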

  14. Filtered gradient reconstruction algorithm for compressive spectral imaging

    Science.gov (United States)

    Mejia, Yuri; Arguello, Henry

    2017-04-01

Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. Nevertheless, they are subject to physical constraints, and their structure usually follows a dense matrix distribution, as in the case of the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function comprising a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm is proposed that introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm, yielding improved image quality. Motivated by the structure of the CSI matrix Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual Φ^T y, where y is the compressive measurement vector. We show that the filtered algorithm converges to better quality results than the unfiltered version. Simulation results highlight the relative performance gain over existing iterative algorithms.
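The core idea, a gradient step on the quadratic data term followed by a filtering step in each iteration, can be sketched on a toy linear system; a 3-tap moving average stands in for the paper's filter, and the sparsity term is omitted.

```python
def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def smooth(x):
    """Filtering step: 3-tap moving average with replicated borders."""
    n = len(x)
    return [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, n - 1)]) / 3 for i in range(n)]

def filtered_gradient(A, y, steps=200, lr=0.1):
    At = transpose(A)
    x = [0.0] * len(A[0])
    for _ in range(steps):
        r = [yi - gi for yi, gi in zip(y, matvec(A, x))]    # residual y - A x
        g = matvec(At, r)                                    # gradient direction A^T r
        x = smooth([xi + lr * gi for xi, gi in zip(x, g)])   # step, then filter
    return x

# Toy underdetermined system: 2 measurements, 3 unknowns; the filter selects
# the smooth solution [1, 1, 1] from the infinitely many consistent ones.
A = [[1.0, 1.0, 0.0],
     [0.0, 1.0, 1.0]]
y = [2.0, 2.0]
x = filtered_gradient(A, y)
print([round(v, 3) for v in x])  # → [1.0, 1.0, 1.0]
```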

  15. Accuracy of diagnostic imaging in nephroblastoma before preoperative chemotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Rieden, K. [Radiologische Klinik, Abt. Klinische Radiologie, Heidelberg (Germany); Weirich, A. [Kinderklinik, Univ. of Heidelberg (Germany); Troeger, J. [Radiologische Klinik, Abt. Paediatrische Radiologie, Univ. of Heidelberg (Germany); Gamroth, A.H. [Deutsches Krebsforschungszentrum, Heidelberg (Germany); Raschke, K. [Radiologische Klinik, Abt. Paediatrische Radiologie, Univ. of Heidelberg (Germany); Ludwig, R. [Kinderklinik, Univ. of Heidelberg (Germany)

    1993-04-01

From July 1988 to February 1991, 130 children with the tentative diagnosis of nephroblastoma were treated preoperatively. The initial diagnostic images (excretory urography, ultrasound, CT, MRI) have been analysed both prospectively and retrospectively and the findings correlated with the intraoperative and histological results. Of the preoperatively treated patients 93.8% had a Wilms' tumour or one of its variants. Five patients had a different malignant tumour and 3 patients, i.e. 2.3% of those preoperatively treated or 1.6% of all registered patients, had benign tumours of the kidney. Wilms' tumour generally presented as a well-defined mass with an inhomogeneous morphology on CT. On ultrasound only 24% of the tumours were homogeneous. Intratumoral haemorrhage and cystic areas occurred frequently; calcifications were rare (8%). With regard to caval involvement only ultrasound and MRI enabled the correct diagnosis, while CT could not differentiate compression from invasion. The pretherapeutic diagnostic imaging was of sufficient accuracy to start preoperative chemotherapy without diagnostic biopsy. (orig.)

  16. Application of aptamers in diagnostics, drug-delivery and imaging

    Indian Academy of Sciences (India)

    CHETAN CHANDOLA; SHEETAL KALME; MARCO G CASTELEIJN; ARTO URTTI; MUNIASAMY NEERATHILINGAM

    2016-09-01

Aptamers are small, single-stranded oligonucleotides (DNA or RNA) that bind to their target with high specificity and affinity. Although aptamers are analogous to antibodies for a wide range of target recognition and a variety of applications, they have significant advantages over antibodies. Since aptamers have recently emerged as a class of biomolecules with applications in a wide array of fields, we need to summarize the latest developments herein. In this review we will discuss the latest developments in using aptamers in diagnostics, drug delivery and imaging. We begin with diagnostics, discussing the application of aptamers for the detection of infective agents themselves, antigens/toxins (bacteria), biomarkers (cancer), or a combination. The ease of conjugation and labelling of aptamers makes them a potential tool for diagnostics. Also, due to the reduced off-target effects of aptamers, their use as a potential drug delivery tool is emerging rapidly. Hence, we discuss their use in targeted delivery in conjugation with siRNAs, nanoparticles, liposomes, drugs and antibodies. Finally, we discuss the conjugation strategies applicable for RNA and DNA aptamers for imaging. Their stability and self-assembly after heating makes them superior over protein-based binding molecules in terms of labelling and conjugation strategies.

  17. Background Simulation and Correction Algorithm in Spot Weld Image Processing

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

One of the chief tasks in inspecting spot weld quality by X-ray is to obtain an ideal and uniform digital image. This paper introduces three image background simulation algorithms and compares their background correction effects. It may be safely said that the Kalman filter method is simple and fast for general images, while the FFT method has good adaptability for background simulation.

  18. Note: thermal imaging enhancement algorithm for gas turbine aerothermal characterization.

    Science.gov (United States)

    Beer, S K; Lawson, S A

    2013-08-01

    An algorithm was developed to convert radiation intensity images acquired using a black and white CCD camera to thermal images without requiring knowledge of incident background radiation. This unique infrared (IR) thermography method was developed to determine aerothermal characteristics of advanced cooling concepts for gas turbine cooling application. Compared to IR imaging systems traditionally used for gas turbine temperature monitoring, the system developed for the current study is relatively inexpensive and does not require calibration with surface mounted thermocouples.

  19. Clinical guidelines development and usage: a critical insight and literature review: thyroid disease diagnostic algorithms.

    Science.gov (United States)

    Murgić, Jure; Salopek, Daniela; Prpić, Marin; Jukić, Tomislav; Kusić, Zvonko

    2008-12-01

Clinical guidelines have been increasingly used in medicine. They represent a system of recommendations for the conduct of specific procedures, in fields ranging from public health to diagnostic and therapeutic procedures in clinical medicine. Guidelines are designed to help medical practitioners adopt, evaluate and apply the increasing body of evidence and the growing number of expert opinions regarding the presently best treatment, and to support sound decisions in the management of a patient or condition. Clinical guidelines are part of a complementary activity by which research is implemented into practice, standards are defined and clinical excellence is promoted in all health care fields. There are specific conditions which quality guidelines should meet. First of all, they need to be founded on a comprehensive literature review, as well as on clinical studies and trials in the target field. Also, there are several systems for analyzing and grading the strength of clinical evidence and the level of recommendation emerging from it. Algorithms are used to organize and summarize guidelines. An algorithm itself has the form of an informatic record with a logical flow. Algorithms, especially in cases of clinical uncertainty, should be used to improve health care, increase its availability and integrate the newest scientific knowledge. They should have an important role in the rationalisation of health care and in the fight against non-rational diagnostics, manifested as diagnostic procedures without clinical indication, their unnecessary repetition and wrong sequencing. Several diagnostic algorithms used in the field of thyroid diseases are presented, since they have proved to be of great use.

  20. Choice of diagnostic and therapeutic imaging in periodontics and implantology.

    Science.gov (United States)

    Chakrapani, Swarna; Sirisha, K; Srilalitha, Anumadi; Srinivas, Moogala

    2013-11-01

Imaging forms an integral component of the diagnosis of dental and, specifically, periodontal diseases. To date, intra-oral radiographic techniques have been the main non-invasive diagnostic aids for detecting and assessing internal changes in mineralized periodontal tissues such as alveolar bone. These analog radiographic techniques suffer from inherent limitations: two-dimensional projection, magnification, distortion, superimposition and misrepresentation of anatomic structures. The evolution of novel imaging modalities, namely cone beam computed tomography and tuned-aperture CT, has empowered dental researchers to visualize the periodontium three-dimensionally. This improves the interpretation of structural and biophysical changes and enables more precise densitometric assessment of dentoalveolar structures, including variations in alveolar bone density and peri-implant bone healing. This detailed review highlights current leading-edge concepts and surveys a wide range of imaging modalities which pave the way for better understanding and earlier intervention in periodontal diseases.

  1. Comparison of Different Post-Processing Algorithms for Dynamic Susceptibility Contrast Perfusion Imaging of Cerebral Gliomas.

    Science.gov (United States)

    Kudo, Kohsuke; Uwano, Ikuko; Hirai, Toshinori; Murakami, Ryuji; Nakamura, Hideo; Fujima, Noriyuki; Yamashita, Fumio; Goodwin, Jonathan; Higuchi, Satomi; Sasaki, Makoto

    2017-04-10

The purpose of the present study was to compare different software algorithms for processing DSC perfusion images of cerebral tumors with respect to i) the relative CBV (rCBV) calculated, ii) the cutoff value for discriminating low- and high-grade gliomas, and iii) the diagnostic performance for differentiating these tumors. Following institutional review board approval, informed consent was obtained from all patients. Thirty-five patients with primary glioma (grade II, 9; grade III, 8; and grade IV, 18 patients) were included. DSC perfusion imaging was performed with a 3-Tesla MRI scanner. CBV maps were generated using 11 different algorithms from four commercially available software packages and one academic program. The rCBV of each tumor relative to normal white matter was calculated by ROI measurements. Differences in rCBV were compared between algorithms for each tumor grade. Receiver operating characteristic (ROC) analysis was conducted to evaluate the diagnostic performance of the different algorithms for differentiating between grades. Several algorithms showed significant differences in rCBV, especially for grade IV tumors. When differentiating between low- (II) and high-grade (III/IV) tumors, the area under the ROC curve (Az) was similar (range 0.85-0.87), and there were no significant differences in Az between any pair of algorithms. In contrast, the optimal cutoff values varied between algorithms (range 4.18-6.53). rCBV values of tumors and cutoff values for discriminating low- and high-grade gliomas differed between software packages, suggesting that software-specific cutoff values should be used for the diagnosis of high-grade gliomas.

  2. Algorithm for image fusion via gradient correlation and difference statistics

    Science.gov (United States)

    Han, Jing; Wang, Li-juan; Zhang, Yi; Bai, Lian-fa; Mao, Ningjie

    2016-10-01

In order to overcome the shortcomings of traditional image fusion based on the discrete wavelet transform (DWT), a novel image fusion algorithm based on gradient correlation and difference statistics is proposed in this paper. The source images are decomposed into low-frequency and high-frequency coefficients by the DWT: the former are fused by a local-gradient-correlation-based scheme to extract the local feature information in the source images; the latter are fused by a neighbor-difference-statistics-based scheme to preserve the conspicuous edge information. Finally, the fused image is reconstructed by the inverse DWT. Experimental results show that the proposed method performs better than other methods in preserving details.
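The subband fusion scheme can be illustrated in 1-D with a level-1 Haar DWT, substituting simple stand-in rules (average the approximation coefficients, keep the larger-magnitude detail coefficient) for the paper's gradient-correlation and difference-statistics schemes:

```python
def haar_dwt(x):
    """One-level Haar DWT (even-length input): averages and differences."""
    lo = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    hi = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return lo, hi

def haar_idwt(lo, hi):
    out = []
    for a, d in zip(lo, hi):
        out += [a + d, a - d]
    return out

def fuse(x1, x2):
    lo1, hi1 = haar_dwt(x1)
    lo2, hi2 = haar_dwt(x2)
    lo = [(a + b) / 2 for a, b in zip(lo1, lo2)]                   # blend low-frequency content
    hi = [a if abs(a) >= abs(b) else b for a, b in zip(hi1, hi2)]  # keep the stronger edge
    return haar_idwt(lo, hi)

s1 = [1.0, 1.0, 8.0, 2.0]   # carries a strong edge in the second pair
s2 = [3.0, 3.0, 4.0, 4.0]
print(fuse(s1, s2))  # → [2.0, 2.0, 7.5, 1.5]
```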

  3. Experience with CANDID: Comparison algorithm for navigating digital image databases

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, P.; Cannon, M.

    1994-10-01

This paper presents results from the authors' experience with CANDID (Comparison Algorithm for Navigating Digital Image Databases), which was designed to facilitate image retrieval by content using a query-by-example methodology. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized similarity measure between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to a user-provided example image. Results for three test applications are included.

  4. INFORMATION SECURITY THROUGH IMAGE WATERMARKING USING LEAST SIGNIFICANT BIT ALGORITHM

    Directory of Open Access Journals (Sweden)

    Puneet Kr Sharma

    2012-05-01

Full Text Available The rapid advancement of the internet has made it easier to send data/images accurately and quickly to a destination. At the same time, it is also easier to modify and misuse valuable information through hacking. In order to transfer data/images securely to the destination without any modification, there are many approaches, such as cryptography, watermarking and steganography. This paper presents a general overview of image watermarking and the related security issues. Here, image watermarking using the Least Significant Bit (LSB) algorithm has been used to embed the message/logo into the image. The work has been implemented in MATLAB.
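LSB embedding itself is simple enough to sketch directly: each message bit replaces the least significant bit of one pixel value, changing it by at most 1. A pure-Python sketch on a flat list of 8-bit pixel values (the paper's MATLAB implementation works on real images):

```python
def embed_lsb(pixels, message):
    """Hide the message bits in the least significant bit of successive pixels."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for the message")
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b   # clear the LSB, then set it to the message bit
    return out

def extract_lsb(pixels, n_bytes):
    data = []
    for k in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[8 * k + i] & 1) << i
        data.append(byte)
    return bytes(data)

cover = list(range(50, 100))     # toy 8-bit grayscale pixel values
stego = embed_lsb(cover, b"hi")
print(extract_lsb(stego, 2))     # → b'hi'
```

Because only the lowest bit changes, the stego image is visually indistinguishable from the cover, which is the appeal of the LSB approach.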

  5. Distributed computing of Seismic Imaging Algorithms

    CERN Document Server

    Emami, Masnida; Jaberi, Nasrin

    2012-01-01

The primary use of technical computing in the oil and gas industries is for seismic imaging of the earth's subsurface, driven by the business need for making well-informed drilling decisions during petroleum exploration and production. Since each oil/gas well in exploration areas costs several tens of millions of dollars, producing high-quality seismic images in a reasonable time can significantly reduce the risk of drilling a "dry hole". Similarly, these images are important as they can improve the positioning of wells in a billion-dollar producing oil field. However, seismic imaging is very data- and compute-intensive: it needs to process terabytes of data and requires Gflop-years of computation (using "flop" to mean floating point operation). Due to this data/computing intensive nature, parallel computing is used to process the data and reduce the computation time. With the introduction of cloud computing, the MapReduce programming model has attracted a lot of attention in parallel and di...

  6. Medical Image Segmentation through Bat-Active Contour Algorithm

    Directory of Open Access Journals (Sweden)

    Rabiu O. Isah

    2017-01-01

Full Text Available In this research work, an improved active contour method called the Bat-Active Contour Method (BA-ACM), using the bat algorithm, has been developed. The bat algorithm is incorporated in order to escape the local minima entrapping the classical active contour method, stabilize contour (snake) movement and accurately reach boundary concavities. The developed method was then applied to a dataset of medical images of the human heart, bone of knee and vertebra obtained from the Auckland MRI Research Group (Cardiac Atlas Website, University of Auckland). A set of similarity metrics, including the Jaccard index and the Dice similarity measure, was adopted to evaluate the performance of the developed algorithm. Jaccard index values of 0.9310, 0.9234 and 0.8947 and Dice similarity values of 0.8341, 0.8616 and 0.9138 were obtained for the human heart, vertebra and bone of knee images respectively, showing high similarity between the BA-ACM segmentations and the expert segmentations. By contrast, the traditional ACM produced Jaccard index values of 0.5873, 0.5601 and 0.6009 and Dice similarity values of 0.5974, 0.6079 and 0.6102 on the same images, showing low similarity with the expert segmentations. It is evident from the results obtained that the developed algorithm performed better than the traditional ACM.

  7. Physical therapist practice and the role of diagnostic imaging.

    Science.gov (United States)

    Boyles, Robert E; Gorman, Ira; Pinto, Daniel; Ross, Michael D

    2011-11-01

    For healthcare providers involved in the management of patients with musculoskeletal disorders, the ability to order diagnostic imaging is a beneficial adjunct to screening for medical referral and differential diagnosis. A trial of conservative treatment, such as physical therapy, is often recommended prior to the use of imaging in many treatment guidelines for the management of musculoskeletal conditions. In the United States, physical therapists are becoming more autonomous and can practice some degree of direct access in 48 states and Washington, DC. Referral for imaging privileges could increase the effectiveness and efficiency of healthcare delivery, particularly in combination with direct access management. This clinical commentary proposes that, given the American Physical Therapy Association's goal to have physical therapists as primary care musculoskeletal specialists of choice, it would be beneficial for physical therapists to have imaging privileges in their practice. The purpose of this commentary is 3-fold: (1) to make a case for the use of imaging privileges by physical therapists, using a historical perspective; (2) to discuss the barriers preventing physical therapists from having this privilege; and (3) to offer suggestions on strategies and guidelines to facilitate the appropriate inclusion of referral for imaging privileges in physical therapist practice. J Orthop Sports Phys Ther 2011;41(11):829-837. doi:10.2519/jospt.2011.3556.

  8. Meteosat Images Encryption based on AES and RSA Algorithms

    Directory of Open Access Journals (Sweden)

    Boukhatem Mohammed Belkaid

    2015-06-01

Full Text Available Satellite image security plays a vital role in communication systems and the Internet. This work is concerned with securing the transmission of Meteosat images on the Internet, in public or local networks. To enhance the security of Meteosat transmission in network communication, a hybrid encryption algorithm based on the Advanced Encryption Standard (AES) and Rivest-Shamir-Adleman (RSA) algorithms is proposed. The AES algorithm is used for the data transmission because of its higher efficiency in block encryption, and the RSA algorithm is used to encrypt the AES key because of its advantages in key management. Our encryption system generates a unique password for every new encryption session. Cryptanalysis and various experiments have been carried out, and the results reported in this paper demonstrate the feasibility and flexibility of the proposed scheme.
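The hybrid pattern, a symmetric cipher for the bulk image data plus an asymmetric cipher to wrap the per-session key, can be sketched with deliberately toy stand-ins: a XOR stream plays the role of AES, and textbook RSA with tiny primes plays the role of the real RSA key wrap. Neither stand-in is secure; this only shows the key-flow.

```python
import secrets

def xor_stream(data, key):
    # toy symmetric cipher standing in for AES block encryption
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

# toy RSA parameters (real RSA uses primes hundreds of digits long)
p, q, e = 61, 53, 17
n = p * q
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                   # private exponent (Python 3.8+ modular inverse)

# sender: fresh session key each time, bulk-encrypt the image data,
# then wrap the session key with the receiver's public key (e, n)
session_key = secrets.token_bytes(8)
ciphertext = xor_stream(b"METEOSAT scanline", session_key)
wrapped = [pow(b, e, n) for b in session_key]

# receiver: unwrap the session key with the private key, then decrypt the bulk data
recovered_key = bytes(pow(c, d, n) for c in wrapped)
print(xor_stream(ciphertext, recovered_key))  # → b'METEOSAT scanline'
```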

  9. A New Algorithm of Sub-pixels Image Matching

    Institute of Scientific and Technical Information of China (English)

    Wu Jianming(吴建明); Xu Zhiyang; Shi Pengfei

    2004-01-01

    This paper presents a new algorithm for sub-pixel image matching and analyzes the characteristics of resampling and surface-fitting methods. To meet the matching requirements while limiting the computational workload, the following improvements are used. First, the model is resampled n times, producing (2n-1) sub-models, and the normalized correlations (NCs) between each sub-model and the image are calculated. The sub-model with the maximum NC is then chosen, and the displacement corresponding to this sub-model gives the sub-pixel displacement. Finally, a new algorithm combining resampling with surface fitting is put forward. Experimental results show the validity of the algorithm.
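    The resample-and-score idea can be sketched in one dimension as follows; the Gaussian model, the grid, and the function names are our own illustration of the NC search over fractional shifts, not the paper's implementation.

```python
import numpy as np

# Evaluate shifted, resampled copies of a continuous model on the pixel
# grid, score each against the observed patch with the normalized
# correlation (NC), and keep the shift with the highest NC.

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_subpixel_shift(model, x, patch, n=8, search=1.0):
    # Shifts in steps of 1/n pixel over [-search, +search].
    shifts = np.arange(-search, search + 1e-9, 1.0 / n)
    scores = [ncc(model(x - s), patch) for s in shifts]
    return float(shifts[int(np.argmax(scores))])

# Synthetic check: a Gaussian blob displaced by a true shift of 0.375 px.
gauss = lambda t: np.exp(-0.5 * t ** 2)
x = np.arange(-5.0, 5.0 + 0.5, 1.0)        # integer pixel grid
true_shift = 0.375
patch = gauss(x - true_shift)
est = best_subpixel_shift(gauss, x, patch, n=8)
```

With n = 8 the search grid contains the true shift of 3/8 px exactly, so the NC peaks there.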

  10. A diagnostic assessment of evolutionary algorithms for multi-objective surface water reservoir control

    Science.gov (United States)

    Zatarain Salazar, Jazmin; Reed, Patrick M.; Herman, Jonathan D.; Giuliani, Matteo; Castelletti, Andrea

    2016-06-01

    Globally, the pressures of expanding populations, climate change, and increased energy demands are motivating significant investments in re-operationalizing existing reservoirs or designing operating policies for new ones. These challenges require an understanding of the tradeoffs that emerge across the complex suite of multi-sector demands in river basin systems. This study benchmarks our current capabilities to use Evolutionary Multi-Objective Direct Policy Search (EMODPS), a decision-analytic framework in which candidate reservoir operating policies are represented using parameterized global approximators (e.g., radial basis functions) and those parameterized functions are then optimized with multi-objective evolutionary algorithms (MOEAs) to discover Pareto-approximate operating policies. We contribute a comprehensive diagnostic assessment of modern MOEAs' abilities to support EMODPS using the Conowingo reservoir in the Lower Susquehanna River Basin, Pennsylvania, USA. Our diagnostic results highlight that EMODPS can be very challenging for some modern MOEAs and that epsilon dominance, time-continuation, and auto-adaptive search are helpful for attaining high levels of performance. The ɛ-MOEA, the auto-adaptive Borg MOEA, and ɛ-NSGA-II all yielded superior results for the six-objective Lower Susquehanna benchmarking test case. The top algorithms show low sensitivity to different MOEA parameterization choices and high algorithmic reliability in attaining consistent results across different random MOEA trials. Overall, EMODPS is a promising method for discovering key reservoir management tradeoffs; however, algorithmic choice remains a key concern for problems of increasing complexity.
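    The epsilon-dominance mechanism credited above can be sketched as a box-dominance test; function names and epsilon values below are illustrative (minimization assumed), not taken from the study.

```python
import numpy as np

# Minimal sketch of the epsilon-dominance archiving test used by algorithms
# such as the epsilon-MOEA and the Borg MOEA: each objective vector is mapped
# to an epsilon-box index, and dominance is tested on boxes, which thins the
# archive to at most one solution per box.

def eps_box(f, eps):
    return np.floor(np.asarray(f) / np.asarray(eps)).astype(int)

def eps_dominates(f1, f2, eps):
    b1, b2 = eps_box(f1, eps), eps_box(f2, eps)
    return bool(np.all(b1 <= b2) and np.any(b1 < b2))

eps = [0.1, 0.1]
# Two near-identical tradeoffs collapse into the same epsilon-box ...
same_box = bool(np.all(eps_box([0.32, 0.58], eps) == eps_box([0.38, 0.51], eps)))
# ... while a clearly better point epsilon-dominates a worse one.
dom = eps_dominates([0.1, 0.1], [0.5, 0.5], eps)
```

The box granularity set by epsilon is what gives the archive a user-controlled resolution and helps the search maintain diversity.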

  11. Australian per caput dose from diagnostic imaging and nuclear medicine.

    Science.gov (United States)

    Hayton, A; Wallace, A; Marks, P; Edmonds, K; Tingey, D; Johnston, P

    2013-10-01

    The largest man-made contributor to the ionising radiation dose of the Australian population is diagnostic imaging and nuclear medicine. The last estimate of this dose was made in 2004 (1.3 mSv); this paper describes a recent re-evaluation reflecting changes in imaging trends and technology. The estimate was calculated by summing the dose from five modalities: computed tomography (CT), general radiography/fluoroscopy, interventional procedures, mammography and nuclear medicine. Estimates were made using Australian frequency data and average effective dose values from a range of Australian and international sources. The ionising radiation dose to the Australian population in 2010 from diagnostic imaging and nuclear medicine is estimated to be 1.7 mSv (1.11 mSv CT, 0.30 mSv general radiography/fluoroscopy, 0.17 mSv interventional procedures, 0.03 mSv mammography and 0.10 mSv nuclear medicine). This exceeds the estimate of 1.5 mSv per person from natural background and cosmic radiation.

  12. Iris recognition using image moments and k-means algorithm.

    Science.gov (United States)

    Khan, Yaser Daanial; Khan, Sher Afzal; Ahmad, Farooq; Islam, Saeed

    2014-01-01

    This paper presents a biometric technique for identification of a person using the iris image. The iris is first segmented from the acquired image of an eye using an edge detection algorithm. The disk-shaped area of the iris is transformed into rectangular form. The described moments are extracted from the grayscale image, yielding a feature vector of scale-, rotation- and translation-invariant moments. Images are clustered using the k-means algorithm and centroids for each cluster are computed. An arbitrary image is assigned to the cluster whose centroid is nearest to its feature vector in terms of Euclidean distance. The described model exhibits an accuracy of 98.5%.
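    The cluster-then-assign step can be sketched as below. The synthetic 4-D vectors stand in for the invariant-moment features of the unwrapped iris; the helper names are ours.

```python
import numpy as np

# k-means over feature vectors, then nearest-centroid assignment of an
# unseen vector, as described in the abstract. Data here is synthetic.

rng = np.random.default_rng(0)

def kmeans(X, k, iters=50):
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = np.argmin(dists, axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return centroids, labels

def identify(v, centroids):
    # Assign to the cluster whose centroid is nearest in Euclidean distance.
    return int(np.argmin(np.linalg.norm(centroids - v, axis=1)))

# Two well-separated synthetic "identities" in a 4-D moment space.
X = np.vstack([rng.normal(0.0, 0.1, (20, 4)), rng.normal(3.0, 0.1, (20, 4))])
centroids, labels = kmeans(X, k=2)
probe = rng.normal(3.0, 0.1, 4)            # a new sample from identity 2
```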

  13. Segmentation algorithms for ear image data towards biomechanical studies.

    Science.gov (United States)

    Ferreira, Ana; Gentil, Fernanda; Tavares, João Manuel R S

    2014-01-01

    In recent years, the segmentation, i.e. the identification, of ear structures in video-otoscopy, computerised tomography (CT) and magnetic resonance (MR) image data has gained significant importance in the medical imaging area, particularly for CT and MR imaging. Segmentation is the fundamental step of any automated technique for supporting medical diagnosis and, in particular in biomechanics studies, for building realistic geometric models of ear structures. In this paper, a review of the algorithms used in ear segmentation is presented. The review includes an introduction to the usual biomechanical modelling approaches and to the common imaging modalities. Afterwards, several segmentation algorithms for ear image data are described, and their specificities and difficulties as well as their advantages and disadvantages are identified and analysed using experimental examples. Finally, conclusions are presented along with a discussion of possible trends for future research concerning ear segmentation.

  14. Research on image self-recovery algorithm based on DCT

    Directory of Open Access Journals (Sweden)

    Shengbing Che

    2010-06-01

    An image compression operator based on the discrete cosine transform (DCT) is presented, and a more secure scrambling location operator is put forward based on the concept of an anti-tamper radius. The basic idea of the algorithm is to first combine the compressed data of an image block with the eigenvalues of the block and its offset block, then scramble or encrypt them and embed them into the least significant bits of the corresponding offset block. The algorithm can pinpoint tampered image blocks and the tampering type accurately. It can recover a tampered block with good image quality when the tampering occurs within the anti-tamper radius, and it effectively resists vector-quantization and synchronous counterfeiting attacks on self-embedding watermarking schemes.
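    The block-compression operator at the core of such self-recovery schemes can be sketched as follows: keep only the low-frequency corner of an 8x8 DCT as the recovery payload. The watermark embedding and scrambling stages are omitted, and all names are ours.

```python
import numpy as np

# 2-D DCT-II via an orthonormal transform matrix; compression keeps a
# keep x keep corner of coefficients as the block's recovery data.

def dct_matrix(n=8):
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

C = dct_matrix(8)

def compress_block(block, keep=4):
    coeffs = C @ block @ C.T                 # forward 2-D DCT
    low = np.zeros_like(coeffs)
    low[:keep, :keep] = coeffs[:keep, :keep] # recovery payload
    return low

def recover_block(low):
    return C.T @ low @ C                     # inverse 2-D DCT

# A smooth 8x8 intensity block is recovered closely from 1/4 of the plane.
smooth = np.outer(np.linspace(0, 1, 8), np.linspace(1, 2, 8)) * 100
restored = recover_block(compress_block(smooth, keep=4))
err = float(np.abs(restored - smooth).max())
```

Smooth blocks concentrate energy in the low-frequency corner, which is why a small payload per block suffices for visually acceptable recovery.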

  15. FCM Clustering Algorithms for Segmentation of Brain MR Images

    Directory of Open Access Journals (Sweden)

    Yogita K. Dubey

    2016-01-01

    The study of brain disorders requires accurate tissue segmentation of magnetic resonance (MR) brain images, which is very important for detecting tumors, edema, and necrotic tissues. Segmentation of brain images, especially into the three main tissue types, cerebrospinal fluid (CSF), gray matter (GM) and white matter (WM), plays an important role in computer-aided neurosurgery and diagnosis. Brain images mostly contain noise, intensity inhomogeneity, and weak boundaries; therefore, accurate segmentation of brain images is still a challenging area of research. This paper presents a review of fuzzy c-means (FCM) clustering algorithms for the segmentation of brain MR images. The review covers detailed analysis of FCM-based algorithms with intensity inhomogeneity correction and noise robustness. Different methods for modifying the standard fuzzy objective function, with updating of memberships and cluster centroids, are also discussed.
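    The standard FCM updates that the reviewed variants modify can be sketched as below (fuzzifier m = 2); the synthetic 1-D "intensities" merely stand in for MR voxel values.

```python
import numpy as np

# Alternate the two standard FCM updates: centroids v_j from membership-
# weighted means, memberships u_ij from inverse-distance ratios.

def fcm(X, c, m=2.0, iters=100, eps=1e-12):
    rng = np.random.default_rng(0)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)        # rows sum to 1
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]          # centroid update
        D = np.linalg.norm(X[:, None] - V[None], axis=2) + eps
        inv = D ** -p
        U = inv / inv.sum(axis=1, keepdims=True)          # membership update
    return U, V

# Three synthetic tissue intensity clusters (CSF-, GM-, WM-like means).
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(mu, 2.0, 50) for mu in (30.0, 90.0, 150.0)])[:, None]
U, V = fcm(X, c=3)
```

The inhomogeneity-corrected variants surveyed in the paper modify the distance term or add regularizers to this objective; the alternation structure stays the same.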

  16. Diagnostic imaging, a "parallel" discipline. Can current technology provide a reliable digital diagnostic radiology department?

    Science.gov (United States)

    Moore, C J; Eddleston, B

    1985-04-01

    Only recently has any detailed criticism been voiced about the practicalities of the introduction of generalised, digital, imaging complexes in diagnostic radiology. Although attendant technological problems are highlighted we argue that the fundamental causes of current difficulties are not in the generation but in the processing, filing and subsequent retrieval for display of digital image records. In the real world, looking at images is a parallel process of some complexity and so it is perhaps untimely to expect versatile handling of vast image data bases by existing computer hardware and software which, by their current nature, perform tasks serially. Successes in applying new imaging devices using digital technology, numerical methods and more easily available computing power are directing radiology towards the concept of all-digital departmental complexes. Hence a critical discussion of fundamental problems should be encouraged, to promote a thorough understanding of what may be involved (Gray et al, 1984) in following such a course. It is equally important to gain some perspective about the development possibilities for existing, commercially available equipment being offered to the medical community.

  17. Optimization-Based Image Segmentation by Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    H. Laurent

    2008-05-01

    Many works in the literature focus on the definition of evaluation metrics and criteria that make it possible to quantify the performance of an image processing algorithm. These evaluation criteria can be used to define new image processing algorithms by optimizing them. In this paper, we propose a general scheme for segmenting images with a genetic algorithm. The developed method uses an evaluation criterion that quantifies the quality of an image segmentation result. The proposed segmentation method can integrate a local ground truth, when available, to set the desired level of precision of the final result. A genetic algorithm is then used to determine the best combination of information extracted by the selected criterion. We show that this approach can be applied to gray-level or multicomponent images, in either a supervised or an unsupervised context. Last, we show the efficiency of the proposed method through experimental results on several gray-level and multicomponent images.
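    The optimize-the-criterion idea can be illustrated in miniature: a genetic algorithm searches for a segmentation parameter (here a single global threshold) that maximizes an evaluation criterion (here between-class variance, standing in for the criteria discussed in the paper). All names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def criterion(img, t):
    # Between-class variance of the two classes induced by threshold t.
    lo, hi = img[img < t], img[img >= t]
    if len(lo) == 0 or len(hi) == 0:
        return 0.0
    w0, w1 = len(lo) / img.size, len(hi) / img.size
    return w0 * w1 * (lo.mean() - hi.mean()) ** 2

def ga_threshold(img, pop_size=20, gens=40, mut=8.0):
    pop = rng.uniform(img.min(), img.max(), pop_size)
    for _ in range(gens):
        fit = np.array([criterion(img, t) for t in pop])
        parents = pop[np.argsort(fit)[-pop_size // 2:]]        # selection
        kids = rng.choice(parents, pop_size - len(parents))    # resampling
        kids = kids + rng.normal(0, mut, len(kids))            # mutation
        pop = np.concatenate([parents, kids])
    fit = np.array([criterion(img, t) for t in pop])
    return float(pop[np.argmax(fit)])

# Bimodal synthetic "image": background ~40, object ~180.
img = np.concatenate([rng.normal(40, 10, 500), rng.normal(180, 10, 500)])
t = ga_threshold(img)
```

The paper's scheme evolves richer segmentation parameterizations against richer criteria (possibly with local ground truth), but the evaluate-select-mutate loop is the same.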

  19. Analyzing the Efficiency of Text-to-Image Encryption Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmad Abusukhon

    2012-12-01

    Today many activities are performed online through the Internet. One of the methods used to protect data while sending it through the Internet is cryptography. In previous work we proposed the Text-to-Image Encryption algorithm (TTIE) as a novel algorithm for network security. In this paper we investigate the efficiency of TTIE for large-scale collections.

  20. Optimization of image processing algorithms on mobile platforms

    Science.gov (United States)

    Poudel, Pramod; Shirvaikar, Mukul

    2011-03-01

    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, netbooks and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND RAM and 256 MB SDRAM memory. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application in various template matching tasks such as face recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core, which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images and performance results are presented measuring the speedup obtained due to the dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks, while the DSP addresses performance-hungry algorithms.

  1. COMPARISON OF DIFFERENT SEGMENTATION ALGORITHMS FOR DERMOSCOPIC IMAGES

    Directory of Open Access Journals (Sweden)

    A.A. Haseena Thasneem

    2015-05-01

    This paper compares different algorithms for the segmentation of skin lesions in dermoscopic images. The basic segmentation algorithms compared are thresholding techniques (global and adaptive), region-based techniques (k-means, fuzzy c-means, expectation maximization and statistical region merging), contour models (the active contour model and the Chan-Vese model) and spectral clustering. Accuracy, sensitivity, specificity, border error, Hammoude distance, Hausdorff distance, MSE, PSNR and elapsed-time metrics were used to evaluate the various segmentation techniques.
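    A few of the listed mask-comparison metrics can be computed directly from two binary segmentations; the definitions below follow the usual conventions (Hammoude distance as XOR area over union area), and the helper names are ours.

```python
import numpy as np

# Sensitivity, specificity and Hammoude distance for a ground-truth mask G
# and a segmentation result S.

def seg_metrics(G, S):
    G, S = G.astype(bool), S.astype(bool)
    tp = np.sum(G & S)
    tn = np.sum(~G & ~S)
    fp = np.sum(~G & S)
    fn = np.sum(G & ~S)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    hammoude = np.sum(G ^ S) / np.sum(G | S)   # XOR area over union area
    return sensitivity, specificity, hammoude

G = np.zeros((8, 8), dtype=bool); G[2:6, 2:6] = True   # 16-px "lesion"
S = np.zeros((8, 8), dtype=bool); S[2:6, 3:7] = True   # result shifted 1 px
sens, spec, ham = seg_metrics(G, S)
```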

  2. An infrared image enhancement algorithm based on HVS

    Science.gov (United States)

    Xue, Rongkun; He, Wei; Liu, Jiahui; Li, Yufeng

    2016-10-01

    Because infrared images have the disadvantages of low contrast and fuzzy edges, they are not well suited to direct observation, so enhancement is necessary before recognition. Because existing enhancement methods do not take the characteristics of the human visual system (HVS) into account, the visual effect of the processed images is not good. Therefore, this paper proposes an enhancement algorithm for infrared images that combines multi-resolution wavelet transform with Retinex theory and incorporates the characteristics of the HVS, in order to strengthen the high-frequency details of infrared images, make the illumination uniform and keep the brightness of the IR images moderate. Experimental results and data analysis show that the algorithm not only improves the low contrast and fuzzy detail of infrared images but also suppresses noise, strengthening the overall visual effect.

  3. Oil exploration oriented multi-sensor image fusion algorithm

    Science.gov (United States)

    Xiaobing, Zhang; Wei, Zhou; Mengfei, Song

    2017-04-01

    In order to accurately forecast fractures and the dominant fracture direction in oil exploration, this paper proposes a novel multi-sensor image fusion algorithm. The main innovations lie in introducing the dual-tree complex wavelet transform (DTCWT) into data fusion and dividing an image into several regions before fusion. The DTCWT is a type of wavelet transform designed to solve the problem of signal decomposition and reconstruction using two parallel transforms of real wavelets. We utilize the DTCWT to segment the features of the input images and generate a region map, and then exploit the normalized Shannon entropy of a region to design the priority function. To test the effectiveness of the proposed multi-sensor image fusion algorithm, four standard pairs of images are used to construct the dataset. Experimental results demonstrate that the proposed algorithm achieves high accuracy in multi-sensor image fusion, especially for oil exploration images.

  4. Parallel transformation of K-SVD solar image denoising algorithm

    Science.gov (United States)

    Liang, Youwen; Tian, Yu; Li, Mei

    2017-02-01

    The images obtained by observing the sun through a large telescope always suffer from noise due to the low SNR. The K-SVD denoising algorithm can effectively remove Gaussian white noise, but training dictionaries for sparse representations is a time-consuming task, due to the large size of the data involved and the complexity of the training algorithms. In this paper, OpenMP parallel programming is used to transform the serial algorithm into a parallel version, following a data-parallelism model. The biggest change is that multiple atoms, rather than one, are updated simultaneously. The denoising effect and acceleration performance were tested after completion of the parallel algorithm. The speedup of the program is 13.563 when using 16 cores. This parallel version can fully utilize multi-core CPU hardware resources, greatly reduces running time and is easy to port to multi-core platforms.

  5. Rational diagnostic approach to biliary tract imaging

    Energy Technology Data Exchange (ETDEWEB)

    Helmberger, H.; Huppertz, A.; Ruell, T. [Technische Univ. Muenchen (Germany). Inst. fuer Roentgendiagnostik; Zillinger, C.; Ehrenberg, C.; Roesch, T. [Technische Univ. Muenchen (Germany). 2. Medizinische Klinik und Poliklinik

    1998-04-01

    Since the introduction of MR cholangiography (MRC), diagnostic imaging of the biliary tract has improved significantly. While percutaneous ultrasonography is still the primary examination, computed tomography (CT), conventional magnetic resonance imaging (MRI), and the direct imaging modalities of the biliary tract (intravenous cholangiography, endoscopic retrograde cholangiography (ERC) and percutaneous transhepatic cholangiography (PTC)) are also in use. This article discusses the clinical value of the different diagnostic techniques for the various biliary pathologies, with special attention to recent developments in MRC techniques. An algorithm is presented offering a rational approach to biliary disorders. With further technical improvement, a shift from ERC(P) to MRC(P) for biliary imaging can be envisioned, with ERCP concentrating on its role as a minimally invasive treatment option.

  6. Chaos-Based Image Encryption Algorithm Using Decomposition

    Directory of Open Access Journals (Sweden)

    Xiuli Song

    2013-07-01

    The proposed chaos-based image encryption algorithm consists of four stages: decomposition, shuffle, diffusion and combination. Decomposition means that the original image is decomposed into components according to some rule. The purpose of the shuffle is to mask the original organization of the pixels of the image, and that of the diffusion is to change their values; the combination stage is not necessary on the sender side. To improve efficiency, a parallel architecture is adopted to process the shuffle and diffusion. To enhance the security of the algorithm, firstly, a permutation of the labels is designed. Secondly, two logistic maps are used in the diffusion stage to encrypt the components: one map encrypts the odd rows of the component and the other encrypts the even rows. Experimental results and security analysis demonstrate that the encryption algorithm is robust and flexible, and can withstand common attacks such as statistical and differential attacks.
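    The two-map diffusion stage can be sketched as follows; the shuffle/permutation stage is omitted, and the map parameters and keys are illustrative, not the paper's.

```python
import numpy as np

# Diffusion via XOR with keystreams from two logistic maps x <- r*x*(1-x):
# one map drives the even rows, the other the odd rows. XOR diffusion with
# a regenerated keystream is its own inverse.

def diffuse(img, key_even=0.3137, key_odd=0.6213, r=3.99):
    out = img.copy()
    state = {0: key_even, 1: key_odd}       # one chaotic state per row parity
    for i in range(out.shape[0]):
        p = i % 2
        ks = np.empty(out.shape[1], dtype=np.uint8)
        for j in range(out.shape[1]):
            state[p] = r * state[p] * (1 - state[p])
            ks[j] = int(state[p] * 256) % 256
        out[i] ^= ks
    return out

rng = np.random.default_rng(3)
component = rng.integers(0, 256, (8, 16), dtype=np.uint8)
cipher = diffuse(component)
plain = diffuse(cipher)                     # same keys -> decryption
```

Because each parity class consumes its own chaotic trajectory, the two row families can be processed by independent workers, which is the parallelism the abstract alludes to.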

  7. The Peak Pairs algorithm for strain mapping from HRTEM images

    Energy Technology Data Exchange (ETDEWEB)

    Galindo, Pedro L. [Departamento de Lenguajes y Sistemas Informaticos, CASEM, Universidad de Cadiz, Pol. Rio San Pedro s/n. 11510, Puerto Real, Cadiz (Spain)], E-mail: pedro.galindo@uca.es; Kret, Slawomir [Institute of Physics, PAS, AL. Lotnikow 32/46, 02-668 Warsaw (Poland); Sanchez, Ana M. [Departamento de Ciencia de los Materiales e Ing. Metalurgica y Q. Inorganica, Facultad de Ciencias, Universidad de Cadiz, Pol. Rio San Pedro s/n. 11510, Puerto Real, Cadiz (Spain); Laval, Jean-Yves [Laboratoire de Physique du Solide, UPR5 CNRS-ESPCI, Paris (France); Yanez, Andres; Pizarro, Joaquin; Guerrero, Elisa [Departamento de Lenguajes y Sistemas Informaticos, CASEM, Universidad de Cadiz, Pol. Rio San Pedro s/n. 11510, Puerto Real, Cadiz (Spain); Ben, Teresa; Molina, Sergio I. [Departamento de Ciencia de los Materiales e Ing. Metalurgica y Q. Inorganica, Facultad de Ciencias, Universidad de Cadiz, Pol. Rio San Pedro s/n. 11510, Puerto Real, Cadiz (Spain)

    2007-11-15

    Strain mapping is defined as a numerical image-processing technique that measures the local shifts of image details around a crystal defect with respect to the ideal, defect-free, positions in the bulk. Algorithms to map elastic strains from high-resolution transmission electron microscopy (HRTEM) images may be classified into two categories: those based on the detection of peaks of intensity in real space and the Geometric Phase approach, calculated in Fourier space. In this paper, we discuss both categories and propose an alternative real space algorithm (Peak Pairs) based on the detection of pairs of intensity maxima in an affine transformed space dependent on the reference area. In spite of the fact that it is a real space approach, the Peak Pairs algorithm exhibits good behaviour at heavily distorted defect cores, e.g. interfaces and dislocations. Quantitative results are reported from experiments to determine local strain in different types of semiconductor heterostructures.

  8. Diagnostic performance of line-immunoassay based algorithms for incident HIV-1 infection

    Directory of Open Access Journals (Sweden)

    Schüpbach Jörg

    2012-04-01

    Full Text Available Abstract Background Serologic testing algorithms for recent HIV seroconversion (STARHS provide important information for HIV surveillance. We have previously demonstrated that a patient's antibody reaction pattern in a confirmatory line immunoassay (INNO-LIA™ HIV I/II Score provides information on the duration of infection, which is unaffected by clinical, immunological and viral variables. In this report we have set out to determine the diagnostic performance of Inno-Lia algorithms for identifying incident infections in patients with known duration of infection and evaluated the algorithms in annual cohorts of HIV notifications. Methods Diagnostic sensitivity was determined in 527 treatment-naive patients infected for up to 12 months. Specificity was determined in 740 patients infected for longer than 12 months. Plasma was tested by Inno-Lia and classified as either incident ( Results The 10 best algorithms had a mean raw sensitivity of 59.4% and a mean specificity of 95.1%. Adjustment for overrepresentation of patients in the first quarter year of infection further reduced the sensitivity. In the preferred model, the mean adjusted sensitivity was 37.4%. Application of the 10 best algorithms to four annual cohorts of HIV-1 notifications totalling 2'595 patients yielded a mean IIR of 0.35 in 2005/6 (baseline and of 0.45, 0.42 and 0.35 in 2008, 2009 and 2010, respectively. The increase between baseline and 2008 and the ensuing decreases were highly significant. Other adjustment models yielded different absolute IIR, although the relative changes between the cohorts were identical for all models. Conclusions The method can be used for comparing IIR in annual cohorts of HIV notifications. The use of several different algorithms in combination, each with its own sensitivity and specificity to detect incident infection, is advisable as this reduces the impact of individual imperfections stemming primarily from relatively low sensitivities and

  9. Lesion detection in magnetic resonance brain images by hyperspectral imaging algorithms

    Science.gov (United States)

    Xue, Bai; Wang, Lin; Li, Hsiao-Chi; Chen, Hsian Min; Chang, Chein-I.

    2016-05-01

    Magnetic resonance (MR) images can be considered multispectral images, so MR imaging can be processed by multispectral imaging techniques such as maximum likelihood classification. Unfortunately, most multispectral imaging techniques are not particularly designed for target detection. On the other hand, hyperspectral imaging was primarily developed to address subpixel detection and mixed-pixel classification, for which multispectral imaging is generally not effective. This paper takes advantage of hyperspectral imaging techniques to develop target detection algorithms that find lesions in MR brain images. Since MR images are collected with only three image sequences, T1, T2 and PD, a hyperspectral imaging technique applied to MR images suffers from insufficient dimensionality. To address this issue, two approaches to nonlinear dimensionality expansion are proposed: nonlinear correlation expansion and nonlinear band-ratio expansion. Once dimensionality is expanded, hyperspectral imaging algorithms are readily applied. The hyperspectral detection algorithm investigated for lesion detection in MR brain images is the well-known subpixel target detection algorithm called Constrained Energy Minimization (CEM). To demonstrate the effectiveness of the proposed approach, synthetic images provided by BrainWeb are used in the experiments.
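    The CEM detector named above has a closed form: given the sample correlation matrix R of the pixels and a target signature d, the filter is w = R⁻¹d / (dᵀR⁻¹d), which outputs exactly 1 on the signature while minimizing output energy elsewhere. A minimal sketch on synthetic data (diagonal loading and all names are ours):

```python
import numpy as np

# Constrained Energy Minimization over a stack of pixel spectra.

def cem(pixels, d, load=1e-6):
    # pixels: (num_pixels, bands); d: (bands,) target signature
    R = pixels.T @ pixels / len(pixels)     # sample correlation matrix
    R += load * np.eye(R.shape[0])          # diagonal loading for stability
    Rinv_d = np.linalg.solve(R, d)
    w = Rinv_d / (d @ Rinv_d)               # w = R^-1 d / (d^T R^-1 d)
    return pixels @ w                       # detection score per pixel

rng = np.random.default_rng(4)
bands = 6
background = rng.normal(0, 1, (500, bands))
d = rng.normal(0, 1, bands)
scene = np.vstack([background, d])          # last "pixel" is a pure target
scores = cem(scene, d)
```

The unit-response constraint is what lets CEM pull out a known signature without modeling the background explicitly, which is why it transfers to the dimensionality-expanded MR data described in the abstract.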

  10. Adaptive wavelet transform algorithm for image compression applications

    Science.gov (United States)

    Pogrebnyak, Oleksiy B.; Manrique Ramirez, Pablo

    2003-11-01

    A new algorithm for a locally adaptive wavelet transform is presented. The algorithm implements the integer-to-integer lifting scheme and adapts the wavelet function at the prediction stage to the local image data activity. The proposed algorithm is based on the generalized framework for the lifting scheme, which makes it easy to obtain different wavelet coefficients in the case of (Ñ, N) lifting. Hard switching between the (2, 4) and (4, 4) lifting filter outputs is performed according to an estimate of the local data activity: when the activity is high, i.e. in the vicinity of edges, the (4, 4) lifting is performed; otherwise, in plain areas, the (2, 4) decomposition coefficients are calculated. The calculations are simple enough to permit implementation of the algorithm on fixed-point DSP processors. The proposed adaptive transform provides perfect restoration of the processed data and good energy compaction. The algorithm was tested on different images and can be used for lossless image/signal compression.
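    A toy version of the adaptive-predict idea is sketched below: the predict step switches between a 2-tap average and a 4-tap interpolating predictor per odd sample, driven by an activity estimate computed from the even samples only (so the synthesis side can repeat the decision exactly). The paper's update step is omitted, and the predictor taps, threshold and names are our illustration, not the paper's filters.

```python
import numpy as np

def predict(evens, i, taps):
    n = len(evens)
    if taps == 2 or i == 0 or i >= n - 2:
        # 2-tap average predictor (also used at the borders).
        return (int(evens[i]) + int(evens[min(i + 1, n - 1)])) // 2
    # 4-tap interpolating predictor (-1, 9, 9, -1) / 16, integer-rounded.
    w = -int(evens[i - 1]) + 9 * int(evens[i]) + 9 * int(evens[i + 1]) - int(evens[i + 2])
    return w // 16

def activity(evens, i):
    # Local-activity estimate from even samples only.
    j = min(i + 1, len(evens) - 1)
    return abs(int(evens[j]) - int(evens[i]))

def analyze(x, thresh=10):
    evens, odds = x[::2].copy(), x[1::2].copy()
    details = np.array([int(odds[i]) -
                        predict(evens, i, 2 if activity(evens, i) > thresh else 4)
                        for i in range(len(odds))])
    return evens, details

def synthesize(evens, details, thresh=10):
    # Same evens -> same activity decision -> exact inversion of the predict step.
    odds = np.array([int(details[i]) +
                     predict(evens, i, 2 if activity(evens, i) > thresh else 4)
                     for i in range(len(details))])
    out = np.empty(len(evens) + len(odds), dtype=int)
    out[::2], out[1::2] = evens, odds
    return out

x = np.array([3, 5, 8, 13, 21, 90, 95, 97, 96, 94, 10, 8])
e, d = analyze(x)
rec = synthesize(e, d)
```

Because the switching decision depends only on data available at both ends, the transform stays perfectly invertible, which is the property that enables lossless compression.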

  11. A novel highly parallel algorithm for linearly unmixing hyperspectral images

    Science.gov (United States)

    Guerra, Raúl; López, Sebastián.; Callico, Gustavo M.; López, Jose F.; Sarmiento, Roberto

    2014-10-01

    Endmember extraction and abundance calculation represent critical steps within the process of linearly unmixing a given hyperspectral image, for two main reasons. The first is the need to compute a set of accurate endmembers in order to obtain confident abundance maps; the second is the huge number of operations involved in these time-consuming processes. This work proposes an algorithm that estimates the endmembers of a hyperspectral image under analysis and its abundances at the same time. The main advantages of this algorithm are its high degree of parallelization and the mathematical simplicity of the operations implemented. The algorithm estimates the endmembers as virtual pixels; in particular, it performs gradient descent to iteratively refine the endmembers and the abundances, reducing the mean square error according to the linear unmixing model. Some mathematical restrictions must be added so that the method converges to a unique and realistic solution; given the nature of the algorithm, these restrictions can be easily implemented. The results obtained with synthetic images demonstrate the good behaviour of the proposed algorithm, and the results obtained with the well-known Cuprite dataset corroborate the benefits of our proposal.
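    The gradient part of such a scheme can be sketched for the abundance update alone (endmembers held fixed, for brevity): descend on ||x − Ea||² and project onto the constraints of the linear mixing model (non-negativity and sum-to-one). The projection routine and all names are our illustration.

```python
import numpy as np

def project_simplex(a):
    # Euclidean projection onto {a >= 0, sum(a) = 1}.
    u = np.sort(a)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(a)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(a + theta, 0)

def unmix_pixel(x, E, steps=1000, lr=0.05):
    # x: (bands,) observed pixel; E: (bands, num_endmembers) fixed endmembers
    a = np.full(E.shape[1], 1.0 / E.shape[1])
    for _ in range(steps):
        grad = E.T @ (E @ a - x)            # gradient of 0.5*||x - E a||^2
        a = project_simplex(a - lr * grad)  # enforce the mixing constraints
    return a

rng = np.random.default_rng(5)
E = rng.uniform(0, 1, (20, 3))              # 3 synthetic endmember spectra
true_a = np.array([0.6, 0.3, 0.1])
x = E @ true_a                              # noiseless mixed pixel
a_hat = unmix_pixel(x, E)
```

Each pixel's update is independent of the others, which is the property that gives the full algorithm its high degree of parallelism.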

  12. [Positron emission tomography: diagnostic imaging on a molecular level].

    Science.gov (United States)

    Allemann, K; Wyss, M; Wergin, M; Bley, C Rohrer; Ametamay, S; Bruehlmeier, M; Kaser-Hotz, B

    2004-08-01

    In human medicine, positron emission tomography (PET) is a modern diagnostic imaging method. In the present paper we outline the physical principles of PET and give an overview of the main clinical fields where PET is used, such as neurology, cardiology and oncology. Moreover, we present a current project in veterinary medicine (in collaboration with the Paul Scherrer Institute and the University Hospital Zurich) in which a hypoxia tracer is applied to dogs and cats suffering from spontaneous tumors. Finally, new developments in the field of PET are discussed.

  13. Simultaneous imaging/reflectivity measurements to assess diagnostic mirror cleaning

    Science.gov (United States)

    Skinner, C. H.; Gentile, C. A.; Doerner, R.

    2012-10-01

    Practical methods to clean ITER's diagnostic mirrors and restore reflectivity will be critical to ITER's plasma operations. We describe a technique to assess the efficacy of mirror cleaning techniques and detect any damage to the mirror surface. The method combines microscopic imaging and reflectivity measurements in the red, green, and blue spectral regions and at selected wavelengths. The method has been applied to laser cleaning of single crystal molybdenum mirrors coated with either carbon or beryllium films 150-420 nm thick. It is suitable for hazardous materials such as beryllium as the mirrors remain sealed in a vacuum chamber.

  15. Incident Light Frequency-Based Image Defogging Algorithm

    Directory of Open Access Journals (Sweden)

    Wenbo Zhang

    2017-01-01

    To solve the color distortion problem produced by the dark channel prior algorithm, an improved method for calculating the transmittance of each channel separately was proposed in this paper. Based on the Beer-Lambert law, the relationship between the frequency of the incident light and the transmittance was analyzed, and the ratios between the channels' transmittances were derived. Then, in order to increase efficiency, the input image was resized to a smaller size before acquiring the refined transmittance, which was then resized back to the size of the original image. Finally, all the transmittances were obtained with the help of the proportions between the color channels, and they were used to restore the defogged image. Experiments suggest that the improved algorithm produces a much more natural result image than the original algorithm, eliminating the problem of high color saturation. Moreover, the improved algorithm runs four to nine times faster than the original.
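The per-channel recovery step can be illustrated by inverting the standard atmospheric scattering model I = J·t + A·(1 − t) with a different transmittance per color channel. This is a hedged sketch: the per-channel exponents and the t_min floor are illustrative placeholders, not the ratios derived in the paper.

```python
import numpy as np

def defog_channelwise(I, airlight, t_base, ratios=(1.0, 0.95, 0.9), t_min=0.1):
    """Invert I = J * t + A * (1 - t) with a per-channel transmittance.

    t_base is a transmittance map for a reference channel; each channel's map
    is obtained by exponentiation with an illustrative per-channel ratio
    (under the Beer-Lambert law, channel transmittances are powers of one
    another).
    """
    J = np.empty_like(I, dtype=float)
    for c in range(3):
        t_c = np.maximum(t_base ** ratios[c], t_min)  # channel transmittance
        J[..., c] = (I[..., c] - airlight[c]) / t_c + airlight[c]
    return np.clip(J, 0.0, 1.0)
```

On a synthetic hazy image built with the same model, the inversion recovers the clear scene exactly.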

  16. Statistical Methods for Analyzing Tissue Microarray Images - Algorithmic Scoring and Co-training

    CERN Document Server

    Yan, Donghui; Knudsen, Beatrice S; Linden, Michael; Randolph, Timothy W

    2011-01-01

    Recent advances in tissue microarray technology have allowed immunohistochemistry to become a powerful medium-to-high throughput analysis tool, particularly for the validation of diagnostic and prognostic biomarkers. However, as study size grows, the manual evaluation of these assays becomes a prohibitive limitation; it vastly reduces throughput and greatly increases variability and expense. We propose an algorithm - Tissue Array Co-Occurrence Matrix Analysis (TACOMA) - for quantifying cellular phenotypes based on textural regularity summarized by local inter-pixel relationships. The algorithm can be easily trained for any staining pattern, has no sensitive tuning parameters and has the ability to report salient pixels in an image that contribute to its score. Pathologists' input via informative training patches is an important aspect of the algorithm that allows the training for any specific marker or cell type. With co-training, TACOMA can be trained with a radically small training sample (e.g., with ...

  17. Optimum image compression rate maintaining diagnostic image quality of digital intraoral radiographs

    Energy Technology Data Exchange (ETDEWEB)

    Song, Ju Seop; Koh, Kwang Joon [Dept. of Oral and Maxillofacial Radiology and Institute of Oral Bio Science, School of Dentistry, Chonbuk National University, Chonju (Korea, Republic of)

    2000-12-15

    The aims of the present study were to determine the optimum compression rate in terms of file size reduction and diagnostic quality of the images after compression, and to evaluate the transmission speed of the original and compressed images. The material consisted of 24 extracted human premolars and molars. The occlusal and proximal surfaces of the teeth had a clinical disease spectrum that ranged from sound to varying degrees of fissure discoloration and cavitation. The images from the Digora system were exported in TIFF, and the images from conventional intraoral film were scanned and digitized in TIFF with a Nikon SF-200 scanner (Nikon, Japan). Six compression factors were chosen and applied on the basis of the results from a pilot study, giving a total of 336 images to be assessed. Three radiologists assessed the occlusal and proximal surfaces of the teeth on a 5-rank scale, and each surface was finally diagnosed as either sound or carious by one expert oral pathologist. Sensitivity, specificity and the kappa value for diagnostic agreement were calculated. The area (Az) values under the ROC curve were also calculated, and a paired t-test and one-way ANOVA were performed. Thereafter, the transmission time of the image files at each compression level was compared with that of the original image files. No significant difference was found between the original and the corresponding compressed images up to a 7% (1:14) compression ratio for both occlusal and proximal caries (p<0.05). JPEG3 (1:14) image files were transmitted more than 10 times faster than the original image files while maintaining the diagnostic information of the images. A 1:14 compressed image file may therefore be used instead of the original image, reducing storage needs and transmission time.

  18. A comparative study of Image Region-Based Segmentation Algorithms

    Directory of Open Access Journals (Sweden)

    Lahouaoui LALAOUI

    2013-07-01

    Image segmentation has recently become an essential step in image processing as it largely conditions the interpretation performed afterwards. It is still difficult to justify the accuracy of a segmentation algorithm regardless of the nature of the treated image. In this paper we perform an objective comparison of region-based segmentation techniques such as supervised and unsupervised deterministic classification, and non-parametric and parametric probabilistic classification. Eight methods among those well known and used in the scientific community have been selected and compared. Martin's criteria (GCE, LCE), the probabilistic Rand Index (RI), Variation of Information (VI) and the Boundary Displacement Error (BDE) are used to evaluate the performance of these algorithms on magnetic resonance (MR) brain images, a synthetic MR image, and synthetic images. The MR brain images are composed of gray matter (GM), white matter (WM), cerebrospinal fluid (CSF) and other tissues, while the synthetic MR image contains the same tissue classes plus edema and tumor. Results show that segmentation is an image-dependent process and that some of the evaluated methods are well suited for a better segmentation.
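One of the evaluation criteria named above, the Rand Index, can be sketched directly from its definition: the fraction of pixel pairs on which two labelings agree about same-segment membership. A minimal O(n²) version, suitable only for small label arrays:

```python
import numpy as np

def rand_index(labels_a, labels_b):
    """Fraction of pixel pairs on which two segmentations agree about
    whether the pair belongs to the same segment (1.0 = identical up to
    relabeling)."""
    a = np.asarray(labels_a).ravel()
    b = np.asarray(labels_b).ravel()
    same_a = a[:, None] == a[None, :]      # pairwise same-segment matrix for a
    same_b = b[:, None] == b[None, :]      # ... and for b
    iu = np.triu_indices(len(a), k=1)      # each unordered pair once
    return float((same_a == same_b)[iu].mean())
```

Note that the index is invariant to relabeling: swapping the segment ids of one labeling leaves the score unchanged.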

  19. Multiphoton microscopy as a diagnostic imaging modality for lung cancer

    Science.gov (United States)

    Pavlova, Ina; Hume, Kelly R.; Yazinski, Stephanie A.; Peters, Rachel M.; Weiss, Robert S.; Webb, Watt W.

    2010-02-01

    Lung cancer is the leading killer among all cancers for both men and women in the US, and is associated with one of the lowest 5-year survival rates. Current diagnostic techniques, such as histopathological assessment of tissue obtained by computed tomography guided biopsies, have limited accuracy, especially for small lesions. Early diagnosis of lung cancer can be improved by introducing a real-time, optical guidance method based on the in vivo application of multiphoton microscopy (MPM). In particular, we hypothesize that MPM imaging of living lung tissue based on two-photon excited intrinsic fluorescence and second harmonic generation can provide sufficient morphologic and spectroscopic information to distinguish between normal and diseased lung tissue. Here, we used an experimental approach based on MPM with multichannel fluorescence detection for the initial discovery that MPM spectral imaging could differentiate between normal and neoplastic lung in ex vivo samples from a murine model of lung cancer. Current results indicate that MPM imaging can directly distinguish normal and neoplastic lung tissues based on their distinct morphologies and fluorescence emission properties in non-processed lung tissue. Moreover, we found initial indication that MPM imaging differentiates between normal alveolar tissue, inflammatory foci, and lung neoplasms. Our long-term goal is to apply results from ex vivo lung specimens to aid in the development of multiphoton endoscopy for in vivo imaging of lung abnormalities in various animal models, and ultimately for the diagnosis of human lung cancer.

  20. Automatic Image Registration Algorithm Based on Wavelet Transform

    Institute of Scientific and Technical Information of China (English)

    LIU Qiong; NI Guo-qiang

    2006-01-01

    An automatic image registration approach based on the wavelet transform is proposed. The method utilizes the multiscale wavelet transform to extract feature points, and a coarse-to-fine strategy in the feature matching phase: a two-way matching method based on cross-correlation yields candidate point pairs, and a fine matching based on support strength completes the matching algorithm. Finally, based on an affine transformation model, the parameters are iteratively refined using the least-squares estimation approach. Experimental results have verified that the proposed algorithm can realize automatic registration of various kinds of images rapidly and effectively.
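The final step, least-squares estimation of the affine parameters from matched feature points, can be sketched as a single linear solve. A minimal version assuming clean correspondences (no outlier rejection or iterative refinement):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src -> dst.

    src, dst: (N, 2) arrays of matched feature coordinates (N >= 3).
    Returns a 2x3 matrix M = [A | t] such that dst_i ~ A @ src_i + t.
    """
    ones = np.ones((src.shape[0], 1))
    X = np.hstack([src, ones])                    # (N, 3) design matrix
    P, *_ = np.linalg.lstsq(X, dst, rcond=None)   # (3, 2) stacked [A.T; t]
    return P.T                                    # (2, 3) = [A | t]
```

With exact correspondences the solve recovers the affine matrix and translation to machine precision; with noisy matches it returns the least-squares fit.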

  1. A New Algorithm for Total Variation Based Image Denoising

    Institute of Scientific and Technical Information of China (English)

    Yi-ping XU

    2012-01-01

    We propose a new algorithm for the total variation based image denoising problem. The split Bregman method is used to convert the unconstrained minimization denoising problem into a linear system in the outer iteration. An algebraic multigrid method is applied to solve the linear system in the inner iteration. Furthermore, Krylov subspace acceleration is adopted to improve convergence in the outer iteration. Numerical experiments demonstrate that this algorithm is efficient even for images with a large signal-to-noise ratio.
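The ROF-type model being solved can be illustrated with plain gradient descent on a smoothed total-variation energy. This is only a pedagogical sketch; the paper's split Bregman outer iteration with an algebraic multigrid inner solver addresses the same model far more efficiently. All parameter values here are illustrative.

```python
import numpy as np

def tv_denoise(noisy, lam=0.15, lr=0.2, iters=100, eps=1e-6):
    """Gradient descent on a smoothed ROF energy:
    0.5 * ||u - f||^2 + lam * sum(sqrt(|grad u|^2 + eps))."""
    u = noisy.astype(float).copy()
    for _ in range(iters):
        # forward differences (replicated border)
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag
        # divergence of the normalized gradient field (backward differences)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= lr * ((u - noisy) - lam * div)
    return u
```

On a piecewise-constant test image with additive Gaussian noise, the result is measurably closer to the clean image than the noisy input.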

  2. Algorithm-Architecture Matching for Signal and Image Processing

    CERN Document Server

    Gogniat, Guy; Morawiec, Adam; Erdogan, Ahmet

    2011-01-01

    Advances in signal and image processing together with increasing computing power are bringing mobile technology closer to applications in a variety of domains like automotive, health, telecommunication, multimedia, entertainment and many others. The development of these leading applications, involving a large diversity of algorithms (e.g. signal, image, video, 3D, communication, cryptography) is classically divided into three consecutive steps: a theoretical study of the algorithms, a study of the target architecture, and finally the implementation. Such a linear design flow is reaching its li

  3. Categorization and Searching of Color Images Using Mean Shift Algorithm

    Directory of Open Access Journals (Sweden)

    Prakash PANDEY

    2009-07-01

    Image searching is still a challenging problem in content-based image retrieval (CBIR) systems. Most CBIR systems operate on all images without pre-sorting them, so the search results contain many unrelated images. The aim of this research is to propose a new object-based indexing system based on extracting salient region representatives from the image, categorizing the images into different types, and searching for images that are similar to a given query image. In our approach, the color features are extracted using the mean shift algorithm, a robust clustering technique, and dominant objects are obtained by performing region grouping of segmented thumbnails. The category for an image is generated automatically by analyzing the image for the presence of a dominant object. The images in the database are clustered based on region feature similarity using the Euclidean distance. Placing an image into a category can help the user navigate retrieval results more effectively. Extensive experimental results illustrate excellent performance.
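The mean shift procedure used here for color clustering can be sketched in its generic flat-kernel form: every point is repeatedly moved to the mean of the data points within a bandwidth of it, so points belonging to the same density mode collapse together. A toy version for small point sets (the bandwidth and iteration count are assumptions):

```python
import numpy as np

def mean_shift_modes(points, bandwidth=0.5, iters=50):
    """Move each starting point to the flat-kernel weighted mean of its
    neighbors until it settles at a density mode; points that share a mode
    form one cluster."""
    points = np.asarray(points, dtype=float)
    modes = points.copy()
    for _ in range(iters):
        for i, m in enumerate(modes):
            d = np.linalg.norm(points - m, axis=1)
            modes[i] = points[d < bandwidth].mean(axis=0)  # never empty: d[i] == 0
    return modes
```

For two well-separated blobs, all points of a blob converge to that blob's mean, giving two distinct cluster centers.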

  4. CANDID: Comparison algorithm for navigating digital image databases

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, P.M.; Cannon, T.M.

    1994-02-21

    In this paper, we propose a method for calculating the similarity between two digital images. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized distance between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to an example target image. The algorithm is applied to the problem of search and retrieval for a database containing pulmonary CT imagery, and experimental results are provided.
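The signature-and-distance scheme can be illustrated with a gray-level histogram as the global signature and the Bhattacharyya coefficient as the similarity between the two densities. This is a simplified stand-in for CANDID's texture/shape/color signatures, with illustrative bin counts:

```python
import numpy as np

def signature(image, bins=16):
    """Global gray-level histogram, normalized to a probability density."""
    h, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

def similarity(sig_a, sig_b):
    """Bhattacharyya coefficient between two signatures (1.0 = identical)."""
    return float(np.sum(np.sqrt(sig_a * sig_b)))
```

Ranking database images by this similarity against a query signature implements the retrieve-by-example loop the abstract describes.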

  5. AN ALGORITHM FOR ASSEMBLING A COMMON IMAGE OF VLSI LAYOUT

    Directory of Open Access Journals (Sweden)

    Y. Y. Lankevich

    2015-01-01

    We consider the problem of assembling a common image of a VLSI layout. The common image is composed of frames obtained by electron microscope photographing. Positioning each of the many frames inside the common image requires substantial computation, and employing graphics processing units enables acceleration of these computations. We implement algorithms and programs for assembling a common image of a VLSI layout. The specificity of this work is the use of CUDA to reduce computation time. Experimental results show the efficiency of the proposed programs.

  6. A hybrid genetic algorithm for multi-modal image registration

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    This paper describes a new method for three-dimensional medical image registration. In the interactive image-guided HIFU (High Intensity Focused Ultrasound) therapy system, fast and precise localization of the tumor is very important. An automatic system is developed for registering pre-operative MR images with intra-operative ultrasound images based on the vessels visible in both modalities. When the MR and ultrasound images are aligned, the centerline points of the vessels in the MR image align with bright intensities in the ultrasound image. The method applies an optimization strategy combining the genetic algorithm with the conjugated gradients algorithm to minimize the objective function. It provides a feasible way of determining the global solution and makes the method robust to local maxima and insensitive to the initial position. Two experiments were designed to evaluate the method, and the results show that our method has better registration accuracy and convergence rate than the other two classic algorithms.

  7. An efficient feedback calibration algorithm for direct imaging radio telescopes

    Science.gov (United States)

    Beardsley, Adam P.; Thyagarajan, Nithyanandan; Bowman, Judd D.; Morales, Miguel F.

    2017-10-01

    We present the E-field Parallel Imaging Calibration (EPICal) algorithm, which addresses the need for a fast calibration method for direct imaging radio astronomy correlators. Direct imaging involves a spatial fast Fourier transform of antenna signals, alleviating an O(N_a^2) computational bottleneck typical in radio correlators, and yielding a more gentle O(N_g log_2 N_g) scaling, where N_a is the number of antennas in the array and N_g is the number of gridpoints in the imaging analysis. This can save orders of magnitude in computation cost for next generation arrays consisting of hundreds or thousands of antennas. However, because antenna signals are mixed in the imaging correlator without creating visibilities, gain correction must be applied prior to imaging, rather than on visibilities post-correlation. We develop the EPICal algorithm to form gain solutions quickly and without ever forming visibilities. This method scales as the number of antennas, and produces results comparable to those from visibilities. We use simulations to demonstrate the EPICal technique and study the noise properties of our gain solutions, showing they are similar to visibility-based solutions in realistic situations. By applying EPICal to 2 s of Long Wavelength Array data, we achieve a 65 per cent dynamic range improvement compared to uncalibrated images, showing this algorithm is a promising solution for next generation instruments.
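The central idea, solving per-antenna gains against a sky model without ever forming visibilities, can be sketched with a closed-form least-squares estimate for a single dominant signal. This is a deliberately simplified stand-in for the actual EPICal estimator; the function name and the single-signal data model are assumptions.

```python
import numpy as np

def estimate_gains(voltages, model):
    """Per-antenna least-squares complex gain, g_i = <v_i m*> / <|m|^2>.

    voltages: (n_ant, n_samples) measured antenna streams, v_i = g_i * m + noise.
    model: (n_samples,) modeled sky signal at the antennas.
    No visibilities (antenna cross-products) are ever formed, so the cost
    scales with the number of antennas rather than antenna pairs.
    """
    num = (voltages * np.conj(model)).mean(axis=1)
    den = (np.abs(model) ** 2).mean()
    return num / den
```

Averaging over many time samples suppresses the noise term, so the estimate converges to the true complex gains.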

  8. Assessing the value of diagnostic imaging: the role of perception

    Science.gov (United States)

    Potchen, E. J.; Cooper, Thomas G.

    2000-04-01

    The value of diagnostic radiology rests in its ability to provide information. Information is defined as a reduction in randomness. Quality improvement in any system requires diminution in the variation of its performance. The major variation in the performance of the system of diagnostic radiology occurs in observer performance and in the communication of information from the observer to someone who will apply that information to the benefit of the patient. The ability to provide information can be determined by observer performance studies using receiver-operating characteristic (ROC) curve analysis. The amount of information provided by each observer can be measured in terms of the uncertainty they reduce. The difference in the value added by different observers can be measured by taking a set of standardized radiographs, some normal and some abnormal, sorting them randomly, and then asking an observer to redistribute them according to their probability of normality. By applying this observer performance measure, we have been able to characterize individual radiologists, groups of radiologists, and regions of the United States in their ability to add value in chest radiology. The use of these technologies in health care may improve the contribution of diagnostic imaging.

  9. Validation of a diagnostic algorithm for the discrimination of actinic keratosis from normal skin and squamous cell carcinoma by means of high-definition optical coherence tomography.

    Science.gov (United States)

    Marneffe, Alice; Suppa, Mariano; Miyamoto, Makiko; Del Marmol, Véronique; Boone, Marc

    2016-09-01

    Actinic keratoses (AKs) commonly arise on sun-damaged skin. Visible lesions are often associated with subclinical lesions on surrounding skin, giving rise to field cancerization. To avoid multiple biopsies to diagnose subclinical/early invasive lesions, there is increasing interest in non-invasive diagnostic tools, such as high-definition optical coherence tomography (HD-OCT). We previously developed an HD-OCT-based diagnostic algorithm for the discrimination of AK from squamous cell carcinoma (SCC) and normal skin. The aim of this study was to test the applicability of HD-OCT for non-invasive discrimination of AK from SCC and normal skin using this algorithm. Three-dimensional (3D) HD-OCT images of histopathologically proven AKs and SCCs and images of normal skin were collected. All images were shown in a random sequence to three independent observers with different levels of experience with HD-OCT, blinded to the clinical and histopathological data. Observers classified each image as AK, SCC or normal skin based on the diagnostic algorithm. A total of 106 (38 AKs, 16 SCCs and 52 normal skin sites) HD-OCT images from 71 patients were included. Sensitivity and specificity for the most experienced observer were 81.6% and 92.6% for AK diagnosis and 93.8% and 98.9% for SCC diagnosis. A moderate interobserver agreement was demonstrated. HD-OCT represents a promising technology for the non-invasive diagnosis of AKs. Thanks to its high potential in discriminating SCC from AK, HD-OCT could be used as a relevant tool for second-level examination, increasing diagnostic confidence and sparing patients unnecessary excisions.

  10. Image recombination transform algorithm for superresolution structured illumination microscopy

    Science.gov (United States)

    Zhou, Xing; Lei, Ming; Dan, Dan; Yao, Baoli; Yang, Yanlong; Qian, Jia; Chen, Guangde; Bianco, Piero R.

    2016-09-01

    Structured illumination microscopy (SIM) is an attractive choice for fast superresolution imaging. The generation of structured illumination patterns by interference of laser beams is broadly employed to obtain a high modulation depth, but the polarizations of the laser beams must be elaborately controlled to guarantee high contrast of the interference intensity, which requires a more complex configuration for polarization control. The emerging pattern projection strategy is much more compact, but the modulation depth of the patterns is deteriorated by the optical transfer function of the optical system, especially at high spatial frequencies near the diffraction limit. Therefore, the traditional superresolution reconstruction algorithm for interference-based SIM will suffer from many artifacts in the case of projection-based SIM, which possesses a low modulation depth. Here, we propose an alternative reconstruction algorithm based on image recombination transform, which provides a solution to this problem even at a weak modulation depth. We demonstrated the effectiveness of this algorithm in the multicolor superresolution imaging of bovine pulmonary arterial endothelial cells in our projection-based SIM system, which applies a computer-controlled digital micromirror device for fast fringe generation and multicolor light-emitting diodes for illumination. The merit of the system incorporated with the proposed algorithm is that it allows fluorescence imaging at excitation intensities below 1 W/cm2, which is beneficial for long-term, in vivo superresolved imaging of live cells and tissues.

  11. Photoacoustic image reconstruction based on Bayesian compressive sensing algorithm

    Institute of Scientific and Technical Information of China (English)

    Mingjian Sun; Naizhang Feng; Yi Shen; Jiangang Li; Liyong Ma; Zhenghua Wu

    2011-01-01

    The photoacoustic tomography (PAT) method, based on compressive sensing (CS) theory, requires that, for the CS reconstruction, the desired image have a sparse representation in a known transform domain. However, the sparsity of photoacoustic signals is destroyed because noise always exists, so the original sparse signal cannot be effectively recovered using the general reconstruction algorithm. In this study, Bayesian compressive sensing (BCS) is employed to obtain highly sparse representations of photoacoustic images based on a set of noisy CS measurements. Simulation results demonstrate that the BCS-reconstructed image achieves superior performance compared with other state-of-the-art CS reconstruction algorithms.
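The "general reconstruction algorithm" that BCS is compared against can be sketched as iterative soft-thresholding (ISTA) for the l1-regularized recovery problem; BCS instead places a hierarchical sparsity prior and infers the weights from the noisy measurements. The step size rule and penalty below are illustrative.

```python
import numpy as np

def ista(Phi, y, lam=0.005, iters=800):
    """Iterative soft-thresholding for min 0.5*||y - Phi x||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2          # 1 / Lipschitz constant
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = x + step * Phi.T @ (y - Phi @ x)          # gradient step, data term
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # shrinkage
    return x
```

On a noiseless sparse test problem the iteration identifies the true support and drives the measurement residual close to zero.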

  12. Simple mineral mapping algorithm based on multitype spectral diagnostic absorption features: a case study at Cuprite, Nevada

    Science.gov (United States)

    Wei, Jing; Ming, Yanfang; Jia, Qiang; Yang, Dongxu

    2017-04-01

    Hyperspectral remote sensing has been widely used in mineral identification using the particularly useful short-wave infrared (SWIR) wavelengths (1.0 to 2.5 μm). Current mineral mapping methods are easily limited by the sensor's radiometric sensitivity and atmospheric effects. Therefore, a simple mineral mapping algorithm (SMMA) based on the combined application of multitype diagnostic SWIR absorption features for hyperspectral data is proposed. A total of nine absorption features are calculated, respectively, from the airborne visible/infrared imaging spectrometer (AVIRIS) data, the Hyperion hyperspectral data, and the ground reference spectra collected from the United States Geological Survey (USGS) spectral library. Based on spectral analysis and statistics, a mineral mapping decision-tree model for the Cuprite mining district in Nevada, USA, is constructed. Then, the SMMA algorithm is used to perform mineral mapping experiments, with the mineral map from the USGS (USGS map) in the Cuprite area selected for validation purposes. Results showed that the SMMA algorithm is able to identify most minerals with high coincidence with the USGS map. Compared with the Hyperion data (overall accuracy = 74.54%), the AVIRIS data showed better overall mineral mapping results (overall accuracy = 94.82%), owing to Hyperion's lower signal-to-noise ratio and spatial resolution.
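A diagnostic absorption feature of the kind used by SMMA is commonly quantified by continuum removal: divide the spectrum by the straight line joining the feature's shoulders and measure the depth at the band minimum. A sketch with an illustrative synthetic spectrum and made-up shoulder wavelengths:

```python
import numpy as np

def absorption_depth(wavelengths, reflectance, left, right):
    """Continuum-removed depth of an absorption feature between two shoulders.

    Returns (depth, band_position): depth = 1 - R/Rc at the band minimum,
    where the continuum Rc is the straight line between the shoulders at
    `left` and `right` (in the same units as `wavelengths`).
    """
    sel = (wavelengths >= left) & (wavelengths <= right)
    wl, r = wavelengths[sel], reflectance[sel]
    cont = np.interp(wl, [wl[0], wl[-1]], [r[0], r[-1]])  # linear continuum
    removed = r / cont
    i = np.argmin(removed)
    return 1.0 - removed[i], wl[i]
```

Thresholding depths such as this at several diagnostic wavelengths is what a decision-tree mapper branches on.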

  13. Image Reconstruction Using a Genetic Algorithm for Electrical Capacitance Tomography

    Institute of Scientific and Technical Information of China (English)

    MOU Changhua; PENG Lihui; YAO Danya; XIAO Deyun

    2005-01-01

    Electrical capacitance tomography (ECT) has been used for more than a decade for imaging dielectric processes. However, because of its ill-posedness and non-linearity, ECT image reconstruction has always been a challenge. A new genetic algorithm (GA) developed for ECT image reconstruction uses initial results from linear back-projection, which is widely used for ECT image reconstruction, to optimize the threshold and the maximum and minimum gray values for the image. The procedure avoids optimizing the gray values pixel by pixel and significantly reduces the dimension of the search space. Both simulations and static experimental results show that the method is efficient and capable of reconstructing high quality images. Evaluation criteria show that the GA-based method has smaller image error and larger correlation coefficients. In addition, the GA-based method converges quickly with a small number of iterations.
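The key design point, searching over only three scalars (threshold, minimum and maximum gray value) applied to the back-projection seed image rather than per-pixel values, can be sketched with a generic GA. The fitness below compares against a known phantom instead of the capacitance forward model, and all GA hyperparameters are assumptions.

```python
import numpy as np

def ga_reconstruct(lbp_image, misfit, pop=30, gens=60, sigma=0.05, seed=1):
    """GA over three parameters (threshold, low gray, high gray) in [0, 1].

    lbp_image: linear back-projection result used as the seed image.
    misfit(recon): scalar data mismatch to minimize (a stand-in here for
    the capacitance forward-model error of the real algorithm).
    """
    rng = np.random.default_rng(seed)

    def render(p):
        thr, lo, hi = p
        return np.where(lbp_image > thr, hi, lo)

    params = rng.random((pop, 3))
    for _ in range(gens):
        scores = np.array([misfit(render(p)) for p in params])
        elite = params[np.argsort(scores)[: pop // 2]]            # selection
        pairs = elite[rng.integers(0, len(elite), (pop, 2))]
        mask = rng.random((pop, 3)) < 0.5                         # uniform crossover
        children = np.where(mask, pairs[:, 0], pairs[:, 1])
        children += rng.normal(0.0, sigma, children.shape)        # mutation
        params = np.clip(children, 0.0, 1.0)
    best = min(params, key=lambda p: misfit(render(p)))
    return render(best)
```

Because only three numbers are evolved, the population converges in a few dozen generations even with this naive implementation.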

  14. An Improved Image Segmentation Based on Mean Shift Algorithm

    Institute of Scientific and Technical Information of China (English)

    CHEN Hanfeng; QI Feihu

    2003-01-01

    Gray image segmentation divides an image into homogeneous regions, with a single gray level defined for each region in the result; these gray levels are called major gray levels. The mean shift algorithm (MSA) has shown its efficiency in image segmentation. An improved gray image segmentation method based on MSA is proposed in this paper, since usual MSA-based segmentation methods often fail on images with weak edges. The corrupted block and its J-value are first defined in the proposed method. Then, the J-matrix obtained from the corrupted blocks is proposed to measure whether weak edges appear in the image. According to the J-matrix, the major gray levels obtained with usual MSA-based segmentation methods are augmented, and the corresponding allocation windows are modified to detect weak edges. Experimental results demonstrate the effectiveness of the proposed method in gray image segmentation.

  15. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.

    Science.gov (United States)

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2014-06-01

    Analysis methods for electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome, and a maneuver with a step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) algorithm with a circular forward model (GR(C)); and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index; and (e) ventilation delay in mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined reconstruction algorithms therefore do not influence the selected indices derived from EIT image analysis, and indices validated on images from one reconstruction algorithm are also valid for the others.
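Two of the five indices, the (ventral-dorsal) center of gravity and the global inhomogeneity (GI) index, can be sketched from their common literature definitions; variable names and the row-based normalization are illustrative.

```python
import numpy as np

def eit_indices(tidal_image, lung_mask):
    """Center of gravity (0..1 along the row axis) and global inhomogeneity
    index of a tidal impedance-change map, restricted to the lung regions.

    GI = sum(|DI - median(DI)|) / sum(DI) over lung pixels: 0 for perfectly
    homogeneous ventilation, larger for more inhomogeneous distributions.
    """
    img = np.where(lung_mask, tidal_image, 0.0)
    rows = np.arange(img.shape[0])
    cog = (img.sum(axis=1) * rows).sum() / img.sum() / (img.shape[0] - 1)
    vals = img[lung_mask]
    gi = np.abs(vals - np.median(vals)).sum() / vals.sum()
    return float(cog), float(gi)
```

A perfectly uniform tidal image gives GI = 0 and a center of gravity exactly in the middle of the image.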

  16. Meeting the Needs for Radiation Protection: Diagnostic Imaging.

    Science.gov (United States)

    Frush, Donald P

    2017-02-01

    Radiation and potential risk during medical imaging is one of the foremost issues for the imaging community. Because of this, there are growing demands for accountability, including appropriate use of ionizing radiation in diagnostic and image-guided procedures. Factors contributing to this include increasing use of medical imaging; increased scrutiny (from awareness to alarm) by patients/caregivers and the public over radiation risk; and mounting calls for accountability from regulatory, accrediting, healthcare coverage (e.g., Centers for Medicare and Medicaid Services), and advisory agencies and organizations as well as industry (e.g., NEMA XR-29, Standard Attributes on CT Equipment Related to Dose Optimization and Management). Current challenges include debates over uncertainty with risks with low-level radiation; lack of fully developed and targeted products for diagnostic imaging and radiation dose monitoring; lack of resources for and clarity surrounding dose monitoring programs; inconsistencies across and between practices for design, implementation and audit of dose monitoring programs; lack of interdisciplinary programs for radiation protection of patients; potential shortages in personnel for these and other consensus efforts; and training concerns as well as inconsistencies for competencies throughout medical providers' careers for radiation protection of patients. Medical care providers are currently in a purgatory between quality- and value-based imaging paradigms, a state that has yet to mature to reward this move to quality-based performance. There are also deficits in radiation expertise personnel in medicine. For example, health physics academic programs and graduates have recently declined, and medical physics residency openings are currently at a third of the number of graduates. However, leveraging solutions to the medical needs will require money and resources, beyond personnel alone. Energy and capital will need to be directed to

  17. Beam hardening correction algorithm in microtomography images

    Energy Technology Data Exchange (ETDEWEB)

    Sales, Erika S.; Lima, Inaya C.B.; Lopes, Ricardo T., E-mail: esales@con.ufrj.b, E-mail: ricardo@lin.ufrj.b [Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Lab. de Instrumentacao Nuclear; Assis, Joaquim T. de, E-mail: joaquim@iprj.uerj.b [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Inst. Politecnico. Dept. de Engenharia Mecanica

    2009-07-01

    Quantification of the mineral density of bone samples is directly related to the attenuation coefficient of bone. The X-rays used in microtomography are polychromatic, with a moderately broad energy spectrum, so the low-energy X-rays passing through a sample are preferentially absorbed, causing a decrease in the attenuation coefficient and possibly artifacts. This decrease in the attenuation coefficient is due to a process called beam hardening. In this work, the beam hardening in microtomography images of vertebrae of Wistar rats subjected to a study of hyperthyroidism was corrected by the method of linearization of the projections, discretized using an energy spectrum, also called the Herman spectrum. The results without beam hardening correction showed significant differences in bone volume, which could lead to a possible diagnosis of osteoporosis. The corrected data still showed a decrease in bone volume, but the decrease was not significant within a 95% confidence interval. (author)
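Linearization of the projections can be illustrated with a toy two-energy spectrum: calibrate a polynomial that maps the measured polychromatic projection value back to an equivalent material thickness, then apply it to new measurements. The energies, weights, and attenuation coefficients below are made up for the sketch.

```python
import numpy as np

def poly_projection(t, weights=(0.5, 0.5), mu=(0.2, 0.5)):
    """Toy polychromatic projection: -log of a two-energy transmitted
    intensity, which is nonlinear in the traversed thickness t."""
    t = np.asarray(t, dtype=float)
    I = sum(w * np.exp(-m * t) for w, m in zip(weights, mu))
    return -np.log(I)

def calibrate_linearization(thickness, p_poly, deg=4):
    """Fit thickness as a polynomial of the measured polychromatic
    projection (e.g., from a step-wedge calibration phantom)."""
    return np.polyfit(p_poly, thickness, deg)
```

Applying `np.polyval` with the calibrated coefficients to measured projections yields values that are again linear in thickness, which is what removes the cupping artifact.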

  18. Fingerprint matching algorithm for poor quality images

    Directory of Open Access Journals (Sweden)

    Vedpal Singh

    2015-04-01

    Full Text Available The main aim of this study is to establish an efficient platform for fingerprint matching of low-quality images. Fingerprint matching approaches generally use minutiae points for authentication; however, this is not a reliable authentication method for low-quality images. To overcome this problem, the current study proposes a fingerprint matching methodology based on normalised cross-correlation, which improves performance and reduces miscalculations during authentication while lowering computational complexity. The error rate of the proposed method is 5.4%, which is less than the 5.6% of two-dimensional (2D) dynamic programming (DP), while Lee's method produces 5.9% and the combined method a 6.1% error rate. The genuine accept rate is 89.3% at a 1% false accept rate and 96.7% at a 0.1% false accept rate. The outcome of this study suggests that the proposed methodology has a low error rate with minimal computational effort compared with existing methods such as Lee's method, 2D DP, and the combined method.
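
The core normalised cross-correlation comparison can be sketched directly; the sliding-window search and toy data below are illustrative, not the paper's matching pipeline.

```python
import numpy as np

def normalized_cross_correlation(patch, template):
    """Zero-mean normalised cross-correlation between two equally sized
    arrays. Returns a score in [-1, 1]; 1 means a perfect match."""
    a = patch.astype(float) - patch.mean()
    b = template.astype(float) - template.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    if denom == 0:
        return 0.0
    return float((a * b).sum() / denom)

def match_score(image, template):
    """Slide the template over the image and return the best NCC score.
    A sketch of correlation-based matching; a real fingerprint pipeline
    would also normalise orientation and use multiple reference regions."""
    ih, iw = image.shape
    th, tw = template.shape
    best = -1.0
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            score = normalized_cross_correlation(image[y:y+th, x:x+tw], template)
            best = max(best, score)
    return best

# Toy demonstration: the template is an exact sub-block of the image.
rng = np.random.default_rng(0)
image = rng.random((16, 16))
template = image[4:12, 5:13]
```

A genuine pair drives the score toward 1, while an impostor template stays well below it, which is what makes a threshold on the NCC score usable for authentication.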

  19. Efficient generation of image chips for training deep learning algorithms

    Science.gov (United States)

    Han, Sanghui; Fafard, Alex; Kerekes, John; Gartley, Michael; Ientilucci, Emmett; Savakis, Andreas; Law, Charles; Parhan, Jason; Turek, Matt; Fieldhouse, Keith; Rovito, Todd

    2017-05-01

    Training deep convolutional networks for satellite or aerial image analysis often requires a large amount of training data. For a more robust algorithm, training data need variations not only in the background and target, but also radiometric variations such as shadowing, illumination changes, atmospheric conditions, and imaging platforms with different collection geometry. Data augmentation is a commonly used approach to generating additional training data; however, it is often insufficient in accounting for real-world changes in lighting, location, or viewpoint outside of the collection geometry. Alternatively, image simulation can be an efficient way to augment training data with all of these variations, such as changing backgrounds, that may be encountered in real data. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is a tool that produces synthetic imagery using a suite of physics-based radiation propagation modules. DIRSIG can simulate images taken from different sensors with variation in collection geometry, spectral response, solar elevation and angle, atmospheric models, target, and background. Simulation of Urban Mobility (SUMO) is a multi-modal traffic simulation tool that explicitly models vehicles moving through a given road network. The output of the SUMO model was incorporated into DIRSIG to generate scenes with moving vehicles. The same approach, with slight modifications, was used with helicopters as targets. Using the combination of DIRSIG and SUMO, we quickly generated many small images with the target at the center and different backgrounds. The simulations generated images with vehicles and helicopters as targets, and corresponding images without targets. Using parallel computing, 120,000 training images were generated in about an hour. Some preliminary results show an improvement in the deep learning algorithm when real image training data are augmented with

  20. Magnetic nanoparticles in magnetic resonance imaging and diagnostics.

    Science.gov (United States)

    Rümenapp, Christine; Gleich, Bernhard; Haase, Axel

    2012-05-01

    Magnetic nanoparticles are useful as contrast agents for magnetic resonance imaging (MRI). Paramagnetic contrast agents have been used for a long time, but more recently superparamagnetic iron oxide nanoparticles (SPIOs) have been found to influence MRI contrast as well. In contrast to paramagnetic contrast agents, SPIOs can be functionalized and size-tailored to suit various kinds of soft tissue. Although both types of contrast agents have an inducible magnetization, their mechanisms of influence on the spin-spin and spin-lattice relaxation of protons differ. Special emphasis is placed on the basic magnetism of nanoparticles and their structures, as well as on the principles of nuclear magnetic resonance. Examples of different contrast-enhanced magnetic resonance images are given, and the potential use of magnetic nanoparticles as diagnostic tracers is explored. Additionally, SPIOs can be used in diagnostic magnetic resonance, since the spin relaxation time of water protons differs depending on whether magnetic nanoparticles are bound to a target or not.

  1. A MICRO-IMAGE FUSION ALGORITHM BASED ON REGION GROWING

    Institute of Scientific and Technical Information of China (English)

    Bai Cuixia; Jiang Gangyi; Yu Mei; Wang Yigang; Shao Feng; Peng Zongju

    2013-01-01

    Due to the limited Depth Of Field (DOF) of a microscope, regions that are not within the DOF will be blurred after imaging. Thus, for micro-image fusion, the most important step is to identify the blurred regions within each micro-image, so as to remove their undesirable impact on the fused image. In this paper, a fusion algorithm based on a novel region growing method is proposed for micro-image fusion. The local sharpness of the micro-image is judged block by block; blocks whose sharpness is lower than an adaptive threshold are used as seeds, and the sharpness of the neighbors of each seed is evaluated again during region growing, until the blurred regions are identified completely. As the block size decreases, the obtained region segmentation becomes more and more accurate. Finally, the micro-images are fused with pixel-wise fusion rules. The experimental results show that the proposed algorithm benefits from the novel region segmentation and is able to obtain fused micro-images with higher sharpness compared with some popular image fusion methods.
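
A simplified sketch of the seed-and-grow idea: blockwise variance of a Laplacian response stands in for the sharpness measure, and a fixed fraction of the mean stands in for the paper's adaptive threshold (both are assumptions of this sketch, not the paper's definitions).

```python
import numpy as np
from collections import deque

def block_sharpness(img, bs=8):
    """Blockwise sharpness: variance of a Laplacian-like response (periodic
    boundaries via np.roll). Low variance suggests a region outside the DOF."""
    lap = np.abs(4 * img
                 - np.roll(img, 1, 0) - np.roll(img, -1, 0)
                 - np.roll(img, 1, 1) - np.roll(img, -1, 1))
    h, w = img.shape
    hb, wb = h // bs, w // bs
    return lap[:hb * bs, :wb * bs].reshape(hb, bs, wb, bs).var(axis=(1, 3))

def grow_blur_regions(sharp, thresh):
    """Seeds = blocks below the threshold; grow to 4-connected neighbours
    that are also below the threshold (a simplified region growing)."""
    blur = np.zeros(sharp.shape, dtype=bool)
    queue = deque(map(tuple, np.argwhere(sharp < thresh)))
    while queue:
        y, x = queue.popleft()
        if blur[y, x]:
            continue
        blur[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < sharp.shape[0] and 0 <= nx < sharp.shape[1] \
                    and not blur[ny, nx] and sharp[ny, nx] < thresh:
                queue.append((ny, nx))
    return blur

# Toy micro-image: sharp texture on the left, smooth (blurred) on the right.
rng = np.random.default_rng(3)
img = np.hstack([rng.random((32, 32)), np.full((32, 32), 0.5)])
sharp = block_sharpness(img)
blur_mask = grow_blur_regions(sharp, thresh=0.5 * sharp.mean())
```

In the paper the segmentation is refined by repeating this with decreasing block sizes before the pixel-wise fusion step.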

  2. Tissue segmentation of computed tomography images using a Random Forest algorithm: a feasibility study

    Science.gov (United States)

    Polan, Daniel F.; Brady, Samuel L.; Kaufman, Robert A.

    2016-09-01

    There is a need for robust, fully automated whole-body organ segmentation for diagnostic CT. This study investigates and optimizes a Random Forest algorithm for automated organ segmentation; explores the limitations of a Random Forest algorithm applied to the CT environment; and demonstrates segmentation accuracy in a feasibility study of pediatric and adult patients. To the best of our knowledge, this is the first study to investigate a trainable Weka segmentation (TWS) implementation using Random Forest machine learning as a means to develop a fully automated tissue segmentation tool designed specifically for pediatric and adult examinations in a diagnostic CT environment. Current innovation in computed tomography (CT) is focused on radiomics, patient-specific radiation dose calculation, and image quality improvement using iterative reconstruction, all of which require specific knowledge of tissue and organ systems within a CT image. The purpose of this study was to develop a fully automated Random Forest classifier algorithm for segmentation of neck-chest-abdomen-pelvis CT examinations based on pediatric and adult CT protocols. Seven materials were classified: background, lung/internal air or gas, fat, muscle, solid organ parenchyma, blood/contrast-enhanced fluid, and bone tissue, using Matlab and the TWS plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance, each evaluated over a voxel radius of 2^n (n from 0 to 4), along with noise-reduction and edge-preserving filters: Gaussian, bilateral, Kuwahara, and anisotropic diffusion. The Random Forest algorithm used 200 trees with 2 features randomly selected per node. The optimized auto-segmentation algorithm resulted in 16 image features, including features derived from the maximum, mean, and variance filters and the Gaussian and Kuwahara filters. Dice similarity coefficient (DSC) calculations between manually segmented and Random Forest algorithm segmented images from 21
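
The pipeline can be sketched with scikit-learn instead of the TWS/FIJI stack used in the study: per-voxel intensity plus local mean/variance features at several radii, a Random Forest with 200 trees and 2 features per node (as in the paper), and a Dice score. The synthetic "slice" and the filter radii are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

def voxel_features(img, radii=(1, 2, 4)):
    """Per-pixel feature vectors: intensity plus local mean and variance at
    several neighbourhood radii, echoing the mean/variance filter bank
    described in the study (radii here are illustrative)."""
    feats = [img]
    for r in radii:
        size = 2 * r + 1
        m = uniform_filter(img, size)
        m2 = uniform_filter(img ** 2, size)
        feats.append(m)
        feats.append(np.clip(m2 - m ** 2, 0, None))  # local variance
    return np.stack([f.ravel() for f in feats], axis=1)

# Synthetic "CT slice": dark background, a brighter organ-like disc, noise.
rng = np.random.default_rng(1)
img = rng.normal(0.2, 0.05, (64, 64))
yy, xx = np.mgrid[:64, :64]
organ = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
img[organ] += 0.5
labels = organ.astype(int).ravel()

X = voxel_features(img)
clf = RandomForestClassifier(n_estimators=200, max_features=2, random_state=0)
clf.fit(X, labels)
pred = clf.predict(X)

# Dice similarity coefficient between prediction and ground truth.
dice = 2 * np.sum((pred == 1) & (labels == 1)) / (
    np.sum(pred == 1) + np.sum(labels == 1))
```

A real experiment would of course evaluate on held-out patients; training and scoring on the same slice, as here, only demonstrates the mechanics.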

  3. Enhanced ultrasound for advanced diagnostics, ultrasound tomography for volume limb imaging and prosthetic fitting

    Science.gov (United States)

    Anthony, Brian W.

    2016-04-01

    Ultrasound imaging methods hold the potential to deliver low-cost, high-resolution, operator-independent and nonionizing imaging systems; such systems couple appropriate algorithms with imaging devices and techniques. The increasing demands on general practitioners motivate us to develop more usable and productive diagnostic imaging equipment. Ultrasound, specifically freehand ultrasound, is a low-cost and safe medical imaging technique that does not expose a patient to ionizing radiation. Its safety and versatility make it very well suited for the increasing demands on general practitioners, and for providing improved medical care in rural regions or the developing world. However, it typically suffers from sonographer variability; we discuss techniques to address this user variability. We also discuss our work to combine cylindrical scanning systems with state-of-the-art inversion algorithms to deliver ultrasound systems for imaging and quantifying limbs in 3-D in vivo. Such systems have the potential to track the progression of limb health at low cost and without radiation exposure, as well as to improve prosthetic socket fitting. Current methods of prosthetic socket fabrication remain subjective and ineffective at creating an interface to the human body that is both comfortable and functional. Though there has been recent success using methods like magnetic resonance imaging and biomechanical modeling, a low-cost, streamlined, and quantitative process for prosthetic socket design and fabrication has not been fully demonstrated. Medical ultrasonography may inform the design process of prosthetic sockets in a more objective manner. This keynote talk presents the results of progress in this area.

  4. IMAGE ENCRYPTION ALGORITHM USING TWO-DIMENSIONAL CHAOTIC MAPS

    Directory of Open Access Journals (Sweden)

    A. V. Sidorenko

    2016-01-01

    Full Text Available A new image encryption algorithm based on dynamic chaos is proposed. The encryption is performed using the modified element permutation procedure. The element value changing procedure is carried out with regard to the performed permutation. The modified permutation procedure includes the following steps: (1) permutation table creation; (2) permutation of image blocks; (3) element permutation in the image regions. The procedure «block permutations – permutation in the image regions» is performed q times – for this study q = 3. The second element value changing procedure is realized with the use of the pseudorandom sequence G that is added to the image elements. The following algorithm is proposed for the formation of this pseudorandom sequence: (1) the formation of the sequence G element distribution by brightness; (2) sequence G element initialization; (3) permutation of the sequence G elements. It is shown that, owing to the modified permutation procedure, the amount of calculations for new positions of the elements using chaotic maps is reduced by a factor of a – in this study a is equal to 16 and 64. The implementation of the proposed element value changing procedure necessitates the formation of d pseudorandom values from the interval [0, 1) with a uniform distribution. Actually, for the majority of practical cases d = 256 is applicable. The proposed algorithm has been tested as follows. The correlation coefficients have been computed for the original and encrypted images, and also for the adjacent elements in the vertical, horizontal, and diagonal directions. The algorithm key sensitivity has been evaluated. Besides, the values of the unified average change intensity (UACI) and the ratios of differing bits to the total number of bits have been determined. As demonstrated by the testing results, the proposed algorithm is highly operable and may be successfully used to solve the tasks of information security.
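
A heavily simplified sketch of the two-stage idea (chaotic permutation of element positions, then value changing with a pseudorandom sequence G). The logistic map, the key values, and the single-stage permutation below are illustrative stand-ins for the paper's block/region permutation and its q repetitions.

```python
import numpy as np

def logistic_sequence(n, x0=0.3141, r=3.9999):
    """Iterate the logistic map to get n chaotic values in (0, 1).
    x0 and r play the role of the secret key (illustrative choice)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(img, key=0.3141):
    flat = img.ravel()
    n = flat.size
    # Stage 1: chaotic permutation of element positions.
    perm = np.argsort(logistic_sequence(n, x0=key))
    shuffled = flat[perm]
    # Stage 2: element value change with a pseudorandom byte sequence G.
    g = (logistic_sequence(n, x0=key / 2) * 256).astype(np.uint8)
    cipher = ((shuffled.astype(np.uint16) + g) % 256).astype(np.uint8)
    return cipher, perm, g

def decrypt(cipher, perm, g):
    shuffled = ((cipher.astype(np.uint16) - g) % 256).astype(np.uint8)
    flat = np.empty_like(shuffled)
    flat[perm] = shuffled  # invert the permutation
    return flat

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
cipher, perm, g = encrypt(img)
restored = decrypt(cipher, perm, g).reshape(8, 8)
```

The same key regenerates `perm` and `g` on the receiving side, so only the key needs to be shared.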

  5. Comparison research on iot oriented image classification algorithms

    Directory of Open Access Journals (Sweden)

    Du Ke

    2016-01-01

    Full Text Available Image classification belongs to the machine learning and computer vision fields; it aims to recognize and classify objects in image content. How to apply image classification algorithms to large-scale data in the IoT framework is the focus of current research. Based on Anaconda, this article implements the k-NN, SVM, Softmax and Neural Network algorithms in Python, performs data normalization, random search, and HOG and colour histogram feature extraction to enhance the algorithms, evaluates them on the CIFAR-10 dataset, and then conducts a comparison in terms of training time, test time and classification accuracy. The experimental results show that: the vectorized implementation of the algorithms is more efficient than the loop implementation; the training time of k-NN is the shortest, SVM and Softmax take more time, and the training time of the Neural Network is the longest; the test times of SVM, Softmax and the Neural Network are much shorter than that of k-NN; the Neural Network achieves the highest classification accuracy, SVM and Softmax achieve lower and similar accuracies, and k-NN has the lowest accuracy. The effects of the three algorithm improvement methods are obvious.
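
Such a comparison can be scripted with scikit-learn; the sketch below uses the small digits dataset as a stand-in for CIFAR-10 (an assumption, chosen so it runs in seconds), timing training and testing for k-NN, a linear SVM, and softmax (multinomial logistic regression). The qualitative pattern, k-NN training fast but predicting slowly, is the point, not the absolute numbers.

```python
import time
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression

# Normalise pixel values to [0, 1] and hold out a test split.
X, y = load_digits(return_X_y=True)
X = X / 16.0
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

results = {}
for name, clf in [
    ("k-NN", KNeighborsClassifier(n_neighbors=5)),
    ("SVM", LinearSVC(max_iter=5000)),
    ("Softmax", LogisticRegression(max_iter=1000)),
]:
    t0 = time.perf_counter()
    clf.fit(X_tr, y_tr)
    train_t = time.perf_counter() - t0
    t0 = time.perf_counter()
    acc = clf.score(X_te, y_te)
    test_t = time.perf_counter() - t0
    results[name] = (train_t, test_t, acc)  # seconds, seconds, accuracy
```

Printing `results` makes the trade-off visible: k-NN defers all work to query time, whereas the parametric models pay at training time and predict cheaply.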

  6. Evaluation of clinical image processing algorithms used in digital mammography.

    Science.gov (United States)

    Zanca, Federica; Jacobs, Jurgen; Van Ongeval, Chantal; Claus, Filip; Celis, Valerie; Geniets, Catherine; Provost, Veerle; Pauwels, Herman; Marchal, Guy; Bosmans, Hilde

    2009-03-01

    Screening is the only proven approach to reduce the mortality of breast cancer, but significant numbers of breast cancers remain undetected even when all quality assurance guidelines are implemented. With the increasing adoption of digital mammography systems, image processing may be a key factor in the imaging chain. Although to our knowledge statistically significant effects of manufacturer-recommended image processing have not been previously demonstrated, the subjective experience of our radiologists, that the apparent image quality can vary considerably between different algorithms, motivated this study. This article addresses the impact of five such algorithms on the detection of clusters of microcalcifications. A database of unprocessed (raw) images of 200 normal digital mammograms, acquired with the Siemens Novation DR, was collected retrospectively. Realistic simulated microcalcification clusters were inserted in half of the unprocessed images. All unprocessed images were subsequently processed with five manufacturer-recommended image processing algorithms (Agfa Musica 1, IMS Raffaello Mammo 1.2, Sectra Mamea AB Sigmoid, Siemens OPVIEW v2, and Siemens OPVIEW v1). Four breast imaging radiologists were asked to locate and score the clusters in each image on a five-point rating scale. The free-response data were analyzed by the jackknife free-response receiver operating characteristic (JAFROC) method and, for comparison, also with the receiver operating characteristic (ROC) method. JAFROC analysis revealed highly significant differences between the image processing algorithms (F = 8.51, p < 0.0001), suggesting that image processing strongly impacts the detectability of clusters. Siemens OPVIEW2 and Siemens OPVIEW1 yielded the highest and lowest performances, respectively. ROC analysis of the data also revealed significant differences between the processing algorithms, but at lower significance (F = 3.47, p = 0.0305) than JAFROC. Both statistical analysis methods revealed that the

  7. An Improved Fast SPIHT Image Compression Algorithm for Aerial Applications

    Directory of Open Access Journals (Sweden)

    Ning Zhang

    2011-12-01

    Full Text Available In this paper, an improved fast SPIHT algorithm is presented. SPIHT and NLS (No List SPIHT) are efficient compression algorithms, but their application in the aviation field is limited by poor error resistance and slow compression speed. In this paper, both the error resilience and the compression speed are improved. The remote sensing images are decomposed with the Le Gall 5/3 wavelet, and the wavelet coefficients are indexed, scanned and allocated by means of family blocks. The bit-plane significance is predicted by bitwise OR, so N bit-planes can be encoded at the same time. Compared with the SPIHT algorithm, the improved algorithm is easily implemented in hardware, and the compression speed is improved. The PSNR of reconstructed images encoded by the fast SPIHT is 0.3 to 0.9 dB higher than that of SPIHT and CCSDS, and encoding is 4-6 times faster than the SPIHT encoding process. The algorithm meets the high-speed and reliability requirements of aerial applications.

  8. Comparison of Doubling the Size of Image Algorithms

    Directory of Open Access Journals (Sweden)

    S. E. Vaganov

    2016-01-01

    Full Text Available In this paper, a comparative analysis of the quality of several non-adaptive interpolation methods for doubling the image size is carried out. The mean square error was used to estimate the accuracy (quality) of the approximation. Artifacts introduced by interpolation methods (aliasing, the Gibbs effect (ringing), blurring, etc.) were not considered. Descriptions of the doubling interpolation algorithms are presented: the nearest neighbor method, linear and cubic interpolation, Lanczos convolution interpolation (with a = 1, 2, 3), and the 17-point interpolation method. For each doubling method, optimal convolution kernel coefficients were found for different halving algorithms. The following methods for reducing the image size by half were considered: the mean value over the 4 nearest points, and the weighted value of the 16 nearest points with optimal coefficients. The optimal weights were calculated for each doubling method described in this paper and were chosen so as to minimize the mean square error between the accurate value and the found approximation. A simple correction method for the approximation of any doubling algorithm is offered. The proposed correction method shows good results for simple interpolation algorithms; however, the improvements are insignificant for complex algorithms (17-point interpolation, Lanczos a = 3). According to the results of numerical experiments, the most accurate among the reviewed algorithms is the 17-point interpolation method, with Lanczos convolution interpolation with parameter a = 3 slightly worse (see the table at the end
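
A minimal version of the experiment can be reproduced: halve an image by 2x2 averaging (one of the reduction schemes considered), double it back with two non-adaptive interpolators, and compare mean square errors. The test image and the two simple kernels are illustrative; the paper's optimal-coefficient and 17-point methods are not implemented here.

```python
import numpy as np

def downscale_half(img):
    """Mean over each 2x2 block -- one of the reduction schemes considered."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upscale_nearest(img):
    """Nearest-neighbour doubling: replicate each pixel 2x2."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def upscale_linear(img):
    """Separable linear-interpolation doubling with edge replication."""
    h, w = img.shape
    ys = np.arange(2 * h) / 2.0 - 0.25  # pixel-centre source coordinates
    xs = np.arange(2 * w) / 2.0 - 0.25
    fy, fx = np.floor(ys), np.floor(xs)
    wy = (ys - fy)[:, None]
    wx = (xs - fx)[None, :]
    y0 = np.clip(fy.astype(int), 0, h - 1)
    y1 = np.clip(fy.astype(int) + 1, 0, h - 1)
    x0 = np.clip(fx.astype(int), 0, w - 1)
    x1 = np.clip(fx.astype(int) + 1, 0, w - 1)
    top = (1 - wx) * img[np.ix_(y0, x0)] + wx * img[np.ix_(y0, x1)]
    bot = (1 - wx) * img[np.ix_(y1, x0)] + wx * img[np.ix_(y1, x1)]
    return (1 - wy) * top + wy * bot

# Smooth test image: linear interpolation should beat nearest neighbour.
yy, xx = np.mgrid[:64, :64]
truth = np.sin(yy / 9.0) + np.cos(xx / 7.0)
small = downscale_half(truth)
mse_nn = np.mean((upscale_nearest(small) - truth) ** 2)
mse_lin = np.mean((upscale_linear(small) - truth) ** 2)
```

On smooth content the MSE ranking matches the paper's qualitative ordering; on edge-heavy content the gap narrows, which is why higher-order kernels and optimal weights matter.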

  9. Image processing methods and architectures in diagnostic pathology.

    Directory of Open Access Journals (Sweden)

    Oscar Déniz

    2010-05-01

    Full Text Available Grid technology has enabled the clustering of, and efficient and secure access to and interaction among, a wide variety of geographically distributed resources such as supercomputers, storage systems, data sources, instruments, and special devices and services. Its main applications include large-scale computational and data-intensive problems in science and engineering. General grid structures and methodologies, for both software and hardware, in image analysis for virtual tissue-based diagnosis are considered in this paper. The methods focus on user-level middleware. The article describes the distributed programming system developed by the authors for virtual slide analysis in diagnostic pathology. The system supports different image analysis operations commonly performed in anatomical pathology, and it takes into account security aspects and specialized infrastructures with high-level services designed to meet application requirements. Grids are likely to have a deep impact on health-related applications, and therefore they seem suitable for tissue-based diagnosis too. The implemented system is a joint application that mixes both Web and Grid Service Architectures around a distributed architecture for image processing. It has proved to be a successful solution for analyzing a large and heterogeneous set of histological images on an architecture of massively parallel processors using message passing and non-shared memory.

  10. A novel automatic image processing algorithm for detection of hard exudates based on retinal image analysis.

    Science.gov (United States)

    Sánchez, Clara I; Hornero, Roberto; López, María I; Aboy, Mateo; Poza, Jesús; Abásolo, Daniel

    2008-04-01

    We present an automatic image processing algorithm to detect hard exudates. Automatic detection of hard exudates from retinal images is an important problem since hard exudates are associated with diabetic retinopathy and have been found to be one of the most prevalent earliest signs of retinopathy. The algorithm is based on Fisher's linear discriminant analysis and makes use of colour information to perform the classification of retinal exudates. We prospectively assessed the algorithm performance using a database containing 58 retinal images with variable colour, brightness, and quality. Our proposed algorithm obtained a sensitivity of 88% with a mean number of 4.83+/-4.64 false positives per image using the lesion-based performance evaluation criterion, and achieved an image-based classification accuracy of 100% (sensitivity of 100% and specificity of 100%).
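
The colour-based Fisher discriminant step might be sketched as follows; the RGB distributions for "exudate" and "background" pixels below are simulated for illustration and are not the paper's data (hard exudates appear as bright yellowish lesions, hence the higher red/green means).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Simulated colour features: background retina pixels vs. bright yellowish
# exudate pixels. Means and scales are illustrative assumptions only.
rng = np.random.default_rng(4)
n = 500
background = rng.normal(loc=[0.55, 0.30, 0.15], scale=0.05, size=(n, 3))
exudate = rng.normal(loc=[0.85, 0.75, 0.30], scale=0.05, size=(n, 3))
X = np.vstack([background, exudate])
y = np.array([0] * n + [1] * n)  # 0 = background, 1 = exudate

# Fisher's linear discriminant on the colour features.
lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# Classify new candidate pixels by their RGB values.
candidates = np.array([[0.86, 0.74, 0.31],   # exudate-like
                       [0.54, 0.31, 0.14]])  # background-like
pred = lda.predict(candidates)
```

In a full pipeline the per-pixel decisions would then be grouped into candidate lesions and scored against the lesion-based and image-based criteria reported above.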

  11. Diagnostic imaging of psoriatic arthritis. Part II: magnetic resonance imaging and ultrasonography.

    Science.gov (United States)

    Sudoł-Szopińska, Iwona; Pracoń, Grzegorz

    2016-06-01

    Plain radiography reveals specific, yet late changes of advanced psoriatic arthritis. Early inflammatory changes are seen both on magnetic resonance imaging and ultrasound within peripheral joints (arthritis, synovitis), tendon sheaths (tenosynovitis, tendovaginitis) and entheses (enthesitis, enthesopathy). In addition, magnetic resonance imaging enables the assessment of inflammatory features in the sacroiliac joints (sacroiliitis), and the spine (spondylitis). In this article, we review current opinions on the diagnostics of some selective, and distinctive features of psoriatic arthritis concerning magnetic resonance imaging and ultrasound and present some hypotheses on psoriatic arthritis etiopathogenesis, which have been studied with the use of magnetic resonance imaging. The following elements of the psoriatic arthritis are discussed: enthesitis, extracapsular inflammation, dactylitis, distal interphalangeal joint and nail disease, and the ability of magnetic resonance imaging to differentiate undifferentiated arthritis, the value of whole-body magnetic resonance imaging and dynamic contrast-enhanced magnetic resonance imaging.

  12. Diagnostic imaging of psoriatic arthritis. Part II: magnetic resonance imaging and ultrasonography

    Directory of Open Access Journals (Sweden)

    Iwona Sudoł-Szopińska

    2016-06-01

    Full Text Available Plain radiography reveals specific, yet late changes of advanced psoriatic arthritis. Early inflammatory changes are seen both on magnetic resonance imaging and ultrasound within peripheral joints (arthritis, synovitis), tendon sheaths (tenosynovitis, tendovaginitis) and entheses (enthesitis, enthesopathy). In addition, magnetic resonance imaging enables the assessment of inflammatory features in the sacroiliac joints (sacroiliitis), and the spine (spondylitis). In this article, we review current opinions on the diagnostics of some selective, and distinctive features of psoriatic arthritis concerning magnetic resonance imaging and ultrasound and present some hypotheses on psoriatic arthritis etiopathogenesis, which have been studied with the use of magnetic resonance imaging. The following elements of the psoriatic arthritis are discussed: enthesitis, extracapsular inflammation, dactylitis, distal interphalangeal joint and nail disease, and the ability of magnetic resonance imaging to differentiate undifferentiated arthritis, the value of whole-body magnetic resonance imaging and dynamic contrast-enhanced magnetic resonance imaging.

  13. Sparse Nonlinear Electromagnetic Imaging Accelerated With Projected Steepest Descent Algorithm

    KAUST Repository

    Desmal, Abdulla

    2017-04-03

    An efficient electromagnetic inversion scheme for imaging sparse 3-D domains is proposed. The scheme achieves its efficiency and accuracy by integrating two concepts. First, the nonlinear optimization problem is constrained using L₀ or L₁-norm of the solution as the penalty term to alleviate the ill-posedness of the inverse problem. The resulting Tikhonov minimization problem is solved using nonlinear Landweber iterations (NLW). Second, the efficiency of the NLW is significantly increased using a steepest descent algorithm. The algorithm uses a projection operator to enforce the sparsity constraint by thresholding the solution at every iteration. Thresholding level and iteration step are selected carefully to increase the efficiency without sacrificing the convergence of the algorithm. Numerical results demonstrate the efficiency and accuracy of the proposed imaging scheme in reconstructing sparse 3-D dielectric profiles.
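
On a toy linear stand-in for the electromagnetic forward operator, the projected steepest-descent idea reduces to Landweber (gradient) updates followed by a hard-thresholding projection onto k-sparse vectors (an L0 constraint). The problem sizes, the step rule, and the fixed sparsity level below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Sparse recovery problem: measurements y = A @ x with a k-sparse x.
rng = np.random.default_rng(2)
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k) + 3.0
y = A @ x_true

def projected_landweber(A, y, k, n_iter=200):
    """Landweber iterations with a hard-thresholding projection that keeps
    only the k largest-magnitude entries at every step."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # non-expansive descent step
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)    # steepest-descent update
        thresh = np.partition(np.abs(x), -k)[-k]
        x[np.abs(x) < thresh] = 0.0         # sparsity projection
    return x

x_rec = projected_landweber(A, y, k)
```

The thresholding level and step size are exactly the knobs the paper tunes to keep the iteration convergent while still enforcing sparsity; in the full scheme the dense matrix is replaced by the nonlinear EM forward solver and its adjoint.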

  14. A fast image encryption algorithm based on chaotic map

    Science.gov (United States)

    Liu, Wenhao; Sun, Kehui; Zhu, Congxu

    2016-09-01

    Derived from the Sine map and the iterative chaotic map with infinite collapse (ICMIC), a new two-dimensional Sine ICMIC modulation map (2D-SIMM) is proposed based on a close-loop modulation coupling (CMC) model, and its chaotic performance is analyzed by means of phase diagrams, the Lyapunov exponent spectrum and complexity. The analysis shows that this map has good ergodicity, hyperchaotic behavior, a large maximum Lyapunov exponent and high complexity. Based on this map, a fast image encryption algorithm is proposed. In this algorithm, the confusion and diffusion processes are combined into one stage. A chaotic shift transform (CST) is proposed to efficiently change the image pixel positions, and row and column substitutions are applied to scramble the pixel values simultaneously. The simulation and analysis results show that this algorithm has high security, low time complexity, and the ability to resist statistical, differential, brute-force, known-plaintext and chosen-plaintext attacks.

  15. Image Encryption Using a Lightweight Stream Encryption Algorithm

    Directory of Open Access Journals (Sweden)

    Saeed Bahrami

    2012-01-01

    Full Text Available Security of multimedia data, including images and video, is one of the basic requirements for telecommunications and computer networks. In this paper, we consider a simple and lightweight stream encryption algorithm for image encryption, and a series of tests is performed to confirm the suitability of the described encryption algorithm. These tests include a visual test, histogram analysis, information entropy, encryption quality, correlation analysis, differential analysis, and performance analysis. Based on this analysis, it can be concluded that the present algorithm offers the same security level as the A5/1 and W7 stream ciphers, is faster in performance, and is suitable for real-time applications.

  16. Gray Cerebrovascular Image Skeleton Extraction Algorithm Using Level Set Model

    Directory of Open Access Journals (Sweden)

    Jian Wu

    2010-06-01

    Full Text Available The ambiguity and complexity of medical cerebrovascular images make the skeletons obtained by conventional skeleton algorithms discontinuous, sensitive to weak edges, poorly robust, and prone to burrs. This paper proposes a cerebrovascular image skeleton extraction algorithm based on the Level Set model, which uses a Euclidean distance field and an improved gradient vector flow to obtain two different energy functions. The first energy function controls the acquisition of topological nodes that start the skeleton curve. The second energy function controls the extraction of the skeleton surface. The algorithm avoids having to locate and classify the skeleton connection points that guide skeleton extraction. Because all of its parameters are obtained through analysis and reasoning, no manual intervention is needed.

  17. A Reversible Image Steganographic Algorithm Based on Slantlet Transform

    Directory of Open Access Journals (Sweden)

    Sushil Kumar

    2013-07-01

    Full Text Available In this paper we present a reversible image steganography technique based on the Slantlet transform (SLT) and the advanced encryption standard (AES) method. The proposed method first encodes the message using two source codes, viz., Huffman codes and a self-synchronizing variable-length code known as T-code. Next, the encoded binary string is encrypted using an improved AES method. The encrypted data so obtained are embedded in the middle- and high-frequency sub-bands, obtained by applying 2-level SLT to the cover image, using a thresholding method. The proposed algorithm is compared with existing techniques based on the wavelet transform. The experimental results show that the proposed algorithm can extract the hidden message and recover the original cover image with low distortion. The proposed algorithm offers acceptable imperceptibility and security (two-layer security) and provides robustness against Gaussian and salt-and-pepper noise attacks.

  18. Joint graph cut and relative fuzzy connectedness image segmentation algorithm.

    Science.gov (United States)

    Ciesielski, Krzysztof Chris; Miranda, Paulo A V; Falcão, Alexandre X; Udupa, Jayaram K

    2013-12-01

    We introduce an image segmentation algorithm, called GC(sum)(max), which combines, in a novel manner, the strengths of two popular algorithms: Relative Fuzzy Connectedness (RFC) and (standard) Graph Cut (GC). We show, both theoretically and experimentally, that GC(sum)(max) preserves the robustness of RFC with respect to the seed choice (thus avoiding the "shrinking problem" of GC), while keeping GC's stronger control over the problem of "leaking through poorly defined boundary segments." The analysis of GC(sum)(max) is greatly facilitated by our recent theoretical results showing that RFC can be described within the framework of Generalized GC (GGC) segmentation algorithms. In our implementation of GC(sum)(max) we use, as a subroutine, a version of the RFC algorithm (based on the Image Foresting Transform) that runs (provably) in linear time with respect to the image size. As a result, GC(sum)(max) runs in time close to linear. Experimental comparison of GC(sum)(max) to GC, an iterative version of RFC (IRFC), and power watershed (PW), based on a variety of medical and non-medical images, indicates the superior accuracy of GC(sum)(max) over these other methods, resulting in a rank ordering of GC(sum)(max) > PW ∼ IRFC > GC.

  19. Quality measures for HRR alignment based ISAR imaging algorithms

    CSIR Research Space (South Africa)

    Janse van Rensburg, V

    2013-05-01

    Full Text Available Some Inverse Synthetic Aperture Radar (ISAR) algorithms form the image in a two-step process of range alignment and phase conjugation. This paper discusses a comprehensive set of measures used to quantify the quality of range alignment, with the aim...

  20. A Novel Algorithm of Surface Eliminating in Undersurface Optoacoustic Imaging

    Directory of Open Access Journals (Sweden)

    Zhulina Yulia V

    2004-01-01

    Full Text Available This paper analyzes the task of optoacoustic imaging of objects located under a covering surface. We suggest a surface-elimination algorithm based on the fact that the intensity of the image, as a function of the spatial point, should change slowly inside local objects and will suffer a discontinuity of the spatial gradients on their boundaries. The algorithm forms the two-dimensional curves along which the discontinuity of the signal derivatives is detected, and then divides the signal space into areas along these curves. Signals inside the areas with the maximum signal amplitudes and the maximum absolute gradient values on their edges are set to zero; the remaining signals are used for the image restoration. This method permits reconstruction of the surface boundaries with a higher contrast than that of the surface detection technique based on the maxima of the received signals. The algorithm does not require any prior knowledge of the signal statistics inside and outside the local objects, and it may be used for reconstructing any images from signals representing an integral over the object's volume. Simulation and real data are also provided to validate the proposed method.

  1. Adaptive wavelet transform algorithm for lossy image compression

    Science.gov (United States)

    Pogrebnyak, Oleksiy B.; Ramirez, Pablo M.; Acevedo Mosqueda, Marco Antonio

    2004-11-01

    A new algorithm for a locally adaptive wavelet transform based on a modified lifting scheme is presented. It adapts the wavelet high-pass filter at the prediction stage to the local image data activity. The proposed algorithm uses the generalized framework for the lifting scheme, which permits easily obtaining different wavelet filter coefficients in the case of (~N, N) lifting. By changing the wavelet filter order and the control parameters, one can obtain the desired filter frequency response. Hard switching between different wavelet lifting filter outputs is performed according to a local data-activity estimate. The proposed adaptive transform possesses good energy compaction. The designed algorithm was tested on different images, and the simulation results show that the visual and quantitative quality of the restored images is high. The distortions are smaller in the vicinity of high-spatial-activity details compared to the non-adaptive transform, which introduces ringing artifacts. The designed algorithm can be used for lossy image compression and in noise suppression applications.
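
    A minimal sketch of the hard-switching idea (illustrative only; the authors' filter family and activity estimate are not reproduced): one lifting stage predicts each odd sample from its even neighbours, choosing a short predictor where the local activity estimate is high and a smoother linear predictor elsewhere.

    ```python
    import numpy as np

    def adaptive_lift(x, activity_threshold=0.5):
        """One lifting stage with hard switching of the prediction filter:
        a short (Haar-like) predictor near high-activity samples (edges),
        a smoother linear predictor in flat regions."""
        even, odd = x[0::2].astype(float), x[1::2].astype(float)
        n = len(odd)
        detail = np.empty(n)
        for i in range(n):
            left = even[i]
            right = even[i + 1] if i + 1 < len(even) else even[i]
            activity = abs(right - left)           # local activity estimate
            if activity > activity_threshold:
                pred = left                        # short predictor at edges
            else:
                pred = 0.5 * (left + right)        # linear predictor in flat areas
            detail[i] = odd[i] - pred
        approx = even.copy()                       # update step omitted for brevity
        return approx, detail
    ```

    On a flat signal every predictor is exact, so all detail coefficients vanish; near an edge the short predictor avoids the overshoot (ringing) a long symmetric filter would cause.
    
    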

  2. E-PLE: an Algorithm for Image Inpainting

    Directory of Open Access Journals (Sweden)

    Yi-Qing Wang

    2013-12-01

    Full Text Available Gaussian mixture is a powerful tool for modeling the patch prior. In this work, a probabilistic view of an existing algorithm, piecewise linear estimation (PLE), for image inpainting is presented, which leads to several theoretical and numerical improvements based on an effective use of Gaussian mixture.

  3. Prototype for Meta-Algorithmic, Content-Aware Image Analysis

    Science.gov (United States)

    2015-03-01

    PROTOTYPE FOR META-ALGORITHMIC, CONTENT-AWARE IMAGE ANALYSIS. University of Virginia, March 2015, Final Technical Report.

  4. An image-tracking algorithm based on object center distance-weighting and image feature recognition

    Institute of Scientific and Technical Information of China (English)

    JIANG Shuhong; WANG Qin; ZHANG Jianqiu; HU Bo

    2007-01-01

    A real-time image-tracking algorithm is proposed, which gives small weights to pixels farther from the object center and uses the quantized image gray scales as a template. It identifies the target's location by the mean-shift iteration method and arrives at the target's scale by using image feature recognition. It improves the kernel-based algorithm in tracking scale-changing targets. A decimation method is proposed to track large-sized targets, and real-time experimental results verify the effectiveness of the proposed algorithm.

  5. Novel Near-Lossless Compression Algorithm for Medical Sequence Images with Adaptive Block-Based Spatial Prediction.

    Science.gov (United States)

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2016-12-01

    To address the low compression efficiency of lossless compression and the low image quality of general near-lossless compression, this paper proposes a novel near-lossless compression algorithm based on adaptive spatial prediction for medical sequence images intended for diagnostic use. The proposed method employs adaptive block-size-based spatial prediction to predict blocks directly in the spatial domain, and applies a lossless Hadamard transform before quantization to improve the quality of the reconstructed images. The block-based prediction breaks the pixel-neighborhood constraint and takes full advantage of the local spatial correlations found in medical images, while the adaptive block size guarantees a more rational division of images and improved use of the local structure. The results indicate that the proposed algorithm compresses medical images efficiently and produces a better peak signal-to-noise ratio (PSNR) under the same pre-defined distortion than other near-lossless methods.
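
    The near-lossless guarantee of quantizing prediction residuals can be illustrated with a toy sketch (the left-column-mean predictor and fixed block size here are assumptions, not the paper's adaptive scheme): a uniform quantizer with step 2q+1 applied to integer residuals bounds every pixel error by q.

    ```python
    import numpy as np

    def near_lossless_block(img, bs=4, q=3):
        """Block-wise spatial prediction with uniform residual quantization.
        The quantizer step 2*q + 1 bounds every reconstruction error by q,
        which is the near-lossless guarantee."""
        h, w = img.shape
        rec = np.zeros((h, w), dtype=np.int64)
        step = 2 * q + 1
        for r in range(0, h, bs):
            for c in range(0, w, bs):
                block = img[r:r+bs, c:c+bs].astype(np.int64)
                # predict from the reconstructed column just left of the block
                pred = int(np.round(rec[r:r+bs, c-1].mean())) if c > 0 else 0
                idx = np.round((block - pred) / step).astype(np.int64)
                rec[r:r+bs, c:c+bs] = pred + idx * step
        return rec
    ```

    Only `idx` (small integers) and the block metadata would be entropy-coded; the decoder repeats the same prediction from its own reconstruction, so encoder and decoder stay in sync.
    
    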

  6. A chaos-based image encryption algorithm using alternate structure

    Institute of Scientific and Technical Information of China (English)

    ZHANG YiWei; WANG YuMin; SHEN XuBang

    2007-01-01

    Combined with two chaotic maps, a novel alternate structure is applied to an image cryptosystem. In the proposed algorithm, a generalized cat map is used for permutation and diffusion, while a one-way coupled map lattice (OCML) is applied for substitution. These two methods operate alternately in every round of the encryption process, and the two subkeys employed in the different chaotic maps are generated by spreading the master key. Decryption has the same structure as the encryption algorithm, but the round keys must be applied in reverse order. Cryptanalysis shows that the proposed algorithm has good immunity to many forms of attack. Moreover, the algorithm features high execution speed and a compact program, which makes it suitable for various software and hardware applications.
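
    The permutation stage can be illustrated with the generalized Arnold cat map, a common choice for chaotic pixel scrambling (a sketch under that assumption; the paper's exact map parameters and key schedule are not reproduced): the map's matrix has determinant 1, so it permutes the pixels of an N x N image bijectively.

    ```python
    import numpy as np

    def cat_map(img, a=1, b=1, rounds=1):
        """Generalized Arnold cat-map permutation of an N x N image:
        (x, y) -> (x + a*y, b*x + (a*b + 1)*y) mod N.  The matrix
        [[1, a], [b, a*b + 1]] has determinant 1, so the map is a
        bijection and therefore invertible for decryption."""
        n = img.shape[0]
        out = img
        for _ in range(rounds):
            x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
            nx = (x + a * y) % n
            ny = (b * x + (a * b + 1) * y) % n
            scrambled = np.empty_like(out)
            scrambled[nx, ny] = out[x, y]   # bijection: each target written once
            out = scrambled
        return out
    ```

    In a full cryptosystem the parameters (a, b) and the round count would come from the chaotic key schedule, and a substitution stage (the OCML in the abstract) would follow each permutation round.
    
    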

  7. An Efficient Block Matching Algorithm Using Logical Image

    Directory of Open Access Journals (Sweden)

    Manisha Pal

    2014-12-01

    Full Text Available Motion estimation, which has been widely used in various image-sequence coding schemes, plays a key role in the transmission and storage of video signals at reduced bit rates. There are two classes of motion estimation methods: block matching algorithms (BMA) and pel-recursive algorithms (PRA). Due to their implementation simplicity, block matching algorithms have been widely adopted by video coding standards such as CCITT H.261, ITU-T H.263, and MPEG. In BMA, the current frame is partitioned into fixed-size rectangular blocks, and the motion vector for each block is estimated by finding the best-matching block of pixels within a search window in the previous frame according to a matching criterion. The goal of this work is a fast method for motion estimation and motion segmentation using the proposed model. Modern communication is facilitated by developments in wired and wireless networks, and transmitting large data files over limited-bandwidth channels remains a challenge; block matching algorithms are very useful in achieving efficient, acceptable compression, since they determine the total computation cost and the effective bit budget. This paper presents a novel method using the three-step and diamond algorithms with a modified search pattern based on a logical image for block-based motion estimation. The proposed algorithm achieves improved PSNR together with better (faster) computation time compared with the original three-step search (3SS/TSS) method. Experimental results on a number of video sequences demonstrate the advantages of the proposed motion estimation technique.
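
    A compact sketch of the classical three-step search that the paper modifies (the logical-image search pattern itself is not reproduced): starting from the zero motion vector, probe the 8 neighbours of the current best candidate at step size 4, move to the best one, and halve the step until it reaches 1.

    ```python
    import numpy as np

    def sad(a, b):
        """Sum of absolute differences: the usual block-matching criterion."""
        return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

    def three_step_search(ref, cur, r, c, bs=8, step=4):
        """Three Step Search for the block of `cur` at (r, c): probe the
        8 neighbours of the current best motion vector, halving the step
        each round (4 -> 2 -> 1)."""
        block = cur[r:r+bs, c:c+bs]
        best = (0, 0)
        best_cost = sad(block, ref[r:r+bs, c:c+bs])
        while step >= 1:
            for dr in (-step, 0, step):
                for dc in (-step, 0, step):
                    vr, vc = best[0] + dr, best[1] + dc
                    rr, cc = r + vr, c + vc
                    if 0 <= rr <= ref.shape[0] - bs and 0 <= cc <= ref.shape[1] - bs:
                        cost = sad(block, ref[rr:rr+bs, cc:cc+bs])
                        if cost < best_cost:
                            best_cost, best = cost, (vr, vc)
            step //= 2
        return best, best_cost
    ```

    TSS evaluates at most 25 candidates instead of the 225 an exhaustive ±7 search would need, which is the source of the speed-up the paper builds on.
    
    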

  8. Secure and robust steganographic algorithm for binary images

    Science.gov (United States)

    Agaian, Sos S.; Cherukuri, Ravindranath

    2006-05-01

    In recent years, active research has concentrated on authenticating signatures, tracking documents in digital libraries, tamper detection of scanned documents, and secured communication using binary images. Binary-image steganographic systems provide a solution for these issues. The two-color constraint of binary images limits the extension of various LSB embedding techniques to the binary case. In this paper, we present a new data-hiding system for binary images and scanned documents. The system initially identifies embeddable blocks and enforces specific block statistics to hide sensitive information. The distribution of flippable pixels in these blocks is highly uneven over the image, so a variable block-embedding threshold is employed to capitalize on this uneven distribution. In addition, we present a measure to find the best cover for a specific file of sensitive information. Simulations were performed over 50 binary images, including scanned documents, cartoons, and thresholded color images. The results show that (1) the amount of data embedded is higher than in existing algorithms (such as K.H. Hwang et al. [5], J. Chen et al. [10], and M.Y. Wu et al. [9]), and (2) the visual distortion of the cover image is minimal compared with existing algorithms (such as J. Chen et al. [10] and M.Y. Wu et al. [9]).

  9. Application of particle filtering algorithm in image reconstruction of EMT

    Science.gov (United States)

    Wang, Jingwen; Wang, Xu

    2015-07-01

    To improve the image quality of electromagnetic tomography (EMT), a new image reconstruction method based on a particle filtering algorithm is presented. First, the principle of EMT image reconstruction is analyzed, the search for the optimal reconstruction is described as a system state-estimation process, and the state-space model is established. Second, to obtain the minimum-variance estimate of the reconstruction, the optimal weights of random samples drawn from the state space are calculated from the measured information. Finally, simulation experiments with five different flow regimes are performed. The experimental results show that the average image error of the reconstructions obtained by the proposed method is 42.61% and the average correlation coefficient with the original image is 0.8706, both much better than the corresponding indicators obtained by the LBP, Landweber, and Kalman filter algorithms. This EMT image reconstruction method therefore offers high efficiency and accuracy, and provides a new means for EMT research.

  10. Cropping and noise resilient steganography algorithm using secret image sharing

    Science.gov (United States)

    Juarez-Sandoval, Oswaldo; Fierro-Radilla, Atoany; Espejel-Trujillo, Angelina; Nakano-Miyatake, Mariko; Perez-Meana, Hector

    2015-03-01

    This paper proposes an image steganography scheme in which a secret image is hidden in a cover image using a secret image sharing (SIS) scheme. By taking advantage of the fault-tolerance of (k,n)-threshold SIS, where any k of the n shares (k≤n) recover the secret data without ambiguity, the proposed steganography algorithm becomes resilient to cropping and impulsive-noise contamination. Among the many SIS schemes proposed to date, Lin and Chan's scheme is selected due to its lossless recovery of a large amount of secret data. The proposed scheme is evaluated from several points of view: imperceptibility of the stego-image with respect to its original cover image, and robustness of the hidden data to cropping and impulsive-noise contamination. The evaluation results show a high quality of the extracted secret image even when the stego-image has suffered more than 20% cropping or high-density noise contamination.
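
    The fault-tolerance exploited above can be seen in a minimal Shamir-style (k, n) sharing sketch over GF(257), the prime just above the 8-bit pixel range (an illustrative stand-in; Lin and Chan's actual scheme packs pixel data differently): any k of the n shares reconstruct the value exactly.

    ```python
    import random

    P = 257  # prime just above the 8-bit pixel range

    def make_shares(secret, k, n):
        """Shamir-style (k, n) sharing of one pixel value: evaluate a random
        degree-(k-1) polynomial with the secret as constant term at n points."""
        coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
        return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def recover(shares):
        """Lagrange interpolation at x = 0 from any k shares."""
        secret = 0
        for j, (xj, yj) in enumerate(shares):
            num = den = 1
            for m, (xm, _) in enumerate(shares):
                if m != j:
                    num = num * (-xm) % P
                    den = den * (xj - xm) % P
            # pow(den, P - 2, P) is the modular inverse of den (Fermat)
            secret = (secret + yj * num * pow(den, P - 2, P)) % P
        return secret
    ```

    Losing up to n−k shares (e.g., to cropping or noise) costs nothing, which is exactly the resilience the stego-scheme inherits.
    
    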

  11. Object Recognition Algorithm Utilizing Graph Cuts Based Image Segmentation

    Directory of Open Access Journals (Sweden)

    Zhaofeng Li

    2014-02-01

    Full Text Available This paper concentrates on designing an object recognition algorithm utilizing image segmentation. The main innovation is to convert the image segmentation problem into a graph cut problem, where the cut is obtained by calculating the probability that the intensity of a given pixel belongs to the object or to the background. After the graph cut process, pixels in the same component are similar, and pixels in different components are dissimilar. To detect objects in a test image, the visual similarity between segments of the test image and the object types deduced from the training images is estimated. Finally, a series of experiments is conducted for performance evaluation. The experimental results illustrate that, compared with existing methods, the proposed scheme can effectively detect salient objects. In particular, we show that the precision of object recognition is proportional to the image segmentation accuracy.

  12. Advances and applications of optimised algorithms in image processing

    CERN Document Server

    Oliva, Diego

    2017-01-01

    This book presents a study of the use of optimization algorithms in complex image processing problems. The problems selected explore areas ranging from the theory of image segmentation to the detection of complex objects in medical images. Furthermore, the concepts of machine learning and optimization are analyzed to provide an overview of the application of these tools in image processing. The material has been compiled from a teaching perspective. Accordingly, the book is primarily intended for undergraduate and postgraduate students of Science, Engineering, and Computational Mathematics, and can be used for courses on Artificial Intelligence, Advanced Image Processing, Computational Intelligence, etc. Likewise, the material can be useful for research from the evolutionary computation, artificial intelligence and image processing co.

  13. Noise reduction in selective computational ghost imaging using genetic algorithm

    Science.gov (United States)

    Zafari, Mohammad; Ahmadi-Kandjani, Sohrab; Kheradmand, Reza

    2017-03-01

    Recently, we presented selective computational ghost imaging (SCGI) as an advanced technique for enhancing the security level of encrypted ghost images. In this paper, we propose a modified method to improve the quality of ghost images reconstructed by the SCGI technique. The method is based on background subtraction using a genetic algorithm (GA), which eliminates background noise and yields background-free ghost images. Analysis of the universal image quality index using experimental data demonstrates the advantage of this modification. In particular, the calculated image quality index for modified SCGI over 4225 realizations shows an 11-fold improvement with respect to the SCGI technique, and a 20-fold improvement in comparison to the conventional CGI technique.

  14. Heuristic Scheduling Algorithm Oriented Dynamic Tasks for Imaging Satellites

    Directory of Open Access Journals (Sweden)

    Maocai Wang

    2014-01-01

    Full Text Available Imaging satellite scheduling is an NP-hard problem with many complex constraints. This paper studies the scheduling problem for dynamic tasks oriented to emergency cases. After analyzing the dynamic properties of satellite scheduling, an optimization model is proposed. Based on the model, two heuristic algorithms are proposed to solve the problem. The first, named IDI, arranges new tasks by inserting or deleting them, then repeatedly re-inserting them in order of priority from low to high. The second, called ISDR, adopts four steps: insert directly, insert by shifting, insert by deleting, and re-insert the deleted tasks. Moreover, two heuristic factors, the congestion degree of a time window and the overlapping degree of a task, are employed to improve the algorithms' performance. Finally, a case study is used to test the algorithms. The results show that the IDI algorithm is better than ISDR in terms of running time, while the ISDR algorithm with heuristic factors is more effective with regard to solution quality. The results also show that the method performs well for larger numbers of dynamic tasks in comparison with the other two methods.
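
    The "insert directly" step shared by both heuristics can be sketched as a greedy pass (a simplification under assumed task structure; the shifting and deleting steps, and the two heuristic factors, are omitted):

    ```python
    def insert_tasks(scheduled, new_tasks):
        """Greedy 'insert directly' step: try each new task, highest
        priority first, and accept it only if its imaging time window
        does not overlap any already-scheduled task."""
        def overlaps(a, b):
            return a["start"] < b["end"] and b["start"] < a["end"]

        plan = list(scheduled)
        for task in sorted(new_tasks, key=lambda t: -t["priority"]):
            if not any(overlaps(task, s) for s in plan):
                plan.append(task)
        return plan
    ```

    The full IDI/ISDR heuristics extend this pass: when a direct insert fails, they try shifting neighbouring tasks or deleting lower-priority ones, then re-insert what was deleted.
    
    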

  15. A local search with smoothing approximation in hybrid algorithms of diagnostics of hydromechanical systems

    Directory of Open Access Journals (Sweden)

    V. D. Sulimov

    2014-01-01

    Full Text Available Modern methods for solving practical problems related to the trouble-free, efficient, and prolonged operation of complex systems presume the application of computational diagnostics. Input data for diagnosis usually contain the results of experimental measurements of certain characteristics of the system under investigation, such as registered parameters of oscillatory motion or an impact process. The diagnostic procedure is founded on the solution of the corresponding inverse spectral problem, which in many cases may be reduced to minimization of an appropriate error criterion. Eigenvalues from the direct problem for the mathematical model and the measured data for the system are used to construct this criterion. When solving such inverse problems, one special feature must be considered: the error criterion may be a nondifferentiable and multiextremal function. Consideration is given to problems of identifying anomalies in the phase constitution of the coolant circulating through the reactor primary circuit. The main dynamical characteristics of the object under diagnosis are considered as continuous functions on a bounded set of control variables. The possible occurrence of anomalies in the phase constitution of the coolant can be detected through changes in the dynamical characteristics of the two-phase flow. The criterion functions are assumed to be continuous, Lipschitzian, multiextremal, and not everywhere differentiable. Two novel hybrid algorithms are proposed that scan the search space using the modern stochastic Multi-Particle Collision Algorithm, built on an analogy with absorption and scattering processes for nuclear particles. The local search is implemented using the hyperbolic smoothing function method for the first algorithm, and the linearization method with two-parametric smoothing approximations of the criteria for the second one.
Some results on solving

  16. Enhanced temporal resolution at cardiac CT with a novel CT image reconstruction algorithm: Initial patient experience

    Energy Technology Data Exchange (ETDEWEB)

    Apfaltrer, Paul, E-mail: paul.apfaltrer@medma.uni-heidelberg.de [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Institute of Clinical Radiology and Nuclear Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim (Germany); Schoendube, Harald, E-mail: harald.schoendube@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Schoepf, U. Joseph, E-mail: schoepf@musc.edu [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Allmendinger, Thomas, E-mail: thomas.allmendinger@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Tricarico, Francesco, E-mail: francescotricarico82@gmail.com [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Department of Bioimaging and Radiological Sciences, Catholic University of the Sacred Heart, “A. Gemelli” Hospital, Largo A. Gemelli 8, Rome (Italy); Schindler, Andreas, E-mail: andreas.schindler@campus.lmu.de [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Vogt, Sebastian, E-mail: sebastian.vogt@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Sunnegårdh, Johan, E-mail: johan.sunnegardh@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); and others

    2013-02-15

    Objective: To evaluate the effect of a temporal resolution improvement method (TRIM) for cardiac CT on diagnostic image quality for coronary artery assessment. Materials and methods: The TRIM algorithm employs an iterative approach to reconstruct images from less than 180° of projections and uses a histogram constraint to prevent the occurrence of limited-angle artifacts. This algorithm was applied in 11 obese patients (7 men, 67.2 ± 9.8 years) who had undergone second-generation dual-source cardiac CT with 120 kV, 175–426 mAs, and 500 ms gantry rotation. All data were reconstructed with a temporal resolution of 250 ms using traditional filtered back-projection (FBP) and of 200 ms using the TRIM algorithm. Contrast attenuation and contrast-to-noise ratio (CNR) were measured in the ascending aorta. The presence and severity of coronary motion artifacts were rated on a 4-point Likert scale. Results: All scans were considered of diagnostic quality. Mean BMI was 36 ± 3.6 kg/m². Average heart rate was 60 ± 9 bpm. Mean effective dose was 13.5 ± 4.6 mSv. When comparing FBP- and TRIM-reconstructed series, the attenuation within the ascending aorta (392 ± 70.7 vs. 396.8 ± 70.1 HU, p > 0.05) and CNR (13.2 ± 3.2 vs. 11.7 ± 3.1, p > 0.05) were not significantly different. A total of 110 coronary segments were evaluated. All studies were deemed diagnostic; however, there was a significant (p < 0.05) difference in the severity score distribution of coronary motion artifacts between FBP (median = 2.5) and TRIM (median = 2.0) reconstructions. Conclusion: The algorithm evaluated here delivers diagnostic image quality of the coronary arteries despite 500 ms gantry rotation. Possible applications include improvement of cardiac imaging on slower gantry rotation systems or mitigation of the trade-off between temporal resolution and CNR in obese patients.

  17. Classification decision tree algorithm assisting in diagnosing solitary pulmonary nodule by SPECT/CT fusion imaging

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Objective To develop a classification tree algorithm to improve the diagnostic performance of 99mTc-MIBI SPECT/CT fusion imaging in differentiating solitary pulmonary nodules (SPNs). Methods Forty-four SPNs, including 30 malignant and 14 benign cases eventually identified pathologically, were included in this prospective study. All patients underwent 99mTc-MIBI SPECT/CT scanning at an early stage and a delayed stage before operation. Thirty predictor variables, including 11 clinical variables, 4 variable...

  18. A New Efficient Reordering Algorithm for Color Palette Image

    Directory of Open Access Journals (Sweden)

    Somaye Akbari Moghadam

    2013-11-01

    Full Text Available Palette re-ordering is a class of pre-processing methods aimed at finding a permutation of the color palette such that the resulting image of indexes is more amenable to compression. The efficiency of lossless compression algorithms for fixed-palette (indexed) images may change if a different indexing scheme is adopted. Obtaining an optimal re-indexing scheme is suspected to be a hard problem, and only approximate solutions have been provided in the literature. In this paper, we explore a heuristic method to improve compression-ratio performance. The results indicate that the proposed approach is effective and acceptable.

  19. Comparison of algorithms for ultrasound image segmentation without ground truth

    Science.gov (United States)

    Sikka, Karan; Deserno, Thomas M.

    2010-02-01

    Image segmentation is a prerequisite to medical image analysis. A variety of segmentation algorithms have been proposed, and most are evaluated on a small dataset or based on classification of a single feature. The lack of a gold standard (ground truth) further adds to the discrepancy in these comparisons. This work proposes a new methodology for comparing image segmentation algorithms without ground truth by building a region-correlation matrix. Suitable distance measures are then proposed for quantitative assessment of similarity. The first measure takes into account the degree of region overlap or identical match; the second considers the degree of splitting or misclassification by using an appropriate penalty term. These measures are shown to satisfy the axioms of a quasi-metric, and they are applied in a comparative analysis of synthetic segmentation maps to show their direct correlation with human intuition of similar segmentations. Since ultrasound images are difficult to segment and usually lack a ground truth, the measures are further used to compare a recently proposed spectral clustering algorithm (encoding spatial and edge information) with standard k-means on abdominal ultrasound images. Improving the parameterization and enlarging the feature space for k-means steadily increased segmentation quality to that of spectral clustering.
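
    The region-correlation matrix can be sketched as a label contingency table (a plausible reading of the construction; the paper's exact definition and penalty terms are not reproduced): entry (i, j) counts the pixels assigned to region i by one segmentation and region j by the other.

    ```python
    import numpy as np

    def region_correlation(seg_a, seg_b):
        """Contingency (region-correlation) matrix of two label maps.
        Identical segmentations give a matrix with a single non-zero
        entry per row and per column; region splitting spreads a row's
        mass over several columns."""
        labels_a = np.unique(seg_a)
        labels_b = np.unique(seg_b)
        m = np.zeros((len(labels_a), len(labels_b)), dtype=int)
        for i, la in enumerate(labels_a):
            for j, lb in enumerate(labels_b):
                m[i, j] = np.sum((seg_a == la) & (seg_b == lb))
        return m
    ```

    Overlap- and splitting-based distance measures, like the two the paper proposes, can then be computed from this matrix alone, without any ground-truth map.
    
    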

  20. Gold Nanoconstructs for Multimodal Diagnostic Imaging and Photothermal Cancer Therapy

    Science.gov (United States)

    Coughlin, Andrew James

    Cancer accounts for nearly 1 out of every 4 deaths in the United States, and because conventional treatments are limited by morbidity and off-target toxicities, improvements in cancer management are needed. This thesis further develops nanoparticle-assisted photothermal therapy (NAPT) as a viable treatment option for cancer patients. NAPT enables localized ablation of disease because heat generation only occurs where tissue permissive near-infrared (NIR) light and absorbing nanoparticles are combined, leaving surrounding normal tissue unharmed. Two principle approaches were investigated to improve the specificity of this technique: multimodal imaging and molecular targeting. Multimodal imaging affords the ability to guide NIR laser application for site-specific NAPT and more holistic characterization of disease by combining the advantages of several diagnostic technologies. Towards the goal of image-guided NAPT, gadolinium-conjugated gold-silica nanoshells were engineered and demonstrated to enhance imaging contrast across a range of diagnostic modes, including T1-weighted magnetic resonance imaging, X-Ray, optical coherence tomography, reflective confocal microscopy, and two-photon luminescence in vitro as well as within an animal tumor model. Additionally, the nanoparticle conjugates were shown to effectively convert NIR light to heat for applications in photothermal therapy. Therefore, the broad utility of gadolinium-nanoshells for anatomic localization of tissue lesions, molecular characterization of malignancy, and mediators of ablation was established. Molecular targeting strategies may also improve NAPT by promoting nanoparticle uptake and retention within tumors and enhancing specificity when malignant and normal tissue interdigitate. Here, ephrinA1 protein ligands were conjugated to nanoshell surfaces for particle homing to overexpressed EphA2 receptors on prostate cancer cells. In vitro, successful targeting and subsequent photothermal ablation of

  1. Analysis of Fast- ICA Algorithm for Separation of Mixed Images

    Directory of Open Access Journals (Sweden)

    Tanmay Awasthy

    2013-10-01

    Full Text Available Independent component analysis (ICA) is a recently developed method whose aim is to find a linear representation of non-Gaussian data such that the components are statistically independent, or as independent as possible. Such techniques are actively used in statistical image processing and in unsupervised neural learning applications. This paper presents the FastICA algorithm for the separation of mixed images. To solve blind signal separation problems, the ICA approach exploits the statistical independence of the source signals. The paper focuses on the theory and methods of ICA, in contrast to classical transformations, along with the application of this method to blind source separation. To illustrate the algorithm, the unmixing process is visualized on a set of images, and simulations are presented to express the results of the analysis.
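
    A minimal FastICA sketch in NumPy (deflation with the tanh contrast; simplified relative to standard implementations, with a fixed iteration count instead of a convergence test): whiten the mixtures, then find, one at a time, unit vectors maximizing non-Gaussianity of the projection.

    ```python
    import numpy as np

    def fast_ica(x, n_iter=200, seed=0):
        """Minimal FastICA: whiten the mixed signals (rows of x), then run
        the fixed-point iteration with the tanh contrast, deflating each
        found component to keep the estimates orthogonal."""
        rng = np.random.default_rng(seed)
        x = x - x.mean(axis=1, keepdims=True)
        # whitening via eigendecomposition of the covariance
        d, e = np.linalg.eigh(np.cov(x))
        z = e @ np.diag(d ** -0.5) @ e.T @ x
        w_all = []
        for _ in range(x.shape[0]):
            w = rng.standard_normal(x.shape[0])
            for _ in range(n_iter):
                wx = w @ z
                # FastICA fixed-point update with g = tanh
                w_new = (z * np.tanh(wx)).mean(axis=1) \
                        - (1 - np.tanh(wx) ** 2).mean() * w
                for v in w_all:                 # deflation: stay orthogonal
                    w_new -= (w_new @ v) * v
                w_new /= np.linalg.norm(w_new)
                w = w_new
            w_all.append(w)
        return np.vstack(w_all) @ z             # estimated sources
    ```

    The sources are recovered only up to permutation and sign, the usual ICA ambiguity, so results are judged by absolute correlation with the originals.
    
    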

  2. A Distortion Input Parameter in Image Denoising Algorithms with Wavelets

    Directory of Open Access Journals (Sweden)

    Anisia GOGU

    2009-07-01

    Full Text Available The problem of wavelet-based image denoising is considered. The paper proposes an image denoising method that imposes a distortion input parameter instead of a threshold. The method has two algorithms. The first runs offline: it is applied to a prototype of the image class and builds a specific dependency, linear or nonlinear, between the final desired distortion and the required retention probability of the detail coefficients. The second algorithm directly applies denoising with a threshold computed from the previous step. The threshold is estimated using the probability density function of the detail coefficients and the imposed probability of the coefficients to be kept. The results obtained are at the same quality level as other well-known methods.
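
    The step from a retention probability to a threshold can be sketched directly (an illustrative empirical-quantile reading; the paper estimates the probability density of the detail coefficients instead): the threshold is the (1 − p) quantile of the coefficient magnitudes, so a fraction p of coefficients survives hard thresholding.

    ```python
    import numpy as np

    def threshold_for_keep_probability(detail_coeffs, keep_prob):
        """Map a target retention probability of detail coefficients to a
        hard threshold: the (1 - keep_prob) quantile of the magnitudes."""
        mags = np.abs(np.asarray(detail_coeffs, dtype=float)).ravel()
        return np.quantile(mags, 1.0 - keep_prob)

    def hard_threshold(coeffs, t):
        """Zero every coefficient whose magnitude falls below t."""
        c = np.asarray(coeffs, dtype=float)
        return np.where(np.abs(c) >= t, c, 0.0)
    ```

    The offline stage of the paper's method would supply `keep_prob` from the desired final distortion; here it is taken as given.
    
    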

  3. Study on scalable coding algorithm for medical image.

    Science.gov (United States)

    Hongxin, Chen; Zhengguang, Liu; Hongwei, Zhang

    2005-01-01

    According to the characteristics of medical images and the wavelet transform, a scalable coding algorithm is presented that can be used for image transmission over a network. The wavelet transform makes up for the weaknesses of the DCT and is similar to the human visual system. The second generation of the wavelet transform, the lifting scheme, can be computed entirely in integer form: it is divided into several steps, each realized as an integer-to-integer calculation. The lifting scheme simplifies the computing process and increases transform precision. According to the properties of the wavelet sub-bands, the wavelet coefficients are organized in order of importance, so the code stream is formed progressively and is scalable in resolution. Experimental results show that the algorithm can be used effectively for medical image compression and is suitable for long-distance browsing.
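
    The integer-to-integer lifting described above can be sketched with the simplest case, the Haar (S) transform (a minimal sketch, not the paper's filter): a predict step forms differences and an update step uses floor division, so the transform is exactly invertible on integers, which is what lossless-capable medical coding requires.

    ```python
    import numpy as np

    def int_haar_forward(x):
        """Integer-to-integer Haar transform via lifting:
        predict (difference) then update (floored half-difference)."""
        even, odd = x[0::2], x[1::2]
        d = odd - even            # predict step: detail coefficients
        s = even + (d >> 1)       # update step: approximation (floor div by 2)
        return s, d

    def int_haar_inverse(s, d):
        """Undo the lifting steps in reverse order; reconstruction is exact."""
        even = s - (d >> 1)
        odd = d + even
        out = np.empty(s.size + d.size, dtype=s.dtype)
        out[0::2], out[1::2] = even, odd
        return out
    ```

    Because each lifting step only adds a deterministic integer function of the other channel, inverting is just subtracting it back, with no rounding loss to accumulate.
    
    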

  4. [Maize seed identification using hyperspectral imaging and SVDD algorithm].

    Science.gov (United States)

    Zhu, Qi-Bing; Feng, Zhao-Li; Huang, Min; Zhu, Xiao

    2013-02-01

    The sufficiency of feature extraction and the rationality of classifier design are two key issues affecting the accuracy of maize seed recognition. In the present study, hyperspectral images of maize seeds were acquired using a hyperspectral imaging system, and the image entropy of the seeds at each wavelength was extracted as a classification feature. Support vector data description (SVDD) was then used to develop a classifier model for each variety of maize seed. The SVDD models yielded 94.14% average test accuracy for known-variety samples and 92.28% average test accuracy for new-variety samples. The results show that the proposed method achieves accurate identification of maize seeds and solves the problem of new-variety seeds being misclassified by traditional classification algorithms.

  5. Targeted diagnostic magnetic nanoparticles for medical imaging of pancreatic cancer.

    Science.gov (United States)

    Rosenberger, I; Strauss, A; Dobiasch, S; Weis, C; Szanyi, S; Gil-Iceta, L; Alonso, E; González Esparza, M; Gómez-Vallejo, V; Szczupak, B; Plaza-García, S; Mirzaei, S; Israel, L L; Bianchessi, S; Scanziani, E; Lellouche, J-P; Knoll, P; Werner, J; Felix, K; Grenacher, L; Reese, T; Kreuter, J; Jiménez-González, M

    2015-09-28

    Highly aggressive cancer types such as pancreatic cancer have a mortality rate of up to 80% within the first 6 months after diagnosis. To reduce this high mortality rate, more sensitive diagnostic tools allowing early-stage medical imaging of even very small tumours are needed. For this purpose, magnetic, biodegradable nanoparticles prepared using recombinant human serum albumin (rHSA) and incorporated iron oxide (maghemite, γ-Fe2O3) nanoparticles were developed. Galectin-1 was chosen as the target receptor, as this protein is upregulated in pancreatic cancer and its precursor lesions but neither in healthy pancreatic tissue nor in pancreatitis. Tissue plasminogen activator derived peptides (t-PA-ligands), which have a high affinity to galectin-1, were chosen as targeting moieties and were covalently attached to the nanoparticle surface. Improved targeting and imaging properties were shown in mice using single photon emission computed tomography-computed tomography (SPECT-CT), a handheld gamma camera, and magnetic resonance imaging (MRI).

  6. IDEAL: Images Across Domains, Experiments, Algorithms and Learning

    Science.gov (United States)

    Ushizima, Daniela M.; Bale, Hrishikesh A.; Bethel, E. Wes; Ercius, Peter; Helms, Brett A.; Krishnan, Harinarayan; Grinberg, Lea T.; Haranczyk, Maciej; Macdowell, Alastair A.; Odziomek, Katarzyna; Parkinson, Dilworth Y.; Perciano, Talita; Ritchie, Robert O.; Yang, Chao

    2016-09-01

    Research across science domains is increasingly reliant on image-centric data. Software tools are in high demand to uncover relevant, but hidden, information in digital images, such as those coming from faster next generation high-throughput imaging platforms. The challenge is to analyze the data torrent generated by the advanced instruments efficiently, and provide insights such as measurements for decision-making. In this paper, we give an overview of work performed by an interdisciplinary team of computational and materials scientists, aimed at designing software applications and coordinating research efforts connecting (1) emerging algorithms for dealing with large and complex datasets; (2) data analysis methods with emphasis on pattern recognition and machine learning; and (3) advances in evolving computer architectures. Engineering tools around these efforts accelerate the analyses of image-based recordings, improve reusability and reproducibility, scale scientific procedures by reducing time between experiments, increase efficiency, and open opportunities for more users of the imaging facilities. This paper describes our algorithms and software tools, showing results across image scales, demonstrating how our framework plays a role in improving image understanding for quality control of existing materials and discovery of new compounds.

  8. Effective FCM noise clustering algorithms in medical images.

    Science.gov (United States)

    Kannan, S R; Devi, R; Ramathilagam, S; Takezawa, K

    2013-02-01

    The main motivation of this paper is to introduce a class of robust non-Euclidean distance measures for the original data space, used to derive new objective functions for clustering non-Euclidean structures in data, thereby enhancing the robustness of the original clustering algorithms against noise and outliers. The new objective functions of the proposed algorithms are realized by incorporating the noise-clustering concept into the entropy-based fuzzy C-means algorithm, with a suitable noise distance employed to account for noisy data during clustering. Initial cluster prototypes are obtained with a prototype initialization method, so that the final result is reached in fewer iterations. To evaluate the performance of the proposed methods in reducing the noise level, experiments were carried out on a synthetic image corrupted by Gaussian noise. The superiority of the proposed methods was further examined through an experimental study on medical images. The experimental results show that the proposed algorithms perform significantly better than the standard existing algorithms. The classification accuracy of the proposed fuzzy C-means segmentation method is assessed using the silhouette validity index.
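
    The noise-clustering idea can be sketched by folding a constant noise distance δ (in the style of Dave's noise cluster) into the standard fuzzy C-means updates; the entropy term and the paper's prototype initialization are not reproduced, and the data below are toy values:

```python
import numpy as np

def noise_fcm(X, init, m=2.0, delta=2.0, iters=50):
    # Fuzzy C-means with an extra "noise cluster" lying at a constant distance
    # delta from every point: outliers gain membership in the noise cluster,
    # so their membership in (and pull on) the real clusters collapses.
    centres = X[list(init)].astype(float)
    for _ in range(iters):
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1) + 1e-12
        inv = d2 ** (-1.0 / (m - 1.0))
        noise = delta ** (-2.0 / (m - 1.0))
        u = inv / (inv.sum(axis=1, keepdims=True) + noise)  # rows sum to < 1
        w = u ** m
        centres = (w.T @ X) / w.sum(axis=0)[:, None]
    return centres, u

# Two tight clusters and one gross outlier; naive initialisation at two points.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1],
              [50.0, -40.0]])
centres, u = noise_fcm(X, init=(0, 3))
```

    The outlier's total membership across the real clusters stays tiny because the noise cluster absorbs it, so the centres are essentially unaffected by it.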

  9. Low-Complexity Regularization Algorithms for Image Deblurring

    KAUST Repository

    Alanazi, Abdulrahman

    2016-11-01

    Image restoration problems deal with images in which information has been degraded by blur or noise. In practice, the blur is usually caused by atmospheric turbulence, motion, camera shake, and several other mechanical or physical processes. In this study, we present two regularization algorithms for the image deblurring problem. We first present a new method based on solving a regularized least-squares (RLS) problem. This method is proposed to find a near-optimal value of the regularization parameter in the RLS problems. Experimental results on the non-blind image deblurring problem are presented. In all experiments, comparisons are made with three benchmark methods. The results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and structural similarity, as well as the visual quality of the deblurred images. To reduce the complexity of the proposed algorithm, we propose a technique based on the bootstrap method to estimate the regularization parameter in low and high-resolution images. Numerical results show that the proposed technique can effectively reduce the computational complexity of the proposed algorithms. In addition, for some cases where the point spread function (PSF) is separable, we propose using a Kronecker product so as to reduce the computations. Furthermore, in the case where the image is smooth, it is always desirable to replace the regularization term in the RLS problems by a total variation term. Therefore, we propose a novel method for adaptively selecting the regularization parameter in a so-called square root regularized total variation (SRTV). Experimental results demonstrate that our proposed method outperforms the other benchmark methods when applied to smooth images in terms of PSNR, SSIM and the restored image quality. In this thesis, we focus on the non-blind image deblurring problem, where the blur kernel is assumed to be known. 
However, we developed algorithms that also work
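
    As a minimal sketch of the regularized least-squares step described above (not the thesis's parameter-selection or bootstrap schemes), Tikhonov-regularized deconvolution has a closed form in the Fourier domain under periodic boundary conditions; the blur, noise level, and λ below are toy assumptions:

```python
import numpy as np

def tikhonov_deblur(y, psf, lam):
    # Closed-form minimiser of ||y - h*x||^2 + lam*||x||^2 under circular
    # convolution: X = conj(H) Y / (|H|^2 + lam), evaluated with the FFT.
    H = np.fft.fft2(psf, s=y.shape)
    Y = np.fft.fft2(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

rng = np.random.default_rng(1)
x = np.zeros((32, 32)); x[12:20, 12:20] = 1.0          # simple test scene
psf = np.outer([0.25, 0.5, 0.25], [0.25, 0.5, 0.25])   # separable 3x3 blur
y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(psf, s=x.shape)))
y += 0.01 * rng.standard_normal(x.shape)               # measurement noise
x_hat = tikhonov_deblur(y, psf, lam=1e-2)
```

    For a separable PSF such as the outer product above, the blur matrix factors as a Kronecker product of two one-dimensional convolutions, which is the structure the computation-reduction variant exploits.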

  10. Infrared imaging diagnostics for intense pulsed electron beam

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Xiao; Shen, Jie; Liu, Wenbin; Zhong, Haowen; Zhang, Jie; Zhang, Gaolong; Le, Xiaoyun, E-mail: xyle@buaa.edu.cn [School of Physics and Nuclear Energy Engineering, Beihang University, Beijing 100191 (China); International Research Center for Nuclei and Particles in the Cosmos, Beihang University, Beijing 100191 (China); Qu, Miao; Yan, Sha [Institute of Heavy Ion Physics, Peking University, Beijing 100871 (China)

    2015-08-15

    An infrared imaging method for two-dimensional calorimetric diagnostics has been developed for intense pulsed electron beams (IPEB). By using a 100-μm-thick tungsten film as the infrared heat sink for the IPEB, the emitting uniformity of the electron source can be analyzed to evaluate the efficiency and stability of the diode system. A two-dimensional axisymmetric finite element heat transfer simulation, combined with Monte Carlo calculation, was performed for error estimation and optimization of the method. The method was tested with an IPEB generated by an explosive emission electron diode with a pulse duration (FWHM) of 80 ns, electron energy up to 450 keV, and a total beam current of over 1 kA. The results showed that it is possible to measure the cross-sectional energy density distribution of the IPEB with an energy sensitivity of 0.1 J/cm² and a spatial resolution of 1 mm. Technical details, such as protection against bremsstrahlung γ photons and the functional extensibility of the method, are also discussed.
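
    The calorimetric inversion behind such a diagnostic is a thin-film energy balance, E = ρ·c·t·ΔT per unit area; a back-of-envelope sketch (the tungsten property values are textbook approximations, and the adiabatic thin-film assumption is mine, not stated in the record):

```python
# Energy density inferred from the temperature rise of the tungsten heat sink,
# assuming all deposited beam energy heats the film adiabatically.
rho = 19.3     # tungsten density, g/cm^3 (approximate)
c_p = 0.134    # tungsten specific heat, J/(g*K) (approximate)
t = 0.01       # film thickness, cm (100 um)

def energy_density(dT):
    # Areal energy density in J/cm^2 for a temperature rise dT in kelvin.
    return rho * c_p * t * dT

# Temperature rise corresponding to the quoted 0.1 J/cm^2 sensitivity:
dT_min = 0.1 / (rho * c_p * t)
```

    On these numbers the quoted 0.1 J/cm² sensitivity corresponds to resolving a temperature rise of roughly 4 K on the infrared camera.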

  11. Infrared imaging diagnostics for intense pulsed electron beam.

    Science.gov (United States)

    Yu, Xiao; Shen, Jie; Qu, Miao; Liu, Wenbin; Zhong, Haowen; Zhang, Jie; Yan, Sha; Zhang, Gaolong; Le, Xiaoyun

    2015-08-01

    An infrared imaging method for two-dimensional calorimetric diagnostics has been developed for intense pulsed electron beams (IPEB). By using a 100-μm-thick tungsten film as the infrared heat sink for the IPEB, the emitting uniformity of the electron source can be analyzed to evaluate the efficiency and stability of the diode system. A two-dimensional axisymmetric finite element heat transfer simulation, combined with Monte Carlo calculation, was performed for error estimation and optimization of the method. The method was tested with an IPEB generated by an explosive emission electron diode with a pulse duration (FWHM) of 80 ns, electron energy up to 450 keV, and a total beam current of over 1 kA. The results showed that it is possible to measure the cross-sectional energy density distribution of the IPEB with an energy sensitivity of 0.1 J/cm² and a spatial resolution of 1 mm. Technical details, such as protection against bremsstrahlung γ photons and the functional extensibility of the method, are also discussed.

  12. Hemicrania Continua: Functional Imaging and Clinical Features With Diagnostic Implications.

    Science.gov (United States)

    Sahler, Kristen

    2013-04-10

    This review focuses on summarizing 2 pivotal articles in the clinical and pathophysiologic understanding of hemicrania continua (HC). The first article, a functional imaging project, identifies both the dorsal rostral pons (a region associated with the generation of migraines) and the posterior hypothalamus (a region associated with the generation of cluster and short-lasting unilateral neuralgiform headache with conjunctival injection and tearing [SUNCT]) as active during HC. The second article is a summary of the clinical features seen in a prospective cohort of HC patients that carry significant diagnostic implications. In particular, they identify a wider range of autonomic signs than what is currently included in the International Headache Society criteria (including an absence of autonomic signs in a small percentage of patients), a high frequency of migrainous features, and the presence of aggravation and/or restlessness during attacks. Wide variations in exacerbation length, frequency, pain description, and pain location (including side-switching pain) are also noted. Thus, a case is made for widening and modifying the clinical diagnostic criteria used to identify patients with HC.

  13. Targeting SR-BI for cancer diagnostics, imaging and therapy

    Directory of Open Access Journals (Sweden)

    Maneesha Amrita Rajora

    2016-09-01

    Full Text Available Scavenger receptor class B type I (SR-BI plays an important role in trafficking cholesteryl esters between the core of high density lipoprotein and the liver. Interestingly, this integral membrane protein receptor is also implicated in the metabolism of cholesterol by cancer cells, whereby overexpression of SR-BI has been observed in a number of tumours and cancer cell lines, including breast and prostate cancers. Consequently, SR-BI has recently gained attention as a cancer biomarker and exciting target for the direct cytosolic delivery of therapeutic agents. This brief review highlights these key developments in SR-BI-targeted cancer therapies and imaging probes. Special attention is given to the exploration of high density lipoprotein nanomimetic platforms that take advantage of upregulated SR-BI expression to facilitate targeted drug-delivery and cancer diagnostics, and promising future directions in the development of these agents.

  14. First Steps Toward Incorporating Image Based Diagnostics Into Particle Accelerator Control Systems Using Convolutional Neural Networks

    OpenAIRE

    Edelen, A. L.; Biedron, S. G.; Milton, S. V.; Edelen, J. P.

    2016-01-01

    At present, a variety of image-based diagnostics are used in particle accelerator systems. Often times, these are viewed by a human operator who then makes appropriate adjustments to the machine. Given recent advances in using convolutional neural networks (CNNs) for image processing, it should be possible to use image diagnostics directly in control routines (NN-based or otherwise). This is especially appealing for non-intercepting diagnostics that could run continuously during beam operatio...

  15. Cost Of Managing Digital Diagnostic Images For A 614 Bed Hospital

    Science.gov (United States)

    Dwyer, Samuel J.; Templeton, Arch W.; Martin, Norman L.; Cook, Larry T.; Lee, Kyo R.; Levine, Errol; Batnitzky, Solomon; Preston, David F.; Rosenthal, Stanton J.; Price, Hilton I.; Anderson, William H.; Tarlton, Mark A.; Faszold, Susan

    1982-01-01

    The cost of recording and archiving digital diagnostic imaging data is presented for a Radiology Department serving a 614 bed University Hospital with a large outpatient population. Digital diagnostic imaging modalities include computed tomography, nuclear medicine, ultrasound, and digital radiography. The archiving media include multiformat video film recordings, magnetic tapes, and disc storage. The estimated cost per patient for the archiving of digital diagnostic imaging data is presented.

  16. Road network extraction in classified SAR images using genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    肖志强; 鲍光淑; 蒋晓确

    2004-01-01

    Due to the complicated background of objects and speckle noise, it is almost impossible to extract roads directly from original synthetic aperture radar (SAR) images. A method is proposed for extracting the road network from high-resolution SAR images. First, fuzzy C-means is used to classify the filtered SAR image without supervision, and the road pixels are isolated from the image to simplify the extraction of the road network. Second, according to the features of roads and the membership of pixels to roads, a road model is constructed, which reduces road network extraction to a global search for optimal continuous curves that pass through given seed points. Finally, regarding the curves as individuals and encoding each chromosome as an integer code of offsets relative to the coordinates, genetic operations are used to search for globally optimal roads. The experimental results show that the algorithm can effectively extract the road network from high-resolution SAR images.
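
    A toy stand-in for the final genetic-algorithm stage (integer-coded chromosomes, truncation selection, one-point crossover, and mutation). The actual method derives fitness from fuzzy road memberships and constrains the curves through seed points; here the membership map and the operators are simplified assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "membership to road" map: a bright vertical road at column 12.
H, W = 16, 24
member = np.full((H, W), 0.1)
member[:, 12] = 1.0

def fitness(chrom):
    # A chromosome codes one column coordinate per image row (an integer-coded
    # continuous curve); fitness is the summed road membership along the curve.
    return member[np.arange(H), chrom].sum()

def evolve(pop, gens=60):
    for _ in range(gens):
        f = np.array([fitness(c) for c in pop])
        parents = pop[np.argsort(f)[-len(pop) // 2:]]          # truncation selection
        kids = []
        while len(kids) < len(pop) - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, H)
            kid = np.concatenate([a[:cut], b[cut:]])           # one-point crossover
            m = rng.random(H) < 0.1                            # per-gene mutation mask
            kid[m] = np.clip(kid[m] + rng.integers(-2, 3, m.sum()), 0, W - 1)
            kids.append(kid)
        pop = np.vstack([parents, kids])
    return pop[np.argmax([fitness(c) for c in pop])]

pop = rng.integers(0, W, size=(30, H))   # 30 random curves
best = evolve(pop)
```

    Selection keeps curves whose rows already sit on the road, while crossover and mutation recombine and locally perturb them, so the best curve drifts onto the high-membership column.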

  17. A Robust Image Hashing Algorithm Resistant Against Geometrical Attacks

    Directory of Open Access Journals (Sweden)

    Y.L. Liu

    2013-12-01

    Full Text Available This paper proposes a robust image hashing method which is robust against common image processing attacks and geometric distortion attacks. In order to resist against geometric attacks, the log-polar mapping (LPM and contourlet transform are employed to obtain the low frequency sub-band image. Then the sub-band image is divided into some non-overlapping blocks, and low and middle frequency coefficients are selected from each block after discrete cosine transform. The singular value decomposition (SVD is applied in each block to obtain the first digit of the maximum singular value. Finally, the features are scrambled and quantized as the safe hash bits. Experimental results show that the algorithm is not only resistant against common image processing attacks and geometric distortion attacks, but also discriminative to content changes.
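
    A reduced sketch of the per-block core of such a hash (2-D DCT of each block, SVD of a low-frequency sub-block, one bit from the leading digit of the largest singular value). The log-polar mapping, contourlet transform, scrambling, and key-based quantization are omitted, and thresholding the leading digit at 5 is my simplification rather than the paper's rule:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix.
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

def block_hash(img, bs=8):
    # Per block: 2-D DCT, keep the low-frequency corner, take the leading
    # digit of its largest singular value, and emit one bit.
    C = dct_matrix(bs)
    bits = []
    for r in range(0, img.shape[0] - bs + 1, bs):
        for c in range(0, img.shape[1] - bs + 1, bs):
            blk = img[r:r + bs, c:c + bs]
            coef = C @ blk @ C.T
            smax = np.linalg.svd(coef[:4, :4], compute_uv=False)[0]
            digit = int(f"{smax:.6e}"[0])          # first significant digit
            bits.append(digit >= 5)
    return np.array(bits, dtype=np.uint8)

rng = np.random.default_rng(3)
img = rng.random((32, 32))
h1 = block_hash(img)
h2 = block_hash(img + 0.001 * rng.standard_normal(img.shape))  # mild distortion
```

    Because the leading digit of the dominant singular value is a coarse, energy-like statistic, small content-preserving perturbations flip few hash bits, which is the robustness property the scheme relies on.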

  18. On-shot laser beam diagnostics for high-power laser facility with phase modulation imaging

    Science.gov (United States)

    Pan, X.; Veetil, S. P.; Liu, C.; Tao, H.; Jiang, Y.; Lin, Q.; Li, X.; Zhu, J.

    2016-05-01

    A coherent-modulation-imaging-based (CMI) algorithm has been employed for on-shot laser beam diagnostics in high-power laser facilities, where high-intensity short-pulsed lasers from the terawatt to the petawatt class are designed to realize inertial confinement fusion (ICF). A single-shot intensity measurement is sufficient to reconstruct the wave-front, for both the near field and the far field at the same time. The iterative reconstruction process is computationally efficient and completes within tens of seconds when accelerated on a GPU. The compact measurement unit, comprising a CCD and a piece of pre-characterized phase plate, makes focal-spot intensity prediction in the target chamber convenient; it can be placed almost anywhere in a high-power laser facility to achieve near-field wave-front diagnostics. The feasibility of the method has been demonstrated through a series of experiments with diagnostic beams and with seed pulses with deactivated amplifiers in our high-power laser system.

  19. Performance evaluation of image processing algorithms on the GPU.

    Science.gov (United States)

    Castaño-Díez, Daniel; Moser, Dominik; Schoenegger, Andreas; Pruggnaller, Sabine; Frangakis, Achilleas S

    2008-10-01

    The graphics processing unit (GPU), which originally was used exclusively for visualization purposes, has evolved into an extremely powerful co-processor. In the meanwhile, through the development of elaborate interfaces, the GPU can be used to process data and deal with computationally intensive applications. The speed-up factors attained compared to the central processing unit (CPU) are dependent on the particular application, as the GPU architecture gives the best performance for algorithms that exhibit high data parallelism and high arithmetic intensity. Here, we evaluate the performance of the GPU on a number of common algorithms used for three-dimensional image processing. The algorithms were developed on a new software platform called "CUDA", which allows a direct translation from C code to the GPU. The implemented algorithms include spatial transformations, real-space and Fourier operations, as well as pattern recognition procedures, reconstruction algorithms and classification procedures. In our implementation, the direct porting of C code in the GPU achieves typical acceleration values in the order of 10-20 times compared to a state-of-the-art conventional processor, but they vary depending on the type of the algorithm. The gained speed-up comes with no additional costs, since the software runs on the GPU of the graphics card of common workstations.

  20. Majorization-minimization algorithms for wavelet-based image restoration.

    Science.gov (United States)

    Figueiredo, Mário A T; Bioucas-Dias, José M; Nowak, Robert D

    2007-12-01

    Standard formulations of image/signal deconvolution under wavelet-based priors/regularizers lead to very high-dimensional optimization problems involving the following difficulties: the non-Gaussian (heavy-tailed) wavelet priors lead to objective functions which are nonquadratic, usually nondifferentiable, and sometimes even nonconvex; the presence of the convolution operator destroys the separability which underlies the simplicity of wavelet-based denoising. This paper presents a unified view of several recently proposed algorithms for handling this class of optimization problems, placing them in a common majorization-minimization (MM) framework. One of the classes of algorithms considered (when using quadratic bounds on nondifferentiable log-priors) shares the infamous "singularity issue" (SI) of "iteratively reweighted least squares" (IRLS) algorithms: the possibility of having to handle infinite weights, which may cause both numerical and convergence issues. In this paper, we prove several new results which strongly support the claim that the SI does not compromise the usefulness of this class of algorithms. Exploiting the unified MM perspective, we introduce a new algorithm, resulting from using l1 bounds for nonconvex regularizers; the experiments confirm the superior performance of this method, when compared to the one based on quadratic majorization. Finally, an experimental comparison of the several algorithms, reveals their relative merits for different standard types of scenarios.
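
    The MM viewpoint can be illustrated with the simplest member of this family: majorizing the quadratic data term by a separable surrogate with curvature L ≥ ||H||² turns each iteration into a gradient step followed by soft thresholding (ISTA). The operator, problem sizes, and λ below are toy assumptions, and sparsity is taken in the identity basis rather than a wavelet frame:

```python
import numpy as np

def soft(v, t):
    # Soft threshold: the exact minimiser of the separable l1 surrogate.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(y, H, lam, iters=200):
    # Majorization-minimization for 0.5*||y - Hx||^2 + lam*||x||_1: the data
    # term is bounded above by a separable quadratic with curvature
    # L >= ||H||^2, and minimising the bound gives a soft-thresholding step.
    L = np.linalg.norm(H, 2) ** 2
    x = np.zeros(H.shape[1])
    for _ in range(iters):
        x = soft(x + H.T @ (y - H @ x) / L, lam / L)
    return x

rng = np.random.default_rng(4)
H = rng.standard_normal((40, 100)) / np.sqrt(40)   # toy sensing/blur operator
x_true = np.zeros(100); x_true[[7, 30, 77]] = [2.0, -3.0, 1.5]
y = H @ x_true + 0.01 * rng.standard_normal(40)
x_hat = ista(y, H, lam=0.05)
```

    Each iteration strictly decreases the surrogate, hence the objective, which is the monotonicity guarantee the MM framework provides; the quadratic-majorization variants discussed in the paper differ only in how the nondifferentiable prior is bounded.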

  1. Comparison of different phantoms used in digital diagnostic imaging

    Energy Technology Data Exchange (ETDEWEB)

    Bor, Dogan, E-mail: bor@eng.ankara.edu.tr [Ankara University, Faculty of Engineering, Department of Engineering Physics. Tandogan, 06100 Ankara (Turkey); Unal, Elif, E-mail: elf.unall@gmail.com [Radat Dosimetry Laboratory Services, 06830, Golbasi, Ankara (Turkey); Uslu, Anil, E-mail: m.aniluslu@gmail.com [Radat Dosimetry Laboratory Services, 06830, Golbasi, Ankara (Turkey)

    2015-09-21

    The organs of extremity, chest, skull and lumbar were physically simulated using uniform PMMA slabs with different thicknesses alone and using these slabs together with aluminum plates and air gaps (ANSI Phantoms). The variation of entrance surface air kerma and scatter fraction with X-ray beam qualities was investigated for these phantoms and the results were compared with those measured from anthropomorphic phantoms. A flat panel digital radiographic system was used for all the experiments. Considerable variations of entrance surface air kermas were found for the same organs of different designs, and highest doses were measured for the PMMA slabs. A low contrast test tool and a contrast detail test object (CDRAD) were used together with each organ simulation of PMMA slabs and ANSI phantoms in order to test the clinical image qualities. Digital images of these phantom combinations and anthropomorphic phantoms were acquired in raw and clinically processed formats. Variation of image quality with kVp and post processing was evaluated using the numerical metrics of these test tools and measured contrast values from the anthropomorphic phantoms. Our results indicated that design of some phantoms may not be efficient enough to reveal the expected performance of the post processing algorithms.

  2. Comparison of different phantoms used in digital diagnostic imaging

    Science.gov (United States)

    Bor, Dogan; Unal, Elif; Uslu, Anil

    2015-09-01

    The organs of extremity, chest, skull and lumbar were physically simulated using uniform PMMA slabs with different thicknesses alone and using these slabs together with aluminum plates and air gaps (ANSI Phantoms). The variation of entrance surface air kerma and scatter fraction with X-ray beam qualities was investigated for these phantoms and the results were compared with those measured from anthropomorphic phantoms. A flat panel digital radiographic system was used for all the experiments. Considerable variations of entrance surface air kermas were found for the same organs of different designs, and highest doses were measured for the PMMA slabs. A low contrast test tool and a contrast detail test object (CDRAD) were used together with each organ simulation of PMMA slabs and ANSI phantoms in order to test the clinical image qualities. Digital images of these phantom combinations and anthropomorphic phantoms were acquired in raw and clinically processed formats. Variation of image quality with kVp and post processing was evaluated using the numerical metrics of these test tools and measured contrast values from the anthropomorphic phantoms. Our results indicated that design of some phantoms may not be efficient enough to reveal the expected performance of the post processing algorithms.

  3. Algorithm for X-ray scatter, beam-hardening, and beam profile correction in diagnostic (kilovoltage) and treatment (megavoltage) cone beam CT.

    Science.gov (United States)

    Maltz, Jonathan S; Gangadharan, Bijumon; Bose, Supratik; Hristov, Dimitre H; Faddegon, Bruce A; Paidi, Ajay; Bani-Hashemi, Ali R

    2008-12-01

    Quantitative reconstruction of cone beam X-ray computed tomography (CT) datasets requires accurate modeling of scatter, beam-hardening, beam profile, and detector response. Typically, commercial imaging systems use fast empirical corrections that are designed to reduce visible artifacts due to incomplete modeling of the image formation process. In contrast, Monte Carlo (MC) methods are much more accurate but are relatively slow. Scatter kernel superposition (SKS) methods offer a balance between accuracy and computational practicality. We show how a single SKS algorithm can be employed to correct both kilovoltage (kV) energy (diagnostic) and megavoltage (MV) energy (treatment) X-ray images. Using MC models of kV and MV imaging systems, we map intensities recorded on an amorphous silicon flat panel detector to water-equivalent thicknesses (WETs). Scattergrams are derived from acquired projection images using scatter kernels indexed by the local WET values and are then iteratively refined using a scatter magnitude bounding scheme that allows the algorithm to accommodate the very high scatter-to-primary ratios encountered in kV imaging. The algorithm recovers radiological thicknesses to within 9% of the true value at both kV and MV energies. Nonuniformity in CT reconstructions of homogeneous phantoms is reduced by an average of 76% over a wide range of beam energies and phantom geometries.
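
    The kernel-superposition loop can be caricatured in a few lines: scatter is modeled as the primary image, weighted by a scatter amplitude, convolved with a spatially invariant kernel, and the estimate is refined iteratively with its magnitude bounded relative to the measurement. The kernel shape, constant amplitude, and bound below are invented for the toy; the actual algorithm indexes its kernels by the local water-equivalent thickness:

```python
import numpy as np

def sks_correct(meas, kernel, amp, iters=10, max_spr=4.0):
    # Iterative scatter-kernel-superposition correction: estimate scatter from
    # the current primary estimate, clip it so the scatter-to-primary ratio
    # stays bounded (and the corrected primary stays positive), and subtract.
    K = np.fft.fft2(np.fft.ifftshift(kernel))
    primary = meas.copy()
    for _ in range(iters):
        scatter = np.real(np.fft.ifft2(np.fft.fft2(amp * primary) * K))
        scatter = np.clip(scatter, 0.0, meas * max_spr / (1.0 + max_spr))
        primary = meas - scatter
    return primary

n = 32
yy, xx = np.mgrid[:n, :n]
kernel = np.exp(-((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / 18.0)
kernel /= kernel.sum()                                           # unit-sum kernel
true_primary = np.ones((n, n)); true_primary[8:24, 8:24] = 0.3   # attenuating "object"
amp = 0.4                                                        # scatter weight (toy)
scatter_true = np.real(np.fft.ifft2(np.fft.fft2(amp * true_primary)
                                    * np.fft.fft2(np.fft.ifftshift(kernel))))
meas = true_primary + scatter_true                               # simulated projection
est = sks_correct(meas, kernel, amp)
```

    Because the scatter operator here is a contraction (amplitude below one, unit-sum kernel), the fixed-point iteration converges to the true primary; the bounding step is what keeps the real algorithm stable at the much higher scatter-to-primary ratios of kV imaging.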

  4. Real-time Imaging Orientation Determination System to Verify Imaging Polarization Navigation Algorithm

    Directory of Open Access Journals (Sweden)

    Hao Lu

    2016-01-01

    Full Text Available Bio-inspired imaging polarization navigation, which can provide navigation information by sensing polarization, has advantages of high precision and interference resistance over polarization navigation sensors that use photodiodes. Although various types of imaging polarimeters exist, they may not be suitable for research on imaging polarization navigation algorithms. To verify such an algorithm, a real-time imaging orientation determination system was designed and implemented. Essential calibration procedures for this type of system, covering camera parameter calibration and compensation for the inconsistency of the complementary metal oxide semiconductor sensors, were discussed, designed, and implemented. The calibration results were used to undistort and rectify the multi-camera system. An orientation determination experiment was conducted. The results indicated that the system could acquire and compute polarized skylight images through the calibrations and resolve orientation with the algorithm in real time. An orientation determination algorithm based on image processing was tested on the system, and its performance and properties were evaluated. The rate of the algorithm was over 1 Hz, the error was over 0.313°, and the population standard deviation was 0.148° without any data filtering.

  5. Exploring New Ways to Deliver Value to Healthcare Organizations: Algorithmic Testing, Data Integration, and Diagnostic E-consult Service.

    Science.gov (United States)

    Risin, Semyon A; Chang, Brian N; Welsh, Kerry J; Kidd, Laura R; Moreno, Vanessa; Chen, Lei; Tholpady, Ashok; Wahed, Amer; Nguyen, Nghia; Kott, Marylee; Hunter, Robert L

    2015-01-01

    As the US healthcare system undergoes transformation and transitions to value-based models, it is critical for laboratory medicine/clinical pathology physicians to explore opportunities and find new ways to deliver value and become an integral part of the healthcare team. This is also essential for ensuring the financial health and stability of the profession when the payment paradigm changes from fee-for-service to fee-for-performance. About 5 years ago we started searching for ways to achieve this goal. Among other approaches, the search included addressing the laboratory work-ups for specialists' referrals in the Harris Health System, a major safety-net healthcare organization serving a mostly indigent and underserved population in Harris County, TX. We present here our experience in improving the efficiency of laboratory testing for the referral process and in building a prototype of a diagnostic e-consult service, using rheumatologic diseases as a starting point. The service incorporates algorithmic testing; integration of clinical, laboratory, and imaging data; issuing of structured comprehensive consultation reports incorporating all the relevant information; and maintenance of personal contacts and an e-line of communication with the primary providers and referral center personnel. An ongoing survey of providers offers testimony to the value of the service in facilitating their work and increasing productivity. Analysis of cost effectiveness and of other value indicators is currently underway. We also discuss our pioneering experience in building pathology resident and fellow training in an integrated diagnostic consulting service.

  6. Diagnostic imaging of compression neuropathy; Bildgebende Diagnostik von Nervenkompressionssyndromen

    Energy Technology Data Exchange (ETDEWEB)

    Weishaupt, D.; Andreisek, G. [Universitaetsspital, Institut fuer Diagnostische Radiologie, Zuerich (Switzerland)

    2007-03-15

    Compression-induced neuropathy of peripheral nerves can cause severe pain of the foot and ankle. Early diagnosis is important to institute prompt treatment and to minimize potential injury. Although clinical examination combined with electrophysiological studies remain the cornerstone of the diagnostic work-up, in certain cases, imaging may provide key information with regard to the exact anatomic location of the lesion or aid in narrowing the differential diagnosis. In other patients with peripheral neuropathies of the foot and ankle, imaging may establish the etiology of the condition and provide information crucial for management and/or surgical planning. MR imaging and ultrasound provide direct visualization of the nerve and surrounding abnormalities. Bony abnormalities contributing to nerve compression are best assessed by radiographs and CT. Knowledge of the anatomy, the etiology, typical clinical findings, and imaging features of peripheral neuropathies affecting the peripheral nerves of the foot and ankle will allow for a more confident diagnosis. (orig.) [German] Compression-induced damage to peripheral nerves can be the cause of persistent pain in the region of the ankle and foot. An early diagnosis is decisive in order to direct the patient to the correct therapy and to avoid or reduce potential damage. Although clinical examination and electrophysiological work-up are the most important elements in the diagnosis of peripheral nerve compression syndromes, imaging can be decisive when it comes to determining the level of the nerve lesion or narrowing the differential diagnosis. In certain cases, imaging can even identify the cause of the nerve compression. In other cases, imaging is important for treatment planning, in particular when the lesion is to be treated surgically. Magnetic resonance imaging (MRI) and sonography enable a

  7. Diagnostic imaging in child abuse; Bildgebende Diagnostik der Kindesmisshandlung

    Energy Technology Data Exchange (ETDEWEB)

    Stoever, B. [Charite, Campus Virchow-Klinikum, Universitaetsmedizin Berlin, Abteilung Paediatrische Radiologie, CC6, Diagnostische und interventionelle Radiologie und Nuklearmedizin, Berlin (Germany)

    2007-11-15

    Diagnostic imaging in child abuse plays an important role and includes the depiction of skeletal injuries, soft tissue lesions, visceral injuries in 'battered child syndrome' and brain injuries in 'shaken baby syndrome'. The use of appropriate imaging modalities allows specific fractures to be detected, skeletal lesions to be dated and the underlying mechanism of the lesion to be described. The imaging results must be taken into account when assessing the clinical history, clinical findings and differential diagnoses. Computed tomography (CT) and magnetic resonance imaging (MRI) examinations must be performed in order to detect lesions of the central nervous system (CNS) immediately. CT is necessary in the initial diagnosis to delineate oedema and haemorrhages. Early detection of brain injuries in children with severe neurological symptoms can prevent serious late sequelae. MRI is performed in follow-up investigations and is used to describe residual lesions, including parenchymal findings. (orig.) [German] In der Diagnostik der Kindesmisshandlung ist die Bildgebung ein wesentlicher Faktor. Trotz scheinbar leerer Anamnese gelingt es, typische Verletzungsmuster als Misshandlungsfolge zu erkennen, sowohl im Bereich des Skeletts, der Weichteile, des Abdomens ('battered child syndrome', heute: 'non accidental injury', NAI) als auch im ZNS ('shaken baby syndrome'). Den klinischen Symptomen entsprechend, wird im Verdachtsfall ein adaequates diagnostisches Verfahren eingesetzt, das erwartete charakteristische Befunde nachweist, den Mechanismus der Verletzung aufzeigt und das Alter der Laesionen annaehernd festlegt. Radiologische Skelettbefunde werden hinsichtlich ihrer Spezifitaet fuer eine Misshandlung bewertet. Alle Resultate der Bildgebung sind zusammen mit Anamnese und klinischen Befunden zu deuten. Bei schwerer Misshandlung ohne aeussere Verletzungszeichen ist das rechtzeitige Erfassen einer ZNS

  8. Feature Selection for Image Retrieval based on Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Preeti Kushwaha

    2016-12-01

Full Text Available This paper describes the development and implementation of feature selection for content-based image retrieval. We present a CBIR system built on a new, efficient technique. The system extracts multiple features: colour, texture, and shape. Three techniques are used for feature extraction: colour moments, the grey-level co-occurrence matrix, and the edge histogram descriptor. To reduce the curse of dimensionality, feature selection based on a genetic algorithm finds the best subset of the full feature set. The selected features are then divided into similar image classes using clustering, performed with the k-means algorithm, for fast retrieval and improved execution time. The experimental results show that feature selection using GA reduces retrieval time and also increases retrieval precision, thus giving better and faster results than a standard image retrieval system. The results also show the precision and recall of the proposed approach compared with the previous approach for each image class. The CBIR system is more efficient and performs better using feature selection based on a genetic algorithm.
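The GA-based selection step described in this abstract can be sketched as a toy binary-mask search (a minimal illustration with assumed parameters, not the paper's implementation; the `fitness` function below is hypothetical, standing in for retrieval precision):

```python
import random

def ga_feature_selection(fitness, n_features, pop_size=20, generations=30, seed=0):
    """Toy genetic algorithm: evolve binary masks selecting a feature subset.

    `fitness` scores a mask (tuple of 0/1 per feature); higher is better.
    """
    rng = random.Random(seed)
    pop = [tuple(rng.randint(0, 1) for _ in range(n_features)) for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the fitter half (elitism preserves the best).
        survivors = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_features)      # one-point crossover
            child = list(a[:cut] + b[cut:])
            child[rng.randrange(n_features)] ^= 1   # point mutation
            children.append(tuple(child))
        pop = survivors + children
    return max(pop, key=fitness)

# Hypothetical fitness: reward the first three "informative" features,
# penalize mask size (a stand-in for the curse of dimensionality).
def fitness(mask):
    return 2 * sum(mask[:3]) - sum(mask)

best = ga_feature_selection(fitness, n_features=8)
print(best)
```

In the paper's setting, the fitness would instead score retrieval precision of the masked feature set on validation queries.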

  9. Chaotic Image Encryption Design Using Tompkins-Paige Algorithm

    Directory of Open Access Journals (Sweden)

    Shahram Etemadi Borujeni

    2009-01-01

Full Text Available In this paper, we present a new permutation-substitution image encryption architecture using chaotic maps and the Tompkins-Paige algorithm. The proposed encryption system includes two major parts: chaotic pixel permutation and chaotic pixel substitution. In the 2D permutation phase, a logistic map is used to generate a bit sequence, which in turn generates the pseudorandom numbers for the Tompkins-Paige algorithm. The pixel substitution phase includes two processes: the tent pseudorandom image (TPRI) generator and a modulo addition operation. All parts of the proposed chaotic encryption system are simulated. Uniformity of the histogram of the proposed encrypted image is justified using the chi-square test, whose statistic is less than χ²(255, 0.05). The vertical, horizontal, and diagonal correlation coefficients, as well as their average and RMS values, are calculated for the proposed encrypted image and are about 13% lower than in previous research. To quantify the difference between the encrypted image and the corresponding plain image, three measures are used: MAE, NPCR, and UACI, all of which are considerably improved in our proposed system. The NPCR of our proposed system is exactly the ideal value of this criterion. The key space of the proposed method is large enough to protect the system against brute-force and statistical attacks.
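The logistic-map bit generator mentioned in the abstract can be sketched as follows (a generic chaos-based keystream illustration, with assumed parameter r = 3.99 and thresholding at 0.5; the paper's exact construction may differ):

```python
def logistic_bits(x0, r=3.99, n=64, burn_in=100):
    """Pseudorandom bits from the logistic map x -> r*x*(1-x),
    thresholded at 0.5 (a generic construction, not the paper's exact one)."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1 - x)
    bits = []
    for _ in range(n):
        x = r * x * (1 - x)
        bits.append(1 if x >= 0.5 else 0)
    return bits

k1 = logistic_bits(0.3141592)
k2 = logistic_bits(0.3141593)         # a one-in-ten-million key change
diff = sum(a != b for a, b in zip(k1, k2))
print(diff)                           # a large fraction of the bits flip
```

Key sensitivity, the property such ciphers rely on, shows up as the two keystreams diverging despite a 1e-7 change in the key.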

  10. IMAGEP - A FORTRAN ALGORITHM FOR DIGITAL IMAGE PROCESSING

    Science.gov (United States)

    Roth, D. J.

    1994-01-01

IMAGEP is a FORTRAN computer algorithm containing various image processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines; within the subroutines are further routines, also selected via keyboard. Functions performed by IMAGEP include digitization, storage, and retrieval of images; image enhancement by contrast expansion, addition and subtraction, magnification, inversion, and bit shifting; display and movement of a cursor; display of the grey-level histogram of an image; and display of the variation of grey-level intensity as a function of image position. This algorithm has possible scientific, industrial, and biomedical applications in material flaw studies, steel and ore analysis, and pathology, respectively. IMAGEP is written in VAX FORTRAN for DEC VAX series computers running VMS. The program requires the use of a Grinnell 274 image processor, which can be obtained from Mark McCloud Associates, Campbell, CA. An object library of the required GMR series software is included on the distribution media. IMAGEP requires 1Mb of RAM for execution. The standard distribution medium for this program is a 1600 BPI 9-track magnetic tape in VAX FILES-11 format. It is also available on a TK50 tape cartridge in VAX FILES-11 format. This program was developed in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation.

  11. TRANSFORMATION ALGORITHM FOR IMAGES OBTAINED BY OMNIDIRECTIONAL CAMERAS

    Directory of Open Access Journals (Sweden)

    V. P. Lazarenko

    2015-01-01

Full Text Available Omnidirectional optoelectronic systems find their application in areas where a wide viewing angle is critical. However, omnidirectional optoelectronic systems have large distortion that makes their application more difficult. The paper compares the projection functions of traditional perspective lenses and omnidirectional wide-angle fish-eye lenses with a viewing angle not less than 180°. This comparison proves that distortion models of omnidirectional cameras cannot be described as a deviation from the classic pinhole camera model. To solve this problem, an algorithm for transforming omnidirectional images has been developed. The paper provides a brief comparison of the four calibration methods available in open-source toolkits for omnidirectional optoelectronic systems, and presents the geometrical projection model used for calibration of the omnidirectional optical system. The algorithm consists of three basic steps. At the first step, we calculate the field of view of a virtual pinhole PTZ camera; this field of view is characterized by an array of 3D points in the object space. At the second step, the array of pixels corresponding to these three-dimensional points is calculated via the projection function, which expresses the relation between a given 3D point in the object space and the corresponding pixel. In this paper we use a calibration procedure providing the projection function for the calibrated instance of the camera. At the last step, the final image is formed pixel-by-pixel from the original omnidirectional image using the calculated array of 3D points and the projection function. The developed algorithm makes it possible to obtain an image for a part of the field of view of an omnidirectional optoelectronic system, with corrected distortion, from the original omnidirectional image. The algorithm is designed for operation with omnidirectional optoelectronic systems with both catadioptric and fish-eye lenses.
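The three steps can be sketched with an assumed equidistant fisheye model, r = f·θ, standing in for the calibrated projection function (a simplified illustration, not the paper's calibration; all parameter values are made up):

```python
import math

def pinhole_rays(w, h, fov_deg):
    """Step 1: unit 3D rays for each pixel of a virtual pinhole camera."""
    f = (w / 2) / math.tan(math.radians(fov_deg) / 2)
    rays = []
    for v in range(h):
        for u in range(w):
            x, y, z = u - w / 2 + 0.5, v - h / 2 + 0.5, f
            n = math.sqrt(x * x + y * y + z * z)
            rays.append((x / n, y / n, z / n))
    return rays

def fisheye_pixel(ray, f_fish, cx, cy):
    """Step 2: project a 3D ray with an equidistant fisheye model r = f*theta.

    (The paper's calibrated projection function would replace this model.)
    """
    x, y, z = ray
    theta = math.acos(max(-1.0, min(1.0, z)))   # angle from the optical axis
    phi = math.atan2(y, x)
    r = f_fish * theta
    return cx + r * math.cos(phi), cy + r * math.sin(phi)

# Step 3 would sample the omnidirectional image at these source coordinates.
rays = pinhole_rays(4, 4, fov_deg=60)
src = [fisheye_pixel(ray, f_fish=200, cx=320, cy=240) for ray in rays]
print(src[0])
```

Step 3 (not shown) is a per-pixel lookup, typically with bilinear interpolation at the computed source coordinates.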

  12. Histological Image Feature Mining Reveals Emergent Diagnostic Properties for Renal Cancer.

    Science.gov (United States)

    Kothari, Sonal; Phan, John H; Young, Andrew N; Wang, May D

    2011-11-01

Computer-aided histological image classification systems are important for making objective and timely cancer diagnostic decisions. These systems use combinations of image features that quantify a variety of image properties. Because researchers tend to validate their diagnostic systems on specific cancer endpoints, it is difficult to predict which image features will perform well given a new cancer endpoint. In this paper, we define a comprehensive set of common image features (consisting of 12 distinct feature subsets) that quantify a variety of image properties. We use a data-mining approach to determine which feature subsets and image properties emerge as part of an "optimal" diagnostic model when applied to specific cancer endpoints. Our goal is to assess the performance of such comprehensive image feature sets for application to a wide variety of diagnostic problems. We perform this study on 12 endpoints including 6 renal tumor subtype endpoints and 6 renal cancer grade endpoints. Keywords: histology, image mining, computer-aided diagnosis.

  13. Classification decision tree algorithm assisting in diagnosing solitary pulmonary nodule by SPECT/CT fusion imaging

    Institute of Scientific and Technical Information of China (English)

    Qiang Yongqian; Guo Youmin; Jin Chenwang; Liu Min; Yang Aimin; Wang Qiuping; Niu Gang

    2008-01-01

Objective: To develop a classification tree algorithm to improve the diagnostic performance of 99mTc-MIBI SPECT/CT fusion imaging in differentiating solitary pulmonary nodules (SPNs). Methods: Forty-four SPNs, including 30 malignant cases and 14 benign ones, all eventually pathologically identified, were included in this prospective study. All patients received 99mTc-MIBI SPECT/CT scanning at an early stage and a delayed stage before operation. Thirty predictor variables, including 11 clinical variables, 4 variables of emission and 15 variables of transmission information from SPECT/CT scanning, were analyzed independently by the classification tree algorithm and by radiological residents. Diagnostic rules were demonstrated in tree topology, and diagnostic performances were compared using the area under the curve (AUC) of the receiver operating characteristic (ROC) curve. Results: A classification decision tree with a lowest relative cost of 0.340 was developed for 99mTc-MIBI SPECT/CT scanning, in which the target/normal-region ratio of 99mTc-MIBI uptake in the delayed and early stages, age, cough and the spiculation sign were the five most important contributors. The sensitivity and specificity were 93.33% and 78.57%, respectively, slightly higher than those of the expert. The sensitivity and specificity achieved by grade-one residents were 76.67% and 28.57%, respectively. The AUCs of CART and the expert were 0.886±0.055 and 0.829±0.062, respectively, and the corresponding AUC of the residents was 0.566±0.092. Comparison of AUCs suggests that the performance of CART was similar to that of the expert (P=0.204) but greater than that of the residents (P<0.001). Conclusion: Our data mining technique using a classification decision tree has a much higher accuracy than residents, suggesting that application of this algorithm will significantly improve the diagnostic performance of residents.
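The node-splitting step that such a classification tree repeats can be sketched as a threshold search minimizing weighted Gini impurity (a generic CART-style illustration; the study's cost function may differ, and the uptake-ratio data below are invented):

```python
def gini(labels):
    """Gini impurity of a label multiset."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(values, labels):
    """Exhaustive search for the threshold minimizing weighted Gini impurity,
    the basic step a CART-style classification tree repeats at every node."""
    best = (None, float("inf"))
    for thr in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= thr]
        right = [l for v, l in zip(values, labels) if v > thr]
        if not left or not right:
            continue
        cost = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if cost < best[1]:
            best = (thr, cost)
    return best

# Invented toy data: target/normal uptake ratio vs. malignancy (1 = malignant).
ratio = [1.1, 1.3, 1.4, 2.0, 2.4, 3.1]
malig = [0, 0, 0, 1, 1, 1]
print(best_split(ratio, malig))
```

A full tree grower would apply `best_split` recursively to each resulting node until a stopping rule (depth, purity, or cost-complexity pruning) is met.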

  14. Cell segmentation in histopathological images with deep learning algorithms by utilizing spatial relationships.

    Science.gov (United States)

    Hatipoglu, Nuh; Bilgin, Gokhan

    2017-02-28

    In many computerized methods for cell detection, segmentation, and classification in digital histopathology that have recently emerged, the task of cell segmentation remains a chief problem for image processing in designing computer-aided diagnosis (CAD) systems. In research and diagnostic studies on cancer, pathologists can use CAD systems as second readers to analyze high-resolution histopathological images. Since cell detection and segmentation are critical for cancer grade assessments, cellular and extracellular structures should primarily be extracted from histopathological images. In response, we sought to identify a useful cell segmentation approach with histopathological images that uses not only prominent deep learning algorithms (i.e., convolutional neural networks, stacked autoencoders, and deep belief networks), but also spatial relationships, information of which is critical for achieving better cell segmentation results. To that end, we collected cellular and extracellular samples from histopathological images by windowing in small patches with various sizes. In experiments, the segmentation accuracies of the methods used improved as the window sizes increased due to the addition of local spatial and contextual information. Once we compared the effects of training sample size and influence of window size, results revealed that the deep learning algorithms, especially convolutional neural networks and partly stacked autoencoders, performed better than conventional methods in cell segmentation.

  15. A Novel Image Fusion Algorithm for Visible and PMMW Images based on Clustering and NSCT

    Directory of Open Access Journals (Sweden)

    Xiong Jintao

    2016-01-01

Full Text Available Aiming at the fusion of visible and passive millimeter wave (PMMW) images, a novel algorithm based on clustering and the nonsubsampled contourlet transform (NSCT) is proposed. It takes advantage of the particular ability of PMMW imaging to reveal metal targets, using a clustering algorithm on the PMMW image to extract potential target regions. In the fusion process, the NSCT is applied to both input images, and the decomposition coefficients at different scales are combined using different rules. Finally, the fused image is obtained by taking the inverse NSCT of the fused coefficients. Several measures are used to evaluate the fusion results. Experiments demonstrate the superiority of the proposed algorithm for metal target detection compared to the wavelet transform and the Laplace transform.

  16. Classification of ETM+ Remote Sensing Image Based on Hybrid Algorithm of Genetic Algorithm and Back Propagation Neural Network

    Directory of Open Access Journals (Sweden)

    Haisheng Song

    2013-01-01

Full Text Available The back propagation neural network (BPNN) algorithm can be used for supervised classification in the processing of remote sensing imagery. But its defects are obvious: it easily falls into local minima, converges slowly, and makes it difficult to determine the number of hidden-layer nodes. The genetic algorithm (GA) has the advantages of global optimization and resistance to local minima, but it has the disadvantage of poor local search capability. This paper uses a GA to generate the initial structure of the BPNN; a stable, efficient, and fast BP classification network is then obtained by fine-tuning with the improved BP algorithm. Finally, we use the hybrid algorithm to classify remote sensing imagery and compare it with the improved BP algorithm and the traditional maximum likelihood classification (MLC) algorithm. Experimental results show that the hybrid algorithm outperforms both the improved BP algorithm and the MLC algorithm.

  17. PERFORMANCE ANALYSIS OF IMAGE COMPRESSION USING FUZZY LOGIC ALGORITHM

    Directory of Open Access Journals (Sweden)

    Rohit Kumar Gangwar

    2014-04-01

Full Text Available With increasing demand, multimedia content is growing rapidly, contributing to insufficient network bandwidth and memory storage. Image compression is therefore significant for reducing data redundancy, saving memory and transmission bandwidth. An efficient compression technique has been proposed which combines fuzzy logic with Huffman coding. While normalizing the image pixels, each pixel value belonging to the image foreground is characterized and interpreted. The image is subdivided into pixels, which are then characterized by a pair of approximation sets. Here, encoding uses a Huffman code, which is statistically independent and produces a more efficient code for compression, while decoding uses rough fuzzy logic to rebuild the image pixels. The method used here is the rough fuzzy logic with Huffman coding algorithm (RFHA). Different compression techniques are compared with Huffman coding, and fuzzy logic is applied to the Huffman-reconstructed image. Results show that high compression rates are achieved, with visually negligible difference between the compressed and original images.
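The Huffman half of such a pipeline can be sketched as follows (a standard Huffman table builder on toy pixel data; the rough-fuzzy decoding stage is not reproduced here):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman prefix-code table for the symbols in `data`."""
    freq = Counter(data)
    if len(freq) == 1:                      # degenerate one-symbol input
        return {next(iter(freq)): "0"}
    # Heap entries: (subtree frequency, tie-breaker, {symbol: code-so-far}).
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)     # merge the two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

pixels = [0, 0, 0, 0, 255, 255, 128, 0]     # toy grey-level data
codes = huffman_codes(pixels)
encoded = "".join(codes[p] for p in pixels)
print(codes, len(encoded))
```

Frequent symbols get shorter codes: here the dominant grey level 0 receives a 1-bit code, so the 8 pixels compress to 11 bits before any fuzzy post-processing.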

  18. Diagnostic performance of the ACR/EULAR 2010 criteria for rheumatoid arthritis and two diagnostic algorithms in an early arthritis clinic (REACH)

    NARCIS (Netherlands)

    C. Alves (Celina); J.J. Luime (Jolanda); D. van Zeben (Derkjen); M.A.M. Huisman (Margriet); A.E.A.M. Weel (Angelique); P.J. Barendregt (Pieternella); J.M.W. Hazes (Mieke)

    2011-01-01

    textabstractIntroduction: An ACR/EULAR task force released new criteria to classify rheumatoid arthritis at an early stage. This study evaluates the diagnostic performance of these criteria and algorithms by van der Helm and Visser in REACH. Methods: Patients with symptoms ≤12 months from REACH were

  19. Diagnostic performance of the ACR/EULAR 2010 criteria for rheumatoid arthritis and two diagnostic algorithms in an early arthritis clinic (REACH).

    Science.gov (United States)

    Alves, Celina; Luime, Jolanda Jacoba; van Zeben, Derkjen; Huisman, Anne-Margriet; Weel, Angelique Elisabeth Adriana Maria; Barendregt, Pieternella Johanna; Hazes, Johanna Maria Wilhelmina

    2011-09-01

An ACR/EULAR task force released new criteria to classify rheumatoid arthritis at an early stage. This study evaluates the diagnostic performance of these criteria and of the algorithms by van der Helm and Visser in REACH. Patients from REACH with symptoms ≤12 months were used. Algorithms were tested on discrimination, calibration and diagnostic accuracy of the proposed cut-points. Two patient sets were defined to test robustness: undifferentiated arthritis (UA) (n=231) and all patients including those without synovitis (n=513). The outcomes evaluated were methotrexate use and persistent disease at 12 months. In UA patients all algorithms had good areas under the curve: 0.79 (95% CI 0.73 to 0.83) for the ACR/EULAR criteria, 0.80 (95% CI 0.74 to 0.87) for van der Helm and 0.83 (95% CI 0.77 to 0.88) for Visser. All calibrated well. Sensitivity and specificity were 0.74 and 0.66 for the ACR/EULAR criteria, 0.1 and 1.0 for van der Helm, and 0.59 and 0.93 for Visser. Similar results were found in all patients, indicating robustness. The ACR/EULAR 2010 criteria showed good diagnostic properties in an early arthritis cohort reflecting daily practice, as did the van der Helm and Visser algorithms. All were robust. To promote uniformity and comparability, the ACR/EULAR 2010 criteria should be used in future diagnostic studies.
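The accuracy measures reported in such studies can be computed from a confusion table, and the discrimination AUC via the rank (Mann-Whitney) formulation; a minimal sketch on invented data:

```python
def sens_spec(y_true, y_pred):
    """Sensitivity and specificity from binary labels and predictions."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    """Area under the ROC curve via the rank (Mann-Whitney U) formulation."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented toy data: 3 cases (1) and 3 non-cases (0) with algorithm scores.
y = [1, 1, 1, 0, 0, 0]
score = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
yhat = [1 if s >= 0.5 else 0 for s in score]    # a proposed cut-point of 0.5
print(sens_spec(y, yhat), auc(y, score))
```

The AUC summarizes discrimination across all cut-points, while sensitivity and specificity describe the single proposed cut-point, which is why the article reports both.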

  20. [Transparency regime: semiotics of radiographical images in urological diagnostics].

    Science.gov (United States)

    Martin, M; Fangerau, H

    2012-10-01

    Shortly after Röntgen discovered x-rays urology became one of the main test fields for the application of this new technology. Initial scepticism among physicians, who were inclined to cling to traditional manual methods of diagnosing, was replaced by enthusiasm for radiographic technologies and the new method soon became the standard in, for example the diagnosis of concrements. Patients favoring radiographic procedures over the use of probes and a convincing documentation of stones in radiograms were factors that impacted the relatively rapid integration of radiology into urology. The radiographic representation of soft tissues and body cavities was more difficult and the development of contrast agents in particular posed a serious problem. Several patients died during this research. A new diagnostic dimension was revealed when radiography and cystography were combined to form the method of retrograde pyelography. However, the problem of how urologists could learn how to read the new images remained. In order to allow trainee physicians to practice interpreting radiograms atlases were produced which offered explanatory texts and drawings for radiographic images of the kidneys, the bladder etc. Thus, urologists developed a self-contained semiotics which facilitated the appropriation of a unique urological radiographical gaze.

  1. PACS and diagnostic imaging service delivery-A UK perspective

    Energy Technology Data Exchange (ETDEWEB)

    Sutton, Laurence N., E-mail: lasusu@laurencesutton.co.uk [Diagnostic Imaging Department, Main X-Ray, Calderdale Royal Hospital, Calderdale and Huddersfield NHS Foundation Trust, Salterhebble, Halifax, West Yorkshire, HX3 0PW (United Kingdom)

    2011-05-15

This review sets out the current position with regard to the implementation of PACS throughout the United Kingdom and the impact this has had on improving patient care. By December 2007 England had implemented full hospital-wide PACS in all hospitals: a major achievement in the relatively short period of three years. The different approaches used by each country of the UK to achieve full national PACS are described, in addition to the current issues with sharing images and reports across different healthcare organisations with regard to technical solutions, clinical safety and governance. The review gives insight into the changing methods of service delivery to address increasing demand pressures on diagnostic imaging services, and how the national PACS implementation, specifically in England, has made a significant contribution to measures to improve efficiencies. The role of teleradiology is discussed in the context of supporting local patient services rather than undermining them, and the concept of cross-healthcare reporting 'Grids' is described. Finally, the summary recognises that the vast wealth of knowledge accumulated during the national implementations has placed the UK in a strong position to facilitate full national data sharing across all healthcare organisations to improve patient care.

  2. Acoustic imaging for diagnostics of chemically reacting systems

    Science.gov (United States)

    Ramohalli, K.; Seshan, P.

    1983-01-01

The concept of local diagnostics of chemically reacting systems with acoustic imaging is developed. The elements of acoustic imaging through ellipsoidal mirrors are discussed theoretically. In the general plan of the experimental program, the first system chosen for these studies is a simple open-jet, non-premixed turbulent flame, with methane as the fuel and enriched air as the oxidizer. This simple chemically reacting flow system is established at a Reynolds number (based on cold viscosity) of 50,000. A 1.5 m diameter high-resolution acoustic mirror with an f-number of 0.75 is used to map the acoustic source zone along the axis of the flame. The results are presented as acoustic power spectra at various distances from the nozzle exit. It is seen that most of the reaction intensity is localized in a zone within 8 diameters of the exit. The bulk reactions (possibly around the periphery of the larger eddies) are evenly distributed along the length of the flame. Possibilities are seen for locally diagnosing single zones in a multiple cluster of reaction zones, as occur frequently in practice. A brief outline is given of future work, which will apply this technique to chemically reacting flows not limited to combustion.

  3. Computer Aided Diagnostic Support System for Skin Cancer: A Review of Techniques and Algorithms

    Directory of Open Access Journals (Sweden)

    Ammara Masood

    2013-01-01

    Full Text Available Image-based computer aided diagnosis systems have significant potential for screening and early detection of malignant melanoma. We review the state of the art in these systems and examine current practices, problems, and prospects of image acquisition, pre-processing, segmentation, feature extraction and selection, and classification of dermoscopic images. This paper reports statistics and results from the most important implementations reported to date. We compared the performance of several classifiers specifically developed for skin lesion diagnosis and discussed the corresponding findings. Whenever available, indication of various conditions that affect the technique’s performance is reported. We suggest a framework for comparative assessment of skin cancer diagnostic models and review the results based on these models. The deficiencies in some of the existing studies are highlighted and suggestions for future research are provided.

  4. Computer Aided Diagnostic Support System for Skin Cancer: A Review of Techniques and Algorithms

    Science.gov (United States)

    Masood, Ammara; Al-Jumaily, Adel Ali

    2013-01-01

    Image-based computer aided diagnosis systems have significant potential for screening and early detection of malignant melanoma. We review the state of the art in these systems and examine current practices, problems, and prospects of image acquisition, pre-processing, segmentation, feature extraction and selection, and classification of dermoscopic images. This paper reports statistics and results from the most important implementations reported to date. We compared the performance of several classifiers specifically developed for skin lesion diagnosis and discussed the corresponding findings. Whenever available, indication of various conditions that affect the technique's performance is reported. We suggest a framework for comparative assessment of skin cancer diagnostic models and review the results based on these models. The deficiencies in some of the existing studies are highlighted and suggestions for future research are provided. PMID:24575126

  5. An Improved Piecewise Linear Chaotic Map Based Image Encryption Algorithm

    Directory of Open Access Journals (Sweden)

    Yuping Hu

    2014-01-01

Full Text Available An image encryption algorithm based on an improved piecewise linear chaotic map (MPWLCM) model is proposed. The algorithm uses the MPWLCM to permute and diffuse the plain image simultaneously. Exploiting the chaotic system's sensitivity to initial key values and system parameters, and its ergodicity, two pseudorandom sequences are designed and used in the permutation and diffusion processes. Pixels are processed not in index order but from the beginning and end alternately, and cipher feedback is introduced in the diffusion process. Test results and security analysis show that the scheme not only achieves good encryption results but also has a key space large enough to resist brute-force attacks.
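A piecewise linear chaotic map keystream of the general kind described can be sketched as follows (the classic PWLCM, not the paper's modified MPWLCM; the key values and data are illustrative):

```python
def pwlcm(x, p):
    """Classic piecewise linear chaotic map on [0, 1) with control parameter p."""
    if x < p:
        return x / p
    if x < 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)          # the map is symmetric about 0.5

def keystream(x0, p, n, burn_in=64):
    """Derive a byte keystream from the trajectory; chaos-based ciphers build
    their permutation/diffusion sequences from trajectories like this."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = pwlcm(x, p)
    out = []
    for _ in range(n):
        x = pwlcm(x, p)
        out.append(int(x * 256) % 256)
    return out

ks = keystream(0.123456, p=0.25, n=8)
cipher = [b ^ k for b, k in zip([10, 20, 30, 40, 50, 60, 70, 80], ks)]
plain = [c ^ k for c, k in zip(cipher, ks)]
print(plain)
```

Here diffusion is reduced to a plain XOR for brevity; the paper adds cipher feedback and alternating pixel order on top of sequences of this kind.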

  6. Constrained branch-and-bound algorithm for image registration

    Institute of Scientific and Technical Information of China (English)

    JIN Jian-qiu; WANG Zhang-ye; PENG Qun-sheng

    2005-01-01

In this paper, the authors propose a refined branch-and-bound algorithm for affine-transformation-based image registration. Given two feature point sets, one from each image, the authors first extract a sequence of high-probability matched point-pairs by considering well-defined features. Each resultant point-pair can be regarded as a constraint in the search space of the branch-and-bound algorithm, guiding the search process. The authors carry out the branch-and-bound search constrained by a point-pair selected via Monte Carlo sampling according to the match measures of the point-pairs. If this does not lead to a correct result, an additional candidate is chosen to start another search. High-probability matched point-pairs usually result in fewer loops, so the search process is greatly accelerated. Experimental results verify the high efficiency and robustness of the authors' approach.

  7. Improved zerotree coding algorithm for wavelet image compression

    Science.gov (United States)

    Chen, Jun; Li, Yunsong; Wu, Chengke

    2000-12-01

A listless minimum zerotree coding algorithm based on the fast lifting wavelet transform, with lower memory requirements and higher compression performance, is presented in this paper. Most state-of-the-art image compression techniques based on wavelet coefficients, such as EZW and SPIHT, exploit the dependency between the subbands in a wavelet-transformed image. We propose a minimum zerotree of wavelet coefficients which exploits the dependency not only between the coarser and the finer subbands but also within the lowest-frequency subband. A new listless significance map coding algorithm based on the minimum zerotree is also proposed, using new flag maps and a new scanning order different from the LZC of Wen-Kuo Lin et al. A comparison reveals that the PSNR results of LMZC are higher than those of LZC, and the compression performance of LMZC outperforms that of SPIHT in terms of hardware implementation.

  8. A robust chaotic algorithm for digital image steganography

    Science.gov (United States)

    Ghebleh, M.; Kanso, A.

    2014-06-01

    This paper proposes a new robust chaotic algorithm for digital image steganography based on a 3-dimensional chaotic cat map and lifted discrete wavelet transforms. The irregular outputs of the cat map are used to embed a secret message in a digital cover image. Discrete wavelet transforms are used to provide robustness. Sweldens' lifting scheme is applied to ensure integer-to-integer transforms, thus improving the robustness of the algorithm. The suggested scheme is fast, efficient and flexible. Empirical results are presented to showcase the satisfactory performance of our proposed steganographic scheme in terms of its effectiveness (imperceptibility and security) and feasibility. Comparison with some existing transform domain steganographic schemes is also presented.
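The chaotic permutation underlying such a scheme can be illustrated with the classic 2D Arnold cat map (the paper uses a 3D generalization plus lifted wavelet embedding, which is not reproduced here):

```python
def cat_map(x, y, n):
    """One iteration of the 2D Arnold cat map on an n x n grid.

    The map matrix [[1, 1], [1, 2]] has determinant 1, so it is a bijection
    on the grid and the scrambling is exactly invertible.
    """
    return (x + y) % n, (x + 2 * y) % n

def permute(img):
    """Scramble the pixel positions of a square image (list of rows)."""
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            nx, ny = cat_map(x, y, n)
            out[ny][nx] = img[y][x]
    return out

img = [[1, 2], [3, 4]]                # toy 2x2 "image"
scrambled = permute(img)
print(scrambled)
```

In a steganographic setting, iterating such a map with secret parameters decides where message bits land; the receiver inverts the same number of iterations to recover them.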

  9. Research on Wavelet-Based Algorithm for Image Contrast Enhancement

    Institute of Scientific and Technical Information of China (English)

    Wu Ying-qian; Du Pei-jun; Shi Peng-fei

    2004-01-01

A novel wavelet-based algorithm for image enhancement is proposed in this paper. On the basis of multiscale analysis, the proposed algorithm efficiently solves the problem of noise over-enhancement, which commonly occurs in traditional methods for contrast enhancement. The decomposed coefficients at the same scale are processed by a nonlinear method, and the coefficients at different scales are enhanced to different degrees. Throughout the procedure, the method takes full advantage of the properties of the human visual system to achieve better performance. Simulations demonstrate that these characteristics enable the proposed approach to fully enhance image content, to efficiently alleviate noise over-enhancement, and to achieve a much better enhancement effect than traditional approaches.

  10. An Improved Piecewise Linear Chaotic Map Based Image Encryption Algorithm

    Science.gov (United States)

    Hu, Yuping; Wang, Zhijian

    2014-01-01

    An image encryption algorithm based on an improved piecewise linear chaotic map (MPWLCM) model is proposed. The algorithm uses the MPWLCM to permute and diffuse the plain image simultaneously. Exploiting the chaotic system's sensitivity to initial values and system parameters, and its ergodicity, two pseudorandom sequences are designed and used in the permutation and diffusion processes. Pixels are not processed in index order but alternately from the beginning and the end. Cipher feedback is introduced in the diffusion process. Test results and security analysis show that the scheme not only achieves good encryption results but also has a key space large enough to resist brute-force attacks. PMID:24592159
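
    The plain piecewise linear chaotic map (PWLCM) has a standard closed form. The sketch below iterates it and uses the iterates to drive an XOR diffusion step; the paper's MPWLCM variant, its permutation stage and its key schedule are more elaborate, so the key schedule, names and parameter values here are assumptions for demonstration only.

```python
# Illustrative sketch: standard PWLCM driving XOR diffusion.

def pwlcm(x, p):
    """One iteration of the PWLCM with control parameter 0 < p < 0.5
    and state x in (0, 1)."""
    if x < p:
        return x / p
    if x < 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)              # map is symmetric about x = 0.5

def keystream(x0, p, length):
    """Quantize successive map iterates into a byte keystream
    (illustrative key schedule, not the paper's construction)."""
    stream, x = [], x0
    for _ in range(length):
        x = pwlcm(x, p)
        stream.append(int(x * 256) % 256)
    return stream

def xor_diffuse(pixels, x0, p):
    """XOR each pixel with the chaotic keystream; the same call with
    the same key (x0, p) inverts the operation."""
    return [a ^ b for a, b in zip(pixels, keystream(x0, p, len(pixels)))]

plain = [12, 200, 37, 255, 0, 90]
cipher = xor_diffuse(plain, 0.37, 0.29)
```

    Sensitivity to the key (x0, p) comes from the map's chaotic dynamics: a slightly different key produces a completely different keystream.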

  11. Selection and collection of multi parameter physiological data for cardiac rhythm diagnostic algorithm development

    Energy Technology Data Exchange (ETDEWEB)

    Bostock, J.; Weller, P. [School of Informatics, City University London, London EC1V 0HB (United Kingdom); Cooklin, M., E-mail: jbostock1@msn.co [Cardiovascular Directorate, Guy' s and St. Thomas' NHS Foundation Trust, London, SE1 7EH (United Kingdom)

    2010-07-01

    Automated diagnostic algorithms are used in implantable cardioverter-defibrillators (ICDs) to detect abnormal heart rhythms. These algorithms can misdiagnose, and improved specificity is needed to prevent inappropriate therapy. Knowledge engineering (KE) and artificial intelligence (AI) could improve this. A pilot study of KE was performed with an artificial neural network (ANN) as the AI system. A case-note review analysed arrhythmic events stored in patients' ICD memory; 13.2% of patients received inappropriate therapy. The best ICD algorithm had sensitivity 1.00 and specificity 0.69 (p<0.001 versus the gold standard). A subset of the data was used to train and test an ANN. A feed-forward, back-propagation network with 7 inputs, a 4-node hidden layer and 1 output had sensitivity 1.00 and specificity 0.71 (p<0.001). A prospective study was then performed using KE to list arrhythmias, factors and indicators, for which measurable parameters were evaluated and the results reviewed by a domain expert. Waveforms from electrodes in the heart and thoracic bio-impedance, temperature and motion data were collected from 65 patients during cardiac electrophysiological studies; 5 incomplete datasets were due to technical failures. We concluded that KE successfully guided the selection of parameters, that the ANN produced a usable system, and that complex data collection carries a greater risk of technical failure, leading to data loss.

  12. An efficient BTC image compression algorithm with visual patterns

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper discusses block truncation coding (BTC), a simple and fast image compression technique suitable for real-time image transmission, with high resistance to channel errors and good reconstructed image quality. Its main drawback is a high bit rate of 2 bits/pixel for a 256-grey-level image. To reduce the bit rate, a simple look-up-table method is introduced for coding the higher mean and the lower mean of a block, and a set of 24 visual patterns is used to encode the 4×4 bit plane of high-detail blocks. The proposed algorithm needs only 19 bits to encode a 4×4 high-detail block and 12 bits to encode a 4×4 low-detail block.
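
    The 2 bits/pixel baseline comes from one bit per pixel of bit plane plus two 8-bit levels per 4×4 block. Below is a hedged sketch of that baseline codec in its absolute-moment (mean-preserving) form, which the paper's look-up tables and visual patterns then compress further; function names and data are illustrative.

```python
# Baseline absolute-moment BTC codec for one 4x4 block (16 pixels).

def btc_encode(block):
    """Split pixels about the block mean; keep the low mean, the high
    mean and the 16-entry bit plane."""
    mean = sum(block) / len(block)
    bits = [1 if v >= mean else 0 for v in block]
    high = [v for v, b in zip(block, bits) if b]
    low = [v for v, b in zip(block, bits) if not b]
    hi = round(sum(high) / len(high)) if high else 0
    lo = round(sum(low) / len(low)) if low else hi
    return lo, hi, bits

def btc_decode(lo, hi, bits):
    """Rebuild the block from the two levels and the bit plane."""
    return [hi if b else lo for b in bits]

block = [12, 14, 200, 210, 13, 15, 205, 198,
         11, 16, 202, 199, 12, 13, 201, 204]
lo, hi, bits = btc_encode(block)
recon = btc_decode(lo, hi, bits)
```

    Here the 16-bit plane plus two 8-bit means cost 32 bits per block (2 bits/pixel); replacing the bit plane with one of 24 visual patterns and the two means with table indices is how the proposed scheme gets down to 19 or 12 bits per block.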

  13. Crowdsourcing the creation of image segmentation algorithms for connectomics

    Directory of Open Access Journals (Sweden)

    Ignacio eArganda-Carreras

    2015-11-01

    Full Text Available To stimulate progress in automating the reconstruction of neural circuits, we organized the first international challenge on 2D segmentation of electron microscopic (EM) images of the brain. Participants submitted boundary maps predicted for a test set of images, and were scored based on their agreement with ground truth from human experts. The winning team had no prior experience with EM images, and employed a convolutional network. This "deep learning" approach has since become accepted as a standard for segmentation of EM images. The challenge has continued to accept submissions, and the best so far has resulted from cooperation between two teams. The challenge has probably saturated, as algorithms cannot progress beyond limits set by ambiguities inherent in 2D scoring. Retrospective evaluation of the challenge scoring system reveals that it was not sufficiently robust to variations in the widths of neurite borders. We propose a solution to this problem, which should be useful for a future 3D segmentation challenge.

  14. A Volume Rendering Algorithm for Sequential 2D Medical Images

    Institute of Scientific and Technical Information of China (English)

    吕忆松; 陈亚珠

    2002-01-01

    Volume rendering of 3D data sets composed of sequential 2D medical images has become an important branch of image processing and computer graphics. To help physicians fully understand deep-seated human organs and foci (e.g. a tumour) as 3D structures, in this paper we present a modified volume rendering algorithm to render volumetric data. Using this method, the projection images of structures of interest from different viewing directions can be obtained satisfactorily. By rotating the light source and the observer's eyepoint, the method avoids rotating the whole volumetric data set in main memory and thus reduces computational complexity and rendering time. Experiments on CT images suggest that the proposed method is useful and efficient for rendering 3D data sets.

  15. Singular point detection algorithm based on the transition line of the fingerprint orientation image

    CSIR Research Space (South Africa)

    Mathekga, ME

    2009-11-01

    Full Text Available A new algorithm for identifying and locating singular points on a fingerprint image is presented. This algorithm is based on properties of the fingerprint orientation image, including a feature defined as a transition line. The transition line...

  16. New Optimal DWT Domain Image Watermarking Technique via Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    ZHONG Ning; KUANG Jing-ming; HE Zun-wen

    2007-01-01

    A novel optimal image watermarking scheme is proposed in which the genetic algorithm (GA) is employed to improve algorithm performance. The Arnold transform is utilized to obtain the scrambled watermark, and the embedding and extraction of the watermark are then implemented in the discrete wavelet transform (DWT) domain. During the watermarking process, GA is employed to search for optimal parameters of the embedding strength and the number of Arnold transform iterations so as to optimize watermarking performance. Simulation results show that the proposed method can improve the quality of the watermarked image while giving almost the same robustness of the watermark.
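
    The Arnold-transform scrambling whose iteration count the GA tunes can be sketched as follows; the inverse map undoes the scrambling round by round. This is a generic illustration for square images (pure Python, names and the example data illustrative), not the authors' implementation.

```python
# Forward and inverse Arnold transforms of an n x n image
# (given as a list of rows).

def arnold(img):
    """One Arnold scrambling round: pixel (x, y) moves to
    ((x + y) mod n, (x + 2y) mod n)."""
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
    return out

def arnold_inverse(img):
    """Inverse round, using the inverse matrix [[2, -1], [-1, 1]]."""
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            out[(2 * x - y) % n][(y - x) % n] = img[x][y]
    return out

img = [[4 * r + c for c in range(4)] for r in range(4)]
scrambled = arnold(arnold(img))            # e.g. GA-chosen round count 2
restored = arnold_inverse(arnold_inverse(scrambled))
```

    The round count acts as part of the key: extraction must apply the inverse the same number of times.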

  17. An improved image encryption algorithm based on chaotic maps

    Institute of Scientific and Technical Information of China (English)

    Xu Shu-Jiang; Wang Ji-Zhi; Yang Su-Xiang

    2008-01-01

    Recently, two chaotic image encryption schemes have been proposed, in which shuffling the positions and changing the grey values of image pixels are combined. This paper provides a chosen-plaintext attack to recover the corresponding plaintext of a given ciphertext. Furthermore, it points out that the two schemes are not sufficiently sensitive to small changes of the plaintext. Based on this analysis, it proposes an improved algorithm that includes two rounds of substitution and one round of permutation to strengthen the overall performance.

  18. Superiorization of incremental optimization algorithms for statistical tomographic image reconstruction

    Science.gov (United States)

    Helou, E. S.; Zibetti, M. V. W.; Miqueles, E. X.

    2017-04-01

    We propose the superiorization of incremental algorithms for tomographic image reconstruction. The resulting methods follow a better path on their way to the optimal solution of the maximum-likelihood problem, in the sense that they are closer to the Pareto optimal curve than the non-superiorized techniques. A new scaled gradient iteration is proposed and three superiorization schemes are evaluated. Theoretical analysis of the methods as well as computational experiments with both synthetic and real data are provided.

  19. Algorithms and Array Design Criteria for Robust Imaging in Interferometry

    Science.gov (United States)

    2016-04-01

    The use of optical interferometry as a multi-aperture imaging approach is attracting increasing ... (based on the scene's compactness, sparsity, or smoothness). In particular, a myriad of so-called self-calibration algorithms have been developed (see, e.g., ...).

  20. A Versatile Chip Set For Image Processing Algorithms

    Science.gov (United States)

    Krishnan, M. S.

    1988-02-01

    This paper presents a versatile chip set that can realize signal/image processing algorithms used in several important image processing applications, including template-processing, spatial filtering and image scaling. This chip set architecture is superior in versatility, programmability and modularity to several schemes proposed in the literature. The first chip, called the Template Processor, can perform a variety of template functions on a pixel stream using a set of threshold matrices that can be modified or switched in real-time as a function of the image being processed. This chip can also be used to perform data scaling and image biasing. The second chip, called the Filter/Scaler chip, can perform two major functions. The first is a transversal filter function where the number of sample points is modularly extendable and the coefficients are programmable. The second major function performed by this chip is the interpolation function. Linear or cubic B-spline interpolation algorithms can be implemented by programming the coefficients appropriately. The essential features of these two basic building block processors and their significance in template-based computations, filtering, data-scaling and half-tone applications are discussed. Structured, testable implementations of these processors in VLSI technology and extensions to higher performance systems are presented.

  1. A Progressive Image Compression Method Based on EZW Algorithm

    Science.gov (United States)

    Du, Ke; Lu, Jianming; Yahagi, Takashi

    A simple method based on the EZW algorithm is presented for improving image compression performance. Recent success in wavelet image coding is mainly attributed to recognition of the importance of data organization and representation. Several very competitive wavelet coders have been developed, namely Shapiro's EZW (Embedded Zerotree Wavelets)(1), Said and Pearlman's SPIHT (Set Partitioning In Hierarchical Trees)(2), and Bing-Bing Chai's SLCCA (Significance-Linked Connected Component Analysis for Wavelet Image Coding)(3). The EZW algorithm is based on five key concepts: (1) a DWT (discrete wavelet transform) or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, (4) universal lossless data compression achieved via adaptive arithmetic coding, and (5) degeneration of DWT coefficients from high-scale subbands to low-scale subbands. In this paper, we have improved the self-similarity statistical characteristic in concept (5) and present a progressive image compression method.

  2. Fast vector quantization algorithm preserving color image quality

    Science.gov (United States)

    Charrier, Christophe; Cherifi, Hocine

    1998-04-01

    In the colour image compression field, it is well known that the information is statistically redundant. This redundancy is a handicap in terms of codebook construction time. One way to counterbalance this time-consuming effect is to reduce the redundancy within the original image while keeping the image quality: one can extract a random sample of the initial training set and construct on it a codebook whose quality equals that of the codebook generated from the entire training set. We applied this idea in the colour vector quantization (VQ) compression context and propose an algorithm to reduce the complexity of the standard LBG technique. We searched for a measure of relevance of each block from the entire training set. Under the assumption that the measure of relevance is an independent random variable, we applied the Kolmogorov statistical test to define the smallest size of a random sample, and then the sample itself. Finally, from the blocks associated with each measure of relevance in the random sample, we run the standard LBG algorithm to construct the codebook. Psychophysical and statistical measures of image quality allow us to find the best measure of relevance to reduce the training set while preserving image quality and decreasing the computational cost.
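
    For orientation, here is a minimal sketch of the core LBG (generalized Lloyd) iteration whose cost the proposal reduces by training on a Kolmogorov-test-sized random sample of blocks. Squared-error distortion is assumed; the tiny 2-D data and names are illustrative.

```python
# Minimal LBG: alternate nearest-codeword assignment and centroid update.

def lbg(training, codebook, iters=10):
    """Run `iters` Lloyd iterations on a list of training vectors."""
    for _ in range(iters):
        cells = [[] for _ in codebook]
        for v in training:
            i = min(range(len(codebook)),
                    key=lambda k: sum((a - b) ** 2
                                      for a, b in zip(v, codebook[k])))
            cells[i].append(v)                 # assign to nearest codeword
        codebook = [[sum(c) / len(cell) for c in zip(*cell)] if cell else cw
                    for cell, cw in zip(cells, codebook)]
    return codebook

# Two well-separated clusters of 2-D "blocks"
training = [[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]]
cb = lbg(training, [[1.0, 1.0], [9.0, 9.0]])
```

    The paper's point is that running this same loop on a well-chosen random subsample of `training` yields a codebook of essentially the same quality at a fraction of the cost.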

  3. A study of image reconstruction algorithms for hybrid intensity interferometers

    Science.gov (United States)

    Crabtree, Peter N.; Murray-Krezan, Jeremy; Picard, Richard H.

    2011-09-01

    Phase retrieval is explored for image reconstruction using outputs from both a simulated intensity interferometer (II) and a hybrid system that combines the II outputs with partially resolved imagery from a traditional imaging telescope. Partially resolved imagery provides an additional constraint for the iterative phase retrieval process, as well as an improved starting point. The benefits of this additional a priori information are explored and include lower residual phase error for SNR values above 0.01, increased sensitivity, and improved image quality. Results are also presented for image reconstruction from II measurements alone, via current state-of-the-art phase retrieval techniques. These results are based on the standard hybrid input-output (HIO) algorithm, as well as a recent enhancement to HIO that optimizes step lengths in addition to step directions. The additional step length optimization yields a reduction in residual phase error, but only for SNR values greater than about 10. Image quality for all algorithms studied is quite good for SNR>=10, but it should be noted that the studied phase-recovery techniques yield useful information even for SNRs that are much lower.

  4. Imaging volcanic infrasound sources using time reversal mirror algorithm

    Science.gov (United States)

    Kim, Keehoon; Lees, Jonathan M.

    2015-09-01

    We investigate the capability of the Time Reversal Mirror (TRM) algorithm to image local acoustic sources at volcanoes. Imaging volcanic acoustic sources is often challenging due to pronounced volcanic topography and emergent arrivals of infrasound signals. While the accuracy of conventional approaches (e.g. triangulation and the semblance method) can be severely compromised in complex volcanic settings, a TRM-based method may have the potential to properly image acoustic sources through the use of full waveform information and numerical modelling of the time-reversed wavefield. We apply the TRM algorithm to a pyroclastic-laden eruption (sustained for ~60 s) at Santiaguito Volcano, Guatemala, and show that an ordinary TRM operation can undergo significant reduction of its focusing power due to strong topographic propagation effects (e.g. reflection and diffraction). We propose a weighted imaging condition to compensate for the complicated transmission loss of the time-reversed wavefield and demonstrate that the presented condition significantly improves the focusing quality of TRM in the presence of complex topography. The resulting TRM source images exhibit remarkable agreement with visual observations of the eruption, implying that the TRM method with a proper imaging condition can be used to localize and track acoustic sources associated with complex volcanic eruptions.

  5. Finger-vein image separation algorithms and realization with MATLAB

    Science.gov (United States)

    Gao, Xiaoyan; Ma, Junshan; Wu, Jiajie

    2010-10-01

    According to the characteristics of the finger-vein image, we adopted a series of methods to enhance the contrast of the image in order to separate the finger-vein areas from the background, and to prepare for subsequent research such as feature extraction and recognition. The method consists of three steps: denoising, contrast enhancement and image binarization. In denoising, considering the relationship between grey levels in adjacent areas of the finger-vein image, we adopted the gradient inverse weighted smoothing method. In contrast enhancement, we improved the conventional high-frequency stress filtering method and adopted a method that combines the traditional high-frequency stress filtering algorithm with histogram equalization. With this method, the contrast between the finger-vein area and the background is enhanced significantly. In the binarization step, after taking into consideration the differences in grey levels between the different areas of the finger-vein image, we proposed a method that combines segment-by-segment binarization of the image with morphological image processing. Our experimental results show that, after the series of processing steps mentioned above implemented in MATLAB, the finger-vein areas can be clearly separated from the background, providing a vivid figure of the finger vein and a reference for subsequent research such as finger-vein image feature extraction, matching and identification.
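
    The gradient inverse weighted smoothing filter used in the denoising step has a standard form in the literature: each neighbour is weighted by the inverse of its absolute grey-level difference from the centre pixel, so averaging acts mostly within flat regions and vein edges are preserved. The sketch below follows that common formulation (here blended 50/50 with the centre pixel); the authors' MATLAB implementation may differ in details, and the Python version, names and test data are illustrative.

```python
# Gradient-inverse-weighted smoothing over the 8-neighbourhood.

def giw_smooth(img, eps=1e-3):
    """Edge-preserving smoothing: neighbour weights are inverse to the
    absolute difference from the centre pixel (eps avoids division by
    zero); border pixels are left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            weights, vals = [], []
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue
                    v = img[y + dy][x + dx]
                    weights.append(1.0 / (abs(v - c) + eps))
                    vals.append(v)
            avg = sum(wi * vi for wi, vi in zip(weights, vals)) / sum(weights)
            out[y][x] = 0.5 * c + 0.5 * avg
    return out

flat = [[5.0] * 4 for _ in range(4)]                 # uniform region
edge = [[0.0, 0.0, 10.0, 10.0] for _ in range(4)]    # sharp vertical edge
sm_flat = giw_smooth(flat)
sm_edge = giw_smooth(edge)
```

    On the uniform region the filter is (nearly) the identity, while across the sharp edge the dissimilar neighbours receive negligible weight, so the edge survives the smoothing.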

  6. Final Report - DOE Center for Laser Imaging and Cancer Diagnostics

    Energy Technology Data Exchange (ETDEWEB)

    Alfano, Robert R.; Koutcher, Jason A.

    2002-10-31

    This Final Report summarizes the significant progress made by the researchers, students and staff of the Center for Laser Imaging and Cancer Diagnostics (CLICD) from January 1998 through May 2002. During this period, the Center supported several projects. Most projects were proposed initially, some were added subsequently as their relevance and importance to the DOE mission became evident. DOE support has been leveraged to obtain continuing funding for some projects. Leveraged funds come from various sources, including NIH, Army, NSF and the Air Force. The goal of the Center was to develop laser-based instruments for use in the detection and diagnosis of major diseases, with an emphasis on detection and diagnosis of various cancers. Each of the supported projects is a collaborative effort between physicists and laser scientists and the City College of New York and noted physicians, surgeons, pathologists, and biologists located at medical centers in the Metropolitan area. The participating institutions were: City College of New York Institute for Ultrafast Lasers and Spectroscopy, Hackensack University Medical Center, Lawrence Livermore National Laboratory, Memorial Sloan Kettering Cancer Center, and New York Eye and Ear Institute. Each of the projects funded by the Center is grouped into one of four research categories: a) Disease Detection, b) Non-Disease Applications, c) New Diagnostic Tools, and, d) Education, Training, Outreach and Dissemination. The progress achieved by the multidisciplinary teams was reported in 51 publications and 32 presentations at major national conferences. Also, one U.S. patent was obtained and six U.S. patent applications have been filed for innovations resulting from the projects sponsored by the Center.

  7. Diagnostic imaging of viral encephalitis; Bildgebende Diagnostik der Virusenzephalitiden

    Energy Technology Data Exchange (ETDEWEB)

    Weber, W.; Henkes, H.; Kuehne, D. [Alfried-Krupp-Krankenhaus Essen (Germany). Klinik fuer Radiologie und Neuroradiologie; Felber, S. [Universitaetsklinikum Innsbruck (Austria). Klinische Abt. der Radiologie I; Jaenisch, W. [Freie Univ. Berlin (Germany). Inst. fuer Neuropathologie; Schaper, J. [Klinikum der Univ. Essen (Germany). Zentralinst. fuer Radiologische Diagnostik

    2000-11-01

    The diagnostic procedure in viral encephalitis is based on the synopsis of clinical signs and symptoms, serological data, CSF analysis and diagnostic imaging findings. This article summarizes the findings of the viral encephalitides most frequently encountered in Western Europe. MRI is more sensitive than CT for the detection of inflammatory brain lesions due to its higher contrast resolution. The pattern of parenchymal damage is highly specific in only some viral encephalitides (e.g., the frequently hemorrhagic lesions of structures of the limbic system in herpes simplex virus type 1 encephalitis; the symmetric and confluent lesions of the frontal white matter in progressive diffuse leukoencephalopathy in AIDS). In the majority of viral encephalitides MRI demonstrates the location and extension of parenchymal damage. The specific diagnosis in terms of the causative agent is based on serological studies. (orig.) [German, translated:] The diagnosis of viral encephalitides is based on the synoptic evaluation of clinical, serological, CSF-analytical and imaging findings. The present article presents the corresponding findings of the most frequent virally caused encephalitides in Western Europe. In general, for inflammatory lesions of the brain parenchyma, magnetic resonance imaging (MRI) is superior to computed tomography (CT) in detection sensitivity owing to its high soft-tissue contrast resolution. In some viral encephalitides the damage pattern detectable by MRI is highly specific. This applies, for example, to the frequently haemorrhagic lesions of the structures of the limbic system in herpes simplex virus type 1 encephalitis and to the extensive symmetric white-matter lesions of progressive diffuse leukoencephalopathy in AIDS patients. In the majority of viral encephalitides MRI demonstrates the location and extent of the parenchymal damage but does not permit a reliable

  8. Optimization of CT image reconstruction algorithms for the lung tissue research consortium (LTRC)

    Science.gov (United States)

    McCollough, Cynthia; Zhang, Jie; Bruesewitz, Michael; Bartholmai, Brian

    2006-03-01

    spatial resolution bar patterns demonstrated that the BONE (GE) and B46f (Siemens) showed higher spatial resolution compared to the STANDARD (GE) or B30f (Siemens) reconstruction algorithms typically used for routine body CT imaging. Only the sharper images were deemed clinically acceptable for the evaluation of diffuse lung disease (e.g. emphysema). Quantitative analyses of the extent of emphysema in patient data showed the percent volumes above the -950 HU threshold as 9.4% for the BONE reconstruction, 5.9% for the STANDARD reconstruction, and 4.7% for the BONE filtered images. Contrary to the practice of using standard resolution CT images for the quantitation of diffuse lung disease, these data demonstrate that a single sharp reconstruction (BONE/B46f) should be used for both the qualitative and quantitative evaluation of diffuse lung disease. The sharper reconstruction images, which are required for diagnostic interpretation, provide accurate CT numbers over the range of -1000 to +900 HU and preserve the fidelity of small structures in the reconstructed images. A filtered version of the sharper images can be accurately substituted for images reconstructed with smoother kernels for comparison to previously published results.

  9. Optimization of diagnostic imaging use in patients with acute abdominal pain (OPTIMA): Design and rationale

    OpenAIRE

    Bossuyt Patrick MM; Dijkgraaf Marcel GW; van Randen Adrienne; Laméris Wytze; Stoker Jaap; Boermeester Marja A

    2007-01-01

    Abstract Background The acute abdomen is a frequent entity at the Emergency Department (ED), which usually needs rapid and accurate diagnostic work-up. Diagnostic work-up with imaging can consist of plain X-ray, ultrasonography (US), computed tomography (CT) and even diagnostic laparoscopy. However, no evidence-based guidelines exist in current literature. The actual diagnostic work-up of a patient with acute abdominal pain presenting to the ED varies greatly between hospitals and physicians....

  10. Computer systems for three-dimensional diagnostic imaging: an examination of the state of the art.

    Science.gov (United States)

    Stytz, M R; Frieder, O

    1991-01-01

    This survey reviews three-dimensional (3D) medical imaging machines and 3D medical imaging operations. The survey is designed to provide a snapshot overview of the present state of computer architectures for 3D medical imaging. The basic volume manipulation, object segmentation, and graphics operations required of a 3D medical imaging machine are described and sample algorithms are presented. The architecture and 3D imaging algorithms employed in 11 machines which render medical images are assessed. The performance of the machines is compared across several dimensions, including image resolution, elapsed time to form an image, imaging algorithms employed in the machine, and the degree of parallelism employed in the architecture. The innovation in each machine, whether architectural or algorithmic, is described in detail. General trends for future developments in this field are delineated and an extensive bibliography is provided.

  11. Study design for concurrent development, assessment, and implementation of new diagnostic imaging technology

    NARCIS (Netherlands)

    M.G.M. Hunink (Myriam); G.P. Krestin (Gabriel)

    2002-01-01

    With current constraints on health care resources and emphasis on value for money, new diagnostic imaging technologies must be assessed and their value demonstrated. The state of the art in the field of diagnostic imaging technology assessment advocates a hierarchical

  12. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm.

    Science.gov (United States)

    Yang, Zhang; Shufan, Ye; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony search (HS) algorithm is an optimization search algorithm currently applied to many practical problems. The HS algorithm iteratively revises the variables in the harmony memory and the probabilities of the different values they can take, completing iterative convergence to achieve the optimal result. Accordingly, this study proposed a modified algorithm to improve the efficiency of HS. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. This optimal convergence value was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation effect of the improved algorithm was superior to that of the original fuzzy clustering method.
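
    For orientation, here is a minimal standard harmony search core; the paper's actual contribution (the rough-set-guided improvement and the FCM seeding) is not reproduced. Parameter names follow common HS usage (hmcr = harmony memory considering rate, par = pitch adjusting rate, bw = bandwidth), and all values, the objective and the seed are illustrative.

```python
import random

# Minimal harmony search minimizing a continuous objective f.

def harmony_search(f, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.1, iters=2000, seed=1):
    """Each new harmony draws every variable either from the harmony
    memory (rate hmcr, pitch-adjusted with rate par and bandwidth bw)
    or uniformly at random, and replaces the worst memory entry when
    it improves on it."""
    rng = random.Random(seed)
    lo, hi = bounds
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:
                v = rng.choice(memory)[d]           # recall from memory
                if rng.random() < par:
                    v = min(hi, max(lo, v + rng.uniform(-bw, bw)))
            else:
                v = rng.uniform(lo, hi)             # random consideration
            new.append(v)
        worst = max(range(hms), key=lambda i: f(memory[i]))
        if f(new) < f(memory[worst]):
            memory[worst] = new                     # update the memory
    return min(memory, key=f)

best = harmony_search(lambda x: sum(v * v for v in x), dim=2, bounds=(-5, 5))
```

    The study's modification targets exactly this loop's convergence behaviour before handing the result to FCM as an initial value.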

  13. Image Dodging Algorithm for GF-1 Satellite WFV Imagery

    Directory of Open Access Journals (Sweden)

    HAN Jie

    2016-12-01

    Full Text Available Image dodging is one of the important processes that determine whether a mosaicked image can be used for quantitative remote sensing applications. GF-1 is the first satellite of CHEOS (the Chinese High-resolution Earth Observation System). The WFV multispectral sensor is one of the instruments onboard GF-1 and consists of four cameras whose images are mosaicked. According to the characteristics of the WFV sensor, this paper proposes an image dodging algorithm based on a cross/inter-radiometric calibration method. First, the traditional cross-calibration method is applied to obtain the calibration coefficients of one WFV camera. Then statistical analysis and simulation methods are adopted to build correlation models of DN and TOA (top-of-atmosphere) radiance between adjacent cameras. The proposed method not only accomplishes the transfer of radiometric performance but also fulfils the image dodging. The experimental results show that the cross/inter-radiometric calibration coefficients can effectively eliminate the radiometric inconsistency between adjacent camera images, realizing the image dodging, so the proposed dodging method can provide a useful reference for other similar sensors in the future.
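
    The paper relates adjacent WFV cameras through statistical correlation models of DN and TOA radiance. As a hedged, deliberately simplified stand-in for such a model, the ordinary least-squares fit below recovers a gain/offset relation from samples in the cameras' overlap region; the data are simulated and all names are illustrative, not the paper's model.

```python
# Ordinary least squares for a gain/offset inter-camera relation.

def fit_linear(x, y):
    """Fit y ≈ gain * x + offset by least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    gain = sxy / sxx
    return gain, my - gain * mx

dn_a = [100, 150, 200, 250, 300]          # camera A digital numbers
dn_b = [115, 170, 225, 280, 335]          # camera B reads 1.1 * DN_A + 5
gain, offset = fit_linear(dn_a, dn_b)
```

    Once such a relation is known, one camera's calibrated response can be transferred to its neighbour, which is the mechanism behind both the calibration transfer and the dodging.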

  14. Image nonlinearity and non-uniformity corrections using Papoulis - Gerchberg algorithm in gamma imaging systems

    Science.gov (United States)

    Shemer, A.; Schwarz, A.; Gur, E.; Cohen, E.; Zalevsky, Z.

    2015-04-01

    In this paper, the authors describe a novel technique for image nonlinearity and non-uniformity corrections in imaging systems based on gamma detectors. The limitations of the gamma detector prevent the production of high-quality images of the radionuclide distribution, causing nonlinearity and non-uniformity distortions in the obtained image. Many techniques have been developed to correct or compensate for these image artifacts using complex calibration processes. The presented method is based on the Papoulis-Gerchberg (PG) iterative algorithm and requires no detector calibration, tuning process or special test phantom.
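
    The PG iteration the method builds on alternates between two constraints: re-imposing the known samples and enforcing a band limit in the Fourier domain. The 1-D sketch below shows that alternating-projection core; the gamma-camera specifics of the paper are not reproduced, and the test signal, gap and band width are illustrative assumptions.

```python
import numpy as np

# 1-D Papoulis-Gerchberg restoration of missing samples.

def papoulis_gerchberg(samples, known, band, iters=200):
    """Restore the unknown entries of a band-limited signal. `known`
    is a boolean mask, `band` the number of retained low-frequency
    bins at each end of the FFT."""
    n = len(samples)
    x = np.where(known, samples, 0.0)
    for _ in range(iters):
        X = np.fft.fft(x)
        X[band:n - band] = 0.0            # enforce the band limit
        x = np.fft.ifft(X).real
        x[known] = samples[known]         # re-impose measured values
    return x

n = 64
t = np.arange(n)
true = np.cos(2 * np.pi * 2 * t / n)      # band-limited test signal
known = np.ones(n, dtype=bool)
known[20:28] = False                      # simulated distorted region
rec = papoulis_gerchberg(true, known, band=4)
```

    For consistent band-limited data the iteration converges, so the gap is progressively filled in while the measured samples are kept exact.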

  15. Pomegranate MR images analysis using ACM and FCM algorithms

    Science.gov (United States)

    Morad, Ghobad; Shamsi, Mousa; Sedaaghi, M. H.; Alsharif, M. R.

    2011-10-01

    Segmentation of an image plays an important role in image processing applications. In this paper, segmentation of pomegranate magnetic resonance (MR) images is explored. Pomegranate has valuable nutritional and medicinal properties, for which the maturity indices and the quality of internal tissues play an important role in the sorting process; an acceptable determination of these features cannot easily be achieved by a human operator. Seeds and soft tissues are the main internal components of pomegranate. For research purposes such as non-destructive investigation, in order to determine the ripening index and the percentage of seeds during the growth period, segmentation of the internal structures should be performed as exactly as possible. In this paper, we present an automatic algorithm to segment the internal structure of pomegranate. Since the intensity of the stem and calyx is close to that of the internal tissues, stem and calyx pixels are usually mislabelled as internal tissue by segmentation algorithms. To solve this problem, the fruit shape is first extracted from its background using an active contour model (ACM). Then the stem and calyx are removed using morphological filters. Finally, the image is segmented by fuzzy c-means (FCM). The experimental results show an accuracy of 95.91% in the presence of the stem and calyx, while the segmentation accuracy increases to 97.53% when the stem and calyx are first removed by morphological filters.
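
    The final clustering stage can be sketched with the standard fuzzy c-means updates on 1-D intensities; the ACM-based shape extraction and the morphological stem/calyx removal of the pipeline are omitted here, and the fuzzifier, data and seed are illustrative assumptions rather than the authors' settings.

```python
import numpy as np

# Minimal fuzzy c-means on 1-D intensity values.

def fcm(data, c=2, m=2.0, iters=50, seed=0):
    """Alternate the standard FCM membership and centre updates
    (fuzzifier m > 1); returns cluster centres and memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, data.size))
    u /= u.sum(axis=0)                    # memberships sum to 1 per point
    for _ in range(iters):
        w = u ** m
        centers = (w @ data) / w.sum(axis=1)
        d = np.abs(data[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))       # u_ik ∝ d_ik^(-2/(m-1))
        u /= u.sum(axis=0)
    return centers, u

data = np.array([0.0, 0.1, 0.2, 4.0, 4.1, 4.2])   # two intensity groups
centers, u = fcm(data)
```

    Unlike hard k-means, each pixel keeps a graded membership in every class, which is useful where seed and soft-tissue intensities overlap.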

  16. Practical contour segmentation algorithm for small animal digital radiography image

    Science.gov (United States)

    Zheng, Fang; Hui, Gong

    2008-12-01

    In this paper a practical, automated contour segmentation technique for digital radiography images is described. Digital radiography is an imaging mode based on the penetrability of x-rays. Unlike reflection imaging modes such as a visible-light camera, the pixel brightness represents the summation of the attenuations along the photon path, so the result is a grey-scale picture rather than a colour photograph. Contour extraction is of great importance in medical applications, especially in non-destructive inspection. Manual segmentation techniques include pixel selection, geometrical boundary selection and tracing, but they rely heavily on the experience of the operator and are time-consuming. Some researchers try to find contours from the intensity jumps around them; however, such intensity jumps also exist at the junction of bone and soft tissue. A practical way is to return to the basic threshold algorithm, and this research emphasizes how to find the optimal threshold. A high-resolution digital radiography system is used to provide the original grey-scale image. A mouse is used as the sample in this paper to show the feasibility of the algorithm.
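
    The abstract leaves the optimal-threshold criterion open. Otsu's between-class-variance criterion, sketched below, is one standard histogram-based way to choose it automatically; it is shown here purely as an illustration of threshold selection, not as the authors' exact procedure, and the example histogram is invented.

```python
# Otsu's method: choose the grey level maximising between-class variance.

def otsu_threshold(hist):
    """Return the grey level t maximising between-class variance when
    pixels <= t form class 0 and pixels > t form class 1."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t, h in enumerate(hist):
        w0 += h
        if w0 == 0 or w0 == total:
            continue                      # skip empty classes
        sum0 += t * h
        m0 = sum0 / w0                    # class-0 mean
        m1 = (sum_all - sum0) / (total - w0)  # class-1 mean
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

hist = [5, 10, 5, 0, 0, 0, 0, 5, 10, 5]   # clearly bimodal histogram
t = otsu_threshold(hist)
```

    On a bimodal radiograph histogram (object versus background) the maximiser lands in the valley between the two modes.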

  17. Chaotic CDMA watermarking algorithm for digital image in FRFT domain

    Science.gov (United States)

    Liu, Weizhong; Yang, Wentao; Feng, Zhuoming; Zou, Xuecheng

    2007-11-01

A digital image-watermarking algorithm based on the fractional Fourier transform (FRFT) domain is presented, utilizing a chaotic CDMA technique. As a popular and typical transmission technique, CDMA has many advantages such as privacy, anti-jamming capability and low power spectral density, which can provide robustness against image distortions and malicious attempts to remove or tamper with the watermark. A super-hybrid chaotic map, with good auto-correlation and cross-correlation characteristics, is adopted to produce many quasi-orthogonal codes (QOC) that can replace the periodic PN-codes used in traditional CDMA systems. The watermark data is divided into segments that correspond to different chaotic QOCs; these are modulated into CDMA watermark data and embedded into the low-frequency amplitude coefficients of the FRFT domain of the cover image. During watermark detection, each chaotic QOC extracts its corresponding watermark segment by calculating the correlation coefficient between the chaotic QOC and the watermarked data of the detected image. The CDMA technique not only enhances the robustness of the watermark but also compresses the data of the modulated watermark. Experimental results show that the watermarking algorithm performs well in three respects: imperceptibility, robustness against attacks, and security.
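A minimal sketch of the CDMA spreading-and-correlation idea, using a plain logistic map as a stand-in for the paper's (unspecified) super-hybrid chaotic map; the FRFT embedding itself is omitted:

```python
import numpy as np

def chaotic_code(seed, length, mu=3.99):
    """Bipolar spreading code from a logistic map. This map is a simple
    stand-in for the paper's super-hybrid chaotic map (an assumption)."""
    x, out = seed, np.empty(length)
    for i in range(length):
        x = mu * x * (1.0 - x)
        out[i] = 1.0 if x >= 0.5 else -1.0
    return out

N = 1024
codes = [chaotic_code(0.1 + 0.07 * k, N) for k in range(4)]

# CDMA-style spreading: each watermark bit modulates its own code.
bits = np.array([1, -1, -1, 1])
signal = sum(b * c for b, c in zip(bits, codes))

# Detection by correlation, mirroring the paper's extraction step.
recovered = np.array([np.sign(signal @ c) for c in codes])
```

Distinct seeds give quasi-orthogonal codes: the cross-correlation between two codes is on the order of sqrt(N), far below the autocorrelation peak N, so each bit is recovered from the sign of the correlation.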

  18. Modified Discrete Grey Wolf Optimizer Algorithm for Multilevel Image Thresholding

    Directory of Open Access Journals (Sweden)

    Linguo Li

    2017-01-01

Full Text Available The computation of image segmentation has become more complicated with the increasing number of thresholds, and the selection and application of thresholds in image thresholding has become an NP-hard problem. The paper puts forward the modified discrete grey wolf optimizer algorithm (MDGWO), which improves the optimal-solution updating mechanism of the search agents by means of weights. Taking Kapur’s entropy as the optimized function and exploiting the discreteness of thresholds in image segmentation, the paper first discretizes the grey wolf optimizer (GWO) and then proposes a new attack strategy that uses a weight coefficient to replace the optimal-solution search formula used in the original algorithm. The experimental results show that MDGWO can search out the optimal thresholds efficiently and precisely, very close to the results of exhaustive search. In comparison with electromagnetism optimization (EMO), differential evolution (DE), the Artificial Bee Colony (ABC), and the classical GWO, it is concluded that MDGWO has advantages over the latter four in terms of image segmentation quality, objective function values and their stability.
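Kapur's entropy, the fitness function MDGWO maximizes, can be written down directly. The sketch below evaluates it over a histogram and, for a single threshold, finds the optimum by exhaustive search; the grey-wolf update rules (which take over when many thresholds make exhaustive search infeasible) are not reproduced here:

```python
import numpy as np

def kapur_entropy(hist, thresholds):
    """Kapur's objective: sum of Shannon entropies of the classes the
    thresholds induce on the normalized histogram."""
    p = hist / hist.sum()
    bounds = [0] + sorted(thresholds) + [len(p)]
    total = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()
        if w <= 0:
            return -np.inf              # empty class: invalid split
        q = p[lo:hi] / w                # class-conditional distribution
        q = q[q > 0]
        total -= (q * np.log(q)).sum()
    return total

# For one threshold exhaustive search is feasible; it shows what a
# metaheuristic such as MDGWO approximates for many thresholds.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 8, 4000), rng.normal(180, 8, 4000)])
hist, _ = np.histogram(img, bins=256, range=(0, 255))
best_t = max(range(1, 256), key=lambda t: kapur_entropy(hist, [t]))
```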

  19. [Evaluation of the toxoplasmosis seroprevalence in pregnant women and creating a diagnostic algorithm].

    Science.gov (United States)

    Mumcuoglu, Ipek; Toyran, Alparslan; Cetin, Feyza; Coskun, Feride Alaca; Baran, Irmak; Aksu, Neriman; Aksoy, Altan

    2014-04-01

Toxoplasma gondii, an obligatory intracellular protozoan, is widely distributed around the world and can infect all mammals and birds. While acquired toxoplasmosis is usually asymptomatic in healthy subjects, acute infection during pregnancy may lead to abortion, stillbirth, and fetal neurological and ocular damage. For the prevention of congenital toxoplasmosis it is recommended that a screening programme and a diagnostic algorithm for pregnant women be implemented while considering cost effectiveness. Thus, it is necessary to determine the seroprevalence of toxoplasmosis in pregnant women and the actual risk of T.gondii transmission during pregnancy in a given area. The aims of this study were to determine T.gondii seropositivity in pregnant women admitted to our hospital and to create a diagnostic algorithm in order to resolve the problems arising from interpretation of serological test results. A total of 6140 women aged 15-49 years who were admitted to our hospital between April 1, 2010 and July 31, 2013 were evaluated retrospectively. In the serum samples, T.gondii IgM, IgG and IgG avidity tests were performed on a VIDAS automated analyzer using TOXO IgM, TOXO IgG II and TOXO IgG avidity kits (bioMerieux, France). Both T.gondii IgM and IgG tests were requested for 4758 (77.5%) of the pregnant women, while only the IgM test was requested for 1382 (22.5%) cases. Sole IgM positivity was found in 0.2% (11/6140), IgG in 26.4% (1278/4758) and both IgM + IgG in 0.9% (44/4758). T.gondii IgG avidity tests were requested for 12 of the 44 women who were both IgM and IgG positive; eight of them revealed high avidity and four low avidity. The avidity test was ordered for 91 (7.1%) of the 1278 sole IgG positive cases, and four of them were found to have low avidity. The IgG avidity test was ordered for 554 (16.2%) of the IgM and/or IgG negative subjects; however, the test was not performed, in accordance with the rejection criteria of the laboratory. 
It was noticed that

  20. Reconstruction of quasi-monochromatic images from a multiple monochromatic x-ray imaging diagnostic for inertial confinement fusion

    Energy Technology Data Exchange (ETDEWEB)

    Izumi, N; Turner, R; Barbee, T; Koch, J; Welser, L; Mansini, R

    2004-04-15

We have developed a software package for image reconstruction for a multiple monochromatic x-ray imaging diagnostic (MMI) for the diagnosis of inertial confinement fusion capsules. The MMI consists of a pinhole array, a multi-layer Bragg mirror, and a charge injection device (CID) image detector. The pinhole array projects ~500 sub-images onto the CID after reflection off the multi-layer Bragg mirror. The raw images obtained have continuum spectral dispersion along the vertical axis. For systematic analysis, a computer-aided reconstruction of the quasi-monochromatic image is essential.

  1. An Improved Image Segmentation Algorithm Based on MET Method

    Directory of Open Access Journals (Sweden)

    Z. A. Abo-Eleneen

    2012-09-01

Full Text Available Image segmentation is a basic component of many computer vision systems and pattern recognition. Thresholding is a simple but effective method to separate objects from the background. A commonly used method, Kittler and Illingworth's minimum error thresholding (MET), noticeably improves the image segmentation result and is simple and easy to implement. However, it fails in the presence of skew and heavy-tailed class-conditional distributions or if the histogram is unimodal or close to unimodal. The Fisher information (FI) measure is an important concept in statistical estimation theory and information theory. Employing the FI measure, an improved threshold image segmentation algorithm, an FI-based extension of MET, is developed. Compared with the MET method, the improved method can in general achieve more robust performance when the data for either class are skew and heavy-tailed.
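The baseline MET criterion the paper extends can be sketched as an exhaustive minimization of the Kittler–Illingworth criterion over the histogram (the FI-based extension itself is not reproduced here):

```python
import numpy as np

def min_error_threshold(hist):
    """Kittler & Illingworth's MET: model both classes as Gaussians and
    minimize J(t) = 1 + 2[w0 log s0 + w1 log s1] - 2[w0 log w0 + w1 log w1]."""
    p = hist / hist.sum()
    bins = np.arange(len(p))
    best_t, best_j = None, np.inf
    for t in range(1, len(p) - 1):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 < 1e-9 or w1 < 1e-9:
            continue                      # one class empty: skip
        m0 = (p[:t] * bins[:t]).sum() / w0
        m1 = (p[t:] * bins[t:]).sum() / w1
        s0 = np.sqrt((p[:t] * (bins[:t] - m0) ** 2).sum() / w0)
        s1 = np.sqrt((p[t:] * (bins[t:] - m1) ** 2).sum() / w1)
        if s0 < 1e-9 or s1 < 1e-9:
            continue                      # degenerate variance: skip
        j = 1 + 2 * (w0 * np.log(s0) + w1 * np.log(s1)) \
              - 2 * (w0 * np.log(w0) + w1 * np.log(w1))
        if j < best_j:
            best_t, best_j = t, j
    return best_t

# Bimodal toy histogram: MET lands near the Bayes boundary (~120).
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 10, 5000)])
hist, _ = np.histogram(img, bins=256, range=(0, 255))
t = min_error_threshold(hist)
```

On clean bimodal Gaussian data this works well; the failure modes the abstract lists (skew, heavy tails, near-unimodal histograms) are exactly where the Gaussian class model underlying J(t) breaks down.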

  2. Adaptively wavelet-based image denoising algorithm with edge preserving

    Institute of Scientific and Technical Information of China (English)

    Yihua Tan; Jinwen Tian; Jian Liu

    2006-01-01

A new wavelet-based image denoising algorithm, which exploits the edge information hidden in the corrupted image, is presented. First, a Canny-like edge detector identifies the edges in each subband. Second, the wavelet coefficients in neighboring scales are multiplied to suppress the noise while magnifying the edge information, and the result is used to exclude fake edges; isolated edge pixels are also identified as noise. Unlike thresholding methods, we then use a local window filter in the wavelet domain to remove noise, in which the variance estimation is elaborated to utilize the edge information. This method is adaptive to local image details, and can achieve better performance than state-of-the-art methods.

  3. A High Performance Image Authentication Algorithm on GPU with CUDA

    Directory of Open Access Journals (Sweden)

    Caiwei Lin

    2011-03-01

Full Text Available There has been a large amount of research on image authentication methods. Many of the schemes perform well in verification; however, most of them are time-consuming in their traditional serial implementations, and improving the efficiency of the authentication process has become one of the challenges in the image authentication field. The trend is toward authentication systems that are high-performance, real-time, flexible and easy to develop. In this paper, we present a CUDA-based implementation of an image authentication algorithm on NVIDIA's Tesla C1060 GPU. Compared with the original CPU implementation, our CUDA-based implementation runs 20x-50x faster with a single GPU device, and experiments show that, by using two GPUs, performance can be further improved by around 1.2 times relative to a single GPU.

  4. Image Transformation using Modified Kmeans clustering algorithm for Parallel saliency map

    Directory of Open Access Journals (Sweden)

    Aman Sharma

    2013-08-01

Full Text Available The goal is to design an image transformation system; depending on the transform chosen, the input and output images may appear entirely different and have different interpretations. The image transformation is built from several modules: the input image, an image cluster index, objects in clusters, and a color-index transformation of the image. A K-means clustering algorithm is used to cluster the image for better segmentation. In the proposed method, a parallel saliency algorithm with K-means clustering is used to avoid local minima and to find the saliency map. The parallel saliency algorithm is shown to perform better than the existing saliency algorithm.
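A minimal serial sketch of the k-means clustering step on pixel colors (the parallel saliency computation built on top of it is not shown; the deterministic spread-out initialization is a simplification):

```python
import numpy as np

def kmeans_pixels(pixels, k=2, n_iter=20):
    """Lloyd's k-means over pixel feature vectors (e.g. RGB colors).
    Real implementations would use k-means++ or random restarts."""
    idx = np.linspace(0, len(pixels) - 1, k).astype(int)
    centers = pixels[idx].astype(float)          # spread-out init
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(n_iter):
        d = np.linalg.norm(pixels[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)                # assignment step
        for j in range(k):
            if (labels == j).any():
                centers[j] = pixels[labels == j].mean(axis=0)  # update step
    return centers, labels

# Toy pixel set with two well-separated color populations.
rng = np.random.default_rng(1)
reds = rng.normal((200, 40, 40), 8, (300, 3))
blues = rng.normal((40, 40, 200), 8, (300, 3))
centers, labels = kmeans_pixels(np.vstack([reds, blues]), k=2)
```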

  5. Feedback control and beam diagnostic algorithms for a multiprocessor DSP system

    Energy Technology Data Exchange (ETDEWEB)

    Teytelman, D.; Claus, R.; Fox, J.; Hindi, H.; Linscott, I.; Prabhakar, S. [Stanford Linear Accelerator Center, P.O. Box 4349, Stanford, California 94309 (United States); Drago, A. [INFN---Laboratori Nazionali di Frascati, P.O. Box 13, I-00044 Frascati (Roma) (Italy); Stover, G. [Lawrence Berkeley Laboratory, 1 Cyclotron Road, Berkeley, California 94563 (United States)

    1997-01-01

The multibunch longitudinal feedback system developed for use by PEP-II, ALS, and DAΦNE uses a parallel array of digital signal processors (DSPs) to calculate the feedback signals from measurements of beam motion. The system is designed with general-purpose programmable elements which allow many feedback operating modes as well as system diagnostics, calibrations, and accelerator measurements. The overall signal processing architecture of the system is illustrated. The real-time DSP algorithms and off-line postprocessing tools are presented. The problems in managing 320k samples of data collected in one beam transient measurement are discussed and our solutions are presented. Example software structures are presented showing the beam feedback process, techniques for modal analysis of beam motion (used to quantify growth and damping rates of instabilities), and diagnostic functions (such as timing adjustment of beam pick-up and kicker components). These operating techniques are illustrated with example results obtained from the system installed at the Advanced Light Source at LBL. © 1997 American Institute of Physics.

  6. Feedback control and beam diagnostic algorithms for a multiprocessor DSP system

    Energy Technology Data Exchange (ETDEWEB)

    Teytelman, D.; Claus, R.; Fox, J.; Hindi, H.; Linscott, I.; Prabhakar, S. [Stanford Univ., CA (United States). Stanford Linear Accelerator Center; Drago, A. [INFN, Roma (Italy). Lab. Nazionali di Frascati; Stover, G. [Lawrence Berkeley Lab., CA (United States)

    1996-09-01

The multibunch longitudinal feedback system developed for use by PEP-II, ALS and DAΦNE uses a parallel array of digital signal processors to calculate the feedback signals from measurements of beam motion. The system is designed with general-purpose programmable elements which allow many feedback operating modes as well as system diagnostics, calibrations and accelerator measurements. The overall signal processing architecture of the system is illustrated. The real-time DSP algorithms and off-line postprocessing tools are presented. The problems in managing 320k samples of data collected in one beam transient measurement are discussed and the solutions are presented. Example software structures are presented showing the beam feedback process, techniques for modal analysis of beam motion (used to quantify growth and damping rates of instabilities) and diagnostic functions (such as timing adjustment of beam pick-up and kicker components). These operating techniques are illustrated with example results obtained from the system installed at the Advanced Light Source at LBL.

  7. A new trust region algorithm for image restoration

    Institute of Scientific and Technical Information of China (English)

    WEN Zaiwen; WANG Yanfei

    2005-01-01

Image restoration problems play an important role in remote sensing and astronomical image analysis. One common method for the recovery of a true image from a corrupted or blurred image is the least squares error (LSE) method, but the LSE method is unstable in practical applications. A popular way to overcome instability is Tikhonov regularization. However, difficulties are encountered when adjusting the so-called regularization parameter α, and how to truncate the iteration at an appropriate step is also challenging. In this paper we use the trust region method to deal with the image restoration problem; the trust region subproblem is solved by the truncated Lanczos method and the preconditioned truncated Lanczos method. We also develop a fast algorithm for evaluating the Kronecker matrix-vector product when the matrix is banded. The trust region method is very stable and robust, and it has the nice property of updating the trust region automatically. This frees us from the tedious search for regularization parameters and truncation levels. Some numerical tests on remotely sensed images are given to show that the trust region method is promising.
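The instability of LSE and the stabilizing effect of Tikhonov regularization are easy to demonstrate on a small 1-D deblurring problem; note that the fixed α below is exactly the hand-tuning the trust-region approach is designed to avoid:

```python
import numpy as np

# 1-D deblurring toy: recover x from b = A x + noise, where A is an
# ill-conditioned Gaussian blur. Plain LSE explodes the noise; Tikhonov,
# x = (A^T A + alpha I)^{-1} A^T b, stays stable.
rng = np.random.default_rng(0)
n = 50
idx = np.arange(n)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)               # row-normalized blur
x_true = (np.abs(idx - 25) < 8).astype(float)   # box-shaped "image"
b = A @ x_true + rng.normal(0, 1e-3, n)         # blurred + small noise

x_lse = np.linalg.solve(A.T @ A, A.T @ b)       # unstable normal equations
alpha = 1e-3                                    # hand-picked regularizer
x_tik = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
```

Even noise at the 1e-3 level is amplified enormously by the near-singular normal equations, while the regularized solution tracks the true box profile.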

  8. Research on Airborne SAR Imaging Based on ECS Algorithm

    Science.gov (United States)

    Dong, X. T.; Yue, X. J.; Zhao, Y. H.; Han, C. M.

    2017-09-01

Due to its ability to obtain abundant information flexibly, accurately, and quickly, airborne SAR is significant in the field of Earth observation and many other applications. Ideally the flight paths are straight lines, but in reality some deviation from the ideal path is impossible to avoid. A small disturbance from the ideal line has a major effect on the signal phase, dramatically deteriorating the quality of SAR images and data. Therefore, to get accurate echo information and radar images, it is essential to measure and compensate for the nonlinear motion of the antenna trajectory. By compensating each flight trajectory to its reference track, the MOCO method corrects the linear and quadratic phase errors caused by nonlinear antenna trajectories. Position and Orientation System (POS) data are applied to acquire accurate motion attitudes and spatial positions of the antenna phase centre (APC). In this paper, the extended chirp scaling (ECS) algorithm is used to process the echo data of airborne SAR. An experiment is done using VV-polarization raw data of a C-band airborne SAR, in which the quality of compensated and uncompensated SAR images is evaluated; the former always performs better than the latter. After MOCO processing, azimuth ambiguity is reduced, the peak sidelobe ratio (PSLR) improves effectively and the resolution of the images improves markedly. The results show the validity and operability of the imaging process for airborne SAR.

  9. An Improved Medical Image Fusion Algorithm for Anatomical and Functional Medical Images

    Institute of Scientific and Technical Information of China (English)

    CHEN Mei-ling; TAO Ling; QIAN Zhi-yu

    2009-01-01

In recent years, many medical image fusion methods have been exploited to derive useful information from multimodality medical image data, but there has been no appropriate fusion algorithm for anatomical and functional medical images. In this paper, the traditional wavelet fusion method is improved and a new fusion algorithm for anatomical and functional medical images is proposed, in which high-frequency and low-frequency coefficients are treated separately. When choosing high-frequency coefficients, the global gradient of each sub-image is calculated to realize adaptive fusion, so that the fused image preserves the functional information; the low-frequency coefficients are chosen based on an analysis of neighborhood region energy, so that the fused image preserves the anatomical image's edge and texture features. Experimental results and quality evaluation parameters show that the improved fusion algorithm can enhance edge and texture features while retaining the functional and anatomical information effectively.
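An illustrative reading of the two fusion rules, applied here to plain coefficient arrays (in the paper they operate on wavelet subbands); the window size and the ≥ tie-breaking are assumptions:

```python
import numpy as np

def fuse_highfreq(cA, cB):
    """High-frequency rule: keep, per subband, the coefficients of the
    source whose global gradient energy is larger."""
    def grad_energy(c):
        gy, gx = np.gradient(c.astype(float))
        return (gx ** 2 + gy ** 2).sum()
    return cA if grad_energy(cA) >= grad_energy(cB) else cB

def fuse_lowfreq(cA, cB, win=3):
    """Low-frequency rule: per coefficient, take the source with larger
    local neighborhood energy (box window; edges zero-padded)."""
    k = win // 2
    pA = np.pad(cA.astype(float) ** 2, k)
    pB = np.pad(cB.astype(float) ** 2, k)
    h, w = cA.shape
    eA = sum(pA[i:i + h, j:j + w] for i in range(win) for j in range(win))
    eB = sum(pB[i:i + h, j:j + w] for i in range(win) for j in range(win))
    return np.where(eA >= eB, cA, cB)

# Toy subbands: A carries a sharp vertical edge, B is nearly flat.
cA = np.zeros((6, 6)); cA[:, 3] = 4.0
cB = np.full((6, 6), 0.1)
high = fuse_highfreq(cA, cB)

# Low-frequency bands with energy on opposite halves.
lA = np.zeros((4, 4)); lA[:, :2] = 2.0
lB = np.zeros((4, 4)); lB[:, 2:] = 2.0
low = fuse_lowfreq(lA, lB)
```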

  10. A Diagnostic Algorithm for the Detection of Clostridium difficile-Associated Diarrhea.

    Science.gov (United States)

    Yoldaş, Özlem; Altındiş, Mustafa; Cufalı, Davut; Aşık, Gülşah; Keşli, Recep

    2016-01-01

Clostridium difficile is a common cause of hospital-acquired diarrhea, which is usually associated with previous antibiotic use. The clinical manifestations of C. difficile infection (CDI) may range from mild diarrhea to fulminant colitis. Clostridium difficile should be considered in diarrhea cases with a history of antibiotic use within the last 8 weeks (community-associated CDI) or with a hospital stay of at least 3 days, regardless of the duration of antibiotic use (hospital-acquired CDI). This study investigated the frequency of CDI in diarrheic patients and evaluated the efficacy of the triple diagnostic algorithm that is proposed here for C. difficile detection. Cross-sectional study. In this study, we compared three methods currently employed for C. difficile detection using 95 patient stool samples: an enzyme immunoassay (EIA) for toxin A/B (C. diff Toxin A+B; Diagnostic Automation Inc.; Calabasas, CA, USA), an EIA for glutamate dehydrogenase (GDH) (C. DIFF CHEK-60TM, TechLab Inc.; Blacksburg, VA, USA), and a polymerase chain reaction (PCR)-based assay (GeneXpert® C. difficile; Cepheid, Sunnyvale, CA, USA) that detects C. difficile toxin genes, as well as conventional methods. In this study, 50.5% of the patients were male; 50 patients were outpatients, 32 were from inpatient clinics and 13 were from the intensive care unit. Of the 95 stool samples tested for GDH, 28 were positive. Six samples were positive by PCR, while nine samples were positive for toxin A/B. The hypervirulent strain NAP-1 and the binary toxin were not detected. The rate of occurrence of toxigenic C. difficile was 5.1% in the samples. Cefaclor, ampicillin-sulbactam, ertapenem, and piperacillin-tazobactam were the antibiotics most commonly used by patients before the onset of diarrhea. Among the patients who were hospitalized in an intensive care unit for more than 7 days, 83.3% were positive for CDI by PCR screening. If the PCR test is accepted as the reference: C. difficile

  11. Implementation and Optimization of Image Processing Algorithms on Embedded GPU

    Science.gov (United States)

    Singhal, Nitin; Yoo, Jin Woo; Choi, Ho Yeol; Park, In Kyu

In this paper, we analyze the key factors underlying the implementation, evaluation, and optimization of image processing and computer vision algorithms on an embedded GPU using the OpenGL ES 2.0 shader model. First, we present the characteristics of the embedded GPU and its inherent advantages compared with an embedded CPU. Additionally, we propose techniques to achieve increased performance with optimized shader design. To show the effectiveness of the proposed techniques, we employ cartoon-style non-photorealistic rendering (NPR), speeded-up robust feature (SURF) detection, and stereo matching as example algorithms. Performance is evaluated in terms of the execution time and the speed-up achieved in comparison with the implementation on an embedded CPU.

  12. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method

    Science.gov (United States)

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

Multiband signal fusion is a practicable and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results were obtained in several experiments. However, this method is fragile in noise, since the proper poles are not easy to obtain at low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with mutual incoherence between subband signals, the incoherent parameters (ICP) are predicted through the relation of corresponding poles of each subband. Then, an iterative algorithm aimed at minimizing the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method achieves better fusion results at low SNR.
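The core pole-estimation step can be sketched with a standard one-dimensional matrix pencil (the multiband ICP correction and the iterative fusion are built on top of this and are not shown):

```python
import numpy as np

def matrix_pencil_poles(y, L, n_poles):
    """Estimate signal poles from uniform samples y via the matrix pencil.

    Standard formulation: build Hankel matrices Y0, Y1 shifted by one
    sample; the dominant eigenvalues of pinv(Y0) @ Y1 are the poles."""
    N = len(y)
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])
    Y0, Y1 = Y[:, :-1], Y[:, 1:]
    # Rank-truncated pseudo-inverse keeps only the n_poles dominant
    # modes -- this truncation is what gives the noise robustness.
    U, s, Vh = np.linalg.svd(Y0, full_matrices=False)
    pinv = Vh[:n_poles].conj().T @ np.diag(1.0 / s[:n_poles]) \
           @ U[:, :n_poles].conj().T
    ev = np.linalg.eigvals(pinv @ Y1)
    return ev[np.argsort(-np.abs(ev))[:n_poles]]

# Two damped complex exponentials; noise-free recovery is near-exact.
n = np.arange(40)
z1, z2 = 0.95 * np.exp(0.4j), 0.80 * np.exp(-0.2j)
y = z1 ** n + z2 ** n
poles = matrix_pencil_poles(y, L=20, n_poles=2)
```

The pencil parameter L is typically chosen between N/3 and N/2; the SVD truncation replaces root-MUSIC's subspace step and is the part that degrades more gracefully at low SNR.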

  13. Emergency Department Management of Suspected Calf-Vein Deep Venous Thrombosis: A Diagnostic Algorithm

    Directory of Open Access Journals (Sweden)

    Levi Kitchen

    2016-06-01

Full Text Available Introduction: Unilateral leg swelling with suspicion of deep venous thrombosis (DVT) is a common emergency department (ED) presentation. Proximal DVT (thrombus in the popliteal or femoral veins) can usually be diagnosed and treated at the initial ED encounter. When proximal DVT has been ruled out, isolated calf-vein deep venous thrombosis (IC-DVT) often remains a consideration. The current standard for the diagnosis of IC-DVT is whole-leg vascular duplex ultrasonography (WLUS), a test that is unavailable in many hospitals outside normal business hours. When WLUS is not available from the ED, recommendations for managing suspected IC-DVT vary. The objectives of the study are to use current evidence and recommendations to (1) propose a diagnostic algorithm for IC-DVT when definitive testing (WLUS) is unavailable; and (2) summarize the controversy surrounding IC-DVT treatment. Discussion: The Figure combines D-dimer testing with serial CUS or a single deferred WLUS for the diagnosis of IC-DVT. Such an algorithm has the potential to safely direct the management of suspected IC-DVT when definitive testing is unavailable. Whether or not to treat diagnosed IC-DVT remains widely debated and awaits further evidence. Conclusion: When IC-DVT is not ruled out in the ED, the suggested algorithm, although not prospectively validated by a controlled study, offers an approach to diagnosis that is consistent with current data and recommendations. When IC-DVT is diagnosed, current references suggest that a decision between anticoagulation and continued follow-up outpatient testing can be based on shared decision-making. The risks of proximal progression and life-threatening embolization should be balanced against the generally more benign natural history of such thrombi, and an individual patient’s risk factors for both thrombus propagation and complications of anticoagulation. [West J Emerg Med. 2016;17(4):384-390.]

  14. Opto-acoustic image fusion technology for diagnostic breast imaging in a feasibility study

    Science.gov (United States)

    Zalev, Jason; Clingman, Bryan; Herzog, Don; Miller, Tom; Ulissey, Michael; Stavros, A. T.; Oraevsky, Alexander; Lavin, Philip; Kist, Kenneth; Dornbluth, N. C.; Otto, Pamela

    2015-03-01

Functional opto-acoustic (OA) imaging was fused with gray-scale ultrasound acquired using a specialized duplex handheld probe. Feasibility Study findings indicated the potential to characterize breast masses for cancer more accurately than conventional diagnostic ultrasound (CDU). The Feasibility Study included OA imagery of 74 breast masses collected using the investigational Imagio® breast imaging system. Superior specificity and equal sensitivity relative to CDU were demonstrated, suggesting that OA fusion imaging may potentially obviate the need for negative biopsies in a certain percentage of breast masses without missing cancers. Preliminary results from a 100-subject Pilot Study are also discussed. A larger Pivotal Study (n=2,097 subjects) is underway to confirm the Feasibility Study and Pilot Study findings.

  15. The SUMO Ship Detector Algorithm for Satellite Radar Images

    Directory of Open Access Journals (Sweden)

    Harm Greidanus

    2017-03-01

    Full Text Available Search for Unidentified Maritime Objects (SUMO is an algorithm for ship detection in satellite Synthetic Aperture Radar (SAR images. It has been developed over the course of more than 15 years, using a large amount of SAR images from almost all available SAR satellites operating in L-, C- and X-band. As validated by benchmark tests, it performs very well on a wide range of SAR image modes (from Spotlight to ScanSAR and resolutions (from 1–100 m and for all types and sizes of ships, within the physical limits imposed by the radar imaging. This paper describes, in detail, the algorithmic approach in all of the steps of the ship detection: land masking, clutter estimation, detection thresholding, target clustering, ship attribute estimation and false alarm suppression. SUMO is a pixel-based CFAR (Constant False Alarm Rate detector for multi-look radar images. It assumes a K distribution for the sea clutter, corrected however for deviations of the actual sea clutter from this distribution, implementing a fast and robust method for the clutter background estimation. The clustering of detected pixels into targets (ships uses several thresholds to deal with the typically irregular distribution of the radar backscatter over a ship. In a multi-polarization image, the different channels are fused. Azimuth ambiguities, a common source of false alarms in ship detection, are removed. A reliability indicator is computed for each target. In post-processing, using the results of a series of images, additional false alarms from recurrent (fixed targets including range ambiguities are also removed. SUMO can run in semi-automatic mode, where an operator can verify each detected target. It can also run in fully automatic mode, where batches of over 10,000 images have successfully been processed in less than two hours. The number of satellite SAR systems keeps increasing, as does their application to maritime surveillance. The open data policy of the EU
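SUMO's detector is two-dimensional and K-distribution based; as a simpler illustration of the CFAR thresholding idea it is built on, here is a 1-D cell-averaging CFAR assuming exponential clutter (the guard/training sizes and Pfa below are illustrative):

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, pfa=1e-4):
    """1-D cell-averaging CFAR on a power profile.

    Threshold factor for exponential clutter: alpha = N (Pfa^(-1/N) - 1),
    with N training cells split either side of the cell under test."""
    n = len(power)
    n_train = 2 * train
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
    hits = np.zeros(n, dtype=bool)
    for i in range(guard + train, n - guard - train):
        lead = power[i - guard - train:i - guard]       # cells before CUT
        lag = power[i + guard + 1:i + guard + train + 1]  # cells after CUT
        hits[i] = power[i] > alpha * np.concatenate([lead, lag]).mean()
    return hits

rng = np.random.default_rng(0)
sea = rng.exponential(1.0, 500)     # exponential sea-clutter power
sea[250] += 100.0                   # bright ship-like target
detections = ca_cfar(sea)
```

Estimating the clutter level locally is what keeps the false-alarm rate constant as the sea state varies along the image; SUMO's K-distribution correction refines exactly this background estimate.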

  16. A New Approach to Lung Image Segmentation using Fuzzy Possibilistic C-Means Algorithm

    CERN Document Server

    Gomathi, M

    2010-01-01

Image segmentation is a vital part of image processing. Segmentation has widespread application in the field of medical images in order to diagnose various diseases. Medical images can be segmented manually, but the accuracy of segmentation using segmentation algorithms is greater than that of manual segmentation. In the field of medical diagnosis an extensive diversity of imaging techniques is presently available, such as radiography, computed tomography (CT) and magnetic resonance imaging (MRI). Medical image segmentation is an essential step for most subsequent image analysis tasks. Although the original FCM algorithm yields good results for segmenting noise-free images, it fails to segment images corrupted by noise, outliers and other imaging artifacts. This paper presents an image segmentation approach using a Modified Fuzzy C-Means (FCM) algorithm and the Fuzzy Possibilistic C-Means algorithm (FPCM). This approach is a generalized version of standard Fuzzy C-Means Clustering (FCM) ...

  17. NMR imaging of the liver. Diagnostics, differential diagnostics, therapeutic approaches; MRT der Leber. Diagnostik, Differenzialdiagnostik, Therapieansaetze

    Energy Technology Data Exchange (ETDEWEB)

    Fischbach, Frank; Fischbach, Katharina [Universitaetsklinikum Magdeburg A.oe.R. (Germany). Klinik fuer Radiologie und Nuklearmedizin

    2017-03-01

The book on NMR imaging of the liver covers the following issues: fundamentals of NMR imaging; T1-weighted imaging; T2-weighted imaging; diffusion-weighted imaging; cavernous hemangioma; focal nodular hyperplasia; hepatocellular adenoma; hepatocellular carcinoma; cholangiocellular carcinoma; hepatic metastases.

  18. Infrared image gray adaptive adjusting enhancement algorithm based on gray redundancy histogram-dealing technique

    Science.gov (United States)

    Hao, Zi-long; Liu, Yong; Chen, Ruo-wang

    2016-11-01

Starting from the histogram equalization algorithm used to enhance images in digital image processing, an infrared image gray adaptive adjusting enhancement algorithm based on a gray-redundancy histogram-dealing technique is proposed. Based on a determination of the overall gray value of the image, the algorithm raises or lowers the image's overall gray value by adding appropriate gray points, and then uses the gray-redundancy histogram equalization (HE) method to compress the gray scale of the image. The algorithm can enhance image detail information. Through MATLAB simulation, this paper compares the algorithm with the histogram equalization method and with the algorithm based on the gray-redundancy histogram-dealing technique alone, and verifies the effectiveness of the algorithm.
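For reference, the baseline histogram equalization that the gray-redundancy method modifies looks like this (the adaptive gray-point adjustment itself is not reproduced):

```python
import numpy as np

def equalize(img, levels=256):
    """Baseline histogram equalization via the cumulative distribution.

    Builds a lookup table mapping each gray level through the CDF so the
    output occupies the full dynamic range. Assumes a non-constant image."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                 # first occupied level
    lut = (cdf - cdf_min) / max(cdf[-1] - cdf_min, 1) * (levels - 1)
    lut = np.clip(np.round(lut), 0, levels - 1).astype(np.uint8)
    return lut[img]

# Low-contrast toy IR frame: intensities bunched in [100, 130].
rng = np.random.default_rng(0)
img = rng.integers(100, 131, (64, 64)).astype(np.uint8)
out = equalize(img)
```

A 30-level input range is stretched across the full 0-255 range; the gray-redundancy variant in the abstract changes how that mapping treats sparsely used (redundant) gray levels.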

  19. Genetics algorithm optimization of DWT-DCT based image Watermarking

    Science.gov (United States)

    Budiman, Gelar; Novamizanti, Ledya; Iwut, Iwan

    2017-01-01

Data hiding in image content is mandatory for establishing the ownership of an image. Two-dimensional discrete wavelet transform (DWT) and discrete cosine transform (DCT) are proposed as the transform methods in this paper. First, the host image in RGB color space is converted to a selected color space, and the layer in which the watermark is embedded can also be selected. Next, the 2D-DWT transforms the selected layer, yielding four subbands, of which one is selected. A block-based 2D-DCT then transforms the selected subband. A binary watermark is embedded in the AC coefficients of each block after zigzag ordering and range-based pixel selection. A delta parameter replacing the pixels in each range represents the embedded bit: +delta represents bit “1” and -delta represents bit “0”. The parameters optimized by the Genetic Algorithm (GA) are the selected color space, layer, selected subband of the DWT decomposition, block size, embedding range, and delta. Simulation results show that the GA is able to determine the exact parameters that give optimum imperceptibility and robustness, whether the watermarked image is attacked or not. The DWT stage in DCT-based image watermarking, optimized by the GA, improves the performance of the image watermarking. Under five attacks (JPEG 50%, resize 50%, histogram equalization, salt-and-pepper noise, and additive noise with variance 0.01), the robustness of the proposed method reaches perfect watermark quality with BER=0, and the watermarked image quality by the PSNR measure is also increased by about 5 dB over the previous method.
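The ±delta embedding rule described in the abstract can be sketched on a stand-in coefficient array (the DWT/DCT decomposition and the GA parameter search are outside this sketch; the positions and delta below are illustrative, not the GA-optimized values):

```python
import numpy as np

def embed_bits(coeffs, bits, positions, delta=8.0):
    """Sign-of-delta embedding: the chosen AC coefficients are replaced
    by +delta for bit 1 and -delta for bit 0."""
    out = coeffs.copy()
    for b, p in zip(bits, positions):
        out[p] = delta if b else -delta
    return out

def extract_bits(coeffs, positions):
    """Recover bits from the sign of the marked coefficients."""
    return [1 if coeffs[p] > 0 else 0 for p in positions]

rng = np.random.default_rng(0)
block = rng.normal(0, 4, 64)        # stand-in for a zigzagged DCT block
bits = [1, 0, 1, 1, 0]
pos = [10, 11, 12, 13, 14]          # hypothetical mid-band AC positions
marked = embed_bits(block, bits, pos)

# Mild "attack": additive noise smaller than delta keeps bits readable.
attacked = marked + rng.normal(0, 1.5, 64)
```

This makes the GA's trade-off concrete: a larger delta survives stronger attacks (better BER) but perturbs the image more (worse PSNR), which is why delta is one of the parameters the GA tunes.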

  20. Study on Performance Improvement of Oil Paint Image Filter Algorithm Using Parallel Pattern Library

    Directory of Open Access Journals (Sweden)

    Siddhartha Mukherjee

    2014-03-01

Full Text Available This paper gives a detailed study of the performance of an oil paint image filter algorithm with various parameters applied to an image in the RGB model. Oil paint image processing is very performance-hungry, so the current research tries to find improvement using a parallel pattern library. With increasing kernel size, the processing time of the oil paint image filter algorithm increases exponentially.
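The oil paint filter itself is simple to state, which makes the cost argument concrete: each output pixel scans a (2r+1)² window, so runtime grows rapidly with kernel size. A serial gray-scale sketch (the paper's parallel-pattern-library parallelization is not shown):

```python
import numpy as np

def oil_paint(img, radius=2, levels=16):
    """Classic oil-paint effect: each output pixel takes the mean of the
    window pixels falling in the most frequent quantized intensity bin."""
    q = (img.astype(int) * levels // 256).clip(0, levels - 1)
    h, w = img.shape
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            win_q = q[y0:y1, x0:x1].ravel()
            win_v = img[y0:y1, x0:x1].ravel()
            mode = np.bincount(win_q, minlength=levels).argmax()
            out[y, x] = win_v[win_q == mode].mean()   # mean of mode bucket
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (32, 32)).astype(np.uint8)
res = oil_paint(img)
```

The per-pixel histogram over the window is the hot loop; it is embarrassingly parallel over output pixels, which is why a task-parallel library pays off here.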