WorldWideScience

Sample records for grayscale document images

  1. A secret-sharing-based method for authentication of grayscale document images via the use of the PNG image with a data repair capability.

    Science.gov (United States)

    Lee, Che-Wei; Tsai, Wen-Hsiang

    2012-01-01

    A new blind authentication method based on the secret sharing technique with a data repair capability for grayscale document images via the use of the Portable Network Graphics (PNG) image is proposed. An authentication signal is generated for each block of a grayscale document image, which, together with the binarized block content, is transformed into several shares using the Shamir secret sharing scheme. The involved parameters are carefully chosen so that as many shares as possible are generated and embedded into an alpha channel plane. The alpha channel plane is then combined with the original grayscale image to form a PNG image. During the embedding process, the computed share values are mapped into a range of alpha channel values near their maximum value of 255 to yield a transparent stego-image with a disguise effect. In the process of image authentication, an image block is marked as tampered if the authentication signal computed from the current block content does not match that extracted from the shares embedded in the alpha channel plane. Data repairing is then applied to each tampered block by a reverse Shamir scheme after collecting two shares from unmarked blocks. Measures for protecting the security of the data hidden in the alpha channel are also proposed. Good experimental results prove the effectiveness of the proposed method for real applications.
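The core sharing step above can be illustrated classically. The sketch below is a minimal (k, n) Shamir split over GF(251); the prime, share count, and function names are illustrative assumptions, not the authors' exact parameters (which bind shares to the binarized block content and map them into alpha-channel values near 255).

```python
import random

PRIME = 251  # largest prime below 256; values >= PRIME would need clamping (an assumption of this sketch)

def make_shares(secret, k=2, n=6, seed=0):
    """Split a secret value (0 <= secret < PRIME) into n Shamir shares;
    any k of them suffice to reconstruct it."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(k - 1)]
    # Evaluate the degree-(k-1) polynomial at x = 1..n to get the shares.
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

Any k = 2 surviving shares recover the block's hidden value, while a single share reveals nothing; this is what makes per-block repair from two shares collected out of unmarked blocks possible.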

  2. Fuzzy Matching Based on Gray-scale Difference for Quantum Images

    Science.gov (United States)

    Luo, GaoFeng; Zhou, Ri-Gui; Liu, XingAo; Hu, WenWen; Luo, Jia

    2018-05-01

    Quantum image processing has recently emerged as an essential problem in practical tasks, e.g. real-time image matching. Previous studies have shown that quantum superposition and entanglement can greatly improve the efficiency of complex image processing. In this paper, a fuzzy quantum image matching scheme based on gray-scale difference is proposed to find the target region in a reference image that is very similar to the template image. Firstly, we employ the novel enhanced quantum representation (NEQR) to store digital images. Then certain quantum operations are used to evaluate the gray-scale difference between two quantum images by thresholding. If all of the obtained gray-scale differences are not greater than the threshold value, it indicates a successful fuzzy matching of the quantum images. Theoretical analysis and experiments show that the proposed scheme performs fuzzy matching at a low cost and also enables exponentially significant speedup via quantum parallel computation.

  3. Universal Steganalysis of Data Hiding in Grayscale Images

    Institute of Scientific and Technical Information of China (English)

    HUANG Ji-feng; LIN Jia-jun

    2007-01-01

    This paper proposes a universal steganalysis program based on quantization attack, which can detect several kinds of data-hiding algorithms for grayscale images. In practice, most techniques produce stego images that are perceptually identical to the cover images but exhibit statistical irregularities that distinguish them from cover images. By attacking the suspicious images using the quantization method, we can obtain statistically different results from embedded-and-quantization-attacked images and from quantization-attacked-but-not-embedded sources. We have developed a technique based on a one-class SVM for discriminating between cover images and stego images. Simulation results show that our approach is able to distinguish between cover and stego images with reasonable accuracy.
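The detector above is a one-class SVM over quantization-attack features; as a minimal stand-in for the kind of statistical irregularity such detectors exploit, the sketch below computes a pairs-of-values statistic (LSB embedding tends to equalize the populations of gray levels 2i and 2i+1). The function name, and using this statistic alone as a detector, are assumptions of this sketch.

```python
from collections import Counter

def pov_statistic(pixels):
    """Pairs-of-values statistic over an 8-bit pixel sequence: LSB
    embedding tends to equalize the counts of gray levels 2i and 2i+1,
    so values near zero are suspicious for a full-capacity embedding."""
    hist = Counter(pixels)
    stat = 0.0
    for i in range(128):
        a, b = hist[2 * i], hist[2 * i + 1]
        if a + b:
            stat += (a - b) ** 2 / (a + b)
    return stat
```

In a full steganalyzer, features like this would be computed per image and fed to the one-class classifier trained on cover images only.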

  4. Contrast enhancement of bite mark images using the grayscale mixer in ACR in Photoshop®.

    Science.gov (United States)

    Evans, Sam; Noorbhai, Suzanne; Lawson, Zoe; Stacey-Jones, Seren; Carabott, Romina

    2013-05-01

    Enhanced images may improve bite mark edge definition, assisting forensic analysis. Current contrast enhancement involves color extraction, viewing layered images by channel. A novel technique, producing a single enhanced image using the grayscale mix panel within Adobe Camera Raw®, has been developed and assessed here, allowing adjustment of multiple color channels simultaneously. Stage 1 measured RGB values in 72 versions of a color chart image; eight sliders in Photoshop® were adjusted at 25% intervals, with all corresponding colors affected. Stage 2 used a bite mark image and found that only the red, orange, and yellow sliders had discernible effects. Stage 3 assessed modality preference between color, grayscale, and enhanced images; on average, the 22 survey participants chose the enhanced image as better defined for nine out of 10 bite marks. The study has shown potential benefits for this new technique. However, further research is needed before its use in the analysis of bite marks. © 2013 American Academy of Forensic Sciences.
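The grayscale-mixer idea reduces, in the simplest case, to a weighted blend of color channels. The sketch below uses three RGB weights as a simplified stand-in for ACR's eight hue-range sliders; the function name and default weights are illustrative, not the study's settings.

```python
def grayscale_mix(r, g, b, weights=(0.5, 0.3, 0.2)):
    """Blend one pixel's RGB values into a single grayscale value with
    adjustable per-channel weights (the sliders' role), clipped to 0..255."""
    v = weights[0] * r + weights[1] * g + weights[2] * b
    return max(0, min(255, round(v)))
```

Boosting the red/orange/yellow weights while suppressing the others is what lets a reddish bite mark stand out against skin in the single fused grayscale image.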

  5. Chalcogenide phase-change thin films used as grayscale photolithography materials.

    Science.gov (United States)

    Wang, Rui; Wei, Jingsong; Fan, Yongtao

    2014-03-10

    Chalcogenide phase-change thin films are used in many fields, such as optical information storage and solid-state memory. In this work, we present another application of chalcogenide phase-change thin films, i.e., as grayscale photolithography materials. Grayscale patterns can be directly inscribed on chalcogenide phase-change thin films in a single step by the direct laser writing method. In grayscale photolithography, a laser pulse can induce the formation of a bump structure, and the bump height and size can be precisely controlled by changing the laser energy. Bumps with different heights and sizes present different optical reflection and transmission spectra, leading to different gray levels. For example, continuous-tone grayscale images of a lifelike bird and cat were successfully inscribed onto Sb2Te3 chalcogenide phase-change thin films using a home-built laser direct writer, where the expression and appearance of the lifelike bird and cat are fully presented. This work provides a way to fabricate complicated grayscale patterns using laser-induced bump structures on chalcogenide phase-change thin films, different from current techniques such as photolithography, electron beam lithography, and focused ion beam lithography. The ability to form grayscale patterns on chalcogenide phase-change thin films reveals many potential applications in high-resolution optical images for micro/nano image storage, microartworks, and grayscale photomasks.

  6. Using color and grayscale images to teach histology to color-deficient medical students.

    Science.gov (United States)

    Rubin, Lindsay R; Lackey, Wendy L; Kennedy, Frances A; Stephenson, Robert B

    2009-01-01

    Examination of histologic and histopathologic microscopic sections relies upon differential colors provided by staining techniques, such as hematoxylin and eosin, to delineate normal tissue components and to identify pathologic alterations in these components. Given the prevalence of color deficiency (commonly called "color blindness") in the general population, it is likely that this reliance upon color differentiation poses a significant obstacle for several medical students beginning a course of study that includes examination of histologic slides. In the past, first-year medical students at Michigan State University who identified themselves as color deficient were encouraged to use color transparency overlays or tinted contact lenses to filter out problematic colors. Recently, however, we have offered such students a computer monitor adjusted to grayscale for in-lab work, as well as grayscale copies of color photomicrographs for examination purposes. Grayscale images emphasize the texture of tissues and the contrasts between tissues as the students learn histologic architecture. Using this approach, color-deficient students have quickly learned to compensate for their deficiency by focusing on cell and tissue structure rather than on color variation. Based upon our experience with color-deficient students, we believe that grayscale photomicrographs may also prove instructional for students with normal (trichromatic) color vision, by encouraging them to consider structural characteristics of cells and tissues that may otherwise be overshadowed by stain colors.

  7. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M. [Los Alamos National Lab., NM (United States); Hopper, T. [Federal Bureau of Investigation, Washington, DC (United States)

    1993-05-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.
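The scalar-quantization stage of the WSQ pipeline can be sketched as a dead-zone uniform quantizer applied to each subband's wavelet coefficients. The bin width and zero-bin factor below are generic placeholders, not the per-subband tables the standard actually specifies.

```python
def quantize(coeff, step, zero_bin=1.2):
    """Dead-zone uniform scalar quantizer: a widened bin around zero,
    uniform bins of width `step` elsewhere (placeholder parameters)."""
    half_dead = zero_bin * step / 2
    if abs(coeff) < half_dead:
        return 0
    sign = 1 if coeff > 0 else -1
    return sign * int((abs(coeff) - half_dead) / step + 1)

def dequantize(index, step, zero_bin=1.2):
    """Reconstruct each nonzero index to the midpoint of its bin."""
    if index == 0:
        return 0.0
    sign = 1 if index > 0 else -1
    return sign * (zero_bin * step / 2 + (abs(index) - 0.5) * step)
```

The widened zero bin drives small coefficients to zero, which is what makes the subsequent Huffman coding of the index stream effective.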

  8. Metrology for Grayscale Lithography

    International Nuclear Information System (INIS)

    Murali, Raghunath

    2007-01-01

    Three-dimensional microstructures find applications in diffractive optical elements, photonic elements, etc., and can be efficiently fabricated by grayscale lithography. Good process control is important for achieving the desired structures. Metrology methods for grayscale lithography are discussed. Process optimization for grayscale e-beam lithography is explored, and the various process parameters that affect the grayscale process are discussed.

  9. Grayscale Optical Correlator Workbench

    Science.gov (United States)

    Hanan, Jay; Zhou, Hanying; Chao, Tien-Hsin

    2006-01-01

    Grayscale Optical Correlator Workbench (GOCWB) is a computer program for use in automatic target recognition (ATR). GOCWB performs ATR with an accurate simulation of a hardware grayscale optical correlator (GOC). This simulation is performed to test filters that are created in GOCWB. Thus, GOCWB can be used as a stand-alone ATR software tool or in combination with GOC hardware for building (target training), testing, and optimization of filters. The software is divided into three main parts, denoted filter, testing, and training. The training part is used for assembling training images as input to a filter. The filter part is used for combining training images into a filter and optimizing that filter. The testing part is used for testing new filters and for general simulation of GOC output. The current version of GOCWB relies on the mathematical software tools from MATLAB binaries for performing matrix operations and fast Fourier transforms. Optimization of filters is based on an algorithm, known as OT-MACH, in which variables specified by the user are parameterized and the best filter is selected on the basis of an average result for correct identification of targets in multiple test images.
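GOCWB simulates correlation-plane output; as a toy spatial-domain stand-in (the real GOC forms the product in the Fourier plane, and OT-MACH tunes the filter spectrum, which this sketch does not attempt), the code below slides a filter over a scene and reports the peak-response location. All names are illustrative assumptions.

```python
def correlation_peak(scene, filt):
    """Cross-correlate a small filter with a scene and return the
    (row, col) offset of the strongest response, i.e. the likely target."""
    sh, sw = len(scene), len(scene[0])
    fh, fw = len(filt), len(filt[0])
    best, best_pos = None, None
    for y in range(sh - fh + 1):
        for x in range(sw - fw + 1):
            score = sum(scene[y + i][x + j] * filt[i][j]
                        for i in range(fh) for j in range(fw))
            if best is None or score > best:
                best, best_pos = score, (y, x)
    return best_pos
```

In practice the same product is computed for whole images at once via FFTs (as GOCWB does through MATLAB binaries), and the filter is an optimized average over many training images rather than a single template.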

  10. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M. (Los Alamos National Lab., NM (United States)); Hopper, T. (Federal Bureau of Investigation, Washington, DC (United States))

    1993-01-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  11. Ultrasound estimates of muscle quality in older adults: reliability and comparison of Photoshop and ImageJ for the grayscale analysis of muscle echogenicity

    Directory of Open Access Journals (Sweden)

    Michael O. Harris-Love

    2016-02-01

    Full Text Available Background. Quantitative diagnostic ultrasound imaging has been proposed as a method of estimating muscle quality using measures of echogenicity. The Rectangular Marquee Tool (RMT) and the Free Hand Tool (FHT) are two types of editing features used in Photoshop and ImageJ for determining a region of interest (ROI) within an ultrasound image. The primary objective of this study is to determine the intrarater and interrater reliability of Photoshop and ImageJ for the estimate of muscle tissue echogenicity in older adults via grayscale histogram analysis. The secondary objective is to compare the mean grayscale values obtained using both the RMT and FHT methods across both image analysis platforms. Methods. This cross-sectional observational study features 18 community-dwelling men (age = 61.5 ± 2.32 years). Longitudinal views of the rectus femoris were captured using B-mode ultrasound. The ROI for each scan was selected by 2 examiners using the RMT and FHT methods from each software program. Their reliability is assessed using intraclass correlation coefficients (ICCs) and the standard error of the measurement (SEM). Measurement agreement for these values is depicted using Bland-Altman plots. A paired t-test is used to determine mean differences in echogenicity expressed as grayscale values using the RMT and FHT methods to select the post-image acquisition ROI. The degree of association among ROI selection methods and image analysis platforms is analyzed using the coefficient of determination (R2). Results. The raters demonstrated excellent intrarater and interrater reliability using the RMT and FHT methods across both platforms (lower bound 95% CI ICC = .97–.99, p < .001). Mean differences between the echogenicity estimates obtained with the RMT and FHT methods were .87 grayscale levels (95% CI [.54–1.21], p < .0001) using data obtained with both programs. The SEM for Photoshop was .97 and 1.05 grayscale levels when using the RMT and FHT ROI selection methods, respectively.

  12. Ultrasound estimates of muscle quality in older adults: reliability and comparison of Photoshop and ImageJ for the grayscale analysis of muscle echogenicity.

    Science.gov (United States)

    Harris-Love, Michael O; Seamon, Bryant A; Teixeira, Carla; Ismail, Catheeja

    2016-01-01

    Background. Quantitative diagnostic ultrasound imaging has been proposed as a method of estimating muscle quality using measures of echogenicity. The Rectangular Marquee Tool (RMT) and the Free Hand Tool (FHT) are two types of editing features used in Photoshop and ImageJ for determining a region of interest (ROI) within an ultrasound image. The primary objective of this study is to determine the intrarater and interrater reliability of Photoshop and ImageJ for the estimate of muscle tissue echogenicity in older adults via grayscale histogram analysis. The secondary objective is to compare the mean grayscale values obtained using both the RMT and FHT methods across both image analysis platforms. Methods. This cross-sectional observational study features 18 community-dwelling men (age = 61.5 ± 2.32 years). Longitudinal views of the rectus femoris were captured using B-mode ultrasound. The ROI for each scan was selected by 2 examiners using the RMT and FHT methods from each software program. Their reliability is assessed using intraclass correlation coefficients (ICCs) and the standard error of the measurement (SEM). Measurement agreement for these values is depicted using Bland-Altman plots. A paired t-test is used to determine mean differences in echogenicity expressed as grayscale values using the RMT and FHT methods to select the post-image acquisition ROI. The degree of association among ROI selection methods and image analysis platforms is analyzed using the coefficient of determination (R2). Results. The raters demonstrated excellent intrarater and interrater reliability using the RMT and FHT methods across both platforms (lower bound 95% CI ICC = .97-.99, p < .001). Mean differences between the echogenicity estimates obtained with the RMT and FHT methods were .87 grayscale levels (95% CI [.54-1.21], p < .0001) using data obtained with both programs. The SEM for Photoshop was .97 and 1.05 grayscale levels when using the RMT and FHT ROI selection methods, respectively. Comparatively, the SEM values were .72 and .81 grayscale levels, respectively, when using the RMT and FHT ROI selection methods in ImageJ. Uniform coefficients of determination (R2 = .96
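The echogenicity estimate both programs report for a rectangular marquee selection is simply the mean gray level inside the ROI; a minimal sketch follows (the function name and row-major image layout are assumptions).

```python
def roi_mean_gray(image, top, left, height, width):
    """Mean grayscale value inside a rectangular ROI - the echogenicity
    estimate reported for a rectangular-marquee selection."""
    vals = [image[y][x]
            for y in range(top, top + height)
            for x in range(left, left + width)]
    return sum(vals) / len(vals)
```

A free-hand ROI differs only in which pixel set is averaged, which is why the study's RMT-vs-FHT differences are small (under one grayscale level on average).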

  13. A Parallel Algorithm for Connected Component Labelling of Gray-scale Images on Homogeneous Multicore Architectures

    International Nuclear Information System (INIS)

    Niknam, Mehdi; Thulasiraman, Parimala; Camorlinga, Sergio

    2010-01-01

    Connected component labelling is an essential step in image processing. We provide a parallel version of Suzuki's sequential connected component algorithm in order to speed up the labelling process. We also modify the algorithm to enable labelling of gray-scale images. Due to the data dependencies in the algorithm, we use a pipeline-like method to exploit parallelism. The parallel algorithm achieved a speedup of 2.5 for an image size of 256 x 256 pixels using 4 processing threads.
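As a sequential baseline for the labelling task, the sketch below is a generic two-pass, union-find labeller over thresholded gray levels; it follows the classic two-scan scheme rather than Suzuki's exact scan order, and the threshold-based foreground test is an assumption of this sketch.

```python
def label_components(image, threshold):
    """Two-pass, 4-connected labelling of pixels brighter than `threshold`.
    Returns (component count, label grid)."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    parent = [0]          # union-find forest over provisional labels

    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a

    # First pass: assign provisional labels, recording equivalences.
    for y in range(h):
        for x in range(w):
            if image[y][x] <= threshold:
                continue
            up = labels[y - 1][x] if y else 0
            left = labels[y][x - 1] if x else 0
            if up and left:                      # merge two touching runs
                ru, rl = find(up), find(left)
                root = min(ru, rl)
                parent[ru] = parent[rl] = root
                labels[y][x] = root
            elif up or left:
                labels[y][x] = up or left
            else:                                # start a new component
                parent.append(len(parent))
                labels[y][x] = len(parent) - 1

    # Second pass: collapse provisional labels to their roots.
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return len({find(a) for a in range(1, len(parent))}), labels
```

The row-to-row data dependency visible in the first pass is exactly what forces the paper's pipeline-style parallelization rather than a naive split of the image into independent tiles.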

  14. Color-to-grayscale conversion through weighted multiresolution channel fusion

    NARCIS (Netherlands)

    Wu, T.; Toet, A.

    2014-01-01

    We present a color-to-gray conversion algorithm that retains both the overall appearance and the discriminability of details of the input color image. The algorithm employs a weighted pyramid image fusion scheme to blend the R, G, and B color channels of the input image into a single grayscale image.

  15. Steganalysis Techniques for Documents and Images

    Science.gov (United States)

    2005-05-01

    steganography. We then illustrated the efficacy of our model using variations of LSB steganography. For binary images, we have made significant progress in ... efforts have focused on two areas. The first area is LSB steganalysis for grayscale images. Here, as we had proposed (as a challenging task), we have ... generalized our previous steganalysis technique of sample pair analysis to a theoretical framework for the detection of LSB steganography. The new

  16. High-dynamic range compressive spectral imaging by grayscale coded aperture adaptive filtering

    Directory of Open Access Journals (Sweden)

    Nelson Eduardo Diaz

    2015-09-01

    Full Text Available The coded aperture snapshot spectral imaging system (CASSI) is an imaging architecture which senses the three-dimensional information of a scene with two-dimensional (2D) focal plane array (FPA) coded projection measurements. A reconstruction algorithm takes advantage of the sparsity of the compressive measurements to recover the underlying 3D data cube. Traditionally, CASSI uses block-unblock coded apertures (BCA) to spatially modulate the light. In CASSI the quality of the reconstructed images depends on the design of these coded apertures and the FPA dynamic range. This work presents a new CASSI architecture based on grayscale coded apertures (GCA) which reduce FPA saturation and increase the dynamic range of the reconstructed images. The set of GCA is calculated in a real-time, adaptive manner, exploiting the information from the FPA compressive measurements. Extensive simulations show the attained improvement in the quality of the reconstructed images when GCA are employed. In addition, a comparison between traditional coded apertures and GCA is realized with respect to noise tolerance.

  17. Scanning technology selection impacts acceptability and usefulness of image-rich content

    Directory of Open Access Journals (Sweden)

    Kristine M. Alpi

    2016-01-01

    Full Text Available Objective: Clinical and research usefulness of articles can depend on image quality. This study addressed whether scans of figures in black and white (B&W), grayscale, or color, or portable document format (PDF) to tagged image file format (TIFF) conversions, as provided by interlibrary loan or document delivery, were viewed as acceptable or useful by radiologists or pathologists. Methods: Residency coordinators selected eighteen figures from studies from radiology, clinical pathology, and anatomic pathology journals. With original PDF controls, each figure was prepared in three or four experimental conditions: PDF conversion to TIFF, and scans from print in B&W, grayscale, and color. Twelve independent observers indicated whether they could identify the features and whether the image quality was acceptable. They also ranked all the experimental conditions of each figure in terms of usefulness. Results: Of 982 assessments of 87 anatomic pathology, 83 clinical pathology, and 77 radiology images, 471 (48%) were unidentifiable. Unidentifiability of originals (4%) and conversions (10%) was low. For scans, unidentifiability ranged from 53% for color, to 74% for grayscale, to 97% for B&W. Of 987 responses about acceptability, 405 (41%) were said to be unacceptable: 97% of B&W, 66% of grayscale, 41% of color, and 1% of conversions. The hypothesized order (original, conversion, color, grayscale, B&W) matched 67% of rankings (n = 215). Conclusions: PDF to TIFF conversion provided acceptable content. Color images are rarely useful in grayscale (12%) or B&W (less than 1%). Acceptability of grayscale scans of noncolor originals was 52%. Digital originals are needed for most images. Print images in color or grayscale should be scanned using those modalities.

  18. Page segmentation and text extraction from gray-scale images in microfilm format

    Science.gov (United States)

    Yuan, Qing; Tan, Chew Lim

    2000-12-01

    This paper deals with a system designed to separate textual regions from graphics regions and to locate textual data within textured backgrounds. We present a method based on edge detection to automatically locate text in noise-infected grayscale newspaper images in microfilm format. The algorithm first finds the appropriate edges of a textual region using the Canny edge detector; then, by edge merging, it makes use of edge features to perform block segmentation and classification; afterwards, feature-aided connected component analysis is used to group homogeneous textual regions together within the scope of their bounding boxes. We obtain efficient block segmentation with reduced memory size by introducing the TLC. The proposed method has been used to locate text in a group of newspaper images with multiple page layouts. Initial results are encouraging; we plan to expand the experimental data to over 300 microfilm images with different layout structures, and promising results are anticipated with corresponding modifications to the prototype algorithm to make it more robust and suitable for different cases.

  19. Multifractal Scaling of Grayscale Patterns: Lacunarity and Correlation Dimension

    Science.gov (United States)

    Roy, A.; Perfect, E.

    2012-12-01

    While fractal models can characterize self-similarity in binary fields, comprised solely of 0's and 1's, the concept of multifractals is needed to quantify scaling behavior in non-binary grayscale fields made up of fractional values. Multifractals are characterized by a spectrum of non-integer dimensions, Dq (-∞ < q < +∞) instead of a single fractal dimension. The gliding-box algorithm is sometimes employed to estimate these different dimensions. This algorithm is also commonly used for computing another parameter, lacunarity, L, which characterizes the distribution of gaps or spaces in patterns, fractals, multifractals or otherwise, as a function of scale (or box-size, x). In the case of 2-dimensional multifractal fields, L has been shown to be theoretically related to the correlation dimension, D2, by dlog(L)/dlog(x) = D2 - 2. Therefore, it is hypothesized that lacunarity analysis can help in delineating multifractal behavior in grayscale patterns. In testing this hypothesis, a set of 2-dimensional multifractal grayscale patterns was generated with known D2 values, and then analyzed for lacunarity by employing the gliding-box algorithm. The D2 values computed using this analysis gave a 1:1 relationship with the known D2 values, thus empirically validating the theoretical relationship between L and D2. Lacunarity analysis was further used to evaluate the multifractal nature of natural grayscale images in the form of soil thin sections that had been previously classified as multifractals based on the standard box counting method. The results indicated that lacunarity analysis is a more sensitive indicator of multifractal behavior in natural grayscale patterns than the box counting approach. A weighted mean of the log-transformed lacunarity values at different scales was employed for differentiating between grayscale patterns with various degrees of scale dependent clustering attributes. This new measure, which expresses lacunarity as a single number, should
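The gliding-box computation behind this analysis can be sketched directly: for box size x, lacunarity is the second moment of the box masses divided by the squared first moment, and the slope of log L against log x estimates D2 - 2 per the relation quoted above. The function name and square-grid assumption are illustrative.

```python
def lacunarity(grid, box):
    """Gliding-box lacunarity of a 2-D grayscale array: second moment of
    the box masses over the squared first moment, for one box size."""
    n = len(grid)
    masses = []
    for y in range(n - box + 1):
        for x in range(n - box + 1):          # glide the box one pixel at a time
            masses.append(sum(grid[y + i][x + j]
                              for i in range(box) for j in range(box)))
    m1 = sum(masses) / len(masses)
    m2 = sum(m * m for m in masses) / len(masses)
    return m2 / (m1 * m1)
```

A perfectly uniform field gives L = 1 at every box size; clustered (gappy) fields give L > 1, and how L decays with box size is what carries the multifractal information.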

  20. Realization of the FPGA-based reconfigurable computing environment by the example of morphological processing of a grayscale image

    Science.gov (United States)

    Shatravin, V.; Shashev, D. V.

    2018-05-01

    Currently, robots are increasingly being used in every industry. One of the most high-tech areas is the creation of completely autonomous robotic devices, including vehicles. Results of research worldwide demonstrate the effectiveness of vision systems in autonomous robotic devices. However, the use of these systems is limited by the computational and energy resources available in the robotic device. The paper describes the results of applying an original approach to image processing on reconfigurable computing environments, using the example of morphological operations over grayscale images. This approach is promising for realizing complex image-processing algorithms and real-time image analysis in autonomous robotic devices.
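A reference implementation of one such morphological operation, grayscale erosion with a flat structuring element, might look like the following; on the reconfigurable environment the same neighbourhood minimum would be computed by all cells in parallel rather than pixel by pixel. The function name and the clamped-border policy are assumptions of this sketch.

```python
def erode(image, size=3):
    """Grayscale erosion with a size x size flat structuring element:
    each output pixel is the minimum over its neighbourhood (borders clamped)."""
    h, w = len(image), len(image[0])
    r = size // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = min(image[ny][nx]
                            for ny in range(max(0, y - r), min(h, y + r + 1))
                            for nx in range(max(0, x - r), min(w, x + r + 1)))
    return out
```

Dilation is the same loop with `max` in place of `min`; composing the two gives the grayscale opening and closing used in practical pipelines.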

  1. Conventional vs  invert-grayscale X-ray for diagnosis of pneumothorax in the emergency setting.

    Science.gov (United States)

    Musalar, Ekrem; Ekinci, Salih; Ünek, Orkun; Arş, Eda; Eren, Hakan Şevki; Gürses, Bengi; Aktaş, Can

    2017-09-01

    Pneumothorax is a pathologic condition in which air accumulates between the visceral and parietal pleura. After clinical suspicion, imaging is necessary to diagnose the severity of the condition. With the help of Picture Archiving and Communication Systems (PACS), direct conventional X-rays can be converted to inverted gray-scale, and this has become a preferred method among many physicians. Our study was a case-control study with a cross-over design. Posterior-anterior chest X-rays of patients were evaluated for pneumothorax by 10 expert physicians with at least 3 years of experience who have used inverted gray-scale posterior-anterior chest X-rays for diagnosing pneumothorax. The study included posterior-anterior chest X-ray images of 268 patients, of which 106 were diagnosed with spontaneous pneumothorax and 162 served as a control group. The sensitivity of digital-conventional X-rays was found to be higher than that of inverted gray-scale images (95% CI 2.08-5.04, p < 0.001) for the diagnosis of pneumothorax. Prospective studies should be performed where the diagnostic potency of inverted gray-scale radiograms is tested against the gold standard, chest CT. Further research should compare inverted grayscale to lung ultrasound to assess them as alternatives prior to CT. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Experimental investigation of distinguishable and non-distinguishable grayscales applicable in active-matrix organic light-emitting diodes for quality engineering

    Science.gov (United States)

    Yang, Henglong; Chang, Wen-Cheng; Lin, Yu-Hsuan; Chen, Ming-Hong

    2017-08-01

    The distinguishable and non-distinguishable 6-bit (64) grayscales of green and red organic light-emitting diodes (OLEDs) were experimentally investigated using a high-sensitivity photometric instrument. The feasibility of combining an external detection system for quality engineering to compensate for grayscale loss based on preset grayscale tables was also investigated by SPICE simulation. The degradation loss of an OLED deeply affects image quality as grayscales become inaccurate. Distinguishable grayscales are those whose brightness differences and corresponding current increments are differentiable by the instrument. The grayscales of an OLED at 8-bit (256) or higher may become non-distinguishable, as the current or voltage increments are of the same order as the noise level in the circuitry. Distinguishable grayscale tables for individual red, green, blue, and white colors can be experimentally established as a preset reference for quality engineering (QE), in which the degradation loss is compensated by the corresponding grayscale numbers shown in the preset table. The degradation loss of each OLED color is quantifiable by comparing voltage increments to those in the preset grayscale table, if precise voltage increments are detectable during operation. The QE of an AMOLED can be accomplished by applying updated grayscale tables. Our preliminary simulation results revealed that it is feasible to quantify degradation loss in terms of grayscale numbers by using external detector circuitry.

  3. Defect sizing of post-irradiated nuclear fuels using grayscale thresholding in their radiographic images

    International Nuclear Information System (INIS)

    Chaudhary, Usman Khurshid; Iqbal, Masood; Ahmad, Munir

    2010-01-01

    Quantification of different types of material defects in a number of reference-standard post-irradiated nuclear fuel image samples has been carried out by developing a computer program that takes radiographic images of the fuel as input. The program is based on user-adjustable grayscale thresholding in the regime of image segmentation, whereby it selects and counts the pixels having gray-level values less than or equal to the computed threshold. It can size defects due to chipping in nuclear fuel, cracks, voids, melting, deformation, inclusion of foreign materials, heavy isotope accumulation, non-uniformity, etc. The classes of fuel range from those of research and power reactors to fast breeders, and from pellets to annular and vibro-compacted fuel. The program has been validated against ground-truth data from locally fabricated metallic plates with drilled holes of known sizes simulating defects, and the results indicate that it either correctly selects and quantifies at least 94% of the actual required regions of interest in a given image or gives less than an 8.1% false-alarm rate. The developed program is also independent of image size.
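The segmentation step this program performs — select and count pixels at or below a gray-level threshold, then convert the count to an area — can be sketched in a few lines; the function name and the per-pixel area parameter are assumptions of this sketch.

```python
def defect_area(image, threshold, pixel_area=1.0):
    """Count pixels at or below the gray-level threshold (candidate defect
    pixels on a radiograph) and scale the count to a physical area."""
    count = sum(1 for row in image for v in row if v <= threshold)
    return count * pixel_area
```

Validation against plates with drilled holes of known diameter then amounts to comparing `defect_area` output with the holes' true areas across a range of thresholds.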

  4. Defect sizing of post-irradiated nuclear fuels using grayscale thresholding in their radiographic images

    Energy Technology Data Exchange (ETDEWEB)

    Chaudhary, Usman Khurshid, E-mail: ukhurshid@hotmail.co [Department of Physics and Applied Mathematics, Pakistan Institute of Engineering and Applied Sciences, P.O. Nilore, Islamabad 45650 (Pakistan); Iqbal, Masood, E-mail: masiqbal@hotmail.co [Nuclear Engineering Division, Pakistan Institute of Nuclear Science and Technology, P.O. Nilore, Islamabad 45650 (Pakistan); Ahmad, Munir [Nondestructive Testing Group, Directorate of Technology, Pakistan Institute of Nuclear Science and Technology, P.O. Nilore, Islamabad 45650 (Pakistan)

    2010-10-15

    Quantification of different types of material defects in a number of reference standard post-irradiated nuclear fuel image samples has been carried out by developing a computer program that takes radiographic images of the fuel as input. The program is based on user-adjustable grayscale thresholding for image segmentation, whereby it selects and counts the pixels whose gray-level values are less than or equal to the computed threshold. It can size defects due to chipping of the nuclear fuel, cracks, voids, melting, deformation, inclusion of foreign materials, heavy-isotope accumulation, non-uniformity, etc. The classes of fuel range from those of research and power reactors to fast breeders, and from pellets to annular and vibro-compacted fuel. The program has been validated against ground-truth measurements of locally fabricated metallic plates with drilled holes of known sizes simulating defects; the results indicate that it correctly selects and quantifies at least 94% of the actual regions of interest in a given image and gives a false-alarm rate of less than 8.1%. The developed program is also independent of image size.
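The pixel-selection step described above amounts to thresholding followed by connected-component sizing. A minimal sketch of that idea (the function name and synthetic data are mine, not from the paper):

```python
import numpy as np
from scipy import ndimage

def size_defects(image, threshold):
    """Select pixels at or below a grayscale threshold and size each
    connected defect region (areas in pixels)."""
    mask = image <= threshold               # candidate defect pixels
    labels, n = ndimage.label(mask)         # default cross-connectivity
    areas = ndimage.sum(mask, labels, index=range(1, n + 1))
    return mask.sum(), areas

# Synthetic "radiograph": bright fuel with two dark drilled holes
img = np.full((20, 20), 200, dtype=np.uint8)
img[2:5, 2:5] = 50      # 3x3 defect
img[10:12, 10:14] = 40  # 2x4 defect
count, areas = size_defects(img, threshold=100)
print(int(count), sorted(areas))  # 17 defect pixels in regions of areas 8 and 9
```

Region areas in pixels can be converted to physical defect sizes once the image scale is known.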

  5. Extending Ripley's K-Function to Quantify Aggregation in 2-D Grayscale Images.

    Directory of Open Access Journals (Sweden)

    Mohamed Amgad

    Full Text Available In this work, we describe the extension of Ripley's K-function to allow for overlapping events at very high event densities. We show that problematic edge effects introduce significant bias to the function at very high densities and small radii, and propose a simple correction method that successfully restores the function's centralization. Using simulations of homogeneous Poisson distributions of events, as well as simulations of event clustering under different conditions, we investigate various aspects of the function, including its shape-dependence and the correspondence between the true cluster radius and the radius at which the K-function is maximized. Furthermore, we validate the utility of the function in quantifying clustering in 2-D grayscale images using three modalities: (i) simulations of particle clustering; (ii) experimental co-expression of soluble and diffuse protein at varying ratios; (iii) quantifying chromatin clustering in the nuclei of wt and crwn1 crwn2 mutant Arabidopsis plant cells, using a previously-published image dataset. Overall, our work shows that Ripley's K-function is a valid abstract statistical measure whose utility extends beyond the quantification of clustering of non-overlapping events. Potential benefits of this work include the quantification of protein and chromatin aggregation in fluorescent microscopic images. Furthermore, this function has the potential to become one of various abstract texture descriptors that are utilized in computer-assisted diagnostics in anatomic pathology and diagnostic radiology.
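For reference, the classical uncorrected estimator that this work extends can be sketched as below. Under complete spatial randomness, K(r) approaches pi * r^2, and the missing edge correction is precisely what biases the estimate downward near the boundary (this sketch is the textbook estimator, not the paper's overlap-tolerant extension):

```python
import numpy as np

def ripley_k(points, r, area):
    """Naive Ripley's K estimate (no edge correction): number of ordered
    pairs within radius r, scaled by the squared point density."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-pairs
    return area * np.sum(d <= r) / (n * n)

# Homogeneous Poisson-like pattern in a 100 x 100 window
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(2000, 2))
k = ripley_k(pts, r=5.0, area=100 * 100)
print(k)  # close to pi * 25 = 78.5, somewhat less without edge correction
```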

  6. How grayscale influences consumers’ perception of product personality

    NARCIS (Netherlands)

    Hung, W.-K.; Chen, Y.; Chen, L.

    Two studies were conducted to reveal the effect of grayscale on consumers' perception of product personality. In combination with two critical design elements, shape and surface finishing, we examined the net impact of grayscale on one particular dimension: business-like personality. In Study 1, eighteen cases

  7. Computer-aided mass detection in mammography: False positive reduction via gray-scale invariant ranklet texture features

    International Nuclear Information System (INIS)

    Masotti, Matteo; Lanconelli, Nico; Campanini, Renato

    2009-01-01

    In this work, gray-scale invariant ranklet texture features are proposed for false positive reduction (FPR) in computer-aided detection (CAD) of breast masses. Two main considerations are at the basis of this proposal. First, false positive (FP) marks surviving our previous CAD system seem to be characterized by specific texture properties that can be used to discriminate them from masses. Second, our previous CAD system achieves invariance to linear/nonlinear monotonic gray-scale transformations by encoding regions of interest into ranklet images through the ranklet transform, an image transformation similar to the wavelet transform, yet dealing with pixels' ranks rather than with their gray-scale values. Therefore, the new FPR approach proposed herein defines a set of texture features which are calculated directly from the ranklet images corresponding to the regions of interest surviving our previous CAD system, hence, ranklet texture features; then, a support vector machine (SVM) classifier is used for discrimination. As a result of this approach, texture-based information is used to discriminate FP marks surviving our previous CAD system; at the same time, invariance to linear/nonlinear monotonic gray-scale transformations of the new CAD system is guaranteed, as ranklet texture features are calculated from ranklet images that have this property themselves by construction. To emphasize the gray-scale invariance of both the previous and new CAD systems, training and testing are carried out without any in-between parameters' adjustment on mammograms having different gray-scale dynamics; in particular, training is carried out on analog digitized mammograms taken from a publicly available digital database, whereas testing is performed on full-field digital mammograms taken from an in-house database. Free-response receiver operating characteristic (FROC) curve analysis of the two CAD systems demonstrates that the new approach achieves a higher reduction of FP marks
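The gray-scale invariance the authors rely on comes from replacing pixel values with their ranks before computing features: any monotonic transformation preserves pixel ordering, hence the ranks. A toy illustration of just that rank step (the full ranklet transform additionally uses Haar-wavelet-like supports, omitted here):

```python
import numpy as np
from scipy.stats import rankdata

def rank_image(img):
    """Replace each pixel by its rank within the region of interest;
    ranks are invariant to any monotonic gray-scale transformation."""
    return rankdata(img.ravel(), method='average').reshape(img.shape)

roi = np.array([[10, 50], [200, 50]], dtype=float)
gamma = np.sqrt(roi / 255.0) * 255.0    # a nonlinear but monotonic transform
print(np.allclose(rank_image(roi), rank_image(gamma)))  # True
```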

  8. A New Binarization Algorithm for Historical Documents

    Directory of Open Access Journals (Sweden)

    Marcos Almeida

    2018-01-01

    Full Text Available Monochromatic documents demand far less network bandwidth and storage space than their color or even grayscale equivalents. The binarization of historical documents is far more complex than that of recent ones, as paper aging, color, texture, translucidity, stains, back-to-front interference, the kind and color of ink used in handwriting, the printing process, the digitization process, etc. are some of the factors that affect binarization. This article presents a new binarization algorithm for historical documents. The proposed global filter is performed in four steps: filtering the image using a bilateral filter; splitting the image into its RGB components; making a decision for each RGB channel based on an adaptive binarization method inspired by Otsu's method, with a choice of the threshold level; and classifying the binarized images to decide which of the RGB components best preserved the document information in the foreground. Quantitative and qualitative assessment against 23 binarization algorithms on three sets of "real world" documents showed very good results.
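The per-channel decision step is inspired by Otsu's method, which picks the threshold maximizing the between-class variance of the gray-level histogram. A plain-Otsu sketch (the bilateral pre-filter and the RGB-channel selection of the paper are omitted):

```python
import numpy as np

def otsu_threshold(channel):
    """Otsu's method: choose the threshold maximizing between-class variance."""
    hist, _ = np.histogram(channel, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2    # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Bimodal channel: dark ink around 40, bright paper around 220
channel = np.concatenate([np.full(500, 40), np.full(1500, 220)])
t = otsu_threshold(channel)
print(40 < t <= 220)  # True: the threshold separates the two modes
```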

  9. Encryption of QR code and grayscale image in interference-based scheme with high quality retrieval and silhouette problem removal

    Science.gov (United States)

    Qin, Yi; Wang, Hongjuan; Wang, Zhipeng; Gong, Qiong; Wang, Danchen

    2016-09-01

    In optical interference-based encryption (IBE) schemes, the currently available methods have to employ iterative algorithms in order to encrypt two images and retrieve cross-talk-free decrypted images. In this paper, we show that this goal can be achieved via an analytical process if one of the two images is a QR code. For decryption, the QR code is decrypted in the conventional architecture, and the decryption has a noisy appearance. Nevertheless, the robustness of QR codes against noise enables accurate acquisition of the content from the noisy retrieval, as a result of which the primary QR code can be exactly regenerated. Thereafter, a novel optical architecture is proposed to recover the grayscale image with the aid of the QR code. In addition, the proposal totally eliminates the silhouette problem existing in previous IBE schemes, and its effectiveness and feasibility have been demonstrated by numerical simulations.

  10. Generation of Customizable Micro-wavy Pattern through Grayscale Direct Image Lithography.

    Science.gov (United States)

    He, Ran; Wang, Shunqiang; Andrews, Geoffrey; Shi, Wentao; Liu, Yaling

    2016-02-23

    With the increasing amount of research work in surface studies, a more effective method of producing patterned microstructures is highly desired due to the geometric limitations and complex fabricating process of current techniques. This paper presents an efficient and cost-effective method to generate customizable micro-wavy pattern using direct image lithography. This method utilizes a grayscale Gaussian distribution effect to model inaccuracies inherent in the polymerization process, which are normally regarded as trivial matters or errors. The measured surface profiles and the mathematical prediction show a good agreement, demonstrating the ability of this method to generate wavy patterns with precisely controlled features. An accurate pattern can be generated with customizable parameters (wavelength, amplitude, wave shape, pattern profile, and overall dimension). This mask-free photolithography approach provides a rapid fabrication method that is capable of generating complex and non-uniform 3D wavy patterns with the wavelength ranging from 12 μm to 2100 μm and an amplitude-to-wavelength ratio as large as 300%. Microfluidic devices with pure wavy and wavy-herringbone patterns suitable for capture of circulating tumor cells are made as a demonstrative application. A completely customized microfluidic device with wavy patterns can be created within a few hours without access to clean room or commercial photolithography equipment.

  12. Integrated system for automated financial document processing

    Science.gov (United States)

    Hassanein, Khaled S.; Wesolkowski, Slawo; Higgins, Ray; Crabtree, Ralph; Peng, Antai

    1997-02-01

    A system was developed that integrates intelligent document analysis with multiple character/numeral recognition engines in order to achieve high accuracy automated financial document processing. In this system, images are accepted in both their grayscale and binary formats. A document analysis module starts by extracting essential features from the document to help identify its type (e.g. personal check, business check, etc.). These features are also utilized to conduct a full analysis of the image to determine the location of interesting zones such as the courtesy amount and the legal amount. These fields are then made available to several recognition knowledge sources such as courtesy amount recognition engines and legal amount recognition engines through a blackboard architecture. This architecture allows all the available knowledge sources to contribute incrementally and opportunistically to the solution of the given recognition query. Performance results on a test set of machine printed business checks using the integrated system are also reported.

  13. Adaptive and non-adaptive data hiding methods for grayscale images based on modulus function

    Directory of Open Access Journals (Sweden)

    Najme Maleki

    2014-07-01

    Full Text Available This paper presents two data hiding methods, one adaptive and one non-adaptive, for grayscale images based on the modulus function. Our adaptive scheme is based on the concept of human visual sensitivity: pixels in edge areas can tolerate many more changes than pixels in smooth areas without visible distortion to the human eye. In our adaptive scheme, the average difference of the four neighborhood pixels in a block, compared against a threshold secret key, determines whether the current block is located in an edge or a smooth area. Pixels in edge areas are embedded with Q bits of secret data, with a larger value of Q than for pixels placed in smooth areas. We also present a non-adaptive data hiding algorithm that, via an error reduction procedure, produces high visual quality for the stego-image. The proposed schemes offer several advantages: (1) the embedding capacity and the visual quality of the stego-image are scalable, i.e., the embedding rate as well as the image quality can be scaled for practical applications; (2) high embedding capacity with minimal visual distortion can be achieved; (3) our methods require little memory space for the secret data embedding and extracting phases; (4) secret keys are used to protect the embedded secret data, so the level of security is high; (5) the problem of overflow or underflow does not occur. Experimental results indicate that the proposed adaptive scheme is significantly superior to the currently existing scheme in terms of stego-image visual quality, embedding capacity and level of security, and that our non-adaptive method is better than other non-adaptive methods in view of stego-image quality. Results also show that our adaptive algorithm can resist the RS steganalysis attack.
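The basic modulus-function embedding that such schemes build on can be sketched as follows. This is a deliberately simplified, non-adaptive version of my own; the paper's schemes add edge/smooth adaptivity, an error reduction procedure, and proper overflow/underflow handling:

```python
def embed(pixel, secret, q):
    """Hide a q-bit secret digit in one pixel via the modulus function:
    replace (pixel mod 2^q) with the secret value."""
    m = 1 << q
    stego = pixel - (pixel % m) + secret
    # A real scheme would also shift by +-m to minimize distortion and
    # to avoid overflow/underflow; clamping here is a simplification.
    return min(max(stego, 0), 255)

def extract(stego, q):
    """Recover the hidden digit from the stego pixel."""
    return stego % (1 << q)

pixel, secret = 157, 2          # hide the 3-bit value 2
s = embed(pixel, secret, q=3)
print(s, extract(s, q=3))       # 154 2: small distortion, exact recovery
```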

  14. Robust super-resolution by fusion of interpolated frames for color and grayscale images

    Directory of Open Access Journals (Sweden)

    Barry Karch

    2015-04-01

    Full Text Available Multi-frame super-resolution (SR) processing seeks to overcome undersampling issues that can lead to undesirable aliasing artifacts. The key to effective multi-frame SR is accurate subpixel inter-frame registration. This accurate registration is challenging when the motion does not obey a simple global translational model and may include local motion. SR processing is further complicated when the camera uses a division-of-focal-plane (DoFP) sensor, such as the Bayer color filter array. Various aspects of these SR challenges have been previously investigated. Fast SR algorithms tend to have difficulty accommodating complex motion and DoFP sensors. Furthermore, methods that can tolerate these complexities tend to be iterative in nature and may not be amenable to real-time processing. In this paper, we present a new fast approach for performing SR in the presence of these challenging imaging conditions. We refer to the new approach as Fusion of Interpolated Frames (FIF) SR. The FIF SR method decouples the demosaicing, interpolation, and restoration steps to simplify the algorithm. Frames are first individually demosaiced and interpolated to the desired resolution. Next, FIF uses a novel weighted sum of the interpolated frames to fuse them into an improved resolution estimate. Finally, restoration is applied to deconvolve the modeled system PSF. The proposed FIF approach has a lower computational complexity than most iterative methods, making it a candidate for real-time implementation. We provide a detailed description of the FIF SR method and show experimental results using synthetic and real datasets in both constrained and complex imaging scenarios. The experiments include airborne grayscale imagery and Bayer color array images with affine background motion plus local motion.

  15. Nearest Neighborhood Grayscale Operator for Hardware-Efficient Microscale Texture Extraction

    Directory of Open Access Journals (Sweden)

    Andreas König

    2007-01-01

    Full Text Available First-stage feature computation and data rate reduction play a crucial role in an efficient visual information processing system. Hardware-based first stages usually win out where power consumption, dynamic range, and speed are the issue, but have severe limitations with regard to flexibility. In this paper, the local orientation coding (LOC), a nearest neighborhood grayscale operator, is investigated and enhanced for hardware implementation. The features produced by this operator are easy and fast to compute, compress the salient information contained in an image, and lend themselves naturally to various medium-to-high-level postprocessing methods such as texture segmentation, image decomposition, and feature tracking. An image sensor architecture based on the LOC has been elaborated that combines high dynamic range (HDR) image acquisition, feature computation, and inherent pixel-level ADC in the pixel cells. The mixed-signal design allows for simple readout as digital memory.
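A simplified LOC-style operator: each pixel receives a bit code recording which of its neighbours is brighter. The published operator uses a richer neighbourhood and hardware-oriented comparison details; this 4-neighbour variant is my own sketch of the idea:

```python
import numpy as np

def local_orientation_code(img):
    """Nearest-neighbour grayscale operator: for each interior pixel,
    set one bit per 4-neighbour that is strictly brighter."""
    h, w = img.shape
    code = np.zeros((h, w), dtype=np.uint8)
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # N, S, W, E
    for bit, (dy, dx) in enumerate(offsets):
        # shifted[y, x] == img[y + dy, x + dx] (with wrap-around)
        shifted = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
        code |= ((shifted > img).astype(np.uint8) << bit)
    return code[1:-1, 1:-1]   # drop the wrap-around border

img = np.array([[0, 0, 0],
                [0, 5, 9],
                [0, 0, 0]])
print(local_orientation_code(img))  # [[8]]: only the east neighbour is brighter
```

The resulting per-pixel codes are small integers, which is what makes the operator attractive for in-sensor computation and data rate reduction.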

  16. Document image retrieval through word shape coding.

    Science.gov (United States)

    Lu, Shijian; Li, Linlin; Tan, Chew Lim

    2008-11-01

    This paper presents a document retrieval technique that is capable of searching document images without OCR (optical character recognition). The proposed technique retrieves document images by a new word shape coding scheme, which captures the document content through annotating each word image by a word shape code. In particular, we annotate word images by using a set of topological shape features including character ascenders/descenders, character holes, and character water reservoirs. With the annotated word shape codes, document images can be retrieved by either query keywords or a query document image. Experimental results show that the proposed document image retrieval technique is fast, efficient, and tolerant to various types of document degradation.
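The idea of annotating words by coarse topological classes can be illustrated at the text level. The actual system extracts ascenders/descenders, holes, and water reservoirs from word images; the character-to-class mapping below is a hypothetical stand-in to show how retrieval reduces to code matching:

```python
# Hypothetical shape alphabet (illustrative, not the paper's feature set)
ASCENDERS = set("bdfhklt")
DESCENDERS = set("gjpqy")

def word_shape_code(word):
    """Annotate each character by a coarse topological class:
    'a' ascender, 'd' descender, 'x' x-height."""
    out = []
    for ch in word.lower():
        if ch in ASCENDERS:
            out.append("a")
        elif ch in DESCENDERS:
            out.append("d")
        else:
            out.append("x")
    return "".join(out)

# Retrieval then matches shape codes instead of recognized text
print(word_shape_code("graphs"))  # dxxdax
```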

  17. Document image database indexing with pictorial dictionary

    Science.gov (United States)

    Akbari, Mohammad; Azimi, Reza

    2010-02-01

    In this paper we introduce a new approach for information retrieval from a Persian document image database without using Optical Character Recognition (OCR). First, an attribute called the subword upper contour label is defined; then a pictorial dictionary is constructed for the subwords based on this attribute. With this approach we address two issues in document image retrieval: keyword spotting and retrieval according to document similarity. The proposed methods have been evaluated on a Persian document image database. The results have proved the ability of this approach in document image information retrieval.

  18. Grayscale lithography-automated mask generation for complex three-dimensional topography

    Science.gov (United States)

    Loomis, James; Ratnayake, Dilan; McKenna, Curtis; Walsh, Kevin M.

    2016-01-01

    Grayscale lithography is a relatively underutilized technique that enables fabrication of three-dimensional (3-D) microstructures in photosensitive polymers (photoresists). By spatially modulating ultraviolet (UV) dosage during the writing process, one can vary the depth at which photoresist is developed. This means complex structures and bioinspired designs can readily be produced that would otherwise be cost prohibitive or too time intensive to fabricate. The main barrier to widespread grayscale implementation, however, stems from the laborious generation of mask files required to create complex surface topography. We present a process and associated software utility for automatically generating grayscale mask files from 3-D models created within industry-standard computer-aided design (CAD) suites. By shifting the microelectromechanical systems (MEMS) design onus to commonly used CAD programs ideal for complex surfacing, engineering professionals already familiar with traditional 3-D CAD software can readily utilize their pre-existing skills to make valuable contributions to the MEMS community. Our conversion process is demonstrated by prototyping several samples on a laser pattern generator, capital equipment already in use in many foundries. Finally, an empirical calibration technique is shown that compensates for nonlinear relationships between UV exposure intensity and photoresist development depth, as well as a thermal reflow technique to help smooth microstructure surfaces.

  19. Semantic Document Image Classification Based on Valuable Text Pattern

    Directory of Open Access Journals (Sweden)

    Hossein Pourghassem

    2011-01-01

    Full Text Available Knowledge extraction from detected document images is a complex problem in the field of information technology. The problem becomes more intricate when we know that only a negligible percentage of the detected document images are valuable. In this paper, a segmentation-based classification algorithm is used to analyze the document image. In this algorithm, using a two-stage segmentation approach, regions of the image are detected and then classified as document or non-document (pure region) regions in a hierarchical classification. A novel definition of value is proposed to classify document images into valuable or invaluable categories. The proposed algorithm is evaluated on a database consisting of document and non-document images collected from the Internet. Experimental results show the efficiency of the proposed algorithm for semantic document image classification. The proposed algorithm provides an accuracy rate of 98.8% for the valuable/invaluable document image classification problem.

  20. Quantifying the effect of colorization enhancement on mammogram images

    Science.gov (United States)

    Wojnicki, Paul J.; Uyeda, Elizabeth; Micheli-Tzanakou, Evangelia

    2002-04-01

    Current methods of radiological display provide only grayscale images of mammograms. The limitation of the image space to grayscale provides only luminance differences and textures as cues for object recognition within the image. However, color can be an important and significant cue in the detection of shapes and objects. Increasing detection ability allows the radiologist to interpret the images in more detail, improving object recognition and diagnostic accuracy. Color detection experiments using our stimulus system have demonstrated that an observer can only detect an average of 140 levels of grayscale. An optimally colorized image can allow a user to distinguish 250-1000 different levels, hence increasing potential image feature detection by 2-7 times. By implementing a colorization map that follows the luminance map of the original grayscale image, the luminance profile is preserved and color is isolated as the enhancement mechanism. The effect of this enhancement mechanism on the shape, frequency composition, and statistical characteristics of the Visual Evoked Potential (VEP) is analyzed and presented. Thus, the effectiveness of the image colorization is measured quantitatively using the VEP.
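A colorization map that "follows the luminance map" can be built by holding the luma channel fixed and injecting only chroma, so color varies while brightness does not. A sketch using BT.601 coefficients (the specific chroma values are illustrative, not from the paper):

```python
import numpy as np

def colorize(gray, cb, cr):
    """Map a grayscale image to RGB while preserving luminance:
    hold Y fixed and inject chroma (Cb, Cr) per pixel (BT.601)."""
    y = gray.astype(float)
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.stack([r, g, b], axis=-1)

gray = np.array([[100.0, 180.0]])
rgb = colorize(gray, cb=20.0, cr=-15.0)
# Recompute luma from the colorized result: it matches the original grays
luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
print(np.allclose(luma, gray))  # True
```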

  1. Simple Multi-level Microchannel Fabrication by Pseudo-Grayscale Backside Diffused Light Lithography.

    Science.gov (United States)

    Lai, David; Labuz, Joseph M; Kim, Jiwon; Luker, Gary D; Shikanov, Ariella; Takayama, Shuichi

    2013-11-14

    Photolithography of multi-level channel features in microfluidics is laborious and/or costly. Grayscale photolithography is mostly used with positive photoresists and conventional front side exposure, but the grayscale masks needed are generally costly and positive photoresists are not commonly used in microfluidic rapid prototyping. Here we introduce a simple and inexpensive alternative that uses pseudo-grayscale (pGS) photomasks in combination with backside diffused light lithography (BDLL) and the commonly used negative photoresist, SU-8. BDLL can produce smooth multi-level channels of gradually changing heights without use of true grayscale masks because of the use of diffused light. Since the exposure is done through a glass slide, the photoresist is cross-linked from the substrate side up enabling well-defined and stable structures to be fabricated from even unspun photoresist layers. In addition to providing unique structures and capabilities, the method is compatible with the "garage microfluidics" concept of creating useful tools at low cost since pGS BDLL can be performed with the use of only hot plates and a UV transilluminator: equipment commonly found in biology labs. Expensive spin coaters or collimated UV aligners are not needed. To demonstrate the applicability of pGS BDLL, a variety of weir-type cell traps were constructed with a single UV exposure to separate cancer cells (MDA-MB-231, 10-15 μm in size) from red blood cells (RBCs, 2-8 μm in size) as well as follicle clusters (40-50 μm in size) from cancer cells (MDA-MB-231, 10-15 μm in size).

  2. Robust binarization of degraded document images using heuristics

    Science.gov (United States)

    Parker, Jon; Frieder, Ophir; Frieder, Gideon

    2013-12-01

    Historically significant documents are often discovered with defects that make them difficult to read and analyze. This fact is particularly troublesome if the defects prevent software from performing an automated analysis. Image enhancement methods are used to remove or minimize document defects, improve software performance, and generally make images more legible. We describe an automated image enhancement method that is input-page independent and requires no training data. The approach applies to color or grayscale images with handwritten script, typewritten text, images, and mixtures thereof. We evaluated the image enhancement method against the test images provided by the 2011 Document Image Binarization Contest (DIBCO). Our method outperforms all 2011 DIBCO entrants in terms of average F1 measure, doing so with a significantly lower variance than the top contest entrants. The capability of the proposed method is also illustrated using select images from a collection of historic documents stored at the Yad Vashem Holocaust Memorial in Israel.
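The paper's heuristics are its own, but the flavor of local binarization that DIBCO entrants typically build on can be illustrated with Sauvola's classic formula, a common baseline (not the authors' method):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_binarize(img, window=15, k=0.2, R=128.0):
    """Sauvola local thresholding: T = m * (1 + k * (s / R - 1)),
    where m and s are the local mean and standard deviation."""
    f = img.astype(float)
    m = uniform_filter(f, window)
    s = np.sqrt(np.maximum(uniform_filter(f * f, window) - m * m, 0.0))
    t = m * (1.0 + k * (s / R - 1.0))
    return (f > t).astype(np.uint8)   # 1 = paper/background, 0 = ink

img = np.full((30, 30), 200.0)   # bright paper
img[10:20, 10:20] = 30.0         # dark ink blob
out = sauvola_binarize(img)
print(out[15, 15], out[2, 2])    # 0 1: ink maps to 0, paper to 1
```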

  3. Main Road Extraction from ZY-3 Grayscale Imagery Based on Directional Mathematical Morphology and VGI Prior Knowledge in Urban Areas.

    Science.gov (United States)

    Liu, Bo; Wu, Huayi; Wang, Yandong; Liu, Wenming

    2015-01-01

    Main road features extracted from remotely sensed imagery play an important role in many civilian and military applications, such as updating Geographic Information System (GIS) databases, urban structure analysis, spatial data matching and road navigation. Current methods for road feature extraction from high-resolution imagery are typically based on threshold value segmentation. It is difficult, however, to completely separate road features from the background. We present a new method for extracting main roads from high-resolution grayscale imagery based on directional mathematical morphology and prior knowledge obtained from the Volunteered Geographic Information found in OpenStreetMap. The two salient steps in this strategy are: (1) using directional mathematical morphology to enhance the contrast between roads and non-roads; (2) using OpenStreetMap roads as prior knowledge to segment the remotely sensed imagery. Experiments were conducted on two ZiYuan-3 images and one QuickBird high-resolution grayscale image to compare our proposed method to other commonly used techniques for road feature extraction. The results demonstrated the validity and better performance of the proposed method for urban main road feature extraction.
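Step (1), directional mathematical morphology, enhances elongated bright structures by opening the image with line structuring elements at several orientations and keeping the pixelwise maximum. A sketch (element length and angle set are illustrative choices, not the paper's parameters):

```python
import numpy as np
from scipy.ndimage import grey_opening

def line_footprint(length, angle_deg):
    """Binary line structuring element at a given orientation."""
    fp = np.zeros((length, length), dtype=bool)
    c = length // 2
    t = np.deg2rad(angle_deg)
    for i in range(-c, c + 1):
        y = c + int(round(i * np.sin(t)))
        x = c + int(round(i * np.cos(t)))
        fp[y, x] = True
    return fp

def directional_opening(img, length=9, angles=(0, 45, 90, 135)):
    """Keep bright elongated structures: grayscale opening with line
    elements at several orientations, then the max over orientations."""
    return np.max([grey_opening(img, footprint=line_footprint(length, a))
                   for a in angles], axis=0)

img = np.zeros((21, 21))
img[10, :] = 200        # bright horizontal "road"
img[4, 4] = 200         # isolated bright speck (noise)
out = directional_opening(img)
print(out[10, 10], out[4, 4])  # the road survives; the speck is removed
```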

  6. Document imaging finding niche in petroleum industry

    International Nuclear Information System (INIS)

    Cisco, S.L.

    1992-01-01

    Optical disk-based document imaging systems can reduce operating costs, save office space, and improve access to necessary information for petroleum companies that have extensive records in various formats. These imaging systems help solve document management problems to improve technical and administrative operations. Enron Gas Pipeline Group has installed a document imaging system for engineering applications to integrate records stored on paper, microfilm, or computer-aided drafting (CAD) systems. BP Exploration Inc. recently implemented a document imaging system for administrative applications. The company is evaluating an expansion of the system to include engineering and technical applications. The petroleum industry creates, acquires, distributes, and retrieves enormous amounts of data and information, which are stored on multiple media, including paper, microfilm, and electronic formats. There are two main factors responsible for the immense information storage requirements in the petroleum industry

  7. DOCUMENT IMAGE REGISTRATION FOR IMPOSED LAYER EXTRACTION

    Directory of Open Access Journals (Sweden)

    Surabhi Narayan

    2017-02-01

    Full Text Available Extraction of filled-in information from document images in the presence of a template poses challenges due to geometrical distortion. A filled-in document image consists of a null background, a general-information foreground, and a vital-information imposed layer. A template document image consists of a null background and a general-information foreground layer. In this paper a novel document image registration technique is proposed to extract the imposed layer from an input document image. A convex polygon is constructed around the content of the input and the template image using the convex hull. The vertices of the convex polygons of input and template are paired based on minimum Euclidean distance. Each vertex of the input convex polygon is subjected to transformation for the permutable combinations of rotation and scaling; translation is handled by a tight crop. For every transformation of the input vertices, the Minimum Hausdorff distance (MHD) is computed. The Minimum Hausdorff distance identifies the rotation and scaling values by which the input image should be transformed to align it to the template. Since transformation is an estimation process, the components in the input image do not overlay exactly on the components in the template; therefore a connected component technique is applied to extract contour boxes at word level to identify partially overlapping components. Geometrical features such as density, area and degree of overlapping are extracted and compared between partially overlapping components to identify and eliminate components common to the input image and the template image. The residue constitutes the imposed layer. Experimental results indicate the efficacy of the proposed model at acceptable computational complexity. Experiments have been conducted on a variety of filled-in forms, applications and bank cheques. Data sets have been generated as test sets for comparative analysis.
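
    The core of the matching loop described above — transform the input convex-hull vertices over candidate rotation/scale pairs and keep the pair with minimum Hausdorff distance to the template vertices — can be sketched as follows. The angle/scale grids and the centroid-based handling of translation are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def directed_hausdorff(a, b):
    """Max over points in a of the distance to the nearest point in b."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).max()

def hausdorff(a, b):
    # Symmetric Hausdorff distance between two N-by-2 point sets.
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

def best_rotation_scale(inp, tmpl, angles, scales):
    """Search the permutable (rotation, scale) pairs; return the pair that
    minimizes the Hausdorff distance between the transformed input
    vertices and the template vertices."""
    c = inp.mean(axis=0)               # rotate/scale about the centroid
    best = (None, None, np.inf)
    for ang in angles:
        t = np.radians(ang)
        R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
        for s in scales:
            moved = (inp - c) @ R.T * s + c
            h = hausdorff(moved, tmpl)
            if h < best[2]:
                best = (ang, s, h)
    return best
```

    For example, an input polygon that is a rotated, scaled copy of the template is recovered exactly when the true rotation and scale are in the search grid.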

  8. Quantification of heterogeneity observed in medical images

    OpenAIRE

    Brooks, Frank J; Grigsby, Perry W

    2013-01-01

    Background There has been much recent interest in the quantification of visually evident heterogeneity within functional grayscale medical images, such as those obtained via magnetic resonance or positron emission tomography. In the case of images of cancerous tumors, variations in grayscale intensity imply variations in crucial tumor biology. Despite these considerable clinical implications, there is as yet no standardized method for measuring the heterogeneity observed via these imaging mod...

  9. Adaptive Algorithms for Automated Processing of Document Images

    Science.gov (United States)

    2011-01-01

    Title of dissertation: Adaptive Algorithms for Automated Processing of Document Images. Mudit Agrawal, Doctor of Philosophy, 2011. Dissertation submitted to the Faculty of the Graduate School of the University

  10. Halftone Coding with JBIG2

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    2000-01-01

    The emerging international standard for compression of bi-level images and bi-level documents, JBIG2, provides a mode dedicated to lossy coding of halftones. The encoding procedure involves descreening of the bi-level image into gray-scale, encoding of the gray-scale image, and construction of a halftone pattern dictionary. The decoder first decodes the gray-scale image; then, for each gray-scale pixel, it looks up the corresponding halftone pattern in the dictionary and places it in the reconstruction bitmap at the position corresponding to the gray-scale pixel. The coding method is inherently lossy and care must be taken to avoid introducing artifacts in the reconstructed image. We describe how to apply this coding method to halftones created by periodic ordered dithering, by clustered dot screening (offset printing), and by techniques which in effect dither with blue noise, e.g., error diffusion.
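
    The decoder step described in this abstract — look up each gray-scale pixel's halftone pattern in the dictionary and paste it into the reconstruction bitmap — can be sketched in a few lines. The toy 2x2 dither dictionary below is an illustrative stand-in for a real screened-pattern dictionary.

```python
import numpy as np

def make_dither_dictionary(size=2):
    """Pattern dictionary for a (size*size + 1)-level ordered dither:
    patterns[g] is a size-by-size bi-level block with g ink pixels set,
    filled in a fixed threshold order (a stand-in for a real screen)."""
    order = [(0, 0), (1, 1), (0, 1), (1, 0)]  # 2x2 Bayer-like fill order
    patterns = []
    for g in range(size * size + 1):
        p = np.zeros((size, size), dtype=np.uint8)
        for k in range(g):
            p[order[k]] = 1
        patterns.append(p)
    return patterns

def decode_halftone(gray, patterns):
    """JBIG2-style reconstruction: paste patterns[g] at each gray pixel."""
    size = patterns[0].shape[0]
    h, w = gray.shape
    out = np.zeros((h * size, w * size), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            out[i*size:(i+1)*size, j*size:(j+1)*size] = patterns[gray[i, j]]
    return out
```

    Each gray level g thus reproduces a block whose ink coverage is g out of size*size pixels, which is exactly why the scheme is lossy: only the coverage, not the original bit pattern, survives descreening.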

  11. A New Wavelet-Based Document Image Segmentation Scheme

    Institute of Scientific and Technical Information of China (English)

    赵健; 李道京; 俞卞章; 耿军平

    2002-01-01

    Document image segmentation is very useful for printing, faxing and data processing. An algorithm is developed for segmenting and classifying document images. The feature used for classification is based on the histogram distribution patterns of the different image classes. An important attribute of the algorithm is the use of a wavelet correlation image to enhance the raw image's pattern, which improves classification accuracy. In this paper the document image is divided into four types: background, photo, text and graph. First, the document image background is distinguished easily by a conventional method; second, the three remaining image types are distinguished by their typical histograms, and in order to make the histogram features clearer, each resolution's HH wavelet subimage is added to the raw image at its resolution. Finally, photo, text and graph are separated according to how well the feature fits a Laplacian distribution, using χ² and L measures. Simulations show that classification accuracy is significantly improved. Comparison with related work shows that the algorithm provides both lower classification error rates and better visual results.

  12. Goal-oriented rectification of camera-based document images.

    Science.gov (United States)

    Stamatopoulos, Nikolaos; Gatos, Basilis; Pratikakis, Ioannis; Perantonis, Stavros J

    2011-04-01

    Document digitization with either flatbed scanners or camera-based systems results in document images which often suffer from warping and perspective distortions that deteriorate the performance of current OCR approaches. In this paper, we present a goal-oriented rectification methodology to compensate for undesirable document image distortions aiming to improve the OCR result. Our approach relies upon a coarse-to-fine strategy. First, a coarse rectification is accomplished with the aid of a computationally low cost transformation which addresses the projection of a curved surface to a 2-D rectangular area. The projection of the curved surface on the plane is guided only by the textual content's appearance in the document image while incorporating a transformation which does not depend on specific model primitives or camera setup parameters. Second, pose normalization is applied on the word level aiming to restore all the local distortions of the document image. Experimental results on various document images with a variety of distortions demonstrate the robustness and effectiveness of the proposed rectification methodology, using a consistent evaluation methodology that takes into account OCR accuracy and a newly introduced measure using a semi-automatic procedure.

  13. Image editing with Adobe Photoshop 6.0.

    Science.gov (United States)

    Caruso, Ronald D; Postel, Gregory C

    2002-01-01

    The authors introduce Photoshop 6.0 for radiologists and demonstrate basic techniques of editing gray-scale cross-sectional images intended for publication and for incorporation into computerized presentations. For basic editing of gray-scale cross-sectional images, the Tools palette and the History/Actions palette pair should be displayed. The History palette may be used to undo a step or series of steps. The Actions palette is a menu of user-defined macros that save time by automating an action or series of actions. Converting an image to 8-bit gray scale is the first editing function. Cropping is the next action. Both decrease file size. Use of the smallest file size necessary for the purpose at hand is recommended. Final file size for gray-scale cross-sectional neuroradiologic images (8-bit, single-layer TIFF [tagged image file format] at 300 pixels per inch) intended for publication varies from about 700 Kbytes to 3 Mbytes. Final file size for incorporation into computerized presentations is about 10-100 Kbytes (8-bit, single-layer, gray-scale, high-quality JPEG [Joint Photographic Experts Group]), depending on source and intended use. Editing and annotating images before they are inserted into presentation software is highly recommended, both for convenience and flexibility. Radiologists should find that image editing can be carried out very rapidly once the basic steps are learned and automated. Copyright RSNA, 2002

  14. Colour application on mammography image segmentation

    Science.gov (United States)

    Embong, R.; Aziz, N. M. Nik Ab.; Karim, A. H. Abd; Ibrahim, M. R.

    2017-09-01

    The segmentation process is one of the most important steps in image processing and computer vision since it is vital in the initial stage of image analysis. Segmentation of medical images involves complex structures and requires precise segmentation results, which are necessary for clinical diagnosis such as the detection of tumour, oedema, and necrotic tissues. Since mammography images are grayscale, researchers are looking at the effect of colour in the segmentation process of medical images. Colour is known to play a significant role in the perception of object boundaries in non-medical colour images. Processing colour images requires handling more data, hence providing a richer description of objects in the scene. Colour images contain ten percent (10%) additional edge information as compared to their grayscale counterparts. Nevertheless, edge detection in colour images is more challenging than in grayscale images as colour space is considered a vector space. In this study, we applied red, green, yellow, and blue colour maps to grayscale mammography images with the purpose of testing the effect of colours on the segmentation of abnormality regions in the mammography images. We applied the segmentation process using the Fuzzy C-means algorithm and evaluated the percentage of average relative error of area for each colour type. The results showed that segmentation with each colour map could be performed successfully, even for blurred and noisy images. The segmented abnormality region was also smaller than that obtained without a colour map. The green colour map segmentation produced the smallest percentage of average relative error (10.009%), while the yellow colour map segmentation gave the largest (11.367%).

  15. Hepatic hemangiomas: spectrum of US appearances on gray-scale, power doppler, and contrast-enhanced US

    International Nuclear Information System (INIS)

    Kim, Kyoung Won; Kim, Tae Kyoung; Han Joon Koo; Kim, Ah Young; Lee, Hyun Ju; Park, Seong Ho; Kim, Young Hoon; Choi, Byung Ihn

    2000-01-01

    Because US plays a key role in the initial evaluation of hepatic hemangiomas, knowledge of the entire spectrum of US appearances of these tumors is important. Most hemangiomas have a distinctive US appearance, and even with those with atypical appearances on conventional gray-scale US, specific diagnoses can be made using pulse-inversion harmonic US with contrast agents. In this essay, we review the spectrum of US appearances of hepatic hemangiomas on conventional gray-scale, power Doppler, and pulse-inversion harmonic US with contrast agents. (author)

  16. Stamp Detection in Color Document Images

    DEFF Research Database (Denmark)

    Micenkova, Barbora; van Beusekom, Joost

    2011-01-01

    … moreover, it can be imprinted with a variable quality and rotation. Previous methods were restricted to detection of stamps of particular shapes or colors. The method presented in the paper includes segmentation of the image by color clustering and subsequent classification of candidate solutions by geometrical and color-related features. The approach allows for differentiation of stamps from other color objects in the document such as logos or texts. For the purpose of evaluation, a data set of 400 document images has been collected, annotated and made public. With the proposed method, a recall of 83…

  17. Animal Detection in Natural Images: Effects of Color and Image Database

    Science.gov (United States)

    Zhu, Weina; Drewes, Jan; Gegenfurtner, Karl R.

    2013-01-01

    The visual system has a remarkable ability to extract categorical information from complex natural scenes. In order to elucidate the role of low-level image features for the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) in two experiments in which human subjects had to detect animals in previously unseen natural images. We used a new natural image database (ANID) that is free of some of the potential artifacts that have plagued the widely used COREL images. Color and grayscale images picked from the ANID and COREL databases were used. In all experiments, color images induced a greater N1 EEG component at earlier time points than grayscale images. We suggest that this influence of color in animal detection may be masked by later processes when measuring reaction times. The ERP results of go/nogo and forced choice tasks were similar to those reported earlier. The non-animal stimuli induced a larger N1 than animal stimuli in both the COREL and ANID databases. This result indicates that ultra-fast processing of animal images is possible irrespective of the particular database. With the ANID images, the difference between color and grayscale images is more pronounced than with the COREL images. The earlier use of the COREL images might have led to an underestimation of the contribution of color. Therefore, we conclude that the ANID image database is better suited for the investigation of the processing of natural scenes than other databases commonly used. PMID:24130744

  18. Animal detection in natural images: effects of color and image database.

    Directory of Open Access Journals (Sweden)

    Weina Zhu

    Full Text Available The visual system has a remarkable ability to extract categorical information from complex natural scenes. In order to elucidate the role of low-level image features for the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) in two experiments in which human subjects had to detect animals in previously unseen natural images. We used a new natural image database (ANID) that is free of some of the potential artifacts that have plagued the widely used COREL images. Color and grayscale images picked from the ANID and COREL databases were used. In all experiments, color images induced a greater N1 EEG component at earlier time points than grayscale images. We suggest that this influence of color in animal detection may be masked by later processes when measuring reaction times. The ERP results of go/nogo and forced choice tasks were similar to those reported earlier. The non-animal stimuli induced a larger N1 than animal stimuli in both the COREL and ANID databases. This result indicates that ultra-fast processing of animal images is possible irrespective of the particular database. With the ANID images, the difference between color and grayscale images is more pronounced than with the COREL images. The earlier use of the COREL images might have led to an underestimation of the contribution of color. Therefore, we conclude that the ANID image database is better suited for the investigation of the processing of natural scenes than other databases commonly used.

  19. Three-dimensional volumetric gray-scale uterine cervix histogram prediction of days to delivery in full term pregnancy.

    Science.gov (United States)

    Kim, Ji Youn; Kim, Hai-Joong; Hahn, Meong Hi; Jeon, Hye Jin; Cho, Geum Joon; Hong, Sun Chul; Oh, Min Jeong

    2013-09-01

    Our aim was to determine whether the volumetric gray-scale histogram difference between the anterior and posterior cervix can indicate the extent of cervical consistency. We collected data from 95 patients who were appropriate for vaginal delivery at 36 to 37 weeks of gestational age from September 2010 to October 2011 in the Department of Obstetrics and Gynecology, Korea University Ansan Hospital. Patients were excluded if they had any of the following: Cesarean section, labor induction, or premature rupture of membranes. Thirty-four patients were finally enrolled. The patients underwent evaluation of the cervix by Bishop score, cervical length, cervical volume, and three-dimensional (3D) cervical volumetric gray-scale histogram. The interval in days from cervical evaluation to delivery was counted, and the 3D cervical volumetric gray-scale histogram, Bishop score, cervical length, and cervical volume were compared with this interval. The gray-scale histogram difference between the anterior and posterior cervix was significantly correlated with days to delivery; its correlation coefficient (R) was 0.500 (P = 0.003). The cervical length was also significantly related to days to delivery; the correlation coefficient (R) and P-value were 0.421 and 0.013. However, the anterior lip histogram, posterior lip histogram, total cervical volume, and Bishop score were not associated with days to delivery (P > 0.05). Both the gray-scale histogram difference between the anterior and posterior cervix and the cervical length correlated with days to delivery; these measures may help predict cervical consistency.

  20. Document Examination: Applications of Image Processing Systems.

    Science.gov (United States)

    Kopainsky, B

    1989-12-01

    Dealing with images is a familiar business for an expert in questioned documents: microscopic, photographic, infrared, and other optical techniques generate images containing the information he or she is looking for. A recent method for extracting most of this information is digital image processing, ranging from simple contrast and contour enhancement to the advanced restoration of blurred texts. When combined with a sophisticated physical imaging system, an image processing system has proven to be a powerful and fast tool for routine non-destructive scanning of suspect documents. This article reviews frequent applications, comprising techniques to increase legibility, two-dimensional spectroscopy (ink discrimination, alterations, erased entries, etc.), comparison techniques (stamps, typescript letters, photo substitution), and densitometry. Computerized comparison of handwriting is not included. Copyright © 1989 Central Police University.

  1. Signature detection and matching for document image retrieval.

    Science.gov (United States)

    Zhu, Guangyu; Zheng, Yefeng; Doermann, David; Jaeger, Stefan

    2009-11-01

    As one of the most pervasive methods of individual identification and document authentication, signatures present convincing evidence and provide an important form of indexing for effective document image processing and retrieval in a broad range of applications. However, detection and segmentation of free-form objects such as signatures from cluttered background is currently an open document analysis problem. In this paper, we focus on two fundamental problems in signature-based document image retrieval. First, we propose a novel multiscale approach to jointly detecting and segmenting signatures from document images. Rather than focusing on local features that typically have large variations, our approach captures the structural saliency using a signature production model and computes the dynamic curvature of 2D contour fragments over multiple scales. This detection framework is general and computationally tractable. Second, we treat the problem of signature retrieval in the unconstrained setting of translation, scale, and rotation invariant nonrigid shape matching. We propose two novel measures of shape dissimilarity based on anisotropic scaling and registration residual error and present a supervised learning framework for combining complementary shape information from different dissimilarity metrics using LDA. We quantitatively study state-of-the-art shape representations, shape matching algorithms, measures of dissimilarity, and the use of multiple instances as query in document image retrieval. We further demonstrate our matching techniques in offline signature verification. Extensive experiments using large real-world collections of English and Arabic machine-printed and handwritten documents demonstrate the excellent performance of our approaches.

  2. Performance of SU-8 Membrane Suitable for Deep X-Ray Grayscale Lithography

    Directory of Open Access Journals (Sweden)

    Harutaka Mekaru

    2015-02-01

    Full Text Available In combination with tapered-trench-etching of Si and SU-8 photoresist, a grayscale mask for deep X-ray lithography was fabricated and passed a 10-times-exposure test. The performance of the X-ray grayscale mask was evaluated using the TERAS synchrotron radiation facility at the National Institute of Advanced Industrial Science and Technology (AIST). Although SU-8 before photo-curing has been evaluated as a negative-tone photoresist for ultraviolet (UV) and X-ray lithographies, its characteristics after photo-curing had not been investigated. A polymethyl methacrylate (PMMA) sheet was irradiated by synchrotron radiation through an X-ray mask, and the relationships between dose energy and exposure depth, and between dose energy and dimensional transition, were investigated. Using this technique, the shape of a 26-μm-high Si absorber was transformed into the shape of a PMMA microneedle with a height of 76 μm, with high contrast. During the fabrication process of the X-ray mask, a 100-μm pattern pitch (by design) was enlarged to 120 μm; however, with an increase in integrated dose energy this number decreased to 99 μm. These results show that the X-ray grayscale mask has many practical applications. In this paper, the author reports the evaluation results of SU-8 used as a membrane material for an X-ray mask.

  3. Pixel Color Clustering of Multi-Temporally Acquired Digital Photographs of a Rice Canopy by Luminosity-Normalization and Pseudo-Red-Green-Blue Color Imaging

    Directory of Open Access Journals (Sweden)

    Ryoichi Doi

    2014-01-01

    Full Text Available Red-green-blue (RGB) channels of RGB digital photographs were loaded with luminosity-adjusted R, G, and completely white grayscale images, respectively (RGwhtB method), or with luminosity-adjusted R, G, and R + G (RGB yellow) grayscale images, respectively (RGrgbyB method), to adjust the brightness of the entire area of multi-temporally acquired color digital photographs of a rice canopy. From the RGwhtB or RGrgbyB pseudocolor image, cyan, magenta, CMYK yellow, black, L*, a*, and b* grayscale images were prepared. Using these grayscale images and the R, G, and RGB yellow grayscale images, the luminosity-adjusted pixels of the canopy photographs were statistically clustered. The RGrgbyB and RGwhtB methods yielded seven and five major color clusters, respectively. The RGrgbyB method showed clear differences among three rice growth stages, and the vegetative stage was further divided into two substages. The RGwhtB method could not clearly discriminate between the second vegetative and midseason stages. The relative advantage of the RGrgbyB method was attributed to its R, G, B, magenta, yellow, L*, and a* grayscale images, which contained richer information for showing the colorimetrical differences among objects than those of the RGwhtB method. The pseudocolor imaging method enabled the comparison of rice canopy colors at different time points.
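
    The two channel compositions described above can be sketched as follows (the luminosity adjustment itself is omitted, and clipping the R + G yellow plane at 255 is an assumption the abstract does not specify):

```python
import numpy as np

def rgwhtb(rgb):
    """Compose a pseudo-colour image from a photo's R and G channels,
    replacing the blue channel with a completely white grayscale plane
    (the RGwhtB composition)."""
    out = rgb.copy()
    out[..., 2] = 255
    return out

def rgrgbyb(rgb):
    """Variant that loads the blue channel with the RGB-yellow plane
    (R + G, clipped to 255 here), i.e. the RGrgbyB composition."""
    out = rgb.copy()
    yellow = np.clip(rgb[..., 0].astype(np.int32) + rgb[..., 1], 0, 255)
    out[..., 2] = yellow.astype(np.uint8)
    return out
```

    The resulting pseudo-colour pixels can then be fed to any clustering routine in place of the original RGB values.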

  4. Better Steganalysis (BEST) - Reduction of Interfering Influence of Image Content on Steganalysis

    Science.gov (United States)

    2009-10-08

    For LSB embedding, let us consider greyscale images with pixel values in the range 0…255 as the carrier medium; LSB steganography replaces the least significant bits of the cover pixels with message bits. The surviving fragments of this record cite, among others: J. Fridrich et al., "Detecting LSB steganography in color and grayscale images," IEEE Multimedia, 8(4):22–28, 2001; and A. D. Ker, "Improved detection of LSB steganography in grayscale images," in Information Hiding, 2004.
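
    The LSB replacement embedding that this record discusses can be sketched in a few lines: overwrite the least significant bit of the first pixels (row-major order here, an illustrative choice) with the message bits, leaving the upper bit planes untouched.

```python
import numpy as np

def lsb_embed(cover, bits):
    """Replace the least significant bit of the first len(bits) pixels
    (row-major) with the message bits; returns a new stego image."""
    flat = cover.flatten()                       # flatten() copies
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, dtype=flat.dtype)
    return flat.reshape(cover.shape)

def lsb_extract(stego, n):
    """Read back the first n embedded bits."""
    return (stego.flatten()[:n] & 1).tolist()
```

    Because only the lowest bit plane changes, the stego image differs from the cover by at most one gray level per pixel, which is precisely what the cited detectors exploit statistically.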

  5. An Introduction to Document Imaging in the Financial Aid Office.

    Science.gov (United States)

    Levy, Douglas A.

    2001-01-01

    First describes the components of a document imaging system in general and then addresses this technology specifically in relation to financial aid document management: its uses and benefits, considerations in choosing a document imaging system, and additional sources for information. (EV)

  6. Imaging and visual documentation in medicine

    International Nuclear Information System (INIS)

    Wamsteker, K.; Jonas, U.; Veen, G. van der; Waes, P.F.G.M. van

    1987-01-01

    DOCUMED EUROPE '87 was organized to provide information to the physician on the constantly progressing developments in medical imaging technology. Leading specialists lectured on the state-of-the-art of imaging technology and visual documentation in medicine. This book presents a collection of the papers presented at the conference. refs.; figs.; tabs

  7. Image Processing for Binarization Enhancement via Fuzzy Reasoning

    Science.gov (United States)

    Dominguez, Jesus A. (Inventor)

    2009-01-01

    A technique for enhancing a gray-scale image to improve conversions of the image to binary employs fuzzy reasoning. In the technique, pixels in the image are analyzed by comparing the pixel's gray scale value, which is indicative of its relative brightness, to the values of pixels immediately surrounding the selected pixel. The degree to which each pixel in the image differs in value from the values of surrounding pixels is employed as the variable in a fuzzy reasoning-based analysis that determines an appropriate amount by which the selected pixel's value should be adjusted to reduce vagueness and ambiguity in the image and improve retention of information during binarization of the enhanced gray-scale image.
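
    A minimal sketch in the spirit of this abstract (not the patented method itself): each pixel is moved away from its neighborhood mean by an amount scaled by a fuzzy "ambiguity" membership that peaks at mid-gray, so ambiguous pixels are pushed toward black or white before binarization. The triangular membership and the gain factor are illustrative assumptions.

```python
import numpy as np

def neighborhood_mean(img):
    """Mean of the 8-neighborhood (borders handled by edge padding)."""
    p = np.pad(img.astype(np.float64), 1, mode='edge')
    acc = np.zeros(img.shape, dtype=np.float64)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            acc += p[1+dy:p.shape[0]-1+dy, 1+dx:p.shape[1]-1+dx]
    return acc / 8.0

def fuzzy_enhance(img, gain=0.5):
    """Move each pixel away from its neighborhood mean in proportion to a
    fuzzy ambiguity membership: mid-gray (most ambiguous) pixels move the
    most, near-black or near-white pixels barely move."""
    mean = neighborhood_mean(img)
    diff = img.astype(np.float64) - mean
    ambiguity = 1.0 - np.abs(img.astype(np.float64) - 127.5) / 127.5
    out = img + gain * ambiguity * diff
    return np.clip(out, 0, 255).astype(np.uint8)
```

    The enhanced image is then thresholded as usual; the adjustment widens the gap between foreground and background around ambiguous pixels.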

  8. Granulomatous Prostatitis: Gray-scale Transrectal Ultrasonography and Color Doppler Ultrasonography Findings

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyoung Jung; Lim, Joo Won; Lee, Dong Ho; Ko, Young Tae; Kim, Eui Jong [Kyung Hee University Medical Center, Seoul (Korea, Republic of)

    2007-12-15

    We report here three cases of granulomatous prostatitis. All cases were confirmed by a transrectal ultrasonography (TRUS)-guided core biopsy of the prostate. Two cases received intravesical BCG therapy for a bladder tumor, and one case had no known predisposing condition. Gray-scale TRUS showed low echoic nodules in the outer gland in all cases. Color Doppler ultrasonography (CDUS) showed several dot-like blood flows within the low echoic nodules in two cases and several dot-like blood flows and short linear blood flows within the low echoic nodules in one case. Gray-scale TRUS findings of granulomatous prostatitis are similar to findings of prostate cancer. On CDUS, several dot-like blood flows or short linear blood flows were noted within the low echoic nodules in patients with granulomatous prostatitis. If low echoic nodules with dot-like or short linear blood flows are noted in patients with genitourinary tract tuberculosis or previous BCG therapy, granulomatous prostatitis should be included in the differential diagnosis. However, a prostatic biopsy is required for a final diagnosis

  9. Document image binarization using "multi-scale" predefined filters

    Science.gov (United States)

    Saabni, Raid M.

    2018-04-01

    Reading text or searching for key words within a historical document is a very challenging task. One of the first steps of the complete task is binarization, where we separate foreground such as text, figures and drawings from the background. The success of this important step often determines whether subsequent steps succeed or fail, so it is vital to the complete task of reading and analyzing the content of a document image. Generally, historical document images are of poor quality due to their storage conditions and degradation over time, which cause varying contrast, stains, dirt and ink seeping through from the reverse side. In this paper, we use banks of anisotropic predefined filters at different scales and orientations to develop a binarization method for degraded documents and manuscripts. Exploiting the fact that handwritten strokes may follow different scales and orientations, we use predefined sets of filter banks with various scales, weights, and orientations to seek a compact set of filters and weights that generate different layers of foreground and background. The results of convolving these filters locally on the gray-level image are weighted and accumulated to enhance the original image. Based on the different layers, seeds of components in the gray-level image, and a learning process, we present an improved binarization algorithm to separate the background from layers of foreground. Different layers of foreground, which may be caused by seeping ink, degradation or other factors, are also separated from the true foreground in a second phase. Promising experimental results were obtained on the DIBCO2011, DIBCO2013 and H-DIBCO2016 data sets and on a collection of images taken from real historical documents.

  10. Performance evaluation methodology for historical document image binarization.

    Science.gov (United States)

    Ntirogiannis, Konstantinos; Gatos, Basilis; Pratikakis, Ioannis

    2013-02-01

    Document image binarization is of great importance in the document image analysis and recognition pipeline since it affects further stages of the recognition process. The evaluation of a binarization method aids in studying its algorithmic behavior, as well as verifying its effectiveness, by providing qualitative and quantitative indication of its performance. This paper addresses a pixel-based binarization evaluation methodology for historical handwritten/machine-printed document images. In the proposed evaluation scheme, the recall and precision evaluation measures are properly modified using a weighting scheme that diminishes any potential evaluation bias. Additional performance metrics of the proposed evaluation scheme consist of the percentage rates of broken and missed text, false alarms, background noise, character enlargement, and merging. Several experiments conducted in comparison with other pixel-based evaluation measures demonstrate the validity of the proposed evaluation scheme.
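
    The unweighted pixel-based recall, precision, and F-measure that the proposed scheme builds on can be computed as follows; the paper's contribution is the weighting that modifies these base measures, which is omitted in this sketch.

```python
import numpy as np

def binarization_scores(result, ground_truth):
    """Pixel-based evaluation of a binarization result against ground
    truth. Foreground (text) is 1, background is 0. Returns the
    unweighted recall, precision, and F-measure."""
    r = result.astype(bool)
    g = ground_truth.astype(bool)
    tp = np.logical_and(r, g).sum()          # correctly detected text
    fp = np.logical_and(r, ~g).sum()         # false alarms
    fn = np.logical_and(~r, g).sum()         # missed text
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f = (2 * recall * precision / (recall + precision)) if recall + precision else 0.0
    return recall, precision, f
```

    The paper's additional metrics (broken/missed text, background noise, character enlargement and merging) are percentage rates derived from the same confusion counts, partitioned by connected components.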

  11. Comparison between a new computer program and the reference software for gray-scale median analysis of atherosclerotic carotid plaques.

    Science.gov (United States)

    Casella, Ivan Benaduce; Fukushima, Rodrigo Bono; Marques, Anita Battistini de Azevedo; Cury, Marcus Vinícius Martins; Presti, Calógero

    2015-03-01

    To compare a new dedicated software program with Adobe Photoshop for gray-scale median (GSM) analysis of B-mode images of carotid plaques. A series of 42 carotid plaques generating ≥50% diameter stenosis was evaluated by a single observer. The best segment for visualization of the internal carotid artery plaque was identified on a single longitudinal view and images were recorded in JPEG format. Plaque analysis was performed with both programs. After normalization of image intensity (blood = 0, adventitial layer = 190), histograms were obtained after manual delineation of the plaque. Results were compared with the nonparametric Wilcoxon signed-rank test and Kendall tau-b correlation analysis. GSM ranged from 00 to 100 with Adobe Photoshop and from 00 to 96 with IMTPC, with a high degree of similarity between image pairs and a highly significant correlation (R = 0.94, p < .0001). The IMTPC software appears suitable for GSM analysis of carotid plaques. © 2014 Wiley Periodicals, Inc.
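
    The GSM computation with the normalization stated above (blood mapped to 0, adventitia mapped to 190) can be sketched as follows; the linear mapping and clipping to the 0-255 range follow the standard GSM convention, with the plaque delineation assumed to be done beforehand.

```python
import numpy as np

def gray_scale_median(plaque_pixels, blood_ref, adventitia_ref):
    """Linearly rescale gray levels so the blood reference maps to 0 and
    the adventitia reference maps to 190, clip to 0..255, and return the
    median gray level of the delineated plaque pixels (the GSM)."""
    px = np.asarray(plaque_pixels, dtype=np.float64)
    scale = 190.0 / (adventitia_ref - blood_ref)
    normalized = np.clip((px - blood_ref) * scale, 0, 255)
    return float(np.median(normalized))
```

    Normalizing to the two reference tissues is what makes GSM values comparable across machines and gain settings.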

  12. Quantitative Evaluation for Differentiating Malignant and Benign Thyroid Nodules Using Histogram Analysis of Grayscale Sonograms.

    Science.gov (United States)

    Nam, Se Jin; Yoo, Jaeheung; Lee, Hye Sun; Kim, Eun-Kyung; Moon, Hee Jung; Yoon, Jung Hyun; Kwak, Jin Young

    2016-04-01

    To evaluate the diagnostic value of histogram analysis using grayscale sonograms for differentiation of malignant and benign thyroid nodules. From July 2013 through October 2013, 579 nodules in 563 patients who had undergone ultrasound-guided fine-needle aspiration were included. For the grayscale histogram analysis, pixel echogenicity values in regions of interest were measured as 0 to 255 (0, black; 255, white) with in-house software. Five parameters (mean, skewness, kurtosis, standard deviation, and entropy) were obtained for each thyroid nodule. With principal component analysis, an index was derived. Diagnostic performance rates for the 5 histogram parameters and the principal component analysis index were calculated. A total of 563 patients were included in the study (mean age ± SD, 50.3 ± 12.3 years; range, 15-79 years). Of the 579 nodules, 431 were benign, and 148 were malignant. Among the 5 parameters and the principal component analysis index, the standard deviation (75.546 ± 14.153 versus 62.761 ± 16.01; P histogram analysis was feasible for differentiating malignant and benign thyroid nodules but did not show better diagnostic performance than subjective analysis performed by radiologists. Further technical advances will be needed to objectify interpretations of thyroid grayscale sonograms. © 2016 by the American Institute of Ultrasound in Medicine.
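The five histogram parameters named above (mean, skewness, kurtosis, standard deviation, entropy) can be reproduced with a short generic sketch. The study used in-house software; the function name, bin settings, and moment definitions here are assumptions.

```python
import numpy as np

def histogram_features(roi):
    """Compute the five grayscale-histogram parameters from an ROI with
    pixel values 0-255. Plain standardized moments are used for skewness
    and kurtosis; entropy is Shannon entropy of the 256-bin histogram."""
    x = roi.astype(np.float64).ravel()
    mean = x.mean()
    sd = x.std()
    skew = ((x - mean) ** 3).mean() / sd ** 3 if sd > 0 else 0.0
    kurt = ((x - mean) ** 4).mean() / sd ** 4 if sd > 0 else 0.0
    hist, _ = np.histogram(x, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = float(-(p * np.log2(p)).sum())
    return {"mean": mean, "sd": sd, "skewness": skew,
            "kurtosis": kurt, "entropy": entropy}
```

A uniform ROI gives zero standard deviation and zero entropy, while a noisy ROI yields high entropy, matching the trend reported above for higher-grade findings.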

  13. Fast processing of foreign fiber images by image blocking

    Directory of Open Access Journals (Sweden)

    Yutao Wu

    2014-08-01

    Full Text Available In the textile industry, cotton products often contain many types of foreign fibers, which affect the overall quality of the products. As the foundation of automated foreign fiber inspection, image processing exerts a critical impact on the process of foreign fiber identification. This paper presents a new approach for the fast processing of foreign fiber images. The approach includes five main steps: image blocking, image pre-decision, image background extraction, image enhancement and segmentation, and image connection. First, the captured color images are transformed into gray-scale images; the gray scale of the transformed images is then inverted, and the whole image is divided into several blocks. The next step is to judge, through image pre-decision, which image blocks contain target foreign fibers. The blocks that possibly contain targets are then segmented via Otsu's method after background removal and image enhancement. Finally, the relevant segmented image blocks are connected to obtain an intact and clear foreign fiber target image. The experimental results show that this segmentation method has the advantage of accuracy and speed over other segmentation methods, and that it reconnects target images containing fractures, thereby yielding an intact and clear foreign fiber target image.
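A minimal sketch of the blocking, pre-decision, and Otsu segmentation steps described above. The block size, the contrast-based pre-decision rule, and its threshold are illustrative assumptions, not the paper's exact values.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method on a uint8 image: pick the threshold that maximizes
    between-class variance of the gray-level histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    w = np.cumsum(p)                       # class probability omega(t)
    mu = np.cumsum(p * np.arange(256))     # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w - mu) ** 2 / (w * (1 - w))
    return int(np.argmax(np.nan_to_num(sigma_b)))

def segment_blocks(gray, block=64, min_contrast=30):
    """Block-wise pre-decision and segmentation: blocks whose gray-level
    range is below min_contrast are assumed background and skipped;
    the remaining blocks are binarized with Otsu's threshold."""
    out = np.zeros_like(gray)
    h, w = gray.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            blk = gray[y:y + block, x:x + block]
            if int(blk.max()) - int(blk.min()) >= min_contrast:
                out[y:y + block, x:x + block] = (blk > otsu_threshold(blk)) * 255
    return out
```

Skipping low-contrast blocks is what makes the block-wise scheme fast: uniform background regions never reach the thresholding stage.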

  14. A universal color image quality metric

    NARCIS (Netherlands)

    Toet, A.; Lucassen, M.P.

    2003-01-01

    We extend a recently introduced universal grayscale image quality index to a newly developed perceptually decorrelated color space. The resulting color image quality index quantifies the distortion of a processed color image relative to its original version. We evaluated the new color image quality

  15. Ultrasound Imaging and its modeling

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2002-01-01

    Modern medical ultrasound scanners are used for imaging nearly all soft tissue structures in the body. The anatomy can be studied from gray-scale B-mode images, where the reflectivity and scattering strength of the tissues are displayed. The imaging is performed in real time with 20 to 100 images...

  16. Using Photoshop with images created by a confocal system.

    Science.gov (United States)

    Sedgewick, Jerry

    2014-01-01

    Many pure colors and grayscale tones that result from confocal imaging are not reproducible on output devices, such as printing presses, laptop projectors, and laser jet printers. Part of the difficulty in predicting the colors and tones that will reproduce lies both in the computer display and in the display of unreproducible colors chosen for fluorophores. The use of a grayscale display for confocal channels and a LUT display to show saturated (clipped) tonal values aids visualization in the former instance and image integrity in the latter. Computer monitors used for post-processing in order to conform the image to the output device can be placed in darkened rooms, and the gamma for the display can be set to create darker shadow regions and to control the display of color. These conditions aid in visualization of images so that blacks are set to grayer values that are more amenable to faithful reproduction. Preferences can be set in Photoshop for consistent display of colors, along with other settings to optimize use of memory. The Info window is opened so that tonal information can be shown via readouts. Images that are saved as indexed color are converted to grayscale or RGB Color, 16-bit is converted to 8-bit when desired, and colorized images from confocal software are returned to grayscale and re-colorized according to the presented methods so that reproducible colors are made. Images may also be sharpened and noise may be reduced, or more than one image layered to show colocalization according to specific methods. Images are then converted to CMYK (Cyan, Magenta, Yellow and Black) for consequent assignment of pigment percentages for printing presses. Changes to single images and multiple images from image stacks are automated for efficient and consistent image processing. Some additional changes are made to images destined for 3D visualization to better separate regions of interest from background. Files are returned to image stacks, saved and

  17. New public dataset for spotting patterns in medieval document images

    Science.gov (United States)

    En, Sovann; Nicolas, Stéphane; Petitjean, Caroline; Jurie, Frédéric; Heutte, Laurent

    2017-01-01

    With advances in technology, a large part of our cultural heritage is becoming digitally available. In particular, in the field of historical document image analysis, there is now a growing need for indexing and data mining tools, thus allowing us to spot and retrieve the occurrences of an object of interest, called a pattern, in a large database of document images. Patterns may present some variability in terms of color, shape, or context, making the spotting of patterns a challenging task. Pattern spotting is a relatively new field of research, still hampered by the lack of available annotated resources. We present a new publicly available dataset named DocExplore dedicated to spotting patterns in historical document images. The dataset contains 1500 images and 1464 queries, and allows the evaluation of two tasks: image retrieval and pattern localization. A standardized benchmark protocol along with ad hoc metrics is provided for a fair comparison of the submitted approaches. We also provide some first results obtained with our baseline system on this new dataset, which show that there is room for improvement and that should encourage researchers of the document image analysis community to design new systems and submit improved results.

  18. A new universal colour image fidelity metric

    NARCIS (Netherlands)

    Toet, A.; Lucassen, M.P.

    2003-01-01

    We extend a recently introduced universal grayscale image quality index to a newly developed perceptually decorrelated colour space. The resulting colour image fidelity metric quantifies the distortion of a processed colour image relative to its original version. We evaluated the new colour image

  19. Utility of Digital Stereo Images for Optic Disc Evaluation

    Science.gov (United States)

    Ying, Gui-shuang; Pearson, Denise J.; Bansal, Mayank; Puri, Manika; Miller, Eydie; Alexander, Judith; Piltz-Seymour, Jody; Nyberg, William; Maguire, Maureen G.; Eledath, Jayan; Sawhney, Harpreet

    2010-01-01

    Purpose. To assess the suitability of digital stereo images for optic disc evaluations in glaucoma. Methods. Stereo color optic disc images in both digital and 35-mm slide film formats were acquired contemporaneously from 29 subjects with various cup-to-disc ratios (range, 0.26–0.76; median, 0.475). Using a grading scale designed to assess image quality, the ease of visualizing optic disc features important for glaucoma diagnosis, and the comparative diameters of the optic disc cup, experienced observers separately compared the primary digital stereo images to each subject's 35-mm slides, to scanned images of the same 35-mm slides, and to grayscale conversions of the digital images. Statistical analysis accounted for multiple gradings and comparisons and also assessed image formats under monoscopic viewing. Results. Overall, the quality of primary digital color images was judged superior to that of 35-mm slides (P digital color images were mostly equivalent to the scanned digitized images of the same slides. Color seemingly added little to grayscale optic disc images, except that peripapillary atrophy was best seen in color (P digital over film images was maintained under monoscopic viewing conditions. Conclusions. Digital stereo optic disc images are useful for evaluating the optic disc in glaucoma and allow the application of advanced image processing applications. Grayscale images, by providing luminance distinct from color, may be informative for assessing certain features. PMID:20505199

  20. RGB Color Cube-Based Histogram Specification for Hue-Preserving Color Image Enhancement

    Directory of Open Access Journals (Sweden)

    Kohei Inoue

    2017-07-01

    Full Text Available A large number of color image enhancement methods are based on methods for grayscale image enhancement, in which the main interest is contrast enhancement. However, since colors have three attributes (hue, saturation, and intensity) rather than the single attribute of grayscale value, naive application of grayscale methods to color images often yields unsatisfactory results. Conventional hue-preserving color image enhancement methods utilize histogram equalization (HE) for enhancing the contrast; however, they cannot always enhance the saturation simultaneously. In this paper, we propose a histogram specification (HS) method for enhancing the saturation in hue-preserving color image enhancement. The proposed method computes the target histogram for HS on the basis of the geometry of the RGB (red, green and blue) color space, whose shape is a cube with a unit side length. Therefore, the proposed method includes no parameters to be set by users. Experimental results show that the proposed method achieves higher color saturation than recent parameter-free methods for hue-preserving color image enhancement. As a result, the proposed method can be used as an alternative to HE in hue-preserving color image enhancement.
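Classic histogram specification by CDF matching can be sketched as follows. The paper's contribution, deriving the target histogram from the RGB-cube geometry, is not reproduced here, so `target_hist` is left as an input.

```python
import numpy as np

def histogram_specification(gray, target_hist):
    """Map gray levels so the output histogram approximates target_hist
    (a 256-bin array), via standard CDF matching: each source level is
    sent to the target level with the nearest cumulative probability."""
    src_hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    src_cdf = np.cumsum(src_hist) / src_hist.sum()
    tgt = np.asarray(target_hist, dtype=np.float64)
    tgt_cdf = np.cumsum(tgt) / tgt.sum()
    mapping = np.searchsorted(tgt_cdf, src_cdf).clip(0, 255)
    return mapping[gray].astype(np.uint8)
```

With a uniform target histogram this reduces to histogram equalization, the HE baseline the paper compares against.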

  1. Histogram and gray level co-occurrence matrix on gray-scale ultrasound images for diagnosing lymphocytic thyroiditis.

    Science.gov (United States)

    Shin, Young Gyung; Yoo, Jaeheung; Kwon, Hyeong Ju; Hong, Jung Hwa; Lee, Hye Sun; Yoon, Jung Hyun; Kim, Eun-Kyung; Moon, Hee Jung; Han, Kyunghwa; Kwak, Jin Young

    2016-08-01

    The objective of the study was to evaluate whether texture analysis using histogram and gray level co-occurrence matrix (GLCM) parameters can help clinicians diagnose lymphocytic thyroiditis (LT) and differentiate LT according to pathologic grade. The background thyroid pathology of 441 patients was classified into no evidence of LT, chronic LT (CLT), and Hashimoto's thyroiditis (HT). Histogram and GLCM parameters were extracted from the regions of interest on ultrasound. The diagnostic performances of the parameters for diagnosing and differentiating LT were calculated. Of the histogram and GLCM parameters, the mean on histogram had the highest Az (0.63) and VUS (0.303). As the degrees of LT increased, the mean decreased and the standard deviation and entropy increased. The mean on histogram from gray-scale ultrasound showed the best diagnostic performance as a single parameter in differentiating LT according to pathologic grade as well as in diagnosing LT. Copyright © 2016 Elsevier Ltd. All rights reserved.
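A gray level co-occurrence matrix can be computed directly with NumPy. This generic sketch shows only the counting step; the quantization level and pixel offset are illustrative choices, not the study's settings.

```python
import numpy as np

def glcm(gray, dx=1, dy=0, levels=8):
    """Gray level co-occurrence matrix sketch: the uint8 image is
    quantized to `levels` gray levels, and entry (i, j) counts pixel
    pairs at offset (dx, dy) with quantized levels i and j."""
    q = (gray.astype(np.int64) * levels) // 256   # quantize 0..255 -> 0..levels-1
    m = np.zeros((levels, levels), dtype=np.int64)
    h, w = q.shape
    a = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    np.add.at(m, (a.ravel(), b.ravel()), 1)
    return m
```

Texture statistics such as entropy or contrast are then computed from the normalized matrix `m / m.sum()`.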

  2. A Document Imaging Technique for Implementing Electronic Loan Approval Process

    Directory of Open Access Journals (Sweden)

    J. Manikandan

    2015-04-01

    Full Text Available Image processing is one of the leading technologies of computer applications. Image processing is a type of signal processing: the input to an image processor is an image or video frame, and the output is an image or a subset of the image [1]. Computer graphics and computer vision processes use image processing techniques. Image processing systems are used in various environments such as medical fields, computer-aided design (CAD), research fields, crime investigation fields and military fields. In this paper, we propose a document image processing technique for establishing an electronic loan approval process (E-LAP) [2]. The loan approval process has traditionally been tedious; the E-LAP system attempts to reduce its complexity. Customers log in to fill out the loan application form online with all details and submit the form. The loan department then processes the submitted form and sends an acknowledgement mail via E-LAP to the requesting customer with the list of documents required for the loan approval process [3]. The customer can then upload scanned copies of all required documents. All of this interaction between customer and bank takes place through the E-LAP system.

  3. Color Multifocus Image Fusion Using Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    S. Savić

    2013-11-01

    Full Text Available In this paper, a recently proposed grayscale multifocus image fusion method based on the first level of Empirical Mode Decomposition (EMD) has been extended to color images. In addition, this paper deals with low-contrast multifocus image fusion. The major advantages of the proposed methods are simplicity, absence of artifacts and control of contrast, which is not the case with other pyramidal multifocus fusion methods. The efficiency of the proposed method is tested subjectively and with a vector-gradient-based objective measure proposed in this paper for multifocus color image fusion. Subjective analysis performed on a multifocus image dataset has shown its superiority to the existing EMD- and DWT-based methods. The objective measures of grayscale and color image fusion show significantly better scores for this method than for the classic complex EMD fusion method.

  4. Grayscale optical correlator for real-time onboard ATR

    Science.gov (United States)

    Chao, Tien-Hsin; Zhou, Hanying; Reyes, George F.

    2001-03-01

    The Jet Propulsion Laboratory has been developing a grayscale optical correlator (GOC) for a variety of automatic target recognition (ATR) applications. As reported in previous papers, a 128 X 128 camcorder-sized GOC has been demonstrated for real-time field ATR demos. In this paper, we will report the recent development of a prototype 512 X 512 GOC utilizing a new miniature ferroelectric liquid crystal spatial light modulator with a 7-micrometer pixel pitch. Experimental demonstration of ATR applications using this new GOC will be presented. The potential of developing a matchbox-sized GOC will also be discussed. A new application of synthesizing complex-valued correlation filters using this real-axis 512 X 512 SLM will also be included.

  5. Adaptive removal of background and white space from document images using seam categorization

    Science.gov (United States)

    Fillion, Claude; Fan, Zhigang; Monga, Vishal

    2011-03-01

    Document images are obtained regularly by rasterization of document content and as scans of printed documents. Resizing via background and white space removal is often desired for better consumption of these images, whether on displays or in print. While white space and background are easy to identify in images, existing methods such as naïve removal and content aware resizing (seam carving) each have limitations that can lead to undesirable artifacts, such as uneven spacing between lines of text or poor arrangement of content. An adaptive method based on image content is hence needed. In this paper we propose an adaptive method to intelligently remove white space and background content from document images. Document images are different from pictorial images in structure. They typically contain objects (text letters, pictures and graphics) separated by uniform background, which include both white paper space and other uniform color background. Pixels in uniform background regions are excellent candidates for deletion if resizing is required, as they introduce less change in document content and style, compared with deletion of object pixels. We propose a background deletion method that exploits both local and global context. The method aims to retain the document structural information and image quality.
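A naive row-oriented version of white-space removal can illustrate the idea. The paper's seam categorization and content-adaptive logic are omitted, and the threshold values below are arbitrary.

```python
import numpy as np

def remove_uniform_rows(img, var_thresh=5.0, keep=2):
    """Drop image rows whose grayscale variance is below var_thresh,
    keeping `keep` rows of each blank run so spacing between lines of
    text is reduced rather than fully collapsed."""
    arr = np.asarray(img, dtype=np.float64)
    blank = arr.var(axis=1) < var_thresh      # candidate white-space rows
    keep_mask = np.ones(arr.shape[0], dtype=bool)
    run = 0
    for i, b in enumerate(blank):
        run = run + 1 if b else 0
        if run > keep:                         # delete only the excess rows
            keep_mask[i] = False
    return np.asarray(img)[keep_mask]
```

Keeping a few rows of each blank run is a crude stand-in for the paper's goal of preserving document structure: deleting every background row would produce exactly the uneven-spacing artifacts the abstract warns about.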

  6. Cross-calibration of Fuji TR image plate and RAR 2492 x-ray film to determine the response of a DITABIS Super Micron image plate scanner

    Energy Technology Data Exchange (ETDEWEB)

    Dunham, G., E-mail: gsdunha@sandia.gov; Harding, E. C.; Loisel, G. P.; Lake, P. W.; Nielsen-Weber, L. B. [Sandia National Laboratories, Albuquerque, New Mexico 87185 (United States)

    2016-11-15

    Fuji TR image plate is frequently used as a replacement detector medium for x-ray imaging and spectroscopy diagnostics at NIF, Omega, and Z facilities. However, the familiar Fuji BAS line of image plate scanners is no longer supported by the industry, and so a replacement scanning system is needed. While the General Electric Typhoon line of scanners could replace the Fuji systems, the shift away from photostimulated luminescence (PSL) units to 16-bit grayscale Tag Image File Format (TIFF) leaves a discontinuity when comparing data collected from both systems. For the purposes of quantitative spectroscopy, a known unit of intensity applied to the grayscale values of the TIFF is needed. The DITABIS Super Micron image plate scanning system was tested and shown to potentially rival the resolution and dynamic range of Kodak RAR 2492 x-ray film. However, the absolute sensitivity of the scanner is unknown. In this work, a methodology to cross-calibrate Fuji TR image plate and the absolutely calibrated Kodak RAR 2492 x-ray film is presented. Details of the experimental configurations used are included. An energy-dependent scale factor to convert Fuji TR IP scanned on a DITABIS Super Micron scanner from 16-bit grayscale TIFF to intensity units (i.e., photons per square micron) is discussed.

  7. Document image mosaicing: A novel approach

    Indian Academy of Sciences (India)


    MS received 28 April 2003; revised 22 July 2003. Abstract. ... Hence, document image mosaicing is the process of merging split ... Case 2: Algorithm 2 is an improved version of algorithm 1 which eliminates the drawbacks of ... One of the authors (PS) thanks the All India Council for Technical Education, New Delhi for.

  8. Segmentation of Brain Tissues from Magnetic Resonance Images Using Adaptively Regularized Kernel-Based Fuzzy C-Means Clustering

    Directory of Open Access Journals (Sweden)

    Ahmed Elazab

    2015-01-01

    Full Text Available An adaptively regularized kernel-based fuzzy C-means clustering framework is proposed for segmentation of brain magnetic resonance images. The framework yields three algorithms, in which the local average grayscale is replaced by the grayscale of the average-filtered, median-filtered, and devised weighted images, respectively. The algorithms employ the heterogeneity of grayscales in the neighborhood, exploit this measure for local contextual information, and replace the standard Euclidean distance with Gaussian radial basis kernel functions. The main advantages are adaptiveness to local context, enhanced robustness that preserves image details, independence of clustering parameters, and decreased computational costs. The algorithms have been validated against both synthetic and clinical magnetic resonance images with different types and levels of noise and compared with 6 recent soft clustering algorithms. Experimental results show that the proposed algorithms are superior in preserving image details and segmentation accuracy while maintaining a low computational complexity.
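For reference, plain fuzzy C-means on 1-D grayscale features looks like the following. The paper's kernel distance, adaptive regularization, and filtered-image variants are omitted; initialization by quantiles is an illustrative choice.

```python
import numpy as np

def fuzzy_cmeans(x, c=3, m=2.0, iters=50):
    """Baseline fuzzy C-means on grayscale values. Memberships follow the
    standard update u_ij proportional to d_ij^(-2/(m-1)), normalized over
    clusters; centers are the membership-weighted means."""
    x = np.asarray(x, dtype=np.float64).ravel()
    centers = np.quantile(x, np.linspace(0.1, 0.9, c))  # spread-out init
    for _ in range(iters):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12   # c x n distances
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=0)                                   # memberships
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)
    return centers, u
```

The proposed framework replaces the absolute distance here with a Gaussian kernel distance and adds a neighborhood regularization term; this baseline is what those modifications improve on.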

  9. Ns-scaled time-gated fluorescence lifetime imaging for forensic document examination

    Science.gov (United States)

    Zhong, Xin; Wang, Xinwei; Zhou, Yan

    2018-01-01

    A method of ns-scaled time-gated fluorescence lifetime imaging (TFLI) is proposed to distinguish different fluorescent substances in forensic document examination. Compared with Video Spectral Comparator (VSC) which can examine fluorescence intensity images only, TFLI can detect questioned documents like falsification or alteration. TFLI system can enhance weak signal by accumulation method. The two fluorescence intensity images of the interval delay time tg are acquired by ICCD and fitted into fluorescence lifetime image. The lifetimes of fluorescence substances are represented by different colors, which make it easy to detect the fluorescent substances and the sequence of handwritings. It proves that TFLI is a powerful tool for forensic document examination. Furthermore, the advantages of TFLI system are ns-scaled precision preservation and powerful capture capability.

  10. A grayscale pneumatic micro-valve for use in a reconfigurable tactile tablet for vision-impaired individuals

    International Nuclear Information System (INIS)

    Schneider, Joseph Devin; Rebolledo-Mendez, Jovan David; McNamara, Shamus

    2015-01-01

    The design, fabrication, and characterization of a strained bilayer film for use in a micro-valve for a reconfigurable tactile tablet for vision-impaired individuals is presented. The bilayer film consists of a compressive and tensile layer to cause the film to coil and retract from the gas channel when the micro-valve is in the open position. A novel support structure that improves yield and controls the direction of coiling is demonstrated. An array of 225 strained bilayer films was designed and fabricated. Each strained bilayer film was able to be actuated individually with a voltage that ranged between 60 and 70 V. The relationship between the applied voltage and the percentage open of the micro-valve is found to be linear over an extended voltage range, enabling the reconfigurable tactile tablet to produce the equivalent of a grayscale image. (paper)

  11. Example-Based Image Colorization Using Locality Consistent Sparse Representation.

    Science.gov (United States)

    Li, Bo; Zhao, Fuchen; Su, Zhuo; Liang, Xiangguo; Lai, Yu-Kun; Rosin, Paul L.

    2017-11-01

    Image colorization aims to produce a natural looking color image from a given gray-scale image, which remains a challenging problem. In this paper, we propose a novel example-based image colorization method exploiting a new locality consistent sparse representation. Given a single reference color image, our method automatically colorizes the target gray-scale image by sparse pursuit. For efficiency and robustness, our method operates at the superpixel level. We extract low-level intensity features, mid-level texture features, and high-level semantic features for each superpixel, which are then concatenated to form its descriptor. The collection of feature vectors for all the superpixels from the reference image composes the dictionary. We formulate colorization of target superpixels as a dictionary-based sparse reconstruction problem. Inspired by the observation that superpixels with similar spatial location and/or feature representation are likely to match spatially close regions from the reference image, we further introduce a locality promoting regularization term into the energy formulation, which substantially improves the matching consistency and subsequent colorization results. Target superpixels are colorized based on the chrominance information from the dominant reference superpixels. Finally, to further improve coherence while preserving sharpness, we develop a new edge-preserving filter for chrominance channels with the guidance from the target gray-scale image. To the best of our knowledge, this is the first work on sparse pursuit image colorization from single reference images. Experimental results demonstrate that our colorization method outperforms the state-of-the-art methods, both visually and quantitatively using a user study.

  12. Fast Fourier single-pixel imaging via binary illumination.

    Science.gov (United States)

    Zhang, Zibang; Wang, Xueying; Zheng, Guoan; Zhong, Jingang

    2017-09-20

    Fourier single-pixel imaging (FSI) employs Fourier basis patterns for encoding spatial information and is capable of reconstructing high-quality two-dimensional and three-dimensional images. Fourier-domain sparsity in natural scenes allows FSI to recover sharp images from undersampled data. The original FSI demonstration, however, requires grayscale Fourier basis patterns for illumination. This requirement imposes a limitation on the imaging speed as digital micro-mirror devices (DMDs) generate grayscale patterns at a low refreshing rate. In this paper, we report a new strategy to increase the speed of FSI by two orders of magnitude. In this strategy, we binarize the Fourier basis patterns based on upsampling and error diffusion dithering. We demonstrate a 20,000 Hz projection rate using a DMD and capture 256-by-256-pixel dynamic scenes at a speed of 10 frames per second. The reported technique substantially accelerates image acquisition speed of FSI. It may find broad imaging applications at wavebands that are not accessible using conventional two-dimensional image sensors.
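Error-diffusion dithering, the core of the binarization strategy described above, can be sketched with the common Floyd-Steinberg kernel. The paper combines dithering with upsampling, which is omitted here, and this particular kernel is an assumption.

```python
import numpy as np

def floyd_steinberg(gray):
    """Binarize a grayscale pattern by Floyd-Steinberg error diffusion:
    each pixel is thresholded at 0.5 and the quantization error is
    distributed to the right and lower neighbors (7/16, 3/16, 5/16, 1/16)."""
    img = gray.astype(np.float64) / 255.0
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = int(new)
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

Because the binary output preserves the local average of the grayscale input, the dithered pattern can be projected at the DMD's full binary refresh rate while still approximating the grayscale Fourier basis.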

  13. High-performance method of morphological medical image processing

    Directory of Open Access Journals (Sweden)

    Ryabykh M. S.

    2016-07-01

    Full Text Available The article shows an implementation of the grayscale morphology vHGW algorithm for selecting borders in medical images. Image processing is executed using OpenMP and NVIDIA CUDA technology for images with different resolutions and different sizes of the structuring element.

  14. Non-Local Sparse Image Inpainting for Document Bleed-Through Removal

    Directory of Open Access Journals (Sweden)

    Muhammad Hanif

    2018-05-01

    Full Text Available Bleed-through is a frequent, pervasive degradation in ancient manuscripts, which is caused by ink seeped from the opposite side of the sheet. Bleed-through, appearing as an extra interfering text, hinders document readability and makes it difficult to decipher the information contents. Digital image restoration techniques have been successfully employed to remove or significantly reduce this distortion. This paper proposes a two-step restoration method for documents affected by bleed-through, exploiting information from the recto and verso images. First, the bleed-through pixels are identified, based on a non-stationary, linear model of the two texts overlapped in the recto-verso pair. In the second step, a dictionary learning-based sparse image inpainting technique, with non-local patch grouping, is used to reconstruct the bleed-through-contaminated image information. An overcomplete sparse dictionary is learned from the bleed-through-free image patches, which is then used to estimate a befitting fill-in for the identified bleed-through pixels. The non-local patch similarity is employed in the sparse reconstruction of each patch, to enforce the local similarity. Thanks to the intrinsic image sparsity and non-local patch similarity, the natural texture of the background is well reproduced in the bleed-through areas, and even a possible overestimation of the bleed through pixels is effectively corrected, so that the original appearance of the document is preserved. We evaluate the performance of the proposed method on the images of a popular database of ancient documents, and the results validate the performance of the proposed method compared to the state of the art.

  15. Digital grayscale printing for patterned transparent conducting Ag electrodes and their applications in flexible electronics

    DEFF Research Database (Denmark)

    Gupta, Ritu; Hösel, Markus; Jensen, Jacob

    2014-01-01

    Grayscale (halftone) laser printing is developed as a low-cost and solution processable fabrication method for ITO-free, semi-transparent and conducting Ag electrodes extendable over large area on a flexible substrate. The transmittance and sheet resistance is easily tunable by varying the graysc...

  16. Mapping the Salinity Gradient in a Microfluidic Device with Schlieren Imaging

    Directory of Open Access Journals (Sweden)

    Chen-li Sun

    2015-05-01

    Full Text Available This work presents the use of schlieren imaging to quantify the salinity gradients in a microfluidic device. By partially blocking the back focal plane of the objective lens, the schlieren microscope produces an image with patterns that correspond to the spatial derivative of the refractive index in the specimen. Since salinity variation leads to a change in refractive index, the fluid mixing of an aqueous salt solution of known concentration and water in a T-microchannel is used to establish the relation between salinity gradients and grayscale readouts. This relation is then employed to map the salinity gradients in the target microfluidic device from the grayscale readouts of the corresponding micro-schlieren image. For saline solutions with salinity close to that of seawater, the grayscale readouts vary linearly with the salinity gradient, and the regression line is independent of the flow condition and the salinity of the injected solution. It is shown that the schlieren technique is well suited to quantify salinity gradients in microfluidic devices, for it provides a spatially resolved, non-invasive, full-field measurement.
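Since the relation between grayscale readout and salinity gradient is reported to be linear, the calibration reduces to fitting and applying a line. A minimal sketch, with illustrative array names:

```python
import numpy as np

def calibrate_and_map(readouts, known_gradients, target_readouts):
    """Fit the linear relation between schlieren grayscale readouts and
    known salinity gradients (from the T-channel mixing experiment),
    then map readouts from the target device to salinity gradients."""
    slope, intercept = np.polyfit(readouts, known_gradients, 1)
    return slope * np.asarray(target_readouts, dtype=np.float64) + intercept
```

Because the regression line is reported to be independent of flow condition and injected salinity, a single fit can be reused across target measurements.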

  17. Temporal high-pass non-uniformity correction algorithm based on grayscale mapping and hardware implementation

    Science.gov (United States)

    Jin, Minglei; Jin, Weiqi; Li, Yiyang; Li, Shuo

    2015-08-01

    In this paper, we propose a novel scene-based non-uniformity correction algorithm for infrared image processing: a temporal high-pass non-uniformity correction algorithm based on grayscale mapping (THP and GM). The main sources of non-uniformity are (1) detector fabrication inaccuracies, (2) non-linearity and variations in the read-out electronics, and (3) optical path effects. Non-uniformity is reduced by non-uniformity correction (NUC) algorithms, which are often divided into calibration-based (CBNUC) and scene-based (SBNUC) algorithms. Because non-uniformity drifts temporally, CBNUC algorithms must be repeated by inserting into the view a uniform radiation source, which SBNUC algorithms do not need, so SBNUC algorithms have become an essential part of infrared imaging systems. However, the poor robustness of SBNUC algorithms often leads to two defects, artifacts and over-correction; meanwhile, due to their complicated calculation processes and large storage consumption, hardware implementation of SBNUC algorithms is difficult, especially on a Field Programmable Gate Array (FPGA) platform. The THP and GM algorithm proposed in this paper can eliminate the non-uniformity without causing these defects. The hardware implementation of the algorithm, based solely on an FPGA, has two advantages: (1) low resource consumption and (2) small hardware delay (less than 20 lines). It can be transplanted to a variety of infrared detectors equipped with an FPGA image processing module, and it reduces both stripe non-uniformity and ripple non-uniformity.
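The temporal high-pass idea can be sketched per pixel with a recursive low-pass estimate. This is only the classic THP filter; the paper's grayscale-mapping refinement and FPGA specifics are not reproduced, and the time constant is an illustrative value.

```python
import numpy as np

def thp_nuc(frames, M=16):
    """Classic temporal high-pass NUC sketch: each pixel's slowly varying
    component (fixed-pattern non-uniformity) is tracked with a recursive
    IIR low-pass filter and subtracted, leaving the scene detail."""
    f = frames[0].astype(np.float64)          # running low-pass estimate
    out = []
    for x in frames:
        x = x.astype(np.float64)
        f += (x - f) / M                      # low-pass update, time constant M
        out.append(x - f)                     # high-pass output
    return out
```

A static fixed pattern is driven to zero in the output, while a sudden scene change passes through almost unattenuated, which is exactly the behavior (and the ghosting risk on static scenes) that motivates the paper's grayscale-mapping refinement.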

  18. Ancient administrative handwritten documents: X-ray analysis and imaging

    International Nuclear Information System (INIS)

    Albertin, F.; Astolfo, A.; Stampanoni, M.; Peccenini, Eva; Hwu, Y.; Kaplan, F.; Margaritondo, G.

    2015-01-01

    The heavy-element content of ink in ancient administrative documents makes it possible to detect the characters with different synchrotron imaging techniques, based on attenuation or refraction. This is the first step in the direction of non-interactive virtual X-ray reading. Handwritten characters in administrative antique documents from three centuries have been detected using different synchrotron X-ray imaging techniques. Heavy elements in ancient inks, present even for everyday administrative manuscripts as shown by X-ray fluorescence spectra, produce attenuation contrast. In most cases the image quality is good enough for tomography reconstruction in view of future applications to virtual page-by-page ‘reading’. When attenuation is too low, differential phase contrast imaging can reveal the characters from refractive index effects. The results are potentially important for new information harvesting strategies, for example from the huge Archivio di Stato collection, the objective of the Venice Time Machine project.

  19. Ancient administrative handwritten documents: X-ray analysis and imaging

    Energy Technology Data Exchange (ETDEWEB)

    Albertin, F., E-mail: fauzia.albertin@epfl.ch [Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland); Astolfo, A. [Paul Scherrer Institut (PSI), Villigen (Switzerland); Stampanoni, M. [Paul Scherrer Institut (PSI), Villigen (Switzerland); ETHZ, Zürich (Switzerland); Peccenini, Eva [University of Ferrara (Italy); Technopole of Ferrara (Italy); Hwu, Y. [Academia Sinica, Taipei, Taiwan (China); Kaplan, F. [Ecole Polytechnique Fédérale de Lausanne (EPFL) (Switzerland); Margaritondo, G. [Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland)

    2015-01-30

    The heavy-element content of ink in ancient administrative documents makes it possible to detect the characters with different synchrotron imaging techniques, based on attenuation or refraction. This is the first step in the direction of non-interactive virtual X-ray reading. Handwritten characters in administrative antique documents from three centuries have been detected using different synchrotron X-ray imaging techniques. Heavy elements in ancient inks, present even for everyday administrative manuscripts as shown by X-ray fluorescence spectra, produce attenuation contrast. In most cases the image quality is good enough for tomography reconstruction in view of future applications to virtual page-by-page ‘reading’. When attenuation is too low, differential phase contrast imaging can reveal the characters from refractive index effects. The results are potentially important for new information harvesting strategies, for example from the huge Archivio di Stato collection, objective of the Venice Time Machine project.

  20. Script Identification from Printed Indian Document Images and Performance Evaluation Using Different Classifiers

    OpenAIRE

    Sk Md Obaidullah; Anamika Mondal; Nibaran Das; Kaushik Roy

    2014-01-01

    Identification of script from document images is an active area of research under document image processing for a multilingual/ multiscript country like India. In this paper the real life problem of printed script identification from official Indian document images is considered and performances of different well-known classifiers are evaluated. Two important evaluating parameters, namely, AAR (average accuracy rate) and MBT (model building time), are computed for this performance analysi...

  1. Fractal Image Coding with Digital Watermarks

    Directory of Open Access Journals (Sweden)

    Z. Klenovicova

    2000-12-01

    Full Text Available This paper presents some results of implementing digital watermarking methods in image coding based on fractal principles. The paper focuses on two possible approaches to embedding digital watermarks into the fractal code of images: embedding them into the parameters for the position of similar blocks, and into the coefficients of block similarity. Both algorithms were analyzed and verified on grayscale static images.

  2. Image matching for digital close-range stereo photogrammetry based on constraints of Delaunay triangulated network and epipolar-line

    Science.gov (United States)

    Zhang, K.; Sheng, Y. H.; Li, Y. Q.; Han, B.; Liang, Ch.; Sha, W.

    2006-10-01

    In the field of digital photogrammetry and computer vision, the determination of conjugate points in a stereo image pair, referred to as "image matching," is the critical step in realizing automatic surveying and recognition. Traditional matching methods encounter problems in digital close-range stereo photogrammetry because changes in gray scale or texture are not obvious in close-range stereo images. Their main shortcoming is that the geometric information of matching points is not fully used, which leads to wrong matches in regions with poor texture. To make full use of both geometric and gray-scale information, a new stereo image matching algorithm is proposed in this paper that considers the characteristics of digital close-range photogrammetry. Compared with traditional methods, the new algorithm makes three improvements. Firstly, shape factor, fuzzy mathematics, and gray-scale projection are introduced into the design of a synthetic matching measure. Secondly, the topological connection relations of matching points in the Delaunay triangulated network and the epipolar line are used to decide the matching order and narrow the search scope for the conjugate point of each matching point. Lastly, the theory of parameter adjustment with constraints is introduced into least-squares image matching to carry out subpixel-level matching under the epipolar-line constraint. The new algorithm is applied to actual stereo images of a building taken by a digital close-range photogrammetric system. The experimental results show that the algorithm has a higher matching speed and accuracy than a pyramid image matching algorithm based on gray-scale correlation.

  3. Red blood cell image enhancement techniques for cells with ...

    African Journals Online (AJOL)

    quality or challenging conditions of the images such as poor illumination of blood smear and most importantly overlapping RBC. The algorithm comprises of two RBC segmentation that can be selected based on the image quality, circle mask technique and grayscale blood smear image processing. Detail explanations ...

  4. Wavelet/scalar quantization compression standard for fingerprint images

    Energy Technology Data Exchange (ETDEWEB)

    Brislawn, C.M.

    1996-06-12

    The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (the wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
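The subband-decomposition-plus-scalar-quantization pipeline can be illustrated with a one-level Haar transform. The actual WSQ standard specifies a deeper biorthogonal wavelet filter bank and entropy coding, omitted here, and the `steps` values below are arbitrary illustrative choices, coarser for high-frequency subbands as in subband coding.

```python
import numpy as np

def haar2d(x):
    """One level of a 2-D Haar wavelet transform (sketch of the
    subband decomposition step only)."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0
    d = (x[0::2, :] - x[1::2, :]) / 2.0
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((2 * h, 2 * w))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x

def quantize(band, step):
    """Uniform scalar quantization: map coefficients to integer bins."""
    return np.round(band / step)

def dequantize(idx, step):
    return idx * step

img = np.random.default_rng(1).integers(0, 256, (64, 64)).astype(float)
ll, lh, hl, hh = haar2d(img)
steps = {'ll': 1.0, 'lh': 4.0, 'hl': 4.0, 'hh': 8.0}   # illustrative step sizes
rec = ihaar2d(dequantize(quantize(ll, steps['ll']), steps['ll']),
              dequantize(quantize(lh, steps['lh']), steps['lh']),
              dequantize(quantize(hl, steps['hl']), steps['hl']),
              dequantize(quantize(hh, steps['hh']), steps['hh']))
```

A real WSQ codec entropy-codes the quantizer indices; here the reconstruction error is simply bounded by the quantization steps.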

  5. LOCAL BINARIZATION FOR DOCUMENT IMAGES CAPTURED BY CAMERAS WITH DECISION TREE

    Directory of Open Access Journals (Sweden)

    Naser Jawas

    2012-07-01

    Full Text Available Character recognition in a document image captured by a digital camera requires a good binary image as input for separating the text from the background. Global binarization methods do not provide such good separation because of uneven lighting in camera-captured images. Local binarization methods overcome this problem but require a way to partition the large image into proper local windows. In this paper, we propose a local binarization method with dynamic image partitioning that uses an integral image and a decision tree for the binarization decision. The integral image is used to estimate the number of text lines in the document image, which is then used to divide the document into local windows. The decision tree chooses a threshold for every local window. The results show that the proposed method separates the text from the background better than global thresholding, with a best OCR accuracy of 99.4% on the binarized images.
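The integral-image part of such an approach can be sketched as follows: a summed-area table gives every local-window sum in constant time, so a per-pixel local-mean threshold costs O(N) regardless of window size. The decision-tree threshold selection and line-count-based partitioning from the paper are replaced here by a fixed window and bias, both hypothetical values.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row/left column for easy windows."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def local_mean_threshold(img, window=15, bias=10):
    """Binarize with a per-pixel threshold = local mean - bias.

    Each window sum comes from four integral-image lookups, so the
    whole pass is O(N) in the number of pixels.
    """
    h, w = img.shape
    ii = integral_image(img)
    r = window // 2
    out = np.zeros_like(img, dtype=np.uint8)
    for y in range(h):
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        for x in range(w):
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            area = (y1 - y0) * (x1 - x0)
            s = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
            out[y, x] = 255 if img[y, x] > s / area - bias else 0
    return out

# Dark text on an uneven background: a global threshold would fail,
# but the local threshold follows the illumination gradient.
grad = np.linspace(100, 220, 64)
page = np.tile(grad, (64, 1)).astype(np.int64)   # left-to-right gradient
page[30:34, 5:60] -= 80                          # a dark "text" stroke
binary = local_mean_threshold(page)
```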

  6. Correspondence normalized ghost imaging on compressive sensing

    International Nuclear Information System (INIS)

    Zhao Sheng-Mei; Zhuang Peng

    2014-01-01

    Ghost imaging (GI) offers great potential with respect to conventional imaging techniques. An open problem in GI systems is that a long acquisition time is required to reconstruct images with good visibility and signal-to-noise ratios (SNRs). In this paper, we propose a new scheme that achieves good performance with a shorter reconstruction time, called correspondence normalized ghost imaging based on compressive sensing (CCNGI). The scheme enhances the signal-to-noise performance by normalizing the reference beam intensity to eliminate the noise caused by laser power fluctuations, and reduces the reconstruction time by using both compressive sensing (CS) and time-correspondence imaging (CI) techniques. It is shown that the image quality is improved and the reconstruction time is reduced using the CCNGI scheme. For the two-grayscale "double-slit" image, the mean square errors (MSEs) obtained by the GI and normalized GI (NGI) schemes with 5000 measurements are 0.237 and 0.164, respectively, whereas the CCNGI scheme attains 0.021 with 2500 measurements. For the eight-grayscale "lena" object, the peak signal-to-noise ratios (PSNRs) are 10.506 and 13.098 using the GI and NGI schemes, respectively, while the value rises to 16.198 using CCNGI. The results also show that a high-fidelity GI reconstruction has been achieved using only 44% of the number of measurements corresponding to the Nyquist limit for the two-grayscale "double-slit" object. The quality of the images reconstructed with CCNGI is almost the same as that from GI via sparsity constraints (GISC), with a shorter reconstruction time. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)
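The correlation step of conventional and normalized GI can be sketched with synthetic speckle patterns and a simulated single-pixel (bucket) detector. The compressive-sensing and time-correspondence components that distinguish CCNGI are omitted; this only illustrates why dividing the bucket signal by the reference-arm intensity suppresses source-power fluctuations.

```python
import numpy as np

rng = np.random.default_rng(42)
obj = np.zeros((32, 32))
obj[:, 10:13] = 1.0
obj[:, 19:22] = 1.0                      # a two-grayscale "double slit"
n_meas = 4000

patterns = rng.random((n_meas, 32, 32))  # speckle reference intensities
bucket = (patterns * obj).sum(axis=(1, 2))   # single-pixel detector signal
ref_sum = patterns.sum(axis=(1, 2))          # total reference-arm intensity

# Conventional GI: correlate bucket fluctuations with the patterns.
gi = (bucket[:, None, None] * patterns).mean(0) - bucket.mean() * patterns.mean(0)

# Normalized GI: divide the bucket by the total reference intensity first,
# which cancels frame-to-frame source-power fluctuations.
b_norm = bucket / ref_sum
ngi = (b_norm[:, None, None] * patterns).mean(0) - b_norm.mean() * patterns.mean(0)
```

Both reconstructions recover the slits as regions of elevated correlation; with noisy or fluctuating illumination the normalized estimate degrades more gracefully.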

  7. An optical color image watermarking scheme by using compressive sensing with human visual characteristics in gyrator domain

    Science.gov (United States)

    Liansheng, Sui; Bei, Zhou; Zhanmin, Wang; Ailing, Tian

    2017-05-01

    A novel optical color image watermarking scheme considering human visual characteristics is presented in the gyrator transform domain. Initially, an appropriate reference image is constructed from significant blocks chosen from the grayscale host image by evaluating visual characteristics such as visual entropy and edge entropy. The three components of the color watermark image are compressed based on compressive sensing, and the corresponding results are combined to form the grayscale watermark. Then, the frequency coefficients of the watermark image are fused into the frequency data of the gyrator-transformed reference image. The fused result is inversely transformed and partitioned, and eventually the watermarked image is obtained by mapping the resultant blocks into their original positions. The scheme can reconstruct the watermark with high perceptual quality and has enhanced security due to the high sensitivity of the secret keys. Importantly, the scheme can be implemented easily under the framework of double random phase encoding with the 4f optical system. To the best of our knowledge, this is the first report on embedding a color watermark into a grayscale host image, which will be outside an attacker's expectations. Simulation results are given to verify the feasibility of the scheme and its superior robustness to noise and occlusion.

  8. [Application of the grayscale standard display function to general purpose liquid-crystal display monitors for clinical use].

    Science.gov (United States)

    Tanaka, Nobukazu; Naka, Kentaro; Sueoka, Masaki; Higashida, Yoshiharu; Morishita, Junji

    2010-01-20

    Interpretation of medical images has been shifting to soft-copy reading on liquid-crystal display (LCD) monitors. Guidelines in Japan and other countries recommend that the display function of a medical-grade LCD monitor used for soft-copy reading be calibrated to the grayscale standard display function (GSDF). In this study, the luminance and display functions of eight general purpose LCD monitors (five models) were measured to understand their characteristics. Moreover, the display function (gamma 2.2 or gamma 1.8) of the general purpose LCD monitors was converted to the GSDF through the use of a look-up table, and the detectability of a simulated lung nodule in a chest x-ray image was examined. As a result, the maximum luminance, contrast ratio, and luminance uniformity of the general purpose LCD monitors, except for two monitors of one model, met the management grade 1 standard of guideline JESRA X-0093-2005. In addition, the detectability of the simulated lung nodule in the mediastinal space was clearly improved by converting the display function of a general purpose LCD monitor to the GSDF.
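A look-up-table conversion of a gamma-2.2 display toward a perceptually standardized response can be sketched as below. Note the target curve here is a simple log-linear placeholder, not the actual DICOM GSDF (which is defined by a JND-index formula in DICOM PS 3.14), and the display luminances are modeled rather than measured; real calibration interpolates the standard curve from photometer measurements.

```python
import numpy as np

def build_display_lut(l_min=0.5, l_max=400.0, levels=256):
    """Build a LUT remapping 8-bit grayscale so that a gamma-2.2
    display approximates a perceptually linear response.

    ASSUMPTION: the target is equal steps in log-luminance, a crude
    stand-in for the DICOM GSDF; l_min/l_max are example luminances
    in cd/m^2.
    """
    d = np.arange(levels) / (levels - 1)
    # Modeled luminance of the gamma-2.2 display at each driving level.
    lum_display = l_min + (l_max - l_min) * d ** 2.2
    # Placeholder perceptual target: geometric spacing of luminance.
    lum_target = l_min * (l_max / l_min) ** d
    # For each input level, pick the driving level whose (modeled)
    # display luminance is closest to the target luminance.
    lut = np.array([np.abs(lum_display - lt).argmin() for lt in lum_target],
                   dtype=np.uint8)
    return lut

lut = build_display_lut()
remapped = lut[np.arange(256, dtype=np.uint8)]   # apply the LUT to image data
```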

  9. A 1,000 Frames/s Programmable Vision Chip with Variable Resolution and Row-Pixel-Mixed Parallel Image Processors

    Directory of Open Access Journals (Sweden)

    Nanjian Wu

    2009-07-01

    Full Text Available A programmable vision chip with variable resolution and row-pixel-mixed parallel image processors is presented. The chip consists of a CMOS sensor array with row-parallel 6-bit algorithmic ADCs, row-parallel gray-scale image processors, a pixel-parallel SIMD Processing Element (PE) array, and an instruction controller. The resolution of the image in the chip is variable: high resolution for a focused area and low resolution for the general view. It implements gray-scale and binary mathematical morphology algorithms in series to carry out low-level and mid-level image processing and sends out image features for various applications. It can perform image processing at over 1,000 frames/s (fps). A prototype chip with 64 × 64 pixel resolution and 6-bit gray scale was fabricated in a 0.18 μm standard CMOS process. The chip area is 1.5 mm × 3.5 mm; each pixel is 9.5 μm × 9.5 μm and each processing element is 23 μm × 29 μm. The experimental results demonstrate that the chip can perform low-level and mid-level image processing and can be applied to real-time vision applications, such as high-speed target tracking.

  10. Spotting Separator Points at Line Terminals in Compressed Document Images for Text-line Segmentation

    OpenAIRE

    R, Amarnath; Nagabhushan, P.

    2017-01-01

    Line separators are used to segregate text-lines from one another in document image analysis. Finding the separator points at every line terminal in a document image would enable text-line segmentation. In particular, identifying the separators in handwritten text could be a thrilling exercise. Obviously it would be challenging to perform this in the compressed version of a document image and that is the proposed objective in this research. Such an effort would prevent the computational burde...

  11. Gray-scale contrast-enhanced ultrasonography in detecting sentinel lymph nodes: An animal study

    International Nuclear Information System (INIS)

    Wang Yuexiang; Cheng Zhigang; Li Junlai; Tang Jie

    2010-01-01

    Objective: To investigate the usefulness of gray-scale contrast-enhanced ultrasonography for detecting sentinel lymph nodes. Methods: Contrast-enhanced ultrasonography was performed in five normal dogs (four female and one male) after subcutaneous administration of a sonographic contrast agent (Sonovue, Bracco, Milan, Italy). Four distinct regions in each animal were examined. After contrast-enhanced ultrasonography, 0.8 ml of blue dye was injected into the same location as the Sonovue and the sentinel lymph nodes were detected by surgical dissection. The findings of contrast-enhanced ultrasonography were compared with those of the blue dye. Results: Twenty-one sentinel lymph nodes were detected by contrast-enhanced ultrasonography, while 23 were identified by blue dye with surgical dissection. Compared with the blue dye, the detection rate of enhanced ultrasonography for sentinel lymph nodes was 91.3% (21/23). Two patterns of enhancement in the sentinel lymph nodes were observed: complete enhancement (5 sentinel lymph nodes) and partial enhancement (16 sentinel lymph nodes). The lymphatic channels were demonstrated as hyperechoic linear structures leading from the injection site and could be readily followed to their sentinel lymph nodes. Histopathologic examination showed proliferation of lymphatic follicles or lymphatic sinus in partially enhanced sentinel lymph nodes, while normal lymphatic tissue was demonstrated in completely enhanced sentinel lymph nodes. Conclusions: Sonovue combined with gray-scale contrast-enhanced ultrasonography may provide a feasible method for detecting sentinel lymph nodes.

  12. Significance of perceptually relevant image decolorization for scene classification

    Science.gov (United States)

    Viswanathan, Sowmya; Divakaran, Govind; Soman, Kutti Padanyl

    2017-11-01

    Color images contain luminance and chrominance components representing the intensity and color information, respectively. The objective of this paper is to show the significance of incorporating chrominance information to the task of scene classification. An improved color-to-grayscale image conversion algorithm that effectively incorporates chrominance information is proposed using the color-to-gray structure similarity index and singular value decomposition to improve the perceptual quality of the converted grayscale images. The experimental results based on an image quality assessment for image decolorization and its success rate (using the Cadik and COLOR250 datasets) show that the proposed image decolorization technique performs better than eight existing benchmark algorithms for image decolorization. In the second part of the paper, the effectiveness of incorporating the chrominance component for scene classification tasks is demonstrated using a deep belief network-based image classification system developed using dense scale-invariant feature transforms. The amount of chrominance information incorporated into the proposed image decolorization technique is confirmed with the improvement to the overall scene classification accuracy. Moreover, the overall scene classification performance improved by combining the models obtained using the proposed method and conventional decolorization methods.

  13. An Image Enhancement Method Using the Quantum-Behaved Particle Swarm Optimization with an Adaptive Strategy

    Directory of Open Access Journals (Sweden)

    Xiaoping Su

    2013-01-01

    Full Text Available Image enhancement techniques are very important to image processing; they are used to improve image quality or extract fine details in degraded images. In this paper, two novel objective functions based on the normalized incomplete Beta transform function are proposed to evaluate the effectiveness of grayscale and color image enhancement, respectively. Using these objective functions, the parameters of the transform functions are estimated by quantum-behaved particle swarm optimization (QPSO). We also propose an improved QPSO (AQPSO) with an adaptive parameter control strategy. The QPSO and AQPSO algorithms, along with a genetic algorithm (GA) and particle swarm optimization (PSO), are tested on several benchmark grayscale and color images. The results show that QPSO and AQPSO perform better than GA and PSO for the enhancement of these images, and that AQPSO has some advantages over QPSO due to its adaptive parameter control strategy.
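The incomplete Beta transform at the core of such objective functions maps normalized intensities through the regularized incomplete Beta function I_x(a, b). Below is a numpy sketch computing it by numeric integration (`scipy.special.betainc` would do the same); the (a, b) values stand in for the parameters that QPSO would optimize, with a < b brightening dark, low-contrast images.

```python
import numpy as np

def incomplete_beta_transform(img, a=2.0, b=4.0, samples=1024):
    """Map normalized intensities through the regularized incomplete
    Beta function I_x(a, b), evaluated on a grid by cumulative
    numeric integration of t^(a-1) (1-t)^(b-1)."""
    t = np.linspace(0.0, 1.0, samples + 1)[1:-1]   # avoid endpoint singularities
    integrand = t ** (a - 1) * (1 - t) ** (b - 1)
    cdf = np.concatenate(([0.0], np.cumsum(integrand)))
    cdf /= cdf[-1]                                  # normalize so I_1 = 1
    x = (img.astype(np.float64) - img.min()) / max(img.max() - img.min(), 1e-12)
    idx = np.clip((x * (samples - 1)).astype(int), 0, samples - 1)
    return cdf[idx]

dark = np.random.default_rng(3).integers(0, 64, (32, 32))  # dim, low-range image
enhanced = incomplete_beta_transform(dark, a=2.0, b=4.0)   # brightened to [0, 1]
```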

  14. Text extraction method for historical Tibetan document images based on block projections

    Science.gov (United States)

    Duan, Li-juan; Zhang, Xi-qun; Ma, Long-long; Wu, Jian

    2017-11-01

    Text extraction is an important initial step in digitizing historical documents. In this paper, we present a text extraction method for historical Tibetan document images based on block projections. The task of text extraction is treated as a text area detection and location problem. The images are divided equally into blocks, and the blocks are filtered using the categories of their connected components and their corner-point density. By analyzing the projections of the filtered blocks, the approximate text areas can be located and the text regions extracted. Experiments on a dataset of historical Tibetan documents demonstrate the effectiveness of the proposed method.
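The block-projection idea can be sketched as follows: the page is cut into equal blocks and a block is kept as a text candidate when its ink-pixel projection is dense enough. The connected-component and corner-density filters from the paper are omitted, and the block size and density threshold are illustrative values.

```python
import numpy as np

def text_blocks_by_projection(binary, block=16, density_thresh=0.05):
    """Locate candidate text blocks from their ink-pixel projections.

    `binary` is a 0/1 ink mask; each block's horizontal projection
    (ink pixels per row) is averaged, and blocks above the density
    threshold are marked as candidate text areas.
    """
    h, w = binary.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            blk = binary[by * block:(by + 1) * block,
                         bx * block:(bx + 1) * block]
            row_proj = blk.sum(axis=1)            # ink count per row
            mask[by, bx] = row_proj.mean() / block > density_thresh
    return mask

page = np.zeros((64, 64), dtype=np.uint8)
page[20:28, 8:56] = 1                             # one line of "text"
mask = text_blocks_by_projection(page)            # True only in the text band
```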

  15. Extended local binary pattern features for improving settlement type classification of quickbird images

    CSIR Research Space (South Africa)

    Mdakane, L

    2012-11-01

    Full Text Available Despite the fact that image texture features extracted from high-resolution remotely sensed images over urban areas have demonstrated their ability to distinguish different classes, they are still far from being ideal. Multiresolution grayscale...
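The basic LBP operator underlying such texture features assigns each pixel an 8-bit code by thresholding its 3x3 neighbors against the center, and a normalized code histogram serves as the texture descriptor. This is a minimal sketch of the standard operator only; the extended multiresolution variants studied in the paper are not reproduced.

```python
import numpy as np

def lbp_8neighbors(img):
    """3x3 local binary pattern: one bit per neighbor, set when the
    neighbor is >= the center pixel, packed clockwise into a byte."""
    img = img.astype(np.int32)
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),    # clockwise from top-left
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy: img.shape[0] - 1 + dy,
                 1 + dx: img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code

def lbp_histogram(img, bins=256):
    """Texture descriptor: normalized histogram of LBP codes."""
    codes = lbp_8neighbors(img)
    hist = np.bincount(codes.ravel(), minlength=bins).astype(float)
    return hist / hist.sum()

tex = np.random.default_rng(7).integers(0, 256, (32, 32))
feat = lbp_histogram(tex)   # 256-bin descriptor for a classifier
```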

  16. Comparing identically designed grayscale (50 phase level) and binary (5 phase levels) splitters: actual versus modeled performance

    Science.gov (United States)

    Lizotte, Todd E.; Ohar, Orest P.; Tuttle, Tracie

    2006-04-01

    Performance of diffractive optics is determined by a high-quality design and a suitable fabrication process that can actually realize the design. Engineers tasked with developing or implementing a diffractive optic solution in a product need to consider the risks of using grayscale versus binary fabrication processes. In many cases, a grayscale design does not provide the best solution or cost benefit during product development. This fabrication dilemma arises when the engineer has to select a source for design and/or fabrication, and engineers come face to face with reality given that diffractive optic suppliers tend to provide their services on a "best effort" basis, which can be disheartening to an engineer trying to implement diffractive optics. This paper compares and contrasts the design and performance of a 1-to-24-beam, two-dimensional beam splitter fabricated using a fifty (50) phase level grayscale method and a five (5) phase level binary method. Optical modeling data will be presented showing both designs and the performance expected prior to fabrication. An overview of the optical testing methods used will be discussed, including the specific test equipment and metrology techniques used to verify actual optical performance and the dimensional stability of each fabricated optical element. Presentation of the two versions of the splitter will include data on fabrication dimensional errors, split beam-to-beam uniformity, split beam-to-beam spatial size uniformity, and splitter efficiency as compared to the originally intended design performance and models. This is a continuation of work from 2005, Laser Beam Shaping VI.

  17. Comparison of approaches for mobile document image analysis using server supported smartphones

    Science.gov (United States)

    Ozarslan, Suleyman; Eren, P. Erhan

    2014-03-01

    With the recent advances in mobile technologies, new capabilities are emerging, such as mobile document image analysis. However, mobile phones are still less powerful than servers and have resource limitations. One approach to overcoming these limitations is to perform the resource-intensive processes of the application on remote servers. In mobile document image analysis, the most resource-consuming process is Optical Character Recognition (OCR), which is used to extract the text in images captured by mobile phones. In this study, our goal is to compare the in-phone and remote-server processing approaches for mobile document image analysis in order to explore their trade-offs. In the in-phone approach, all processes required for mobile document image analysis run on the mobile phone. In the remote-server approach, the core OCR process runs on the remote server and the other processes run on the mobile phone. Results of the experiments show that the remote-server approach is considerably faster than the in-phone approach in terms of OCR time but adds extra delays, such as network delay. Since compression and downscaling of images significantly reduce file sizes and extra delays, the remote-server approach overall outperforms the in-phone approach in terms of the selected speed and correct-recognition metrics if the gain in OCR time compensates for the extra delays. According to the results of the experiments, using the most preferable settings, the remote-server approach performs better than the in-phone approach in terms of speed with acceptable correct-recognition metrics.

  18. Document Image Processing: Going beyond the Black-and-White Barrier. Progress, Issues and Options with Greyscale and Colour Image Processing.

    Science.gov (United States)

    Hendley, Tom

    1995-01-01

    Discussion of digital document image processing focuses on issues and options associated with greyscale and color image processing. Topics include speed; size of original document; scanning resolution; markets for different categories of scanners, including photographic libraries, publishing, and office applications; hybrid systems; data…

  19. Nonlinear filtering for character recognition in low quality document images

    Science.gov (United States)

    Diaz-Escobar, Julia; Kober, Vitaly

    2014-09-01

    Optical character recognition in scanned printed documents is a well-studied task, where the captured conditions like sheet position, illumination, contrast and resolution are controlled. Nowadays, it is more practical to use mobile devices for document capture than a scanner. So as a consequence, the quality of document images is often poor owing to presence of geometric distortions, nonhomogeneous illumination, low resolution, etc. In this work we propose to use multiple adaptive nonlinear composite filters for detection and classification of characters. Computer simulation results obtained with the proposed system are presented and discussed.

  20. Diagnostic ultrasound imaging for lateral epicondylalgia: a case-control study.

    Science.gov (United States)

    Heales, Luke James; Broadhurst, Nathan; Mellor, Rebecca; Hodges, Paul William; Vicenzino, Bill

    2014-11-01

    Lateral epicondylalgia (LE) is clinically diagnosed as pain over the lateral elbow that is provoked by gripping. Usually, LE responds well to conservative intervention; however, those who fail such treatment require further evaluation, including musculoskeletal ultrasound. Previous studies of musculoskeletal ultrasound have methodological flaws, such as lack of assessor blinding and failure to control for participant age, sex, and arm dominance. The purpose of this study was to assess the diagnostic use of blinded ultrasound imaging in people with clinically diagnosed LE compared with that in a control group matched for age, sex, and arm dominance. Participants (30 with LE and 30 controls) underwent clinical examination as the criterion standard test. Unilateral LE was defined as pain over the lateral epicondyle, which was provoked by palpation, resisted wrist and finger extension, and gripping. Controls without symptoms were matched for age, sex, and arm dominance. Ultrasound investigations were performed by two sonographers using a standardized protocol. Grayscale images were assessed for signs of tendon pathology and rated on a four-point ordinal scale. Power Doppler was used to assess neovascularity and rated on a five-point ordinal scale. The combination of grayscale and power Doppler imaging revealed an overall sensitivity of 90% and specificity of 47%. The positive and negative likelihood ratios for combined grayscale and power Doppler imaging were 1.69 and 0.21, respectively. Although ultrasound imaging helps confirm the absence of LE, when findings are negative for tendinopathic changes, the high prevalence of tendinopathic changes in pain-free controls challenges the specificity of the measure. The validity of ultrasound imaging to confirm tendon pathology in clinically diagnosed LE requires further study with strong methodology.

  1. Fast words boundaries localization in text fields for low quality document images

    Science.gov (United States)

    Ilin, Dmitry; Novikov, Dmitriy; Polevoy, Dmitry; Nikolaev, Dmitry

    2018-04-01

    The paper examines the problem of precise localization of word boundaries in document text zones. Document processing on a mobile device consists of document localization, perspective correction, localization of individual fields, finding words in separate zones, segmentation, and recognition. While capturing an image with a mobile digital camera under uncontrolled conditions, digital noise, perspective distortions, or glares may occur. Further document processing is complicated by its specifics: layout elements, complex backgrounds, static text, document security elements, and a variety of text fonts. Moreover, the problem of word-boundary localization has to be solved at runtime on a mobile CPU with limited computing capabilities under the specified restrictions. At the moment there are several groups of methods optimized for different conditions. Methods for scanned printed text are quick but limited to images of high quality, while methods for text in the wild have an excessively high computational complexity and thus are hardly suitable for running on mobile devices as part of a mobile document recognition system. The method presented in this paper solves a more specialized problem than finding text in natural images. It uses local features, a sliding window, and a lightweight neural network in order to achieve an optimal speed-precision ratio. The algorithm takes 12 ms per field running on an ARM processor of a mobile device. The error rate for boundary localization on a test sample of 8000 fields is 0.3

  2. Concatenated image completion via tensor augmentation and completion

    OpenAIRE

    Bengua, Johann A.; Tuan, Hoang D.; Phien, Ho N.; Do, Minh N.

    2016-01-01

    This paper proposes a novel framework called concatenated image completion via tensor augmentation and completion (ICTAC), which recovers missing entries of color images with high accuracy. Typical images are second- or third-order tensors (2D/3D) depending on whether they are grayscale or color, hence tensor completion algorithms are ideal for their recovery. The proposed framework performs image completion by concatenating copies of a single image that has missing entries into a third-order tensor,...

  3. An image correlation procedure for digitally reconstructed radiographs and electronic portal images

    International Nuclear Information System (INIS)

    Dong, Lei; Boyer, Arthur L.

    1995-01-01

    Purpose: To study a procedure that uses megavoltage digitally reconstructed radiographs (DRRs) calculated from a patient's three-dimensional (3D) computed tomography (CT) data as a reference image for correlation with on-line electronic portal images (EPIs) to detect patient setup errors. Methods and Materials: Megavoltage DRRs were generated by ray tracing through a modified volumetric CT data set in which CT numbers were converted into linear attenuation coefficients for the therapeutic beam energy. The DRR transmission image was transformed to the grayscale window of the EPI by a histogram-matching technique. An alternative approach was to calibrate the transmission DRR using a measured response curve of the electronic portal imaging device (EPID). This forces the calculated transmission fluence values to be distributed in the same range as that of the EPID image. A cross-correlation technique was used to determine the degree of alignment of the patient anatomy found in the EPID image relative to the reference DRR. Results: Phantom studies demonstrated that the correlation procedure had a standard deviation of 0.5 mm and 0.5 deg. in aligning translational shifts and in-plane rotations. Systematic errors were found between a reference DRR and a reference EPID image. The automated grayscale image-correlation process was completed within 3 s on a workstation computer or 12 s on a PC. Conclusion: The alignment procedure allows the direct comparison of a patient's treatment portal designed with a 3D planning computer with a patient's on-line portal image acquired at the treatment unit. The image registration process is automated to the extent that it requires minimal user intervention, and it is fast and accurate enough for on-line clinical applications.
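
    The cross-correlation step of such an alignment procedure can be sketched in a few lines: the translational offset between two images is taken from the peak of their cross-correlation, computed efficiently in the Fourier domain. This is a minimal illustration of the general technique only; the paper's procedure additionally handles in-plane rotation, histogram matching, and EPID calibration, and the function name `shift_by_xcorr` is an invention here.

```python
import numpy as np

def shift_by_xcorr(reference, moving):
    """Estimate the integer (dy, dx) translation that aligns `moving`
    to `reference` from the peak of their circular cross-correlation,
    computed in the Fourier domain."""
    f_ref = np.fft.fft2(reference - reference.mean())
    f_mov = np.fft.fft2(moving - moving.mean())
    xcorr = np.fft.ifft2(f_ref * np.conj(f_mov)).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # wrap peaks past the midpoint back to negative shifts
    shifts = [p - s if p > s // 2 else p for p, s in zip(peak, xcorr.shape)]
    return tuple(shifts)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
mov = np.roll(np.roll(ref, -3, axis=0), 5, axis=1)  # circularly shifted copy
print(shift_by_xcorr(ref, mov))  # → (3, -5): shift that maps mov back onto ref
```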

  4. RECOVERY OF DOCUMENT TEXT FROM TORN FRAGMENTS USING IMAGE PROCESSING

    OpenAIRE

    C.Prasad; Dr.Mahesh; Dr.S.A.K. Jilani

    2016-01-01

    Recovery of a document from its torn or damaged fragments plays an important role in the fields of forensics and archival study. Reconstructing torn papers manually with the help of glue, tape, etc. is tedious, time consuming, and unsatisfactory. For reconstruction of torn images we turn to image mosaicing, where we reconstruct the image using features (corners) and RANSAC with homography. But for torn fragments there is no such region of similarity between fragments. Hence we propose a ...

  5. Efficient 2-D DCT Computation from an Image Representation Point of View

    OpenAIRE

    Papakostas, G.A.; Koulouriotis, D.E.; Karakasis, E.G.

    2009-01-01

    A novel methodology is presented that ensures the computation of 2-D DCT coefficients of gray-scale as well as binary images at high computation rates. Through a new image representation scheme, called ISR (Image Slice Representation), the 2-D DCT coefficients can be computed in significantly reduced time with the same accuracy.
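
    For reference, the 2-D DCT coefficients that such accelerated schemes must reproduce are defined by the separable orthonormal DCT-II transform. The sketch below is a plain, unoptimized baseline implementation; it does not implement the ISR scheme itself, only the coefficients that ISR computes faster.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix of size n x n."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0, :] *= 1 / np.sqrt(2)
    return c * np.sqrt(2 / n)

def dct2(image):
    """2-D DCT of a grayscale image, computed separably: transform the
    columns, then the rows."""
    c = dct_matrix(image.shape[0])
    r = dct_matrix(image.shape[1])
    return c @ image @ r.T

F = dct2(np.ones((4, 4)))
print(np.round(F, 6))  # only the DC term F[0, 0] (= 4.0) is nonzero
```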

  6. Page Layout Analysis of the Document Image Based on the Region Classification in a Decision Hierarchical Structure

    Directory of Open Access Journals (Sweden)

    Hossein Pourghassem

    2010-10-01

    Full Text Available The conversion of a document image to its electronic version is a very important problem for the saving, searching, and retrieval applications of office automation systems. For this purpose, analysis of the document image is necessary. In this paper, a hierarchical classification structure based on a two-stage segmentation algorithm is proposed. In this structure, the image is segmented using the proposed two-stage segmentation algorithm. Then, the type of the image regions, such as document and non-document regions, is determined using multiple classifiers in the hierarchical classification structure. The proposed segmentation algorithm uses two algorithms based on wavelet transform and thresholding. Texture features such as correlation, homogeneity, and entropy extracted from the co-occurrence matrix, and also two new features based on wavelet transform, are used to classify and label the regions of the image. The hierarchical classifier consists of two Multilayer Perceptron (MLP) classifiers and a Support Vector Machine (SVM) classifier. The proposed algorithm is evaluated on a database of document and non-document images collected from the Internet. The experimental results show the efficiency of the proposed approach in region segmentation and classification. The proposed algorithm provides an accuracy rate of 97.5% on classification of the regions.

  7. Correcting geometric and photometric distortion of document images on a smartphone

    Science.gov (United States)

    Simon, Christian; Williem; Park, In Kyu

    2015-01-01

    A set of document image processing algorithms for improving the optical character recognition (OCR) capability of smartphone applications is presented. The scope of the problem covers the geometric and photometric distortion correction of document images. The proposed framework was developed to satisfy industrial requirements. It is implemented on an off-the-shelf smartphone with limited resources in terms of speed and memory. Geometric distortions, i.e., skew and perspective distortion, are corrected by sending horizontal and vertical vanishing points toward infinity in a downsampled image. Photometric distortion includes image degradation from moiré pattern noise and specular highlights. Moiré pattern noise is removed using low-pass filters with different sizes independently applied to the background and text region. The contrast of the text in a specular highlighted area is enhanced by locally enlarging the intensity difference between the background and text while the noise is suppressed. Intensive experiments indicate that the proposed methods show a consistent and robust performance on a smartphone with a runtime of less than 1 s.

  8. USE OF IMAGE BASED MODELLING FOR DOCUMENTATION OF INTRICATELY SHAPED OBJECTS

    Directory of Open Access Journals (Sweden)

    M. Marčiš

    2016-06-01

    Full Text Available In the documentation of cultural heritage, we can encounter three-dimensional shapes and structures which are complicated to measure. Such objects include spiral staircases, timber roof trusses, historical furniture, or folk costumes, where it is nearly impossible to use traditional surveying or terrestrial laser scanning effectively due to the shape of the object, its dimensions, and the crowded environment. Current methods of digital photogrammetry can be very helpful in such cases, with the emphasis on automated processing of extensive image data. The resulting high-resolution 3D models and 2D orthophotos are very important for the documentation of architectural elements, and they can serve as an ideal base for vectorization and 2D drawing documentation. This contribution describes the various uses of image-based modelling for specific interior spaces and specific objects. The advantages and disadvantages of photogrammetric measurement of such objects in comparison to other surveying methods are reviewed.


  10. Text segmentation in degraded historical document images

    Directory of Open Access Journals (Sweden)

    A.S. Kavitha

    2016-07-01

    Full Text Available Text segmentation from degraded historical Indus script images helps Optical Character Recognition (OCR) achieve good recognition rates for Indus scripts; however, it is challenging due to the complex background in such images. In this paper, we present a new method for segmenting text and non-text in Indus documents based on the fact that text components are less cursive compared to non-text ones. To achieve this, we propose a new combination of Sobel and Laplacian for enhancing degraded low-contrast pixels. Then the proposed method generates skeletons for text components in enhanced images to reduce the computational burden, which in turn helps in studying component structures efficiently. We propose to study the cursiveness of components based on branch information to remove false text components. The proposed method introduces the nearest neighbor criterion for grouping components in the same line, which results in clusters. Furthermore, the proposed method classifies these clusters into text and non-text clusters based on characteristics of text components. We evaluate the proposed method on a large dataset containing a variety of images. The results are compared with those of existing methods to show that the proposed method is effective in terms of recall and precision.
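
    The enhancement step can be pictured as follows: combine a Sobel gradient magnitude with a Laplacian response to boost low-contrast edges. The abstract does not specify the exact combination rule, so the simple sum of magnitudes used here is an assumption, as are the helper names `conv2_same` and `enhance`.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)

def conv2_same(img, k):
    """3x3 'same'-size filtering with edge replication (no SciPy needed)."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def enhance(img):
    """Sum the Sobel gradient magnitude and the absolute Laplacian
    response, so that low-contrast first- and second-order intensity
    changes are both amplified."""
    gx = conv2_same(img, SOBEL_X)
    gy = conv2_same(img, SOBEL_X.T)
    grad = np.hypot(gx, gy)
    lap = np.abs(conv2_same(img, LAPLACIAN))
    return grad + lap

# A vertical step edge: the response is zero in flat regions and
# large at the edge columns.
step = np.zeros((5, 5))
step[:, 3:] = 1
print(enhance(step))
```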

  11. Document authentication at molecular levels using desorption atmospheric pressure chemical ionization mass spectrometry imaging.

    Science.gov (United States)

    Li, Ming; Jia, Bin; Ding, Liying; Hong, Feng; Ouyang, Yongzhong; Chen, Rui; Zhou, Shumin; Chen, Huanwen; Fang, Xiang

    2013-09-01

    Molecular images of documents were obtained by sequentially scanning the surface of the document using desorption atmospheric pressure chemical ionization mass spectrometry (DAPCI-MS), which was operated in either a gasless, solvent-free or methanol-vapor-assisted mode. The decay process of the ink used for handwriting was monitored by following the signal intensities recorded by DAPCI-MS. Handwriting made using four types of inks on four kinds of paper surfaces was tested. By studying the dynamic decay of the inks, DAPCI-MS imaging differentiated a 10-min-old sample from two 4-h-old samples. Non-destructive forensic analysis of forged signatures, either handwritten or computer-assisted, was achieved according to differences in the contours of DAPCI images, which were attributed to the pen pressure characteristic of different writers. Distinction of the order of writing/stamping on documents and detection of illegal printings were accomplished with a spatial resolution of about 140 µm. A Matlab®-based program was developed to facilitate the visualization of the similarity between signature images obtained by DAPCI-MS. The experimental results show that DAPCI-MS imaging provides rich information at the molecular level and thus can be used for reliable document analysis in forensic applications. © 2013 The Authors. Journal of Mass Spectrometry published by John Wiley & Sons, Ltd.

  12. Acquisition and Post-Processing of Immunohistochemical Images.

    Science.gov (United States)

    Sedgewick, Jerry

    2017-01-01

    Augmentation of digital images is almost always a necessity in order to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly and not within reasonable limits. When procedures are in place for ensuring that originals are archived and image manipulation steps are reported, scientists not only follow good laboratory practices, but also avoid ethical issues associated with post-processing and protect their labs from any future allegations of scientific misconduct. Also, when procedures are in place for correct acquisition of images, the extent of post-processing is minimized or eliminated. These procedures include white balancing (for brightfield images), keeping tonal values within the dynamic range of the detector, frame averaging to eliminate noise (typically in fluorescence imaging), use of the highest bit depth when a choice is available, flatfield correction, and archiving of the image in a non-lossy format (not JPEG). When post-processing is necessary, the commonly used applications for correction include Photoshop and ImageJ, but a free program (GIMP) can also be used. Corrections to images include scaling the bit depth to higher and lower ranges, removing color casts from brightfield images, setting brightness and contrast, reducing color noise, reducing "grainy" noise, conversion of pure colors to grayscale, conversion of grayscale to colors typically used in fluorescence imaging, correction of uneven illumination (flatfield correction), merging color images (fluorescence), and extending the depth of focus. These corrections are explained in step-by-step procedures in the chapter that follows.
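
    Flatfield correction, one of the listed procedures, can be sketched generically: divide the raw image by a unit-mean illumination profile estimated from a blank-field reference, after subtracting an optional dark frame. This is the common textbook formulation, not the chapter's specific Photoshop/ImageJ workflow, and `flatfield_correct` is an illustrative name.

```python
import numpy as np

def flatfield_correct(raw, flat, dark=None):
    """Correct uneven illumination: divide the raw image by a
    normalized flat-field (blank-field) reference, optionally after
    dark-frame subtraction."""
    raw = raw.astype(float)
    flat = flat.astype(float)
    if dark is not None:
        raw = raw - dark
        flat = flat - dark
    gain = flat / flat.mean()            # unit-mean illumination profile
    return raw / np.clip(gain, 1e-6, None)

# A scene of uniform brightness 100 under illumination that falls
# from 1.5x to 0.5x across the frame comes back uniform.
illum = np.tile(np.linspace(0.5, 1.5, 4), (4, 1))
corrected = flatfield_correct(100 * illum, 1000 * illum)
print(corrected)  # ≈ 100 everywhere
```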

  13. Wide-field time-resolved luminescence imaging and spectroscopy to decipher obliterated documents in forensic science

    Science.gov (United States)

    Suzuki, Mototsugu; Akiba, Norimitsu; Kurosawa, Kenji; Kuroki, Kenro; Akao, Yoshinori; Higashikawa, Yoshiyasu

    2016-01-01

    We applied a wide-field time-resolved luminescence (TRL) method with a pulsed laser and a gated intensified charge coupled device (ICCD) for deciphering obliterated documents for use in forensic science. The TRL method can nondestructively measure the dynamics of luminescence, including fluorescence and phosphorescence lifetimes, which prove to be useful parameters for image detection. First, we measured the TRL spectra of four brands of black porous-tip pen inks on paper to estimate their luminescence lifetimes. Next, we acquired the TRL images of 12 obliterated documents at various delay times and gate times of the ICCD. The obliterated contents were revealed in the TRL images because of the difference in the luminescence lifetimes of the inks. This method requires no pretreatment, is nondestructive, and has the advantage of wide-field imaging, which makes it easy to control the gate timing. This demonstration proves that TRL imaging and spectroscopy are powerful tools for forensic document examination.

  14. Digital image display system for emergency room

    International Nuclear Information System (INIS)

    Murry, R.C.; Lane, T.J.; Miax, L.S.

    1989-01-01

    This paper reports on a digital image display system for the emergency room (ER) in a major trauma hospital. Its objective is to reduce radiographic image delivery time to a busy ER while simultaneously providing a multimodality capability. Image storage, retrieval, and display will also be facilitated with this system. The system's backbone is a token-ring network of RISC and personal computers. The display terminals are higher-function RISC computers with 1,024 × 1,024 color or gray-scale monitors. The PCs serve as administrative terminals. Nuclear medicine, CT, MR, and digitized film images are transferred to the image display system.

  15. Quantification of heterogeneity observed in medical images

    International Nuclear Information System (INIS)

    Brooks, Frank J; Grigsby, Perry W

    2013-01-01

    There has been much recent interest in the quantification of visually evident heterogeneity within functional grayscale medical images, such as those obtained via magnetic resonance or positron emission tomography. In the case of images of cancerous tumors, variations in grayscale intensity imply variations in crucial tumor biology. Despite these considerable clinical implications, there is as yet no standardized method for measuring the heterogeneity observed via these imaging modalities. In this work, we motivate and derive a statistical measure of image heterogeneity. This statistic measures the distance-dependent average deviation from the smoothest intensity gradation feasible. We show how this statistic may be used to automatically rank images of in vivo human tumors in order of increasing heterogeneity. We test this method against the current practice of ranking images via expert visual inspection. We find that this statistic provides a means of heterogeneity quantification beyond that given by other statistics traditionally used for the same purpose. We demonstrate the effect of tumor shape upon our ranking method and find the method applicable to a wide variety of clinically relevant tumor images. We find that the automated heterogeneity rankings agree very closely with those performed visually by experts. These results indicate that our automated method may be used reliably to rank, in order of increasing heterogeneity, tumor images whether or not object shape is considered to contribute to that heterogeneity. Automated heterogeneity ranking yields objective results which are more consistent than visual rankings. Reducing variability in image interpretation will enable more researchers to better study potential clinical implications of observed tumor heterogeneity.
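
    A distance-dependent summary of grayscale variation can be illustrated with a variogram-style statistic: the average semi-squared intensity difference between pixels separated by a given lag. This is only an analogous sketch of the general idea, not the authors' exact heterogeneity statistic, and `axial_variogram` is an invented name.

```python
import numpy as np

def axial_variogram(img, max_lag):
    """Semivariance of pixel intensities as a function of separation
    along the image axes: for each lag, average the squared intensity
    differences over all horizontally and vertically separated pixel
    pairs, halved (the standard semivariance convention)."""
    img = img.astype(float)
    out = []
    for lag in range(1, max_lag + 1):
        dx = (img[:, lag:] - img[:, :-lag]) ** 2   # horizontal pairs
        dy = (img[lag:, :] - img[:-lag, :]) ** 2   # vertical pairs
        out.append(0.25 * (dx.mean() + dy.mean()))
    return np.array(out)

# A checkerboard is maximally heterogeneous at lag 1 and perfectly
# self-similar at lag 2; a constant image is zero at every lag.
board = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
print(axial_variogram(board, 2))            # → [0.5 0. ]
print(axial_variogram(np.ones((8, 8)), 2))  # → [0. 0.]
```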


  17. Chaos-based image encryption algorithm

    International Nuclear Information System (INIS)

    Guan Zhihong; Huang Fangjun; Guan Wenjie

    2005-01-01

    In this Letter, a new image encryption scheme is presented, in which shuffling the positions and changing the grey values of image pixels are combined to confuse the relationship between the cipher-image and the plain-image. Firstly, the Arnold cat map is used to shuffle the positions of the image pixels in the spatial domain. Then the discrete output signal of Chen's chaotic system is preprocessed to be suitable for grayscale image encryption, and the shuffled image is encrypted by the preprocessed signal pixel by pixel. The experimental results demonstrate that the key space is large enough to resist brute-force attack and that the distribution of grey values of the encrypted image has random-like behavior.
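
    The position-shuffling stage of such a scheme, the Arnold cat map, is simple to sketch: each pixel at (x, y) of an N x N image moves to ((x + y) mod N, (x + 2y) mod N). The map is a bijection, so repeated application permutes pixels without losing any, and eventually restores the original image. The grey-value substitution stage driven by Chen's chaotic system is omitted here; this sketch covers the shuffling stage only.

```python
import numpy as np

def arnold_cat(img, iterations=1):
    """Shuffle pixel positions of a square N x N image with the Arnold
    cat map (x, y) -> (x + y, x + 2y) mod N."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "cat map needs a square image"
    out = img.copy()
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        nx, ny = (x + y) % n, (x + 2 * y) % n
        shuffled = np.empty_like(out)
        shuffled[nx, ny] = out[x, y]   # bijection: each target hit once
        out = shuffled
    return out

img = np.arange(16).reshape(4, 4)
print(arnold_cat(img))     # scrambled permutation of 0..15
print(arnold_cat(img, 3))  # for N = 4 the map has period 3: original again
```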

  18. Neurilemmoma of the glans penis: ultrasonography and magnetic resonance imaging findings.

    Science.gov (United States)

    Jung, Dae Chul; Hwang, Sung Il; Jung, Sung Il; Kim, Sun Ho; Kim, Seung Hyup

    2006-01-01

    Neurilemmoma of the glans penis is rare, and no imaging findings have been reported. A case of neurilemmoma of the glans penis is presented. Ultrasonography (US) and magnetic resonance imaging revealed a well-defined small mass in the glans penis. The mass appeared hypoechoic on gray-scale US and hypervascular on color Doppler US. Magnetic resonance imaging revealed high signal intensity of the mass on a T2-weighted image and strong enhancement on a contrast-enhanced T1-weighted image.

  19. The wavelet/scalar quantization compression standard for digital fingerprint images

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
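
    The two core ingredients of the standard, a wavelet decomposition followed by uniform scalar quantization of the subbands, can be sketched in miniature. The snippet uses a single-level Haar transform for brevity; the actual FBI standard specifies a deeper biorthogonal wavelet decomposition and carefully chosen per-subband step sizes, so this is an illustration of the idea only.

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar wavelet transform, returning the
    approximation (LL) and detail (LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def quantize(band, step):
    """Uniform scalar quantization: map each wavelet coefficient to an
    integer bin index; dequantize with index * step."""
    return np.round(band / step).astype(int)

def dequantize(indices, step):
    return indices * step

# A flat image has all its energy in LL; detail bands quantize to zero.
ll, lh, hl, hh = haar2d(np.full((4, 4), 128.0))
print(quantize(ll, 4.0))  # → all 32 (i.e. 128 / 4)
print(quantize(hh, 4.0))  # → all 0
```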

  20. Molecular imaging of banknote and questioned document using solvent-free gold nanoparticle-assisted laser desorption/ionization imaging mass spectrometry.

    Science.gov (United States)

    Tang, Ho-Wai; Wong, Melody Yee-Man; Chan, Sharon Lai-Fung; Che, Chi-Ming; Ng, Kwan-Ming

    2011-01-01

    Direct chemical analysis and molecular imaging of questioned documents in a non- or minimally destructive manner is important in forensic science. Here, we demonstrate that solvent-free gold-nanoparticle-assisted laser desorption/ionization mass spectrometry is a sensitive and minimally destructive method for direct detection and imaging of ink and visible and/or fluorescent dyes printed on banknotes or written on questioned documents. Argon ion sputtering of a gold foil allows homogeneous coating of a thin layer of gold nanoparticles on banknotes and checks in a dry state without delocalizing the spatial distributions of the analytes. Upon N2 laser irradiation of the gold nanoparticle-coated banknotes or checks, abundant ions are desorbed and detected. Recording the spatial distributions of the ions can reveal the molecular images of visible and fluorescent ink printed on banknotes and determine the printing order of different inks, which may be useful in differentiating real banknotes from fakes. The method can also be applied to identify forged parts of questioned documents, such as number/writing alterations on a check, by tracing the different writing patterns that come from different pens.

  1. Script Identification from Printed Indian Document Images and Performance Evaluation Using Different Classifiers

    Directory of Open Access Journals (Sweden)

    Sk Md Obaidullah

    2014-01-01

    multiscript country like India. In this paper the real-life problem of printed script identification from official Indian document images is considered and the performances of different well-known classifiers are evaluated. Two important evaluation parameters, namely AAR (average accuracy rate) and MBT (model building time), are computed for this performance analysis. The experiment was carried out on 459 printed document images with 5-fold cross-validation. The Simple Logistic model shows the highest AAR of 98.9% among all. The BayesNet and Random Forest models have average accuracy rates of 96.7% and 98.2%, respectively, with the lowest MBT of 0.09 s.

  2. Does use of a PACS increase the number of images per study? A case study in ultrasound.

    Science.gov (United States)

    Horii, Steven; Nisenbaum, Harvey; Farn, James; Coleman, Beverly; Rowling, Susan; Langer, Jill; Jacobs, Jill; Arger, Peter; Pinheiro, Lisa; Klein, Wendy; Reber, Michele; Iyoob, Christopher

    2002-03-01

    The purpose of this study was to determine if the use of a picture archiving and communications system (PACS) in ultrasonography increased the number of images acquired per examination. The hypothesis that such an increase does occur was based on anecdotal information; this study sought to test the hypothesis. A random sample of all ultrasound examination types was drawn from the period 1998 through 1999. The ultrasound PACS in use (ACCESS; Kodak Health Information Systems, Dallas, TX) records the number of grayscale and color images saved as part of each study. Each examination in the sample was checked in the ultrasound PACS database, and the number of grayscale and color images was recorded. The comparison film-based sample was drawn from the period 1994 through 1995. The number of examinations of each type selected was based on the overall statistics of the section; that is, the sample was designed to represent the approximate frequency with which the various examination types are done. For film-based image counts, the jackets were retrieved, and the numbers of grayscale and color images were counted. The number of images obtained per examination (for most examinations) in ultrasound increased with PACS use. This result, however, has to be examined for possible systematic biases, because ultrasound practice has changed over the time since the authors stopped using film routinely. The use of PACS in ultrasonography was not associated with an increase in the number of images per examination based solely on the use of PACS, with the exception of neonatal head studies. Increases in the number of images per study were otherwise associated with examinations for which changes in protocols resulted in the increased image counts.

  3. Dual-camera design for coded aperture snapshot spectral imaging.

    Science.gov (United States)

    Wang, Lizhi; Xiong, Zhiwei; Gao, Dahua; Shi, Guangming; Wu, Feng

    2015-02-01

    Coded aperture snapshot spectral imaging (CASSI) provides an efficient mechanism for recovering 3D spectral data from a single 2D measurement. However, since the reconstruction problem is severely underdetermined, the quality of recovered spectral data is usually limited. In this paper we propose a novel dual-camera design to improve the performance of CASSI while maintaining its snapshot advantage. Specifically, a beam splitter is placed in front of the objective lens of CASSI, which allows the same scene to be simultaneously captured by a grayscale camera. This uncoded grayscale measurement, in conjunction with the coded CASSI measurement, greatly eases the reconstruction problem and yields high-quality 3D spectral data. Both simulation and experimental results demonstrate the effectiveness of the proposed method.

  4. Whole mount nuclear fluorescent imaging: convenient documentation of embryo morphology.

    Science.gov (United States)

    Sandell, Lisa L; Kurosaka, Hiroshi; Trainor, Paul A

    2012-11-01

    Here, we describe a relatively inexpensive and easy method to produce high quality images that reveal fine topological details of vertebrate embryonic structures. The method relies on nuclear staining of whole mount embryos in combination with confocal microscopy or conventional wide field fluorescent microscopy. In cases where confocal microscopy is used in combination with whole mount nuclear staining, the resulting embryo images can rival the clarity and resolution of images produced by scanning electron microscopy (SEM). The fluorescent nuclear staining may be performed with a variety of cell permeable nuclear dyes, enabling the technique to be performed with multiple standard microscope/illumination or confocal/laser systems. The method may be used to document morphology of embryos of a variety of organisms, as well as individual organs and tissues. Nuclear stain imaging imposes minimal impact on embryonic specimens, enabling imaged specimens to be utilized for additional assays. Copyright © 2012 Wiley Periodicals, Inc.

  5. Which supplementary imaging modality should be used for breast ultrasonography? Comparison of the diagnostic performance of elastography and computer-aided diagnosis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Si Eun; Moon, Ji Eun Ho; Kim, Eun Kyung; Yoon, Jung Hyun [Yonsei University College of Medicine, Seoul (Korea, Republic of)

    2017-04-15

    The aim of this study was to evaluate and compare the diagnostic performance of grayscale ultrasonography (US), US elastography, and US computer-aided diagnosis (US-CAD) in the differential diagnosis of breast masses. A total of 193 breast masses in 175 consecutive women (mean age, 46.4 years) from June to August 2015 were included. US and elastography images were obtained and recorded. A US-CAD system was applied to the grayscale sonograms, which were automatically analyzed and visualized in order to generate a final assessment. The final assessments of breast masses were based on the American College of Radiology Breast Imaging Reporting and Data System (BI-RADS) categories, while elasticity scores were assigned using a 5-point scoring system. The diagnostic performance of grayscale US, elastography, and US-CAD was calculated and compared. Of the 193 breast masses, 120 (62.2%) were benign and 73 (37.8%) were malignant. Breast masses had significantly higher rates of malignancy in BI-RADS categories 4c and 5, elastography patterns 4 and 5, and when the US-CAD assessment was possibly malignant (all P<0.001). Elastography had higher specificity (40.8%, P=0.042) than grayscale US. US-CAD showed the highest specificity (67.5%), positive predictive value (PPV) (61.4%), accuracy (74.1%), and area under the curve (AUC) (0.762, all P<0.05) among the three diagnostic tools. US-CAD had higher values for specificity, PPV, accuracy, and AUC than grayscale US or elastography. Computer-based analysis based on the morphologic features of US may be very useful in improving the diagnostic performance of breast US.

  6. New second-order difference algorithm for image segmentation based on cellular neural networks (CNNs)

    Science.gov (United States)

    Meng, Shukai; Mo, Yu L.

    2001-09-01

    Image segmentation is one of the most important operations in many image analysis problems; it is the process that subdivides an image into its constituents and extracts those parts of interest. In this paper, we present a new second-order difference gray-scale image segmentation algorithm based on cellular neural networks. A 3x3 CNN cloning template is applied, which enables smooth processing and handles well the conflict between noise resistance and the edge detection of complex shapes. We use a second-order difference operator to calculate the coefficients of the control template, which are not constant but rather depend on the input gray-scale values. It is similar to the Contour Extraction CNN in construction, but differs in its algorithm. Experimental results show that the second-order difference CNN has good capability in edge detection. It is better than the Contour Extraction CNN in detail detection and more effective than the Laplacian of Gaussian (LoG) algorithm.

  7. Two-dimensional grayscale ultrasound and spectral Doppler waveform evaluation of dogs with chronic enteropathies.

    Science.gov (United States)

    Gaschen, Lorrie; Kircher, Patrick

    2007-08-01

    Sonography is an important diagnostic tool to examine the gastrointestinal tract of dogs with chronic diarrhea. Two-dimensional grayscale ultrasound parameters to assess for various enteropathies primarily focus on wall thickness and layering. Mild, generalized thickening of the intestinal wall with maintenance of the wall layering is common in inflammatory bowel disease. Quantitative and semi-quantitative spectral Doppler arterial waveform analysis can be utilized for various enteropathies, including inflammatory bowel disease and food allergies. Dogs with inflammatory bowel disease have inadequate hemodynamic responses during digestion of food. Dogs with food allergies have prolonged vasodilation and lower resistive and pulsatility indices after eating allergen-inducing foods.

  8. What Can Pictures Tell Us About Web Pages? Improving Document Search Using Images.

    Science.gov (United States)

    Rodriguez-Vaamonde, Sergio; Torresani, Lorenzo; Fitzgibbon, Andrew W

    2015-06-01

    Traditional Web search engines do not use the images in the HTML pages to find relevant documents for a given query. Instead, they typically operate by computing a measure of agreement between the keywords provided by the user and only the text portion of each page. In this paper we study whether the content of the pictures appearing in a Web page can be used to enrich the semantic description of an HTML document and consequently boost the performance of a keyword-based search engine. We present a Web-scalable system that exploits a pure text-based search engine to find an initial set of candidate documents for a given query. Then, the candidate set is reranked using visual information extracted from the images contained in the pages. The resulting system retains the computational efficiency of traditional text-based search engines with only a small additional storage cost needed to encode the visual information. We test our approach on one of the TREC Million Query Track benchmarks where we show that the exploitation of visual content yields improvement in accuracies for two distinct text-based search engines, including the system with the best reported performance on this benchmark. We further validate our approach by collecting document relevance judgements on our search results using Amazon Mechanical Turk. The results of this experiment confirm the improvement in accuracy produced by our image-based reranker over a pure text-based system.
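
    The two-stage retrieval described above can be sketched as score fusion: a toy example in which the document IDs, score fields, and the fusion weight are invented for illustration and are not the paper's actual features.

```python
# Rerank text-search candidates with a visual score (toy sketch).
def rerank(candidates, alpha=0.7):
    """candidates: list of (doc_id, text_score, visual_score).
    Returns doc_ids sorted by a convex combination of the two scores."""
    fused = [(doc, alpha * t + (1 - alpha) * v) for doc, t, v in candidates]
    fused.sort(key=lambda p: p[1], reverse=True)
    return [doc for doc, _ in fused]

docs = [("a", 0.9, 0.1), ("b", 0.8, 0.9), ("c", 0.5, 0.4)]
order = rerank(docs, alpha=0.5)   # "b" overtakes "a" thanks to its images
```

    Only the candidate set from the text engine is rescored, which is what keeps the extra cost small relative to scoring the whole collection.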

  9. Application of Color Transformation Techniques in Pediatric Spinal Cord MR Images: Typically Developing and Spinal Cord Injury Population.

    Science.gov (United States)

    Alizadeh, Mahdi; Shah, Pallav; Conklin, Chris J; Middleton, Devon M; Saksena, Sona; Flanders, Adam E; Krisa, Laura; Mulcahey, M J; Faro, Scott H; Mohamed, Feroze B

    2018-01-16

    The purpose of this study was to evaluate an improved and reliable visualization method for pediatric spinal cord MR images in healthy subjects and patients with spinal cord injury (SCI). A total of 15 pediatric volunteers (10 healthy subjects and 5 subjects with cervical SCI) with a mean age of 11.41 years (range 8-16 years) were recruited and scanned using a 3.0T Siemens Verio MR scanner. T2-weighted axial images were acquired covering the entire cervical spinal cord from C1 to C7. These gray-scale images were then converted to color images using five different techniques: hue-saturation-value (HSV), rainbow, red-green-blue (RGB), and two enhanced RGB techniques using automated contrast stretching and intensity inhomogeneity correction. The performance of these techniques was scored visually by two neuroradiologists at three selected cervical spinal cord intervertebral disk levels (C2-C3, C4-C5, and C6-C7) and quantified using the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR). Qualitative and quantitative evaluation of the color images shows consistent improvement across all healthy and SCI subjects over conventional gray-scale T2-weighted gradient echo (GRE) images. The inter-observer reliability test showed moderate to strong intra-class correlation coefficients for the proposed techniques (ICC > 0.73). The results suggest that the color images could be used for quantification and enhanced visualization of spinal cord structures in addition to the conventional gray-scale images. This would greatly help improve delineation of the gray/white matter and CSF structures and further aid accurate manual or automatic drawing of regions of interest (ROIs).

  10. Machine printed text and handwriting identification in noisy document images.

    Science.gov (United States)

    Zheng, Yefeng; Li, Huiping; Doermann, David

    2004-03-01

    In this paper, we address the problem of identifying text in noisy document images. We focus especially on segmenting and distinguishing handwriting from machine printed text because: 1) handwriting in a document often indicates corrections, additions, or other supplemental information that should be treated differently from the main content, and 2) the segmentation and recognition techniques required for machine printed and handwritten text are significantly different. A novel aspect of our approach is that we treat noise as a separate class and model it based on selected features. Trained Fisher classifiers are used to separate machine printed text and handwriting from noise, and we further exploit context to refine the classification. A Markov Random Field (MRF) approach models the geometric structure of printed text, handwriting, and noise to rectify misclassifications. Experimental results show that our approach is robust and can significantly improve page segmentation in noisy document collections.

  11. Skeletonization and Partitioning of Digital Images Using Discrete Morse Theory.

    Science.gov (United States)

    Delgado-Friedrichs, Olaf; Robins, Vanessa; Sheppard, Adrian

    2015-03-01

    We show how discrete Morse theory provides a rigorous and unifying foundation for defining skeletons and partitions of grayscale digital images. We model a grayscale image as a cubical complex with a real-valued function defined on its vertices (the voxel values). This function is extended to a discrete gradient vector field using the algorithm presented in Robins, Wood, Sheppard TPAMI 33:1646 (2011). In the current paper we define basins (the building blocks of a partition) and segments of the skeleton using the stable and unstable sets associated with critical cells. The natural connection between Morse theory and homology allows us to prove the topological validity of these constructions; for example, that the skeleton is homotopic to the initial object. We simplify the basins and skeletons via Morse-theoretic cancellation of critical cells in the discrete gradient vector field using a strategy informed by persistent homology. Simple working Python code for our algorithms for efficient vector field traversal is included. Example data are taken from micro-CT images of porous materials, an application area where accurate topological models of pore connectivity are vital for fluid-flow modelling.

  12. Comparison of liquid crystal display monitors calibrated with gray-scale standard display function and with γ 2.2 and iPad: observer performance in detection of cerebral infarction on brain CT.

    Science.gov (United States)

    Yoshimura, Kumiko; Nihashi, Takashi; Ikeda, Mitsuru; Ando, Yoshio; Kawai, Hisashi; Kawakami, Kenichi; Kimura, Reiko; Okada, Yumiko; Okochi, Yoshiyuki; Ota, Naotoshi; Tsuchiya, Kenichi; Naganawa, Shinji

    2013-06-01

    The purpose of the study was to compare observer performance in the detection of cerebral infarction on brain CT using medical-grade liquid crystal display (LCD) monitors calibrated with the gray-scale standard display function and with γ 2.2, and using an iPad with a simulated screen setting. We amassed 97 sample sets from 47 patients with proven cerebral infarction and 50 healthy control subjects. Nine radiologists independently assessed the brain CT images on a gray-scale standard display function LCD, a γ 2.2 LCD, and an iPad, in random order at 4-week intervals. Receiver operating characteristic (ROC) analysis was performed using a continuous scale, and the area under the ROC curve (A(z)) was calculated for each monitor. The A(z) values for the gray-scale standard display function LCD, the γ 2.2 LCD, and the iPad were 0.875, 0.884, and 0.839, respectively. The differences among the three monitors were small. There was no significant difference between the gray-scale standard display function LCD and the γ 2.2 LCD; however, the A(z) value was statistically significantly smaller for the iPad than for the γ 2.2 LCD. Although observer performance using the iPad was poorer than that using the other LCD monitors, the difference was small; even so, the iPad could not substitute for the other LCD monitors. Owing to the promising potential advantages of tablet PCs, such as portability, further examination of their clinical use is needed.

  13. Chaos-based image encryption algorithm [rapid communication]

    Science.gov (United States)

    Guan, Zhi-Hong; Huang, Fangjun; Guan, Wenjie

    2005-10-01

    In this Letter, a new image encryption scheme is presented, in which shuffling the positions and changing the grey values of image pixels are combined to confuse the relationship between the cipher-image and the plain-image. First, the Arnold cat map is used to shuffle the positions of the image pixels in the spatial domain. Then the discrete output signal of Chen's chaotic system is preprocessed to be suitable for grayscale image encryption, and the shuffled image is encrypted by the preprocessed signal pixel by pixel. The experimental results demonstrate that the key space is large enough to resist brute-force attack and that the distribution of grey values in the encrypted image exhibits random-like behavior.
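
    The position-shuffling stage can be sketched as follows; this minimal NumPy example implements only the Arnold cat map permutation, and omits the chaotic grey-value substitution stage of the scheme.

```python
import numpy as np

def arnold_cat(img, iterations=1):
    """Scramble a square N x N image with the Arnold cat map
    (x, y) -> (x + y, x + 2y) mod N.  The map is area-preserving,
    so it permutes pixel positions without changing grey values."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "the cat map needs a square image"
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

img = np.arange(16).reshape(4, 4)
scrambled = arnold_cat(img)      # a permutation of the pixels
restored = arnold_cat(img, 3)    # the map is periodic: period 3 for N = 4
```

    The map's periodicity is exactly why a second, value-changing stage is needed: iterating the cat map alone eventually restores the plain image.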

  14. [Digitalization, archival storage and use of image documentation in the GastroBase-II system].

    Science.gov (United States)

    Kocna, P

    1997-05-14

    "GastroBase-II" is a module of the clinical information system "KIS-ComSyD". Its main part is structured text data with an expert system, including on-line image digitization in gastroenterology (endoscopic, X-ray, and endosonographic pictures). The hardware and software of GastroBase are described, along with six years' experience with the application of digitized image data. The integration of pictures into texts, reports, lecture slides, and an electronic atlas is documented with examples. We briefly report our experiences with a graphics editor (PhotoStyler), a text editor (WordPerfect), and slide preparation with the presentation software PowerPoint. The multimedia applications on CD-ROM illustrate a modern trend: using digitized image documentation for undergraduate and postgraduate education.

  15. A kind of color image segmentation algorithm based on super-pixel and PCNN

    Science.gov (United States)

    Xu, GuangZhu; Wang, YaWen; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun

    2018-04-01

    Image segmentation is a very important step in low-level visual computing. Although image segmentation has been studied for many years, many problems remain. PCNN (Pulse Coupled Neural Network) has a biological background; when applied to image segmentation it can be viewed as a region-based method, but due to the dynamic properties of PCNN, many unconnected neurons pulse at the same time, so different regions must be identified for further processing. The existing PCNN segmentation algorithm based on region growing works on grayscale images and cannot be used directly for color images. In addition, super-pixels better preserve image edges and reduce the influence of individual pixel differences on segmentation. Therefore, this paper improves the original region-growing PCNN algorithm on the basis of super-pixels. First, the color super-pixel image is transformed into a grayscale super-pixel image, which is used to seek seeds among the neurons that have not yet fired. Growth then stops or continues depending on a comparison of the averages of each color channel over the corresponding regions of the color super-pixel image. Experimental results show that the proposed color image segmentation algorithm is fast, effective, and accurate.
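
    The grey-scale region-growing step can be sketched without the super-pixel and PCNN machinery: a minimal 4-connected region grower in which the tolerance and the running-mean criterion are illustrative assumptions, not the paper's firing rule.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a 4-connected region from `seed`, admitting pixels whose grey
    value differs from the running region mean by at most `tol`."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    q = deque([seed])
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not mask[ni, nj]:
                if abs(float(img[ni, nj]) - total / count) <= tol:
                    mask[ni, nj] = True
                    total += float(img[ni, nj])
                    count += 1
                    q.append((ni, nj))
    return mask

img = np.zeros((6, 6), dtype=np.uint8)
img[:, 3:] = 200                      # bright right half
region = region_grow(img, (0, 0), tol=10)
```

    The region floods the dark half and halts at the bright boundary; the paper's contribution is to run this kind of growth on super-pixels and compare per-channel color averages instead of single grey values.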

  16. Quantum color image watermarking based on Arnold transformation and LSB steganography

    Science.gov (United States)

    Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Luo, Gaofeng

    In this paper, a quantum color image watermarking scheme is proposed through twice-scrambling by Arnold transformations and least-significant-bit (LSB) steganography. Both the carrier image and the watermark image are represented by the novel quantum representation of color digital images model (NCQI). The image sizes of the carrier and the watermark are assumed to be 2^n×2^n and 2^(n-1)×2^(n-1), respectively. First, the watermark is scrambled into a disordered form through an image preprocessing technique that simultaneously exchanges pixel positions and alters the color information, based on Arnold transforms. Then, the scrambled watermark of size 2^(n-1)×2^(n-1) with 24-qubit grayscale is expanded to an image of size 2^n×2^n with 6-qubit grayscale using nearest-neighbor interpolation. Finally, the scrambled and expanded watermark is embedded into the carrier by LSB steganography, and a key image of size 2^n×2^n carrying 3 qubits of information is generated at the same time; only with this key image can the original watermark be retrieved. Watermark extraction is the reverse of embedding, achieved by applying the sequence of operations in reverse order. Simulation experiments with different carrier and watermark images (i.e., conventional, non-quantum images) were run in MATLAB 2014b on a classical computer; they illustrate that the present method performs well in terms of visual quality, robustness, and steganographic capacity.
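
    The embedding step rests on classical LSB steganography, which can be sketched in a few lines; this is the ordinary (non-quantum) bit-plane operation, not the NCQI circuit construction.

```python
import numpy as np

def lsb_embed(carrier, bits):
    """Embed one watermark bit per pixel in the least significant bit.
    `carrier` is a uint8 array, `bits` a 0/1 array of the same shape.
    Each pixel changes by at most 1 grey level."""
    return (carrier & 0xFE) | bits.astype(np.uint8)

def lsb_extract(stego):
    """Recover the embedded bit plane."""
    return stego & 1

carrier = np.array([[120, 121], [122, 123]], dtype=np.uint8)
bits = np.array([[1, 0], [1, 1]], dtype=np.uint8)
stego = lsb_embed(carrier, bits)
recovered = lsb_extract(stego)
```

    Because only the lowest bit plane is touched, the stego image is visually indistinguishable from the carrier, which is what gives LSB schemes their transparency.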

  17. Optimized lighting method of applying shaped-function signal for increasing the dynamic range of LED-multispectral imaging system

    Science.gov (United States)

    Yang, Xue; Hu, Yajia; Li, Gang; Lin, Ling

    2018-02-01

    This paper proposes an optimized lighting method that applies a shaped-function signal to increase the dynamic range of a light-emitting diode (LED) multispectral imaging system. The optimized lighting method is based on the linear response zone of the analog-to-digital conversion (ADC) and the spectral response of the camera. Auxiliary light in a higher-sensitivity camera band is introduced to increase the number of A/D quantization levels that fall within the linear response zone of the ADC and to improve the signal-to-noise ratio. The active light is modulated by the shaped-function signal to improve the gray-scale resolution of the image, while the auxiliary light is modulated by a constant-intensity signal, which makes it easy to acquire the images under active-light irradiation. The least-squares method is employed to precisely extract the desired images. One wavelength in LED-based multispectral imaging was taken as an example. Experiments proved that both the gray-scale resolution and the information accuracy of the images acquired by the proposed method were significantly improved. The optimized method opens up avenues for hyperspectral imaging of biological tissue.

  18. Imaging quality evaluation method of pixel coupled electro-optical imaging system

    Science.gov (United States)

    He, Xu; Yuan, Li; Jin, Chunqi; Zhang, Xiaohui

    2017-09-01

    With advancements in the fabrication of high-resolution imaging optical fiber bundles, traditional photoelectric imaging systems have become "flexible", with greatly reduced volume and weight. However, traditional image quality evaluation models are limited by the coupled discrete sampling effect of the fiber-optic image bundle and the charge-coupled device (CCD) pixels. This limitation substantially complicates the design, optimization, assembly, and image-quality evaluation of coupled discrete sampling imaging systems. Based on the transfer of a grayscale cosine-distributed optical signal through the fiber-optic image bundle and the CCD, a mathematical model of the coupled modulation transfer function (coupled-MTF) is established. This model can serve as a basis for subsequent studies of its convergence and periodically oscillating characteristics. We also propose the concept of the average coupled-MTF, which is consistent with the definition of the traditional MTF. Based on this concept, the relationships among core distance, core layer radius, and average coupled-MTF are investigated.

  19. High-resolution electron microscope image analysis approach for superconductor YBa2Cu3O7-x

    International Nuclear Information System (INIS)

    Xu, J.; Lu, F.; Jia, C.; Hua, Z.

    1991-01-01

    In this paper, an HREM (high-resolution electron microscope) image analysis approach is developed. Image filtering, segmentation, and particle extraction based on gray-scale mathematical morphological operations are performed on the original HREM image. The final image is a pseudocolor image with the background removed, relatively uniform brightness, slanting elongation filtered out, a regular shape for every kind of particle, and particle boundaries that no longer touch each other, so that the superconducting material structure can be shown clearly.

  20. In vivo skin imaging for hydration and micro relief-measurement.

    Science.gov (United States)

    Kardosova, Z; Hegyi, V

    2013-01-01

    We present the results of our work with a device used to measure skin capacitance before and after the application of moisturizing creams, and the results of an experiment performed on cellulose filter papers soaked with different solvents. The measurements were performed with a device built on a capacitance sensor, which provides the investigator with a capacitance image of the skin. The capacitance values are coded in a range of 256 gray levels, so skin hydration can be characterized using parameters derived from the gray-level histogram by specific software. The images obtained by the device allow highly precise observation of skin topography. Measuring skin capacitance yields new, objective, reliable information about topographical, physical, and chemical parameters of the skin. The study shows a good correlation between average grayscale values and skin hydration. In future work we need to complete more comparison studies, map average grayscale values to skin hydration levels, and use them to follow the dynamics of skin micro-relief and hydration changes (Fig. 6, Ref. 15).
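
    The histogram-derived parameters can be sketched as follows; the mapping from grey level to actual hydration units is device-specific and not reproduced here, so the mean grey value is only a relative indicator.

```python
import numpy as np

def mean_gray_from_histogram(img):
    """Characterise an 8-bit capacitance image by its grey-level
    histogram and the average grey value derived from it."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    levels = np.arange(256)
    mean = (hist * levels).sum() / hist.sum()
    return hist, mean

img = np.array([[10, 10], [20, 40]], dtype=np.uint8)
hist, mean = mean_gray_from_histogram(img)
```

    Computing the mean from the histogram rather than the raw pixels is convenient when the same histogram also feeds other descriptors (spread, percentiles) used to track hydration changes over time.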

  1. 3D Documentation of Archaeological Excavations Using Image-Based Point Cloud

    Directory of Open Access Journals (Sweden)

    Umut Ovalı

    2017-03-01

    Full Text Available Rapid progress in digital technology enables us to create three-dimensional models using digital images. The low cost, time efficiency, and accurate results of this method raise the question of whether it can be an alternative to conventional documentation techniques, which are generally 2D orthogonal drawings. Accurate and detailed 3D models of archaeological features also have potential for many purposes beyond geometric documentation. This study presents an image-based three-dimensional registration technique employed in 2013 at an ancient city in Turkey, using "Structure from Motion" (SfM) algorithms. A commercial software package is applied to investigate whether this method can be used as an alternative to other techniques. Mesh models of some sections of the excavation site were produced from point clouds generated from the digital photographs. Accuracy assessment of the produced model was realized by comparing the directly measured coordinates of the ground control points with those derived from the model. The results showed that the accuracy is around 1.3 cm.
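
    The accuracy assessment amounts to a root-mean-square 3D error over ground control points; a minimal sketch, in which the coordinate values are invented for illustration and are not the survey data of the study.

```python
import numpy as np

def gcp_rmse(measured, modelled):
    """Root-mean-square 3D error between surveyed ground-control-point
    coordinates and the coordinates read off the photogrammetric model."""
    d = np.asarray(measured, float) - np.asarray(modelled, float)
    return float(np.sqrt((d ** 2).sum(axis=1).mean()))

measured = [[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]]       # surveyed GCPs (metres)
modelled = [[0.01, 0.0, 0.0], [1.0, 2.01, 3.0]]     # same points in the model
err = gcp_rmse(measured, modelled)                   # -> 0.01 m, i.e. 1 cm
```

    A single scalar like this is what supports the paper's "around 1.3 cm" claim; per-axis residuals would additionally reveal systematic errors such as a scale or tilt bias.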

  2. Method of forming latent image to protect documents based on the effect moire

    OpenAIRE

    Troyan, О.

    2015-01-01

    Modern methods of information protection for printed documents are analyzed. It is shown that protection methods based on the moiré effect provide reliable and effective protection through a new protection technology in which the latent image is revealed by the optical acceleration of moving layers, causing moiré to appear under attempted forgery. Latent images can securely protect paper documents. A system of equations for calculating the curvilinear patterns is introduced, in which the optical acceleration formula and the moiré periods are stored in i...

  3. Fast image matching algorithm based on projection characteristics

    Science.gov (United States)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

    Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional information of the image into one dimension, and then matches and identifies targets through one-dimensional correlation. Moreover, because normalization is applied, matching remains correct even when the image brightness or signal amplitude increases proportionally. Experimental results show that the proposed projection-based image registration method greatly improves matching speed while preserving matching accuracy.
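
    The projection idea can be sketched for the column direction: sum each column to a 1-D profile, normalise to zero mean and unit norm, and slide the template profile to find the best correlation. This is a minimal illustration, not the paper's full two-axis algorithm.

```python
import numpy as np

def projection_match(image, template):
    """Locate `template` in `image` by matching normalised 1-D column
    projections; returns the best column offset."""
    col_img = image.sum(axis=0).astype(float)
    col_tpl = template.sum(axis=0).astype(float)

    def normalise(v):
        v = v - v.mean()            # brightness-offset invariance
        n = np.linalg.norm(v)
        return v / n if n else v    # amplitude-scaling invariance

    t = normalise(col_tpl)
    best, best_score = 0, -np.inf
    for off in range(len(col_img) - len(col_tpl) + 1):
        score = float(normalise(col_img[off:off + len(col_tpl)]) @ t)
        if score > best_score:
            best, best_score = off, score
    return best

img = np.tile(np.array([1., 3., 2., 7., 5., 9., 4., 8., 6., 0.]), (4, 1))
tpl = img[:, 3:6].copy()            # template cut from columns 3..5
offset = projection_match(img, tpl)
```

    Matching 1-D profiles costs O(W) per offset instead of O(W*H) for full 2-D correlation, which is where the speed-up comes from; the normalisation makes the score invariant to proportional brightness changes.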

  4. Document image analysis: A primer

    Indian Academy of Sciences (India)


    (1) Typical documents in today's office are computer-generated, but even so, inevitably by different computers and ... different sizes, from a business card to a large engineering drawing. Document analysis ... Whether global or adaptive ...

  5. Two-stage image denoising considering interscale and intrascale dependencies

    Science.gov (United States)

    Shahdoosti, Hamid Reza

    2017-11-01

    A solution to the problem of reducing the noise of grayscale images is presented. To capture the intrascale and interscale dependencies, this study makes use of a statistical model of wavelet coefficients. It is shown that the dependency between a wavelet coefficient and its predecessors can be modeled by a first-order Markov chain, which means that the parent conveys all of the information necessary for efficient estimation. Using this fact, the proposed method employs the Kalman filter in the wavelet domain for image denoising. The method has two stages. The first stage employs a simple denoising algorithm to provide an initial estimate of the noise-free image, from which the parameters of the model, such as the state transition matrix, the variance of the process noise, the observation model, and the covariance of the observation noise, are estimated. In the second stage, the Kalman filter is applied to the wavelet coefficients of the noisy image to estimate the noise-free coefficients. In fact, the Kalman filter estimates the coefficients of high-frequency subbands from the coefficients of coarser scales and noisy observations of neighboring coefficients. In this way, both the interscale and intrascale dependencies are taken into account. Results are presented and discussed on a set of standard 8-bit grayscale images. The experimental results demonstrate that the proposed method achieves performance competitive with state-of-the-art denoising methods in terms of both peak signal-to-noise ratio and subjective visual quality.
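
    The second-stage estimator is an ordinary Kalman filter; as a hedged illustration, here it is in its simplest scalar form on a 1-D signal. The random-walk state model and the parameters a, q, r are assumed values standing in for the wavelet-domain model the paper actually estimates.

```python
import numpy as np

def kalman_denoise_1d(obs, a=1.0, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x_k = a*x_{k-1} + w,  z_k = x_k + v,
    with process-noise variance q and observation-noise variance r."""
    x, p = x0, p0
    est = []
    for z in obs:
        # predict
        x, p = a * x, a * a * p + q
        # update with Kalman gain k
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        est.append(x)
    return np.array(est)

rng = np.random.default_rng(0)
truth = 1.0
noisy = truth + rng.normal(0.0, 0.5, size=200)   # sigma = 0.5, so r = 0.25
smooth = kalman_denoise_1d(noisy, r=0.25)
```

    The gain shrinks as the state estimate becomes confident, so the filter averages over an effectively growing window; in the paper the same predict/update recursion runs over wavelet coefficients, with the prediction supplied by the parent scale.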

  6. Least-squares model-based halftoning

    Science.gov (United States)

    Pappas, Thrasyvoulos N.; Neuhoff, David L.

    1992-08-01

    A least-squares model-based approach to digital halftoning is proposed. It exploits both a printer model and a model for visual perception. It attempts to produce an 'optimal' halftoned reproduction by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. Conventional methods, such as clustered ordered dither, use the properties of the eye only implicitly and resist printer distortions at the expense of spatial and gray-scale resolution. In previous work we showed that our printer model can be used to modify error diffusion to account for printer distortions. The modified error diffusion algorithm has better spatial and gray-scale resolution than conventional techniques, but produces some well-known artifacts and asymmetries because it does not make use of an explicit eye model. Least-squares model-based halftoning uses explicit eye models and relies on printer models that predict distortions and exploit them to increase, rather than decrease, both spatial and gray-scale resolution. We have shown that the one-dimensional least-squares problem, in which each row or column of the image is halftoned independently, can be solved with the Viterbi algorithm. Unfortunately, no closed-form solution can be found in two dimensions; the two-dimensional least-squares solution is obtained by iterative techniques. Experiments show that least-squares model-based halftoning produces more gray levels and better spatial resolution than conventional techniques. We also show that the least-squares approach eliminates the problems associated with error diffusion. Model-based halftoning can be especially useful in the transmission of high-quality documents using high-fidelity gray-scale image encoders. As we have shown, in such cases halftoning can be performed at the receiver, just before printing. Apart from coding efficiency, this approach
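
    For reference, the error diffusion baseline that the passage compares against can be sketched as classical Floyd-Steinberg halftoning; this omits the printer and eye models that the least-squares method adds.

```python
import numpy as np

def error_diffuse(gray):
    """Floyd-Steinberg error diffusion: threshold each pixel to 0 or 255
    and push the quantisation error onto unprocessed neighbours with the
    standard 7/16, 3/16, 5/16, 1/16 weights."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out

flat = np.full((16, 16), 64.0)        # a flat 25% grey patch
halftone = error_diffuse(flat)
```

    Because the error is conserved (except at the image border), the binary output preserves the local average grey level, which is the property the least-squares formulation generalises under explicit printer and visual models.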

  7. Just Noticeable Distortion Model and Its Application in Color Image Watermarking

    Science.gov (United States)

    Liu, Kuo-Cheng

    In this paper, a perceptually adaptive watermarking scheme for color images is proposed in order to achieve robustness and transparency. A new just noticeable distortion (JND) estimator for color images is first designed in the wavelet domain. The key issue of the JND model is to effectively integrate visual masking effects. The estimator extends a perceptual model used in image coding for grayscale images. In addition to the visual masking effects computed coefficient by coefficient from the luminance content and texture of the grayscale component, it accounts for the crossed masking effect arising from the interaction between luminance and chrominance components and for the effect of the variance within the local region of the target coefficient, so that the visibility threshold for the human visual system (HVS) can be evaluated. In a locally adaptive fashion based on the wavelet decomposition, the estimator applies to all subbands of the luminance and chrominance components of color images and is used to measure the visibility of wavelet quantization errors. The subband JND profiles are then incorporated into the proposed color image watermarking scheme. Robustness and transparency are obtained by embedding the maximum-strength watermark while maintaining the perceptually lossless quality of the watermarked color image. Simulation results show that the proposed scheme, inserting watermarks into both luminance and chrominance components, is more robust than the existing scheme while retaining watermark transparency.

  8. La Documentation photographique

    Directory of Open Access Journals (Sweden)

    Magali Hamm

    2009-03-01

    Full Text Available La Documentation photographique, a magazine for teachers and students of history and geography, places the image at the heart of its editorial line. To follow current developments in geography, the collection offers an increasingly diversified iconography: maps and photographs, but also caricatures, newspaper front pages, and advertisements, all treated as geographical documents in their own right. An image can serve as a synthesis; conversely, it can show the different facets of an object; often it embodies geographical phenomena. Combined with other documents, images help teachers introduce their students to complex geographical reasoning. But to learn to read them, it is essential to contextualize them, comment on them, and question their relationship to reality.

  9. Ultrafuzziness Optimization Based on Type II Fuzzy Sets for Image Thresholding

    Directory of Open Access Journals (Sweden)

    Hudan Studiawan

    2010-11-01

    Full Text Available Image thresholding is one of the processing techniques used to provide a high-quality preprocessed image. Image vagueness and bad illumination are common obstacles that yield poor thresholding output. By treating the image as a fuzzy set, several fuzzy thresholding techniques have been proposed to overcome these obstacles during threshold selection. In this paper, we propose an algorithm for thresholding images using ultrafuzziness optimization, which reduces the uncertainty left by ordinary fuzzy sets by employing type II fuzzy sets. Optimization is conducted by measuring ultrafuzziness for the background and object fuzzy sets separately. Experimental results demonstrate that the proposed thresholding method performs well for images with high vagueness, low contrast, and grayscale ambiguity.
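
    A simplified sketch of fuzziness-driven threshold selection: it minimises the Shannon entropy of an ordinary (type I) membership function rather than the type II ultrafuzziness measure of the paper, and the membership form is an illustrative assumption.

```python
import numpy as np

def fuzzy_threshold(img):
    """Choose the grey level that minimises a fuzziness measure
    (the Shannon entropy of the class-membership function)."""
    g = img.ravel().astype(float)
    c = g.max() - g.min() or 1.0          # normalising constant
    best_t, best_f = None, np.inf
    for t in range(int(g.min()), int(g.max())):
        lo, hi = g[g <= t], g[g > t]
        if lo.size == 0 or hi.size == 0:
            continue
        # membership of each pixel in its own class, in (0.5, 1]
        mu = np.where(g <= t,
                      1.0 / (1.0 + np.abs(g - lo.mean()) / c),
                      1.0 / (1.0 + np.abs(g - hi.mean()) / c))
        mu = np.clip(mu, 1e-9, 1.0 - 1e-9)
        # entropy is 0 when mu is 0 or 1, maximal at mu = 0.5
        fuzziness = -(mu * np.log(mu) + (1 - mu) * np.log(1 - mu)).mean()
        if fuzziness < best_f:
            best_t, best_f = t, fuzziness
    return best_t

# a bimodal test image: a dark cluster near 10, a bright one near 200
img = np.array([[10, 12, 11, 200], [198, 199, 10, 201]], dtype=np.uint8)
t = fuzzy_threshold(img)
```

    A threshold placed between the two modes makes every pixel strongly belong to its class, so fuzziness is minimal there; the type II extension of the paper additionally bounds the membership function itself to cope with vagueness.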

  10. Iris recognition using image moments and k-means algorithm.

    Science.gov (United States)

    Khan, Yaser Daanial; Khan, Sher Afzal; Ahmad, Farooq; Islam, Saeed

    2014-01-01

    This paper presents a biometric technique for identifying a person from an iris image. The iris is first segmented from the acquired image of an eye using an edge detection algorithm. The disk-shaped area of the iris is transformed into rectangular form. Moments are then extracted from the grayscale image, yielding a feature vector of scale-, rotation-, and translation-invariant moments. Images are clustered using the k-means algorithm and centroids are computed for each cluster. An arbitrary image is assigned to the cluster whose centroid is nearest to its feature vector in terms of Euclidean distance. The described model exhibits an accuracy of 98.5%.
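
    The moment features and the nearest-centroid assignment can be sketched as follows; the three normalised central moments used here are a simplification of the full invariant moment set the paper describes, and the toy "iris" patterns are invented for illustration.

```python
import numpy as np

def central_moment(img, p, q):
    """Central image moment mu_pq about the intensity centroid."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    return (((x - xc) ** p) * ((y - yc) ** q) * img).sum()

def eta(img, p, q):
    # normalised central moment: translation-invariant and (up to
    # discretisation) scale-invariant
    return central_moment(img, p, q) / img.sum() ** (1 + (p + q) / 2.0)

def features(img):
    return np.array([eta(img, 2, 0), eta(img, 0, 2), eta(img, 1, 1)])

def nearest_centroid(vec, centroids):
    # the assignment step of k-means: index of the closest centroid
    return int(np.argmin(np.linalg.norm(centroids - vec, axis=1)))

bar = np.zeros((8, 8))
bar[3, 1:7] = 1.0                      # a horizontal bar
square = np.zeros((8, 8))
square[2:6, 2:6] = 1.0                 # a filled square
centroids = np.vstack([features(bar), features(square)])

shifted = np.zeros((8, 8))
shifted[5, 1:7] = 1.0                  # the same bar, translated
cls = nearest_centroid(features(shifted), centroids)   # -> 0 (the bar)
```

    Because the features are computed about the centroid and normalised by total mass, the translated bar lands exactly on the bar centroid, which is the invariance that makes moment features usable for iris matching.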

  11. Use of gray-scale ultrasonography in the diagnosis of reproductive disease in the bitch: 18 cases (1981-1984)

    International Nuclear Information System (INIS)

    Poffenbarger, E.M.; Feeney, D.A.

    1986-01-01

    Gray-scale ultrasonography was used in addition to radiography in the diagnosis of reproductive disease in 18 bitches. In 72% of the cases, ultrasonography was considered diagnostic because it revealed information on organ architecture, relationships of radiographically silhouetting soft tissue structures, and fetal viability that was unobtainable by radiography alone. In the remaining cases, ultrasonography contributed to the diagnostic process by supporting the clinical and radiographic diagnoses. The benefits of ultrasonography are discussed, as is the ultrasonographic appearance of a variety of reproductive tract diseases.

  12. Candidiasis of the liver and spleen in childhood

    International Nuclear Information System (INIS)

    Miller, J.H.; Greenfield, L.D.; Wald, B.R.

    1982-01-01

    Four children with acute leukemia and surgically documented candidiasis of the liver and/or spleen were examined with a combination of diagnostic imaging modalities including 99mTc-sulfur colloid and 67Ga-citrate scintigraphy, gray-scale ultrasound, and body computed tomography (CT). Abnormalities were detected in every individual examined. 99mTc-sulfur colloid scintigraphy revealed "cold" areas in the liver or spleen. With 67Ga scintigraphy, these areas were "cold" in some individuals and "hot" in others. Gray-scale ultrasound demonstrated hypoechoic lesions with central areas of increased echogenicity in hepatic involvement, and hypoechoic replacement of the spleen in splenic involvement. CT in one patient revealed low-density areas without contrast enhancement within the hepatic parenchyma and unsuspected renal involvement.

  15. UV beam shaper alignment sensitivity: grayscale versus binary designs

    Science.gov (United States)

    Lizotte, Todd E.

    2008-08-01

    What defines a good flat-top beam shaper? Which is more important: an ideal flat-top profile, or ease of alignment and stability? These are questions designers and fabricators cannot easily answer, since the answers are a function of experience. Anyone can generate a theoretical beam shaper design and model it until, on paper, the design looks good and meets the general needs of the end customer. However, the method of fabrication can add a twist that is not fully understood by either party until the beam shaper is actually tested for the first time in a system and produced in high volume. This paper provides some insight into how grayscale and binary fabrication methods can produce the same style of beam shaper with similar beam shaping performance, yet yield designs with different degrees of sensitivity to alignment and stability. The paper explains the design and fabrication approach for the two units and presents alignment and testing data for a contrast comparison. Further data show that, over twenty sets of each fabricated design, the sensitivity difference is consistent. An understanding of this phenomenon is essential when considering the use of beam shapers on production equipment dedicated to producing micron-precision features within high-value microelectronic and consumer products. We present our findings and explore potential explanations and solutions.

  16. Characteristics of a multi-image camera on a CT image

    International Nuclear Information System (INIS)

    Mihara, Kazuhiro; Fujino, Tatsuo; Abe, Katsuhito

    1984-01-01

    A multi-imaging camera was used to obtain hard-copy images from the imaging device of a CT scanner. The contrast and brightness of the CRT and the exposure time of the camera were the three important factors influencing the quality of the hard-copy image. Two original test patterns were designed to examine the characteristics of these factors. One was a grayscale test pattern used to obtain the density curve; this curve was named the Film-CRT (F-C) curve to distinguish it from the H-D curve. The other was a sharpness test pattern used to examine the relationship between brightness and sharpness. As a result, the slope of the F-C curve became steeper with a decrease in brightness, an increase in contrast, and an increase in exposure time. Sharpness became worse with an increase in brightness. Therefore, to obtain a good hard-copy image, the brightness must be set as dark as possible, and the contrast and exposure time must be controlled with due consideration of their characteristics. (author)

  17. The FBI compression standard for digitized fingerprint images

    Energy Technology Data Exchange (ETDEWEB)

    Brislawn, C.M.; Bradley, J.N. [Los Alamos National Lab., NM (United States); Onyshczak, R.J. [National Inst. of Standards and Technology, Gaithersburg, MD (United States); Hopper, T. [Federal Bureau of Investigation, Washington, DC (United States)

    1996-10-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
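The wavelet/scalar quantization idea (transform, quantize each subband uniformly with coarser bins for high-frequency bands, then invert) can be sketched with a one-level Haar transform standing in for the standard's actual filter bank, subband tree, and bit-allocation rules:

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar transform (a stand-in for the WSQ filter bank)."""
    a = (img[0::2] + img[1::2]) / 2.0    # row-pair averages
    d = (img[0::2] - img[1::2]) / 2.0    # row-pair differences
    LL, HL = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    LH, HH = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, HL, LH, HH

def ihaar2d(LL, HL, LH, HH):
    """Exact inverse of haar2d."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + HL, LL - HL
    d[:, 0::2], d[:, 1::2] = LH + HH, LH - HH
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def compress(img, steps=(2.0, 8.0, 8.0, 16.0)):
    """Uniform scalar quantization of each subband (coarser bins for the
    high-frequency bands), followed by dequantization and reconstruction."""
    bands = haar2d(np.asarray(img, dtype=float))
    deq = [np.round(b / s) * s for b, s in zip(bands, steps)]
    return ihaar2d(*deq)

img = (np.arange(64, dtype=float).reshape(8, 8) * 3.7) % 256
rec = compress(img)
print(round(float(np.abs(rec - img).max()), 2))  # bounded quantization error
```

In the real encoder the quantizer indices are then entropy-coded; the bin widths play the role of the rate-controlling step sizes.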

  18. Quantum Image Filtering in the Frequency Domain

    Directory of Open Access Journals (Sweden)

    MANTA, V. I.

    2013-08-01

    In this paper we address the emerging field of Quantum Image Processing. We investigate the use of quantum computing systems to represent and manipulate images. In particular, we consider the basic task of image filtering. We prove that a quantum version for this operation can be achieved, even though the quantum convolution of two sequences is physically impossible. In our approach we use the principle of the quantum oracle to implement the filter function. We provide the quantum circuit that implements the filtering task and present the results of several simulation experiments on grayscale images. There are important differences between the classical and the quantum implementations for image filtering. We analyze these differences and show that the major advantage of the quantum approach lies in the exploitation of the efficient implementation of the quantum Fourier transform.

  19. A MULTISCALE APPROACH TO THE REPRESENTATION OF 3D IMAGES, WITH APPLICATION TO POLYMER SOLAR CELLS

    Directory of Open Access Journals (Sweden)

    Ralf Thiedmann

    2011-03-01

    A multiscale approach to the description of geometrically complex 3D image data is proposed which distinguishes between morphological features on a ‘macro-scale’ and a ‘micro-scale’. Since our method is mainly tailored to nanostructures observed in composite materials consisting of two different phases, an appropriate binarization of grayscale images is required first. Then, a morphological smoothing is applied to extract the structural information from binarized image data on the ‘macro-scale’. A stochastic algorithm is developed for the morphologically smoothed images whose goal is to find a suitable representation of the macro-scale structure by unions of overlapping spheres. Such representations can be interpreted as marked point patterns. They lead to an enormous reduction of data and allow the application of well-known tools from point-process theory for their analysis and structural modeling. All those voxels which have been ‘misspecified’ by the morphological smoothing and subsequent representation by unions of overlapping spheres are interpreted as ‘micro-scale’ structure. The exemplary data sets considered in this paper are 3D grayscale images of photoactive layers in hybrid solar cells gained by electron tomography. These composite materials consist of two phases: a polymer phase and a zinc oxide phase. The macro-scale structure of the latter is represented by unions of overlapping spheres.

  20. Optical images of quasars and radio galaxies

    Science.gov (United States)

    Hutchings, J. B.; Johnson, I.; Pyke, R.

    1988-04-01

    Matched contour plots and gray-scale diagrams are presented for 54 radio quasars or radio galaxies of redshift 0.1-0.6, observed with the Canada-France-Hawaii Telescope. All except four were recorded on the RCA1 CCD chip; four were summed from several photographic exposures behind an image tube. All except nine of the objects form the principal data base used by Hutchings (1987). Detailed comments are given on all objects, and some further measures of the objects and their companions.

  1. An alternate way for image documentation in gamma camera processing units

    International Nuclear Information System (INIS)

    Schneider, P.

    1980-01-01

    For documentation of the images and curves generated by a gamma camera processing system, a film exposure tool from a CT system was linked to the video monitor by means of a resistance bridge. The machine has a stock capacity of 100 plane films. The advantages are that no interface is needed, the complete information on the monitor is transferred to the plane film, and, compared with software-controlled data output on a printer or plotter, the device saves a great deal of time. (orig.)

  2. Color Histogram Diffusion for Image Enhancement

    Science.gov (United States)

    Kim, Taemin

    2011-01-01

    Various color histogram equalization (CHE) methods have been proposed to extend grayscale histogram equalization (GHE) to color images. In this paper a new method called histogram diffusion, which extends GHE to arbitrary dimensions, is proposed. Ranges in a histogram are specified as overlapping bars of uniform height and variable width proportional to their frequencies; this diagram is called the vistogram. As an alternative to GHE, the squared error between the vistogram and the uniform distribution is minimized. Each bar in the vistogram is approximated by a Gaussian function, and the Gaussian particles in the vistogram diffuse as a nonlinear autonomous system of ordinary differential equations. CHE results on color images showed that the approach is effective.
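For reference, the baseline GHE method that the paper generalizes maps gray levels through the normalized cumulative histogram; a minimal sketch:

```python
import numpy as np

def equalize(img, levels=256):
    """Classical grayscale histogram equalization (GHE):
    map each gray level through the normalized cumulative histogram."""
    img = np.asarray(img)
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum() / img.size              # normalized CDF in [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(img.dtype)
    return lut[img]                              # apply the lookup table

# Low-contrast image occupying only levels 100..130
img = np.random.default_rng(1).integers(100, 131, size=(32, 32)).astype(np.uint8)
out = equalize(img)
print(img.min(), img.max(), "->", out.min(), out.max())  # dynamic range stretched
```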

  3. Contrast Enhancement Method Based on Gray and Its Distance Double-Weighting Histogram Equalization for 3D CT Images of PCBs

    Directory of Open Access Journals (Sweden)

    Lei Zeng

    2016-01-01

    Cone beam computed tomography (CBCT) is a new detection method for 3D nondestructive testing of printed circuit boards (PCBs). However, the obtained 3D image of a PCB exhibits low contrast because of several factors, such as metal artifacts and beam hardening, that arise during CBCT imaging. Histogram equalization (HE) algorithms cannot effectively extend the gray difference between substrate and metal in 3D CT images of PCBs, and their reinforcing effects are insignificant. To address this shortcoming, this study proposes an image enhancement algorithm based on gray and gray-distance double-weighted HE. Considering the characteristics of 3D CT images of PCBs, the proposed algorithm uses the double-weighting strategy to change the shape of the original histogram distribution: it suppresses the gray levels of the nonmetallic substrate and expands those of wires and other metals, thereby enhancing the gray difference between substrate and metal and highlighting metallic materials. The flexibility and advantages of the proposed algorithm are confirmed by analyses and experimental results.

  4. Textural Analysis of Fatigue Crack Surfaces: Image Pre-processing

    Directory of Open Access Journals (Sweden)

    H. Lauschmann

    2000-01-01

    For fatigue crack history reconstitution, new methods of quantitative microfractography are being developed based on image processing and textural analysis, using SEM magnifications between micro- and macrofractography. Two image pre-processing operations were suggested and proven to prepare crack surface images for analytical treatment: 1. Normalization is used to transform the image to a stationary form; compared with the generally used equalization, it conserves the shape of the brightness distribution and preserves the character of the texture. 2. Binarization is used to transform the grayscale image into a system of thick fibres. An objective criterion for the threshold brightness was found: the value that results in the maximum number of objects. Both methods were successfully applied together with the subsequent textural analysis.
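The binarization criterion (choose the threshold that maximizes the number of resulting objects) can be sketched as follows, counting 4-connected foreground components with a simple BFS:

```python
import numpy as np
from collections import deque

def count_objects(binary):
    """Count 4-connected foreground components via BFS flood fill."""
    seen = np.zeros_like(binary, dtype=bool)
    h, w = binary.shape
    n = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not seen[i, j]:
                n += 1                       # new object found; flood it
                q = deque([(i, j)])
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        v, u = y + dy, x + dx
                        if 0 <= v < h and 0 <= u < w and binary[v, u] and not seen[v, u]:
                            seen[v, u] = True
                            q.append((v, u))
    return n

def binarize_max_objects(img):
    """Pick the threshold whose binarization yields the most objects."""
    best_t, best_n = None, -1
    for t in range(int(img.min()), int(img.max())):
        n = count_objects(img > t)
        if n > best_n:
            best_t, best_n = n and t or t, n
    return best_t, img > best_t

# Two bright blobs joined by a dimmer bridge: a higher threshold cuts the
# bridge and doubles the object count.
img = np.array([[0, 5, 5, 0, 0, 0],
                [0, 5, 5, 3, 5, 5],
                [0, 0, 0, 0, 5, 5]])
t, mask = binarize_max_objects(img)
print(t, count_objects(mask))  # threshold above the bridge value, 2 objects
```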

  5. Development of a contrast phantom for active millimeter-wave imaging systems

    Science.gov (United States)

    Barber, Jeffrey; Weatherall, James C.; Brauer, Carolyn S.; Smith, Barry T.

    2011-06-01

    As the development of active millimeter wave imaging systems continues, it is necessary to validate materials that simulate the expected response of explosives. While physics-based models have been used to develop simulants, it is desirable to image both the explosive and simulant together in a controlled fashion in order to demonstrate success. To this end, a millimeter wave contrast phantom has been created to calibrate image grayscale while controlling the configuration of the explosive and simulant such that direct comparison of their respective returns can be performed. The physics of the phantom are described, with millimeter wave images presented to show successful development of the phantom and simulant validation at GHz frequencies.

  6. Imaging in pancreatic transplants

    International Nuclear Information System (INIS)

    Heller, Matthew T; Bhargava, Puneet

    2014-01-01

    Pancreatic transplantation, performed alone or in conjunction with kidney transplantation, is an effective treatment for advanced type I diabetes mellitus and select patients with type II diabetes mellitus. Following advancements in surgical technique, postoperative management, and immunosuppression, pancreatic transplantation has significantly improved the length and quality of life for patients suffering from pancreatic dysfunction. While computed tomography (CT) and magnetic resonance imaging (MRI) have more limited utility, ultrasound is the preferred initial imaging modality for evaluating the transplanted pancreas: gray-scale imaging assesses the parenchyma and fluid collections, while Doppler interrogation assesses vascular flow and viability. Ultrasound is also useful for guiding percutaneous interventions in the transplanted pancreas. With knowledge of the surgical anatomy and common complications, the abdominal radiologist plays a central role in the perioperative and postoperative evaluation of the transplanted pancreas.

  7. An HVS-based location-sensitive definition of mutual information between two images

    Science.gov (United States)

    Zhu, Haijun; Wu, Huayi

    2006-10-01

    A quantitative measure of image information is of great importance in many image processing applications, e.g. image compression and image registration. Many commonly used metrics are defined purely mathematically. However, in most situations the ultimate consumers of images are human observers, so measures that ignore the internal mechanisms of the human visual system (HVS) may not be appropriate. This paper proposes an improved definition of mutual information between two images based on the visual information actually perceived by human beings in different subbands of the image. This definition is sensitive to pixels' spatial locations and correlates better with human perception than mutual information computed purely from pixels' grayscale values. Experimental results on images with different noises and on JPEG- and JPEG2000-compressed images are also given.
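For contrast, the conventional location-insensitive mutual information that the paper improves upon is computed from the joint gray-level histogram alone; a minimal sketch:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Classical mutual information (in bits) from the joint gray-level
    histogram; note it ignores WHERE pixels are, only what values co-occur."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                       # joint distribution
    px = pxy.sum(axis=1, keepdims=True)           # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                  # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
print(mutual_information(img, img))                                      # high
print(mutual_information(img, img + rng.normal(0, 20, size=img.shape)))  # lower
print(mutual_information(img, rng.permutation(img.ravel()).reshape(img.shape)))  # near zero
```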

  8. Image enhancement using thermal-visible fusion for human detection

    Science.gov (United States)

    Zaihidee, Ezrinda Mohd; Hawari Ghazali, Kamarul; Zuki Saleh, Mohd

    2017-09-01

    Recent years have seen increased interest in detecting human beings in video surveillance systems. Multisensory image fusion deserves more research attention because of its capability to improve the visual interpretability of an image. This study proposes a fusion technique for human detection based on a multiscale transform, using grayscale visible-light and infrared images. The samples for this study were taken from an online dataset. The images captured by the two sensors were decomposed into high- and low-frequency coefficients using the Stationary Wavelet Transform (SWT), an appropriate fusion rule was used to merge the coefficients, and the final fused image was obtained by the inverse SWT. The qualitative and quantitative results show that the proposed method outperforms the two other methods in terms of enhancement of the target region and preservation of detail information.
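A minimal stand-in for that pipeline, replacing the SWT with a simple two-band split (blur plus detail) but keeping typical fusion rules (average the low-frequency content, keep the larger-magnitude detail coefficient):

```python
import numpy as np

def box_blur(img, k=3):
    """k-by-k box filter with edge padding (the low-pass band)."""
    p = np.pad(img, k // 2, mode='edge')
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(k) for j in range(k)) / (k * k)

def fuse(visible, infrared, k=5):
    """Two-band stand-in for SWT fusion: average the low-pass bands,
    take the larger-magnitude detail coefficient, then recombine."""
    lo_v, lo_i = box_blur(visible, k), box_blur(infrared, k)
    hi_v, hi_i = visible - lo_v, infrared - lo_i
    lo = (lo_v + lo_i) / 2.0                                   # smooth content
    hi = np.where(np.abs(hi_v) >= np.abs(hi_i), hi_v, hi_i)    # salient details
    return lo + hi

# Textured visible image plus an infrared "hot" target region
visible = np.tile(np.linspace(0, 255, 16), (16, 1))
infrared = np.zeros((16, 16))
infrared[6:10, 6:10] = 255.0
fused = fuse(visible, infrared)
print(fused.shape)
```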

  9. The cigarette pack as image: new evidence from tobacco industry documents.

    Science.gov (United States)

    Wakefield, M; Morley, C; Horan, J K; Cummings, K M

    2002-03-01

    To gain an understanding of the role of pack design in tobacco marketing, a search of tobacco company document sites using a list of specified search terms was undertaken during November 2000 to July 2001. The documents show that, especially in the context of tighter restrictions on conventional avenues for tobacco marketing, tobacco companies view cigarette packaging as an integral component of marketing strategy and a vehicle for (a) creating significant in-store presence at the point of purchase, and (b) communicating brand image. Market testing results indicate that such imagery is so strong as to influence smokers' taste ratings of the same cigarettes when packaged differently. Documents also reveal the careful balancing act that companies have employed in using pack design and colour to communicate the impression of lower tar or milder cigarettes, while preserving perceived taste and "satisfaction". Systematic and extensive research is carried out by tobacco companies to ensure that cigarette packaging appeals to selected target groups, including young adults and women. Cigarette pack design is an important communication device for cigarette brands and acts as an advertising medium. Many smokers are misled by pack design into thinking that cigarettes may be "safer". There is a need to consider regulation of cigarette packaging.

  12. ITERATION FREE FRACTAL COMPRESSION USING GENETIC ALGORITHM FOR STILL COLOUR IMAGES

    Directory of Open Access Journals (Sweden)

    A.R. Nadira Banu Kamal

    2014-02-01

    The storage requirements for images can be excessive if true color and high perceived image quality are desired. An RGB image may be viewed as a stack of three gray-scale images that, when fed into the red, green, and blue inputs of a color monitor, produce a color image on the screen. The large size of many images leads to long, costly transmission times. Hence, an iteration-free fractal algorithm is proposed in this paper to design an efficient search of the domain pools for colour image compression using a Genetic Algorithm (GA). The proposed methodology reduces coding time and intensive computation. Parameters such as image quality, compression ratio, and coding time are analyzed. It is observed that the proposed method achieves excellent image quality with reduced storage space.

  13. Quantum image pseudocolor coding based on the density-stratified method

    Science.gov (United States)

    Jiang, Nan; Wu, Wenya; Wang, Luo; Zhao, Na

    2015-05-01

    Pseudocolor processing is a branch of image enhancement: it dyes grayscale images into color images to make them more attractive or to highlight certain parts. This paper proposes a quantum image pseudocolor coding scheme based on a density-stratified method that defines a colormap and changes density values from gray to color in parallel according to that colormap. Firstly, two data structures, the quantum image representation GQIR and the quantum colormap QCR, are reviewed or proposed. Then, the quantum density-stratified algorithm is presented. Based on these, the quantum realization in the form of circuits is given. The main advantages of the quantum version of pseudocolor processing over the classical approach are that it needs less memory and can speed up the computation. Two examples help to describe the scheme further. Finally, future work is discussed.
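The classical analogue of density-stratified pseudocolor coding is density slicing: divide the gray range into strata and dye each stratum with one colormap entry. A sketch with an illustrative four-entry colormap:

```python
import numpy as np

def pseudocolor(gray, colormap):
    """Density slicing: split the 0..255 gray range into len(colormap)
    equal strata and map every pixel to its stratum's RGB entry."""
    gray = np.asarray(gray)
    n = len(colormap)
    strata = np.minimum((gray.astype(int) * n) // 256, n - 1)
    return np.asarray(colormap, dtype=np.uint8)[strata]

# Illustrative 4-entry colormap: dark blue -> cyan -> yellow -> red
cmap = [(0, 0, 128), (0, 255, 255), (255, 255, 0), (255, 0, 0)]
gray = np.array([[0, 70, 150, 250]], dtype=np.uint8)
rgb = pseudocolor(gray, cmap)
print(rgb[0])  # one colormap entry per stratum
```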

  14. SNAPSHOT SPECTRAL AND COLOR IMAGING USING A REGULAR DIGITAL CAMERA WITH A MONOCHROMATIC IMAGE SENSOR

    Directory of Open Access Journals (Sweden)

    J. Hauser

    2017-10-01

    Spectral imaging (SI) refers to the acquisition of the three-dimensional (3D) spectral cube of spatial and spectral data of a source object at a limited number of wavelengths in a given wavelength range. Snapshot spectral imaging (SSI) refers to the instantaneous acquisition of the spectral cube in a single shot, a process suitable for fast-changing objects. Known SSI devices exhibit large total track length (TTL), weight, and production costs, and relatively low optical throughput. We present a simple SSI camera based on a regular digital camera with (i) an added diffusing and dispersing phase-only static optical element at the entrance pupil (the diffuser) and (ii) tailored compressed sensing (CS) methods for digital processing of the diffused and dispersed (DD) image recorded on the image sensor. The diffuser is designed to mix the spectral cube data spectrally and spatially, thereby enabling convergence of its reconstruction by CS-based algorithms. In addition to performing SSI, this camera can perform color imaging using a monochromatic or gray-scale image sensor without color filter arrays.

  15. QR code based noise-free optical encryption and decryption of a gray scale image

    Science.gov (United States)

    Jiao, Shuming; Zou, Wenbin; Li, Xia

    2017-03-01

    In optical encryption systems, speckle noise is one major challenge in obtaining high quality decrypted images. This problem can be addressed by employing a QR code based noise-free scheme. Previous works have been conducted for optically encrypting a few characters or a short expression employing QR codes. This paper proposes a practical scheme for optically encrypting and decrypting a gray-scale image based on QR codes for the first time. The proposed scheme is compatible with common QR code generators and readers. Numerical simulation results reveal the proposed method can encrypt and decrypt an input image correctly.

  16. An FPGA-based heterogeneous image fusion system design method

    Science.gov (United States)

    Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong

    2011-08-01

    Taking advantage of the FPGA's low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible-light CCD camera and an infrared thermal imager are used as the image-capturing devices to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as gray-scale weighted averaging, maximum selection, and minimum selection are analyzed and compared. VHDL and a synchronous design method are utilized to produce a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Contrast experiments with the various fusion algorithms show that favorable heterogeneous image fusion quality can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
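The three fusion rules compared above can be stated as a simple floating-point reference model (the platform implements them in VHDL; Python is used here only to state the arithmetic):

```python
import numpy as np

def fuse(visible, infrared, rule="avg", w=0.5):
    """Pixel-level fusion rules: gray-scale weighted averaging,
    maximum selection, and minimum selection."""
    v, i = np.asarray(visible, dtype=float), np.asarray(infrared, dtype=float)
    if rule == "avg":
        return w * v + (1.0 - w) * i     # weighted average of the two channels
    if rule == "max":
        return np.maximum(v, i)          # keep the brighter pixel
    if rule == "min":
        return np.minimum(v, i)          # keep the darker pixel
    raise ValueError(rule)

v = np.array([[10, 200], [30, 90]])      # visible-light channel
i = np.array([[50, 100], [20, 250]])     # infrared channel
print(fuse(v, i, "avg"))  # [[ 30. 150.] [ 25. 170.]]
print(fuse(v, i, "max"))  # [[ 50. 200.] [ 30. 250.]]
print(fuse(v, i, "min"))  # [[ 10. 100.] [ 20.  90.]]
```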

  17. A Review of Imaging Methods for Prostate Cancer Detection

    Directory of Open Access Journals (Sweden)

    Saradwata Sarkar

    2016-01-01

    Imaging is playing an increasingly important role in the detection of prostate cancer (PCa). This review summarizes the key imaging modalities used in the diagnosis and localization of PCa: multiparametric ultrasound (US), multiparametric magnetic resonance imaging (MRI), MRI-US fusion imaging, and positron emission tomography (PET) imaging. Emphasis is laid on the biological and functional characteristics of tumors that rationalize the use of a specific imaging technique. Changes to the anatomical architecture of tissue can be detected by anatomical grayscale US and T2-weighted MRI. Tumors are known to progress through angiogenesis, a fact exploited by Doppler and contrast-enhanced US and by dynamic contrast-enhanced MRI. The increased cellular density of tumors is targeted by elastography and diffusion-weighted MRI. PET imaging employs several different radionuclides to target metabolic and cellular activities during tumor growth. Results from studies using these various imaging techniques are discussed and compared.

  18. Strategy for magnetic resonance imaging of the head: results of a semi-empirical model. Part 1

    International Nuclear Information System (INIS)

    Droege, R.T.; Wiener, S.N.; Rzeszotarski, M.S.

    1984-01-01

    This paper is an introduction to the lesion detection problems of MR. A mathematical model previously developed for normal anatomy has been extended to predict the appearance of any hypothetical lesion in magnetic resonance (MR) images of the head. The model is applied to selected clinical images to demonstrate the loss of lesion visibility attributable to "crossover" and the "boundary effect." The model is also used to explain the origins of these problems and to demonstrate that appropriate gray-scale manipulations can remedy them.

  19. Color correction with blind image restoration based on multiple images using a low-rank model

    Science.gov (United States)

    Li, Dong; Xie, Xudong; Lam, Kin-Man

    2014-03-01

    We present a method that can handle the color correction of multiple photographs with blind image restoration simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally, both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Because the correct color information and the spatial information of images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks, including image denoising, image deblurring, and gray-scale image colorizing, can be performed simultaneously. Experiments have verified that our method achieves consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.
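The low-rank property itself is easy to demonstrate with a truncated SVD; note that plain truncation is only the simplest instance of the idea, whereas robust recovery under gross corruptions (as described above) requires robust variants such as RPCA:

```python
import numpy as np

def low_rank(M, r):
    """Best rank-r approximation of M via truncated SVD (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# A stack of "local colors" from 4 images of the same scene under per-image
# gain changes is exactly rank 1 (one color profile times four gains).
gains = np.array([1.0, 0.9, 1.1, 1.0])
profile = np.array([10.0, 20.0, 30.0])
stack = np.outer(gains, profile)          # shape (4 images, 3 color samples)

approx = low_rank(stack, r=1)
print(np.allclose(approx, stack))         # True: the stack is exactly rank 1
```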

  1. Segmentation of complex document

    Directory of Open Access Journals (Sweden)

    Souad Oudjemia

    2014-06-01

    Full Text Available In this paper we present a method for the segmentation of document images with complex structure. The technique is based on the GLCM (grey-level co-occurrence matrix) and segments this type of document into three regions, namely 'graphics', 'background' and 'text'. Briefly, the method divides the document image into blocks of a size chosen after a series of tests, then applies the co-occurrence matrix to each block in order to extract five textural parameters: energy, entropy, sum entropy, difference entropy and standard deviation. These parameters are then used to classify the image into three regions using the k-means algorithm; the final segmentation is obtained by grouping connected pixels. Two performance measurements are carried out for both the graphics and text zones; we obtained a classification rate of 98.3% and a misclassification rate of 1.79%.
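    As a rough illustration of the pipeline described above, the following sketch computes a GLCM per block, extracts two of the five textural parameters (energy and entropy) plus the standard deviation, and clusters the feature vectors with k-means. The 8-level quantization, the horizontal co-occurrence offset and all function names are our own choices, not the paper's.

```python
import numpy as np

def glcm(block, levels=8):
    """Normalized grey-level co-occurrence matrix for horizontal neighbours."""
    q = block.astype(int) * levels // 256           # quantize to `levels` grey levels
    m = np.zeros((levels, levels))
    np.add.at(m, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    return m / m.sum()

def glcm_features(block):
    """Energy, entropy and standard deviation of one image block."""
    p = glcm(block)
    nz = p[p > 0]
    energy = float((p ** 2).sum())
    entropy = float(-(nz * np.log2(nz)).sum())
    return np.array([energy, entropy, block.std()])

def kmeans(X, k=3, iters=20):
    """Tiny k-means with a deterministic init (evenly spaced samples as centers)."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels
```

    Each block of the document image would be mapped to its feature vector and the vectors clustered into k = 3 classes ('graphics', 'background', 'text'); the final grouping of connected pixels is omitted here.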

  2. Embedding the shapes of regions of interest into a Clinical Document Architecture document.

    Science.gov (United States)

    Minh, Nguyen Hai; Yi, Byoung-Kee; Kim, Il Kon; Song, Joon Hyun; Binh, Pham Viet

    2015-03-01

    Sharing a medical image visually annotated by a region of interest with a remotely located specialist for consultation is a good practice. It may, however, require a special-purpose (and most likely expensive) system to send and view the images, which is an unfeasible solution in developing countries such as Vietnam. In this study, we design and implement interoperable methods based on the HL7 Clinical Document Architecture and the Extensible Stylesheet Language Transformations (XSLT) standards to seamlessly exchange and visually present the shapes of regions of interest using web browsers. We also propose a new integration architecture for a Clinical Document Architecture generator that enables embedding of regions of interest and simultaneous auto-generation of corresponding style sheets. Using the Clinical Document Architecture document and style sheet, a sender can transmit clinical documents and medical images together with coordinate values of regions of interest to recipients. Recipients can easily view the documents and display embedded regions of interest by rendering them in their web browser of choice. © The Author(s) 2014.

  3. The suitability of gray-scale electronic readers for dermatology journals.

    Science.gov (United States)

    Choi, Jae Eun; Kim, Dai Hyun; Seo, Soo Hong; Kye, Young Chul; Ahn, Hyo Hyun

    2014-12-01

    The rapid development of information and communication technology has replaced traditional books with electronic versions. Most print dermatology journals have been replaced by electronic journals (e-journals), which are readily used by clinicians and medical students. The objectives of this study were to determine whether e-readers are appropriate for reading dermatology journals, to conduct an attitude study among both medical personnel and students, and to find ways of improving e-book use in the field of dermatology. All articles in the Korean Journal of Dermatology published from January 2010 to December 2010 were used in this study. Dermatology house officers, student trainees in their fourth year of medical school, and interns at Korea University Medical Center participated in the study. After reading the articles on a Kindle 2, their impressions and evaluations were recorded using a questionnaire with a 5-point Likert scale. The results demonstrated that gray-scale e-readers might not be suitable for reading dermatology journals, particularly for case reports as compared with original articles. Only three of the thirty-one respondents preferred e-readers to printed papers. The most common suggestions from respondents for encouraging use of e-books in the field of dermatology were the introduction of a color display, followed by a touch-screen system, a cheaper price, and ready-to-print capabilities. In conclusion, our study demonstrated that current e-readers might not be suitable for reading dermatology journals. However, they may be utilized in selected situations according to the type and topic of the papers.

  4. Imaging evaluation of fetal vascular anomalies

    Energy Technology Data Exchange (ETDEWEB)

    Calvo-Garcia, Maria A.; Kline-Fath, Beth M.; Koch, Bernadette L.; Laor, Tal [MLC 5031 Cincinnati Children's Hospital Medical Center, Department of Radiology, Cincinnati, OH (United States); Adams, Denise M. [Cincinnati Children's Hospital Medical Center, Department of Pediatrics and Hemangioma and Vascular Malformation Center, Cincinnati, OH (United States); Gupta, Anita [Cincinnati Children's Hospital Medical Center, Department of Pathology, Cincinnati, OH (United States); Lim, Foong-Yen [Cincinnati Children's Hospital Medical Center, Pediatric Surgery and Fetal Center of Cincinnati, Cincinnati, OH (United States)

    2015-08-15

    Vascular anomalies can be detected in utero and should be considered in the setting of solid, mixed or cystic lesions in the fetus. Evaluation of the gray-scale and color Doppler US and MRI characteristics can guide diagnosis. We present a case-based pictorial essay to illustrate the prenatal imaging characteristics in 11 pregnancies with vascular malformations (5 lymphatic malformations, 2 Klippel-Trenaunay syndrome, 1 venous-lymphatic malformation, 1 Parkes-Weber syndrome) and vascular tumors (1 congenital hemangioma, 1 kaposiform hemangioendothelioma). Concordance between prenatal and postnatal diagnoses is analyzed, with further discussion regarding potential pitfalls in identification. (orig.)

  5. Imaging evaluation of fetal vascular anomalies

    International Nuclear Information System (INIS)

    Calvo-Garcia, Maria A.; Kline-Fath, Beth M.; Koch, Bernadette L.; Laor, Tal; Adams, Denise M.; Gupta, Anita; Lim, Foong-Yen

    2015-01-01

    Vascular anomalies can be detected in utero and should be considered in the setting of solid, mixed or cystic lesions in the fetus. Evaluation of the gray-scale and color Doppler US and MRI characteristics can guide diagnosis. We present a case-based pictorial essay to illustrate the prenatal imaging characteristics in 11 pregnancies with vascular malformations (5 lymphatic malformations, 2 Klippel-Trenaunay syndrome, 1 venous-lymphatic malformation, 1 Parkes-Weber syndrome) and vascular tumors (1 congenital hemangioma, 1 kaposiform hemangioendothelioma). Concordance between prenatal and postnatal diagnoses is analyzed, with further discussion regarding potential pitfalls in identification. (orig.)

  6. Neural Network Blind Equalization Algorithm Applied in Medical CT Image Restoration

    Directory of Open Access Journals (Sweden)

    Yunshan Sun

    2013-01-01

    Full Text Available A new algorithm for iterative blind image restoration is presented in this paper. The method extends blind equalization from the one-dimensional signal case to images. A neural network blind equalization algorithm is derived and used in conjunction with Zigzag coding to restore the original image; as a result, the effect of the PSF can be removed, which helps eliminate intersymbol interference (ISI). To obtain an estimate of the original image, the method optimizes a constant-modulus blind equalization cost function applied to the grayscale CT image using the conjugate gradient method. Analysis of the convergence performance of the algorithm verifies the feasibility of the method theoretically; meanwhile, simulation results and evaluations with recent image quality metrics are provided to assess the effectiveness of the proposed method.

  7. PCA-based polling strategy in machine learning framework for coronary artery disease risk assessment in intravascular ultrasound: A link between carotid and coronary grayscale plaque morphology.

    Science.gov (United States)

    Araki, Tadashi; Ikeda, Nobutaka; Shukla, Devarshi; Jain, Pankaj K; Londhe, Narendra D; Shrivastava, Vimal K; Banchhor, Sumit K; Saba, Luca; Nicolaides, Andrew; Shafique, Shoaib; Laird, John R; Suri, Jasjit S

    2016-05-01

    Percutaneous coronary interventional procedures need advance planning prior to stenting or an endarterectomy. Cardiologists use intravascular ultrasound (IVUS) for screening, risk assessment and stratification of coronary artery disease (CAD). We hypothesize that plaque components become vulnerable to rupture as plaque progresses. Currently, there are no standard grayscale IVUS tools for risk assessment of plaque rupture. This paper presents a novel strategy for risk stratification based on plaque morphology embedded with principal component analysis (PCA) for plaque feature dimensionality reduction and dominant feature selection. The risk assessment utilizes 56 grayscale coronary features in a machine learning framework while linking information from carotid and coronary plaque burdens due to their common genetic makeup. The system uses a support vector machine (SVM) combined with PCA for optimal and dominant coronary artery morphological feature extraction. The carotid-artery-proven intima-media thickness (cIMT) biomarker is adopted as the gold standard during the training phase of the machine learning system. For performance evaluation, a K-fold cross-validation protocol with 20 trials per fold is adopted. For choosing the dominant features out of the 56 grayscale features, a PCA polling strategy is adopted in which the original values of the features are unaltered. Different protocols are designed to establish the stability and reliability criteria of the coronary risk assessment system (cRAS). Using the PCA-based machine learning paradigm and cross-validation protocol, a classification accuracy of 98.43% (AUC 0.98) with K=10 folds using an SVM radial basis function (RBF) kernel was achieved. A reliability index of 97.32% and a machine learning stability criterion of 5% were met for the cRAS. This is the first computer-aided diagnosis (CADx) system of its kind that is able to demonstrate the ability of coronary
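    The abstract's PCA "polling" step, which selects dominant original features without altering their values, might be sketched as follows. The vote rule (summed absolute loadings over the leading principal components) and the function name are our own assumptions; the paper's SVM training and cross-validation are not reproduced here.

```python
import numpy as np

def pca_feature_polling(X, n_components=3, n_select=5):
    """Rank ORIGINAL features by their summed |loading| over the top
    principal components; the selected features keep their raw values."""
    Xc = X - X.mean(axis=0)
    # principal directions are the right-singular vectors of the centred data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    votes = np.abs(Vt[:n_components]).sum(axis=0)   # one "poll" per component
    ranking = np.argsort(votes)[::-1]               # most-voted features first
    return ranking[:n_select]
```

    The selected column indices would then feed the raw (unaltered) feature values to an SVM classifier.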

  8. [Present status and trend of heart fluid mechanics research based on medical image analysis].

    Science.gov (United States)

    Gan, Jianhong; Yin, Lixue; Xie, Shenghua; Li, Wenhua; Lu, Jing; Luo, Anguo

    2014-06-01

    With an introduction to the current main methods of heart fluid mechanics research, we examined the characteristics and weaknesses of three primary analysis methods, based on magnetic resonance imaging, color Doppler ultrasound, and grayscale ultrasound images, respectively. It is pointed out that particle image velocimetry (PIV), speckle tracking and block matching share the same nature: all three algorithms adopt block correlation. Further analysis shows that, with the development of information technology and sensors, research on cardiac function and fluid mechanics will in future focus on the energy transfer process of heart fluid, the characteristics of the chamber wall in relation to blood flow, and fluid-structure interaction.

  9. Real-time single image dehazing based on dark channel prior theory and guided filtering

    Science.gov (United States)

    Zhang, Zan

    2017-10-01

    Images and videos taken outdoors on foggy days are seriously degraded. In order to restore images degraded by fog and overcome the traditional dark channel prior algorithm's problem of residual fog at edges, we propose a new dehazing method. We first find the fog area in the dark channel map using a quadtree in order to obtain the estimated value of the transmittance. Then we regard the gray-scale image after guided filtering as the atmospheric light map and remove haze based on it. Box filtering and image down-sampling are also used to improve the processing speed. Finally, the atmospheric scattering model is used to restore the image. Extensive experiments show that the algorithm is effective, efficient and has a wide range of application.
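    The dark-channel estimate and the inversion of the atmospheric scattering model I = J·t + A·(1 − t) at the core of this family of methods can be sketched as below. Patch size, omega and the t0 floor follow common defaults from the dark channel prior literature; the quadtree search and guided-filtering refinement described above are omitted.

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel min over RGB, then a min-filter over patch x patch windows."""
    dc = img.min(axis=2)
    pad = patch // 2
    p = np.pad(dc, pad, mode='edge')
    out = np.empty_like(dc)
    h, w = dc.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + patch, j:j + patch].min()
    return out

def estimate_transmission(img, A, omega=0.95, patch=3):
    """t(x) = 1 - omega * dark_channel(I / A): the standard DCP estimate."""
    return 1.0 - omega * dark_channel(img / A, patch)

def recover(img, t, A, t0=0.1):
    """Invert the scattering model I = J*t + A*(1-t), flooring t at t0."""
    t = np.clip(t, t0, 1.0)[..., None]
    return (img - A) / t + A
```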

  10. IMAGE STEGANOGRAPHY WITH THE LEAST SIGNIFICANT BIT (LSB) METHOD

    Directory of Open Access Journals (Sweden)

    M. Miftakul Amin

    2014-02-01

    Full Text Available Security in delivering a secret message is an important factor in the spread of information in cyberspace. To protect a message so that it is delivered only to the party entitled to it, a message-concealment mechanism is needed. The purpose of this study was to hide a secret text message inside digital images in 24-bit RGB true-color format. The method used to insert the secret message is LSB (least significant bit) substitution, replacing the last (8th) bit of each RGB color component. The RGB file type was chosen because the message capacity is greater than with a grayscale image: three message bits can be inserted per pixel. Tests show that hiding messages in a digital image does not significantly reduce the quality of the image, and the hidden message can be extracted again, so that messages can be delivered to the recipient safely.
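    The embedding rule described above, replacing the 8th (least significant) bit of each RGB byte, takes only a few lines of NumPy. This sketch is our own illustration; the helper names and the bit ordering from `np.unpackbits` are assumptions, and no encryption of the message is included.

```python
import numpy as np

def embed(cover, message: bytes):
    """Hide `message` in the LSBs of a uint8 RGB image (3 bits per pixel)."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = cover.reshape(-1).copy()
    if bits.size > flat.size:
        raise ValueError("cover image too small for message")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # clear LSB, then set it
    return flat.reshape(cover.shape)

def extract(stego, n_bytes):
    """Read back `n_bytes` of hidden message from the LSBs."""
    bits = stego.reshape(-1)[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()
```

    Each byte changes by at most 1, which is why the visual quality loss is negligible.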

  11. Devil’s Vortex Phase Structure as Frequency Plane Mask for Image Encryption Using the Fractional Mellin Transform

    Directory of Open Access Journals (Sweden)

    Sunanda Vashisth

    2014-01-01

    Full Text Available A frequency plane phase mask based on a Devil's vortex structure has been used for image encryption using the fractional Mellin transform. The phase key for decryption is obtained by an iterative phase retrieval algorithm. The proposed scheme has been validated for grayscale secret target images by numerical simulation. The efficacy of the scheme has been evaluated by computing the mean squared error between the secret target image and the decrypted image. Sensitivity analysis of the decryption process to variations in various encryption parameters has been carried out. The proposed encryption scheme exhibits reasonable robustness against occlusion attack.

  12. A Simple Encryption Algorithm for Quantum Color Image

    Science.gov (United States)

    Li, Panchi; Zhao, Ya

    2017-06-01

    In this paper, a simple encryption scheme for quantum color images is proposed. First, a color image is transformed into a quantum superposition state by employing NEQR (novel enhanced quantum representation), where the R, G, B values of every pixel in a 24-bit RGB true-color image are represented by 24 single-qubit basis states, with 8 qubits per channel value. Then, these 24 qubits are each transformed from a basis state into a balanced superposition state by employing controlled rotation gates. At this point, the gray-scale values of R, G, B of every pixel are in a balanced superposition of 2^24 multi-qubit basis states. After measurement, the whole image is uniform white noise, which provides no information. Decryption is the reverse process of encryption. Experimental results on a classical computer show that the proposed encryption scheme has better security.

  13. Representation and traversal of documentation space. Data analysis, neural networks and image banks

    International Nuclear Information System (INIS)

    Lelu, A.; Rosenblatt, D.

    1986-01-01

    Improvements in the visual representation of large amounts of data are necessary for progress in documentation systems. We review practical implementations in this area, which additionally integrate concepts arising from data analysis in the most general sense. The relationship between data analysis and neural networks is then established. Following a description of simulation experiments, we finally present software for outputting and traversing image banks that integrates most of the concepts developed in this article.

  14. Feature Matching for SAR and Optical Images Based on Gaussian-Gamma-shaped Edge Strength Map

    Directory of Open Access Journals (Sweden)

    CHEN Min

    2016-03-01

    Full Text Available A matching method for SAR and optical images, robust to pixel noise and nonlinear grayscale differences, is presented. Firstly, a rough correction is performed to eliminate rotation and scale change between the images. Secondly, features robust to the speckle noise of SAR images are detected by improving the original phase-congruency-based method. Then, feature descriptors are constructed on the Gaussian-Gamma-shaped edge strength map according to the histogram-of-oriented-gradients pattern. Finally, descriptor similarity and geometrical relationships are combined to constrain the matching process. The experimental results demonstrate that the proposed method provides a significant improvement in the number of correct matches and in image registration accuracy compared with traditional methods.

  15. Indian Language Document Analysis and Understanding

    Indian Academy of Sciences (India)

    documents would contain text of more than one script (for example, English, Hindi and the ... O'Gorman and Govindaraju provide a good overview on document image ... word level in bilingual documents containing Roman and Tamil scripts.

  16. Research of x-ray automatic image mosaic method

    Science.gov (United States)

    Liu, Bin; Chen, Shunan; Guo, Lianpeng; Xu, Wanpeng

    2013-10-01

    Image mosaicking has wide application value in medical image analysis: it spatially registers a series of images that overlap one another and builds a seamless, high-quality image with high resolution and a large field of view. In this paper, grayscale-cutting pseudo-color enhancement was first used to complete the mapping from gray values to pseudo-color, and SIFT features were extracted from the images. Then, using NCC (normalized cross-correlation) as the similarity measure, RANSAC (random sample consensus) was applied to reject false feature points and complete the exact matching of feature points. Finally, seamless mosaicking and color fusion were completed using wavelet multi-level decomposition. Experiments show that this method effectively improves the precision and automation of medical image mosaicking, and provides an effective technical approach for automatic medical image mosaicking.
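    The NCC similarity measure used in the matching step can be sketched as follows; the exhaustive template search is shown only for clarity (the paper matches SIFT feature points and prunes outliers with RANSAC, which is not reproduced here).

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size patches, in [-1, 1]."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_template(image, tpl):
    """Slide the template over the image and return the best-scoring offset."""
    th, tw = tpl.shape
    best, pos = -2.0, None
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            s = ncc(image[i:i + th, j:j + tw], tpl)
            if s > best:
                best, pos = s, (i, j)
    return pos, best
```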

  17. Multi-sparse dictionary colorization algorithm based on the feature classification and detail enhancement

    Science.gov (United States)

    Yan, Dan; Bai, Lianfa; Zhang, Yi; Han, Jing

    2018-02-01

    To address the missing details and limited performance of colorization based on sparse representation, we propose a conceptual model framework for colorizing gray-scale images, and then a multi-sparse-dictionary colorization algorithm based on feature classification and detail enhancement (CEMDC) built on this framework. The algorithm achieves a natural colorization effect for a gray-scale image, consistent with human vision. First, the algorithm establishes a multi-sparse-dictionary classification colorization model. Then, to improve the accuracy of the classification, a corresponding local-constraint algorithm is proposed. Finally, we propose a detail enhancement based on the Laplacian pyramid, which is effective in solving the problem of missing details and improving the speed of image colorization. In addition, the algorithm not only colorizes visual gray-scale images, but can also be applied to other areas, such as color transfer between color images and the colorization of gray-scale fusion images and infrared images.

  18. Image processor of model-based vision system for assembly robots

    International Nuclear Information System (INIS)

    Moribe, H.; Nakano, M.; Kuno, T.; Hasegawa, J.

    1987-01-01

    A special-purpose image preprocessor for the visual system of assembly robots has been developed. The main functional unit is composed of lookup tables, exploiting the advantages of semiconductor memory: large-scale integration, high speed and low price. More than one unit may be operated in parallel, since the design is based on the standard IEEE 796 bus. The preprocessor's line-segment extraction typically takes 200 ms per 500 segments, though the time varies with the complexity of the scene image. The gray-scale vision system, supported by a model-based analysis program using the extracted line segments, recognizes partially visible or overlapping industrial workpieces and determines their locations and orientations

  19. Comparative study of digital laser film and analog paper image recordings

    International Nuclear Information System (INIS)

    Lee, K.R.; Cox, G.G.; Templeton, A.W.; Preston, D.F.; Anderson, W.H.; Hensley, K.S.; Dwyer, S.J.

    1987-01-01

    The increase in the use of various imaging modalities demands higher quality and more efficacious analog image recordings. Laser electronic recordings with digital array prints of 4,000 x 5,000 x 12 bits obtained using laser-sensitive film or paper are being evaluated. Dry silver paper recordings are being improved and evaluated. High-resolution paper dot printers are being studied to determine their gray-scale capabilities. The authors evaluated the image quality, costs, clinical utilization, and acceptability of CT scans, MR images, digital subtraction angiograms, digital radiographs, and radionuclide scans recorded by seven different printers (three laser, three silver paper, and one dot) and compared the same features in conventional film recording. This exhibit outlines the technical developments and instrumentation of digital laser film and analog paper recorders and presents the results of the study

  20. Brain MR image segmentation using NAMS in pseudo-color.

    Science.gov (United States)

    Li, Hua; Chen, Chuanbo; Fang, Shaohong; Zhao, Shengrong

    2017-12-01

    Image segmentation plays a crucial role in various biomedical applications. In general, the segmentation of brain magnetic resonance (MR) images is mainly used to represent the image with several homogeneous regions instead of pixels for surgical analysis and planning. This paper proposes a new approach for segmenting MR brain images by using pseudo-color based segmentation with the Non-symmetry and Anti-packing Model with Squares (NAMS). First, the NAMS model is presented. The model can represent the image with sub-patterns that keep the image content and largely reduce data redundancy. Second, the key idea is to convert the original gray-scale brain MR image into a pseudo-colored image and then segment the pseudo-colored image with the NAMS model. The pseudo-colored image can enhance the color contrast between different tissues in brain MR images, which improves the precision of segmentation as well as direct visual perceptual distinction. Experimental results indicate that, compared with other brain MR image segmentation methods, the proposed NAMS-based pseudo-color segmentation method performs better, both segmenting more precisely and saving storage.

  1. CFA-aware features for steganalysis of color images

    Science.gov (United States)

    Goljan, Miroslav; Fridrich, Jessica

    2015-03-01

    Color interpolation is a form of upsampling, which introduces constraints on the relationship between neighboring pixels in a color image. These constraints can be utilized to substantially boost the accuracy of steganography detectors. In this paper, we introduce a rich model formed by 3D co-occurrences of color noise residuals split according to the structure of the Bayer color filter array to further improve detection. Some color interpolation algorithms, AHD and PPG, impose pixel constraints so tight that extremely accurate detection becomes possible with merely eight features eliminating the need for model richification. We carry out experiments on non-adaptive LSB matching and the content-adaptive algorithm WOW on five different color interpolation algorithms. In contrast to grayscale images, in color images that exhibit traces of color interpolation the security of WOW is significantly lower and, depending on the interpolation algorithm, may even be lower than non-adaptive LSB matching.

  2. Optical image security using Stokes polarimetry of spatially variant polarized beam

    Science.gov (United States)

    Fatima, Areeba; Nishchal, Naveen K.

    2018-06-01

    We propose a novel security scheme that uses a vector beam characterized by a spatially variant polarization distribution. The vector beam is generated such that its helical components carry tailored phases corresponding to the image or images to be encrypted. The tailoring of the phase is done by employing the modified Gerchberg-Saxton algorithm for phase retrieval. The Stokes parameters of the final vector beam are evaluated and used to construct the ciphertext and one of the keys. The advantage of the proposed scheme is that it generates real ciphertexts and keys, which are easier to transmit and store than complex quantities. Moreover, the known-plaintext attack is not applicable to this system. As a proof of concept, simulation results are presented for securing single and double gray-scale images.

  3. Steganography: LSB Methodology

    Science.gov (United States)

    2012-08-02

    Progress report on the LSB methodology for steganography. The report discusses the detection of LSB steganography in grayscale and color images from a general perspective, building on the J. Fridrich, M. Goljan and R. Du paper "Reliable detection of LSB steganography in grayscale and color images" (in J. Dittmann, K. Nahrstedt, and P. Wohlmacher, editors, Proceedings of the ACM, Special ...). In computer science, steganography is the science ...

  4. A hybrid flower pollination algorithm based modified randomized location for multi-threshold medical image segmentation.

    Science.gov (United States)

    Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou

    2015-01-01

    Multi-threshold image segmentation is a powerful image processing technique used in the preprocessing stages of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they exhaustively search for the thresholds that optimize the objective function. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values maximizing Otsu's objective function on eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves robust and effective in numerical experiments, as measured by Otsu's objective values and standard deviations.
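    Otsu's between-class variance objective that the flower pollination search maximizes can be written directly from a 256-bin histogram. The coarse brute-force grid below stands in for the metaheuristic and illustrates exactly the exhaustive search the paper seeks to avoid; function names and the grid step are our own.

```python
import numpy as np
from itertools import combinations

def otsu_objective(hist, thresholds):
    """Between-class variance for the given thresholds on a 256-bin histogram."""
    p = hist / hist.sum()
    levels = np.arange(256)
    edges = [0, *thresholds, 256]
    mu_total = (p * levels).sum()
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                       # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

def exhaustive_multi_otsu(hist, k=2, step=8):
    """Brute-force search on a coarse grid: the expensive baseline that
    metaheuristics such as flower pollination are meant to replace."""
    best, best_t = -1.0, None
    for t in combinations(range(step, 256, step), k):
        v = otsu_objective(hist, t)
        if v > best:
            best, best_t = v, t
    return best_t, best
```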

  5. Standardizing display conditions of diffusion-weighted images using concurrent b0 images. A multi-vendor multi-institutional study

    International Nuclear Information System (INIS)

    Sasaki, Makoto; Ida, Masahiro; Yamada, Kei; Watanabe, Yoshiyuki; Matsui, Mieko

    2007-01-01

    The purpose of this study was to establish a practical method that uses concurrent b0 images to standardize the display conditions for diffusion-weighted images (DWI), which vary among institutions and interpreters. Using identical parameters, we obtained DWI for 12 healthy volunteers at 4 institutions using 4 MRI scanners from 3 vendors. Three operators manually set the window width for the images equal to the signal intensity of the normal-appearing thalamus on the b0 images, set the window level to half that value, and then exported the images as 8-bit gray-scale images. We calculated the mean pixel values of the brain objects in the images and examined the variation among scanners, operators, and subjects. Following our method, the DWI of the 12 subjects obtained using the 4 different scanners had nearly identical contrast and brightness. The mean pixel values of the brain on the exported images did not differ significantly among operators or subjects, but we found a slight, significant difference among the scanners. Determining DWI display conditions from b0 images is a simple and practical way to standardize window width and level for evaluating diffusion abnormalities and decreasing variation among institutions and operators. (author)
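    The described rule, window width equal to the thalamic signal on the b0 image and window level equal to half of it, amounts to a simple linear 8-bit mapping. The sketch below is our own restatement of that rule; the function names are assumptions.

```python
import numpy as np

def window_to_8bit(dwi, width, level):
    """Linear window/level mapping of raw signal to 8-bit grayscale."""
    lo, hi = level - width / 2, level + width / 2
    out = (dwi - lo) / (hi - lo)
    return (np.clip(out, 0, 1) * 255).round().astype(np.uint8)

def standardized_display(dwi, b0_thalamus_mean):
    """Width = thalamic signal on the b0 image, level = half of it,
    per the standardization protocol described above."""
    return window_to_8bit(dwi, width=b0_thalamus_mean, level=b0_thalamus_mean / 2)
```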

  6. To Image...or Not to Image?

    Science.gov (United States)

    Bruley, Karina

    1996-01-01

    Provides a checklist of considerations for installing document image processing with an electronic document management system. Other topics include scanning; indexing; the image file life cycle; benefits of imaging; document-driven workflow; and planning for workplace changes like postsorting, creating a scanning room, redeveloping job tasks and…

  7. A Proposal on the Quantitative Homogeneity Analysis Method of SEM Images for Material Inspections

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Song Hyun; Kim, Jong Woo; Shin, Chang Ho [Hanyang University, Seoul (Korea, Republic of); Choi, Jung-Hoon; Cho, In-Hak; Park, Hwan Seo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-05-15

    A scanning electron microscope (SEM) is an instrument for inspecting the surface microstructure of materials. The SEM uses electron beams to image material surfaces at high magnification, and various chemical analyses can be performed from the SEM images. It is therefore widely used for material inspection, chemical characterization, and biological analysis. In the field of nuclear criticality analysis, the homogeneity of a compound material is an important parameter for its use in a nuclear system. In our previous study, we attempted to use the SEM for homogeneity analysis of materials. In this study, a quantitative homogeneity analysis method for SEM images is proposed for material inspections. The method is based on stochastic analysis of the grayscale information of the SEM images.
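    The abstract does not detail the stochastic grayscale analysis, so the sketch below is only one plausible reading: compare the grayscale histogram of each sub-tile of an SEM image against the whole-image histogram and average the deviations. The tile count, bin count and L1 deviation measure are entirely our assumptions.

```python
import numpy as np

def homogeneity_index(image, tiles=4, bins=16):
    """Average L1 deviation of sub-tile grayscale histograms from the
    whole-image histogram; 0 = perfectly homogeneous, larger = less so."""
    ref, _ = np.histogram(image, bins=bins, range=(0, 256), density=True)
    h, w = image.shape
    th, tw = h // tiles, w // tiles
    devs = []
    for i in range(tiles):
        for j in range(tiles):
            tile = image[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            ht, _ = np.histogram(tile, bins=bins, range=(0, 256), density=True)
            devs.append(np.abs(ht - ref).sum())
    return float(np.mean(devs))
```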

  8. A Proposal on the Quantitative Homogeneity Analysis Method of SEM Images for Material Inspections

    International Nuclear Information System (INIS)

    Kim, Song Hyun; Kim, Jong Woo; Shin, Chang Ho; Choi, Jung-Hoon; Cho, In-Hak; Park, Hwan Seo

    2015-01-01

    A scanning electron microscope (SEM) is an instrument for inspecting the surface microstructure of materials. The SEM uses electron beams to image material surfaces at high magnification, and various chemical analyses can be performed from the SEM images. It is therefore widely used for material inspection, chemical characterization, and biological analysis. In the field of nuclear criticality analysis, the homogeneity of a compound material is an important parameter for its use in a nuclear system. In our previous study, we attempted to use the SEM for homogeneity analysis of materials. In this study, a quantitative homogeneity analysis method for SEM images is proposed for material inspections. The method is based on stochastic analysis of the grayscale information of the SEM images

  9. Edge detection based on computational ghost imaging with structured illuminations

    Science.gov (United States)

    Yuan, Sheng; Xiang, Dong; Liu, Xuemei; Zhou, Xin; Bing, Pibin

    2018-03-01

    Edge detection is one of the most important tools to recognize the features of an object. In this paper, we propose an optical edge detection method based on computational ghost imaging (CGI) with structured illuminations which are generated by an interference system. The structured intensity patterns are designed to make the edge of an object be directly imaged from detected data in CGI. This edge detection method can extract the boundaries for both binary and grayscale objects in any direction at one time. We also numerically test the influence of distance deviations in the interference system on edge extraction, i.e., the tolerance of the optical edge detection system to distance deviation. Hopefully, it may provide a guideline for scholars to build an experimental system.
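    The underlying CGI reconstruction is a correlation between the bucket signal S and the illumination patterns I: G(x, y) = ⟨S·I(x, y)⟩ − ⟨S⟩⟨I(x, y)⟩. The sketch below uses plain random patterns as an illustrative baseline of our own; the interference-generated structured patterns that extract edges directly are the paper's contribution and are not reproduced here.

```python
import numpy as np

def cgi_reconstruct(patterns, signals):
    """Ghost-image estimate G(x,y) = <S * I(x,y)> - <S><I(x,y)>
    over M illumination patterns (shape (M, H, W)) and M bucket readings."""
    s = signals[:, None, None]
    return (s * patterns).mean(0) - signals.mean() * patterns.mean(0)
```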

  10. TV-L1 optical flow for vector valued images

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau; Roholm, Lars; Nielsen, Mads

    2011-01-01

The variational TV-L1 framework has become one of the most popular and successful approaches for calculating optical flow. One reason for its popularity is the very appealing properties of the two terms in the energy formulation of the problem: the robust L1-norm of the data fidelity term combined with the total variation (TV) regularization that smooths the flow but preserves strong discontinuities such as edges. Specifically, the approach of Zach et al. [1] has provided a very clean and efficient algorithm for calculating TV-L1 optical flows between grayscale images. In this paper we propose...

  11. A state-of-the-art review on segmentation algorithms in intravascular ultrasound (IVUS) images.

    Science.gov (United States)

    Katouzian, Amin; Angelini, Elsa D; Carlier, Stéphane G; Suri, Jasjit S; Navab, Nassir; Laine, Andrew F

    2012-09-01

    Over the past two decades, intravascular ultrasound (IVUS) image segmentation has remained a challenge for researchers while the use of this imaging modality is rapidly growing in catheterization procedures and in research studies. IVUS provides cross-sectional grayscale images of the arterial wall and the extent of atherosclerotic plaques with high spatial resolution in real time. In this paper, we review recently developed image processing methods for the detection of media-adventitia and luminal borders in IVUS images acquired with different transducers operating at frequencies ranging from 20 to 45 MHz. We discuss methodological challenges, lack of diversity in reported datasets, and weaknesses of quantification metrics that make IVUS segmentation still an open problem despite all efforts. In conclusion, we call for a common reference database, validation metrics, and ground-truth definition with which new and existing algorithms could be benchmarked.

  12. Automatic anatomically selective image enhancement in digital chest radiography

    International Nuclear Information System (INIS)

    Sezan, M.I.; Minerbo, G.N.; Schaetzing, R.

    1989-01-01

    The authors develop a technique for automatic anatomically selective enhancement of digital chest radiographs. Anatomically selective enhancement is motivated by the desire to simultaneously meet the different enhancement requirements of the lung field and the mediastinum. A recent peak detection algorithm and a set of rules are applied to the image histogram to determine automatically a gray-level threshold between the lung field and mediastinum. The gray-level threshold facilitates anatomically selective gray-scale modification and/or unsharp masking. Further, in an attempt to suppress possible white-band or black-band artifacts due to unsharp masking at sharp edges, local-contrast adaptivity is incorporated into anatomically selective unsharp masking by designing an anatomy-sensitive emphasis parameter which varies asymmetrically with positive and negative values of the local image contrast
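A minimal sketch of the histogram-based thresholding idea: assuming the two most populated histogram bins correspond to the lung-field and mediastinum peaks, the gray level at the deepest valley between them serves as the anatomical threshold (the actual peak-detection algorithm and rule set are not given in the abstract):

```python
import numpy as np

def valley_threshold(img, bins=256):
    """Gray level at the deepest histogram valley between the two
    highest peaks -- a simplified stand-in for peak detection + rules."""
    hist, edges = np.histogram(img, bins=bins)
    # take the two most populated bins as the two anatomical peaks
    p1, p2 = sorted(np.argsort(hist)[-2:])
    valley = p1 + np.argmin(hist[p1:p2 + 1])
    return edges[valley]
```

Pixels below the returned level would then receive mediastinum-oriented gray-scale modification and those above it lung-field-oriented enhancement.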

  13. IHE cross-enterprise document sharing for imaging: interoperability testing software

    Directory of Open Access Journals (Sweden)

    Renaud Bérubé

    2010-09-01

Full Text Available Abstract Background With the deployment of Electronic Health Records (EHR), interoperability testing in healthcare is becoming crucial. EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems. Interoperability between peers is therefore essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners, and that provides test data and test plans. Results In this paper we describe software that is used to test systems involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross Enterprise Document Sharing for Imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also expose the challenges encountered and discuss the chosen design solutions. Conclusions EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow for easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, or by developers to understand specification ambiguities or to resolve implementation difficulties.

  14. Digitization of medical documents: an X-Windows application for fast scanning.

    Science.gov (United States)

    Muñoz, A; Salvador, C H; Gonzalez, M A; Dueñas, A

    1992-01-01

    This paper deals with digitization, using a commercial scanner, of medical documents as still images for introduction into a computer-based Information System. Document management involves storing, editing and transmission. This task has usually been approached from the perspective of the difficulties posed by radiologic images because of their indisputable qualitative and quantitative significance. However, healthcare activities require the management of many other types of documents and involve the requirements of numerous users. One key to document management will be the availability of a digitizer to deal with the greatest possible number of different types of documents. This paper describes the relevant aspects of documents and the technical specifications that digitizers must fulfill. The concept of document type is introduced as the ideal set of digitizing parameters for a given document. The use of document type parameters can drastically reduce the time the user spends in scanning sessions. Presentation is made of an application based on Unix, X-Windows and OSF/Motif, with a GPIB interface, implemented around the document type concept. Finally, the results of the evaluation of the application are presented, focusing on the user interface, as well as on the viewing of color images in an X-Windows environment and the use of lossy algorithms in the compression of medical images.

  15. Experimental determination of chosen document elements parameters from raster graphics sources

    Directory of Open Access Journals (Sweden)

    Jiří Rybička

    2010-01-01

Full Text Available Visual appearance of documents and their formal quality is considered to be as important as the content quality. Formal and typographical quality of documents can be evaluated by an automated system that processes raster images of documents. A document is described by a formal model that treats a page as an object and also as a set of elements, where page elements include text and graphic objects. All elements are described by parameters depending on the element type. For further evaluation, mainly text objects are important. This paper describes the experimental determination of chosen document element parameters from raster images. Techniques for image processing are used, where an image is represented as a matrix of dots and parameter values are extracted. Algorithms for parameter extraction from raster images were designed and aimed mainly at typographical parameters like indentation, alignment, font size or spacing. The algorithms were tested on a set of 100 images of paragraphs or pages and provide very good results. The extracted parameters can be directly used for typographical quality evaluation.
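As an illustration of extracting one such typographical parameter, a first-line indentation estimate can be read off the pixel projections of a binarized paragraph image (the `ink` threshold and the projection rule are assumptions for this sketch, not the authors' algorithm):

```python
import numpy as np

def left_indent(img, ink=0.5):
    """First-line indentation in pixels: the first inked column of the
    topmost text row minus the paragraph's overall left margin.

    `img` is a grayscale image with dark ink on a light background.
    """
    dark = img < ink                      # ink = dark pixels
    rows = np.flatnonzero(dark.any(axis=1))
    first = np.argmax(dark[rows[0]])      # first inked column, first text row
    margin = np.argmax(dark.any(axis=0))  # leftmost inked column overall
    return first - margin
```

Alignment, spacing, and font size can be estimated from the same row/column projections by measuring gaps between inked runs.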

  16. Extremely secure identification documents

    International Nuclear Information System (INIS)

    Tolk, K.M.; Bell, M.

    1997-09-01

    The technology developed in this project uses biometric information printed on the document and public key cryptography to ensure that an adversary cannot issue identification documents to unauthorized individuals or alter existing documents to allow their use by unauthorized individuals. This process can be used to produce many types of identification documents with much higher security than any currently in use. The system is demonstrated using a security badge as an example. This project focused on the technologies requiring development in order to make the approach viable with existing badge printing and laminating technologies. By far the most difficult was the image processing required to verify that the picture on the badge had not been altered. Another area that required considerable work was the high density printed data storage required to get sufficient data on the badge for verification of the picture. The image processing process was successfully tested, and recommendations are included to refine the badge system to ensure high reliability. A two dimensional data array suitable for printing the required data on the badge was proposed, but testing of the readability of the array had to be abandoned due to reallocation of the budgeted funds by the LDRD office
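The hash-then-sign idea behind such documents can be sketched with textbook RSA on toy primes (for illustration only, and deliberately insecure; the project's actual key sizes, hash function, and badge data layout are not described in the abstract):

```python
import hashlib

# Textbook RSA with toy primes -- illustration only, NOT secure.
p, q = 61, 53
n, e, d = p * q, 17, 2753        # e*d = 1 (mod lcm(p-1, q-1))

def sign(badge_data: bytes) -> int:
    """Issuer signs the hash of the badge's photo + biometric data
    with the private exponent d."""
    m = int.from_bytes(hashlib.sha256(badge_data).digest(), "big") % n
    return pow(m, d, n)

def verify(badge_data: bytes, sig: int) -> bool:
    """Anyone holding the public key (n, e) can check that the badge
    content has not been altered since issuance."""
    m = int.from_bytes(hashlib.sha256(badge_data).digest(), "big") % n
    return pow(sig, e, n) == m
```

Printing the signature on the badge (e.g. in the two-dimensional data array mentioned above) lets a verifier detect any alteration of the picture or biometric data without contacting the issuer.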

  17. A Pivotal Study of Optoacoustic Imaging to Diagnose Benign and Malignant Breast Masses: A New Evaluation Tool for Radiologists.

    Science.gov (United States)

    Neuschler, Erin I; Butler, Reni; Young, Catherine A; Barke, Lora D; Bertrand, Margaret L; Böhm-Vélez, Marcela; Destounis, Stamatia; Donlan, Pamela; Grobmyer, Stephen R; Katzen, Janine; Kist, Kenneth A; Lavin, Philip T; Makariou, Erini V; Parris, Tchaiko M; Schilling, Kathy J; Tucker, F Lee; Dogan, Basak E

    2018-05-01

    Purpose To compare the diagnostic utility of an investigational optoacoustic imaging device that fuses laser optical imaging (OA) with grayscale ultrasonography (US) to grayscale US alone in differentiating benign and malignant breast masses. Materials and Methods This prospective, 16-site study of 2105 women (study period: 12/21/2012 to 9/9/2015) compared Breast Imaging Reporting and Data System (BI-RADS) categories assigned by seven blinded independent readers to benign and malignant breast masses using OA/US versus US alone. BI-RADS 3, 4, or 5 masses assessed at diagnostic US with biopsy-proven histologic findings and BI-RADS 3 masses stable at 12 months were eligible. Independent readers reviewed US images obtained with the OA/US device, assigned a probability of malignancy (POM) and BI-RADS category, and locked results. The same independent readers then reviewed OA/US images, scored OA features, and assigned OA/US POM and a BI-RADS category. Specificity and sensitivity were calculated for US and OA/US. Benign and malignant mass upgrade and downgrade rates, positive and negative predictive values, and positive and negative likelihood ratios were compared. Results Of 2105 consented subjects with 2191 masses, 100 subjects (103 masses) were analyzed separately as a training population and excluded. An additional 202 subjects (210 masses) were excluded due to technical failures or incomplete imaging, 72 subjects (78 masses) due to protocol deviations, and 41 subjects (43 masses) due to high-risk histologic results. Of 1690 subjects with 1757 masses (1079 [61.4%] benign and 678 [38.6%] malignant masses), OA/US downgraded 40.8% (3078/7535) of benign mass reads, with a specificity of 43.0% (3242/7538, 99% confidence interval [CI]: 40.4%, 45.7%) for OA/US versus 28.1% (2120/7543, 99% CI: 25.8%, 30.5%) for the internal US of the OA/US device. OA/US exceeded US in specificity by 14.9% (P < .0001; 99% CI: 12.9, 16.9%). Sensitivity for biopsied malignant masses was 96

  18. Encryption of Stereo Images after Compression by Advanced Encryption Standard (AES

    Directory of Open Access Journals (Sweden)

    Marwah k Hussien

    2018-04-01

Full Text Available New partial encryption schemes are proposed, in which a secure encryption algorithm is used to encrypt only part of the compressed data. Partial encryption is applied after the image compression algorithm. Only 0.0244%-25% of the original data is encrypted for two pairs of different grayscale images with the size 256 × 256 pixels. As a result, we see a significant reduction of time in the encryption and decryption stages. In the compression step, the Orthogonal Search Algorithm (OSA) for motion estimation (the difference between stereo images) is used. The resulting disparity vector and the remaining image were compressed by the Discrete Cosine Transform (DCT), quantization and arithmetic encoding. The compressed image was encrypted by the Advanced Encryption Standard (AES). The images were then decoded and compared with the original images. Experimental results showed good performance in terms of Peak Signal-to-Noise Ratio (PSNR), Compression Ratio (CR) and processing time. The proposed partial encryption schemes are fast, secure and do not reduce the compression performance of the underlying compression methods.
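The partial-encryption idea can be sketched as enciphering only the leading fraction of the compressed bitstream. A SHA-256 counter-mode keystream stands in for AES-CTR below so the sketch stays dependency-free; the paper itself uses AES, and the 25% fraction is one of its reported settings:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """SHA-256 counter-mode keystream -- a stand-in for AES-CTR here."""
    out, ctr = bytearray(), 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(out[:n])

def partial_encrypt(data: bytes, key: bytes, fraction=0.25) -> bytes:
    """Encrypt only the leading fraction of the compressed bitstream;
    the rest passes through untouched. XOR makes this its own inverse."""
    cut = max(1, int(len(data) * fraction))
    head = bytes(a ^ b for a, b in zip(data[:cut], keystream(key, cut)))
    return head + data[cut:]
```

Because entropy-coded bitstreams are useless without their headers and leading symbols, scrambling only the head can render the whole image undecodable at a fraction of the cost of full encryption.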

  19. Perceptual distortion analysis of color image VQ-based coding

    Science.gov (United States)

    Charrier, Christophe; Knoblauch, Kenneth; Cherifi, Hocine

    1997-04-01

It is generally accepted that an RGB color image can be easily encoded by using a gray-scale compression technique on each of the three color planes. Such an approach, however, fails to take into account correlations existing between color planes and perceptual factors. We evaluated several linear and non-linear color spaces, some introduced by the CIE, compressed with the vector quantization technique for minimum perceptual distortion. To study these distortions, we measured contrast and luminance of the video framebuffer, to precisely control color. We then obtained psychophysical judgements to measure how well these methods minimize perceptual distortion in a variety of color spaces.
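The vector quantization step can be sketched as plain k-means over color vectors; the distortion is then measured in whichever color space the vectors are expressed in (RGB, a CIE space, etc.). The cluster count and iteration budget below are arbitrary choices for illustration:

```python
import numpy as np

def vq_codebook(pixels, k=4, iters=10, seed=0):
    """Plain k-means vector quantization of color vectors (N x 3).

    Returns the codebook and the per-pixel code index. Running this in
    a perceptually uniform space makes Euclidean distortion track
    perceived color difference more closely than in RGB.
    """
    rng = np.random.default_rng(seed)
    book = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    idx = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # nearest codeword for every pixel
        d = np.linalg.norm(pixels[:, None, :] - book[None, :, :], axis=2)
        idx = d.argmin(axis=1)
        # move each codeword to the centroid of its cell
        for j in range(k):
            if np.any(idx == j):
                book[j] = pixels[idx == j].mean(axis=0)
    return book, idx
```

Coding all three components jointly, rather than plane by plane, is what lets VQ exploit the inter-plane correlations the opening sentence mentions.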

  20. Image storage in coumarin-based copolymer thin films by photoinduced dimerization.

    Science.gov (United States)

    Gindre, Denis; Iliopoulos, Konstantinos; Krupka, Oksana; Champigny, Emilie; Morille, Yohann; Sallé, Marc

    2013-11-15

    We report a technique to encode grayscale digital images in thin films composed of copolymers containing coumarins. A nonlinear microscopy setup was implemented and two nonlinear optical processes were used to store and read information. A third-order process (two-photon absorption) was used to photoinduce a controlled dimer-to-monomer ratio within a defined tiny volume in the material, which corresponds to each recorded bit of data. Moreover, a second-order process (second-harmonic generation) was used to read the stored information, which has been found to be highly dependent upon the monomer-to-dimer ratio.

  1. Web-based document and content management with off-the-shelf software

    International Nuclear Information System (INIS)

    Schuster, J.

    1999-01-01

This, then, is the current status of the project: Since we made the switch to Intradoc, we are now treating the project as a document and image management system. In reality, it could be considered a document and content management system, since we can manage almost any file input to the system, such as video or audio. At present, however, we are concentrating on images. As mentioned above, my CRADA funding was only targeted at including thumbnails of images in Intradoc. We still had to modify Intradoc so that it would compress images submitted to the system. All processing of files submitted to Intradoc is handled in what is called the Document Refinery. Even though MrSID created thumbnails in the process of compressing an image, work needed to be done to build this capability into the Document Refinery. Therefore we made the decision to contract the Intradoc Engineering Team to perform this custom development work. To make Intradoc even more capable of handling images, we have also contracted for customization of the Document Refinery to accept Adobe PhotoShop and Illustrator files in their native formats.

  2. Evaluation of mixed-signal noise effects in photon-counting X-ray image sensor readout circuits

    International Nuclear Information System (INIS)

    Lundgren, Jan; Abdalla, Suliman; O'Nils, Mattias; Oelmann, Bengt

    2006-01-01

    In readout electronics for photon-counting pixel detectors, the tight integration between analog and digital blocks causes the readout electronics to be sensitive to on-chip noise coupling. This noise coupling can result in faulty luminance values in grayscale X-ray images, or as color distortions in a color X-ray imaging system. An exploration of simulating noise coupling in readout circuits is presented which enables the discovery of sensitive blocks at as early a stage as possible, in order to avoid costly design iterations. The photon-counting readout system has been simulated for noise coupling in order to highlight the existing problems of noise coupling in X-ray imaging systems. The simulation results suggest that on-chip noise coupling should be considered and simulated in future readout electronics systems for X-ray detectors

  3. Extraction of Lesion-Partitioned Features and Retrieval of Contrast-Enhanced Liver Images

    Directory of Open Access Journals (Sweden)

    Mei Yu

    2012-01-01

Full Text Available The most critical step in grayscale medical image retrieval systems is feature extraction. Understanding the interrelatedness between the characteristics of lesion images and the corresponding imaging features is crucial for image training as well as for feature extraction. A feature-extraction algorithm is developed based on the different imaging properties of lesions and on the discrepancy in density between the lesions and their surrounding normal liver tissue in triple-phase contrast-enhanced computed tomographic (CT) scans. The algorithm includes mainly two processes: (1) distance transformation, which is used to divide the lesion into distinct regions and represents the spatial structure distribution, and (2) representation using a bag of visual words (BoW) based on regions. The evaluation of this system based on the proposed feature-extraction algorithm shows excellent retrieval results for three types of liver lesions visible on triple-phase CT scans. The results show that while single-phase scans achieve average precisions of 81.9%, 80.8%, and 70.2%, dual- and triple-phase scans achieve 86.3% and 88.0%.
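Process (1) can be sketched with a two-pass chamfer distance transform over a binary lesion mask; thresholding the resulting distances splits the lesion into concentric regions for region-based BoW features (the city-block metric is an assumption of this sketch; the paper does not specify one):

```python
import numpy as np

def distance_transform(mask):
    """Two-pass chamfer (city-block) distance from each lesion pixel to
    the nearest background pixel. Banding the distances partitions the
    lesion into concentric regions."""
    inf = 10**6
    h, w = mask.shape
    d = np.where(mask, inf, 0).astype(np.int64)
    for r in range(h):                  # forward pass
        for c in range(w):
            if r > 0:
                d[r, c] = min(d[r, c], d[r - 1, c] + 1)
            if c > 0:
                d[r, c] = min(d[r, c], d[r, c - 1] + 1)
    for r in range(h - 1, -1, -1):      # backward pass
        for c in range(w - 1, -1, -1):
            if r < h - 1:
                d[r, c] = min(d[r, c], d[r + 1, c] + 1)
            if c < w - 1:
                d[r, c] = min(d[r, c], d[r, c + 1] + 1)
    return d
```

For example, `d // band_width` assigns each lesion pixel to a ring, and a separate visual-word histogram can then be pooled per ring.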

  4. Establishment of the method of surface shaded display for brain PET imaging

    International Nuclear Information System (INIS)

    Zhang Xiangsong; Tang Anwu; He Zuoxiang

    2003-01-01

Objective: To establish the method of surface shaded display (SSD) for brain PET imaging. Methods: The original brain PET volume data were transferred to a personal computer over the local area network and scaled into 256 grayscale values between 0 and 255. An appropriate threshold could be selected with three different methods: based on the histogram, on the maximum percentage of the volume data, or on the opposite value percentage of the lesion. The list of vertices and triangles describing the contour surface was produced with a high resolution three-dimensional (3D) surface construction algorithm. Results: The final SSD software for brain PET imaging, with an interactive user interface, can produce 3D brain PET images which can be rotated, scaled, and saved or exported in several image formats. Conclusion: The SSD method for brain PET imaging can directly and integrally reflect the surface of the brain cortex, and is helpful for locating lesions and displaying their extent, but it cannot reflect the severity of lesions, nor display structures beneath the brain cortex.
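The 0-255 rescaling step, together with one plausible reading of the "maximum percentage of the volume data" threshold rule, can be sketched as follows (the quantile interpretation is an assumption of this sketch):

```python
import numpy as np

def to_grayscale(volume):
    """Linearly rescale PET volume data into 0..255 grayscale values."""
    v = volume.astype(float)
    lo, hi = v.min(), v.max()
    return np.round((v - lo) / (hi - lo) * 255).astype(np.uint8)

def percentage_threshold(gray, percent=0.3):
    """Threshold that keeps the given fraction of highest-valued voxels,
    read here as the 'maximum percentage' selection strategy."""
    return np.quantile(gray, 1.0 - percent)
```

Voxels above the threshold would then feed the 3D surface-construction algorithm that emits the vertex and triangle lists.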

  5. Reliability of ultrasound grading traditional score and new global OMERACT-EULAR score system (GLOESS): results from an inter- and intra-reading exercise by rheumatologists.

    Science.gov (United States)

    Ventura-Ríos, Lucio; Hernández-Díaz, Cristina; Ferrusquia-Toríz, Diana; Cruz-Arenas, Esteban; Rodríguez-Henríquez, Pedro; Alvarez Del Castillo, Ana Laura; Campaña-Parra, Alfredo; Canul, Efrén; Guerrero Yeo, Gerardo; Mendoza-Ruiz, Juan Jorge; Pérez Cristóbal, Mario; Sicsik, Sandra; Silva Luna, Karina

    2017-12-01

This study aims to test the reliability of ultrasound for grading synovitis in static and video images, evaluating grayscale and power Doppler (PD) separately and combined. Thirteen trained rheumatologist ultrasonographers participated in two separate rounds, reading 42 images (15 static and 27 videos) of the 7-joint count [wrist, 2nd and 3rd metacarpophalangeal (MCP), 2nd and 3rd interphalangeal (IPP), 2nd and 5th metatarsophalangeal (MTP) joints]. The images were from six patients with rheumatoid arthritis, acquired by one ultrasonographer. Synovitis was defined according to OMERACT. The scoring systems in grayscale, PD separately, and combined (GLOESS, Global OMERACT-EULAR Score System) were reviewed before the exercise. Intra- and inter-reading reliability was calculated with Cohen's weighted kappa, interpreted according to Landis and Koch. Kappa values for inter-reading were good to excellent. The lowest kappa was for GLOESS in static images, and the highest was for the same scoring in videos (k 0.59 and 0.85, respectively). Excellent values were obtained for static PD in the 5th MTP joint and for PD video in the 2nd MTP joint. Results for GLOESS in general were good to moderate. Poor agreement was observed in the 3rd MCP and 3rd IPP in all kinds of images. Intra-reading agreement was greater in grayscale and GLOESS for static images than for videos (k 0.86 vs. 0.77 and k 0.86 vs. 0.71, respectively), but PD was greater in videos than in static images (k 1.0 vs. 0.79). The reliability of synovitis scoring through static images and videos is in general good to moderate when using grayscale and PD separately or combined.
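Cohen's weighted kappa, the agreement statistic used above, can be computed directly; the linear weighting below is one common choice for ordinal synovitis grades, assumed coded 0..n_cat-1:

```python
import numpy as np

def weighted_kappa(a, b, n_cat):
    """Cohen's kappa with linear weights for ordinal grades 0..n_cat-1.

    Near-misses (e.g. grade 1 vs. grade 2) are penalized less than
    distant disagreements, which suits ordinal scoring systems.
    """
    a, b = np.asarray(a), np.asarray(b)
    obs = np.zeros((n_cat, n_cat))
    for i, j in zip(a, b):              # observed agreement matrix
        obs[i, j] += 1
    obs /= obs.sum()
    exp = np.outer(obs.sum(1), obs.sum(0))   # chance-expected matrix
    w = 1 - np.abs(np.subtract.outer(np.arange(n_cat),
                                     np.arange(n_cat))) / (n_cat - 1)
    return ((w * obs).sum() - (w * exp).sum()) / (1 - (w * exp).sum())
```

Values are then interpreted on the Landis and Koch scale (e.g. 0.61-0.80 substantial, above 0.80 almost perfect).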

  6. Color reproduction and processing algorithm based on real-time mapping for endoscopic images.

    Science.gov (United States)

    Khan, Tareq H; Mohammed, Shahed K; Imtiaz, Mohammad S; Wahid, Khan A

    2016-01-01

In this paper, we present a real-time preprocessing algorithm for the enhancement of endoscopic images. A novel dictionary-based color mapping algorithm is used for reproducing the color information from a theme image. The theme image is selected from a nearby anatomical location; a database of color endoscopy images for different locations is prepared for this purpose. The color map is dynamic, as its contents change with the theme image. This method is used on low-contrast grayscale white-light images and raw narrow-band images to highlight the vascular and mucosa structures and to colorize the images. It can also be applied to enhance the tone of color images. The statistical visual representation and universal image quality measures show that the proposed method can highlight the mucosa structure better than other methods. The color similarity has been verified using the Delta E color difference, structure similarity index, mean structure similarity index, and structure and hue similarity. The color enhancement was measured using the color enhancement factor, which shows considerable improvements. The proposed algorithm has low and linear time complexity, which results in higher execution speed than other related works.
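A minimal sketch of a dictionary-style gray-to-color mapping built from a theme image: average the theme's color at each gray level, then fill unseen levels from the nearest populated entry. This lookup-table construction is an illustration of the idea, not the authors' algorithm:

```python
import numpy as np

def build_lut(theme_gray, theme_color):
    """Per-gray-level mean theme color: a tiny gray -> color 'dictionary'
    built from a theme image taken at a nearby anatomical location.
    (Gray levels whose mean color is exactly black are treated as unseen
    in this sketch.)"""
    lut = np.zeros((256, 3))
    for g in range(256):
        sel = theme_gray == g
        if sel.any():
            lut[g] = theme_color[sel].mean(axis=0)
    # fill unseen gray levels from the nearest populated entry
    seen = np.flatnonzero(lut.any(axis=1))
    for g in range(256):
        if not lut[g].any():
            lut[g] = lut[seen[np.abs(seen - g).argmin()]]
    return lut

def colorize(gray, lut):
    """Apply the dictionary to a grayscale (e.g. narrow-band) image."""
    return lut[gray]
```

Swapping the theme image as the endoscope moves is what makes the map "dynamic" in the sense described above.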

  7. Musculoskeletal ultrasound and other imaging modalities in rheumatoid arthritis.

    Science.gov (United States)

    Ohrndorf, Sarah; Werner, Stephanie G; Finzel, Stephanie; Backhaus, Marina

    2013-05-01

This review refers to the use of musculoskeletal ultrasound in patients with rheumatoid arthritis (RA), both in clinical practice and in research. Furthermore, other novel sensitive imaging modalities (high-resolution peripheral quantitative computed tomography and fluorescence optical imaging) are introduced in this article. Recently published ultrasound studies found power Doppler activity on ultrasound to be highly predictive of later radiographic erosions in patients with RA. Another study found synovitis detected by ultrasound to be predictive of subsequent structural radiographic destruction irrespective of the ultrasound modality (grayscale ultrasound/power Doppler ultrasound). Further studies are currently under way that aim to establish ultrasound findings as imaging biomarkers in the destructive process of RA. The other novel imaging modalities introduced here are in the validation process to establish their impact and significance in inflammatory joint diseases. The introduced imaging modalities show different sensitivities and specificities, as well as strengths and weaknesses, regarding the assessment of inflammation, differentiation of the involved structures, and radiological progression. The review tries to answer how best to integrate them into daily clinical practice, with the aim of improving diagnostic algorithms, daily patient care and, furthermore, the disease's outcome.

  8. Influence of physiologic motion on the appearance of tissue in MR images

    International Nuclear Information System (INIS)

    Ehman, R.L.; McNamara, M.T.; Brasch, R.C.; Felmlee, J.P.; Gray, J.E.; Higgins, C.B.

    1986-01-01

    Studies were performed to determine the possible influence of physiologic motion on the parenchymal intensity of organs in magnetic resonance (MR) images. It is known that periodic motion associated with respiration and cardiac function causes characteristic artifacts in spin-warp images. The present study shows that bulk motion can also cause striking intensity changes at velocities equivalent to the craniocaudal respiratory excursion of organs in the upper abdomen. The magnitude of the effect depends on the velocity and direction of motion with respect to the three orthogonal axes of the imager and on the technical details of the imager and pulse sequence. Large systematic errors in calculated tissue relaxation times are possible due to this phenomenon. The findings have important implications for clinical imaging because motion can cause artifactual changes in the gray-scale relationships among tissues. Some pulse sequences are much less sensitive to these effects. These results provide guidance for selecting MR techniques that reduce the detrimental effect of respiratory and other physiologic motion on examinations of the upper abdomen and thorax

  9. VARIETY OF GRAY-SCALE SONOGRAPHIC APPEARANCE OF UNTREATED LIVER METASTASES ALI HADIDI

    Directory of Open Access Journals (Sweden)

    ALI HADIDI

    1982-07-01

Full Text Available Encountered with bizarre patterns of liver metastases that declined our accuracy rate, the humiliation of mistakes motivated me to re-assess the value of hepatic sonography in patients suspected of having metastatic liver neoplasms. 43 patients, who had not received any prior therapy, had been studied by gray-scale ultrasound. The echographic evidence, in accordance with our experience, can be categorized as follows: (I) large echogenic or echo-poor area; (II) discrete masses with high-level echoes spread throughout a lobe of the liver; (III) echo-free mass with irregular margin; (IV) diffuse alteration of the homogeneous echo pattern of the liver; (V) bull's-eye; (VI) abscess-like; (VII) solid echogenic mass with a central hyperechoic horizontal line; (VIII) echogenic mass with two lateral hypoechoic margins; (IX) isodense echogenic area bounded by a hypoechoic circle. The features seen in liver ultrasonography of the entire patterns, and those seen as new criteria, are presented.

  10. Confocal fluorescence microscopy for rapid evaluation of invasive tumor cellularity of inflammatory breast carcinoma core needle biopsies.

    Science.gov (United States)

    Dobbs, Jessica; Krishnamurthy, Savitri; Kyrish, Matthew; Benveniste, Ana Paula; Yang, Wei; Richards-Kortum, Rebecca

    2015-01-01

Tissue sampling is a problematic issue for inflammatory breast carcinoma, and immediate evaluation following core needle biopsy is needed to assess specimen adequacy. We sought to determine whether confocal fluorescence microscopy provides sufficient resolution to evaluate specimen adequacy by comparing invasive tumor cellularity estimated from standard histologic images to that estimated from confocal images of breast core needle biopsy specimens. Grayscale confocal fluorescence images of breast core needle biopsy specimens were acquired following proflavine application. A breast-dedicated pathologist evaluated invasive tumor cellularity in histologic images with hematoxylin and eosin staining and in grayscale and false-colored confocal images of cores. Agreement between cellularity estimates was quantified using a kappa coefficient. 23 cores from 23 patients with suspected inflammatory breast carcinoma were imaged. Confocal images were acquired in an average of less than 2 min per core. Invasive tumor cellularity estimated from histologic and grayscale confocal images showed moderate agreement by kappa coefficient (κ = 0.48 ± 0.09). Confocal images require less than 2 min for acquisition and allow evaluation of invasive tumor cellularity in breast core needle biopsy specimens with moderate agreement with histologic images. We show that confocal fluorescence microscopy can be performed immediately following specimen acquisition and could indicate the need for additional biopsies at the initial visit.

  11. Learning High-Order Filters for Efficient Blind Deconvolution of Document Photographs

    KAUST Repository

    Xiao, Lei

    2016-09-16

    Photographs of text documents taken by hand-held cameras can be easily degraded by camera motion during exposure. In this paper, we propose a new method for blind deconvolution of document images. Observing that document images are usually dominated by small-scale high-order structures, we propose to learn a multi-scale, interleaved cascade of shrinkage fields model, which contains a series of high-order filters to facilitate joint recovery of blur kernel and latent image. With extensive experiments, we show that our method produces high quality results and is highly efficient at the same time, making it a practical choice for deblurring high resolution text images captured by modern mobile devices. © Springer International Publishing AG 2016.

  12. Analysis of image acquisition, post-processing and documentation in adolescents with spine injuries. Comparison before and after referral to a university hospital

    International Nuclear Information System (INIS)

    Lemburg, S.P.; Roggenland, D.; Nicolas, V.; Heyer, C.M.

    2012-01-01

    Purpose: Systematic evaluation of imaging situation and standards in acute spinal injuries of adolescents. Materials and Methods: Retrospective analysis of imaging studies of transferred adolescents with spinal injuries and survey of transferring hospitals (TH) with respect to the availability of modalities and radiological expertise and post-processing and documentation of CT studies were performed. Repetitions of imaging studies and cumulative effective dose (CED) were noted. Results: 33 of 43 patients (77 %) treated in our hospital (MA 17.2 years, 52 % male) and 25 of 32 TH (78 %) were evaluated. 24-hr availability of conventional radiography and CT was present in 96 % and 92 % of TH, whereas MRI was available in only 36 %. In 64 % of TH, imaging expertise was guaranteed by an on-staff radiologist. During off-hours radiological service was provided on an on-call basis in 56 % of TH. Neuroradiologic and pediatric radiology expertise was not available in 44 % and 60 % of TH, respectively. CT imaging including post-processing and documentation matched our standards in 36 % and 32 % of cases. The repetition rate of CT studies was 39 % (CED 116.08 mSv). Conclusion: With frequent CT repetitions, two-thirds of re-examined patients revealed a different clinical estimation of trauma severity and insufficient CT quality as possible causes for re-examination. A standardization of initial clinical evaluation and CT imaging could possibly reduce the need for repeat examinations. (orig.)

  13. Astronomical Image Compression Techniques Based on ACC and KLT Coder

    Directory of Open Access Journals (Sweden)

    J. Schindler

    2011-01-01

    This paper deals with the compression of image data in astronomy applications. Astronomical images have specific properties: high grayscale bit depth, large size, noise occurrence, and special processing algorithms. They belong to the class of scientific images, whose processing and compression differ considerably from the classical approach of multimedia image processing. The database of images from BOOTES (Burst Observer and Optical Transient Exploring System) has been chosen as a source of the testing signal. BOOTES is a Czech-Spanish robotic telescope for observing AGN (active galactic nuclei) and searching for the optical transients of GRBs (gamma-ray bursts). This paper discusses an approach based on an analysis of the statistical properties of image data. A comparison of two irrelevancy reduction methods is presented from a scientific (astrometric and photometric) point of view. The first method is based on a statistical approach, using the Karhunen-Loeve transform (KLT) with uniform quantization in the spectral domain. The second technique is derived from wavelet decomposition with adaptive selection of the prediction coefficients used. Finally, a comparison of three redundancy reduction methods is discussed: the multimedia format JPEG2000 and HCOMPRESS, a coder designed especially for astronomical images, are compared with the new Astronomical Context Coder (ACC), based on adaptive median regression.

  14. Electronic document management systems: an overview.

    Science.gov (United States)

    Kohn, Deborah

    2002-08-01

    For over a decade, most health care information technology (IT) professionals erroneously learned that document imaging, which is one of the many component technologies of an electronic document management system (EDMS), is the only technology of an EDMS. In addition, many health care IT professionals erroneously believed that EDMSs have either a limited role or no place in IT environments. As a result, most health care IT professionals do not understand documents and unstructured data and their value as structured data partners in most aspects of transaction and information processing systems.

  15. Ultrasound analysis of gray-scale median value of carotid plaques is a useful reference index for cerebro-cardiovascular events in patients with type 2 diabetes.

    Science.gov (United States)

    Ariyoshi, Kyoko; Okuya, Shigeru; Kunitsugu, Ichiro; Matsunaga, Kimie; Nagao, Yuko; Nomiyama, Ryuta; Takeda, Komei; Tanizawa, Yukio

    2015-01-01

    Measurements of plaque echogenicity, the gray-scale median (GSM), were shown to correlate inversely with risk factors for cerebro-cardiovascular disease (CVD). The eicosapentaenoic acid (EPA)/arachidonic acid (AA) ratio is a potential predictor of CVD risk. In the present study, we assessed the usefulness of carotid plaque GSM values and EPA/AA ratios in atherosclerotic diabetics. A total of 84 type 2 diabetics with carotid artery plaques were enrolled. On admission, platelet aggregation and lipid profiles, including EPA and AA, were examined. Using ultrasound, mean intima media thickness and plaque score were measured in carotid arteries. Plaque echogenicity was evaluated using computer-assisted quantification of GSM. The patients were then further observed for approximately 3 years. Gray-scale median was found to be a good marker of CVD events. On multivariate logistic regression analysis, GSM <32 and plaque score ≥5 were significantly associated with past history and onset of CVD during the follow-up period, the odds ratios being 7.730 (P = 0.014) and 4.601 (P = 0.046), respectively. EPA/AA showed a significant correlation with GSM (P = 0.012) and high-density lipoprotein cholesterol (P = 0.039), and an inverse correlation with platelet aggregation (P = 0.046) and triglyceride (P = 0.020). Although most patients with CVD had both low GSM and low EPA/AA values, an association of EPA/AA with CVD events could not be statistically confirmed. The present results suggest the GSM value to be useful as a reference index for CVD events in high-risk atherosclerotic diabetics. Associations of the EPA/AA ratio with known CVD risk factors warrant a larger and more extensive study to show the usefulness of this parameter.

  16. Electronic Document Imaging and Optical Storage Systems for Local Governments: An Introduction. Local Government Records Technical Information Series. Number 21.

    Science.gov (United States)

    Schwartz, Stanley F.

    This publication introduces electronic document imaging systems and provides guidance for local governments in New York in deciding whether such systems should be adopted for their own records and information management purposes. It advises local governments on how to develop plans for using such technology by discussing its advantages and…

  17. Wiener discrete cosine transform-based image filtering

    Science.gov (United States)

    Pogrebnyak, Oleksiy; Lukin, Vladimir V.

    2012-10-01

    A classical problem of additive white (spatially uncorrelated) Gaussian noise suppression in grayscale images is considered. The main attention is paid to discrete cosine transform (DCT)-based denoising, in particular, to image processing in blocks of a limited size. The efficiency of DCT-based image filtering with hard thresholding is studied for different sizes of overlapped blocks. A multiscale approach that aggregates the outputs of DCT filters having different overlapped block sizes is proposed. Later, a two-stage denoising procedure that presumes the use of the multiscale DCT-based filtering with hard thresholding at the first stage and a multiscale Wiener DCT-based filtering at the second stage is proposed and tested. The efficiency of the proposed multiscale DCT-based filtering is compared to the state-of-the-art block-matching and three-dimensional filter. Next, the potentially reachable multiscale filtering efficiency in terms of output mean square error (MSE) is studied. The obtained results are of the same order as those obtained by Chatterjee's approach based on nonlocal patch processing. It is shown that the ideal Wiener DCT-based filter potential is usually higher when noise variance is high.
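
The core of the block-DCT filtering described above can be sketched as follows. This is a minimal illustration for a single block size, not the authors' multiscale implementation; the block size, step, and threshold are arbitrary assumptions of this sketch:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_denoise(img, block=8, step=4, thr=30.0):
    """Denoise a grayscale image with overlapped-block DCT hard thresholding.

    Each block (placed `step` pixels apart) is transformed with a 2-D DCT,
    coefficients whose magnitude falls below `thr` are zeroed (the DC term
    is kept), and the inverse transforms are averaged over all overlapping
    positions.
    """
    img = img.astype(np.float64)
    h, w = img.shape
    acc = np.zeros_like(img)
    cnt = np.zeros_like(img)
    for i in range(0, h - block + 1, step):
        for j in range(0, w - block + 1, step):
            patch = img[i:i + block, j:j + block]
            coef = dctn(patch, norm='ortho')
            mask = np.abs(coef) >= thr
            mask[0, 0] = True            # always keep the DC coefficient
            rec = idctn(coef * mask, norm='ortho')
            acc[i:i + block, j:j + block] += rec
            cnt[i:i + block, j:j + block] += 1
    cnt[cnt == 0] = 1
    return acc / cnt
```

Aggregating several such filters with different block sizes, as the abstract proposes, would amount to averaging their outputs.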

  18. Using color management in color document processing

    Science.gov (United States)

    Nehab, Smadar

    1995-04-01

    Color Management Systems have been used for several years in Desktop Publishing (DTP) environments. While this development has not yet matured, we are already experiencing the next generation of the color imaging revolution: Device Independent Color for the small office/home office (SOHO) environment. Though there are still open technical issues with device-independent color matching, they are not the focal point of this paper. This paper discusses two new and crucial aspects of using color management in color document processing: the management of color objects and their associated color rendering methods, and a proposal for a precedence order and handshaking protocol among the various software components involved in color document processing. As color peripherals become affordable to the SOHO market, color management also becomes a prerequisite for common document authoring applications such as word processors. The first color management solutions were oriented towards DTP environments, whose requirements were largely different. For example, DTP documents are image-centric, as opposed to SOHO documents, which are text- and chart-centric. To achieve optimal reproduction on low-cost SOHO peripherals, it is critical that different color rendering methods be used for the different document object types. The first challenge in using color management for color document processing is therefore the association of rendering methods with object types. As a result of an evolutionary process, color matching solutions are now available as application software, as driver-embedded software, and as operating system extensions. Consequently, document processing faces a second challenge: the correct selection of the color matching solution while avoiding duplicate color corrections.

  19. Automated correlation and classification of secondary ion mass spectrometry images using a k-means cluster method.

    Science.gov (United States)

    Konicek, Andrew R; Lefman, Jonathan; Szakal, Christopher

    2012-08-07

    We present a novel method for correlating and classifying ion-specific time-of-flight secondary ion mass spectrometry (ToF-SIMS) images within a multispectral dataset by grouping images with similar pixel intensity distributions. Binary centroid images are created by employing a k-means-based custom algorithm. Centroid images are compared to grayscale SIMS images using a newly developed correlation method that assigns the SIMS images to classes that have similar spatial (rather than spectral) patterns. Image features of both large and small spatial extent are identified without the need for image pre-processing, such as normalization or fixed-range mass-binning. A subsequent classification step tracks the class assignment of SIMS images over multiple iterations of increasing n classes per iteration, providing information about groups of images that have similar chemistry. Details are discussed while presenting data acquired with ToF-SIMS on a model sample of laser-printed inks. This approach can lead to the identification of distinct ion-specific chemistries for mass spectral imaging by ToF-SIMS, as well as matrix-assisted laser desorption ionization (MALDI), and desorption electrospray ionization (DESI).
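
As a rough illustration of the clustering step, the sketch below partitions the pixel intensities of a single grayscale image into k classes with a plain 1-D k-means and returns one binary "centroid" mask per class. The quantile initialization and the function names are assumptions of this sketch, not the authors' custom algorithm:

```python
import numpy as np

def kmeans_1d(values, k, iters=50):
    """Plain 1-D k-means on a flat array of pixel intensities.
    Quantile initialization keeps the sketch deterministic."""
    centers = np.quantile(values, np.linspace(0.0, 1.0, k)).astype(float)
    labels = np.zeros(values.shape[0], dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            members = values[labels == c]
            if members.size:
                centers[c] = members.mean()
    return labels, centers

def centroid_masks(image, k=3):
    """One binary 'centroid image' (mask) per intensity cluster,
    ordered from darkest to brightest class."""
    flat = image.astype(float).ravel()
    labels, centers = kmeans_1d(flat, k)
    label_img = labels.reshape(image.shape)
    return [label_img == c for c in np.argsort(centers)]
```

In the paper's pipeline, masks like these would then be correlated against the grayscale SIMS images to assign each image to a spatial class.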

  20. Documenting Bronze Age Akrotiri on Thera Using Laser Scanning, Image-Based Modelling and Geophysical Prospection

    Science.gov (United States)

    Trinks, I.; Wallner, M.; Kucera, M.; Verhoeven, G.; Torrejón Valdelomar, J.; Löcker, K.; Nau, E.; Sevara, C.; Aldrian, L.; Neubauer, E.; Klein, M.

    2017-02-01

    The excavated architecture of the exceptional prehistoric site of Akrotiri on the Greek island of Thera/Santorini is endangered by gradual decay, damage due to accidents, and seismic shocks, being located on an active volcano in an earthquake-prone area. Therefore, in 2013 and 2014 a digital documentation project has been conducted with support of the National Geographic Society in order to generate a detailed digital model of Akrotiri's architecture using terrestrial laser scanning and image-based modeling. Additionally, non-invasive geophysical prospection has been tested in order to investigate its potential to explore and map yet buried archaeological remains. This article describes the project and the generated results.

  1. Cellular automata rule characterization and classification using texture descriptors

    Science.gov (United States)

    Machicao, Jeaneth; Ribas, Lucas C.; Scabini, Leonardo F. S.; Bruno, Odermir M.

    2018-05-01

    Cellular automata (CA) spatio-temporal patterns have attracted the attention of many researchers, since they can exhibit emergent behavior resulting from the dynamics of each individual cell. In this manuscript, we propose a texture image analysis approach to characterize and classify CA rules. The proposed method converts the CA spatio-temporal pattern into a gray-scale image, obtained by creating a binary number from the 8-connected neighborhood of each cell of the pattern. We demonstrate that this technique enhances CA rule characterization and allows different texture image analysis algorithms to be used. Various texture descriptors were then evaluated in a supervised training approach aiming to characterize the CA's global evolution. Our results show the efficiency of the proposed method for the classification of elementary CA (ECAs), reaching a maximum accuracy of 99.57% according to the Li-Packard scheme (6 classes) and 94.36% for the classification of the 88-rule scheme. Moreover, within the image analysis context, we found that the method performs better by means of this transformation of the binary states to a gray-scale.
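
The encoding idea, evolving an elementary CA and turning each cell's 8-connected neighborhood into a byte, can be sketched as follows. This is a simplified illustration with periodic boundaries and a single-seed start; the function names are hypothetical:

```python
import numpy as np

def evolve_eca(rule, width=64, steps=64):
    """Evolve an elementary cellular automaton (periodic boundaries,
    single-seed start) into a binary spatio-temporal pattern (rows = time)."""
    table = [(rule >> i) & 1 for i in range(8)]
    row = np.zeros(width, dtype=np.uint8)
    row[width // 2] = 1
    pattern = [row]
    for _ in range(steps - 1):
        left, right = np.roll(row, 1), np.roll(row, -1)
        row = np.array([table[(l << 2) | (c << 1) | r]
                        for l, c, r in zip(left, row, right)], dtype=np.uint8)
        pattern.append(row)
    return np.array(pattern)

def neighborhood_grayscale(binary):
    """Encode each cell's 8-connected neighborhood as one byte (0-255),
    turning the binary pattern into a gray-scale texture image."""
    h, w = binary.shape
    p = np.pad(binary, 1)                      # zero border
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    gray = np.zeros((h, w), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        gray |= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w] << np.uint8(bit)
    return gray
```

Texture descriptors would then be computed on the resulting gray-scale image rather than on the raw binary pattern.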

  2. A new clustering algorithm for scanning electron microscope images

    Science.gov (United States)

    Yousef, Amr; Duraisamy, Prakash; Karim, Mohammad

    2016-04-01

    A scanning electron microscope (SEM) is a type of electron microscope that produces images of a sample by scanning it with a focused beam of electrons. The electrons interact with the sample atoms, producing various signals that are collected by detectors. The gathered signals contain information about the sample's surface topography and composition. The electron beam is generally scanned in a raster pattern, and the beam's position is combined with the detected signal to produce an image. The most common SEM configuration produces a single value per pixel, with the results usually rendered as grayscale images. The captured images may suffer from insufficient brightness, anomalous contrast, jagged edges, and poor quality due to low signal-to-noise ratio, grained topography, and poor surface detail. Segmenting SEM images is a challenging problem in the presence of these distortions. In this paper, we focus on the clustering of this type of image. We evaluate the performance of well-known unsupervised clustering and classification techniques such as connectivity-based clustering (hierarchical clustering), centroid-based clustering, distribution-based clustering, and density-based clustering. Furthermore, we propose a new spatial fuzzy clustering technique that works efficiently on this type of image and compare its results against these standard techniques in terms of clustering validation metrics.

  3. Automatic recognition of damaged town buildings caused by earthquake using remote sensing information: Taking the 2001 Bhuj, India, earthquake and the 1976 Tangshan, China, earthquake as examples

    Science.gov (United States)

    Liu, Jia-Hang; Shan, Xin-Jian; Yin, Jing-Yuan

    2004-11-01

    In high-resolution images, undamaged buildings generally show a natural textural appearance, while damaged or semi-damaged buildings exhibit low-grayscale blocks because of their coarsely damaged sections. If a proper threshold is used to classify the gray levels of the image, independent holes appear in the damaged regions. Using statistical information such as the number of holes in each region, or the ratio between the area of the holes and that of the region, damaged buildings can be separated from undamaged ones, and automatic detection of damaged buildings can thus be realized. Based on these characteristics, a new method to automatically detect damaged buildings using regional structure and statistical texture information is presented in this paper. To test its validity, a 1-m-resolution IKONOS merged image of the 2001 Bhuj earthquake and grayscale aerial photos of the 1976 Tangshan earthquake are selected as two examples for automatic detection of damaged buildings. Satisfactory results are obtained.
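
A minimal sketch of the hole-statistics idea, assuming a precomputed building-region mask and using connected-component labeling to count the holes. The threshold and decision limits are arbitrary placeholders, not the paper's values:

```python
import numpy as np
from scipy import ndimage

def hole_statistics(gray, region_mask, thresh=80):
    """Number of low-grayscale 'holes' inside a building region and the
    ratio of hole area to region area -- the two cues described above."""
    holes = (gray < thresh) & region_mask
    _, n_holes = ndimage.label(holes)          # 4-connected components
    hole_ratio = holes.sum() / max(region_mask.sum(), 1)
    return n_holes, hole_ratio

def is_damaged(gray, region_mask, thresh=80, min_holes=3, min_ratio=0.10):
    """Flag a building region as damaged if it contains enough holes or a
    large enough hole-area fraction (placeholder limits)."""
    n, r = hole_statistics(gray, region_mask, thresh)
    return n >= min_holes or r >= min_ratio
```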

  4. Script-independent text line segmentation in freestyle handwritten documents.

    Science.gov (United States)

    Li, Yi; Zheng, Yefeng; Doermann, David; Jaeger, Stefan

    2008-08-01

    Text line segmentation in freestyle handwritten documents remains an open document analysis problem. Curvilinear text lines and small gaps between neighboring text lines present a challenge to algorithms developed for machine printed or hand-printed documents. In this paper, we propose a novel approach based on density estimation and a state-of-the-art image segmentation technique, the level set method. From an input document image, we estimate a probability map, where each element represents the probability that the underlying pixel belongs to a text line. The level set method is then exploited to determine the boundary of neighboring text lines by evolving an initial estimate. Unlike connected component based methods ( [1], [2] for example), the proposed algorithm does not use any script-specific knowledge. Extensive quantitative experiments on freestyle handwritten documents with diverse scripts, such as Arabic, Chinese, Korean, and Hindi, demonstrate that our algorithm consistently outperforms previous methods [1]-[3]. Further experiments show the proposed algorithm is robust to scale change, rotation, and noise.

  5. Image Transmission through OFDM System under the Influence of AWGN Channel

    Science.gov (United States)

    Krishna, Dharavathu; Anuradha, M. S., Dr.

    2017-08-01

    OFDM is one of the modern techniques most widely used in next-generation wireless communication networks, as it transmits many forms of digital data more efficiently than existing traditional techniques. In this paper, one such form of digital data, a two-dimensional (2D) gray-scale image, is used to evaluate the functionality and overall performance of an OFDM system under the influence of a modeled AWGN channel in a MATLAB simulation environment. Within the OFDM system, different configurations of notable modulation techniques such as M-PSK and M-QAM are considered for evaluation, and conclusions are drawn from the comparison of the observed MATLAB simulation results.
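
The underlying link-level experiment, image bits mapped to modulation symbols and passed through AWGN, can be illustrated outside MATLAB with a plain QPSK baseband sketch. This omits the OFDM multicarrier stage and uses hypothetical parameter choices:

```python
import numpy as np

def qpsk_awgn_ber(bits, snr_db, seed=1):
    """Map a bit stream to Gray-coded QPSK symbols, pass them through an
    AWGN channel at the given symbol SNR (dB), hard-demodulate, and
    return the bit error rate."""
    bits = np.asarray(bits).reshape(-1, 2)
    # one bit on each quadrature axis -> unit average symbol energy
    sym = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)
    sigma = np.sqrt(0.5 * 10 ** (-snr_db / 10))   # noise std per real dimension
    rng = np.random.default_rng(seed)
    noise = sigma * (rng.standard_normal(sym.shape)
                     + 1j * rng.standard_normal(sym.shape))
    rx = sym + noise
    est = np.stack([rx.real < 0, rx.imag < 0], axis=1).astype(int)
    return float(np.mean(est != bits))
```

For a grayscale image, `np.unpackbits` on the uint8 pixel array yields the bit stream to transmit, and `np.packbits` on the demodulated bits reconstructs the received image.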

  6. An Implementation of Document Image Reconstruction System on a Smart Device Using a 1D Histogram Calibration Algorithm

    Directory of Open Access Journals (Sweden)

    Lifeng Zhang

    2014-01-01

    In recent years, smart devices equipped with imaging functions have spread widely among consumers. It is very convenient for people to record information using these devices: for example, they can photograph one page of a book in a library, or capture an interesting piece of news on a bulletin board while walking down the street. Sometimes, however, a single shot of the full area cannot provide sufficient resolution for OCR software or for human visual recognition. Therefore, people would prefer to take several partial character images of readable size and then stitch them together in an efficient way. In this study, we propose a print document acquisition method using a device with a video camera. A one-dimensional histogram-based self-calibration algorithm is developed for calibration. Because its computational cost is low, it can be installed on a smartphone. The simulation results show that the calibration and stitching are performed well.

  7. HIDING TEXT IN DIGITAL IMAGES USING PERMUTATION ORDERING AND COMPACT KEY BASED DICTIONARY

    Directory of Open Access Journals (Sweden)

    Nagalinga Rajan

    2017-05-01

    Digital image steganography is an emerging technique for secure communication in the modern connected world. It protects the content of the message without arousing suspicion in a passive observer. A novel steganography method is presented to hide text in digital images. A compact dictionary is designed to efficiently communicate all types of secret messages. The sorting order of pixels in image blocks is chosen as the carrier of the embedded information. Because of the high correlation between image pixel values, reordering within image blocks does not cause high distortion. The image is divided into blocks and perturbed to create non-repeating sequences of intensity values. These values are then sorted according to the message. At the receiver end, the message is read from the sorting order of the pixels in the image blocks. Only those image blocks with a standard deviation less than a given threshold are chosen for embedding, to alleviate visual distortion. Information security is provided by shuffling the dictionary according to a shared key. Experimental results and analysis show that the method is capable of hiding text of more than 4000 words in a 512×512 grayscale image with a peak signal-to-noise ratio above 40 decibels.
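
The carrier mechanism, encoding data in the sorting order of a block's pixels, can be sketched with a factorial-number-system (Lehmer code) permutation rank: a block of n distinct values can carry any integer below n!. This is a simplified illustration assuming distinct pixel values and omitting the paper's dictionary and key shuffling:

```python
from math import factorial

def embed_in_block(values, message):
    """Reorder a block of *distinct* pixel values so that their permutation
    encodes `message` (0 <= message < n!), via the factorial number system."""
    pool = sorted(values)
    n = len(pool)
    assert 0 <= message < factorial(n)
    out = []
    for i in range(n, 0, -1):
        digit, message = divmod(message, factorial(i - 1))
        out.append(pool.pop(digit))    # pick the digit-th smallest remaining
    return out

def extract_from_block(values):
    """Recover the embedded integer from the observed pixel ordering."""
    pool = sorted(values)
    message = 0
    for i, v in enumerate(values):
        idx = pool.index(v)
        message += idx * factorial(len(values) - 1 - i)
        pool.pop(idx)
    return message
```

A 5-pixel block, for instance, offers 5! = 120 distinguishable orderings, i.e. about 6.9 bits of payload.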

  8. "Cyt/Nuc," a Customizable and Documenting ImageJ Macro for Evaluation of Protein Distributions Between Cytosol and Nucleus.

    Science.gov (United States)

    Grune, Tilman; Kehm, Richard; Höhn, Annika; Jung, Tobias

    2018-05-01

    Large amounts of data from multi-channel, high resolution, fluorescence microscopic images require tools that provide easy, customizable, and reproducible high-throughput analysis. The freeware "ImageJ" has become one of the standard tools for scientific image analysis. Since ImageJ offers recording of "macros," even a complex multi-step process can be easily applied fully automated to large numbers of images, saving both time and reducing human subjective evaluation. In this work, we present "Cyt/Nuc," an ImageJ macro, able to recognize and to compare the nuclear and cytosolic areas of tissue samples, in order to investigate distributions of immunostained proteins between both compartments, while it documents in detail the whole process of evaluation and pattern recognition. As practical example, the redistribution of the 20S proteasome, the main intracellular protease in mammalian cells, is investigated in NZO-mouse liver after feeding the animals different diets. A significant shift in proteasomal distribution between cytosol and nucleus in response to metabolic stress was revealed using "Cyt/Nuc" via automatized quantification of thousands of nuclei within minutes. "Cyt/Nuc" is easy to use and highly customizable, matches the precision of careful manual evaluation and bears the potential for quick detection of any shift in intracellular protein distribution. © 2018 The Authors. Biotechnology Journal Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  9. SU-F-J-84: Comparison of Quantitative Deformable Image Registration Evaluation Tools: Application to Prostate IGART

    Energy Technology Data Exchange (ETDEWEB)

    Dogan, N [University of Miami, Miami, FL (United States); Weiss, E [Virginia Commonwealth University, Richmond, Virginia (United States); Sleeman, W; Williamson, J [Virginia Commonwealth University, Richmond, VA (United States); Christensen, G [University of Iowa, Iowa City, IA (United States); Ford, J [University of Miami Miller School of Medicine, Miami, FL (United States)

    2016-06-15

    Purpose: Errors in displacement vector fields (DVFs) generated by Deformable Image Registration (DIR) algorithms can give rise to significant uncertainties in contour propagation and dose accumulation in Image-Guided Adaptive Radiotherapy (IGART). The purpose of this work is to assess the accuracy of two DIR algorithms using a variety of quality metrics for prostate IGART. Methods: Pelvic CT images were selected from an anonymized database of nineteen prostate patients who underwent 8–12 serial scans during radiotherapy. Prostate, bladder, and rectum were contoured on 34 image-sets for three patients by the same physician. The planning CT was deformably-registered to daily CT using three variants of the Small deformation Inverse Consistent Linear Elastic (SICLE) algorithm: Grayscale-driven (G), Contour-driven (C, which utilizes segmented structures to drive DIR), combined (G+C); and also grayscale ITK demons (Gd). The accuracy of G, C, G+C SICLE and Gd registrations were evaluated using a new metric Edge Gradient Distance to Agreement (EGDTA) and other commonly-used metrics such as Pearson Correlation Coefficient (PCC), Dice Similarity Index (DSI) and Hausdorff Distance (HD). Results: C and G+C demonstrated much better performance at organ boundaries, revealing the lowest HD and highest DSI, in prostate, bladder and rectum. G+C demonstrated the lowest mean EGDTA (1.14 mm), which corresponds to highest registration quality, compared to G and C DVFs (1.16 and 2.34 mm). However, demons DIR showed the best overall performance, revealing lowest EGDTA (0.73 mm) and highest PCC (0.85). Conclusion: As expected, both C- and C+G SICLE more accurately reproduce manually-contoured target datasets than G-SICLE or Gd using HD and DSI metrics. In general, the Gd appears to have difficulty reproducing large daily position and shape changes in the rectum and bladder. 
However, Gd outperforms SICLE in terms of EGDTA and PCC metrics, possibly at the expense of topological quality of

  10. Binarization and Segmentation Framework for Sundanese Ancient Documents

    Directory of Open Access Journals (Sweden)

    Erick Paulus

    2017-11-01

    Binarization and segmentation are the first two important steps in an optical character recognition system. For ancient document images written by hand, binarization remains a major challenge. In general, this is because the image quality is badly degraded and the non-text area contains various kinds of noise. After binarization, line-based segmentation is conducted to separate each text line from the others. We propose a novel binarization and segmentation framework that enhances the performance of the Niblack binarization method and implements a minimum-energy function to find the path of the separator line between two text lines. For the experiments, we use 22 images from the Sundanese ancient documents Kropak 18 and Kropak 22. The evaluation metrics show that our proposed binarization improves the F-measure over the original Niblack method by 20% for Kropak 22 and 50% for Kropak 18. We then examine the influence of various input images, both true-color and binary, on text-line segmentation. In the line segmentation process, the binarized images from our proposed framework yield the same number of text lines as the number of target lines. Overall, our proposed framework produces promising results, so it can be used to provide input images for the subsequent OCR process.
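
For reference, the baseline Niblack method that such frameworks build on computes a local threshold T = m + k·s from the mean and standard deviation in a sliding window. A minimal sketch, where the window size and k are conventional defaults rather than any paper's tuned values:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def niblack_binarize(gray, window=25, k=-0.2):
    """Classic Niblack local thresholding: T = m + k*s over a sliding
    window; pixels darker than T are marked as text (foreground)."""
    g = gray.astype(np.float64)
    mean = uniform_filter(g, window)
    sq_mean = uniform_filter(g * g, window)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0))  # clamp rounding error
    threshold = mean + k * std
    return g < threshold     # True = text pixel
```

Niblack's weakness, noise amplification in blank regions where the local standard deviation is near zero, is precisely what enhanced variants of the method try to correct.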

  11. Modeling Documents with Event Model

    Directory of Open Access Journals (Sweden)

    Longhui Wang

    2015-08-01

    Deep learning has recently made great breakthroughs in visual and speech processing, mainly because it draws lessons from the hierarchical way the brain deals with images and speech. In the field of NLP, topic models are one of the important ways of modeling documents, but they are built on a generative model that clearly does not match the way humans write. In this paper, we propose the Event Model, an unsupervised model based on the language processing mechanisms of neurolinguistics, to model documents. In the Event Model, documents are descriptions of concrete or abstract events seen, heard, or sensed by people, and words are objects in those events. The Event Model has two stages: word learning and dimensionality reduction. Word learning learns the semantics of words using deep learning. Dimensionality reduction represents a document as a low-dimensional vector through a linear model that is completely different from topic models. The Event Model achieves state-of-the-art results on document retrieval tasks.

  12. Generalized image contrast enhancement technique based on Heinemann contrast discrimination model

    Science.gov (United States)

    Liu, Hong; Nodine, Calvin F.

    1994-03-01

    This paper presents a generalized image contrast enhancement technique which equalizes perceived brightness based on the Heinemann contrast discrimination model. This is a modified algorithm which presents an improvement over the previous study by Mokrane in its mathematically proven existence of a unique solution and in its easily tunable parameterization. The model uses a log-log representation of contrast luminosity between targets and the surround in a fixed luminosity background setting. The algorithm consists of two nonlinear gray-scale mapping functions which have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of gray scale distribution of the image, and can be uniquely determined once the previous three are given. Tests have been carried out to examine the effectiveness of the algorithm for increasing the overall contrast of images. It can be demonstrated that the generalized algorithm provides better contrast enhancement than histogram equalization. In fact, the histogram equalization technique is a special case of the proposed mapping.
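
For comparison, the histogram equalization baseline mentioned above can be sketched as a single lookup table built from the image's cumulative gray-level distribution:

```python
import numpy as np

def histogram_equalize(gray):
    """Classic histogram equalization for an 8-bit image: map each gray
    level through the normalized cumulative distribution."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]            # count of the darkest occupied level
    if cdf[-1] == cdf_min:               # flat image: nothing to equalize
        return gray.copy()
    lut = np.clip(cdf - cdf_min, 0, None) / (cdf[-1] - cdf_min) * 255
    return np.round(lut).astype(np.uint8)[gray]
```

The generalized mapping described in the abstract replaces this purely count-based lookup table with two nonlinear functions derived from the Heinemann contrast discrimination model.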

  13. Natural-pose hand detection in low-resolution images

    Directory of Open Access Journals (Sweden)

    Nyan Bo Bo

    2009-07-01

    Robust real-time hand detection and tracking in video sequences would enable many applications in areas as diverse as human-computer interaction, robotics, security and surveillance, and sign language-based systems. In this paper, we introduce a new approach for detecting human hands that works on single, cluttered, low-resolution images. Our prototype system, which is primarily intended for security applications in which the images are noisy and low-resolution, is able to detect hands as small as 24×24 pixels in cluttered scenes. The system uses grayscale appearance information to classify image sub-windows as either containing or not containing a human hand very rapidly, at the cost of a high false positive rate. To improve on the false positive rate of the main classifier without affecting its detection rate, we introduce a post-processor system that utilizes the geometric properties of skin color blobs. When we test our detector on a test image set containing 106 hands, 92 of those hands are detected (an 86.8% detection rate), with an average false positive rate of 1.19 false positive detections per image. The rapid detection speed, the high detection rate of 86.8%, and the low false positive rate together ensure that our system is usable as the main detector in a diverse variety of applications requiring robust hand detection and tracking in low-resolution, cluttered scenes.

  14. Research on hyperspectral dynamic scene and image sequence simulation

    Science.gov (United States)

    Sun, Dandan; Liu, Fang; Gao, Jiaobo; Sun, Kefeng; Hu, Yu; Li, Yu; Xie, Junhu; Zhang, Lei

    2016-10-01

    This paper presents a simulation method for hyperspectral dynamic scenes and image sequences, intended for hyperspectral equipment evaluation and target detection algorithms. Because of its high spectral resolution, strong band continuity, anti-interference capability, and other advantages, hyperspectral imaging technology has developed rapidly in recent years and is widely used in many areas such as optoelectronic target detection, military defense, and remote sensing systems. Digital imaging simulation, as a crucial part of hardware-in-the-loop simulation, can be applied to testing and evaluating hyperspectral imaging equipment with lower development cost and a shorter development period. Meanwhile, visual simulation can produce a large amount of original image data under various conditions for hyperspectral image feature extraction and classification algorithms. Based on a radiation physics model and material characteristic parameters, this paper proposes a generation method for digital scenes. By building multiple sensor models for different bands and bandwidths, hyperspectral scenes in the visible, MWIR, and LWIR bands, with spectral resolutions of 0.01 μm, 0.05 μm, and 0.1 μm, have been simulated. The final dynamic scenes are highly realistic and run in real time, with frame rates up to 100 Hz. By saving all the scene gray data from the same viewpoint, an image sequence is obtained. The analysis results show that, whether in the infrared or the visible band, the grayscale variations of the simulated hyperspectral images are consistent with the theoretical analysis.

  15. The role of records management professionals in optical disk-based document imaging systems in the petroleum industry

    International Nuclear Information System (INIS)

    Cisco, S.L.

    1992-01-01

    Analyses of the data indicated that nearly one third of the 83 companies in this study had implemented one or more document imaging systems. Companies with imaging systems mostly were large (more than 1,001 employees), and mostly were international in scope. Although records management professionals traditionally were delegated responsibility for acquiring, designing, implementing, and maintaining paper-based information systems and the records therein, when records were converted to optical disks, responsibility for acquiring, designing, implementing, and maintaining optical disk-based information systems and the records therein was delegated more frequently to end-user departments and IS/MIS/DP professionals than to records professionals. Records management professionals assert that an organization's need for a comprehensive records management program is not served best when individuals who are not professional records managers are responsible for the records stored in optical disk-based information systems.

  16. Deferred slanted-edge analysis: a unified approach to spatial frequency response measurement on distorted images and color filter array subsets.

    Science.gov (United States)

    van den Bergh, F

    2018-03-01

    The slanted-edge method of spatial frequency response (SFR) measurement is usually applied to grayscale images under the assumption that any distortion of the expected straight edge is negligible. By decoupling the edge orientation and position estimation step from the edge spread function construction step, it is shown in this paper that the slanted-edge method can be extended to allow it to be applied to images suffering from significant geometric distortion, such as produced by equiangular fisheye lenses. This same decoupling also allows the slanted-edge method to be applied directly to Bayer-mosaicked images so that the SFR of the color filter array subsets can be measured directly without the unwanted influence of demosaicking artifacts. Numerical simulation results are presented to demonstrate the efficacy of the proposed deferred slanted-edge method in relation to existing methods.
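    As an illustration of the pipeline the abstract describes, the sketch below runs a minimal slanted-edge SFR measurement in numpy, with the two steps decoupled as in the paper: edge geometry first (here simply known for a synthetic edge), then projection of pixels onto the edge normal to build an oversampled edge spread function (ESF), differentiation to the line spread function (LSF), and a Fourier transform. The synthetic logistic edge, the 4x oversampling factor, and the Hanning window are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    # Synthetic slanted edge, ~5 degrees from vertical, with a smooth (logistic) profile.
    h, w = 64, 64
    y, x = np.mgrid[0:h, 0:w].astype(float)
    edge_pos = w / 2 + np.tan(np.deg2rad(5)) * (y - h / 2)
    img = 1.0 / (1.0 + np.exp(-(x - edge_pos)))

    # Step 1 (decoupled): edge position/orientation estimation -- assumed known here.
    # Step 2: bin pixels by signed distance to the edge to build a 4x-oversampled ESF.
    dist = x - edge_pos
    mask = np.abs(dist) <= 8
    bins = np.round(dist[mask] * 4).astype(int)
    bins -= bins.min()
    cnt = np.bincount(bins)
    esf = np.bincount(bins, weights=img[mask]) / np.maximum(cnt, 1)

    # LSF is the derivative of the ESF; the SFR is the normalized magnitude
    # of its windowed Fourier transform.
    lsf = np.gradient(esf) * np.hanning(esf.size)
    sfr = np.abs(np.fft.rfft(lsf))
    sfr /= sfr[0]
    ```

    Because the geometry step is separate, the same binning could be driven by a distorted (e.g. fisheye) edge model, or restricted to one Bayer color plane, which is the point of the deferred approach.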

  17. Visual properties and memorising scenes: Effects of image-space sparseness and uniformity.

    Science.gov (United States)

    Lukavský, Jiří; Děchtěrenko, Filip

    2017-10-01

    Previous studies have demonstrated that humans have a remarkable capacity to memorise a large number of scenes. The research on memorability has shown that memory performance can be predicted by the content of an image. We explored how remembering an image is affected by the image properties within the context of the reference set, including the extent to which it is different from its neighbours (image-space sparseness) and if it belongs to the same category as its neighbours (uniformity). We used a reference set of 2,048 scenes (64 categories), evaluated pairwise scene similarity using deep features from a pretrained convolutional neural network (CNN), and calculated the image-space sparseness and uniformity for each image. We ran three memory experiments, varying the memory workload with experiment length and colour/greyscale presentation. We measured the sensitivity and criterion value changes as a function of image-space sparseness and uniformity. Across all three experiments, we found separate effects of 1) sparseness on memory sensitivity, and 2) uniformity on the recognition criterion. People better remembered (and correctly rejected) images that were more separated from others. People tended to make more false alarms and fewer miss errors in images from categorically uniform portions of the image-space. We propose that both image-space properties affect human decisions when recognising images. Additionally, we found that colour presentation did not yield better memory performance over greyscale images.
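    The two image-space properties the study computes can be sketched directly from feature vectors. The snippet below uses random vectors as stand-ins for the CNN deep features, and approximates sparseness as the mean distance to an image's k nearest neighbours and uniformity as the fraction of those neighbours sharing its category; k = 10 and these exact definitions are assumptions for illustration, not necessarily the paper's formulas.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    feats = rng.normal(size=(200, 16))       # stand-ins for CNN deep features
    labels = rng.integers(0, 8, size=200)    # 8 mock scene categories
    k = 10

    # Pairwise Euclidean distances between all images in the reference set.
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]        # k nearest neighbours of each image

    # Sparseness: how far an image sits from its neighbours (larger = more isolated).
    sparseness = np.take_along_axis(d, nn, axis=1).mean(axis=1)
    # Uniformity: fraction of neighbours sharing the image's category.
    uniformity = (labels[nn] == labels[:, None]).mean(axis=1)
    ```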

  18. Pramana – Journal of Physics | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Based on the inverse transformation, an appropriate pre-processing scheme for electrically addressed input gray-scale images, particularly important in several optical processing and imaging applications, is suggested. Further, the necessity to compensate the SLM image nonlinearities in a volume holographic data ...

  19. Evaluation of moisture content distribution in wood by soft X-ray imaging

    International Nuclear Information System (INIS)

    Tanaka, T.; Avramidis, S.; Shida, S.

    2009-01-01

    A technique for nondestructive evaluation of moisture content distribution of Japanese cedar (sugi) during drying using a newly developed soft X-ray digital microscope was investigated. Radial, tangential, and cross-sectional samples measuring 100 x 100 x 10 mm were cut from green sugi wood. Each sample was dried in several steps in an oven and upon completion of each step, the mass was recorded and a soft X-ray image was taken. The relationship between moisture content and the average grayscale value of the soft X-ray image at each step was linear. In addition, the linear regressions overlapped each other regardless of the sample sections. These results showed that soft X-ray images could accurately estimate the moisture content. Applying this relationship to a small section of each sample, the moisture content distribution was estimated from the image differential between the soft X-ray pictures obtained from the sample in question and the same sample in the oven-dried condition. Moisture content profiles for 10-mm-wide parts at the centers of the samples were also obtained. The shapes of the profiles supported the evaluation method used in this study

  20. Methods of filtering the graph images of the functions

    Directory of Open Access Journals (Sweden)

    Олександр Григорович Бурса

    2017-06-01

    Full Text Available The theoretical aspects of cleaning raster images of scanned graphs of functions from digital, chromatic, and luminance distortions using computer graphics techniques have been considered. The basic types of distortion characteristic of graph images of functions have been stated. To suppress these distortions, several methods that yield high-quality resulting images while preserving their topological features have been suggested. The paper describes the techniques developed and improved by the authors: a method of cleaning the image of distortions by iterative contrasting, based on a step-by-step increase of the image contrast in the graph by 1%; a method of restoring small entities affected by distortion, based on thinning of the known contrast-increase filter matrix (the allowable dilution radius of the convolution kernel that preserves the graph lines has been established); and a technique integrating the contrast-based noise reduction method and the small-entity restoration method with the known σ-filter. Each method in the complex has been theoretically substantiated. The developed methods treat graph images both as a whole (global processing) and as fragments (local processing). Metrics assessing the quality of the resulting image under global and local processing have been chosen, and both the choice and the formulas have been substantiated. The proposed complex of methods for cleaning graph images of functions from grayscale distortions is adaptive to the form of the image carrier, the level of distortion in the image, and its distribution. The presented results of testing the developed complex of methods on a representative sample of images confirm its effectiveness

  1. [Determination of the daily changes curve of nitrogen oxides in the atmosphere by digital imaging colorimetry method].

    Science.gov (United States)

    Yang, Chuan-Xiao; Sun, Xiang-Ying; Liu, Bin

    2009-06-01

    From the digital images of the red complex resulting from the interaction of nitrite with N-(1-naphthyl)ethylenediamine dihydrochloride and p-aminobenzenesulfonic acid, it could be seen that the solution color obviously deepened with increasing nitrite ion concentration. The JPEG-format digital images were transformed into grayscale format with Origin 7.0 software, and the gray values were measured with Scion Image software. The gray values of the digital images likewise increased obviously with increasing nitrite ion concentration. Thus a novel digital imaging colorimetric (DIC) method to determine nitrogen oxide (NO(x)) contents in air was developed. Based on the red, green, and blue (RGB) tricolor theory, the principle of the digital imaging colorimetric method and the factors influencing digital imaging were discussed. The present method was successfully applied to the determination of the daily variation curve of nitrogen oxides in the atmosphere and of NO2- in synthetic samples, with recoveries of 97.3%-104.0% and a relative standard deviation (RSD) of less than 5.0%. The results of the determination were consistent with those obtained by the spectrophotometric method.
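    The core of such a DIC method is a linear calibration of gray value against concentration. The sketch below mimics it with hypothetical RGB readings (the concentrations, RGB values, and luma weights are illustrative assumptions, not the paper's data); gray is reported as 255 minus luma so that the reading rises with concentration, as in the abstract.

    ```python
    import numpy as np

    # Hypothetical calibration series: mean RGB of the imaged red complex for
    # standard nitrite concentrations (values illustrative only).
    conc = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])          # mg/L, assumed
    rgb = np.stack([np.array([250.0, 250.0, 250.0]) - c * np.array([40.0, 150.0, 140.0])
                    for c in conc])

    # JPEG -> grayscale via the standard RGB luma weights; invert so the gray
    # reading increases with concentration.
    gray = 255.0 - rgb @ np.array([0.299, 0.587, 0.114])

    # Linear calibration curve: gray value vs. concentration.
    slope, intercept = np.polyfit(conc, gray, 1)

    def estimate_conc(gray_value):
        """Invert the calibration line for an unknown sample."""
        return (gray_value - intercept) / slope
    ```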

  2. Color-filter-free spatial visible light communication using RGB-LED and mobile-phone camera.

    Science.gov (United States)

    Chen, Shih-Hao; Chow, Chi-Wai

    2014-12-15

    A novel color-filter-free visible light communication (VLC) system using a red-green-blue (RGB) light-emitting diode (LED) and a mobile-phone camera is proposed and demonstrated for the first time. A feature matching method based on the scale-invariant feature transform (SIFT) algorithm, applied to the received grayscale image, is used instead of the chromatic-information decoding method. The proposed method is simple and reduces the computational complexity. The signal processing is based on grayscale image computation; hence neither a color filter nor chromatic channel information is required. A proof-of-concept experiment is performed and high-performance channel recognition is achieved.
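    SIFT itself requires a full keypoint library (e.g. OpenCV); as a deliberately simpler stand-in for matching a known pattern in a received grayscale frame, the sketch below uses normalized cross-correlation. It is not the paper's SIFT pipeline, only an illustration of grayscale matching without any chromatic information; the frame size, template location, and scoring loop are all assumptions.

    ```python
    import numpy as np

    def ncc_match(frame, template):
        """Locate `template` in `frame` by exhaustive normalized cross-correlation."""
        th, tw = template.shape
        t = template - template.mean()
        tn = np.sqrt((t ** 2).sum())
        best, best_pos = -2.0, (0, 0)
        for i in range(frame.shape[0] - th + 1):
            for j in range(frame.shape[1] - tw + 1):
                win = frame[i:i + th, j:j + tw]
                wz = win - win.mean()
                denom = np.sqrt((wz ** 2).sum()) * tn
                score = (wz * t).sum() / denom if denom > 0 else 0.0
                if score > best:
                    best, best_pos = score, (i, j)
        return best_pos, best

    rng = np.random.default_rng(1)
    frame = rng.random((40, 40))              # mock received grayscale frame
    template = frame[12:20, 25:33].copy()     # pattern to recognize
    pos, score = ncc_match(frame, template)
    ```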

  3. Fusion of lens-free microscopy and mobile-phone microscopy images for high-color-accuracy and high-resolution pathology imaging

    Science.gov (United States)

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2017-03-01

    Digital pathology and telepathology require imaging tools with high-throughput, high-resolution and accurate color reproduction. Lens-free on-chip microscopy based on digital in-line holography is a promising technique towards these needs, as it offers a wide field of view (FOV >20 mm2) and high resolution with a compact, low-cost and portable setup. Color imaging has been previously demonstrated by combining reconstructed images at three discrete wavelengths in the red, green and blue parts of the visible spectrum, i.e., the RGB combination method. However, this RGB combination method is subject to color distortions. To improve the color performance of lens-free microscopy for pathology imaging, here we present a wavelet-based color fusion imaging framework, termed "digital color fusion microscopy" (DCFM), which digitally fuses together a grayscale lens-free microscope image taken at a single wavelength and a low-resolution and low-magnification color-calibrated image taken by a lens-based microscope, which can simply be a mobile phone based cost-effective microscope. We show that the imaging results of an H&E stained breast cancer tissue slide with the DCFM technique come very close to a color-calibrated microscope using a 40x objective lens with 0.75 NA. Quantitative comparison showed 2-fold reduction in the mean color distance using the DCFM method compared to the RGB combination method, while also preserving the high-resolution features of the lens-free microscope. Due to the cost-effective and field-portable nature of both lens-free and mobile-phone microscopy techniques, their combination through the DCFM framework could be useful for digital pathology and telepathology applications, in low-resource and point-of-care settings.
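    The fusion idea (high-resolution structure from the grayscale lens-free image, color from the low-resolution lens-based image) can be illustrated with a plain luminance/chrominance swap. The paper's DCFM works in the wavelet domain with color calibration; the sketch below is a much simpler stand-in that keeps only the concept, with nearest-neighbour upsampling and luma weights as assumptions.

    ```python
    import numpy as np

    def fuse(gray_hi, rgb_lo):
        """Keep the high-res grayscale image as luminance; borrow chrominance
        from the (nearest-neighbour upsampled) low-res color image."""
        s = gray_hi.shape[0] // rgb_lo.shape[0]
        rgb_up = np.repeat(np.repeat(rgb_lo, s, axis=0), s, axis=1)
        y = rgb_up @ np.array([0.299, 0.587, 0.114])      # luma of color image
        # Keep the color differences, replace the luma with the grayscale image.
        fused = rgb_up - y[..., None] + gray_hi[..., None]
        return np.clip(fused, 0.0, 1.0)

    rng = np.random.default_rng(2)
    gray_hi = rng.random((64, 64))         # mock lens-free reconstruction
    rgb_lo = rng.random((16, 16, 3))       # mock mobile-phone color image
    out = fuse(gray_hi, rgb_lo)
    ```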

  4. Dealing with extreme data diversity: extraction and fusion from the growing types of document formats

    Science.gov (United States)

    David, Peter; Hansen, Nichole; Nolan, James J.; Alcocer, Pedro

    2015-05-01

    The growth in text data available online is accompanied by a growth in the diversity of available documents. Corpora with extreme heterogeneity in terms of file formats, document organization, page layout, text style, and content are common. The absence of meaningful metadata describing the structure of online and open-source data leads to text extraction results that contain no information about document structure and are cluttered with page headers and footers, web navigation controls, advertisements, and other items that are typically considered noise. We describe an approach to document structure and metadata recovery that uses visual analysis of documents to infer the communicative intent of the author. Our algorithm identifies the components of documents such as titles, headings, and body content, based on their appearance. Because it operates on an image of a document, our technique can be applied to any type of document, including scanned images. Our approach to document structure recovery considers a finer-grained set of component types than prior approaches. In this initial work, we show that a machine learning approach to document structure recovery using a feature set based on the geometry and appearance of images of documents achieves a 60% greater F1-score than a baseline random classifier.

  5. Standardized cine-loop documentation in abdominal ultrasound facilitates offline image interpretation.

    Science.gov (United States)

    Dormagen, Johann Baptist; Gaarder, Mario; Drolsum, Anders

    2015-01-01

    One of the main disadvantages of conventional ultrasound is its operator dependency, which might impede the reproducibility of the sonographic findings. A new approach with cine-loops and standardized scan protocols can overcome this drawback. To compare abdominal ultrasound findings of immediate bedside reading by performing radiologist with offline reading by a non-performing radiologist, using standardized cine-loop sequences. Over a 6-month period, three radiologists performed 140 dynamic ultrasound organ-based examinations in 43 consecutive outpatients. Examination protocols were standardized and included predefined probe position and sequences of short cine-loops of the liver, gallbladder, pancreas, kidneys, and urine bladder, covering the organs completely in two planes. After bedside examinations, the studies were reviewed and read out immediately by the performing radiologist. Image quality was registered from 1 (no diagnostic value) to 5 (excellent cine-loop quality). Offline reading was performed blinded by a radiologist who had not performed the examination. Bedside and offline reading were compared with each other and with consensus results. In 140 examinations, consensus reading revealed 21 cases with renal disorders, 17 cases with liver and bile pathology, and four cases with bladder pathology. Overall inter-observer agreement was 0.73 (95% CI 0.61-0.91), with lowest agreement for findings of the urine bladder (0.36) and highest agreement in liver examinations (0.90). Disagreements between the two readings were seen in nine kidneys, three bladder examinations, one pancreas and bile system examinations each, and in one liver, giving a total number of mismatches of 11%. Nearly all cases of mismatch were of minor clinical significance. The median image quality was 3 (range, 2-5) with most examinations deemed a quality of 3. Compared to consensus reading, overall accuracy was 96% for bedside reading and 94% for offline reading. Standardized cine

  6. Transfer Learning with Convolutional Neural Networks for Classification of Abdominal Ultrasound Images.

    Science.gov (United States)

    Cheng, Phillip M; Malhi, Harshawn S

    2017-04-01

    The purpose of this study is to evaluate transfer learning with deep convolutional neural networks for the classification of abdominal ultrasound images. Grayscale images from 185 consecutive clinical abdominal ultrasound studies were categorized into 11 categories based on the text annotation specified by the technologist for the image. Cropped images were rescaled to 256 × 256 resolution and randomized, with 4094 images from 136 studies constituting the training set, and 1423 images from 49 studies constituting the test set. The fully connected layers of two convolutional neural networks based on CaffeNet and VGGNet, previously trained on the 2012 Large Scale Visual Recognition Challenge data set, were retrained on the training set. Weights in the convolutional layers of each network were frozen to serve as fixed feature extractors. Accuracy on the test set was evaluated for each network. A radiologist experienced in abdominal ultrasound also independently classified the images in the test set into the same 11 categories. The CaffeNet network classified 77.3% of the test set images accurately (1100/1423 images), with a top-2 accuracy of 90.4% (1287/1423 images). The larger VGGNet network classified 77.9% of the test set accurately (1109/1423 images), with a top-2 accuracy of 89.7% (1276/1423 images). The radiologist classified 71.7% of the test set images correctly (1020/1423 images). The differences in classification accuracies between both neural networks and the radiologist were statistically significant. Transfer learning with convolutional neural networks may thus be used to construct effective classifiers for abdominal ultrasound images.
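    The training setup (frozen convolutional layers as a fixed feature extractor, only the final classifier retrained) can be sketched without the actual CaffeNet/VGGNet weights. Below, a fixed random projection with ReLU plays the role of the frozen extractor and a logistic-regression head is trained on top; the data, dimensions, and learning rate are toy assumptions that only mirror the structure of the paper's setup, not its models.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Stand-in for a pretrained, frozen conv feature extractor:
    # a fixed random projection + ReLU, never updated during training.
    W_frozen = rng.normal(size=(256, 64)) / 16.0

    X = rng.normal(size=(200, 256))              # toy "image" vectors
    labels = (X[:, 0] > 0).astype(float)         # two mock categories

    F = np.maximum(X @ W_frozen, 0.0)            # frozen features

    # Retrain only the final (fully connected) classifier on the frozen features.
    w = np.zeros(64)
    for _ in range(500):
        logits = np.clip(F @ w, -30, 30)
        p = 1.0 / (1.0 + np.exp(-logits))        # logistic head
        w -= 0.5 * F.T @ (p - labels) / len(X)   # gradient step on the head only

    train_acc = ((F @ w > 0).astype(float) == labels).mean()
    ```

    The design point is that `W_frozen` never receives a gradient update, exactly as the frozen convolutional layers in the paper; only the small head is fit to the new task.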

  7. Colorization and automated segmentation of human T2 MR brain images for characterization of soft tissues.

    Directory of Open Access Journals (Sweden)

    Muhammad Attique

    Full Text Available Characterization of tissues such as brain using magnetic resonance (MR) images, and colorization of the gray-scale image, have been reported in the literature, along with their advantages and drawbacks. Here, we present two independent methods: (i) a novel colorization method to underscore the variability in brain MR images, indicative of the underlying physical density of bio tissue; (ii) a segmentation method (both hard and soft segmentation) to characterize gray brain MR images. The segmented images are then transformed into color using the above-mentioned colorization method, yielding promising results for manual tracing. Our color transformation incorporates the voxel classification by matching the luminance of voxels of the source MR image and the provided color image by measuring the distance between them. The segmentation method is based on single-phase clustering for 2D and 3D image segmentation with a new auto centroid selection method, which divides the image into three distinct regions (gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF)) using prior anatomical knowledge. Results have been successfully validated on human T2-weighted (T2) brain MR images. The proposed method can potentially be applied to gray-scale images from other imaging modalities, bringing out additional diagnostic tissue information contained in the colorized image processing approach as described.
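    A minimal version of the hard/soft three-class split can be sketched as 1D k-means on intensity with fixed anatomically-motivated initial centroids (standing in for the paper's auto centroid selection), plus distance-based soft memberships. The mock T2 intensities and noise level below are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Mock T2 slice: three tissue populations (WM dark, GM mid, CSF bright on T2);
    # intensity values are illustrative only.
    true_class = rng.integers(0, 3, size=(32, 32))
    means = np.array([0.2, 0.5, 0.9])
    img = means[true_class] + rng.normal(0, 0.04, size=(32, 32))

    # Hard segmentation: 1D k-means on intensity, fixed initial centroids.
    c = np.array([0.1, 0.6, 1.0])
    for _ in range(20):
        lab = np.argmin(np.abs(img[..., None] - c), axis=-1)
        c = np.array([img[lab == k].mean() for k in range(3)])

    # Soft segmentation: memberships inversely proportional to distance to centroids.
    d = np.abs(img[..., None] - c) + 1e-9
    memb = (1 / d) / (1 / d).sum(axis=-1, keepdims=True)
    ```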

  8. A review of technology and trends in document delivery services

    Energy Technology Data Exchange (ETDEWEB)

    Bourne, C P [DIALOG Information Services, Inc., Palo Alto, CA (United States)

    1990-05-01

    This paper reviews the major lines of technical development being pursued to extend or replace traditional inter-library loan and photocopy service and to facilitate the delivery of source documents to individual end users. Examples of technical approaches discussed are: (1) the inclusion of full text and image data in central online systems; (2) image workstations such as the ADONIS and UMI systems; and (3) the use of electronic networks for document ordering and delivery. Some consideration is given to the policy implications for libraries and information systems. (author). 11 tabs.

  9. A review of technology and trends in document delivery services

    International Nuclear Information System (INIS)

    Bourne, C.P.

    1990-05-01

    This paper reviews the major lines of technical development being pursued to extend or replace traditional inter-library loan and photocopy service and to facilitate the delivery of source documents to individual end users. Examples of technical approaches discussed are: 1) the inclusion of full text and image data in central online systems; 2) image workstations such as the ADONIS and UMI systems; and 3) the use of electronic networks for document ordering and delivery. Some consideration is given to the policy implications for libraries and information systems. (author). 11 tabs

  10. Infrared image background modeling based on improved Susan filtering

    Science.gov (United States)

    Yuehua, Xia

    2018-02-01

    When the SUSAN filter is used to model the infrared image background, its Gaussian kernel lacks directional filtering ability; after filtering, the edge information of the image is not well preserved, so many edge singular points remain in the difference image, increasing the difficulty of target detection. To solve these problems, an anisotropy algorithm is introduced in this paper, and an anisotropic Gaussian filter is used instead of the Gaussian filter in the SUSAN filter operator. First, an anisotropic gradient operator is used to calculate the horizontal and vertical gradients at each image point, to determine the direction of the filter's long axis. Second, the local area of the point and the neighborhood smoothness are used to calculate the filter's long- and short-axis variances. Then the first-order norm of the difference between the local-area gray level of the point and its mean is calculated, to determine the threshold of the SUSAN filter. Finally, the constructed SUSAN filter is convolved with the image to obtain the background image, and at the same time the difference between the background image and the original image is obtained. The experimental results, evaluated by Mean Squared Error (MSE), Structural Similarity (SSIM), and local Signal-to-Noise Ratio Gain (GSNR), show that compared with the traditional filtering algorithm, the improved SUSAN filter achieves a better background modeling effect: it effectively preserves the edge information in the image, and dim small targets are effectively enhanced in the difference image, which greatly reduces the false alarm rate.
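    The overall background-modeling-and-differencing scheme can be sketched with an isotropic separable Gaussian standing in for the paper's anisotropic SUSAN-style filter: the smoothed frame is the background model, and a dim small target survives in the difference image. The frame contents, target size, and sigma are illustrative assumptions.

    ```python
    import numpy as np

    def gaussian_blur(img, sigma):
        """Separable Gaussian smoothing with edge padding. An isotropic stand-in
        for the anisotropic filter: the modeling/differencing steps are the same,
        only the kernel shape differs."""
        r = int(3 * sigma)
        k = np.exp(-np.arange(-r, r + 1) ** 2 / (2.0 * sigma ** 2))
        k /= k.sum()
        tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 1,
                                  np.pad(img, ((0, 0), (r, r)), mode="edge"))
        return np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 0,
                                   np.pad(tmp, ((r, r), (0, 0)), mode="edge"))

    rng = np.random.default_rng(5)
    frame = 0.3 + 0.02 * rng.standard_normal((64, 64))   # IR background + noise
    frame[30:32, 40:42] += 0.5                           # dim small target

    background = gaussian_blur(frame, sigma=2.0)         # background model
    diff = frame - background                            # target stands out here
    peak = np.unravel_index(np.argmax(diff), diff.shape)
    ```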

  11. Image Quality Characteristics of Handheld Display Devices for Medical Imaging

    Science.gov (United States)

    Yamazaki, Asumi; Liu, Peter; Cheng, Wei-Chung; Badano, Aldo

    2013-01-01

    Handheld devices such as mobile phones and tablet computers have become widespread with thousands of available software applications. Recently, handhelds are being proposed as part of medical imaging solutions, especially in emergency medicine, where immediate consultation is required. However, handheld devices differ significantly from medical workstation displays in terms of display characteristics. Moreover, the characteristics vary significantly among device types. We investigate the image quality characteristics of various handheld devices with respect to luminance response, spatial resolution, spatial noise, and reflectance. We show that the luminance characteristics of the handheld displays are different from those of workstation displays complying with the grayscale standard target response, suggesting that luminance calibration might be needed. Our results also demonstrate that the spatial characteristics of handhelds can surpass those of medical workstation displays, particularly for recent-generation devices. While a 5 mega-pixel monochrome workstation display has horizontal and vertical modulation transfer factors of 0.52 and 0.47 at the Nyquist frequency, the handheld displays released after 2011 can have values higher than 0.63 at the respective Nyquist frequencies. The noise power spectra for workstation displays are higher than 1.2×10⁻⁵ mm² at 1 mm⁻¹, while handheld displays have values lower than 3.7×10⁻⁶ mm². Reflectance measurements on some of the handheld displays are consistent with measurements for workstation displays with, in some cases, low specular and diffuse reflectance coefficients. The variability of the characterization results among devices due to the different technological features indicates that image quality varies greatly among handheld display devices. PMID:24236113

  12. Efficient document-image super-resolution using convolutional ...

    Indian Academy of Sciences (India)

    Ram Krishna Pandey

    2018-03-06

    Mar 6, 2018 ... of almost 43%, 45% and 57% on 75 dpi Tamil, English and Kannada images, respectively. Keywords. ... In our work, we have used a basic CNN with rectified linear unit (ReLU) and .... 4.3 Dataset used for the study. Since the ...

  13. Clustering document fragments using background color and texture information

    Science.gov (United States)

    Chanda, Sukalpa; Franke, Katrin; Pal, Umapada

    2012-01-01

    Forensic analysis of questioned documents can sometimes be extensively data intensive. A forensic expert might need to analyze a heap of document fragments, and in such cases, to ensure reliability, he/she should focus only on relevant evidence hidden in those document fragments. Relevant document retrieval requires finding similar document fragments. One way to obtain such similar documents is to use the document fragments' physical characteristics, such as color and texture. In this article we propose an automatic scheme to retrieve similar document fragments based on the visual appearance of the document paper and its texture. Multispectral color characteristics using biologically inspired color differentiation techniques are implemented here, by projecting document color characteristics into the Lab color space. Gabor filter-based texture analysis is used to identify document texture. It is expected that document fragments from the same source will have similar color and texture. For clustering similar document fragments in our test dataset, we use a Self-Organizing Map (SOM) of dimension 5×5, where the document color and texture information are used as features. We obtained an encouraging accuracy of 97.17% on 1063 test images.
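    A minimal 5×5 SOM over mock color/texture feature vectors can be sketched in a few lines of numpy; the feature dimensionality, number of sources, training schedule, and noise levels below are illustrative assumptions, not the paper's configuration.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Mock fragment features (paper color + texture energy) from 4 sources.
    centers = rng.uniform(0, 1, size=(4, 5))
    feats = np.repeat(centers, 50, axis=0) + rng.normal(0, 0.03, size=(200, 5))
    src = np.repeat(np.arange(4), 50)

    # 5x5 Self-Organizing Map, as in the paper.
    grid = np.stack(np.meshgrid(np.arange(5), np.arange(5)), -1).reshape(-1, 2)
    W = rng.uniform(0, 1, size=(25, 5))
    for t in range(2000):
        x = feats[rng.integers(len(feats))]
        bmu = np.argmin(((W - x) ** 2).sum(1))          # best-matching unit
        lr = 0.5 * (1 - t / 2000)                       # decaying learning rate
        sig = 2.0 * (1 - t / 2000) + 0.3                # shrinking neighbourhood
        h = np.exp(-((grid - grid[bmu]) ** 2).sum(1) / (2 * sig ** 2))
        W += lr * h[:, None] * (x - W)                  # pull BMU neighbourhood

    bmus = np.array([np.argmin(((W - x) ** 2).sum(1)) for x in feats])
    ```

    Fragments from the same source should land on the same or nearby map units, which is what makes the SOM usable for grouping fragments before a human review.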

  14. Phase-Image Encryption Based on 3D-Lorenz Chaotic System and Double Random Phase Encoding

    Science.gov (United States)

    Sharma, Neha; Saini, Indu; Yadav, AK; Singh, Phool

    2017-12-01

    In this paper, an encryption scheme for phase-images based on 3D-Lorenz chaotic system in Fourier domain under the 4f optical system is presented. The encryption scheme uses a random amplitude mask in the spatial domain and a random phase mask in the frequency domain. Its inputs are phase-images, which are relatively more secure as compared to the intensity images because of non-linearity. The proposed scheme further derives its strength from the use of 3D-Lorenz transform in the frequency domain. Although the experimental setup for optical realization of the proposed scheme has been provided, the results presented here are based on simulations on MATLAB. It has been validated for grayscale images, and is found to be sensitive to the encryption parameters of the Lorenz system. The attacks analysis shows that the key-space is large enough to resist brute-force attack, and the scheme is also resistant to the noise and occlusion attacks. Statistical analysis and the analysis based on correlation distribution of adjacent pixels have been performed to test the efficacy of the encryption scheme. The results have indicated that the proposed encryption scheme possesses a high level of security.
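    The classic 4f double random phase encoding chain the scheme builds on can be sketched with numpy FFTs. Note the assumptions: the masks below are plain pseudo-random, not derived from the paper's 3D-Lorenz chaotic system, and an amplitude image stands in for the paper's phase-image input.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    img = rng.random((32, 32))                      # grayscale plaintext image

    # Two statistically independent random phase masks:
    m1 = np.exp(2j * np.pi * rng.random((32, 32)))  # spatial-domain mask
    m2 = np.exp(2j * np.pi * rng.random((32, 32)))  # frequency-domain mask

    # Encryption: multiply by m1, Fourier transform, multiply by m2, inverse FT.
    cipher = np.fft.ifft2(np.fft.fft2(img * m1) * m2)

    # Decryption reverses the 4f chain with the conjugate masks (the keys).
    decrypted = np.fft.ifft2(np.fft.fft2(cipher) * np.conj(m2)) * np.conj(m1)
    recovered = np.abs(decrypted)
    ```

    The ciphertext is a complex field statistically decorrelated from the plaintext, while the conjugate-mask chain inverts the encryption exactly; the chaotic-system contribution in the paper is to generate `m1`/`m2` from a small set of Lorenz initial conditions, shrinking the key that must be shared.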

  15. Fuzzy Reasoning to More Accurately Determine Void Areas on Optical Micrographs of Composite Structures

    Science.gov (United States)

    Dominquez, Jesus A.; Tate, Lanetra C.; Wright, M. Clara; Caraccio, Anne

    2013-01-01

    Accomplishing the best-performing composite matrix (resin) requires that not only the processing method but also the cure cycle generate low-void-content structures. If voids are present, the performance of the composite matrix is significantly reduced, usually noticed as significant reductions in matrix-dominated properties such as compression and shear strength. Voids in composite materials are areas absent of the composite components: matrix and fibers. Accurately estimating the characteristics of these voids is critical for high-performance composite structures. One widely used method of performing void analysis on a composite structure sample is to acquire optical micrographs or Scanning Electron Microscope (SEM) images of lateral sides of the sample and retrieve the void areas within the micrographs/images using an image analysis technique. Segmentation for the retrieval and subsequent computation of void areas is challenging because the gray-scale values of the void areas are close to those of the matrix, leading to the need to perform the segmentation manually, based on the histogram of the micrographs/images. The use of an algorithm developed by NASA and based on Fuzzy Reasoning (FR) proved to overcome the difficulty of suitably differentiating void and matrix image areas with similar gray-scale values, leading not only to a more accurate estimation of void areas in composite matrix micrographs but also to a faster void analysis process, as the algorithm is fully autonomous.
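    The appeal of fuzzy reasoning here is that overlapping gray-level populations get graded memberships rather than a single crisp threshold. The sketch below classifies pixels by comparing trapezoidal "void" and "matrix" memberships; the membership breakpoints and the mock micrograph intensities are illustrative assumptions, not NASA's actual rule base.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Mock micrograph: matrix gray levels and slightly darker voids overlap,
    # which is exactly what makes crisp thresholding unreliable.
    img = rng.normal(150, 8, size=(64, 64))
    img[20:28, 20:28] = rng.normal(120, 8, size=(8, 8))   # a void region

    def trapezoid(x, a, b, c, d):
        """Trapezoidal fuzzy membership function over the gray-level axis."""
        return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0, 1)

    # Graded memberships for "void" and "matrix" (breakpoints hypothetical).
    mu_void = trapezoid(img, 80, 100, 125, 140)
    mu_matrix = trapezoid(img, 125, 145, 200, 255)

    void_mask = mu_void > mu_matrix        # pixel-wise winning membership
    void_fraction = void_mask.mean()       # void-content estimate
    ```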

  16. Edge detection of optical subaperture image based on improved differential box-counting method

    Science.gov (United States)

    Li, Yi; Hui, Mei; Liu, Ming; Dong, Liquan; Kong, Lingqin; Zhao, Yuejin

    2018-01-01

    Optical synthetic aperture imaging technology is an effective approach to improving imaging resolution. Compared with a monolithic mirror system, the image of an optical synthetic aperture system is often more complex at the edges, and the gaps between segments make stitching a difficult problem. It is therefore necessary to extract the edges of subaperture images to achieve effective stitching. Fractal dimension, as a measurable feature, can describe image surface texture characteristics, which provides a new approach for edge detection. In our research, an improved differential box-counting method is used to calculate the fractal dimension of the image, and the obtained fractal dimension is then mapped to a grayscale image to detect edges. Compared with the original differential box-counting method, this method makes two improvements. First, by modifying the box-counting mechanism, a box with a fixed height is replaced by a box with adaptive height, which solves the problem of over-counting the number of boxes covering the image intensity surface. Second, an image reconstruction method based on a super-resolution convolutional neural network is used to enlarge small images, which solves the problem that the fractal dimension cannot be calculated accurately for small images, while maintaining the scale invariance of the fractal dimension. The experimental results show that the proposed algorithm can effectively eliminate noise and has a lower false detection rate than traditional edge detection algorithms. In addition, this algorithm maintains the integrity and continuity of image edges while retaining important edge information.
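    The baseline the paper improves on, classic differential box-counting (DBC) with a fixed box height, can be sketched as follows; the adaptive-height improvement is not implemented here, and the scale set and gray-level range are conventional assumptions. A flat image should yield a fractal dimension near 2 and a rough one a higher value.

    ```python
    import numpy as np

    def fractal_dimension_dbc(img):
        """Classic differential box-counting (fixed box height) estimate of an
        image's fractal dimension. The paper's improvement replaces the fixed
        height with an adaptive one."""
        n = img.shape[0]                 # assume a square image
        g = 256                          # gray-level range
        sizes = [2, 4, 8, 16]
        counts = []
        for s in sizes:
            h = max(s * g / n, 1)        # fixed box height at this scale
            nb = n // s
            blocks = img[:nb * s, :nb * s].reshape(nb, s, nb, s)
            mx = blocks.max(axis=(1, 3))
            mn = blocks.min(axis=(1, 3))
            # Number of boxes needed to span each block's intensity range.
            nr = np.ceil((mx + 1) / h) - np.ceil((mn + 1) / h) + 1
            counts.append(nr.sum())
        # FD is the slope of log(count) against log(1/s).
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    rng = np.random.default_rng(9)
    rough = rng.integers(0, 256, size=(128, 128)).astype(float)
    smooth = np.full((128, 128), 128.0)
    fd_rough = fractal_dimension_dbc(rough)
    fd_smooth = fractal_dimension_dbc(smooth)
    ```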

  17. Transcription of Spanish Historical Handwritten Documents with Deep Neural Networks

    Directory of Open Access Journals (Sweden)

    Emilio Granell

    2018-01-01

    Full Text Available The digitization of historical handwritten document images is important for the preservation of cultural heritage. Moreover, the transcription of text images obtained from digitization is necessary to provide efficient information access to the content of these documents. Handwritten Text Recognition (HTR) has become an important research topic in the areas of image and computational language processing that allows us to obtain transcriptions from text images. State-of-the-art HTR systems are, however, far from perfect. One difficulty is that they have to cope with image noise and handwriting variability. Another difficulty is the presence of a large amount of Out-Of-Vocabulary (OOV) words in ancient historical texts. A solution to this problem is to use external lexical resources, but such resources might be scarce or unavailable given the nature and age of such documents. This work proposes a solution to avoid this limitation. It consists of associating a powerful optical recognition system, which copes with image noise and variability, with a language model based on sub-lexical units, which models OOV words. Such a language modeling approach reduces the size of the lexicon while increasing lexicon coverage. Experiments are first conducted on the publicly available Rodrigo dataset, which contains the digitization of an ancient Spanish manuscript, with a recognizer based on Hidden Markov Models (HMMs). They show that sub-lexical units outperform word units in terms of Word Error Rate (WER), Character Error Rate (CER) and OOV word accuracy rate. This approach is then applied to deep net classifiers, namely Bi-directional Long-Short Term Memory networks (BLSTMs) and Convolutional Recurrent Neural Nets (CRNNs). Results show that CRNNs outperform HMMs and BLSTMs, reaching the lowest WER and CER for this image dataset and significantly improving OOV recognition.
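
    The CER metric reported above is the Levenshtein edit distance normalized by the reference length; a minimal sketch follows (WER is computed identically over word tokens instead of characters). This is the standard definition, not the authors' evaluation code.

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: Levenshtein distance between the reference
    transcription and the recognizer output, divided by the reference
    length."""
    m, n = len(reference), len(hypothesis)
    # prev[j] holds the edit distance between reference[:i-1] and hypothesis[:j]
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            cur[j] = min(prev[j] + 1,       # deletion
                         cur[j - 1] + 1,    # insertion
                         prev[j - 1] + cost)  # substitution
        prev = cur
    return prev[n] / max(m, 1)
```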

  18. A fast color image enhancement algorithm based on Max Intensity Channel

    Science.gov (United States)

    Sun, Wei; Han, Long; Guo, Baolong; Jia, Wenyan; Sun, Mingui

    2014-03-01

    In this paper, we extend image enhancement techniques based on the retinex theory, which imitates human visual perception of scenes containing high illumination variations. This extension achieves simultaneous dynamic range modification, color consistency, and lightness rendition without the multi-scale Gaussian filtering that can introduce halo artifacts. The reflection component is analyzed based on the illumination and reflection imaging model. A new prior named the Max Intensity Channel (MIC) is introduced, assuming that the reflections of some points in the scene are very high in at least one color channel. Using this prior, the illumination of the scene is obtained directly by performing a grayscale closing operation and a fast cross-bilateral filtering on the MIC of the input color image. Consequently, the reflection component of each RGB color channel can be determined from the illumination and reflection imaging model. The proposed algorithm estimates an illumination component that is relatively smooth and maintains the edge details in different regions. A satisfactory color rendition is achieved for a class of images that do not satisfy the gray-world assumption implicit in the theoretical foundation of the retinex. Experiments are carried out to compare the new method with several spatial and transform domain methods. Our results indicate that the new method is superior in enhancement applications, improves computation speed, and performs better than other methods for images with high illumination variations. Further comparisons on images from the National Aeronautics and Space Administration and a wearable camera, eButton, have shown a high performance of the new method, with better color restoration and preservation of image details.
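
    The illumination/reflection decomposition described above can be sketched with standard tools. This is a simplified illustration in which scipy's grayscale closing stands in for the morphology step and a plain Gaussian filter stands in for the paper's fast cross-bilateral filter, so edge preservation is weaker than in the original method; the window size and sigma are arbitrary choices for the example.

```python
import numpy as np
from scipy import ndimage

def estimate_illumination(rgb, close_size=5, sigma=3.0):
    """Estimate scene illumination from the Max Intensity Channel (MIC).

    rgb: H x W x 3 float array in [0, 1]. The MIC is the per-pixel
    maximum over the color channels; a grayscale closing fills in dark
    details, and a Gaussian filter (stand-in for the cross-bilateral
    filter) smooths the result."""
    mic = rgb.max(axis=2)
    closed = ndimage.grey_closing(mic, size=(close_size, close_size))
    illum = ndimage.gaussian_filter(closed, sigma=sigma)
    # keep illumination above a small floor to avoid division blow-up
    return np.clip(illum, 1e-3, 1.0)

def reflectance(rgb, illum):
    """Per-channel reflectance from the imaging model I = L * R."""
    return np.clip(rgb / illum[..., None], 0.0, 1.0)
```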

  19. Comprehensive Non-Destructive Conservation Documentation of Lunar Samples Using High-Resolution Image-Based 3D Reconstructions and X-Ray CT Data

    Science.gov (United States)

    Blumenfeld, E. H.; Evans, C. A.; Oshel, E. R.; Liddle, D. A.; Beaulieu, K.; Zeigler, R. A.; Hanna, R. D.; Ketcham, R. A.

    2015-01-01

    Established contemporary conservation methods within the fields of Natural and Cultural Heritage encourage an interdisciplinary approach to preservation of heritage material (both tangible and intangible) that holds "Outstanding Universal Value" for our global community. NASA's lunar samples were acquired from the Moon for the primary purpose of intensive scientific investigation. These samples, however, also carry cultural significance, as evidenced by the millions of people per year who visit lunar displays in museums and heritage centers around the world. Being both scientifically and culturally significant, the lunar samples require a unique conservation approach. Government mandate dictates that NASA's Astromaterials Acquisition and Curation Office develop and maintain protocols for "documentation, preservation, preparation and distribution of samples for research, education and public outreach" for both current and future collections of astromaterials. Documentation, considered the first stage within the conservation methodology, has evolved many new techniques since curation protocols for the lunar samples were first implemented, and the development of new documentation strategies for current and future astromaterials is beneficial to keeping curation protocols up to date. We have developed and tested a comprehensive non-destructive documentation technique using high-resolution image-based 3D reconstruction and X-ray CT (XCT) data in order to create interactive 3D models of lunar samples that would ultimately be served to both researchers and the public. These data enhance preliminary scientific investigations, including targeted sample requests, and also provide a new visual platform for the public to experience and interact with the lunar samples. We intend to serve these data as they are acquired on NASA's Astromaterials Acquisition and Curation website at http://curator.jsc.nasa.gov/. Providing 3D interior and exterior documentation of astromaterial

  20. Ultrasound Imaging Techniques for Spatiotemporal Characterization of Composition, Microstructure, and Mechanical Properties in Tissue Engineering.

    Science.gov (United States)

    Deng, Cheri X; Hong, Xiaowei; Stegemann, Jan P

    2016-08-01

    Ultrasound techniques are increasingly being used to quantitatively characterize both native and engineered tissues. This review provides an overview and selected examples of the main techniques used in these applications. Grayscale imaging has been used to characterize extracellular matrix deposition, and quantitative ultrasound imaging based on the integrated backscatter coefficient has been applied to estimating cell concentrations and matrix morphology in tissue engineering. Spectral analysis has been employed to characterize the concentration and spatial distribution of mineral particles in a construct, as well as to monitor mineral deposition by cells over time. Ultrasound techniques have also been used to measure the mechanical properties of native and engineered tissues. Conventional ultrasound elasticity imaging and acoustic radiation force imaging have been applied to detect regions of altered stiffness within tissues. Sonorheometry and monitoring of steady-state excitation and recovery have been used to characterize viscoelastic properties of tissue using a single transducer to both deform and image the sample. Dual-mode ultrasound elastography uses separate ultrasound transducers to produce a more potent deformation force for microscale characterization of the viscoelasticity of hydrogel constructs. These ultrasound-based techniques have high potential to impact the field of tissue engineering as they are further developed and their range of applications expands.

  1. Tenosynovitis Evaluation Using Image Fusion and B-Flow - A Pilot Study on New Imaging Techniques in Rheumatoid Arthritis Patients

    DEFF Research Database (Denmark)

    Ammitzbøll-Danielsen, Mads; Glinatsi, Daniel; Torp-Pedersen, Søren

    2017-01-01

    .40) for tendon sheaths. No statistically significant difference was found between US tendon area and MRI tendon area 2 (Wilcoxon's test; p = 0.47). Overall, the agreement between grayscale and color Doppler (CD) US and MRI tenosynovitis visualization and scoring was good, but not between CD and BFI. Conclusion...

  2. Color model comparative analysis for breast cancer diagnosis using H and E stained images

    Science.gov (United States)

    Li, Xingyu; Plataniotis, Konstantinos N.

    2015-03-01

    Digital cancer diagnosis is a research realm where signal processing techniques are used to analyze and classify color histopathology images. Unlike grayscale image analysis in magnetic resonance imaging or X-ray, the colors in histopathology images convey a large amount of histological information and thus play a significant role in cancer diagnosis. Though color information is widely used in histopathology work, to date there are few studies on color model selection for feature extraction in cancer diagnosis schemes. This paper addresses the problem of color space selection for digital cancer classification using H and E stained images, and investigates the effectiveness of various color models (RGB, HSV, CIE L*a*b*, and the stain-dependent H and E decomposition model) in breast cancer diagnosis. In particular, we build a diagnosis framework as a comparison benchmark and take specific concerns of medical decision systems into account in the evaluation. The evaluation methodology includes feature discriminative power evaluation and final diagnosis performance comparison. Experimentation on a publicly accessible histopathology image set suggests that the H and E decomposition model outperforms the other assessed color spaces. As for the reasons behind the varying performance of the color spaces, our analysis via mutual information estimation demonstrates that the color components in the H and E model are less dependent, so that most of the feature discriminative power is concentrated in one channel instead of being spread among the channels, as in other color spaces.
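
    The H and E decomposition mentioned above is commonly implemented as Beer-Lambert color deconvolution; a minimal sketch follows. It uses the widely cited Ruifrok and Johnston stain vectors, which are an assumption of this sketch: the paper's exact stain matrix is not given in the abstract.

```python
import numpy as np

# Ruifrok & Johnston stain vectors commonly used for H&E deconvolution
# (rows: hematoxylin, eosin, and an orthogonal residual channel)
H_VEC = np.array([0.650, 0.704, 0.286])
E_VEC = np.array([0.072, 0.990, 0.105])
R_VEC = np.cross(H_VEC, E_VEC)
STAINS = np.stack([H_VEC / np.linalg.norm(H_VEC),
                   E_VEC / np.linalg.norm(E_VEC),
                   R_VEC / np.linalg.norm(R_VEC)])

def rgb_to_he(rgb):
    """Separate an H&E-stained RGB image (floats in (0, 1]) into
    per-stain concentration maps via Beer-Lambert color deconvolution:
    OD = C @ STAINS, so C = OD @ inv(STAINS)."""
    od = -np.log10(np.maximum(rgb, 1e-6))  # optical density
    conc = od.reshape(-1, 3) @ np.linalg.inv(STAINS)
    return conc.reshape(rgb.shape)
```

A pixel dyed purely with hematoxylin maps to a nonzero value in the first channel only, which is the channel-independence property the abstract attributes to the H and E model.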

  3. Method used to test the imaging consistency of binocular camera's left-right optical system

    Science.gov (United States)

    Liu, Meiying; Wang, Hu; Liu, Jie; Xue, Yaoke; Yang, Shaodong; Zhao, Hui

    2016-09-01

    For a binocular camera, the consistency of the optical parameters of the left and right optical systems is an important factor that influences the overall imaging consistency. Conventional optical system testing procedures lack specifications suitable for evaluating imaging consistency. In this paper, considering the special requirements of binocular optical imaging systems, a method for measuring the imaging consistency of a binocular camera is presented. Based on this method, a measurement system composed of an integrating sphere, a rotary table and a CMOS camera has been established. First, the left and right optical systems capture images with normal exposure time under the same conditions. Second, a contour image is obtained based on a multiple-threshold segmentation result, and the boundary is determined using the slope of the contour lines near the pseudo-contour line. Third, a gray-level constraint based on the corresponding coordinates of the left and right images is established, and the imaging consistency can be evaluated through the standard deviation σ of the grayscale imaging difference D (x, y) between the left and right optical systems. The experiments demonstrate that the method is suitable for imaging consistency testing of binocular cameras. When the 3σ spread of the imaging gray difference D (x, y) between the left and right optical systems of the binocular camera does not exceed 5%, the design requirements are considered to have been achieved. This method is effective and paves the way for imaging consistency testing of binocular cameras.
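
    The pass/fail criterion above reduces to a few lines of array arithmetic; the sketch below assumes the two images are registered and that the 5% criterion is taken against an 8-bit full scale of 255, which the abstract does not state explicitly.

```python
import numpy as np

def imaging_consistency(left, right, full_scale=255.0):
    """Evaluate left/right imaging consistency from the grayscale
    difference D(x, y) between registered images of a uniform source
    (e.g. an integrating sphere). Returns (sigma, passed), where the
    pass criterion is a 3-sigma spread not exceeding 5% of full scale."""
    d = left.astype(np.float64) - right.astype(np.float64)
    sigma = d.std()
    return sigma, 3.0 * sigma <= 0.05 * full_scale
```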

  4. Applied learning-based color tone mapping for face recognition in video surveillance system

    Science.gov (United States)

    Yew, Chuu Tian; Suandi, Shahrel Azmin

    2012-04-01

    In this paper, we present an applied learning-based color tone mapping technique for video surveillance systems. The technique can be applied to both color and grayscale surveillance images. The basic idea is to learn the color or intensity statistics from a training dataset of photorealistic images of the candidates appearing in the surveillance images, and to remap the color or intensity of the input image so that its statistics match those of the training dataset. It is well known that differences among commercial surveillance camera models and the signal processing chipsets used by different manufacturers cause the color and intensity of images to differ from one another, creating additional challenges for face recognition in video surveillance systems. Using multi-class Support Vector Machines as the classifier on a publicly available video surveillance camera database, the SCface database, this approach is validated and compared with the results of a holistic approach on grayscale images. The results show that the technique is suitable for improving the color or intensity quality of video surveillance imagery for face recognition.
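
    Remapping an input image's intensity statistics onto those of a training set is essentially histogram matching. A minimal grayscale sketch follows; the quantile-mapping form shown here is a standard technique, not necessarily the exact mapping used in the paper.

```python
import numpy as np

def match_intensity(source, template_values):
    """Remap the intensities of a grayscale surveillance image so its
    cumulative distribution matches that of a training set.

    source: 2-D array; template_values: 1-D array of pixel values
    pooled from the photorealistic training images."""
    src = source.ravel()
    # empirical CDF position (quantile) of each source pixel
    s_sorted = np.sort(src)
    s_cdf = np.searchsorted(s_sorted, src, side='right') / src.size
    # map those quantiles onto the template distribution
    t_sorted = np.sort(np.asarray(template_values, dtype=np.float64).ravel())
    t_quant = np.linspace(0.0, 1.0, t_sorted.size)
    matched = np.interp(s_cdf, t_quant, t_sorted)
    return matched.reshape(source.shape)
```

Matching a dark camera's output to a brighter training distribution lifts its intensities into the training range while preserving the rank order of pixels.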

  5. SAR and Infrared Image Fusion in Complex Contourlet Domain Based on Joint Sparse Representation

    Directory of Open Access Journals (Sweden)

    Wu Yiquan

    2017-08-01

    Full Text Available To address the problems of the large grayscale difference between infrared and Synthetic Aperture Radar (SAR) images and of their fusion image not being suited to human visual perception, we propose a fusion method for SAR and infrared images in the complex contourlet domain based on joint sparse representation. First, we perform complex contourlet decomposition of the infrared and SAR images. Then, we employ the K-Singular Value Decomposition (K-SVD) method to obtain an over-complete dictionary of the low-frequency components of the two source images. Using a joint sparse representation model, we then generate a joint dictionary. We obtain the sparse representation coefficients of the low-frequency components of the source images in the joint dictionary by the Orthogonal Matching Pursuit (OMP) method and select them using the selection maximization strategy. We then reconstruct these components to obtain the fused low-frequency components, and fuse the high-frequency components using two criteria: the coefficient of visual sensitivity and the degree of energy matching. Finally, we obtain the fusion image by the inverse complex contourlet transform. Compared with three classical fusion methods and recently presented fusion methods, e.g., one based on the Non-Subsampled Contourlet Transform (NSCT) and another based on sparse representation, the method proposed in this paper can effectively highlight the salient features of the two source images and inherit their information to the greatest extent.
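
    The OMP step used above to compute the sparse coefficients can be sketched in a few lines. This is a textbook implementation over a generic dictionary, not the authors' joint-dictionary pipeline, and it assumes the dictionary columns have unit norm.

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Orthogonal Matching Pursuit: greedily pick the dictionary atom
    (column of D) most correlated with the residual, then re-fit all
    selected coefficients by least squares at each step."""
    residual = x.astype(np.float64).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # least-squares fit on the current support
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    out = np.zeros(D.shape[1])
    out[support] = coef
    return out
```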

  6. Automated detection of a prostate Ni-Ti stent in electronic portal images.

    Science.gov (United States)

    Carl, Jesper; Nielsen, Henning; Nielsen, Jane; Lund, Bente; Larsen, Erik Hoejkjaer

    2006-12-01

    Planning target volumes (PTV) in fractionated radiotherapy still have to be outlined with wide margins to the clinical target volume due to uncertainties arising from daily shifts of the prostate position. A recently proposed method of visualizing the prostate is based on insertion of a thermo-expandable Ni-Ti stent. The current study proposes a new algorithm for automated detection of the Ni-Ti stent in electronic portal images, exploiting the stent's cylindrical shape and fixed diameter. The automated method uses line enhancement combined with a grayscale morphology operation that looks for enhanced pixels separated by a distance equal to the diameter of the stent. The images in this study are all from prostate cancer patients treated with radiotherapy in a previous study. Images of a stent inserted in a humanoid phantom demonstrated a localization accuracy of 0.4-0.7 mm, which equals the pixel size in the image. The automated detection of the stent was compared to manual detection in 71 pairs of orthogonal images taken in nine patients. The algorithm was successful in 67 of 71 pairs of images. The method is fast, has a high success rate and good accuracy, and has potential for unsupervised localization of the prostate before radiotherapy, which would enable automated repositioning before treatment and allow the use of very tight PTV margins.

  7. Automated detection of a prostate Ni-Ti stent in electronic portal images

    International Nuclear Information System (INIS)

    Carl, Jesper; Nielsen, Henning; Nielsen, Jane; Lund, Bente; Larsen, Erik Hoejkjaer

    2006-01-01

    Planning target volumes (PTV) in fractionated radiotherapy still have to be outlined with wide margins to the clinical target volume due to uncertainties arising from daily shifts of the prostate position. A recently proposed method of visualizing the prostate is based on insertion of a thermo-expandable Ni-Ti stent. The current study proposes a new algorithm for automated detection of the Ni-Ti stent in electronic portal images, exploiting the stent's cylindrical shape and fixed diameter. The automated method uses line enhancement combined with a grayscale morphology operation that looks for enhanced pixels separated by a distance equal to the diameter of the stent. The images in this study are all from prostate cancer patients treated with radiotherapy in a previous study. Images of a stent inserted in a humanoid phantom demonstrated a localization accuracy of 0.4-0.7 mm, which equals the pixel size in the image. The automated detection of the stent was compared to manual detection in 71 pairs of orthogonal images taken in nine patients. The algorithm was successful in 67 of 71 pairs of images. The method is fast, has a high success rate and good accuracy, and has potential for unsupervised localization of the prostate before radiotherapy, which would enable automated repositioning before treatment and allow the use of very tight PTV margins.

  8. Bilateral and pseudobilateral tonsilloliths: Three dimensional imaging with cone-beam computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Misirlioglu, Melda; Adisen, Mehmet Zahit; Yardimci, Selmi [Dept. of Oral and Maxillofacial Radiology, Faculty of Dentistry, Kirikkale University, Kirikkale (Turkey); Nalcaci, Rana [Dept. of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara (Turkey)

    2013-09-15

    Tonsilloliths are calcifications found in the crypts of the palatal tonsils and can be detected on routine panoramic examinations. This study was performed to highlight the benefits of cone-beam computed tomography (CBCT) in the diagnosis of tonsilloliths appearing bilaterally on panoramic radiographs. The sample group consisted of 7 patients who had bilateral radiopaque lesions at the area of the ascending ramus on panoramic radiographs. CBCT images for every patient were obtained from both sides of the jaw to determine the exact locations of the lesions and to rule out other calcifications. The calcifications were evaluated on the CBCT images using Ez3D2009 software. Additionally, the obtained images in DICOM format were transferred to ITK SNAP 2.4.0 software for semiautomatic segmentation. Segmentation was performed using contrast differences between the soft tissues and calcifications on grayscale images, and the volumes in mm³ of the segmented three-dimensional models were obtained. CBCT scans revealed that what appeared on panoramic radiographs as bilateral images were in fact unilateral lesions in 2 cases. The total volume of the calcifications ranged from 7.92 to 302.5 mm³. The patients with bilaterally multiple and large calcifications were found to be symptomatic. These cases provide evidence that tonsilloliths should be considered in the differential diagnosis of radiopaque masses involving the mandibular ramus, and they highlight the need for a CBCT scan to differentiate pseudo- or ghost images from true bilateral pathologies.

  9. Clinical use of intracoronary imaging. Part 1: guidance and optimization of coronary interventions. An expert consensus document of the European Association of Percutaneous Cardiovascular Interventions: Endorsed by the Chinese Society of Cardiology.

    Science.gov (United States)

    Räber, Lorenz; Mintz, Gary S; Koskinas, Konstantinos C; Johnson, Thomas W; Holm, Niels R; Onuma, Yoshinubo; Radu, Maria D; Joner, Michael; Yu, Bo; Jia, Haibo; Menevau, Nicolas; de la Torre Hernandez, Jose M; Escaned, Javier; Hill, Jonathan; Prati, Francesco; Colombo, Antonio; di Mario, Carlo; Regar, Evelyn; Capodanno, Davide; Wijns, William; Byrne, Robert A; Guagliumi, Giulio

    2018-05-22

    This Consensus Document is the first of two reports summarizing the views of an expert panel organized by the European Association of Percutaneous Cardiovascular Interventions (EAPCI) on the clinical use of intracoronary imaging including intravascular ultrasound (IVUS) and optical coherence tomography (OCT). The first document appraises the role of intracoronary imaging to guide percutaneous coronary interventions (PCIs) in clinical practice. Current evidence regarding the impact of intracoronary imaging guidance on cardiovascular outcomes is summarized, and patients or lesions most likely to derive clinical benefit from an imaging-guided intervention are identified. The relevance of the use of IVUS or OCT prior to PCI for optimizing stent sizing (stent length and diameter) and planning the procedural strategy is discussed. Regarding post-implantation imaging, the consensus group recommends key parameters that characterize an optimal PCI result and provides cut-offs to guide corrective measures and optimize the stenting result. Moreover, routine performance of intracoronary imaging in patients with stent failure (restenosis or stent thrombosis) is recommended. Finally, strengths and limitations of IVUS and OCT for guiding PCI and assessing stent failures and areas that warrant further research are critically discussed.

  10. A novel concept for a reorganised picture and data documentation

    International Nuclear Information System (INIS)

    Spitz, J.

    1989-01-01

    The described optimised procedure for the documentation of results of diagnostic imaging methods has been developed in line with the technical improvements now available in video imagers, gamma camera systems, developing systems, and electronic control. The procedure basically relies on the use of a video imager that is directly connected to a developing system. The various video signals of the imaging devices are controlled to a uniform level. A multitask control unit selects the video signals and puts them through to the video imager on a simple push-button command from the workstation. The processed image is automatically put out for subsequent evaluation. (orig./HP)

  11. Feasibility Study of Low-Cost Image-Based Heritage Documentation in Nepal

    Science.gov (United States)

    Dhonju, H. K.; Xiao, W.; Sarhosis, V.; Mills, J. P.; Wilkinson, S.; Wang, Z.; Thapa, L.; Panday, U. S.

    2017-02-01

    Cultural heritage structural documentation is of great importance in terms of historical preservation, tourism, and educational and spiritual values. Cultural heritage across the world, and in Nepal in particular, is at risk from various natural hazards (e.g., earthquakes, flooding, and rainfall), poor maintenance and preservation, and even human destruction. This paper evaluates the feasibility of low-cost photogrammetric modelling of cultural heritage sites and explores the practicality of using photogrammetry in Nepal. The full pipeline of 3D modelling for heritage documentation and conservation, including visualisation, reconstruction, and structural analysis, is proposed. In addition, crowdsourcing is discussed as an increasingly prominent method of data collection.

  12. Identification needs in developing, documenting, and indexing WSDOT photographs : research report, February 2010.

    Science.gov (United States)

    2010-02-01

    Over time, the Department of Transportation has accumulated image collections, which document important aspects of the transportation infrastructure in the Pacific Northwest, project status and construction details. These images range from paper ...

  13. LFNet: A Novel Bidirectional Recurrent Convolutional Neural Network for Light-Field Image Super-Resolution.

    Science.gov (United States)

    Wang, Yunlong; Liu, Fei; Zhang, Kunbo; Hou, Guangqi; Sun, Zhenan; Tan, Tieniu

    2018-09-01

    The low spatial resolution of light-field images poses significant difficulties in exploiting their advantages. To mitigate the dependency on accurate depth or disparity information as priors for light-field image super-resolution, we propose an implicit multi-scale fusion scheme that accumulates contextual information from multiple scales for super-resolution reconstruction. The multi-scale fusion scheme is then incorporated into a bidirectional recurrent convolutional neural network, which aims to iteratively model spatial relations between horizontally or vertically adjacent sub-aperture images of the light-field data. Within the network, the recurrent convolutions are modified to be more effective and flexible in modeling the spatial correlations between neighboring views. A horizontal sub-network and a vertical sub-network of the same structure are ensembled for final outputs via stacked generalization. Experimental results on synthetic and real-world data sets demonstrate that the proposed method outperforms other state-of-the-art methods by a large margin in peak signal-to-noise ratio and gray-scale structural similarity indexes, and also achieves superior quality for human visual systems. Furthermore, the proposed method can enhance the performance of light-field applications such as depth estimation.
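
    Of the two metrics reported above, PSNR has a particularly compact definition; a sketch follows, assuming 8-bit images with a peak value of 255 (the data range used by the authors is not stated in the abstract).

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between a ground-truth image
    and a super-resolved reconstruction."""
    mse = np.mean((reference.astype(np.float64) -
                   reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```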

  14. Document clustering methods, document cluster label disambiguation methods, document clustering apparatuses, and articles of manufacture

    Science.gov (United States)

    Sanfilippo, Antonio [Richland, WA; Calapristi, Augustin J [West Richland, WA; Crow, Vernon L [Richland, WA; Hetzler, Elizabeth G [Kennewick, WA; Turner, Alan E [Kennewick, WA

    2009-12-22

    Document clustering methods, document cluster label disambiguation methods, document clustering apparatuses, and articles of manufacture are described. In one aspect, a document clustering method includes providing a document set comprising a plurality of documents, providing a cluster comprising a subset of the documents of the document set, using a plurality of terms of the documents, providing a cluster label indicative of subject matter content of the documents of the cluster, wherein the cluster label comprises a plurality of word senses, and selecting one of the word senses of the cluster label.

  15. Oblique aerial images and their use in cultural heritage documentation

    DEFF Research Database (Denmark)

    Höhle, Joachim

    2013-01-01

    on automatically derived point clouds of high density. Each point will be supplemented with colour and other attributes. The problems experienced in these processes and the solutions to these problems are presented. The applied tools are a combination of professional tools, free software, and of own software...... developments. Special attention is given to the quality of input images. Investigations are carried out on edges in the images. The combination of oblique and nadir images enables new possibilities in the processing. The use of the near-infrared channel besides the red, green, and blue channel of the applied...

  16. The role and design of screen images in software documentation.

    NARCIS (Netherlands)

    van der Meij, Hans

    2000-01-01

    Software documentation for the novice user typically must try to achieve at least three goals: to support basic knowledge and skills development; to prevent or support the handling of mistakes, and to support the joint handling of manual, input device and screen. This paper concentrates on the

  17. Automated segmentation and isolation of touching cell nuclei in cytopathology smear images of pleural effusion using distance transform watershed method

    Science.gov (United States)

    Win, Khin Yadanar; Choomchuay, Somsak; Hamamoto, Kazuhiko

    2017-06-01

    The automated segmentation of cell nuclei is an essential stage in the quantitative image analysis of cell nuclei extracted from smear cytology images of pleural fluid. Cell nuclei can indicate cancer, as their characteristics are associated with cell proliferation and malignancy in terms of size, shape, and stain color. Nevertheless, automatic nuclei segmentation has remained challenging due to artifacts caused by slide preparation and by nuclei heterogeneity, such as poor contrast, inconsistent staining, cell variation, and overlapping cells. In this paper, we propose a watershed-based method capable of segmenting the nuclei of a variety of cells from cytology pleural fluid smear images. First, the original image is preprocessed by converting it to grayscale and enhancing it with intensity adjustment and histogram equalization. Next, the cell nuclei are segmented into a binary image using Otsu thresholding. Undesirable artifacts are eliminated using morphological operations. Finally, the distance-transform-based watershed method is applied to isolate touching and overlapping cell nuclei. The proposed method was tested on 25 Papanicolaou (Pap) stained pleural fluid images and achieved an accuracy of 92%. The method is relatively simple, and the results are very promising.
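
    The thresholding and splitting steps above can be sketched with standard tools. This illustration implements Otsu's threshold from its histogram definition and uses scipy's IFT watershed on the inverted distance transform; the marker-detection parameter (a 7-pixel peak window) is chosen for the example rather than taken from the paper.

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(gray):
    """Otsu's threshold for a uint8 grayscale image: pick the level
    that maximizes the between-class variance of the histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    w = np.cumsum(p)                    # class-0 probability per level
    mu = np.cumsum(p * np.arange(256))  # class-0 cumulative mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu[-1] * w - mu) ** 2 / (w * (1.0 - w))
    return int(np.nanargmax(sigma_b))

def split_touching_nuclei(binary):
    """Isolate touching nuclei with a distance-transform watershed:
    peaks of the distance map seed the markers, and the watershed
    floods the inverted map so ridges become boundaries."""
    dist = ndimage.distance_transform_edt(binary)
    peaks = (dist == ndimage.maximum_filter(dist, size=7)) & binary
    markers, _ = ndimage.label(peaks)
    markers = markers.astype(np.int16)
    markers[~binary] = -1  # background marker
    inv = (dist.max() - dist).astype(np.uint16)
    labels = ndimage.watershed_ift(inv, markers).astype(np.int32)
    labels[labels < 0] = 0
    return labels
```

Two overlapping disks, for example, come out of the watershed as two distinct positive labels rather than one merged blob.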

  18. Switching non-local median filter

    Science.gov (United States)

    Matsuoka, Jyohei; Koga, Takanori; Suetake, Noriaki; Uchino, Eiji

    2015-06-01

    This paper describes a novel image filtering method for the removal of random-valued impulse noise superimposed on grayscale images. It is well known that switching-type median filters are effective for impulse noise removal. In this paper, we propose a more sophisticated switching-type impulse noise removal method in terms of detail-preserving performance. Specifically, the noise detector of the proposed method finds noise-corrupted pixels by focusing on the difference between the value of a pixel of interest (POI) and the median of its neighboring pixel values, and on the POI's tendency to be isolated from the surrounding pixels. The removal of the detected noise is then performed by the newly proposed median filter based on non-local processing, which has superior detail-preservation capability compared to the conventional median filter. The effectiveness and validity of the proposed method are verified by experiments on natural grayscale images.
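
    The switching idea above (detect first, then filter only the flagged pixels) can be sketched as follows. For simplicity, the replacement step here uses the ordinary local median rather than the paper's non-local median, and the detection threshold is an arbitrary choice for the example; the isolation-tendency test is also omitted.

```python
import numpy as np
from scipy import ndimage

def switching_median(img, threshold=40):
    """Switching median filter for random-valued impulse noise: a pixel
    is declared noisy when it deviates strongly from the median of its
    3x3 neighborhood, and only those pixels are replaced (here by that
    local median; the paper substitutes a detail-preserving non-local
    median at this step)."""
    med = ndimage.median_filter(img.astype(np.float64), size=3)
    noisy = np.abs(img - med) > threshold
    out = img.astype(np.float64).copy()
    out[noisy] = med[noisy]
    return out.astype(img.dtype), noisy
```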

  19. Classification of natural circulation two-phase flow patterns using fuzzy inference on image analysis

    International Nuclear Information System (INIS)

    Mesquita, R.N. de; Masotti, P.H.F.; Penha, R.M.L.; Andrade, D.A.; Sabundjian, G.; Torres, W.M.

    2012-01-01

    Highlights: ► A fuzzy classification system for two-phase flow instability patterns is developed. ► Flow patterns are classified based on images of natural circulation experiments. ► Fuzzy inference is optimized to use single grayscale profiles as input. - Abstract: Two-phase flow in the natural circulation phenomenon has been an important theme in recent studies related to nuclear reactor designs. The accuracy of heat transfer estimation has been improved with new models that require precise prediction of flow pattern transitions. In this work, visualization of natural circulation cycles is used to study two-phase flow patterns associated with phase transients and static instabilities of flow. A Fuzzy Flow-type Classification System (FFCS) was developed to classify these patterns based only on features extracted from images. Image acquisition and temperature measurements were performed simultaneously. Experiments in the natural circulation facility were adjusted to generate a series of characteristic two-phase flow instability periodic cycles. The facility is composed of a loop of glass tubes, a heat source using electrical heaters, a cold source using a helicoidal heat exchanger, a visualization section, and thermocouples positioned over different loop sections. The instability cycle period is estimated based on temperature measurements associated with the detection of a flow transition image pattern. The FFCS shows good results provided that adequate image acquisition parameters and pre-processing adjustments are used.

  20. Automatic crack detection and classification method for subway tunnel safety monitoring.

    Science.gov (United States)

    Zhang, Wenyu; Zhang, Zhenjiang; Qi, Dapeng; Liu, Yun

    2014-10-16

    Cracks are an important indicator reflecting the safety status of infrastructures. This paper presents an automatic crack detection and classification methodology for subway tunnel safety monitoring. With the application of high-speed complementary metal-oxide-semiconductor (CMOS) industrial cameras, the tunnel surface can be captured and stored in digital images. In the next step, local dark regions with potential crack defects are segmented from the original gray-scale images by utilizing morphological image processing techniques and thresholding operations. In the feature extraction process, we present a distance-histogram-based shape descriptor that effectively describes the spatial shape difference between cracks and other irrelevant objects. Along with other features, the classification results successfully remove over 90% of misidentified objects. Also, compared with the original gray-scale images, over 90% of the crack length is preserved in the final output binary images. The proposed approach was tested on the safety monitoring of Beijing Subway Line 1. The experimental results revealed the rules of parameter settings and also proved that the proposed approach is effective and efficient for automatic crack detection and classification.
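
The "local dark region" segmentation step can be illustrated with a morphological black-hat (closing residue) followed by thresholding; the structuring-element size, threshold, and synthetic crack image below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy import ndimage as ndi

def dark_region_candidates(gray, size=9, thresh=30):
    """Black-hat transform: grey closing removes dark structures thinner than
    the structuring element, so the closing residue highlights local dark
    regions (crack candidates), which are then thresholded."""
    closed = ndi.grey_closing(gray, size=(size, size))
    blackhat = closed.astype(int) - gray.astype(int)
    return blackhat > thresh

# synthetic tunnel surface: mid-grey background with one thin dark "crack"
img = np.full((80, 80), 120, np.uint8)
img[40, 10:70] = 20
mask = dark_region_candidates(img)
print(int(mask.sum()))  # → 60 (the crack pixels, and nothing else)
```

In a full system this candidate map would then feed the shape descriptor and classifier described above to reject non-crack dark objects.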

  1. Automatic Crack Detection and Classification Method for Subway Tunnel Safety Monitoring

    Directory of Open Access Journals (Sweden)

    Wenyu Zhang

    2014-10-01

    Full Text Available Cracks are an important indicator reflecting the safety status of infrastructures. This paper presents an automatic crack detection and classification methodology for subway tunnel safety monitoring. With the application of high-speed complementary metal-oxide-semiconductor (CMOS) industrial cameras, the tunnel surface can be captured and stored in digital images. In the next step, local dark regions with potential crack defects are segmented from the original gray-scale images by utilizing morphological image processing techniques and thresholding operations. In the feature extraction process, we present a distance-histogram-based shape descriptor that effectively describes the spatial shape difference between cracks and other irrelevant objects. Along with other features, the classification results successfully remove over 90% of misidentified objects. Also, compared with the original gray-scale images, over 90% of the crack length is preserved in the final output binary images. The proposed approach was tested on the safety monitoring of Beijing Subway Line 1. The experimental results revealed the rules of parameter settings and also proved that the proposed approach is effective and efficient for automatic crack detection and classification.

  2. New algorithm for detecting smaller retinal blood vessels in fundus images

    Science.gov (United States)

    LeAnder, Robert; Bidari, Praveen I.; Mohammed, Tauseef A.; Das, Moumita; Umbaugh, Scott E.

    2010-03-01

    About 4.1 million Americans suffer from diabetic retinopathy. To help automatically diagnose various stages of the disease, a new blood-vessel-segmentation algorithm based on spatial high-pass filtering was developed to automatically segment blood vessels, including the smaller ones, with low noise. Methods: Image database: Forty 584 x 565-pixel images were collected from the DRIVE image database. Preprocessing: Green-band extraction was used to obtain better contrast, which facilitated better visualization of retinal blood vessels. A spatial high-pass filter of mask size 11 was applied. A histogram stretch was performed to enhance contrast. A median filter was applied to mitigate noise. At this point, the gray-scale image was converted to a binary image using a binary thresholding operation. Then, a NOT operation was performed by gray-level value inversion between 0 and 255. Postprocessing: The resulting image was AND-ed with its corresponding ring mask to remove the outer-ring (lens-edge) artifact. At this point, the above algorithm steps had extracted most of the major and minor vessels, with some intersections and bifurcations missing. Vessel segments were reintegrated using the Hough transform. Results: After applying the Hough transform, both the average peak SNR and the RMS error improved by 10%. Pratt's Figure of Merit (PFM) decreased by 6%. Those averages were better than [1] by 10-30%. Conclusions: The new algorithm successfully preserved the details of smaller blood vessels and should prove successful as a segmentation step for automatically identifying diseases that affect retinal blood vessels.
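
The preprocessing chain described in Methods (green band, spatial high-pass, histogram stretch, median filter, binary threshold, NOT) can be sketched as below. The Hough-transform reintegration and the ring-mask AND are omitted; the mask size 11 is the paper's, while the threshold and the synthetic test image are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage as ndi

def segment_vessels(rgb, hp_size=11, thresh=127):
    """Green band -> spatial high-pass -> contrast stretch -> median filter
    -> binary threshold -> NOT, following the abstract's preprocessing chain.
    Vessels are darker than background, so they end up below the threshold
    and the final inversion makes them the foreground."""
    green = rgb[..., 1].astype(float)
    highpass = green - ndi.uniform_filter(green, hp_size)  # mask-size-11 high-pass
    lo, hi = highpass.min(), highpass.max()
    stretched = (highpass - lo) / (hi - lo + 1e-9) * 255   # histogram stretch
    smoothed = ndi.median_filter(stretched, size=3)        # noise mitigation
    binary = smoothed > thresh
    return ~binary                                         # NOT: vessels -> True

# synthetic fundus patch: uniform background with a dark 3-pixel-wide vessel
rgb = np.full((60, 60, 3), 150, np.uint8)
rgb[29:32, 5:55, 1] = 50
mask = segment_vessels(rgb)
print(mask[30, 30], mask[5, 5])  # → True False
```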

  3. Neural network post-processing of grayscale optical correlator

    Science.gov (United States)

    Lu, Thomas T.; Hughlett, Casey L.; Zhou, Hanying; Chao, Tien-Hsin; Hanan, Jay C.

    2005-01-01

    In this paper we present the use of a radial basis function neural network (RBFNN) as a post-processor to assist the optical correlator in identifying objects and rejecting false alarms. Image plane features near the correlation peaks are extracted and fed to the neural network for analysis. The approach is capable of handling a large number of object variations and filter sets. Preliminary experimental results are presented and the performance is analyzed.

  4. scikit-image: image processing in Python.

    Science.gov (United States)

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.

  5. scikit-image: image processing in Python

    Directory of Open Access Journals (Sweden)

    Stéfan van der Walt

    2014-06-01

    Full Text Available scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.
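
As a concrete taste of the library described in the two records above, a few lines of scikit-image suffice to threshold and label a grayscale image (assuming scikit-image is installed; `filters.threshold_otsu` and `measure.label` are part of its stable public API):

```python
import numpy as np
from skimage import filters, measure

# threshold a synthetic grayscale image with Otsu's method and count objects
image = np.zeros((50, 50))
image[10:20, 10:20] = 1.0
image[30:42, 30:42] = 1.0
t = filters.threshold_otsu(image)       # data-driven global threshold
labels = measure.label(image > t)       # connected-component labelling
print(int(labels.max()))  # → 2
```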

  6. Automatic detection of diabetic retinopathy features in ultra-wide field retinal images

    Science.gov (United States)

    Levenkova, Anastasia; Sowmya, Arcot; Kalloniatis, Michael; Ly, Angelica; Ho, Arthur

    2017-03-01

    Diabetic retinopathy (DR) is a major cause of irreversible vision loss. DR screening relies on retinal clinical signs (features). Opportunities for computer-aided DR feature detection have emerged with the development of Ultra-WideField (UWF) digital scanning laser technology. UWF imaging covers 82% greater retinal area (200°), against 45° in conventional cameras, allowing more clinically relevant retinopathy to be detected. UWF images also provide a high resolution of 3078 x 2702 pixels. Currently DR screening uses 7 overlapping conventional fundus images, and the UWF images provide similar results. However, in 40% of cases, more retinopathy was found by UWF outside the 7 ETDRS fields, and in 10% of cases, retinopathy was reclassified as more severe. This is because UWF imaging allows examination of both the central retina and more peripheral regions, with the latter implicated in DR. We have developed an algorithm for automatic recognition of DR features, including bright lesions (cotton wool spots and exudates) and dark lesions (microaneurysms and blot, dot and flame haemorrhages) in UWF images. The algorithm extracts features from grayscale (green "red-free" laser light) and colour-composite UWF images, including intensity, Histogram-of-Gradient and Local Binary Pattern features. Pixel-based classification is performed with three different classifiers. The main contribution is the automatic detection of DR features in the peripheral retina. The method is evaluated by leave-one-out cross-validation on 25 UWF retinal images with 167 bright lesions, and 61 other images with 1089 dark lesions. The SVM classifier performs best, with an AUC of 94.4% / 95.31% for bright / dark lesions.
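
One of the texture features mentioned, the Local Binary Pattern, is easy to sketch in plain NumPy. This 8-neighbour variant and the per-patch histogram feature are a simplified stand-in for the full feature set (intensity, Histogram-of-Gradient, LBP) and classifiers used in the paper.

```python
import numpy as np

def lbp8(gray):
    """Plain 8-neighbour local binary pattern computed with array shifts;
    border pixels are dropped for simplicity."""
    c = gray[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = gray[1 + dy:gray.shape[0] - 1 + dy, 1 + dx:gray.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(gray):
    """Normalised 256-bin LBP histogram, usable as a per-patch feature vector."""
    h = np.bincount(lbp8(gray).ravel(), minlength=256).astype(float)
    return h / h.sum()

rng = np.random.default_rng(1)
flat = np.full((32, 32), 100, np.uint8)                    # smooth, background-like
textured = rng.integers(0, 256, (32, 32), dtype=np.uint8)  # lesion-like texture
h_flat, h_tex = lbp_histogram(flat), lbp_histogram(textured)
print(h_flat[255], h_tex[255])  # flat patches concentrate in one LBP code
```

Feeding such per-pixel or per-patch histograms to a classifier (the paper's best was an SVM) is what turns the texture description into lesion detection.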

  7. Automatic registration of Iphone images to LASER point clouds of the urban structures using shape features

    Directory of Open Access Journals (Sweden)

    B. Sirmacek

    2013-10-01

    Full Text Available Fusion of 3D airborne laser (LIDAR) data and terrestrial optical imagery can be applied in 3D urban modeling and model updating. The most challenging aspect of the fusion procedure is registering the terrestrial optical images on the LIDAR point clouds. In this article, we propose an approach for registering these two different data types from different sensor sources. We use iPhone camera images, taken in front of the urban structure of interest by the application user, and high-resolution LIDAR point clouds acquired by an airborne laser sensor. After finding the photo capturing position and orientation from the iPhone photograph metafile, we automatically select the area of interest in the point cloud and transform it into a range image which has only grayscale intensity levels according to the distance from the image acquisition position. We use local features to register the iPhone image to the generated range image. In this article, we have applied the registration process based on local feature extraction and graph matching. Finally, the registration result is used for facade texture mapping on the 3D building surface mesh which is generated from the LIDAR point cloud. Our experimental results indicate possible usage of the proposed algorithm framework for 3D urban map updating and enhancing purposes.
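
The range-image step, encoding distance from the acquisition position as grayscale intensity, can be sketched as below. This uses a plain top-down XY binning rather than the paper's view-dependent projection, and the grid resolution and synthetic cloud are illustrative assumptions.

```python
import numpy as np

def range_image(points, cam_pos, shape=(64, 64)):
    """Grayscale range image: project each 3-D point onto an XY grid and
    store the nearest distance-to-camera per cell, scaled to 0..255.
    Empty cells (and cells at the minimum distance) render as black."""
    xy = points[:, :2]
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    span = np.maximum(maxs - mins, 1e-9)
    cols = ((xy[:, 0] - mins[0]) / span[0] * (shape[1] - 1)).astype(int)
    rows = ((xy[:, 1] - mins[1]) / span[1] * (shape[0] - 1)).astype(int)
    dist = np.linalg.norm(points - cam_pos, axis=1)
    img = np.full(shape, np.inf)
    np.minimum.at(img, (rows, cols), dist)   # nearest return wins per cell
    img[np.isinf(img)] = 0.0                 # empty cells -> black
    filled = img[img > 0]
    lo, hi = filled.min(), filled.max()
    scale = 255.0 / (hi - lo) if hi > lo else 0.0
    img = np.where(img > 0, (img - lo) * scale, 0.0)
    return img.astype(np.uint8)

rng = np.random.default_rng(2)
pts = rng.uniform(0.0, 10.0, size=(1000, 3))
img = range_image(pts, cam_pos=np.zeros(3))
print(img.shape, int(img.max()))
```

Local features (e.g. corner or blob detectors) can then be extracted from this grayscale image and matched against the photograph, as the article describes.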

  8. Content Documents Management

    Science.gov (United States)

    Muniz, R.; Hochstadt, J.; Boelke J.; Dalton, A.

    2011-01-01

    The Content Documents are created and managed by the System Software group within the Launch Control System (LCS) project. The System Software product group is led by the NASA Engineering Control and Data Systems branch (NEC3) at Kennedy Space Center. The team is working on creating Operating System Images (OSI) for different platforms (i.e., AIX, Linux, Solaris and Windows). Before an OSI can be created, the team must create a Content Document which provides the information of a workstation or server, with the list of all the software that is to be installed on it and also the set where the hardware belongs. This can be, for example, the LDS, the ADS or the FR-l. The objective of this project is to create a user-interface Web application that can manage the information of the Content Documents, with all the correct validations and filters for administrator purposes. For this project we used one of the best tools for agile Web development: Ruby on Rails. This tool helps pragmatic programmers develop Web applications with the Rails framework and the Ruby programming language. It is amazing to see how a student can learn about OOP features with the Ruby language, manage the user interface with HTML and CSS, create associations and queries with gems, manage databases and run a server with MySQL, run shell commands with the command prompt and create Web applications with Rails. All of this in a real-world project and in just fifteen weeks!

  9. Diagnostic imaging of cervical intraepithelial neoplasia based on hematoxylin and eosin fluorescence.

    Science.gov (United States)

    Castellanos, Mario R; Szerszen, Anita; Gundry, Stephen; Pirog, Edyta C; Maiman, Mitchell; Rajupet, Sritha; Gomez, John Paul; Davidov, Adi; Debata, Priya Ranjan; Banerjee, Probal; Fata, Jimmie E

    2015-07-25

    Pathological classification of cervical intraepithelial neoplasia (CIN) is problematic as it relies on subjective criteria. We developed an imaging method that uses spectroscopy to assess the fluorescent intensity of cervical biopsies derived directly from hematoxylin and eosin (H&E) stained tissues. Archived H&E slides were identified containing normal cervical tissue, CIN I, and CIN III cases, from a Community Hospital and an Academic Medical Center. Cases were obtained by consensus review of at least 2 senior pathologists. Images from H&E slides were captured first with bright field illumination and then with fluorescent illumination. We used a Zeiss Axio Observer Z1 microscope and an AxioVision 4.6.3-AP1 camera at excitation wavelength of 450-490 nm with emission captured at 515-565 nm. The 32-bit grayscale fluorescence images were used for image analysis. We reviewed 108 slides: 46 normal, 33 CIN I and 29 CIN III. Fluorescent intensity increased progressively in normal epithelial tissue as cells matured and advanced from the basal to superficial regions of the epithelium. In CIN I cases this change was less prominent as compared to normal. In high grade CIN lesions, there was a slight or no increase in fluorescent intensity. All groups examined were statistically different. Presently, there are no markers to help in classification of CIN I-III lesions. Our imaging method may complement standard H&E pathological review and provide objective criteria to support the CIN diagnosis.

  10. Towards Mobile OCR: How To Take a Good Picture of a Document Without Sight.

    Science.gov (United States)

    Cutter, Michael; Manduchi, Roberto

    The advent of mobile OCR (optical character recognition) applications on regular smartphones holds great promise for enabling blind people to access printed information. Unfortunately, these systems suffer from a problem: in order for OCR output to be meaningful, a well-framed image of the document needs to be taken, something that is difficult to do without sight. This contribution presents an experimental investigation of how blind people position and orient a camera phone while acquiring document images. We developed experimental software to investigate whether verbal guidance aids in the acquisition of OCR-readable images without sight. We report on our participants' feedback and performance before and after assistance from our software.

  11. Electronic Document Management Systems: Where Are They Today?

    Science.gov (United States)

    Koulopoulos, Thomas M.; Frappaolo, Carl

    1993-01-01

    Discusses developments in document management systems based on a survey of over 400 corporations and government agencies. Text retrieval and imaging markets, architecture and integration, purchasing plans, and vendor market leaders are covered. Five graphs present data on user preferences for improvements. A sidebar article reviews the development…

  12. Accuracy of Gray-scale and Three-dimensional Power Doppler ...

    African Journals Online (AJOL)

    and Gynecology, Ahmadi Kuwait Oil Company Hospital, Ahmadi, Kuwait ... Subjects and Methods: Fifty pregnant women ≥28 weeks' gestation with suspected MAP were ... 3D power Doppler images were analyzed using virtual organ.

  13. Computer-aided pulmonary image analysis in small animal models

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Ziyue; Mansoor, Awais; Mollura, Daniel J. [Center for Infectious Disease Imaging (CIDI), Radiology and Imaging Sciences, National Institutes of Health (NIH), Bethesda, Maryland 32892 (United States); Bagci, Ulas, E-mail: ulasbagci@gmail.com [Center for Research in Computer Vision (CRCV), University of Central Florida (UCF), Orlando, Florida 32816 (United States); Kramer-Marek, Gabriela [The Institute of Cancer Research, London SW7 3RP (United Kingdom); Luna, Brian [Microfluidic Laboratory Automation, University of California-Irvine, Irvine, California 92697-2715 (United States); Kubler, Andre [Department of Medicine, Imperial College London, London SW7 2AZ (United Kingdom); Dey, Bappaditya; Jain, Sanjay [Center for Tuberculosis Research, Johns Hopkins University School of Medicine, Baltimore, Maryland 21231 (United States); Foster, Brent [Department of Biomedical Engineering, University of California-Davis, Davis, California 95817 (United States); Papadakis, Georgios Z. [Radiology and Imaging Sciences, National Institutes of Health (NIH), Bethesda, Maryland 32892 (United States); Camp, Jeremy V. [Department of Microbiology and Immunology, University of Louisville, Louisville, Kentucky 40202 (United States); Jonsson, Colleen B. [National Institute for Mathematical and Biological Synthesis, University of Tennessee, Knoxville, Tennessee 37996 (United States); Bishai, William R. [Howard Hughes Medical Institute, Chevy Chase, Maryland 20815 and Center for Tuberculosis Research, Johns Hopkins University School of Medicine, Baltimore, Maryland 21231 (United States); Udupa, Jayaram K. [Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States)

    2015-07-15

    Purpose: To develop an automated pulmonary image analysis framework for infectious lung diseases in small animal models. Methods: The authors describe a novel pathological lung and airway segmentation method for small animals. The proposed framework includes identification of abnormal imaging patterns pertaining to infectious lung diseases. First, the authors’ system estimates an expected lung volume by utilizing a regression function between total lung capacity and approximated rib cage volume. A significant difference between the expected lung volume and the initial lung segmentation indicates the presence of severe pathology, and invokes a machine-learning-based abnormal imaging pattern detection system next. The final stage of the proposed framework is the automatic extraction of the airway tree, for which new affinity relationships within the fuzzy connectedness image segmentation framework are proposed by combining Hessian and gray-scale morphological reconstruction filters. Results: 133 CT scans were collected from four different studies encompassing a wide spectrum of pulmonary abnormalities pertaining to two commonly used small animal models (ferret and rabbit). Sensitivity and specificity were greater than 90% for pathological lung segmentation (average dice similarity coefficient > 0.9). While qualitative visual assessments of airway tree extraction were performed by the participating expert radiologists, for quantitative evaluation the authors validated the proposed airway extraction method by using the publicly available EXACT’09 data set. Conclusions: The authors developed a comprehensive computer-aided pulmonary image analysis framework for preclinical research applications. The proposed framework consists of automatic pathological lung segmentation and accurate airway tree extraction. The framework has high sensitivity and specificity; therefore, it can contribute to advances in preclinical research in pulmonary diseases.
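
The first stage of the framework, flagging severe pathology when the segmented lung volume falls well short of the volume expected from the rib cage, can be sketched as a simple linear regression; the numbers and the 25% tolerance below are illustrative assumptions, not the study's data.

```python
import numpy as np

# illustrative calibration pairs (not the study's data): approximated rib
# cage volume vs. total lung capacity for healthy reference animals, in ml
rib_cage = np.array([30.0, 40.0, 50.0, 60.0, 70.0])
lung_vol = np.array([12.0, 16.5, 20.0, 24.5, 28.0])
slope, intercept = np.polyfit(rib_cage, lung_vol, 1)  # expected-volume regression

def severe_pathology(measured_lung_vol, rib_cage_vol, tol=0.25):
    """Flag a scan when the segmented lung volume falls short of the
    regression-predicted volume by more than `tol` (placeholder fraction)."""
    expected = slope * rib_cage_vol + intercept
    return (expected - measured_lung_vol) / expected > tol

print(severe_pathology(10.0, 50.0), severe_pathology(19.5, 50.0))  # → True False
```

A positive flag is what triggers the machine-learning pattern detector in the full framework.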

  14. Quantitative evaluation of pairs and RS steganalysis

    Science.gov (United States)

    Ker, Andrew D.

    2004-06-01

    We give initial results from a new project which performs statistically accurate evaluation of the reliability of image steganalysis algorithms. The focus here is on the Pairs and RS methods, for detection of simple LSB steganography in grayscale bitmaps, due to Fridrich et al. Using libraries totalling around 30,000 images we have measured the performance of these methods and suggest changes which lead to significant improvements. Particular results from the project presented here include notes on the distribution of the RS statistic, the relative merits of different "masks" used in the RS algorithm, the effect on reliability when previously compressed cover images are used, and the effect of repeating steganalysis on the transposed image. We also discuss improvements to the Pairs algorithm, restricting it to spatially close pairs of pixels, which leads to a substantial performance improvement, even to the extent of surpassing the RS statistic which was previously thought superior for grayscale images. We also describe some of the questions for a general methodology of evaluation of steganalysis, and potential pitfalls caused by the differences between uncompressed, compressed, and resampled cover images.
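
The histogram intuition behind Pairs/RS-style detectors, namely that LSB replacement tends to equalise the counts of each (2k, 2k+1) value pair, can be shown in a few lines. This toy statistic is not Fridrich's actual RS or Pairs algorithm, and the cover below is deliberately quantised to even values so the effect is visible.

```python
import numpy as np

def pov_imbalance(img):
    """Total normalised imbalance between the two halves of each
    pairs-of-values histogram bin pair (2k, 2k+1)."""
    h = np.bincount(img.ravel(), minlength=256).astype(float)
    return np.abs(h[0::2] - h[1::2]).sum() / h.sum()

rng = np.random.default_rng(4)
# toy cover: coarsely quantised so every value is even (maximal pair imbalance)
cover = (rng.normal(128, 20, (128, 128)).clip(0, 255).astype(np.uint8)) & 0xFE
# full-rate LSB embedding: overwrite every LSB with a random message bit
stego = cover | rng.integers(0, 2, cover.shape, dtype=np.uint8)
c_imb, s_imb = pov_imbalance(cover), pov_imbalance(stego)
print(round(float(c_imb), 3), round(float(s_imb), 3))  # embedding drives it down
```

Real covers have far subtler pair structure, which is why the paper's careful evaluation over ~30,000 images, and the sensitivity to prior JPEG compression it documents, matter.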

  15. Outpatients flow management and ophthalmic electronic medical records system in university hospital using Yahgee Document View.

    Science.gov (United States)

    Matsuo, Toshihiko; Gochi, Akira; Hirakawa, Tsuyoshi; Ito, Tadashi; Kohno, Yoshihisa

    2010-10-01

    General electronic medical records systems remain insufficient for ophthalmology outpatient clinics from the viewpoint of dealing with many ophthalmic examinations and images in a large number of patients. Filing systems for documents and images by Yahgee Document View (Yahgee, Inc.) were introduced on the platform of general electronic medical records system (Fujitsu, Inc.). Outpatients flow management system and electronic medical records system for ophthalmology were constructed. All images from ophthalmic appliances were transported to Yahgee Image by the MaxFile gateway system (P4 Medic, Inc.). The flow of outpatients going through examinations such as visual acuity testing were monitored by the list "Ophthalmology Outpatients List" by Yahgee Workflow in addition to the list "Patients Reception List" by Fujitsu. Patients' identification number was scanned with bar code readers attached to ophthalmic appliances. Dual monitors were placed in doctors' rooms to show Fujitsu Medical Records on the left-hand monitor and ophthalmic charts of Yahgee Document on the right-hand monitor. The data of manually-inputted visual acuity, automatically-exported autorefractometry and non-contact tonometry on a new template, MaxFile ED, were again automatically transported to designated boxes on ophthalmic charts of Yahgee Document. Images such as fundus photographs, fluorescein angiograms, optical coherence tomographic and ultrasound scans were viewed by Yahgee Image, and were copy-and-pasted to assigned boxes on the ophthalmic charts. Ordering such as appointments, drug prescription, fees and diagnoses input, central laboratory tests, surgical theater and ward room reservations were placed by functions of the Fujitsu electronic medical records system. 
The combination of the Fujitsu electronic medical records and Yahgee Document View systems enabled the University Hospital to examine the same number of outpatients as prior to the implementation of the computerized filing system.

  16. Segmentation-driven compound document coding based on H.264/AVC-INTRA.

    Science.gov (United States)

    Zaghetto, Alexandre; de Queiroz, Ricardo L

    2007-07-01

    In this paper, we explore H.264/AVC operating in intraframe mode to compress a mixed image, i.e., one composed of text, graphics, and pictures. Even though mixed-content (compound) documents usually require the use of multiple compressors, we apply a single compressor for both text and pictures. For that, distortion is treated differently between text and picture regions. Our approach is to use a segmentation-driven adaptation strategy to change the H.264/AVC quantization parameter on a macroblock-by-macroblock basis, i.e., we divert bits from pictorial regions to text in order to keep text edges sharp. We show results of a segmentation-driven quantizer adaptation method applied to compress documents. Our reconstructed images have better text sharpness compared to straight unadapted coding, at negligible visual losses in pictorial regions. Our results also highlight the fact that H.264/AVC-INTRA outperforms coders such as JPEG-2000 as a single coder for compound images.
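
The macroblock-level idea can be sketched as computing a QP map from a crude text/picture segmentation. Classifying by block variance, and the QP values and threshold, are hypothetical illustrations; the paper's actual segmentation and its H.264/AVC encoder integration are not reproduced here.

```python
import numpy as np

def qp_map(gray, mb=16, qp_text=22, qp_picture=34, var_thresh=1500.0):
    """Per-macroblock QP assignment: blocks whose intensity variance is high
    (sharp strokes on a plain background, a crude text cue) get a lower QP
    so the encoder spends more bits keeping their edges sharp."""
    h, w = gray.shape
    H, W = h // mb, w // mb
    blocks = gray[:H * mb, :W * mb].reshape(H, mb, W, mb).astype(float)
    var = blocks.var(axis=(1, 3))          # one variance per 16x16 macroblock
    return np.where(var > var_thresh, qp_text, qp_picture)

# synthetic compound page: flat "picture" area plus high-contrast "text" strokes
img = np.full((64, 64), 200, np.uint8)
img[8:24:4, 0:32] = 0
qmap = qp_map(img)
print(qmap)
```

An encoder would then apply this map when quantizing each macroblock, which is the bit-diversion mechanism the abstract describes.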

  17. Omega-3 chicken egg detection system using a mobile-based image processing segmentation method

    Science.gov (United States)

    Nurhayati, Oky Dwi; Kurniawan Teguh, M.; Cintya Amalia, P.

    2017-02-01

    An Omega-3 chicken egg is a chicken egg produced through food engineering technology: it is laid by hens fed a diet high in omega-3 fatty acids, and it contains roughly fifteen times more omega-3 than a Leghorn egg. Visually, its shell has the same shape and colour as a Leghorn egg's. The two can be distinguished by breaking the shell and testing the yolk's nutrient content in a laboratory, but such methods are neither effective nor efficient. Observing this problem, the purpose of this research is to build an application that detects omega-3 chicken eggs using mobile-based computer vision. The application was built with the OpenCV computer vision library for the Android operating system. The experiment required chicken egg images taken with an egg candling box; we used 60 omega-3 chicken eggs and Leghorn eggs as samples. Image acquisition was performed with an Android smartphone. We then applied several image processing steps: GrabCut segmentation, conversion of the RGB image to 8-bit grayscale, median filtering, P-tile segmentation, and morphological operations. The next step was feature extraction, computing the mean, variance, skewness, and kurtosis of each image. Finally, using these measurements, the chicken egg images were classified. The results showed that omega-3 chicken eggs and Leghorn eggs have different feature values, and the system achieves an accuracy of around 91%.
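
The feature-extraction step, computing the per-image mean, variance, skewness, and kurtosis of the grayscale intensities, is straightforward in NumPy; the sample patches below are synthetic stand-ins for candled egg images.

```python
import numpy as np

def intensity_moments(gray):
    """Mean, variance, skewness, and kurtosis of the grayscale intensities,
    the four per-image features used for classification above."""
    x = gray.astype(float).ravel()
    mu, var = x.mean(), x.var()
    sd = np.sqrt(var) + 1e-12              # guard for perfectly flat images
    skew = ((x - mu) ** 3).mean() / sd ** 3
    kurt = ((x - mu) ** 4).mean() / sd ** 4
    return mu, var, skew, kurt

rng = np.random.default_rng(3)
flat = np.full((32, 32), 100, np.uint8)                    # uniform-yolk stand-in
speckled = rng.integers(0, 256, (32, 32), dtype=np.uint8)  # textured stand-in
print(float(intensity_moments(flat)[0]), float(intensity_moments(flat)[1]))  # → 100.0 0.0
print(float(intensity_moments(speckled)[1]))  # large variance separates classes
```

Thresholds on such feature values (learned from labelled samples) would then separate the two egg classes.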

  18. BMC Ecology Image Competition 2016: the winning images.

    Science.gov (United States)

    Simundza, Julia; Palmer, Matthew; Settele, Josef; Jacobus, Luke M; Hughes, David P; Mazzi, Dominique; Blanchet, Simon

    2016-08-09

    The 2016 BMC Ecology Image Competition marked another celebration of the astounding biodiversity, natural beauty, and biological interactions documented by talented ecologists worldwide. For our fourth annual competition, we welcomed guest judge Dr. Matthew Palmer of Columbia University, who chose the winning image from over 140 entries. In this editorial, we highlight the award winning images along with a selection of highly commended honorable mentions.

  19. TU-B-19A-01: Image Registration II: TG132-Quality Assurance for Image Registration

    International Nuclear Information System (INIS)

    Brock, K; Mutic, S

    2014-01-01

    AAPM Task Group 132 was charged with a review of the current approaches and solutions for image registration in radiotherapy and with providing recommendations for quality assurance and quality control of these clinical processes. As the results of image registration are always used as the input of another process for planning or delivery, it is important for the user to understand and document the uncertainty associated with the algorithm in general and with the result of a specific registration. The recommendations of this task group, which at the time of abstract submission are being reviewed by the AAPM, include the following components. The user should understand the basic image registration techniques and methods of visualizing image fusion. The disclosure of the basic components of the image registration by commercial vendors is critical in this respect. The physicist should perform end-to-end tests of imaging, registration, and planning/treatment systems if image registration is performed on a stand-alone system. A comprehensive commissioning process should be performed and documented by the physicist prior to clinical use of the system. As documentation is important to the safe implementation of this process, a request and report system should be integrated into the clinical workflow. Finally, a patient-specific QA practice should be established for efficient evaluation of image registration results. The implementation of these recommendations will be described and illustrated during this educational session. Learning Objectives: Highlight the importance of understanding the image registration techniques used in the clinic. Describe the end-to-end tests needed for stand-alone registration systems. Illustrate a comprehensive commissioning program using both phantom data and clinical images. Describe a request and report system to ensure communication and documentation. Demonstrate a clinically efficient patient QA practice for evaluation of image registration results.

  20. Multi-spectral confocal microendoscope for in-vivo imaging

    Science.gov (United States)

    Rouse, Andrew Robert

    The concept of in-vivo multi-spectral confocal microscopy is introduced. A slit-scanning multi-spectral confocal microendoscope (MCME) was built to demonstrate the technique. The MCME employs a flexible fiber-optic catheter coupled to a custom-built slit-scan confocal microscope fitted with a custom-built imaging spectrometer. The catheter consists of a fiber-optic imaging bundle linked to a miniature objective and focus assembly. The design and performance of the miniature objective and focus assembly are discussed. The 3 mm diameter catheter may be used on its own or routed through the instrument channel of a commercial endoscope. The confocal nature of the system provides optical sectioning with 3 µm lateral resolution and 30 µm axial resolution. The prism-based multi-spectral detection assembly is typically configured to collect 30 spectral samples over the visible chromatic range. The spectral sampling rate varies from 4 nm/pixel at 490 nm to 8 nm/pixel at 660 nm, and the minimum resolvable wavelength difference varies from 7 nm to 18 nm over the same spectral range. Each of these characteristics is primarily dictated by the dispersive power of the prism. The MCME is designed to examine cellular structures during optical biopsy and to exploit the diagnostic information contained within the spectral domain. The primary applications for the system include diagnosis of disease in the gastro-intestinal tract and female reproductive system. Recent data from the grayscale imaging mode are presented. Preliminary multi-spectral results from phantoms, cell cultures, and excised human tissue are presented to demonstrate the potential of in-vivo multi-spectral imaging.

  1. A Proposal for Updated Standards of Photographic Documentation in Aesthetic Medicine.

    Science.gov (United States)

    Prantl, Lukas; Brandl, Dirk; Ceballos, Patricia

    2017-08-01

    In 1998, DiBernardo et al. published a very helpful standardization of comparative (before and after) photographic documentation. These standards prevail to this day. Although most of them are useful for objective documentation of aesthetic results, there are at least 3 reasons why an update is necessary at this time: First, DiBernardo et al. focused on the prevalent standards of medical photography at that time. From a modern perspective, these standards are antiquated and not always correct. Second, silver-based analog photography has mutated into digital photography. Digitalization offers virtually unlimited potential for image manipulation using a vast array of digital Apps and tools including, but not limited to, image editing software like Photoshop. Digitalization has given rise to new questions, particularly regarding appropriate use of editing techniques to maximize or increase objectivity. Third, we suggest changes to a very small number of their medical standards in the interest of obtaining a better or more objective documentation of aesthetic results. This article is structured into 3 sections and is intended as a new proposal for photographic and medical standards for the documentation of aesthetic interventions: 1. The photographic standards. 2. The medical standards. 3. Description of editing tools which should be used to increase objectivity.

  2. Microcomputer Software Engineering, Documentation and Evaluation

    Science.gov (United States)

    1981-03-31


  3. Coronal in vivo forward-imaging of rat brain morphology with an ultra-small optical coherence tomography fiber probe

    Science.gov (United States)

    Xie, Yijing; Bonin, Tim; Löffler, Susanne; Hüttmann, Gereon; Tronnier, Volker; Hofmann, Ulrich G.

    2013-02-01

    A well-established navigation method is one of the key conditions for successful brain surgery: it should be accurate, safe and operable in real time. Recent research shows that optical coherence tomography (OCT) is a potential solution for this application by providing high resolution and a small probe dimension. In this study a fiber-based spectral-domain OCT system was used, utilizing a super-luminescent diode with a center wavelength of 840 nm and providing 14.5 μm axial resolution. A composite 125 μm diameter detecting probe with a gradient index (GRIN) fiber fused to a single mode fiber was employed. Signals were reconstructed into grayscale images by horizontally aligning A-scans from the same trajectory at different depths. The reconstructed images can display brain morphology along the entire trajectory. For scans of typical white matter, the signals showed higher reflected light intensity with lower penetration depth, as well as a steeper attenuation rate, compared to scans typical of gray matter. Micro-structures such as axon bundles (70 μm) in the caudate nucleus are visible in the reconstructed images. This study explores the potential of OCT as a navigation modality in brain surgery.

  4. CNNs flag recognition preprocessing scheme based on gray scale stretching and local binary pattern

    Science.gov (United States)

    Gong, Qian; Qu, Zhiyi; Hao, Kun

    2017-07-01

    Flag is a rather special recognition target in image recognition because of its non-rigid features with location, scale and rotation characteristics. The location change can be handled well by the deep learning algorithm Convolutional Neural Networks (CNNs), but the scale and rotation changes are quite a challenge for CNNs. Since it has good rotation and gray-scale invariance, the local binary pattern (LBP) is combined with grayscale stretching as a CNN preprocessing step, which not only significantly improves the efficiency of flag recognition but also allows the recognition effect to be evaluated through ROC, accuracy, MSE and quality factor.
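The two-stage pretreatment the abstract describes (gray-scale stretching, then a local binary pattern transform) can be sketched in a few lines. The helper names are ours, and the LBP shown is the basic 3×3, 8-neighbour variant rather than the rotation-invariant form a production system would likely use:

```python
import numpy as np

def grayscale_stretch(img, out_min=0, out_max=255):
    """Linearly stretch intensities to the full output range."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:  # constant image: nothing to stretch
        return np.full_like(img, out_min)
    return (img - lo) / (hi - lo) * (out_max - out_min) + out_min

def lbp_3x3(img):
    """Basic 8-neighbour local binary pattern over interior pixels.

    Each neighbour >= centre contributes one bit, yielding codes 0..255.
    """
    img = img.astype(np.int32)
    c = img[1:-1, 1:-1]                       # centre pixels
    # neighbours in clockwise order starting at top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code
```

In the scheme described, the LBP map of the stretched image (rather than the raw photograph) would be what is fed to the CNN.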

  5. Essential issues in the design of shared document/image libraries

    Science.gov (United States)

    Gladney, Henry M.; Mantey, Patrick E.

    1990-08-01

    We consider what is needed to create electronic document libraries which mimic physical collections of books, papers, and other media. The quantitative measures of merit for personal workstations (cost, speed, size of volatile and persistent storage) will improve by at least an order of magnitude in the next decade. Every professional worker will be able to afford a very powerful machine, but databases and libraries are not really economical and useful unless they are shared. We therefore see a two-tier world emerging, in which custodians of information make it available to network-attached workstations. A client-server model is the natural description of this world. In collaboration with several state governments, we have considered what would be needed to replace paper-based record management for a dozen different applications. We find that a professional worker can anticipate most data needs and that (s)he is interested in each clump of data for a period of days to months. We further find that only a small fraction of any collection will be used in any period. Given expected bandwidths, data sizes, search times and costs, and other such parameters, an effective strategy to support user interaction is to bring large clumps from their sources, to transform them into convenient representations, and only then start whatever investigation is intended. A system-managed hierarchy of caches and archives is indicated. Each library is a combination of a catalog and a collection, and each stored item has a primary instance which is the standard by which the correctness of any copy is judged. Catalog records mostly refer to 1 to 3 stored items. Weighted by the number of bytes to be stored, immutable data dominate collections. These characteristics affect how consistency, currency, and access control of replicas distributed in the network should be managed. We present the large features of a design for network document/image library services. A prototype is being built for

  6. Cloning of neuraminidase (NA) gene and identification of its antiviral ...

    African Journals Online (AJOL)

    user

    2012-06-12

    Jun 12, 2012 ... pGEX-NA was transformed into E. coli DH5α, and cultured at LB. (Amp+) solid medium ... observed under fluorescence inverted microscope. Immunofluorescence ... Gel image software BandScan5.0 was used to do grayscale ...

  7. IMAGE Programming Guidelines

    Energy Technology Data Exchange (ETDEWEB)

    Stehfest, E; De Waal, L.

    2010-09-15

    This document describes the requirements and guidelines for the software of the IMAGE system. The motivation for this report was a substantial restructuring of the source code for IMAGE version 2.5. The requirements and guidelines relate to design considerations as well as to aspects of maintainability and portability. The design considerations determine guidelines about subjects, such as program structure, model hierarchy, the use of data modules, and the error message system. Maintainability and portability aspects determine the guidelines on, for example, the Fortran 90 standard, naming conventions, code lay-out, and internal documentation.

  8. Graphics-Printing Program For The HP Paintjet Printer

    Science.gov (United States)

    Atkins, Victor R.

    1993-01-01

    IMPRINT utility computer program developed to print graphics specified in raster files by use of Hewlett-Packard Paintjet(TM) color printer. Reads bit-mapped images from files on UNIX-based graphics workstation and prints out three different types of images: wire-frame images, solid-color images, and gray-scale images. Wire-frame images are in continuous tone or, in case of low resolution, in random gray scale. In case of color images, IMPRINT also prints by use of default palette of solid colors. Written in C language.

  9. A novel algorithm to detect glaucoma risk using texton and local configuration pattern features extracted from fundus images.

    Science.gov (United States)

    Acharya, U Rajendra; Bhat, Shreya; Koh, Joel E W; Bhandary, Sulatha V; Adeli, Hojjat

    2017-09-01

    Glaucoma is an optic neuropathy defined by characteristic damage to the optic nerve and accompanying visual field deficits. Early diagnosis and treatment are critical to prevent irreversible vision loss and ultimate blindness. Current techniques for computer-aided analysis of the optic nerve and retinal nerve fiber layer (RNFL) are expensive and require keen interpretation by trained specialists. Hence, an automated system is highly desirable for a cost-effective and accurate screening for the diagnosis of glaucoma. This paper presents a new methodology and a computerized diagnostic system. Adaptive histogram equalization is used to convert color images to grayscale images followed by convolution of these images with Leung-Malik (LM), Schmid (S), and maximum response (MR4 and MR8) filter banks. The basic microstructures in typical images are called textons. The convolution process produces textons. Local configuration pattern (LCP) features are extracted from these textons. The significant features are selected using a sequential floating forward search (SFFS) method and ranked using the statistical t-test. Finally, various classifiers are used for classification of images into normal and glaucomatous classes. A high classification accuracy of 95.8% is achieved using six features obtained from the LM filter bank and the k-nearest neighbor (kNN) classifier. A glaucoma integrative index (GRI) is also formulated to obtain a reliable and effective system. Copyright © 2017 Elsevier Ltd. All rights reserved.
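The contrast-normalization step at the head of the pipeline can be illustrated with plain (global) histogram equalization in numpy; the paper uses the adaptive variant (tile-wise, with contrast clipping), so this is only a sketch of the intensity-remapping idea:

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalization of an 8-bit grayscale image.

    Builds the cumulative distribution of intensities and uses it as a
    lookup table, spreading frequently occurring gray levels apart.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]                       # normalise to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[img]
```

The equalized image would then be convolved with the LM, S, and MR filter banks to produce texton responses; those filters are not reproduced here.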

  10. IMAGES OF DECOLONIZATION / IMAGES DE LA DECOLONISATION

    OpenAIRE

    Ganapathy-Doré , Geetha; Olinga , Michel; Crowley , Cornelius; Naumann , Michel; Le Boulicaut , Yannick; Coulardeau , Jacques; Taouchichet , Sofiane; Éric Owono Zambo , Claude; Dosoruth , Sonia; Vilar , Fernanda; Griffin , Patrick

    2013-01-01

    This is a document with references; International audience; This collected anthology of essays on the Images of Decolonization follows in the footsteps of an earlier SARI publication on Changing Images of India and Africa (Paris: L'Harmattan, 2011). It approaches the idea of decolonization from the point of view of the politics of representation with articles on the gaze of colonial and postcolonial photographers, the fantasized images of indigenous women (Pocahontas in the USA and La M...

  11. Application for internal dosimetry using biokinetic distribution of photons based on nuclear medicine images.

    Science.gov (United States)

    Leal Neto, Viriato; Vieira, José Wilson; Lima, Fernando Roberto de Andrade

    2014-01-01

    This article presents a way to obtain estimates of dose in patients undergoing radiotherapy, based on the analysis of regions of interest on nuclear medicine images. A software tool called DoRadIo (Dosimetria das Radiações Ionizantes [Ionizing Radiation Dosimetry]) was developed to receive information about source organs and target organs, generating graphical and numerical results. The nuclear medicine images utilized in the present study were obtained from catalogs provided by medical physicists. The simulations were performed with computational exposure models consisting of voxel phantoms coupled with the Monte Carlo EGSnrc code. The software was developed with the Microsoft Visual Studio 2010 Service Pack and the Windows Presentation Foundation project template for the C# programming language. With these tools, the authors obtained the file for optimization of Monte Carlo simulations using the EGSnrc; organization and compaction of dosimetry results with all radioactive sources; selection of regions of interest; evaluation of grayscale intensity in regions of interest; the file of weighted sources; and, finally, all the charts and numerical results. The user interface may be adapted for use in clinical nuclear medicine as a computer-aided tool to estimate the administered activity.

  12. Targeting youth and concerned smokers: evidence from Canadian tobacco industry documents.

    Science.gov (United States)

    Pollay, R W

    2000-06-01

    To provide an understanding of the targeting strategies of cigarette marketing, and the functions and importance of the advertising images chosen. Analysis of historical corporate documents produced by affiliates of British American Tobacco (BAT) and RJ Reynolds (RJR) in Canadian litigation challenging tobacco advertising regulation, the Tobacco Products Control Act (1987): Imperial Tobacco Limitee & RJR-Macdonald Inc c. Le Procurer General du Canada. Careful and extensive research has been employed in all stages of the process of conceiving, developing, refining, and deploying cigarette advertising. Two segments commanding much management attention are "starters" and "concerned smokers". To recruit starters, brand images communicate independence, freedom and (sometimes) peer acceptance. These advertising images portray smokers as attractive and autonomous, accepted and admired, athletic and at home in nature. For "lighter" brands reassuring health-concerned smokers, lest they quit, advertisements provide imagery conveying a sense of well-being, harmony with nature, and a consumer's self-image as intelligent. The industry's steadfast assertion that its advertising influences only brand loyalty and switching, in both intent and effect, is directly contradicted by its internal documents and proven false. So too is the justification of cigarette advertising as a medium creating better informed consumers, since visual imagery, not information, is the means of advertising influence.

  13. Rapid turn-around mapping of wildfires and disasters with airborne infrared imagery from the new FireMapper® 2.0 and OilMapper systems

    Science.gov (United States)

    James W. Hoffman; Lloyd L. Coulter; Philip J Riggan

    2005-01-01

    The new FireMapper® 2.0 and OilMapper airborne, infrared imaging systems operate in a "snapshot" mode. Both systems feature the real time display of single image frames, in any selected spectral band, on a daylight readable tablet PC. These single frames are displayed to the operator with full temperature calibration in color or grayscale renditions. A rapid...

  14. A New Scrambling Evaluation Scheme Based on Spatial Distribution Entropy and Centroid Difference of Bit-Plane

    Science.gov (United States)

    Zhao, Liang; Adhikari, Avishek; Sakurai, Kouichi

    Watermarking is one of the most effective techniques for copyright protection and information hiding. It can be applied in many fields of our society. Nowadays, some image scrambling schemes are used as one part of the watermarking algorithm to enhance the security. Therefore, how to select an image scrambling scheme and what kind of the image scrambling scheme may be used for watermarking are the key problems. Evaluation method of the image scrambling schemes can be seen as a useful test tool for showing the property or flaw of the image scrambling method. In this paper, a new scrambling evaluation system based on spatial distribution entropy and centroid difference of bit-plane is presented to obtain the scrambling degree of image scrambling schemes. Our scheme is illustrated and justified through computer simulations. The experimental results show (in Figs. 6 and 7) that for the general gray-scale image, the evaluation degree of the corresponding cipher image for the first 4 significant bit-planes selection is nearly the same as that for the 8 bit-planes selection. That is why, instead of taking 8 bit-planes of a gray-scale image, it is sufficient to take only the first 4 significant bit-planes for the experiment to find the scrambling degree. This 50% reduction in the computational cost makes our scheme efficient.
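The shortcut the authors validate, keeping only the four most significant bit-planes of a gray-scale image, is straightforward to reproduce; a minimal numpy sketch with function names of our choosing:

```python
import numpy as np

def bit_planes(img):
    """Split an 8-bit grayscale image into its 8 binary bit-planes.

    Returned list is ordered most-significant first: index 0 is bit 7.
    """
    img = img.astype(np.uint8)
    return [(img >> b) & 1 for b in range(7, -1, -1)]

def top_significant_planes(img, k=4):
    """Keep only the k most significant bit-planes (the paper's
    ~50% computational-cost reduction uses k = 4)."""
    return bit_planes(img)[:k]
```

The scrambling-degree evaluation would then compute spatial distribution entropy and centroid differences on these four planes instead of all eight.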

  15. Improved document image segmentation algorithm using multiresolution morphology

    Science.gov (United States)

    Bukhari, Syed Saqib; Shafait, Faisal; Breuel, Thomas M.

    2011-01-01

    Page segmentation into text and non-text elements is an essential preprocessing step before optical character recognition (OCR) operation. In case of poor segmentation, an OCR classification engine produces garbage characters due to the presence of non-text elements. This paper describes modifications to the text/non-text segmentation algorithm presented by Bloomberg [1], which is also available in his open-source Leptonica library [2]. The modifications result in significant improvements and achieved better segmentation accuracy than the original algorithm for UW-III, UNLV, ICDAR 2009 page segmentation competition test images and circuit diagram datasets.

  16. Non-contact Real-time heart rate measurements based on high speed circuit technology research

    Science.gov (United States)

    Wu, Jizhe; Liu, Xiaohua; Kong, Lingqin; Shi, Cong; Liu, Ming; Hui, Mei; Dong, Liquan; Zhao, Yuejin

    2015-08-01

    In recent years, the morbidity and mortality of cardiovascular and cerebrovascular diseases, which greatly threaten human health, have increased year by year. Heart rate is an important index of these diseases. To address this, the paper puts forward a non-contact heart rate measurement that is simple in structure, easy to operate, and suitable for daily monitoring of large populations. In this method, imaging equipment is used to video the sensitive areas. Changes in light intensity, caused by changes in blood volume, are reflected in the average image grayscale. We video the subject's face, which includes the sensitive regions of interest (ROI), and use a high-speed processing circuit to save the video in AVI format to memory. After processing the whole video over a period of time, we draw the curve of each color channel with frame number as the horizontal axis, and then obtain the heart rate from the curve. We use independent component analysis (ICA) to suppress the noise of motion interference, realizing accurate extraction of the heart rate signal while the subject is moving. We design an algorithm, based on a high-speed processing circuit, for face recognition and tracking to automatically obtain the face region. We apply grayscale averaging to the recognized image, obtain three RGB grayscale curves, extract a clearer pulse wave curve through independent component analysis, and then obtain the heart rate under motion. Finally, comparing our system with a fingertip pulse oximeter shows that the system achieves an accurate measurement, with an error of less than 3 beats per minute.
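The first processing step the abstract describes, averaging each color channel over the facial region of interest in every frame, can be sketched as below (the array shapes and function names are assumptions); the ICA denoising stage would then be applied to the three resulting traces, e.g. with scikit-learn's FastICA:

```python
import numpy as np

def channel_means(frames, roi):
    """Average each RGB channel over a rectangular ROI per video frame.

    frames: array of shape (n_frames, height, width, 3)
    roi:    (top, bottom, left, right) slice bounds of the face region
    Returns an (n_frames, 3) array: one R, G, B trace per frame,
    i.e. the per-channel grayscale curves plotted against frame number.
    """
    t, b, l, r = roi
    patch = frames[:, t:b, l:r, :]
    return patch.mean(axis=(1, 2))
```

The dominant frequency of the cleaned pulse trace, multiplied by the frame rate, gives the heart rate in beats per minute.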

  17. Assessing the activity of perianal Crohn's disease: comparison of clinical indices and computer-assisted anal ultrasound.

    Science.gov (United States)

    Losco, Alessandra; Viganò, Chiara; Conte, Dario; Cesana, Bruno Mario; Basilisco, Guido

    2009-05-01

    Assessing perianal disease activity is important for the treatment and prognosis of Crohn's disease (CD) patients, but the diagnostic accuracy of the activity indices has not yet been established. The aim of this study was to determine the accuracy and agreement of the Fistula Drainage Assessment (FDA), Perianal Disease Activity Index (PDAI), and computer-assisted anal ultrasound imaging (AUS). Sixty-two consecutive patients with CD and perianal fistulae underwent clinical, FDA, PDAI, and AUS evaluation. Perianal disease was considered active in the presence of visible fistula drainage and/or signs of local inflammation (induration and pain at digital compression) upon clinical examination. The AUS images were analyzed by calculating the mean gray-scale tone of the lesion. The PDAI and gray-scale tone values discriminating active and inactive perianal disease were defined using receiver operating characteristics statistics. Perianal disease was active in 46 patients. The accuracy of the FDA was 87% (confidence interval [CI]: 76%-94%). A PDAI of >4 and a mean gray-scale tone value of 117 maximized sensitivity and specificity; their diagnostic accuracy was, respectively, 87% (CI: 76%-94%) and 81% (CI: 69%-90%). The agreement of the 3 evaluations was fair to moderate. The addition of AUS to the PDAI or FDA increased their diagnostic accuracy to respectively 95% and 98%. The diagnostic accuracy of the FDA, PDAI, and computer-assisted AUS imaging was good in assessing perianal disease activity in patients with CD. The agreement between the techniques was fair to moderate. Overall accuracy can be increased by combining the FDA or PDAI with AUS.
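The computer-assisted AUS measurement reduces to a mean gray-scale tone over the lesion pixels, compared against the cutoff of 117 reported in the abstract. A sketch follows; note that the abstract does not state whether values above or below the cutoff indicate active disease, so the direction encoded here is an assumption:

```python
import numpy as np

# Cutoff reported in the study; whether activity corresponds to tones
# above or below it is not stated in the abstract (assumed "above" here).
ACTIVE_TONE_CUTOFF = 117

def lesion_mean_tone(image, mask):
    """Mean gray-scale tone of lesion pixels selected by a boolean mask."""
    return float(image[mask].mean())

def is_active(image, mask, cutoff=ACTIVE_TONE_CUTOFF):
    """Classify perianal disease activity from the AUS mean tone."""
    return lesion_mean_tone(image, mask) > cutoff
```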

  18. A framework for interactive visualization of digital medical images.

    Science.gov (United States)

    Koehring, Andrew; Foo, Jung Leng; Miyano, Go; Lobe, Thom; Winer, Eliot

    2008-10-01

    The visualization of medical images obtained from scanning techniques such as computed tomography and magnetic resonance imaging is a well-researched field. However, advanced tools and methods to manipulate these data for surgical planning and other tasks have not seen widespread use among medical professionals. Radiologists have begun using more advanced visualization packages on desktop computer systems, but most physicians continue to work with basic two-dimensional grayscale images or do not work directly with the data at all. In addition, new display technologies that are in use in other fields have yet to be fully applied in medicine. It is our estimation that usability is the key factor keeping this new technology from being more widely used by the medical community at large. Therefore, we have developed a software and hardware framework that not only makes use of advanced visualization techniques, but also features powerful, yet simple-to-use, interfaces. A virtual reality system was created to display volume-rendered medical models in three dimensions. It was designed to run in many configurations, from a large cluster of machines powering a multiwalled display down to a single desktop computer. An augmented reality system was also created for, literally, hands-on interaction when viewing models of medical data. Last, a desktop application was designed to provide a simple visualization tool, which can be run on nearly any computer at a user's disposal. This research is directed toward improving the capabilities of medical professionals in the tasks of preoperative planning, surgical training, diagnostic assistance, and patient education.

  19. Annotating image ROIs with text descriptions for multimodal biomedical document retrieval

    Science.gov (United States)

    You, Daekeun; Simpson, Matthew; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.

    2013-01-01

    Regions of interest (ROIs) that are pointed to by overlaid markers (arrows, asterisks, etc.) in biomedical images are expected to contain more important and relevant information than other regions for biomedical article indexing and retrieval. We have developed several algorithms that localize and extract the ROIs by recognizing markers on images. Cropped ROIs then need to be annotated with contents describing them best. In most cases accurate textual descriptions of the ROIs can be found from figure captions, and these need to be combined with image ROIs for annotation. The annotated ROIs can then be used to, for example, train classifiers that separate ROIs into known categories (medical concepts), or to build visual ontologies, for indexing and retrieval of biomedical articles. We propose an algorithm that pairs visual and textual ROIs that are extracted from images and figure captions, respectively. This algorithm based on dynamic time warping (DTW) clusters recognized pointers into groups, each of which contains pointers with identical visual properties (shape, size, color, etc.). Then a rule-based matching algorithm finds the best matching group for each textual ROI mention. Our method yields a precision and recall of 96% and 79%, respectively, when ground truth textual ROI data is used.
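The clustering step is built on dynamic time warping; a standard dynamic-programming DTW distance, simplified here to 1-D feature sequences rather than the multi-property pointer descriptors (shape, size, color) the paper uses:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance.

    D[i, j] holds the minimal cumulative cost of aligning a[:i] with
    b[:j]; each step may advance either sequence or both.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```

Pointers whose pairwise DTW distances fall below a threshold would be grouped into one visual-property cluster before the rule-based matching against textual ROI mentions.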

  20. Lattice algebra approach to multispectral analysis of ancient documents.

    Science.gov (United States)

    Valdiviezo-N, Juan C; Urcid, Gonzalo

    2013-02-01

    This paper introduces a lattice algebra procedure that can be used for the multispectral analysis of historical documents and artworks. Assuming the presence of linearly mixed spectral pixels captured in a multispectral scene, the proposed method computes the scaled min- and max-lattice associative memories to determine the purest pixels that best represent the spectra of single pigments. The estimation of fractional proportions of pure spectra at each image pixel is used to build pigment abundance maps that can be used for subsequent restoration of damaged parts. Application examples include multispectral images acquired from the Archimedes Palimpsest and a Mexican pre-Hispanic codex.

  1. Sadhana | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    ... for grayscale and RGB images, respectively, using linear SVM classifier. The DWTFOSLBP-HF features selected with mRMR method has also established superiority amongst the DWT based hybrid texture feature extraction techniques for randomly divided database into different proportions of training and test datasets.

  2. Joint image reconstruction method with correlative multi-channel prior for x-ray spectral computed tomography

    Science.gov (United States)

    Kazantsev, Daniil; Jørgensen, Jakob S.; Andersen, Martin S.; Lionheart, William R. B.; Lee, Peter D.; Withers, Philip J.

    2018-06-01

    Rapid developments in photon-counting and energy-discriminating detectors have the potential to provide an additional spectral dimension to conventional x-ray grayscale imaging. Reconstructed spectroscopic tomographic data can be used to distinguish individual materials by characteristic absorption peaks. The acquired energy-binned data, however, suffer from low signal-to-noise ratio, acquisition artifacts, and frequently angular undersampled conditions. New regularized iterative reconstruction methods have the potential to produce higher quality images and since energy channels are mutually correlated it can be advantageous to exploit this additional knowledge. In this paper, we propose a novel method which jointly reconstructs all energy channels while imposing a strong structural correlation. The core of the proposed algorithm is to employ a variational framework of parallel level sets to encourage joint smoothing directions. In particular, the method selects reference channels from which to propagate structure in an adaptive and stochastic way while preferring channels with a high data signal-to-noise ratio. The method is compared with current state-of-the-art multi-channel reconstruction techniques including channel-wise total variation and correlative total nuclear variation regularization. Realistic simulation experiments demonstrate the performance improvements achievable by using correlative regularization methods.

  3. American Society of Radiation Oncology Recommendations for Documenting Intensity-Modulated Radiation Therapy Treatments

    International Nuclear Information System (INIS)

    Holmes, Timothy; Das, Rupak; Low, Daniel; Yin Fangfang; Balter, James; Palta, Jatinder; Eifel, Patricia

    2009-01-01

    Despite the widespread use of intensity-modulated radiation therapy (IMRT) for approximately a decade, a lack of adequate guidelines for documenting these treatments persists. Proper IMRT treatment documentation is necessary for accurate reconstruction of prior treatments when a patient presents with a marginal recurrence. This is especially crucial when the follow-up care is managed at a second treatment facility not involved in the initial IMRT treatment. To address this issue, an American Society for Radiation Oncology (ASTRO) workgroup within the ASTRO Radiation Physics Committee was formed at the request of the ASTRO Research Council to develop a set of recommendations for documenting IMRT treatments. This document provides a set of comprehensive recommendations for documenting IMRT treatments, as well as image-guidance procedures, with example forms provided.

  4. The image of urachus adenocarcinoma on Doppler ultrasonography

    Energy Technology Data Exchange (ETDEWEB)

    Oyar, Orhan E-mail: o_oyar@hotmail.com; Yesildag, Ahmet; Gulsoy, Ufuk Kemal; Perk, Hakki

    2002-10-01

    Malignant urachal lesions are exceedingly rare and occur predominantly in adult life. In this case report, an adult patient with urachal carcinoma is presented with abdominal plain film, intravenous urography, gray-scale ultrasonography (US), Doppler US, and computed tomography (CT). Doppler US successfully showed the neovascularity with low resistive index value in the urachus tumor. We believe that Doppler US examination is helpful in the differential diagnosis of urachal carcinoma.

  5. FEASIBILITY STUDY OF LOW-COST IMAGE-BASED HERITAGE DOCUMENTATION IN NEPAL

    OpenAIRE

    Dhonju, H. K.; Xiao, W.; Sarhosis, V.; Mills, J. P.; Wilkinson, S.; Wang, Z.; Thapa, L.; Panday, U. S.

    2017-01-01

    Cultural heritage structural documentation is of great importance in terms of historical preservation, tourism, educational and spiritual values. Cultural heritage across the world, and in Nepal in particular, is at risk from various natural hazards (e.g. earthquakes, flooding, rainfall etc), poor maintenance and preservation, and even human destruction. This paper evaluates the feasibility of low-cost photogrammetric modelling cultural heritage sites, and explores the practicality o...

  6. A short introduction to image analysis - Matlab exercises

    DEFF Research Database (Denmark)

    Hansen, Michael Adsetts Edberg

    2000-01-01

    This document contains a short introduction to image analysis. In addition, small exercises have been prepared to support the theoretical understanding.

  7. Spontaneous involution of keratoacanthoma, iconographic documentation and similarity with volcanoes of nature.

    Science.gov (United States)

    Enei Gahona, Maria Leonor; Machado Filho, Carlos d' Aparecida Santos

    2012-01-01

    Through iconography, we show a case of keratoacanthoma (KA) on the nasal dorsum at two different stages of evolution (maturation and regression) and its similarity with images of the Mount St. Helens volcano and the Orcus Patera crater. Using these illustrations, we highlight why the crateriform aspect of this tumor is included in its classic clinical description. Moreover, we photographically documented the self-involuting tendency of KA, an aspect that is seldom documented in the literature.

  8. HWNet v2: An Efficient Word Image Representation for Handwritten Documents

    OpenAIRE

    Krishnan, Praveen; Jawahar, C. V.

    2018-01-01

    We present a framework for learning efficient holistic representation for handwritten word images. The proposed method uses a deep convolutional neural network with traditional classification loss. The major strengths of our work lie in: (i) the efficient usage of synthetic data to pre-train a deep network, (ii) an adapted version of ResNet-34 architecture with region of interest pooling (referred as HWNet v2) which learns discriminative features with variable sized word images, and (iii) rea...

  9. Photographic documentation, a practical guide for non professional forensic photography.

    Science.gov (United States)

    Ozkalipci, Onder; Volpellier, Muriel

    2010-01-01

    Forensic photography is essential for documentation of evidence of torture. Consent of the alleged victim should be sought in all cases. The article gives information about when and how to take pictures of what as well as image authentication, audit trail, storage, faulty pictures and the kind of camera to use.

  10. Integration of holography into the design of bank notes and security documents

    Science.gov (United States)

    Dunn, Paul

    2000-10-01

    Holograms and other diffractive optically variable devices have been used successfully in the fight against counterfeiting of security documents for several years. More recently they have become globally accepted as a key security feature on banknotes, as reflected in their prime use on the Euro notes issued in 2002. The success of the design and origination of these images depends upon their strong visual appeal, their overt and covert content, and their ability to offer unique features that present an extremely difficult barrier for the would-be counterfeiter to overcome. The basic design principles for both banknote and general security print applications are discussed in this review document. To be effective as a security device, the image must be fit for purpose. This means that the image must contain overt and covert features that are easy to recognize, contain high-level security features, and form part of an educational program aimed at the product user and specifically trained security personnel. More specifically, it must meet clearly defined performance criteria.

  11. Image segmentation evaluation for very-large datasets

    Science.gov (United States)

    Reeves, Anthony P.; Liu, Shuang; Xie, Yiting

    2016-03-01

    With the advent of modern machine learning methods and fully automated image analysis there is a need for very large image datasets with documented segmentations, for both computer algorithm training and evaluation. Current approaches of visual inspection and manual markings do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes is achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to fully automatically facilitate the measurement of a number of very important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.

  12. Management Documentation: Indicators & Good Practice at Cultural Heritage Places

    Science.gov (United States)

    Eppich, R.; Garcia Grinda, J. L.

    2015-08-01

    Documentation for cultural heritage places usually refers to describing the physical attributes, surrounding context, condition or environment; most of the time with images, graphics, maps or digital 3D models in their various forms, with supporting textual information. Just as important as this type of information is the documentation of managerial attributes. How do managers of cultural heritage places collect information related to financial or economic well-being? How are data collected over time measured, and what are significant indicators for improvement? What quality of indicator is good enough? Good management of cultural heritage places is essential for conservation longevity, preservation of values and enjoyment by the public. But how is management documented? The paper will describe the research methodology, selection and description of attributes or indicators related to good management practice. It will describe the criteria for indicator selection and why they are important, how and when they are collected, by whom, and the difficulties in obtaining this information. Just as importantly, it will describe how this type of documentation directly contributes to improving conservation practice. Good practice summaries will be presented that highlight this type of documentation, including Pamplona and Ávila, Spain and Valletta, Malta. Conclusions are drawn with preliminary recommendations for improvement of this important aspect of documentation. Documentation of this nature is not typical and presents a unique challenge to collect, measure and communicate easily. However, it is an essential category that is often ignored yet absolutely essential in order to conserve cultural heritage places.

  13. A 2D eye gaze estimation system with low-resolution webcam images

    Directory of Open Access Journals (Sweden)

    Kim Jin

    2011-01-01

    Full Text Available Abstract In this article, a low-cost system for 2D eye gaze estimation with low-resolution webcam images is presented. Two algorithms are proposed for this purpose, one for eyeball detection with a stable approximate pupil center and the other for detecting the direction of eye movements. The eyeball is detected using the deformable angular integral search by minimum intensity (DAISMI) algorithm. The deformable template-based 2D gaze estimation (DTBGE) algorithm is employed as a noise filter for deciding stable movement decisions. While DTBGE employs binary images, DAISMI employs gray-scale images. Right and left eye estimates are evaluated separately. DAISMI finds the stable approximate pupil-center location by calculating the mass center of the eyeball border vertices, to be employed for initial deformable template alignment. DTBGE starts with this initial alignment and updates the template alignment with the resulting eye movements and eyeball size frame by frame. The horizontal and vertical deviation of eye movements relative to eyeball size is taken to be directly proportional to the deviation of cursor movements for a given screen size and resolution. The core advantage of the system is that it does not employ the real pupil center as a reference point for gaze estimation, which makes it more reliable against corneal reflection. Visual angle accuracy is used for the evaluation and benchmarking of the system. The effectiveness of the proposed system is presented and experimental results are shown.

  14. Use of freehand sketching: Documenting heritage buildings, Gamal Abdel Nasser Street (1830–1930), Alexandria, Egypt

    Directory of Open Access Journals (Sweden)

    Menna M. Imam

    2016-09-01

    Full Text Available Freehand sketching – as one of the methods to discover cities – plays a main role in understanding the image of the heritage city. It is an important tool to analyze and document heritage buildings, and the image produced by freehand sketching heightens awareness of these buildings. Previous studies on methods of architectural heritage documentation revealed that very little research has been undertaken on the use of freehand sketching in the process of data survey and documentation. Such studies break the deadlock of purely textual documents and express the human experience clearly, unlike other methods of documentation. Therefore, this article documents ten heritage buildings in the study area, Gamal Abdel Nasser Street, Alexandria, Egypt – one of the most ancient streets in Alexandria, which still maintains its old character and identity – using analytical freehand sketches; the sketches and analyses were created by the researcher. The results showed that these buildings appear homogeneous and share a common character despite their different architectural styles, meaning each respects the others within the building regulations. Briefly, the results show how freehand sketching can successfully analyze these buildings, assisting architects and planners in understanding the design principles used in earlier times.

  15. Teaching with Documents: A Cartoonist's View of the Eisenhower Years.

    Science.gov (United States)

    Mueller, Jean West; Schamel, Wynell Burroughs

    1990-01-01

    Illustrates how to teach U.S. history through the use of original documents such as Charles Nickerson's cartoon, "Images of the Fifties from Disneyland to Suez." States that the original artwork for this cartoon, which portrays the Eisenhower years, is in the Dwight D. Eisenhower Library in Abilene, Kansas. Provides a pretest, teaching…

  16. Psychophysical analysis of monitor display functions affecting observer diagnostic performance of CT image on liquid crystal display monitors

    International Nuclear Information System (INIS)

    Yamaguchi, M.; Fujita, H.; Asai, Y.; Uemura, M.; Ookura, Y.; Matsumoto, M.; Johkoh, T.

    2005-01-01

    The aim of the present study was to propose suitable display functions for CT image representation on liquid crystal display (LCD) monitors by analyzing the characteristics of the monitors' typical display functions using psychophysical analysis. The luminance of the LCD monitor was adjusted to a maximum of 275 cd/m² and 480 cd/m². Three types of postcalibrated display functions (i.e., GSDF, CIELAB, and Exponential γ 2.2) were evaluated. Luminance calculation of a new grayscale test pattern (NGTP) was done for the conversion of the digital driving level (DDL) into the CT value. The psychophysical gradient δ of the display functions for the CT value was evaluated and compared via statistical analysis. The δ value of GSDF and CIELAB decreased exponentially; however, the δ value of Exponential γ 2.2 showed a convex curve with a peak at a specific point. There was a statistically significant difference among the δ values of the three types of display functions at the 480 cd/m² maximum via the Kruskal-Wallis test (P<0.001). The GSDF was suitable for observation of abdominal and lung CT images; however, a display function combining the Exponential γ 2.2 and GSDF functions was ideal for observation of brain CT images by psychophysical analysis. (orig.)
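    As a toy illustration of the Exponential γ 2.2 display function evaluated above, the mapping from digital driving level (DDL) to luminance can be sketched as follows. The minimum-luminance value and 8-bit DDL range are hypothetical assumptions; the abstract specifies only the 275 and 480 cd/m² maxima, and the GSDF alternative would replace the power law with the DICOM Barten-model lookup:

```python
import numpy as np

def exponential_gamma(ddl, l_min=0.5, l_max=480.0, gamma=2.2, ddl_max=255):
    """Map a digital driving level (0..ddl_max) to luminance in cd/m^2
    using a simple exponential-gamma display function."""
    t = np.asarray(ddl, dtype=np.float64) / ddl_max
    return l_min + (l_max - l_min) * t ** gamma
```

    The psychophysical gradient studied in the paper is then the local slope of this curve, re-expressed along the CT-value axis.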

  17. Built Heritage Documentation and Management: AN Integrated Conservation Approach in Bagan

    Science.gov (United States)

    Mezzino, D.; Chan, L.; Santana Quintero, M.; Esponda, M.; Lee, S.; Min, A.; Pwint, M.

    2017-08-01

    Good practices in heritage conservation are based on accurate information about the conditions, materials, and transformation of built heritage sites. Therefore, heritage site documentation and its analysis are essential parts of their conservation. In addition, the devastating effects of recent catastrophic events in different geographical areas have heavily affected cultural heritage places. Such areas include, but are not limited to, Southern Europe, South East Asia, and Central America. Within this framework, appropriate acquisition of information can effectively provide tools for the decision-making process and management. Heritage documentation is growing in innovation, providing dynamic opportunities for effectively responding to the alarming rate of destruction by natural events, conflicts, and negligence. In line with these considerations, a multidisciplinary team - including students and faculty members from Carleton University and Yangon Technological University, as well as staff from the Department of Archaeology, National Museum and Library (DoA) and professionals from the CyArk foundation - developed a coordinated strategy to document four temples at the site of Bagan (Myanmar). On-field work included capacity-building activities to train local emerging professionals in the heritage field (graduate and undergraduate students from the Yangon Technological University) and to increase the technical knowledge of the local DoA staff in the digital documentation field. Due to the short time of the on-field activity and the need to record several monuments, a variety of documentation techniques, including image- and non-image-based ones, were used. Afterwards, the information acquired during the fieldwork was processed to develop a solid base for the conservation and monitoring of the four documented temples. The relevance of developing this kind of documentation in Bagan is related to the vulnerability of the site, often affected by natural seismic events and

  18. BUILT HERITAGE DOCUMENTATION AND MANAGEMENT: AN INTEGRATED CONSERVATION APPROACH IN BAGAN

    Directory of Open Access Journals (Sweden)

    D. Mezzino

    2017-08-01

    Full Text Available Good practices in heritage conservation are based on accurate information about the conditions, materials, and transformation of built heritage sites. Therefore, heritage site documentation and its analysis are essential parts of their conservation. In addition, the devastating effects of recent catastrophic events in different geographical areas have heavily affected cultural heritage places. Such areas include, but are not limited to, Southern Europe, South East Asia, and Central America. Within this framework, appropriate acquisition of information can effectively provide tools for the decision-making process and management. Heritage documentation is growing in innovation, providing dynamic opportunities for effectively responding to the alarming rate of destruction by natural events, conflicts, and negligence. In line with these considerations, a multidisciplinary team – including students and faculty members from Carleton University and Yangon Technological University, as well as staff from the Department of Archaeology, National Museum and Library (DoA) and professionals from the CyArk foundation – developed a coordinated strategy to document four temples at the site of Bagan (Myanmar). On-field work included capacity-building activities to train local emerging professionals in the heritage field (graduate and undergraduate students from the Yangon Technological University) and to increase the technical knowledge of the local DoA staff in the digital documentation field. Due to the short time of the on-field activity and the need to record several monuments, a variety of documentation techniques, including image- and non-image-based ones, were used. Afterwards, the information acquired during the fieldwork was processed to develop a solid base for the conservation and monitoring of the four documented temples. The relevance of developing this kind of documentation in Bagan is related to the vulnerability of the site, often affected by natural

  19. Archaeological Documentation of a Defunct Iraqi Town

    Science.gov (United States)

    Šedina, J.; Pavelka, K.; Housarová, E.

    2016-06-01

    The subject of this article is the possibilities for documenting a defunct town dating from the Pre-Islamic to the Early Islamic period. The town is located near the town of Makhmur in Iraq, where the Czech archaeological mission has worked. This Cultural Heritage site is threatened by war, because positions of ISIS lie in the vicinity. For security reasons, the applicability of Pleiades satellite data has been tested; moreover, the area is a no-fly zone. However, the DTM created from stereo-images was insufficient for the desired archaeological application. This paper therefore tests the usability of RPAS technology and terrestrial photogrammetry for documenting the remains of buildings. RPAS is a fast-growing technology that combines the advantages of aerial and terrestrial photogrammetry. A probably defunct church serves as a sample object.

  20. ARCHAEOLOGICAL DOCUMENTATION OF A DEFUNCT IRAQI TOWN

    Directory of Open Access Journals (Sweden)

    J. Šedina

    2016-06-01

    Full Text Available The subject of this article is the possibilities for documenting a defunct town dating from the Pre-Islamic to the Early Islamic period. The town is located near the town of Makhmur in Iraq, where the Czech archaeological mission has worked. This Cultural Heritage site is threatened by war, because positions of ISIS lie in the vicinity. For security reasons, the applicability of Pleiades satellite data has been tested; moreover, the area is a no-fly zone. However, the DTM created from stereo-images was insufficient for the desired archaeological application. This paper therefore tests the usability of RPAS technology and terrestrial photogrammetry for documenting the remains of buildings. RPAS is a fast-growing technology that combines the advantages of aerial and terrestrial photogrammetry. A probably defunct church serves as a sample object.

  1. Scrolling forward, second edition making sense of documents in the digital age

    CERN Document Server

    Levy, David M

    2015-01-01

    A fascinating, insightful, and wonderfully written exploration of the document. Like Henry Petroski's The Pencil, David Levy's Scrolling Forward takes a common, everyday object, the document, and illuminates what it reveals about us, both in the past and in the digital age. We are surrounded daily by documents of all kinds—letters and credit card receipts, business memos and books, television images and web pages—yet we rarely stop to reflect on their significance. Now, in this period of digital transition, our written forms as well as our reading and writing habits are being disturbed and transformed by new technologies and practices. An expert on information and written forms, and a former researcher for the document pioneer Xerox, Levy masterfully navigates these concerns, offering reassurance while sharing his own excitement about many of the new kinds of emerging documents. He demonstrates how today's technologies, particularly the personal computer and the World Wide Web, are having analogous effects ...

  2. Detection of Coronal Mass Ejections Using Multiple Features and Space-Time Continuity

    Science.gov (United States)

    Zhang, Ling; Yin, Jian-qin; Lin, Jia-ben; Feng, Zhi-quan; Zhou, Jin

    2017-07-01

    Coronal Mass Ejections (CMEs) release tremendous amounts of energy in the solar system, which has an impact on satellites, power facilities and wireless transmission. To effectively detect a CME in Large Angle Spectrometric Coronagraph (LASCO) C2 images, we propose a novel algorithm to locate the suspected CME regions, using the Extreme Learning Machine (ELM) method and taking into account the features of the grayscale and the texture. Furthermore, space-time continuity is used in the detection algorithm to exclude the false CME regions. The algorithm includes three steps: i) define the feature vector which contains textural and grayscale features of a running difference image; ii) design the detection algorithm based on the ELM method according to the feature vector; iii) improve the detection accuracy rate by using the decision rule of the space-time continuum. Experimental results show the efficiency and the superiority of the proposed algorithm in the detection of CMEs compared with other traditional methods. In addition, our algorithm is insensitive to most noise.
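    The classification step described above can be illustrated with a minimal Extreme Learning Machine: a fixed random hidden layer followed by a single least-squares solve for the output weights. The feature vectors below are synthetic stand-ins for the grayscale and texture statistics the authors extract from running-difference images, and the class/hyperparameter choices are illustrative assumptions:

```python
import numpy as np

class ELM:
    """Minimal Extreme Learning Machine sketch: random, fixed hidden-layer
    weights; only the output weights are fitted, by least squares."""

    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Random tanh projection of the input features.
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Solve H @ beta ~= y in the least-squares sense.
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return self

    def predict(self, X):
        # Threshold the regression output at 0.5 for a binary CME/non-CME label.
        return (self._hidden(X) @ self.beta > 0.5).astype(int)
```

    In the paper's pipeline, the per-region predictions would then be filtered by the space-time continuity rule before a region is accepted as a CME.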

  3. Computerising documentation

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    The nuclear power generation industry is faced with public concern and government pressures over safety, efficiency and risk. Operators throughout the industry are addressing these issues with the aid of a new technology - technical document management systems (TDMS). Used for strategic and tactical advantage, the systems enable users to scan, archive, retrieve, store, edit, distribute worldwide and manage the huge volume of documentation (paper drawings, CAD data and film-based information) generated in building, maintaining and ensuring safety in the UK's power plants. The power generation industry has recognized that the management and modification of operation-critical information is vital to the safety and efficiency of its power plants. Regulatory pressure from the Nuclear Installations Inspectorate (NII) to operate within strict safety margins or lose Site Licences has prompted the need for accurate, up-to-date documentation. A document capture, management and retrieval system provides a powerful, cost-effective solution, giving rapid access to documentation in a tightly controlled environment. The computerisation of documents and plans is discussed in this article. (Author)

  4. Evaluation of the mutual information cost function for registration of SPET and MRI images of the brain

    International Nuclear Information System (INIS)

    Taleb, M.; McKay, E.

    1999-01-01

    Full text: Any strategy for image registration requires some method (a cost function) by which two images may be compared. The mutual information (MI) between images is one such cost function. MI measures the structural similarity between pairs of gray-scale images and performs cross-modality image registration with minimal image pre-processing. This project compares the performance of MI vs the sum of absolute differences (SAD) 'gold standard' in monomodality image registration problems. It also examines the precision of cross-modality registration based on MI, using a human observer to decide whether registration is accurate. Thirteen paired brain SPET scans were registered using SAD as a cost function. Registration was repeated using MI and differences from the SAD results were recorded. Ten paired MRI and SPET brain scans were registered using the MI cost function. Registration was repeated three times for each pair, varying the SPET position or orientation each time. Comparing MI to SAD, the median values of translation error were 2.85, 4.63 and 2.56 mm along the x, y and z axes, and the median rotation errors were 0.5°, 1.1° and 1.0° around the x, y and z axes respectively. For the cross-modality problems, the mean standard deviation (MSD) observed in x, y and z positioning was 0.18, 0.28 and 0.16 mm respectively. The MSD of orientation was 5.35°, 1.95° and 2.48° around the x, y and z axes respectively. MI performed as well as SAD for monomodality registration. Unlike SAD, MI is also useful for cross-modality image registration tasks, producing visually acceptable results with minimal preprocessing.
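    The MI cost function itself is compact to state: MI(A,B) = H(A) + H(B) − H(A,B), which a registration algorithm maximizes over candidate transforms. A minimal histogram-based estimator is sketched below (the bin count is an arbitrary illustrative choice, not taken from the study):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Estimate mutual information between two same-sized grayscale images
    from their joint gray-level histogram (plug-in estimator, in nats)."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint probability
    px = pxy.sum(axis=1)               # marginal of image A
    py = pxy.sum(axis=0)               # marginal of image B
    nz = pxy > 0                       # skip empty bins to avoid log(0)
    outer = px[:, None] * py[None, :]  # product of marginals
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / outer[nz])))
```

    MI is maximal when the images are perfectly aligned (structurally dependent) and falls toward zero for statistically independent images, which is why it needs no cross-modality intensity calibration.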

  5. Cultural Heritage: An example of graphical documentation with automated photogrammetric systems

    Science.gov (United States)

    Giuliano, M. G.

    2014-06-01

    In the field of Cultural Heritage, the use of automated photogrammetric systems based on Structure from Motion (SfM) techniques is widespread, in particular for the study and documentation of ancient ruins. This work was carried out during the PhD cycle that produced the "Carta Archeologica del territorio intorno al monte Massico". The study presents the archaeological documentation of the mausoleum "Torre del Ballerino", located in the south-west area of Falciano del Massico, along the Via Appia. The graphic documentation was achieved using a photogrammetric system (Image-Based Modeling) and a classical survey with a total station (Nikon Nivo C). Data acquisition was carried out with a Canon EOS 5D Mark II digital camera and a Canon EF 17-40 mm f/4L USM lens @ 20 mm, with images captured in RAW and corrected in Adobe Lightroom. During data processing, camera calibration and orientation were carried out with the Agisoft PhotoScan software, and the final result was a scaled 3D model of the monument, imported into MeshLab for different views. Three orthophotos in JPG format were extracted from the model and then imported into AutoCAD to obtain façade surveys.

  6. Towards fraud-proof ID documents using multiple data hiding technologies and biometrics

    Science.gov (United States)

    Picard, Justin; Vielhauer, Claus; Thorwirth, Niels

    2004-06-01

    Identity documents, such as ID cards, passports, and driver's licenses, contain textual information, a portrait of the legitimate holder, and possibly other biometric characteristics such as a fingerprint or handwritten signature. As prices for digital imaging technologies fall, making them more widely available, we have seen an exponential increase in the ease with which documents can be forged and in the number of counterfeiters able to forge them effectively. Today, with only limited knowledge of technology and a small amount of money, a counterfeiter can effortlessly replace a photo or modify identity information on a legitimate document to the extent that it is very difficult to differentiate from the original. This paper proposes a virtually fraud-proof ID document based on a combination of three different data hiding technologies: digital watermarking, 2-D bar codes, and Copy Detection Patterns, plus additional biometric protection. As will be shown, this combination of data hiding technologies protects the document against forgery, in principle without any requirement for other security features. To prevent a genuine document from being used by an illegitimate user, biometric information is also covertly stored in the ID document, to be used for identification at the detector.

  7. Carotid artery dissection on non-contrast CT: Does color improve the diagnostic confidence?

    Energy Technology Data Exchange (ETDEWEB)

    Saba, Luca, E-mail: lucasaba@tiscali.it [Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), di Cagliari – Polo di Monserrato, s.s. 554 Monserrato, Cagliari 09045 (Italy); Argiolas, Giovanni Maria [Department of Radiology, Azienda Ospedaliero Brotzu (A.O.B.), di Cagliari, Cagliari 09100 (Italy); Raz, Eytan [Department of Radiology, New York University School of Medicine, New York (United States); Department of Neurology and Psychiatry, Sapienza University of Rome (Italy); Sannia, Stefano [Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), di Cagliari – Polo di Monserrato, s.s. 554 Monserrato, Cagliari 09045 (Italy); Suri, Jasjit S. [Diagnostic and Monitoring Division, AtheroPointTM LLC, Roseville, CA (United States); Electrical Engineering Department (Aff.), Idaho State University, ID (United States); Siotto, Paolo [Department of Radiology, Azienda Ospedaliero Brotzu (A.O.B.), di Cagliari, Cagliari 09100 (Italy); Sanfilippo, Roberto; Montisci, Roberto [Department of Vascular Surgery, Azienda Ospedaliero Universitaria (A.O.U.), di Cagliari – Polo di Monserrato, s.s. 554 Monserrato, Cagliari 09045 (Italy); Piga, Mario [Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), di Cagliari – Polo di Monserrato, s.s. 554 Monserrato, Cagliari 09045 (Italy); Wintermark, Max [Department of Radiology, Neuroradiology Division, University of Virginia, Box 800170, Charlottesville, VA, 22908 (United States)

    2014-12-15

    Highlights: • The use of a color scale to display non-contrast CT images in lieu of the classic grayscale improves the diagnostic confidence of the readers. • Radiologists should consider the use of a color scale, rather than the conventional grayscale, to assess non-contrast CT studies for possible carotid artery dissection. - Abstract: Purpose: The purpose of this work was to evaluate whether the use of color maps, instead of conventional grayscale images, would improve the observer's diagnostic confidence in the non-contrast CT evaluation of internal carotid artery dissection (ICAD). Materials and methods: One hundred patients (61 men, 39 women; mean age, 51 years; range, 25–78 years), 40 with and 60 without ICAD, underwent non-contrast CT and were included in this retrospective study. Three groups of patients were considered: patients with MR confirmation of ICAD, n = 40; patients with MR confirmation of the absence of ICAD, n = 20; and patients who underwent CT of the carotid arteries because of atherosclerotic disease, n = 40. Four blinded observers with different levels of expertise (expert, intermediate A, intermediate B and trainee) analyzed the non-contrast CT datasets using a cross model (one case in grayscale and the following case in color scale). The presence of ICAD was scored on a 5-point scale in order to assess the observer's diagnostic confidence. After 3 months the four observers evaluated the same datasets using the same cross model with the alternate readings (one case in color scale and the following case in grayscale). Statistical analysis included receiver operating characteristic (ROC) curve analysis, the weighted Cohen test, and sensitivity, specificity, PPV, NPV, accuracy, LR+ and LR−. Results: The ROC curve analysis showed that, for all observers, the use of the color scale resulted in improved diagnostic confidence, with AUC values increasing from 0.896 to 0.936, 0.823 to 0.849, 0.84 to 0.909 and 0
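    The AUC values reported above come from ROC analysis of the readers' 5-point confidence scores. The equivalent Mann-Whitney (pairwise) formulation of AUC, which handles tied ordinal scores by counting them as half-wins, is easy to sketch; the scores below are toy data, not the study's:

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve from ordinal confidence scores, computed by
    the Mann-Whitney pairwise formulation (ties count as half a win)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))
```

    An AUC of 0.5 means the confidence scores do not separate ICAD from non-ICAD cases at all; 1.0 means perfect separation.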

  8. Comparison analysis between filtered back projection and algebraic reconstruction technique on microwave imaging

    Science.gov (United States)

    Ramadhan, Rifqi; Prabowo, Rian Gilang; Aprilliyani, Ria; Basari

    2018-02-01

    The number of victims of acute cancers and tumors grows each year, and cancer has become one of the leading causes of human death in the world. Cancer or tumor tissue cells are cells that grow abnormally and come to take over and damage the surrounding tissues. Cancers or tumors do not have definite symptoms in their early stages and can even attack tissues deep inside the body, where they are not identifiable by visual human observation. Therefore, an early detection system which is cheap, quick, simple, and portable is essentially required to anticipate the further development of a cancer or tumor. Among all modalities, microwave imaging is considered a cheap, simple, and portable method. There are at least two simple image reconstruction algorithms, i.e. Filtered Back Projection (FBP) and the Algebraic Reconstruction Technique (ART), which have been adopted in some common modalities. In this paper, both algorithms are compared by reconstructing the image of an artificial tissue model (i.e. phantom) which has two different dielectric distributions. We addressed two performance comparisons, namely qualitative and quantitative analysis. Qualitative analysis covers the smoothness of the image and the success in distinguishing dielectric differences by observing the image with the human eye. Quantitative analysis covers histogram, Structural Similarity Index (SSIM), Mean Squared Error (MSE), and Peak Signal-to-Noise Ratio (PSNR) calculations. As a result, the quantitative parameters of FBP show better values than those of ART. However, ART is likely more capable of distinguishing two different dielectric values than FBP, due to its higher contrast and wider grayscale-level distribution.
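    Of the quantitative metrics used in the comparison, MSE and PSNR are the simplest to state; a minimal sketch for 8-bit reference/reconstruction pairs (SSIM would additionally require local means, variances and covariances):

```python
import numpy as np

def mse(ref, test):
    """Mean squared error between a reference image and a reconstruction."""
    return float(np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(ref, test)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

    Higher PSNR (lower MSE) is what makes FBP score better quantitatively here, even though ART's higher contrast can make the two dielectric regions easier to tell apart visually.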

  9. Television system for verification and documentation of treatment fields during intraoperative radiation therapy

    International Nuclear Information System (INIS)

    Fraass, B.A.; Harrington, F.S.; Kinsella, T.J.; Sindelar, W.F.

    1983-01-01

    Intraoperative radiation therapy (IORT) involves direct treatment of tumors or tumor beds with large single doses of radiation. The verification of the area to be treated before irradiation and the documentation of the treated area are critical for IORT, just as for other types of radiation therapy. A television system which allows the target area to be directly imaged immediately before irradiation has been developed. Verification and documentation of treatment fields has made the IORT television system indispensable

  10. Unsupervised Video Shot Detection Using Clustering Ensemble with a Color Global Scale-Invariant Feature Transform Descriptor

    Directory of Open Access Journals (Sweden)

    Yuchou Chang

    2008-02-01

    Scale-invariant feature transform (SIFT) transforms a grayscale image into scale-invariant coordinates of local features that are invariant to image scale, rotation, and changing viewpoints. Because of its scale-invariant properties, SIFT has been successfully used for object recognition and content-based image retrieval. The biggest drawback of SIFT is that it uses only grayscale information and misses important visual information regarding color. In this paper, we present the development of a novel color feature extraction algorithm that addresses this problem, and we also propose a new clustering strategy using clustering ensembles for video shot detection. Based on Fibonacci lattice-quantization, we develop a novel color global scale-invariant feature transform (CGSIFT) for better description of color contents in video frames for video shot detection. CGSIFT first quantizes a color image, representing it with a small number of color indices, and then uses SIFT to extract features from the quantized color index image. We also develop a new space description method using small image regions to represent global color features as the second step of CGSIFT. Clustering ensembles focusing on knowledge reuse are then applied to obtain better clustering results than using single clustering methods for video shot detection. Evaluation of the proposed feature extraction algorithm and the new clustering strategy using clustering ensembles reveals very promising results for video shot detection.
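
    The first stage of CGSIFT, reducing a colour image to a small palette of colour indices before a grayscale feature detector runs on the index image, can be sketched as follows. The hand-picked palette is a stand-in for the paper's Fibonacci-lattice palette, and the nearest-colour rule is a simplification:

```python
import numpy as np

def quantize_to_palette(rgb, palette):
    """Map every pixel to the index of its nearest palette colour
    (squared Euclidean distance in RGB). CGSIFT derives its palette from a
    Fibonacci lattice in colour space; the fixed palette here is a stand-in."""
    rgb = np.asarray(rgb, dtype=np.float64)        # H x W x 3
    pal = np.asarray(palette, dtype=np.float64)    # K x 3
    d2 = ((rgb[:, :, None, :] - pal[None, None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=-1).astype(np.uint8)     # H x W colour-index image

palette = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:2] = (250, 10, 10)   # reddish top half
img[2:] = (10, 10, 240)   # bluish bottom half
idx = quantize_to_palette(img, palette)   # SIFT would then run on idx
```

The resulting index image is single-channel, so an off-the-shelf grayscale SIFT implementation can consume it directly, which is the point of the quantize-then-extract design.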

  12. SFM TECHNIQUE AND FOCUS STACKING FOR DIGITAL DOCUMENTATION OF ARCHAEOLOGICAL ARTIFACTS

    Directory of Open Access Journals (Sweden)

    P. Clini

    2016-06-01

    Digital documentation and high-quality 3D representation are increasingly requested in many disciplines and areas, thanks to the large number of technologies and data sources available for fast, detailed documentation. This work investigates medium- and small-sized artefacts and presents a fast, low-cost acquisition system that guarantees the creation of 3D models with a high level of detail, making the digitisation of cultural heritage a simple and fast procedure. The 3D models of the artefacts are created with the photogrammetric technique Structure From Motion, which makes it possible to obtain, in addition to three-dimensional models, high-definition images for an in-depth study and understanding of the artefacts. For the survey of small objects (only a few centimetres), a macro lens is used together with focus stacking, a photographic technique that captures a stack of images at different focus planes for each camera pose, so that a final image with a greater depth of field can be obtained. The acquisition with the focus stacking technique was finally validated against an acquisition with a Minolta laser triangulation scanner, which demonstrated accuracy compatible with the allowable error in relation to the expected precision.
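
    The core of focus stacking, keeping, for each pixel, the frame in which that pixel is sharpest, can be sketched as below. This is a minimal illustration using a Laplacian sharpness cue; real pipelines (including, presumably, the one used in the paper) also align the frames and smooth the selection map:

```python
import numpy as np

def laplacian(img):
    """4-neighbour Laplacian as a simple per-pixel sharpness cue."""
    p = np.pad(np.asarray(img, dtype=np.float64), 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * p[1:-1, 1:-1]

def focus_stack(slices):
    """Per pixel, keep the slice with the strongest local response.

    A minimal sketch of the focus-stacking idea; production stacks also
    align frames and regularise the selection map to avoid seams."""
    sharp = np.stack([np.abs(laplacian(s)) for s in slices])  # N x H x W
    best = sharp.argmax(axis=0)                               # winning slice per pixel
    imgs = np.stack([np.asarray(s, dtype=np.float64) for s in slices])
    rows = np.arange(best.shape[0])[:, None]
    cols = np.arange(best.shape[1])[None, :]
    return imgs[best, rows, cols]

# Two synthetic slices, each "in focus" on a different vertical line.
A = np.zeros((8, 8)); A[:, 2] = 255.0
B = np.zeros((8, 8)); B[:, 6] = 255.0
fused = focus_stack([A, B])   # both lines survive in the composite
```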

  13. Documentation Service

    International Nuclear Information System (INIS)

    Charnay, J.; Chosson, L.; Croize, M.; Ducloux, A.; Flores, S.; Jarroux, D.; Melka, J.; Morgue, D.; Mottin, C.

    1998-01-01

    This service assures the processing and dissemination of scientific information, the management of the institute's scientific output, and the secretariat operation for the groups and services of the institute. The report on the documentation-library section mentions: the management of the documentation holdings; searches in international databases (INIS, Current Contents, Inspec); and the Pret-Inter service, which allows access to documents through the DEMOCRITE network of IN2P3. Also mentioned as achievements are: the setup of a video and photo database, the Web home page of the institute's library, the continued digitisation of the document holdings by integrating CD-ROMs and diskettes, electronic archiving of the scientific production, etc.

  14. Application for internal dosimetry using biokinetic distribution of photons based on nuclear medicine images*

    Science.gov (United States)

    Leal Neto, Viriato; Vieira, José Wilson; Lima, Fernando Roberto de Andrade

    2014-01-01

    Objective This article presents a way to obtain dose estimates for patients submitted to radiotherapy, based on the analysis of regions of interest on nuclear medicine images. Materials and Methods A software application called DoRadIo (Dosimetria das Radiações Ionizantes [Ionizing Radiation Dosimetry]) was developed to receive information about source organs and target organs, generating graphical and numerical results. The nuclear medicine images utilized in the present study were obtained from catalogs provided by medical physicists. The simulations were performed with computational exposure models consisting of voxel phantoms coupled with the EGSnrc Monte Carlo code. The software was developed with the Microsoft Visual Studio 2010 Service Pack, using the Windows Presentation Foundation project template and the C# programming language. Results With the mentioned tools, the authors obtained the file for optimization of Monte Carlo simulations using the EGSnrc; organization and compaction of dosimetry results with all radioactive sources; selection of regions of interest; evaluation of grayscale intensity in regions of interest; the file of weighted sources; and, finally, all the charts and numerical results. Conclusion The user interface may be adapted for use in clinical nuclear medicine as a computer-aided tool to estimate the administered activity. PMID:25741101
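
    The step of evaluating grayscale intensity in regions of interest to build a file of weighted sources can be illustrated with a toy sketch. The box ROI format and the normalisation rule below are assumptions for illustration, not DoRadIo's actual data model:

```python
import numpy as np

def roi_weights(image, rois):
    """Mean grayscale intensity per region of interest, normalised to sum
    to 1, as a stand-in for weighting source organs by their image uptake.
    The ROI format, (row0, row1, col0, col1) boxes, is an assumption."""
    means = [float(np.asarray(image)[r0:r1, c0:c1].mean())
             for r0, r1, c0, c1 in rois]
    total = sum(means)
    return [m / total for m in means]

img = np.zeros((10, 10))
img[0:5, 0:5] = 30.0     # bright "source organ"
img[5:10, 5:10] = 10.0   # fainter one
w = roi_weights(img, [(0, 5, 0, 5), (5, 10, 5, 10)])   # → [0.75, 0.25]
```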

  17. Cultural diversity: blind spot in medical curriculum documents, a document analysis.

    Science.gov (United States)

    Paternotte, Emma; Fokkema, Joanne P I; van Loon, Karsten A; van Dulmen, Sandra; Scheele, Fedde

    2014-08-22

    Cultural diversity among patients presents specific challenges to physicians. Therefore, cultural diversity training is needed in medical education. Where strategic curriculum documents form the basis of medical training, the topic of cultural diversity is expected to be included in these documents, especially if they have been recently updated. The aim of this study was to assess the current formal status of cultural diversity training in the Netherlands, a multi-ethnic country with recently updated medical curriculum documents. In February and March 2013, a document analysis was performed of strategic curriculum documents for undergraduate and postgraduate medical education in the Netherlands. All text phrases that referred to cultural diversity were extracted from these documents. Subsequently, these phrases were sorted into objectives, training methods or evaluation tools to assess how they contributed to adequate curriculum design. Of a total of 52 documents, 33 contained phrases with information about cultural diversity training. Cultural diversity aspects were more prominently described in the curriculum documents for undergraduate education than in those for postgraduate education. The most specific information about cultural diversity was found in the blueprint for undergraduate medical education. In the postgraduate curriculum documents, attention to cultural diversity differed among specialties and was mainly superficial. Cultural diversity is an underrepresented topic in the Dutch documents that form the basis for actual medical training, although the documents have been updated recently. Attention to the topic is thus not guaranteed. This situation does not meet the demand of a multi-ethnic society for doctors with cultural diversity competences. Multi-ethnic countries should be critical of the content of the bases for their medical educational curricula.

  18. SU-D-BRA-04: Computerized Framework for Marker-Less Localization of Anatomical Feature Points in Range Images Based On Differential Geometry Features for Image-Guided Radiation Therapy

    International Nuclear Information System (INIS)

    Soufi, M; Arimura, H; Toyofuku, F; Nakamura, K; Hirose, T; Umezu, Y; Shioyama, Y

    2016-01-01

    Purpose: To propose a computerized framework for localization of anatomical feature points on the patient surface in infrared-ray based range images by using differential geometry (curvature) features. Methods: The general concept was to reconstruct the patient surface by using a mathematical modeling technique for the computation of differential geometry features that characterize the local shapes of the patient surfaces. A region of interest (ROI) was firstly extracted based on a template matching technique applied on amplitude (grayscale) images. The extracted ROI was preprocessed for reducing temporal and spatial noises by using Kalman and bilateral filters, respectively. Next, a smooth patient surface was reconstructed by using a non-uniform rational basis spline (NURBS) model. Finally, differential geometry features, i.e. the shape index and curvedness features were computed for localizing the anatomical feature points. The proposed framework was trained for optimizing shape index and curvedness thresholds and tested on range images of an anthropomorphic head phantom. The range images were acquired by an infrared ray-based time-of-flight (TOF) camera. The localization accuracy was evaluated by measuring the mean of minimum Euclidean distances (MMED) between reference (ground truth) points and the feature points localized by the proposed framework. The evaluation was performed for points localized on convex regions (e.g. apex of nose) and concave regions (e.g. nasofacial sulcus). Results: The proposed framework has localized anatomical feature points on convex and concave anatomical landmarks with MMEDs of 1.91±0.50 mm and 3.70±0.92 mm, respectively. A statistically significant difference was obtained between the feature points on the convex and concave regions (P<0.001). Conclusion: Our study has shown the feasibility of differential geometry features for localization of anatomical feature points on the patient surface in range images.
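
    The shape index and curvedness features named in the abstract have standard closed forms in the principal curvatures k1, k2 (Koenderink's definitions). The sketch below assumes the sign convention that convex surfaces have positive curvature; the thresholds the framework optimises are not reproduced here:

```python
import math

def shape_index(k1, k2):
    """Koenderink shape index in [-1, 1]: +1 convex cap (e.g. nose apex),
    +0.5 ridge, 0 saddle, -0.5 rut, -1 concave cup (e.g. nasofacial sulcus).
    Assumes the convention that convex surfaces have positive curvature."""
    hi, lo = max(k1, k2), min(k1, k2)
    if hi == lo:
        # Umbilic point: take the limit (a flat plane is strictly undefined).
        return 0.0 if hi == 0 else (1.0 if hi > 0 else -1.0)
    return (2.0 / math.pi) * math.atan((hi + lo) / (hi - lo))

def curvedness(k1, k2):
    """Koenderink curvedness: overall magnitude of surface bending."""
    return math.sqrt((k1 * k1 + k2 * k2) / 2.0)

print(shape_index(0.2, 0.2))    # convex dome  -> 1.0
print(shape_index(-0.2, -0.2))  # concave cup  -> -1.0
print(shape_index(0.2, -0.2))   # saddle       -> 0.0
```

Thresholding the shape index separates convex landmarks from concave ones, while curvedness filters out nearly flat regions, which matches the convex/concave split reported in the results.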

  20. Technical and radiological image quality comparison of different liquid crystal displays for radiology

    Directory of Open Access Journals (Sweden)

    Dams FE

    2014-10-01

    Francina EM Dams, KY Esther Leung, Pieter HM van der Valk, Marc CJM Kock, Jeroen Bosman, Sjoerd P Niehof (Medical Physics and Technology, and Department of Radiology, Albert Schweitzer Hospital, Dordrecht, The Netherlands). Background: To inform cost-effective decisions in purchasing new medical liquid crystal displays, we compared the image quality of displays made by three manufacturers. Methods: We recruited 19 radiologists and residents to compare the image quality of four liquid crystal displays, including 3-megapixel Barco®, Eizo®, and NEC® displays and a 6-megapixel Barco display. The evaluators were blinded to the manufacturers' names. Technical assessments were based on acceptance criteria and test patterns proposed by the American Association of Physicists in Medicine. Radiological assessments were performed on images from the American Association of Physicists in Medicine Task Group 18. They included X-ray images of the thorax, knee, and breast, a computed tomographic image of the thorax, and a magnetic resonance image of the brain. Image quality was scored on an analog scale (range 0–10). Statistical analysis was performed with repeated-measures analysis of variance. Results: The Barco 3-megapixel display passed all acceptance criteria. The Eizo and NEC displays passed the acceptance criteria, except for the darkest pixel value in the grayscale display function. The Barco 6-megapixel display failed criteria for the maximum luminance response and the veiling glare. Mean radiological assessment scores were 7.8±1.1 (Barco 3-megapixel), 7.8±1.2 (Eizo), 8.1±1.0 (NEC), and 8.1±1.0 (Barco 6-megapixel). No significant differences were found between displays. Conclusion: According to the tested criteria, all the displays had comparable image quality; however, there was a three-fold difference in price between the most and least expensive displays. Keywords: data display, humans, radiographic image enhancement, user-computer interface

  1. Ways to integrate document management systems with industrial plant configuration management systems

    International Nuclear Information System (INIS)

    Munoz, M.

    1995-01-01

    Based on experience gained from tasks carried out for the Almaraz Nuclear Power Plant, this paper describes the computer platforms used both at the power plant and in the main offices of the engineering company. Subsequently, a description is given of the procedure followed for the continuous updating of plant documentation, in order to maintain consistency with other information stored in databases in the Operation Management System, Maintenance System, Modification Management System, etc. The work method used for the unitary updating of all information (document images and attributes corresponding to the different databases) following refuelling procedures is also described. Lastly, the paper describes the functions and the user interface of the system used in the power plant for document management. (Author)

  2. Subject (of documents)

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2017-01-01

    This article presents and discusses the concept "subject" or subject matter (of documents) as it has been examined in library and information science (LIS) for more than 100 years. Different theoretical positions are outlined, and it is found that the most important distinction is between document-oriented views and request-oriented views. The document-oriented view conceives of subject as something inherent in documents, whereas the request-oriented view (or the policy-based view) understands subject as an attribution made to documents in order to facilitate certain uses of them.

  3. Mimicking human texture classification

    NARCIS (Netherlands)

    Rogowitz, B.E.; van Rikxoort, Eva M.; van den Broek, Egon; Pappas, T.N.; Schouten, Theo E.; Daly, S.J.

    2005-01-01

    In an attempt to mimic human (colorful) texture classification by a clustering algorithm three lines of research have been encountered, in which as test set 180 texture images (both their color and gray-scale equivalent) were drawn from the OuTex and VisTex databases. First, a k-means algorithm was

  4. Einstein Observations of Galactic supernova remnants

    Science.gov (United States)

    Seward, Frederick D.

    1990-01-01

    This paper summarizes the observations of Galactic supernova remnants with the imaging detectors of the Einstein Observatory. X-ray surface brightness contours of 47 remnants are shown together with gray-scale pictures. Count rates for these remnants have been derived and are listed for the HRI, IPC, and MPC detectors.

  5. Reactive documentation system

    Science.gov (United States)

    Boehnlein, Thomas R.; Kramb, Victoria

    2018-04-01

    Proper formal documentation of computer-acquired NDE experimental data generated during research is critical to the longevity and usefulness of the data. Without documentation describing how and why the data was acquired, NDE research teams lose capabilities such as the ability to generate new information from previously collected data, or to provide adequate information so that their work can be replicated by others seeking to validate their research. Despite the critical nature of this issue, NDE data is still being generated in research labs without appropriate documentation. By generating documentation in series with data, equal priority is given to both activities during the research process. One way to achieve this is to use a reactive documentation system (RDS). RDS prompts an operator to document the data as it is generated rather than relying on the operator to decide when and what to document. This paper discusses how such a system can be implemented in a dynamic environment made up of in-house and third-party NDE data acquisition systems without creating additional burden on the operator. The reactive documentation approach presented here is agnostic enough that the principles can be applied to any operator-controlled, computer-based data acquisition system.

  6. The Role and Design of Screen Images in Software Documentation.

    Science.gov (United States)

    van der Meij, Hans

    2000-01-01

    Discussion of learning a new computer software program focuses on how to support the joint handling of a manual, input devices, and screen display. Describes a study that examined three design styles for manuals that included screen images to reduce split-attention problems, and discusses theory versus practice and cognitive load theory.

  7. Low-complexity camera digital signal imaging for video document projection system

    Science.gov (United States)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
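
    Of the DSP stages listed, white balance is the easiest to illustrate in isolation. The gray-world rule below is a generic textbook approach chosen for the sketch, not necessarily the algorithm used in the proposed camera DSP:

```python
import numpy as np

def gray_world_awb(rgb):
    """Gray-world automatic white balance: scale each channel so its mean
    matches the global mean. A generic sketch; the paper's actual
    white-balance algorithm is not specified in the abstract."""
    rgb = np.asarray(rgb, dtype=np.float64)
    means = rgb.reshape(-1, 3).mean(axis=0)   # per-channel means
    gains = means.mean() / means              # push every mean to the average
    return np.clip(rgb * gains, 0, 255).astype(np.uint8)

# A flat surface photographed with a blue cast comes out neutral.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[...] = (80, 100, 140)
balanced = gray_world_awb(img)   # all three channels end up nearly equal
```

Because the correction is a single per-channel gain, it maps naturally onto the kind of low-complexity, memory-light hardware pipeline the paper targets.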

  8. Tangible interactive system for document browsing and visualisation of multimedia data

    Science.gov (United States)

    Rytsar, Yuriy; Voloshynovskiy, Sviatoslav; Koval, Oleksiy; Deguillaume, Frederic; Topak, Emre; Startchik, Sergei; Pun, Thierry

    2006-01-01

    In this paper we introduce and develop a framework for document interactive navigation in multimodal databases. First, we analyze the main open issues of existing multimodal interfaces and then discuss two applications that include interaction with documents in several human environments, i.e., the so-called smart rooms. Second, we propose a system set-up dedicated to the efficient navigation in the printed documents. This set-up is based on the fusion of data from several modalities that include images and text. Both modalities can be used as cover data for hidden indexes using data-hiding technologies as well as source data for robust visual hashing. The particularities of the proposed robust visual hashing are described in the paper. Finally, we address two practical applications of smart rooms for tourism and education and demonstrate the advantages of the proposed solution.

  9. Glow experiment documentation of OMS/RCS pod and vertical stabilizer

    Science.gov (United States)

    1982-01-01

    Glow experiment documentation of one of the orbital maneuvering system (OMS) reaction control system (RCS) pods and a portion of the vertical stabilizer shows the chemiluminescent effect resulting from atomic oxygen impacting the spacecraft and building to the point that the atomic oxygen atoms combine to form molecules of oxygen. The image intensifier on a NIKON 35mm camera was used to record the glow.

  10. Glow experiment documentation of OMS/RCS pods and vertical stabilizer

    Science.gov (United States)

    1982-01-01

    Glow experiment documentation of orbital maneuvering system (OMS) reaction control system (RCS) pods and the vertical stabilizer shows the chemiluminescent effect resulting from atomic oxygen impacting the spacecraft and building to the point that the atomic oxygen atoms combine to form molecules of oxygen. The image intensifier on a NIKON 35mm camera was used to record the glow on the vertical tail and OMS pods.

  11. Starlink Document Styles

    Science.gov (United States)

    Lawden, M. D.

    This document describes the various styles which are recommended for Starlink documents. It also explains how to use the templates which are provided by Starlink to help authors create documents in a standard style. This paper is concerned mainly with conveying the "look and feel" of the various styles of Starlink document rather than describing the technical details of how to produce them. Other Starlink papers give recommendations for the detailed aspects of document production, design, layout, and typography. The only style that is likely to be used by most Starlink authors is the Standard style.

  12. Effects of ozone on the various digital print technologies: Photographs and documents

    Energy Technology Data Exchange (ETDEWEB)

    Burge, D; Gordeladze, N; Bigourdan, J-L; Nishimura, D, E-mail: dmbpph@rit.ed [Image Permanence Institute at Rochester Institute of Technology, 70 Lomb Memorial Drive, Rochester, NY 14623 (United States)

    2010-06-01

    The harmful effects of ozone on inkjet photographs have been well documented. This project expands on that research by performing ozone tests on a greater variety of digital prints including colour electrophotographic and dye sublimation. The sensitivities of these materials are compared to traditionally printed materials (black-and-white electrophotographic, colour photographic and offset lithographic) to determine if the digital prints require special care practices. In general, the digital prints were more sensitive to ozone than traditional prints. Dye inkjet prints were more sensitive to fade than pigment inkjet, though pigment was not immune. The dye sublimation, colour electrophotographic (dry and liquid toner), and traditional print systems were relatively resistant to ozone. Text-based documents were evaluated in addition to photographic images, since little work has been done to determine if the type of object (image or text) has an impact on its sensitivity to ozone. The results showed that documents can be more resistant to ozone than photographs even when created using the same printer and inks. It is recommended that cultural heritage institutions not expose their porous-coated, dye-based inkjet photos to open air for extended periods of time. Other inkjet prints should be monitored for early signs of change.

  14. Robust Adaptive Thresholder For Document Scanning Applications

    Science.gov (United States)

    Hsing, To R.

    1982-12-01

    In document scanning applications, thresholding is used to obtain binary data from a scanner. However, due to (1) the wide range of background colors, (2) density variations of the printed text, and (3) shading effects caused by the optical system, adaptive thresholding is highly desirable for enhancing the useful information. This paper describes a new robust adaptive thresholder for obtaining valid binary images. It is basically a memory-type algorithm which dynamically updates the black and white reference levels to optimize a local adaptive threshold function. High image quality was obtained with this algorithm for several types of simulated test patterns. The software algorithm is described, and experimental results are presented to illustrate the procedure. The results also show that the techniques described here can be used for real-time signal processing in varied applications.
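The memory-type update of black and white reference levels described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's algorithm: the exponential memory factor `alpha`, the `bias` that places the threshold between the two reference levels, and the per-scanline processing are all assumptions.

```python
import numpy as np

def adaptive_threshold(scanline, alpha=0.9, bias=0.5):
    """Sketch of a memory-type adaptive thresholder for one scanline.

    Running black and white reference levels are updated with an
    exponential "memory" (alpha); each pixel is binarized against a
    threshold placed between them.  alpha and bias are illustrative
    parameters, not values from the paper."""
    white = float(scanline.max())   # initial white reference
    black = float(scanline.min())   # initial black reference
    out = np.zeros(len(scanline), dtype=np.uint8)
    for i, p in enumerate(scanline):
        t = black + bias * (white - black)           # local threshold
        if p >= t:
            out[i] = 1                               # background (white)
            white = alpha * white + (1 - alpha) * p  # update white reference
        else:
            out[i] = 0                               # text (black)
            black = alpha * black + (1 - alpha) * p  # update black reference
    return out

# A shaded scanline: dark text pixels on a background that slowly darkens.
line = np.array([200, 198, 40, 196, 190, 35, 185, 180, 30, 175])
print(adaptive_threshold(line))
```

Because the references track the signal, the threshold follows slow background shading instead of being fixed once per page.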

  15. Digital enhancement of computerized axial tomograms

    Science.gov (United States)

    Roberts, E., Jr.

    1978-01-01

    A systematic evaluation has been conducted of certain digital image enhancement techniques performed in image space. Three types of images have been used: computer-generated phantoms, tomograms of a synthetic phantom, and axial tomograms of human anatomy containing images of lesions artificially introduced into the tomograms. Several types of smoothing, sharpening, and histogram modification have been explored. It has been concluded that the most useful enhancement techniques are a selective smoothing of singular picture elements combined with contrast manipulation. The most useful tool in applying these techniques is the gray-scale histogram.
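The two techniques singled out as most useful, selective smoothing of singular picture elements and histogram-guided contrast manipulation, can be sketched as follows. The percentile limits, the 8-neighbor deviation test, and the factor `k` are illustrative choices, not parameters from the study.

```python
import numpy as np

def contrast_stretch(img, lo_pct=2, hi_pct=98):
    """Histogram-guided contrast manipulation (a sketch, not the
    paper's exact method): stretch the gray levels between two
    percentiles of the gray-scale histogram to the full [0, 1] range."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    out = (img.astype(float) - lo) / max(hi - lo, 1e-9)
    return np.clip(out, 0.0, 1.0)

def smooth_singular_pixels(img, k=3.0):
    """Selective smoothing: replace only pixels that deviate from
    their 8-neighbor mean by more than k local standard deviations,
    leaving all other detail untouched."""
    out = img.astype(float).copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            win = img[y-1:y+2, x-1:x+2].astype(float)
            nbrs = (win.sum() - win[1, 1]) / 8.0          # 8-neighbor mean
            if abs(win[1, 1] - nbrs) > k * (win.std() + 1e-9):
                out[y, x] = nbrs                          # singular pixel
    return out
```

Only isolated outliers are replaced, which is why this kind of smoothing preserves edges better than a uniform low-pass filter.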

  16. Document understanding for a broad class of documents

    NARCIS (Netherlands)

    Aiello, Marco; Monz, Christof; Todoran, Leon; Worring, Marcel

    2002-01-01

    We present a document analysis system able to assign logical labels and extract the reading order in a broad set of documents. All information sources, from geometric features and spatial relations to the textual features and content are employed in the analysis. To deal effectively with these

  17. Robust non-local median filter

    Science.gov (United States)

    Matsuoka, Jyohei; Koga, Takanori; Suetake, Noriaki; Uchino, Eiji

    2017-04-01

    This paper describes a novel image filter with superior performance on detail-preserving removal of random-valued impulse noise superimposed on natural gray-scale images. The non-local means filter has attracted attention as a Gaussian noise removal method with superior detail preservation. Building on the fundamental concept of non-local means, we previously proposed a non-local median filter specialized for random-valued impulse noise removal. In non-local processing, the output of a filter is calculated from pixels in blocks that are similar to the block centered at the pixel of interest. As a result, aggressive noise removal is conducted without destroying the detailed structures of the original image. However, the performance of non-local processing degrades severely when the noise occurrence probability is high, because the superimposed noise disturbs accurate calculation of the similarity between blocks. To cope with this problem, we propose an improved non-local median filter that is robust to high levels of corruption by introducing a new similarity measure that accounts for the possibility that each pixel retains its original value. The effectiveness and validity of the proposed method are verified in a series of experiments using natural gray-scale images.
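The core non-local median idea (the output at each pixel is the median over the centre pixels of the most similar blocks in a search window) can be sketched as below. The patch and search sizes and the plain sum-of-absolute-differences similarity are illustrative; the paper's actual contribution, a robust similarity measure that discounts pixels likely to be impulses, is deliberately not reproduced here.

```python
import numpy as np

def nl_median(img, patch=1, search=2, n_similar=5):
    """Minimal non-local median sketch: for each pixel, rank the
    blocks in a local search window by similarity to the block around
    the pixel of interest, then output the median of the centre
    pixels of the most similar blocks."""
    H, W = img.shape
    pad = patch + search
    p = np.pad(img.astype(float), pad, mode='reflect')
    out = np.empty_like(img, dtype=float)
    for y in range(H):
        for x in range(W):
            cy, cx = y + pad, x + pad
            ref = p[cy-patch:cy+patch+1, cx-patch:cx+patch+1]
            cands = []
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    blk = p[cy+dy-patch:cy+dy+patch+1,
                            cx+dx-patch:cx+dx+patch+1]
                    dist = np.abs(blk - ref).sum()     # block similarity
                    cands.append((dist, p[cy+dy, cx+dx]))
            cands.sort(key=lambda t: t[0])
            out[y, x] = np.median([v for _, v in cands[:n_similar]])
    return out
```

Taking the median over several similar blocks is what lets an isolated impulse be outvoted even when it sits at the pixel of interest.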

  18. Feature Extraction For Application of Heart Abnormalities Detection Through Iris Based on Mobile Devices

    Directory of Open Access Journals (Sweden)

    Entin Martiana Kusumaningtyas

    2018-01-01

    Full Text Available As the WHO reports, heart disease is the leading cause of death, and examining it with the methods currently used in hospitals is not cheap. Iridology is one of the most popular alternative ways to assess the condition of organs. It enables a health practitioner or non-expert to study signs in the iris that can reveal abnormalities in the body, including basic genetics, toxin deposition, circulatory congestion, and other weaknesses. Research on computer iridology has been done before, including a computerized iridology system to detect heart conditions. The pipeline comprises several stages: eye capture aimed at the target region, pre-processing, cropping, segmentation, feature extraction, and classification using thresholding algorithms. In this study, feature extraction is performed by binarization, transforming the image into black and white. We compare two binarization approaches: binarization based on grayscale images and binarization based on proximity. The proposed system was tested at the Mugi Barokah Clinic, Surabaya. We conclude that the grayscale-based approach yields better classification than the proximity-based one.
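The grayscale-based binarization step can be sketched as follows. The BT.601 luminance weights and the fixed threshold are illustrative assumptions, not the study's tuned values.

```python
import numpy as np

def to_grayscale(rgb):
    """Luminance-weighted grayscale conversion (ITU-R BT.601 weights)."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def binarize(gray, threshold=128):
    """Transform the image into black (0) and white (255) by
    thresholding the grayscale values, as in the grayscale-based
    binarization the study compares.  The fixed threshold is an
    illustrative choice."""
    return np.where(gray >= threshold, 255, 0).astype(np.uint8)
```

In practice the threshold would be chosen per image (e.g. from the histogram) rather than fixed.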

  19. Remote Sensing and Imaging Physics

    Science.gov (United States)

    2012-03-07

    Program Manager, AFOSR/RSE, Air Force Research Laboratory. Remote Sensing and Imaging Physics, 7 March 2012. Topics covered: Imaging of Space Objects; Information without Imaging; Predicting the Location of Space Objects; Remote Sensing in Extreme Conditions; Propagation.

  20. Query by image example: The CANDID approach

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, P.M.; Cannon, M. [Los Alamos National Lab., NM (United States). Computer Research and Applications Group; Hush, D.R. [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Electrical and Computer Engineering

    1995-02-01

    CANDID (Comparison Algorithm for Navigating Digital Image Databases) was developed to enable content-based retrieval of digital imagery from large databases using a query-by-example methodology. A user provides an example image to the system, and images in the database that are similar to that example are retrieved. The development of CANDID was inspired by the N-gram approach to document fingerprinting, where a "global signature" is computed for every document in a database and these signatures are compared to one another to determine the similarity between any two documents. CANDID computes a global signature for every image in a database, where the signature is derived from various image features such as localized texture, shape, or color information. A distance between probability density functions of feature vectors is then used to compare signatures. In this paper, the authors present CANDID and highlight two results from their current research: subtracting a "background" signature from every signature in a database in an attempt to improve system performance when using inner-product similarity measures, and visualizing the contribution of individual pixels in the matching process. These ideas are applicable to any histogram-based comparison technique.
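A toy version of the signature idea, using a normalized gray-level histogram in place of CANDID's density estimate over texture, shape, and color features, and an inner-product (cosine) measure as the comparison:

```python
import numpy as np

def signature(img, bins=16):
    """Global image signature as a normalized gray-level histogram --
    a simplified stand-in for CANDID's probability density over
    per-pixel feature vectors."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

def similarity(sig_a, sig_b):
    """Inner-product (cosine) similarity between two signatures, one
    of the comparison measures discussed for CANDID: 1.0 for
    identical signatures, 0.0 for non-overlapping ones."""
    return float(np.dot(sig_a, sig_b) /
                 (np.linalg.norm(sig_a) * np.linalg.norm(sig_b)))
```

Query-by-example then reduces to ranking all database signatures by their similarity to the query's signature.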

  1. Criteria Document for B-plant's Surveillance and Maintenance Phase Safety Basis Document

    International Nuclear Information System (INIS)

    SCHWEHR, B.A.

    1999-01-01

    This document is required by the Project Hanford Managing Contractor (PHMC) procedure, HNF-PRO-705, Safety Basis Planning, Documentation, Review, and Approval. This document specifies the criteria that shall be in the B Plant surveillance and maintenance phase safety basis in order to obtain approval of the DOE-RL. This CD describes the criteria to be addressed in the S and M Phase safety basis for the deactivated Waste Fractionization Facility (B Plant) on the Hanford Site in Washington state. This criteria document describes: the document type and format that will be used for the S and M Phase safety basis, the requirements documents that will be invoked for the document development, the deactivated condition of the B Plant facility, and the scope of issues to be addressed in the S and M Phase safety basis document

  2. Structured diagnostic imaging in patients with multiple trauma

    International Nuclear Information System (INIS)

    Linsenmaier, U.; Rieger, J.; Rock, C.; Pfeifer, K.J.; Reiser, M.; Kanz, K.G.

    2002-01-01

    Purpose. To develop a concept for structured diagnostic imaging in patients with multiple trauma. Material and methods. Data from a prospective trial with over 2400 documented multiple-trauma patients were evaluated. All diagnostic and therapeutic steps, primary and secondary deaths, and the 90-day lethality were documented. Structured diagnostic imaging of multiply injured patients requires the integration of an experienced radiologist into an interdisciplinary trauma team consisting of anesthesia, radiology, and trauma surgery. Radiology itself requires standardized concepts for equipment, personnel, and logistics to provide diagnostic imaging with 24-hour coverage at constant quality. Results. This paper describes criteria for initiating shock room or emergency room treatment, strategies for documentation, and interdisciplinary algorithms for early clinical care that coordinate diagnostic imaging and therapeutic procedures following standardized guidelines. Diagnostic imaging consists of basic diagnosis, the radiological ABC rule, radiological follow-up, and structured organ diagnosis using CT. Radiological trauma scoring allows improved quality control of the diagnosis and therapy of multiply injured patients. Conclusion. Structured diagnostic imaging of multiply injured patients leads to standardization of diagnosis and therapy and ensures constant process quality. (orig.) [de

  3. INCREASE OF STABILITY AT JPEG COMPRESSION OF THE DIGITAL WATERMARKS EMBEDDED IN STILL IMAGES

    Directory of Open Access Journals (Sweden)

    V. A. Batura

    2015-07-01

    Full Text Available Subject of Research. The paper deals with the creation and study of a method for increasing the robustness to JPEG compression of digital watermarks embedded in still images. Method. A new digital watermarking algorithm for still images is presented, which embeds the watermark by modifying frequency coefficients of the discrete Hadamard transform. The frequency coefficients used for embedding are chosen for the sharp change of their values after modification at maximum JPEG compression. The blocks of pixels used for embedding are chosen by the value of their entropy. The new algorithm was analyzed for resistance to image compression, noising, filtering, resizing, color change, and histogram equalization. The Elham algorithm, which possesses good resistance to JPEG compression, was chosen for comparative analysis. Nine gray-scale images were selected as objects for protection. The imperceptibility of the embedded distortions was assessed by the peak signal-to-noise ratio, which should be no lower than 43 dB. The robustness of the embedded watermark was determined by the Pearson correlation coefficient, whose value should not fall below 0.5 for the minimum allowed stability. The computing experiment comprises: embedding the watermark into each test image with the new algorithm and the Elham algorithm; introducing distortions into the object of protection; and extracting the embedded information and comparing it with the original. Parameters of the algorithms were chosen so as to introduce approximately the same level of distortion into the images. Main Results. The method of preliminary processing of the digital watermark presented in the paper makes it possible to significantly reduce the volume of information embedded in the still image. The results of the numerical experiment have shown that the
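The embedding principle described, modifying a Hadamard-domain coefficient of a pixel block, can be sketched as below. The coefficient position, embedding strength, and sign-based extraction are illustrative assumptions; the paper's selection of coefficients by their behaviour under maximal JPEG compression, and of blocks by entropy, is not modeled here.

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def embed_bit(block, bit, coeff=(5, 5), strength=64.0):
    """Toy embedding in the spirit of the described scheme: transform
    an 8x8 block with the Hadamard transform and force the sign of
    one mid-frequency coefficient to carry the bit."""
    H = hadamard(8)
    F = H @ block @ H / 8.0                    # forward transform (H H = 8 I)
    F[coeff] = strength if bit else -strength  # overwrite one coefficient
    return H @ F @ H / 8.0                     # inverse transform

def extract_bit(block, coeff=(5, 5)):
    """Recover the bit from the sign of the chosen coefficient."""
    H = hadamard(8)
    F = H @ block @ H / 8.0
    return int(F[coeff] > 0)
```

A sign-based payload of this kind survives moderate coefficient perturbation, which is the intuition behind choosing coefficients that behave predictably under JPEG quantization.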

  4. Obtaining and Using Images in the Clinical Setting

    International Nuclear Information System (INIS)

    Cendales, Ricardo

    2009-01-01

    Currently, small electronic devices capable of producing high-quality images are widely available, and their use has become common in the clinical setting, as medical images are a useful tool for documenting clinical conditions relevant to patient diagnosis, treatment, and follow-up. Clinical images also serve legal, scientific, and academic purposes. Such widespread practice without proper ethical guidelines may pose a significant risk to the protection of patient rights and to clinical practice. This document discusses the risks and duties involved in obtaining medical images, and presents arguments on institutional and professional responsibilities in defining policies to protect patient privacy and dignity.

  5. Assessment of the systemic distribution of a bioconjugated anti-Her2 magnetic nanoparticle in a breast cancer model by means of magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Huerta-Núñez, L. F. E., E-mail: lidi-huerta@hotmail.com [Universidad del Ejercito y FAM/EMGS-Laboratorio Multidisciplinario de Investigación (Mexico); Villanueva-Lopez, G. Cleva, E-mail: villanuevacleva3@gmail.com [Instituto Politécnico Nacional-Escuela Superior de Medicina-Sección Investigación y Posgrado (Mexico); Morales-Guadarrama, A., E-mail: amorales@ci3m.mx [Centro Nacional de Investigacion en Imagenologia e Instrumentacion Medica-Universidad Autónoma (Mexico); Soto, S., E-mail: cuadrosdobles@hotmail.com; López, J., E-mail: jaimelocr@hotmail.com; Silva, J. G., E-mail: gabrielsilva173@gmail.com [Universidad del Ejercito y FAM/EMGS-Laboratorio Multidisciplinario de Investigación (Mexico); Perez-Vielma, N., E-mail: nadiampv@gmail.com [Instituto Politécnico Nacional - Centro Interdisciplinario de Ciencias de la Salud Unidad Santo Tomás (CICS-UST) (Mexico); Sacristán, E., E-mail: esacristan@ci3m.mx [Centro Nacional de Investigacion en Imagenologia e Instrumentacion Medica-Universidad Autónoma (Mexico); Gudiño-Zayas, Marco E., E-mail: gudino@unam.mx [UNAM, Departamento de Medicina Experimental, Facultad de Medicina (Mexico); González, C. A., E-mail: cgonzalezd@ipn.mx [Universidad del Ejercito y FAM/EMGS-Laboratorio Multidisciplinario de Investigación (Mexico)

    2016-09-15

    The aim of this study was to determine the systemic distribution of magnetic nanoparticles of 100 nm diameter (MNPs) coupled to a specific monoclonal antibody anti-Her2 in an experimental breast cancer (BC) model. The study was performed in two groups of Sprague–Dawley rats: control (n = 6) and BC chemically induced (n = 3). Bioconjugated “anti-Her2-MNPs” were intravenously administered, and magnetic resonance imaging (MRI) monitored its systemic distribution at seven times after administration. Non-heme iron presence associated with the location of the bioconjugated anti-Her2-MNPs in splenic, hepatic, cardiac and tumor tissues was detected by Perl’s Prussian blue (PPB) stain. Optical density measurements were used to semiquantitatively determine the iron presence in tissues on the basis of a grayscale values integration of T1 and T2 MRI sequence images. The results indicated a delayed systemic distribution of MNPs in cancer compared to healthy conditions with a maximum concentration of MNPs in cancer tissue at 24 h post-infusion.

  6. Dynamical System Approach for Edge Detection Using Coupled FitzHugh-Nagumo Neurons.

    Science.gov (United States)

    Li, Shaobai; Dasmahapatra, Srinandan; Maharatna, Koushik

    2015-12-01

    The prospect of emulating the impressive computational capabilities of biological systems has led to considerable interest in the design of analog circuits that are potentially implementable in very-large-scale integration CMOS technology and are guided by biologically motivated models. For example, simple image processing tasks, such as the detection of edges in binary and grayscale images, have been performed by networks of FitzHugh-Nagumo-type neurons using reaction-diffusion models. However, in these studies, the one-to-one mapping of image pixels to component neurons makes the size of the network a critical factor in any such implementation. In this paper, we develop a simplified version of the employed reaction-diffusion model in three steps. In the first step, we perform a detailed study to locate the threshold for excitation using continuous Lyapunov exponents from dynamical systems theory. Next, we render the diffusion in the system anisotropic, with the degree of anisotropy set by the gradients of the grayscale values in each image. The final step simplifies the model by eliminating the terms that couple the membrane potentials of adjacent neurons. We apply our technique to detect edges in data sets of artificially generated and real images, and we demonstrate that the performance is as good as, if not better than, that of previous methods, without increasing the size of the network.
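The final simplification, decoupled FitzHugh-Nagumo neurons driven by grayscale gradients, can be sketched as follows. This is a sketch of the idea only, not the circuit model studied in the paper: the FHN parameters are textbook values, and the normalization of the gradient input is an assumption.

```python
import numpy as np

def fhn_edges(img, t_steps=2000, dt=0.05, gain=1.0):
    """Decoupled FitzHugh-Nagumo edge sketch: each pixel drives one
    FHN neuron with a current proportional to the local grayscale
    gradient magnitude (normalized to [0, 1]); pixels whose neuron
    spikes are marked as edges."""
    gy, gx = np.gradient(img.astype(float))
    g = np.hypot(gx, gy)
    I = gain * g / (g.max() + 1e-9)      # suprathreshold only at strong edges
    v = np.full(img.shape, -1.2)         # membrane potential (near rest)
    w = np.full(img.shape, -0.6)         # recovery variable
    spiked = np.zeros(img.shape, dtype=bool)
    a, b, eps = 0.7, 0.8, 0.08           # textbook FHN parameters
    for _ in range(t_steps):
        v = v + dt * (v - v**3 / 3 - w + I)
        w = w + dt * eps * (v + a - b * w)
        spiked |= v > 1.0                # excursion past 1.0 counts as a spike
    return spiked
```

Pixels in flat regions receive a subthreshold current and stay at rest, while pixels on strong gradients push their neuron past the excitation threshold, which is the all-or-none behaviour the threshold study above locates.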

  7. Comparisons of images simultaneously documented by digital subtraction coronary arteriography and cine coronary arteriography

    International Nuclear Information System (INIS)

    Kimura, Koji; Takamiya, Makoto; Yamamoto, Kazuo; Ohta, Mitsushige; Naito, Hiroaki

    1988-01-01

    Using an angiography apparatus capable of simultaneously processing digital subtraction angiograms and cine angiograms, the diagnostic capabilities of both methods for the coronary arteries (DSCAG and Cine-CAG) were compared. Twenty stenotic lesions of the coronary arteries of 11 patients were evaluated using both modalities. The severity of stenosis using DSCAG with a 512x512x8 bit matrix was semiautomatically measured on the cathode ray tube (CRT) from enlarged images on the screen of a Vanguard cine projector, which were either the same size as, or 10 times larger than, the Cine-CAG images. The negative and positive hard copies of DSCAG images were also compared with those of Cine-CAG. The correlation coefficients of the severity of stenosis by DSCAG and Cine-CAG were as follows: (1) same-size DSCAG images on CRT to Cine-CAG, 0.95; (2) 10-times-enlarged DSCAG images on CRT to Cine-CAG, 0.96; and (3) same-size DSCAG images on negative and positive hard copies to Cine-CAG, 0.97. The semiautomatically measured values of the 10-times-enlarged DSCAG images on CRT and the manually measured values of the same-size negative and positive DSCAG hard copies closely correlated with the values measured using Cine-CAG. When the liver was superimposed in the long-axis projection, the diagnostic capabilities of DSCAG and Cine-CAG were compared; the materials included 10 left coronary arteriograms and 11 right coronary arteriograms. Diagnostically, DSCAG was more useful than Cine-CAG in the long-axis projection. (author)

  8. Generic safety documentation model

    International Nuclear Information System (INIS)

    Mahn, J.A.

    1994-04-01

    This document is intended to be a resource for preparers of safety documentation for Sandia National Laboratories, New Mexico facilities. It provides standardized discussions of some topics that are generic to most, if not all, Sandia/NM facility safety documents. The material provides a "core" upon which to develop facility-specific safety documentation. Use of the information in this document will reduce the cost of safety document preparation and improve the consistency of information.

  9. WIPP documentation plan

    International Nuclear Information System (INIS)

    Plung, D.L.; Montgomery, T.T.; Glasstetter, S.R.

    1986-01-01

    In support of the programs at the Waste Isolation Pilot Plant (WIPP), the Publications and Procedures Section developed a documentation plan that provides an integrated document hierarchy. This plan also affords several unique features: (1) the format for procedures minimizes the writing responsibilities of the technical staff and maximizes use of the writing and editing staff; (2) review cycles have been structured to expedite the processing of documents; and (3) the number of documents needed to support the program has been appreciably reduced.

  10. Level-set segmentation of pulmonary nodules in megavolt electronic portal images using a CT prior

    International Nuclear Information System (INIS)

    Schildkraut, J. S.; Prosser, N.; Savakis, A.; Gomez, J.; Nazareth, D.; Singh, A. K.; Malhotra, H. K.

    2010-01-01

    Purpose: Pulmonary nodules present unique problems during radiation treatment due to nodule position uncertainty that is caused by respiration. The radiation field has to be enlarged to account for nodule motion during treatment. The purpose of this work is to provide a method of locating a pulmonary nodule in a megavolt portal image that can be used to reduce the internal target volume (ITV) during radiation therapy. A reduction in the ITV would result in a decrease in radiation toxicity to healthy tissue. Methods: Eight patients with non-small cell lung cancer were used in this study. CT scans that include the pulmonary nodule were captured with a GE Healthcare LightSpeed RT 16 scanner. Megavolt portal images were acquired with a Varian Trilogy unit equipped with an AS1000 electronic portal imaging device. The nodule localization method uses grayscale morphological filtering and level-set segmentation with a prior. The treatment-time portion of the algorithm is implemented on a graphics processing unit. Results: The method was retrospectively tested on eight cases that include a total of 151 megavolt portal image frames. The method reduced the nodule position uncertainty by an average of 40% for seven out of the eight cases. The treatment phase portion of the method has a subsecond execution time that makes it suitable for near-real-time nodule localization. Conclusions: A method was developed to localize a pulmonary nodule in a megavolt portal image. The method uses the characteristics of the nodule in a prior CT scan to enhance the nodule in the portal image and to identify the nodule region by level-set segmentation. In a retrospective study, the method reduced the nodule position uncertainty by an average of 40% for seven out of the eight cases studied.
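One plausible form of the grayscale morphological filtering step is a white top-hat, which enhances bright structures smaller than the structuring element. The abstract does not publish the exact filter, so the top-hat choice and the element size are assumptions.

```python
import numpy as np

def erode(img, size=3):
    """Grayscale erosion with a size x size square structuring element."""
    r = size // 2
    p = np.pad(img, r, mode='edge')
    H, W = img.shape
    out = np.empty((H, W))
    for y in range(H):
        for x in range(W):
            out[y, x] = p[y:y+size, x:x+size].min()
    return out

def dilate(img, size=3):
    """Grayscale dilation with a size x size square structuring element."""
    r = size // 2
    p = np.pad(img, r, mode='edge')
    H, W = img.shape
    out = np.empty((H, W))
    for y in range(H):
        for x in range(W):
            out[y, x] = p[y:y+size, x:x+size].max()
    return out

def white_top_hat(img, size=3):
    """Opening (erosion then dilation) removes bright structures
    smaller than the element; subtracting the opening from the image
    leaves exactly those small bright structures, enhancing a compact
    nodule against larger anatomy."""
    opening = dilate(erode(img, size), size)
    return img - opening
```

The enhanced image would then serve as input to the level-set segmentation, with the prior CT defining the expected nodule appearance.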

  11. Registration for Optical Multimodal Remote Sensing Images Based on FAST Detection, Window Selection, and Histogram Specification

    Directory of Open Access Journals (Sweden)

    Xiaoyang Zhao

    2018-04-01

    Full Text Available In recent years, digital frame cameras have been increasingly used for remote sensing applications. However, it is always a challenge to align or register images captured with different cameras or different imaging sensor units. In this research, a novel registration method was proposed. Coarse registration was first applied to approximately align the sensed and reference images. Window selection was then used to reduce the search space, and histogram specification was applied to optimize the grayscale similarity between the images. After comparisons with other commonly-used detectors, the fast corner detector, FAST (Features from Accelerated Segment Test), was selected to extract the feature points. The matching point pairs were then detected between the images, the outliers were eliminated, and geometric transformation was performed. The appropriate window size was searched for and set to one-tenth of the image width. The images that were acquired by a two-camera system, a camera with five imaging sensors, and a camera with replaceable filters, mounted on a manned aircraft, an unmanned aerial vehicle, and a ground-based platform, respectively, were used to evaluate the performance of the proposed method. The image analysis results showed that, through the appropriate window selection and histogram specification, the number of correctly matched point pairs increased by 11.30 times, and the correct matching rate increased by 36%, compared with the results based on FAST alone. The root mean square error (RMSE) in the x and y directions was generally within 0.5 pixels. In comparison with binary robust invariant scalable keypoints (BRISK), curvature scale space (CSS), Harris, speeded-up robust features (SURF), and the commercial software ERDAS and ENVI, this method resulted in larger numbers of correct matching pairs and smaller, more consistent RMSE. Furthermore, it was not necessary to choose any tie control points manually before registration.
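The histogram specification step, remapping the sensed image's gray levels so its cumulative histogram matches the reference image's, is a standard technique and can be sketched directly:

```python
import numpy as np

def histogram_specification(sensed, reference, levels=256):
    """Classic histogram specification: remap the sensed image's gray
    levels so its cumulative histogram matches the reference image's,
    improving grayscale similarity between multimodal images before
    feature matching (as in the registration pipeline above)."""
    s_hist, _ = np.histogram(sensed, bins=levels, range=(0, levels))
    r_hist, _ = np.histogram(reference, bins=levels, range=(0, levels))
    s_cdf = np.cumsum(s_hist) / sensed.size
    r_cdf = np.cumsum(r_hist) / reference.size
    # For each sensed level, find the reference level with the closest CDF.
    mapping = np.searchsorted(r_cdf, s_cdf).clip(0, levels - 1)
    return mapping[sensed.astype(np.int64)]
```

Matching the gray-level distributions first is what lets an intensity-based detector like FAST find corresponding corners across different sensors.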

  12. Preliminary application of Structure from Motion and GIS to document decomposition and taphonomic processes.

    Science.gov (United States)

    Carlton, Connor D; Mitchell, Samantha; Lewis, Patrick

    2018-01-01

    Over the past decade, Structure from Motion (SfM) has increasingly been used as a means of digital preservation and for documenting archaeological excavations, architecture, and cultural material. However, few studies have tapped the potential of using SfM to document and analyze taphonomic processes affecting burials for forensic science purposes. This project utilizes SfM models to elucidate specific post-depositional events that affected a series of three human cadavers deposited at the South East Texas Applied Forensic Science Facility (STAFS). The aim of this research was to test the ability of untrained researchers to employ spatial software and photogrammetry for data collection. For three months, a single-lens reflex (SLR) camera was used to capture series of overlapping images at periodic stages in the decomposition of each cadaver. These images were processed with photogrammetric software that creates a 3D model that can be measured, manipulated, and viewed. Photogrammetric and geospatial software were then used to map changes in decomposition and movement of the bodies from their original deposition points. Project results indicate that SfM and GIS are useful tools for documenting decomposition and taphonomic processes, and that photogrammetry is an efficient, relatively simple, and affordable means of documentation. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. 2002 reference document; Document de reference 2002

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-07-01

    This 2002 reference document of the Areva group provides information on the company. Organized in seven chapters, it presents: the persons responsible for the reference document and for auditing the financial statements; information pertaining to the transaction; general information on the company and its share capital; information on company operations, changes, and future prospects; assets, financial position, and financial performance; information on company management and the executive and supervisory boards; and recent developments and future prospects. (A.L.B.)

  14. Differential diagnosis of idiopathic granulomatous mastitis and breast cancer using acoustic radiation force impulse imaging.

    Science.gov (United States)

    Teke, Memik; Teke, Fatma; Alan, Bircan; Türkoğlu, Ahmet; Hamidi, Cihad; Göya, Cemil; Hattapoğlu, Salih; Gumus, Metehan

    2017-01-01

    Differentiation of idiopathic granulomatous mastitis (IGM) from carcinoma with routine imaging methods, such as ultrasonography (US) and mammography, is difficult. Therefore, we evaluated the value of a newly developed noninvasive technique called acoustic radiation force impulse imaging in differentiating IGM versus malignant lesions in the breast. Four hundred and eighty-six patients, who were referred to us with a presumptive diagnosis of a mass, underwent Virtual Touch tissue imaging (VTI; Siemens) and Virtual Touch tissue quantification (VTQ; Siemens) after conventional gray-scale US. US-guided percutaneous needle biopsy was then performed on 276 lesions with clinically and radiologically suspicious features. Malignant lesions (n = 122) and IGM (n = 48) were included in the final study group. There was a statistically significant difference in shear wave velocity marginal and internal values between the IGM and malignant lesions. The median marginal velocity for IGM and malignant lesions was 3.19 m/s (minimum-maximum 2.49-5.82) and 5.05 m/s (minimum-maximum 2.09-8.46), respectively (p < 0.001). The median internal velocity for IGM and malignant lesions was 2.76 m/s (minimum-maximum 1.14-4.12) and 4.79 m/s (minimum-maximum 2.12-8.02), respectively (p < 0.001). The combination of VTI and VTQ as a complement to conventional US provides viscoelastic properties of tissues, and thus has the potential to increase the specificity of US.

  15. Comparison between MDCT and Grayscale IVUS in a Quantitative Analysis of Coronary Lumen in Segments with or without Atherosclerotic Plaques

    Energy Technology Data Exchange (ETDEWEB)

    Falcão, João L. A. A.; Falcão, Breno A. A. [Heart Institute (InCor), University of São Paulo Medical School (USP), São Paulo, SP (Brazil); Gurudevan, Swaminatha V. [Cedars-Sinai Heart Institute, Los Angeles, California, USA (United States); Campos, Carlos M.; Silva, Expedito R.; Kalil-Filho, Roberto; Rochitte, Carlos E.; Shiozaki, Afonso A.; Coelho-Filho, Otavio R.; Lemos, Pedro A. [Heart Institute (InCor), University of São Paulo Medical School (USP), São Paulo, SP (Brazil)

    2015-04-15

    The diagnostic accuracy of 64-slice MDCT in comparison with IVUS has been poorly described and is mainly restricted to reports analyzing segments with documented atherosclerotic plaques. We compared 64-slice multidetector computed tomography (MDCT) with gray scale intravascular ultrasound (IVUS) for the evaluation of coronary lumen dimensions in the context of a comprehensive analysis, including segments with absent or mild disease. The 64-slice MDCT was performed within 72 h before the IVUS imaging, which was obtained for at least one coronary, regardless of the presence of luminal stenosis at angiography. A total of 21 patients were included, with 70 imaged vessels (total length 114.6 ± 38.3 mm per patient). A coronary plaque was diagnosed in segments with plaque burden > 40%. At patient, vessel, and segment levels, average lumen area, minimal lumen area, and minimal lumen diameter were highly correlated between IVUS and 64-slice MDCT (p < 0.01). However, 64-slice MDCT tended to underestimate the lumen size with a relatively wide dispersion of the differences. The comparison between 64-slice MDCT and IVUS lumen measurements was not substantially affected by the presence or absence of an underlying plaque. In addition, 64-slice MDCT showed good global accuracy for the detection of IVUS parameters associated with flow-limiting lesions. In a comprehensive, multi-territory, and whole-artery analysis, the assessment of coronary lumen by 64-slice MDCT compared with coronary IVUS showed a good overall diagnostic ability, regardless of the presence or absence of underlying atherosclerotic plaques.

  16. Apparatus-Program Complexes Processing and Creation of Essentially non-Format Documents on the Basis of Technology Auto-Adaptive Fonts

    Directory of Open Access Journals (Sweden)

    E. G. Andrianova

    2014-01-01

    Full Text Available The need to translate paper documents into electronic form has demanded the development of methods and algorithms for the automatic processing and web publishing of unformatted graphic documents in on-line libraries. Translating scanned images into modern electronic document formats with OCR programs faces serious difficulties, connected with the non-standard fonts and designs of printed documents and with the need to preserve the original appearance of such documents in electronic form. The article discusses the possibility of building an extensible adaptive dictionary of the graphic objects that constitute unformatted graphic documents. The dictionary is automatically adjusted during processing as statistical information accumulates for each new document. This adaptive, extensible dictionary of graphic letters, fonts, and other objects for automated document processing is called an "auto-adaptive font", and the set of methods for applying it is named "auto-adaptive font technology". Based on the theory of estimation algorithms, a mathematical model is designed that represents all objects of an unformatted graphic document in a unified manner, builds a feature vector for each object, and evaluates the similarity of these objects in a selected metric. An algorithm for adaptive modeling of graphic images is developed, and a criterion for combining similar objects into one dictionary element is offered, allowing construction of the software core of a hardware-software complex for processing unformatted graphic documents. A standard block diagram of such a hardware-software complex is developed, and the article describes all of its blocks, including the document processing station and its interaction with the web server that publishes the electronic documents.
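The extensible adaptive dictionary can be sketched as below: each glyph image is reduced to a feature vector, compared against stored prototypes in a chosen metric, and either merged with the most similar prototype or added as a new element. The coarse 4x4 occupancy feature, the Euclidean metric, and the similarity threshold are illustrative stand-ins for the estimation-algorithm machinery of the paper.

```python
import numpy as np

class AdaptiveGlyphDictionary:
    """Sketch of an auto-adaptive font dictionary that grows and
    adapts as glyphs are processed."""

    def __init__(self, threshold=0.25):
        self.threshold = threshold
        self.prototypes = []          # list of (feature_vector, count)

    @staticmethod
    def feature(glyph):
        """Reduce a glyph image to a normalized 4x4 occupancy grid --
        an illustrative feature, not the paper's."""
        g = np.asarray(glyph, dtype=float)
        H, W = g.shape
        f = np.array([g[i*H//4:(i+1)*H//4, j*W//4:(j+1)*W//4].mean()
                      for i in range(4) for j in range(4)])
        return f / (np.linalg.norm(f) + 1e-9)

    def add(self, glyph):
        """Return the index of the matching prototype, adding a new
        one (or adapting an existing one) as glyphs accumulate."""
        f = self.feature(glyph)
        for i, (proto, count) in enumerate(self.prototypes):
            if np.linalg.norm(f - proto) < self.threshold:
                # adapt the prototype toward the new sample
                merged = (proto * count + f) / (count + 1)
                self.prototypes[i] = (merged, count + 1)
                return i
        self.prototypes.append((f, 1))
        return len(self.prototypes) - 1
```

Repeated glyphs of the same letter then collapse onto one dictionary element whose index can stand in for the glyph in the stored document, which is the compression-and-fidelity idea behind the auto-adaptive font.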

  17. Analysis of image acquisition, post-processing and documentation in adolescents with spine injuries. Comparison before and after referral to a university hospital; Bildgebung bei wirbelsaeulenverletzten Kindern und jungen Erwachsenen. Eine Analyse von Umfeld, Standards und Wiederholungsuntersuchungen bei Patientenverlegungen

    Energy Technology Data Exchange (ETDEWEB)

    Lemburg, S.P.; Roggenland, D.; Nicolas, V.; Heyer, C.M. [Berufsgenossenschaftliches Universitaetsklinikum Bergmannsheil, Bochum (Germany). Inst. fuer Diagnostische Radiologie, Interventionelle Radiologie und Nuklearmedizin

    2012-09-15

    Purpose: Systematic evaluation of the imaging situation and standards in acute spinal injuries of adolescents. Materials and Methods: A retrospective analysis of imaging studies of transferred adolescents with spinal injuries was performed, together with a survey of the transferring hospitals (TH) regarding the availability of modalities and radiological expertise, and an assessment of the post-processing and documentation of CT studies. Repetitions of imaging studies and the cumulative effective dose (CED) were noted. Results: 33 of 43 patients (77 %) treated in our hospital (mean age 17.2 years, 52 % male) and 25 of 32 TH (78 %) were evaluated. 24-hr availability of conventional radiography and CT was present in 96 % and 92 % of TH, whereas MRI was available in only 36 %. In 64 % of TH, imaging expertise was guaranteed by an on-staff radiologist. During off-hours, radiological service was provided on an on-call basis in 56 % of TH. Neuroradiological and pediatric radiology expertise was not available in 44 % and 60 % of TH, respectively. CT imaging including post-processing and documentation matched our standards in 36 % and 32 % of cases. The repetition rate of CT studies was 39 % (CED 116.08 mSv). Conclusion: With frequent CT repetitions, two-thirds of re-examined patients revealed a different clinical estimation of trauma severity and insufficient CT quality as possible causes for re-examination. Standardization of the initial clinical evaluation and of CT imaging could possibly reduce the need for repeat examinations. (orig.)

  18. Documenting Employee Conduct

    Science.gov (United States)

    Dalton, Jason

    2009-01-01

    One of the best ways for a child care program to lose an employment-related lawsuit is failure to document the performance of its employees. Documentation of an employee's performance can provide evidence of an employment-related decision such as discipline, promotion, or discharge. When properly implemented, documentation of employee performance…

  19. Documentary images: aesthetic and apolitical subjectivities

    Directory of Open Access Journals (Sweden)

    Denize Correa Araujo

    2014-12-01

    This study analyzes images in three categories of films that portray dictatorial political regimes: documentaries, films based on real facts, and feature films. I argue that images in all three categories can document "factuality" and are what I call "documental images". Furthermore, they can contribute to a "metamorphosis-memory", a kind of memory that reconstructs itself continuously according to new representations of dictatorships in films. The frame of reference includes theories by Bakhtin, Baudrillard, Benjamin, Debord, Derrida, Halbwachs, Metz, Nichols and Sarlo, among others.

  20. Health physics documentation

    International Nuclear Information System (INIS)

    Stablein, G.

    1980-01-01

    When dealing with radioactive material the health physicist receives innumerable papers and documents within the fields of researching, prosecuting, organizing and justifying radiation protection. Some of these papers are requested by the health physicist and some are required by law. The scope, quantity and deposit periods of the health physics documentation at the Karlsruhe Nuclear Research Center are presented and rationalizing methods discussed. The aim of this documentation should be the application of physics to accident prevention, i.e. documentation should protect those concerned and not the health physicist. (H.K.)

  1. Computer-aided diagnostic system for detection of Hashimoto thyroiditis on ultrasound images from a Polish population.

    Science.gov (United States)

    Acharya, U Rajendra; Sree, S Vinitha; Krishnan, M Muthu Rama; Molinari, Filippo; Zieleźnik, Witold; Bardales, Ricardo H; Witkowska, Agnieszka; Suri, Jasjit S

    2014-02-01

    Computer-aided diagnostic (CAD) techniques aid physicians in better diagnosis of diseases by extracting objective and accurate diagnostic information from medical data. Hashimoto thyroiditis is the most common type of inflammation of the thyroid gland. The inflammation changes the structure of the thyroid tissue, and these changes are reflected as echogenic changes on ultrasound images. In this work, we propose a novel CAD system (a class of systems called ThyroScan) that extracts textural features from a thyroid sonogram and uses them to aid in the detection of Hashimoto thyroiditis. In this paradigm, we extracted grayscale features based on stationary wavelet transform from 232 normal and 294 Hashimoto thyroiditis-affected thyroid ultrasound images obtained from a Polish population. Significant features were selected using a Student t test. The resulting feature vectors were used to build and evaluate the following 4 classifiers using a 10-fold stratified cross-validation technique: support vector machine, decision tree, fuzzy classifier, and K-nearest neighbor. Using 7 significant features that characterized the textural changes in the images, the fuzzy classifier had the highest classification accuracy of 84.6%, sensitivity of 82.8%, specificity of 87.0%, and a positive predictive value of 88.9%. The proposed ThyroScan CAD system uses novel features to noninvasively detect the presence of Hashimoto thyroiditis on ultrasound images. Compared to manual interpretations of ultrasound images, the CAD system offers a more objective interpretation of the nature of the thyroid. The preliminary results presented in this work indicate the possibility of using such a CAD system in a clinical setting after evaluating it with larger databases in multicenter clinical trials.
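The pipeline in this record (texture features from ultrasound images, feature selection, then one of four classifiers) can be sketched in simplified form. The features below are plain difference-image energies standing in for the paper's stationary-wavelet-transform subband features, and the classifier is a plain k-nearest-neighbour vote (one of the four compared in the study); both are illustrative assumptions, not the published implementation.

```python
def texture_features(img):
    """Energies of the horizontal, vertical and diagonal difference
    images of a grayscale image (list of rows of numbers): a simplified
    stand-in for wavelet-subband texture energy features."""
    rows, cols = len(img), len(img[0])
    h = [img[r][c + 1] - img[r][c] for r in range(rows) for c in range(cols - 1)]
    v = [img[r + 1][c] - img[r][c] for r in range(rows - 1) for c in range(cols)]
    d = [img[r + 1][c + 1] - img[r][c] for r in range(rows - 1) for c in range(cols - 1)]
    energy = lambda xs: sum(x * x for x in xs) / len(xs)
    return [energy(h), energy(v), energy(d)]

def knn_predict(train_X, train_y, x, k=3):
    """Plain k-nearest-neighbour majority vote in feature space."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(row, x)), label)
        for row, label in zip(train_X, train_y)
    )
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)
```

In the study, feature selection (Student t test) would precede the classifier, and accuracy would be estimated with 10-fold stratified cross-validation rather than a single prediction.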

  2. Radioisotopic Imaging of Neuro-inflammation

    International Nuclear Information System (INIS)

    Winkeler, A.; Boisgard, R.; Martin, M.; Tavitian, B.

    2010-01-01

    Inflammatory responses are closely associated with many neurologic disorders and influence their outcome. In vivo imaging can document events accompanying neuro-inflammation, such as changes in blood flow, vascular permeability, tightness of the blood-to-brain barrier, local metabolic activity, and expression of specific molecular targets. Here, we briefly review current methods for imaging neuro-inflammation, with special emphasis on nuclear imaging techniques. (authors)

  3. Airborne imaging for heritage documentation using the Fotokite tethered flying camera

    Science.gov (United States)

    Verhoeven, Geert; Lupashin, Sergei; Briese, Christian; Doneus, Michael

    2014-05-01

    Since the beginning of aerial photography, researchers have used all kinds of devices (from pigeons, kites, poles, and balloons to rockets) to take still cameras aloft and remotely gather aerial imagery. To date, many of these unmanned devices are still used for what has been referred to as Low-Altitude Aerial Photography, or LAAP. In addition to these more traditional camera platforms, radio-controlled (multi-)copter platforms have recently added a new aspect to LAAP. Although model airplanes have been around for several decades, the decreasing cost and the increasing functionality and stability of ready-to-fly multi-copter systems have proliferated their use among non-hobbyists. As such, they have become a very popular tool for aerial imaging. The overwhelming number of currently available brands and types (heli-, dual-, tri-, quad-, hexa-, octo-, dodeca-, deca-hexa and deca-octocopters), together with the wide variety of navigation options (e.g. altitude and position hold, waypoint flight) and camera mounts, indicates that these platforms are here to stay for some time. Given the multitude of still camera types and the image quality they are currently capable of, endless combinations of low- and high-cost LAAP solutions are available. In addition, LAAP allows for the exploitation of new imaging techniques, as it is often only a matter of lifting the appropriate device (e.g. video cameras, thermal frame imagers, hyperspectral line sensors). Archaeologists were among the first to adopt this technology, as it provided them with a means to easily acquire essential data from a unique point of view, whether for simple illustration purposes of standing historic structures or to compute three-dimensional (3D) models and orthophotographs of excavation areas. However, even very cheap multi-copter models require certain skills to pilot them safely. Additionally, malfunction or overconfidence might lift these devices to altitudes where they can interfere with manned aircraft. As such, the

  4. Speeding up the Raster Scanning Methods used in theX-Ray Fluorescence Imaging of the Ancient Greek Text of Archimedes

    Energy Technology Data Exchange (ETDEWEB)

    Turner, Manisha; /Norfolk State U.

    2006-08-24

    Progress has been made at the Stanford Linear Accelerator Center (SLAC) toward deciphering the remaining 10-20% of the ancient Greek text contained in the Archimedes palimpsest. The text is known to contain valuable works by the mathematician, including the 'Method of Mechanical Theorems', 'On the Equilibrium of Planes' and 'On Floating Bodies', as well as several diagrams. The only surviving copy of the text was recycled into a prayer book in the Middle Ages. The ink used to write on the goatskin parchment is partly composed of iron, which is visible under x-ray radiation. To image the palimpsest pages, the parchment is framed and placed in a stage that moves according to the raster method. When an x-ray beam strikes the parchment, the iron in the ink is detected by a germanium detector. The resulting signal is converted to a gray-scale image in the imaging program Rasplot. It is extremely important that each line of data is perfectly aligned with the line that came before it, because the image is scanned in two directions. The objectives of this experiment were to determine the best parameters for producing well-aligned images and to reduce the scanning time. Imaging half a page of parchment during previous beam time for this project took thirty hours. Equations were produced to evaluate count time, shutter time, and the number of pixels in this experiment. On Beamline 6-2 at the Stanford Synchrotron Radiation Laboratory (SSRL), actual scanning time was reduced by one fourth. The remaining pages were successfully imaged and sent to ancient Greek experts for translation.
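The scan-time budget mentioned in the record (equations relating count time, shutter time, and pixel count to total time) can be sketched with a simple per-pixel model. The parameter values in the usage below are hypothetical, chosen only to show how a thirty-hour half-page scan can arise from plausible settings; they are not the experiment's actual numbers.

```python
def raster_scan_hours(n_rows, n_cols, count_time_s, shutter_time_s):
    """Estimate total raster-scan time in hours: every pixel costs one
    detector integration (count time) plus shutter overhead.  Stage
    motion and line-turnaround overheads are ignored in this sketch."""
    return n_rows * n_cols * (count_time_s + shutter_time_s) / 3600.0
```

For example, a hypothetical 600 x 600 pixel scan at 0.28 s count time and 0.02 s shutter time per pixel comes to 30 hours; cutting the per-pixel cost cuts the total proportionally, which is the lever the experiment optimized.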

  5. Speeding up the Raster Scanning Methods used in the X-Ray Fluorescence Imaging of the Ancient Greek Text of Archimedes

    International Nuclear Information System (INIS)

    Turner, Manisha; Norfolk State U.

    2006-01-01

    Progress has been made at the Stanford Linear Accelerator Center (SLAC) toward deciphering the remaining 10-20% of the ancient Greek text contained in the Archimedes palimpsest. The text is known to contain valuable works by the mathematician, including the 'Method of Mechanical Theorems', 'On the Equilibrium of Planes' and 'On Floating Bodies', as well as several diagrams. The only surviving copy of the text was recycled into a prayer book in the Middle Ages. The ink used to write on the goatskin parchment is partly composed of iron, which is visible under x-ray radiation. To image the palimpsest pages, the parchment is framed and placed in a stage that moves according to the raster method. When an x-ray beam strikes the parchment, the iron in the ink is detected by a germanium detector. The resulting signal is converted to a gray-scale image in the imaging program Rasplot. It is extremely important that each line of data is perfectly aligned with the line that came before it, because the image is scanned in two directions. The objectives of this experiment were to determine the best parameters for producing well-aligned images and to reduce the scanning time. Imaging half a page of parchment during previous beam time for this project took thirty hours. Equations were produced to evaluate count time, shutter time, and the number of pixels in this experiment. On Beamline 6-2 at the Stanford Synchrotron Radiation Laboratory (SSRL), actual scanning time was reduced by one fourth. The remaining pages were successfully imaged and sent to ancient Greek experts for translation.

  6. Using 3D range cameras for crime scene documentation and legal medicine

    Science.gov (United States)

    Cavagnini, Gianluca; Sansoni, Giovanna; Trebeschi, Marco

    2009-01-01

    Crime scene documentation and legal medicine analysis are part of a very complex process aimed at identifying the offender starting from the collection of the evidence at the scene. This part of the investigation is very critical, since the crime scene is extremely volatile, and once it is cleared it cannot be precisely recreated. For this reason, the documentation process should be as complete as possible, with minimum invasiveness. The use of optical 3D imaging sensors has been considered as a possible aid in the documentation step, since (i) the measurement is contactless and (ii) the process of editing and modeling the 3D data is quite similar to the reverse engineering procedures originally developed for the manufacturing field. In this paper we show the most important results obtained in the experimentation.

  7. Natural display mode for digital DICOM-conformant diagnostic imaging.

    Science.gov (United States)

    Peters, Klaus-Ruediger; Ramsby, Gale R

    2002-09-01

    The authors performed this study to investigate verification of the contrast display properties defined by the Digital Imaging and Communications in Medicine (DICOM) PS 3.14-2001 gray-scale standard display function standard, and their dependency on display luminance range and video signal bandwidth. Contrast sensitivity and contrast linearity of DICOM-conformant displays were measured in just-noticeable differences (JNDs) on special perceptual contrast test patterns. Measurements were obtained six times at various display settings under darkroom conditions. Display luminance range and video bandwidth had a significant effect on contrast perception. The perceptual promises of the standard could be established only with displays that were calibrated to unity contrast resolution, at which the number of displayed intensity steps equals the number of perceivable contrast steps (JNDs). Such display conditions make visual information available at the level of single-step contrast sensitivity and full-range contrast linearity. These "natural display" conditions also help minimize the Mach banding effects that otherwise reduce contrast sensitivity and contrast linearity. Most, if not all, conventionally used digital display modalities are driven with a contrast resolution larger than 1. Such conditions reduce contrast perception compared with natural imaging conditions. The DICOM-conformant display conditions at unity contrast resolution were characterized as the "natural display" mode, and the authors recommend them as useful for making a primary diagnosis with picture archiving and communication systems (PACS) and teleradiology, and as a standard for psychophysical research and performance measurements.
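The JND scale underlying this record is defined by the DICOM Grayscale Standard Display Function, which maps a JND index j (1 to 1023) to luminance. A minimal sketch of that curve follows, using the coefficient values as published in DICOM PS 3.14; the values are reproduced here from a secondary reading of the standard and should be verified against the current text before use in calibration.

```python
import math

# Coefficients of the DICOM PS 3.14 Grayscale Standard Display Function
# (verify against the current edition of the standard).
_A, _B, _C = -1.3011877, -2.5840191e-2, 8.0242636e-2
_D, _E, _F = -1.0320229e-1, 1.3646699e-1, 2.8745620e-2
_G, _H = -2.5468404e-2, -3.1978977e-3
_K, _M = 1.2992634e-4, 1.3635334e-3

def gsdf_luminance(j):
    """Luminance in cd/m^2 for JND index j (valid for 1 <= j <= 1023).
    The GSDF is a rational polynomial in ln(j) for log10 of luminance."""
    x = math.log(j)
    num = _A + _C * x + _E * x**2 + _G * x**3 + _M * x**4
    den = 1 + _B * x + _D * x**2 + _F * x**3 + _H * x**4 + _K * x**5
    return 10 ** (num / den)
```

Calibrating a display to the GSDF means choosing its digital driving levels so that equal steps in presentation value span equal numbers of JNDs on this curve, which is exactly the "unity contrast resolution" condition the study examines.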

  8. Geiger-mode APD camera system for single-photon 3D LADAR imaging

    Science.gov (United States)

    Entwistle, Mark; Itzler, Mark A.; Chen, Jim; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir

    2012-06-01

    The unparalleled sensitivity of 3D LADAR imaging sensors based on single photon detection provides substantial benefits for imaging at long stand-off distances and minimizing laser pulse energy requirements. To obtain 3D LADAR images with single photon sensitivity, we have demonstrated focal plane arrays (FPAs) based on InGaAsP Geiger-mode avalanche photodiodes (GmAPDs) optimized for use at either 1.06 μm or 1.55 μm. These state-of-the-art FPAs exhibit excellent pixel-level performance and the capability for 100% pixel yield on a 32 x 32 format. To realize the full potential of these FPAs, we have recently developed an integrated camera system providing turnkey operation based on FPGA control. This system implementation enables the extremely high frame-rate capability of the GmAPD FPA, and frame rates in excess of 250 kHz (for 0.4 μs range gates) can be accommodated using an industry-standard CameraLink interface in full configuration. Real-time data streaming for continuous acquisition of 2 μs range gate point cloud data with 13-bit time-stamp resolution at 186 kHz frame rates has been established using multiple solid-state storage drives. Range gate durations spanning 4 ns to 10 μs provide broad operational flexibility. The camera also provides real-time signal processing in the form of multi-frame gray-scale contrast images and single-frame time-stamp histograms, and automated bias control has been implemented to maintain a constant photon detection efficiency in the presence of ambient temperature changes. A comprehensive graphical user interface has been developed to provide complete camera control using a simple serial command set, and this command set supports highly flexible end-user customization.

  9. [The cell phones as devices for the ocular fundus documentation].

    Science.gov (United States)

    Němčanský, J; Kopecký, A; Timkovič, J; Mašek, P

    2014-12-01

    To present our experience with smartphones for examining and documenting the human eye fundus. From September to October 2013, the eye fundus of fifteen patients (8 men, 7 women) was examined; the average age at examination was 58 years (range 20-65 years). Photo-documentation was performed through dilated pupils (tropicamide hydrochloride 1% eye drops) with a Samsung Galaxy Nexus mobile phone running Android 4.3 (Google Inc., Mountain View, CA, USA) and an iPhone 4 running iOS 7.0.4 (Apple Inc., Cupertino, CA, USA), together with a 20 D lens (Volk Optical Inc., Mentor, OH, USA). The images of the retina taken with a mobile phone and the spherical lens are of very good quality, precise and reproducible. Learning this technique is easy and fast; the learning curve is steep. Photo-documentation of the retina with a mobile phone is a safe, time-saving, easy-to-learn technique which may be used in routine ophthalmologic practice. The main advantages of this technique are the availability, small size and easy portability of the devices.

  10. Standardization Documents

    Science.gov (United States)

    2011-08-01

    Specifications and standards; guide specifications; CIDs; and NGSs. Federal specifications; commercial ... a national or international standardization document developed by a private-sector association, organization, or technical society that plans ... maintain lessons learned. Examples: guidance for application of a technology; lists of options. Defense handbook.

  11. Variations in performance of LCDs are still evident after DICOM gray-scale standard display calibration.

    LENUS (Irish Health Repository)

    Lowe, Joanna M

    2010-07-01

    Quality assurance in medical imaging is directly beneficial to image quality. Diagnostic images are frequently displayed on secondary-class displays that have minimal or no regular quality assurance programs, and treatment decisions are being made from these display types. The purpose of this study is to identify the impact of calibration on physical and psychophysical performance of liquid crystal displays (LCDs) and the extent of potential variance across various types of LCDs.

  12. NASA Image Exchange (NIX)

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA Technical Reports Server (NTRS) provides access to aerospace-related citations, full-text online documents, and images and videos. The types of information...

  13. Decentralised firewall for malware detection

    OpenAIRE

    Raje, Saurabh; Vaderia, Shyamal; Wilson, Neil; Panigrahi, Rudrakh

    2017-01-01

    This paper describes the design and development of a decentralized firewall system powered by a novel malware detection engine. The firewall is built using blockchain technology. The detection engine aims to classify Portable Executable (PE) files as malicious or benign. File classification is carried out using a deep belief neural network (DBN) as the detection engine. Our approach is to model the files as grayscale images and use the DBN to classify those images into the aforementioned two ...
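The byte-to-image representation this record's detection engine relies on (interpreting a PE file's raw bytes as grayscale pixels) can be sketched as follows. The fixed row width of 32 pixels is an arbitrary illustrative choice; the excerpt does not specify the layout the authors used.

```python
import math

def bytes_to_grayscale(data, width=32):
    """Lay a file's raw bytes out as a 2D grayscale image: one byte per
    pixel (0-255), rows of `width` pixels, with the final row
    zero-padded.  The resulting matrix is what a classifier such as a
    deep belief network would consume after resizing/normalization."""
    height = math.ceil(len(data) / width) if data else 0
    padded = list(data) + [0] * (height * width - len(data))
    return [padded[r * width:(r + 1) * width] for r in range(height)]
```

A usage sketch: read a file in binary mode, convert with `bytes_to_grayscale(open(path, "rb").read())`, and feed the matrix to the classifier; malicious and benign binaries tend to produce visually distinct byte-texture patterns, which is the premise of this family of detectors.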

  14. A decision support system for the reading of ancient documents

    DEFF Research Database (Denmark)

    Roued-Cunliffe, Henriette

    2011-01-01

    The research presented in this thesis is based in the Humanities discipline of Ancient History and begins by attempting to understand the interpretation process involved in reading ancient documents and how this process can be aided by computer systems such as Decision Support Systems (DSS) ... this process in the five areas: remembering complex reasoning, searching huge datasets, international collaboration, publishing editions, and image enhancement. This research contains a large practical element involving the development of a DSS prototype. The prototype is used to illustrate how a DSS, by remembering complex reasoning, can aid the process of interpretation that is reading ancient documents. It is based on the idea that the interpretation process goes through a network of interpretation. The network of interpretation illustrates a recursive process where scholars move between reading levels...

  15. Synthesis document on the long time behavior of packages: operational document ''bituminous'' 2204

    International Nuclear Information System (INIS)

    Tiffreau, C.

    2004-09-01

    This document was produced in the framework of the 1991 French law on radioactive waste management. The 2004 synthesis document on the long-term behavior of bituminous sludge packages consists of two documents, the reference document and the operational document. This paper presents the operational model describing the water-driven alteration of the packages and the associated release of radioelements, as well as the gas source term and the swelling associated with self-irradiation and radiolysis of the bitumen. (A.L.B.)

  16. Securing XML Documents

    Directory of Open Access Journals (Sweden)

    Charles Shoniregun

    2004-11-01

    XML (Extensible Markup Language) is becoming the current standard for establishing interoperability on the Web. XML data are self-descriptive and syntax-extensible; this makes XML very suitable for the representation and exchange of semi-structured data, and allows users to define new elements for their specific applications. As a result, the number of documents incorporating this standard is continuously increasing over the Web. The processing of XML documents may require a traversal of the whole document structure, and the cost can therefore be very high. The strong demand for efficient and effective XML processing has posed a new challenge for the database world. This paper discusses a fast and efficient indexing technique for XML documents and introduces the XML graph numbering scheme, which can be used for indexing and securing the graph structure of XML documents. This technique provides an efficient method to speed up XML data processing. Furthermore, the paper explores the classification of existing methods, their impact on query processing, and indexing.
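Graph-numbering schemes for XML indexing generally assign each node numbers from which structural relationships can be decided without re-traversing the document. As a generic illustration (not the paper's specific scheme), here is the classic pre/post-order interval numbering, under which node a is an ancestor of node b iff pre(a) < pre(b) and post(a) > post(b):

```python
import xml.etree.ElementTree as ET

def number_nodes(root):
    """Assign (pre-order, post-order) numbers to every element of an
    XML tree.  One traversal suffices; afterwards ancestor tests are
    O(1) integer comparisons instead of path walks."""
    numbers, pre, post = {}, [0], [0]

    def visit(node):
        my_pre = pre[0]
        pre[0] += 1
        for child in node:
            visit(child)
        numbers[node] = (my_pre, post[0])
        post[0] += 1

    visit(root)
    return numbers

def is_ancestor(numbers, a, b):
    """True iff `a` is a proper ancestor of `b` in the numbered tree."""
    pa, qa = numbers[a]
    pb, qb = numbers[b]
    return pa < pb and qa > qb
```

This is the kind of index that avoids the full-structure traversal cost the abstract mentions: structural predicates in queries reduce to comparisons over the stored number pairs.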

  17. Reconstruction and 3D visualisation based on objective real 3D based documentation.

    Science.gov (United States)

    Bolliger, Michael J; Buck, Ursula; Thali, Michael J; Bolliger, Stephan A

    2012-09-01

    Reconstructions based directly upon forensic evidence alone are called primary information. Historically this consisted of documenting findings in verbal protocols, photographs and other visual means. Currently, modern imaging techniques such as 3D surface scanning and radiological methods (computed tomography, magnetic resonance imaging) are also applied. Secondary interpretation is based on facts and the examiner's experience. Usually such reconstructive expert reports are given in written form, often enhanced by sketches. However, narrative interpretations can, especially for complex courses of action, be difficult to present and can be misunderstood. In this report we demonstrate the use of graphic reconstruction of secondary interpretation with supporting pictorial evidence, applying digital visualisation (using 'Poser') or scientific animation (using '3D Studio Max', 'Maya'), and present methods of clearly distinguishing between factual documentation and the examiner's interpretation, based on three cases. The first case involved a pedestrian who was initially struck by a car on a motorway and was then run over by a second car. The second case involved a suicidal gunshot to the head with a rifle, in which the trigger was pushed with a rod. The third case dealt with a collision between two motorcycles. Pictorial reconstruction of the secondary interpretation of these cases has several advantages: the images enable an immediate overview, provide enhanced clarity, and compel the examiner to look at all details if he or she is to create a complete image.

  18. Information Types in Nonmimetic Documents: A Review of Biddle's Wipe-Clean Slate (Understanding Documents).

    Science.gov (United States)

    Mosenthal, Peter B.; Kirsch, Irwin S.

    1991-01-01

    Describes how the 16 permanent lists used by a first-grade reading teacher (and mother of 6) to manage her household represent the whole range of documents covered by the 3 major types of documents: matrix documents, graphic documents, and locative documents. Suggests class activities to clarify students' understanding of the information in…

  19. Technical approach document

    International Nuclear Information System (INIS)

    1988-04-01

    This document describes the general technical approaches and design criteria adopted by the US Department of Energy (DOE) in order to implement Remedial Action Plans (RAPs) and final designs that comply with EPA standards. This document is a revision of the original document. Major revisions were made to the sections on riprap selection and sizing and on ground water; only minor revisions were made to the remainder of the document. The US Nuclear Regulatory Commission (NRC) has prepared a Standard Review Plan (NRC-SRP) which describes factors to be considered by the NRC in approving the RAP. Sections 3.0, 4.0, 5.0, and 7.0 of this document are arranged under the same headings as those used in the NRC-SRP; this approach is adopted in order to facilitate joint use of the documents. Section 2.0 (not included in the NRC-SRP) discusses design considerations; Section 3.0 describes surface-water hydrology and erosion control; Section 4.0 describes geotechnical aspects of pile design; Section 5.0 discusses the alternate site selection process; Section 6.0 deals with radiological issues (in particular, the design of the radon barrier); Section 7.0 discusses protection of groundwater resources; and Section 8.0 discusses site design criteria for the RAC.

  20. WE-D-9A-03: CSDF: A Color Extension of the Grayscale Standard Display Function

    International Nuclear Information System (INIS)

    Kimpe, T; Marchessoux, C; Rostang, J; Piepers, B; Avanaki, A; Espig, K; Xthona, A

    2014-01-01

    Purpose: The use of color images in medical imaging has increased significantly in the last few years. As of today, there is no agreed standard on how color information should be visualized on medical color displays, resulting in large variability of color appearance and making consistency and quality assurance a challenge. This paper presents a proposal for an extension of the DICOM GSDF towards color. Methods: The visualization needs of several color modalities (multimodality imaging, nuclear medicine, digital pathology, quantitative imaging applications…) were studied. On this basis, a proposal was made for the desired color behavior of medical color display systems, and its behavior and effect on color medical images were analyzed. Results: Several medical color modalities could benefit from perceptually linear color visualization, for reasons similar to those for which the GSDF was put in place for grayscale medical images. An extension of the GSDF (Grayscale Standard Display Function) to color is proposed: the CSDF (Color Standard Display Function). CSDF is based on deltaE2000 and offers perceptually linear color behavior; it uses the GSDF as its neutral-gray behavior. A comparison between sRGB/GSDF and CSDF confirms that CSDF significantly improves perceptual color linearity. Furthermore, results also indicate that, because of the improved perceptual linearity, CSDF has the potential to increase the perceived contrast of clinically relevant color features. Conclusion: There is a need for an extension of the GSDF towards color visualization in order to guarantee consistency and quality. A first proposal (CSDF) for such an extension has been made. The behavior of a CSDF-calibrated display has been characterized and compared with sRGB/GSDF behavior. First results indicate that CSDF could have a positive influence on the perceived contrast of clinically relevant color features and could offer benefits for quantitative imaging applications. The authors are employees of Barco Healthcare.

  1. A simple and effective figure caption detection system for old-style documents

    Science.gov (United States)

    Liu, Zongyi; Zhou, Hanning

    2011-01-01

    Identifying figure captions has wide applications in producing high quality e-books such as kindle books or ipad books. In this paper, we present a rule-based system to detect horizontal figure captions in old-style documents. Our algorithm consists of three steps: (i) segment images into regions of different types such as text and figures, (ii) search the best caption region candidate based on heuristic rules such as region alignments and distances, and (iii) expand caption regions identified in step (ii) with its neighboring text-regions in order to correct oversegmentation errors. We test our algorithm using 81 images collected from old-style books, with each image containing at least one figure area. We show that the approach is able to correctly detect figure captions from images with different layouts, and we also measure its performances in terms of both precision rate and recall rate.
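Step (ii) of the record above, picking the best caption candidate by alignment and distance rules, can be sketched with a simple bounding-box heuristic. The specific rules below (caption lies below the figure, within a pixel gap, horizontally overlapping, closest wins) and the threshold value are illustrative assumptions, not the paper's tuned rule set.

```python
def best_caption_candidate(figure_box, text_boxes, max_gap=40):
    """Return the text region most likely to be a horizontal caption of
    `figure_box`, or None.  Boxes are (left, top, right, bottom) in
    pixel coordinates with y growing downward."""
    fl, ft, fr, fb = figure_box
    best, best_gap = None, None
    for box in text_boxes:
        l, t, r, b = box
        gap = t - fb                              # vertical distance below the figure
        overlaps = min(r, fr) - max(l, fl) > 0    # horizontal alignment test
        if 0 <= gap <= max_gap and overlaps:
            if best_gap is None or gap < best_gap:
                best, best_gap = box, gap
        # regions above the figure or far away are rejected
    return best
```

Step (iii) of the paper would then grow the returned region by merging adjacent text regions, to undo over-segmentation of multi-line captions.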

  2. Sub-word image clustering in Farsi printed books

    Science.gov (United States)

    Soheili, Mohammad Reza; Kabir, Ehsanollah; Stricker, Didier

    2015-02-01

    Most OCR systems are designed for the recognition of a single page. In case of unfamiliar font faces, low quality papers and degraded prints, the performance of these products drops sharply. However, an OCR system can use redundancy of word occurrences in large documents to improve recognition results. In this paper, we propose a sub-word image clustering method for the applications dealing with large printed documents. We assume that the whole document is printed by a unique unknown font with low quality print. Our proposed method finds clusters of equivalent sub-word images with an incremental algorithm. Due to the low print quality, we propose an image matching algorithm for measuring the distance between two sub-word images, based on Hamming distance and the ratio of the area to the perimeter of the connected components. We built a ground-truth dataset of more than 111000 sub-word images to evaluate our method. All of these images were extracted from an old Farsi book. We cluster all of these sub-words, including isolated letters and even punctuation marks. Then all centers of created clusters are labeled manually. We show that all sub-words of the book can be recognized with more than 99.7% accuracy by assigning the label of each cluster center to all of its members.
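The distance measure described in this record, a combination of Hamming distance and the area-to-perimeter ratio of the connected components, can be sketched as follows. The perimeter estimate (foreground pixels with a 4-neighbour in the background) and the equal weighting of the two terms are illustrative assumptions; the paper's exact combination is not reproduced here.

```python
def shape_ratio(img):
    """Area-to-perimeter proxy for a binary sub-word image (list of
    rows of 0/1): foreground pixel count divided by the number of
    foreground pixels touching the background."""
    rows, cols = len(img), len(img[0])
    fg = {(r, c) for r in range(rows) for c in range(cols) if img[r][c]}
    if not fg:
        return 0.0
    border = sum(
        1 for (r, c) in fg
        if {(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)} - fg
    )
    return len(fg) / border if border else float(len(fg))

def subword_distance(a, b, w=0.5):
    """Distance between two same-size binarized sub-word images: a
    weighted mix of normalized Hamming distance and the difference of
    their area/perimeter ratios."""
    rows, cols = len(a), len(a[0])
    hamming = sum(
        bool(a[r][c]) != bool(b[r][c])
        for r in range(rows) for c in range(cols)
    ) / (rows * cols)
    return w * hamming + (1 - w) * abs(shape_ratio(a) - shape_ratio(b))
```

In an incremental clustering loop, each new sub-word image would be compared against existing cluster centers with `subword_distance` and either assigned to the nearest cluster under a threshold or made the center of a new one; only the cluster centers then need manual labeling.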

  3. An index of beam hardening artifact for two-dimensional cone-beam CT tomographic images: establishment and preliminary evaluation

    Science.gov (United States)

    Yuan, Fusong; Lv, Peijun; Yang, Huifang; Wang, Yong; Sun, Yuchun

    2015-07-01

    Objectives: To establish a beam-hardening artifact index for cone-beam CT tomographic images based on pixel gray-value measurements, and to preliminarily evaluate its applicability. Methods: A 5 mm-diameter metal ball and a resin ball were each fixed on a light-cured resin base plate, with four extracted molars fixed above, below, to the left and to the right of the ball, each at a distance of 10 mm from it. Cone-beam CT was then used to scan the fixed base plate twice. Tomographic images of the same layer were selected from the two datasets and imported into Photoshop. A circular boundary was constructed by determining the centre and radius of the circle from the artifact-free image sections. Grayscale measurement tools were used to measure the gray value G0 inside the boundary, the gray values G1 and G2 of artifacts at 1 mm and 20 mm outside the circular boundary, the length L1 of the arc with artifacts on the circular boundary, and the circumference L2. The hardening-artifact index was defined as A = (G1 / G0) * 0.5 + (G2 / G1) * 0.4 + (L2 / L1) * 0.1, and the A values of the metal and resin materials were calculated. Results: The A value of the cobalt-chromium alloy material was 1, and that of the resin material was 0. Conclusion: The A value comprehensively reflects the three factors through which hardening artifacts affect the sharpness of normal oral-tissue images in cone-beam CT: the relative gray value, and the decay rate and range of the artifacts.
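
    The index defined in the abstract can be transcribed directly; the sample gray values and arc lengths below are illustrative, not measurements from the study:

```python
# Direct transcription of the hardening-artifact index defined above:
# A = (G1/G0)*0.5 + (G2/G1)*0.4 + (L2/L1)*0.1.
# The example inputs are made up for illustration.

def artifact_index(g0, g1, g2, l1, l2):
    return (g1 / g0) * 0.5 + (g2 / g1) * 0.4 + (l2 / l1) * 0.1

# If the artifacts neither brighten (G1 = G0) nor decay (G2 = G1) nor
# spread along the boundary (L2 = L1), the three ratios are all 1:
print(artifact_index(120, 120, 120, 40, 40))  # -> 1.0
```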

  4. Enterprise Document Management

    Data.gov (United States)

    US Agency for International Development — The function of the operation is to provide e-Signature and document management support for Acquisition and Assistance (A&A) documents including vouchers in...

  5. Extraction of Coal and Gangue Geometric Features with Multifractal Detrending Fluctuation Analysis

    Directory of Open Access Journals (Sweden)

    Kai Liu

    2018-03-01

    Full Text Available The separation of coal and gangue is an important process in coal preparation technology. The conventional manual selection and separation of gangue from raw coal can be replaced by computer vision technology. In the literature, research on image recognition and classification of coal and gangue is mainly based on their grayscale and texture features; there are few studies on the characteristics of coal and gangue from the perspective of their outline differences. Therefore, the multifractal detrended fluctuation analysis (MFDFA) method is introduced in this paper to extract the geometric features of coal and gangue. Firstly, the outline curves of coal and gangue are detected and expressed in polar coordinates about the centroid, and the multifractal characteristics of the resulting series are analyzed and compared. Subsequently, the modified local singular spectrum widths Δh of the outline curve series are extracted as the characteristic variables of coal and gangue for pattern recognition. Finally, the geometric features extracted by MFDFA, combined with the grayscale and texture features of the images, are compared with other methods, indicating that the recognition rate of coal and gangue images can be increased by introducing the geometric features.
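
    The first step, converting an outline into a one-dimensional series suitable for MFDFA, might look like the minimal sketch below; the point data, angular binning and function names are assumptions, not the authors' code:

```python
import math

# Hedged sketch: turn a closed outline into a radius-versus-angle series
# about the centroid, which could then be fed to an MFDFA routine.

def outline_to_polar_series(points, n_bins=8):
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    bins = [[] for _ in range(n_bins)]
    for x, y in points:
        theta = math.atan2(y - cy, x - cx) % (2 * math.pi)
        r = math.hypot(x - cx, y - cy)
        bins[int(theta / (2 * math.pi / n_bins)) % n_bins].append(r)
    # mean radius per angular bin (empty bins -> 0.0)
    return [sum(b) / len(b) if b else 0.0 for b in bins]

# Sanity check: a circle of radius 5 gives a flat series, so any
# fluctuation structure in a real outline comes from its shape.
circle = [(5 * math.cos(t / 100 * 2 * math.pi),
           5 * math.sin(t / 100 * 2 * math.pi)) for t in range(100)]
```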

  6. Development of Standard Process for Private Information Protection of Medical Imaging Issuance

    International Nuclear Information System (INIS)

    Park, Bum Jin; Jeong, Jae Ho; Son, Gi Gyeong; Kang, Hee Doo; Yoo, Beong Gyu; Lee, Jong Seok

    2009-01-01

    Medical imaging issuance has changed from the conventional film method to digital media (CD and DVD) with the development of IT technology. However, while other medical-record departments perform thorough identity checks, medical imaging departments often cannot afford to do so. We therefore surveyed applicants' awareness of private-information protection and the conditions of medical image issuance on CD and DVD media at various medical facilities, performed a comparative analysis against domestic and foreign laws and recommendations, and finally suggest a standard for the medical imaging issuance process suited to the internal environment. First, we surveyed the issuance process and required documents for medical image issuance at metropolitan medical facilities by telephone between 2008.6.1 and 2008.7.1 and, in accordance with Article 21, Clause 2 of the medical law, suggested standards for the documents required of each type of applicant: (1) the patient in person: verification of identification; (2) a family member: verification of the applicant's identification and a document of family relations (health insurance card, attested copy, and so on); (3) a third person or representative: verification of the applicant's identification, a letter of attorney, and a certificate of seal impression. Second, we checked the documents required of applicants against this standard for medical image issuance at Kyung Hee University Medical Center during the three months 2008.5.1-2008.7.31. Third, we developed a work process for the issuance procedure covering verification of the required documents and the handling of incomplete applications. Across the surveyed hospitals, 4 sites (12%) satisfied all the conditions, 4 sites (12%) issued images to anyone on request, and 9 sites (27%) handled issuance in clinic sections without a medical imaging issuance office, so their required-document conditions could not be determined.

  7. Fusion of colour and monochromatic images with edge emphasis

    Directory of Open Access Journals (Sweden)

    Rade M. Pavlović

    2014-02-01

    Full Text Available We propose a novel method to fuse true colour images with monochromatic non-visible-range images that seeks to encode important structural information from the monochromatic images efficiently while preserving the natural appearance of the available true chromacity information. We utilise the β colour-opponency channel of the lαβ colour space as the domain in which to fuse information from the monochromatic input into the colour input by way of robust grayscale fusion. This is followed by an effective gradient structure visualisation step that enhances the visibility of monochromatic information in the final colour-fused image. Images fused using this method preserve their natural appearance and chromacity better than with conventional methods while at the same time clearly encoding structural information from the monochromatic input. This is demonstrated on a number of well-known true colour fusion examples and confirmed by the results of subjective trials on the data from several colour fusion scenarios. Introduction The goal of image fusion can be broadly defined as the representation of the visual information contained in a number of input images in a single fused image without distortion or loss of information. In practice, however, a representation of all available information from multiple inputs in a single image is almost impossible, and fusion is generally a data-reduction task. One of the sensors usually provides a true colour image that by definition has all of its data dimensions already populated by spatial and chromatic information. Fusing such images with information from monochromatic inputs in a conventional manner can severely affect the natural appearance of the fused image. This is a difficult problem and partly the reason why colour fusion received only a fraction of the attention given to the better-behaved grayscale fusion, even long after colour sensors became widespread. Fusion method Humans tend to see colours as contrasts between opponent
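
    A toy single-pixel illustration of the opponency-channel fusion idea follows. Note this uses a simplified RGB opponent transform as a stand-in for the lαβ space of the paper, and the fusion weight `k` is an arbitrary assumption:

```python
# Simplified sketch: move a colour pixel into an opponent space, inject the
# monochromatic value's contrast into one opponency channel (the paper fuses
# into beta), and convert back. The transform here is a toy, not lαβ.

def rgb_to_opponent(r, g, b):
    lum = (r + g + b) / 3.0
    rg = r - g                   # red-green opponency
    by = b - (r + g) / 2.0       # blue-yellow opponency
    return lum, rg, by

def opponent_to_rgb(lum, rg, by):
    g = lum - rg / 2.0 - by / 3.0   # exact inverse of the forward transform
    r = rg + g
    b = by + (r + g) / 2.0
    return r, g, b

def fuse_pixel(rgb, mono, k=0.5):
    lum, rg, by = rgb_to_opponent(*rgb)
    by = by + k * (mono - lum)      # monochromatic contrast enters opponency
    return opponent_to_rgb(lum, rg, by)
```

    Because only the opponency channel changes, the luminance (mean of R, G, B) of the fused pixel equals that of the colour input, which is one way natural appearance can be preserved.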

  8. Tank Monitoring and Document control System (TMACS) As Built Software Design Document

    International Nuclear Information System (INIS)

    GLASSCOCK, J.A.

    2000-01-01

    This document describes the software design for the Tank Monitor and Control System (TMACS). This document captures the existing as-built design of TMACS as of November 1999. It will be used as a reference document by the system maintainers, who will maintain and modify the TMACS functions as necessary. The heart of the TMACS system is the ''point-processing'' functionality, where a sample value is received from the field sensors and the value is analyzed, logged, or alarmed as required. This Software Design Document focuses on the point-processing functions.

  9. Generation of mask patterns for diffractive optical elements using MathematicaTM

    International Nuclear Information System (INIS)

    O'Shea, D.C.

    1996-01-01

    The generation of binary and grayscale masks used in the fabrication of diffractive optical elements is usually performed using a proprietary piece of software or a computer-aided drafting package. Once the pattern is computed or designed, it must be output to a plotting or imaging system that will produce a reticle plate. This article describes a number of short Mathematica modules that can be used to generate binary and grayscale patterns in a PostScript-compatible format. Approaches to ensure that the patterns are directly related to the function of the element and the design wavelength are discussed. A procedure to preserve the scale of the graphic output when it is transferred to another application is given. Examples of surfaces for a 100 mm effective focal length lens and an Alvarez surface are given. copyright 1996 American Institute of Physics
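
    As a rough modern analogue of such modules (in Python rather than Mathematica, and with illustrative parameters rather than the article's), one might quantise a paraxial lens phase profile into 8-bit gray levels; writing the array out as PGM or PostScript is left aside:

```python
import math

# Hedged sketch: compute an 8-bit grayscale pattern for a thin lens of focal
# length f at wavelength lam, tying the pattern to the element's function and
# design wavelength as discussed above. All parameter values are assumptions.

def lens_phase_level(x, y, f=100e-3, lam=633e-9, levels=256):
    # Paraxial lens phase, wrapped to one 2*pi period, quantised to gray levels.
    phase = (math.pi / (lam * f)) * (x * x + y * y)
    frac = (phase / (2 * math.pi)) % 1.0
    return int(frac * levels) % levels

def lens_mask(n=64, pitch=10e-6):
    """n x n grid of gray levels sampled on a pixel pitch centred at (0, 0)."""
    half = n // 2
    return [[lens_phase_level((i - half) * pitch, (j - half) * pitch)
             for i in range(n)] for j in range(n)]
```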

  10. PHOTOGRAMMETRIC AND LIDAR DOCUMENTATION OF THE ROYAL CHAPEL (CATHEDRAL-MOSQUE OF CORDOBA, SPAIN

    Directory of Open Access Journals (Sweden)

    J. Cardenal

    2012-07-01

    Full Text Available At present, cultural heritage documentation projects use a variety of spatial data acquisition techniques such as conventional surveying, photogrammetry and terrestrial laser scanning. This paper deals with a full documentation project based on all those techniques in the Royal Chapel located in the Cathedral-Mosque of Cordoba in Spain, declared a World Heritage Site by UNESCO. At present, the Royal Chapel is under study for a detailed diagnostic analysis in order to evaluate the actual state of the chapel, its pathologies, construction phases, previous restoration works, material analysis, etc. To assist this evaluation, a documentation project with photogrammetric and terrestrial laser scanning (TLS) techniques has been carried out. For this purpose, accurate cartographic and 3D products, obtained by integrating both image- and laser-based techniques, were needed to register all data collected during the diagnostic analysis.

  11. Bistatic SAR: Imagery & Image Products.

    Energy Technology Data Exchange (ETDEWEB)

    Yocky, David A.; Wahl, Daniel E.; Jakowatz, Charles V.

    2014-10-01

    While typical SAR imaging employs a co-located (monostatic) RADAR transmitter and receiver, bistatic SAR imaging separates the transmitter and receiver locations. The transmitter and receiver geometry determines whether the scattered signal is back scatter, forward scatter, or side scatter. The monostatic SAR image is backscatter. Therefore, depending on the transmitter/receiver collection geometry, the captured imagery may be quite different from that sensed by the monostatic SAR. This document presents imagery and image products formed from signals captured during the validation stage of the bistatic SAR research. Image quality and image characteristics are discussed first. Then image products such as two-color multi-view (2CMV) and coherent change detection (CCD) are presented.

  12. Shape in Picture: Mathematical Description of Shape in Grey-Level Images

    Science.gov (United States)

    1992-09-11

    [Only OCR fragments of this scanned proceedings volume survive here: part of a category-theory argument showing, via a commuting diagram, that a construction G is a functor; a citation to Heijmans, H.J.A.M., Dougherty, E.R. (1992), Gray-scale granulometries compatible with spatial scalings, CWI Report BS-R9212, Amsterdam; and a passage on the archetypal pyramidal neuron of the sensorimotor cortex of the cat, whose dendrites receive afferent volleys.]

  13. CNEA's quality system documentation

    International Nuclear Information System (INIS)

    Mazzini, M.M.; Garonis, O.H.

    1998-01-01

    Full text: To obtain an effective and coherent documentation system suitable for CNEA's Quality Management Program, we decided to organize CNEA's quality documentation as: (a) Level 1: quality manual; (b) Level 2: procedures; (c) Level 3: quality plans; (d) Level 4: instructions; (e) Level 5: records and other documents. The objective of this work is to present a standardization of the documentation of CNEA's quality system for facilities, laboratories, services, and R and D activities. Considering the diversity of criteria and formats used by different departments in elaborating their documentation, and since each ultimately includes essentially the same quality management policy, we proposed a system to improve the documentation, avoiding unnecessary waste of time and costs. This will allow each sector to focus on its specific documentation. The quality manuals of the atomic centers fulfill rule 3.6.1 of the Nuclear Regulatory Authority and Safety Series 50-C/SG-Q of the International Atomic Energy Agency. They are designed by groups of competent and highly trained people from different departments. The normative procedures are elaborated with the same methodology as the quality manuals. The quality plans, which describe the organizational structure of the working groups and the appropriate documentation, will complement the quality manuals of the facilities, laboratories, services, and research and development activities of the atomic centers. Responsibility for approval of the normative documentation is assigned to the management in charge of administering economic and human resources, in order to fulfill the institutional objectives. Another improvement, aimed at eliminating unnecessary processes, is the inclusion of all the quality system's normative documentation on the CNEA intranet. (author)

  14. Changing Landscapes in Documentation Efforts: Civil Society Documentation of Serious Human Rights Violations

    Directory of Open Access Journals (Sweden)

    Brianne McGonigle Leyh

    2017-04-01

    Full Text Available Wittingly or unwittingly, civil society actors have long been faced with the task of documenting serious human rights violations. Thirty years ago, such efforts were largely organised by grassroots movements, often with little support or funding from international actors. Sharing information and best practices was difficult. Today that situation has significantly changed. The purpose of this article is to explore the changing landscape of civil society documentation of serious human rights violations, and what that means for standardising and professionalising documentation efforts. Using the recent Hissène Habré case as an example, this article begins by looking at how civil society documentation can successfully influence an accountability process. Next, the article touches upon barriers that continue to impede greater documentation efforts. The article examines the changing landscape of documentation, focusing on technological changes and the rise of citizen journalism and unofficial investigations, using Syria as an example, as well as on the increasing support for documentation efforts both in Syria and worldwide. The changing landscape has resulted in the proliferation of international documentation initiatives aimed at providing local civil society actors guidelines and practical assistance on how to recognise, collect, manage, store and use information about serious human rights violations, as well as on how to minimise the risks associated with the documentation of human rights violations. The recent initiatives undertaken by international civil society, including those by the Public International Law & Policy Group, play an important role in helping to standardise and professionalise documentation work and promote the foundational principles of documentation, namely the ‘do no harm’ principle, and the principles of informed consent and confidentiality. Recognising the drawback that greater professionalisation may bring, it

  15. Quantitative evaluation of low-cost frame-grabber boards for personal computers.

    Science.gov (United States)

    Kofler, J M; Gray, J E; Fuelberth, J T; Taubel, J P

    1995-11-01

    Nine moderately priced frame-grabber boards for both Macintosh (Apple Computers, Cupertino, CA) and IBM-compatible computers were evaluated using a Society of Motion Pictures and Television Engineers (SMPTE) pattern and a video signal generator for dynamic range, gray-scale reproducibility, and spatial integrity of the captured image. The degradation of the video information ranged from minor to severe. Some boards are of reasonable quality for applications in diagnostic imaging and education. However, price and quality are not necessarily directly related.

  16. Documenting the location of systematic transrectal ultrasound-guided prostate biopsies: correlation with multi-parametric MRI.

    Science.gov (United States)

    Turkbey, Baris; Xu, Sheng; Kruecker, Jochen; Locklin, Julia; Pang, Yuxi; Shah, Vijay; Bernardo, Marcelino; Baccala, Angelo; Rastinehad, Ardeshir; Benjamin, Compton; Merino, Maria J; Wood, Bradford J; Choyke, Peter L; Pinto, Peter A

    2011-03-29

    During transrectal ultrasound (TRUS)-guided prostate biopsies, the actual location of the biopsy site is rarely documented. Here, we demonstrate the capability of TRUS-magnetic resonance imaging (MRI) image fusion to document the biopsy site and correlate biopsy results with multi-parametric MRI findings. Fifty consecutive patients (median age 61 years) with a median prostate-specific antigen (PSA) level of 5.8 ng/ml underwent 12-core TRUS-guided biopsy of the prostate. Pre-procedural T2-weighted magnetic resonance images were fused to TRUS. A disposable needle guide with miniature tracking sensors was attached to the TRUS probe to enable fusion with MRI. Real-time TRUS images during biopsy and the corresponding tracking information were recorded. Each biopsy site was superimposed onto the MRI. Each biopsy site was classified as positive or negative for cancer based on the results of each MRI sequence. Sensitivity, specificity, and receiver operating characteristic (ROC) area under the curve (AUC) values were calculated for multi-parametric MRI. Gleason scores for each multi-parametric MRI pattern were also evaluated. A total of 605 systematic biopsy cores were analyzed in 50 patients, of whom 20 patients had 56 positive cores. MRI identified 34 of 56 positive cores. Overall, sensitivity, specificity, and ROC area values for multi-parametric MRI were 0.607, 0.727, and 0.667, respectively. TRUS-MRI fusion after biopsy can be used to document the location of each biopsy site, which can then be correlated with MRI findings. Based on correlation with tracked biopsies, T2-weighted MRI and apparent diffusion coefficient maps derived from diffusion-weighted MRI are the most sensitive sequences, whereas the addition of delayed contrast enhancement MRI and three-dimensional magnetic resonance spectroscopy demonstrated higher specificity consistent with results obtained using radical prostatectomy specimens.

  17. Imaginary Documentary: reflecting upon contemporary documental photography

    Directory of Open Access Journals (Sweden)

    Kátia Hallak Lombardi

    2008-01-01

    Full Text Available This article pursues the idea of an Imaginary Documentary – a possible new inflexion in the practices of contemporary documental photography. The text establishes its theoretical foundations by bringing discussions of documental photography together with the concept of the imaginary in Gilbert Durand and the notion of the Imaginary Museum of André Malraux. Photographers who are part of the history of documental photography are the objects elected to test the potentialities of the Imaginary Documentary.

  18. [A new concept for integration of image databanks into a comprehensive patient documentation].

    Science.gov (United States)

    Schöll, E; Holm, J; Eggli, S

    2001-05-01

    Image processing and archiving are of increasing importance in the practice of modern medicine. Particularly due to the introduction of computer-based investigation methods, physicians are dealing with a wide variety of analogue and digital picture archives. On the other hand, clinical information is stored in various text-based information systems without integration of image components. The link between such traditional medical databases and picture archives is a prerequisite for efficient data management as well as for continuous quality control and medical education. At the Department of Orthopedic Surgery, University of Berne, a software program was developed to create a complete multimedia electronic patient record. The client-server system contains all patients' data, questionnaire-based quality control, and a digital picture archive. Different interfaces guarantee the integration into the hospital's data network. This article describes our experiences in the development and introduction of a comprehensive image archiving system at a large orthopedic center.

  19. Document management in engineering construction

    International Nuclear Information System (INIS)

    Liao Bing

    2008-01-01

    Document management is one important part of systematic quality management, which is one of the key factors to ensure the construction quality. In the engineering construction, quality management and document management shall interwork all the time, to ensure the construction quality. Quality management ensures that the document is correctly generated and adopted, and thus the completeness, accuracy and systematicness of the document satisfy the filing requirements. Document management ensures that the document is correctly transferred during the construction, and various testimonies such as files and records are kept for the engineering construction and its quality management. This paper addresses the document management in the engineering construction based on the interwork of the quality management and document management. (author)

  20. Sub-pixel analysis to support graphic security after scanning at low resolution

    Science.gov (United States)

    Haas, Bertrand; Cordery, Robert; Gou, Hongmei; Decker, Steve

    2006-02-01

    Whether in the domain of audio, video or finance, our world tends to become increasingly digital. However, for diverse reasons, the transition from analog to digital often extends over a long time and proceeds in long steps (and sometimes never completes). One such step is the conversion of information on analog media to digital information. We focus in this paper on the conversion (scanning) of printed documents to digital images. Analog media have the advantage over digital channels that they can harbor much imperceptible information that can be used for fraud detection and forensic purposes. But this secondary information usually fails to be retrieved during the conversion step. This is particularly relevant since the Check-21 act (Check Clearing for the 21st Century act) became effective in 2004 and allows images of checks to be handled by banks as usual paper checks. We use this situation of check scanning as our primary benchmark for graphic security features after scanning. We first present a quick review of the most common graphic security features currently found on checks, with their specific purposes, qualities and disadvantages, and we demonstrate their poor survivability after scanning under the average scanning conditions expected from the Check-21 Act. We then present a novel method for measuring distances between, and rotations of, line elements in a scanned image: based on an appropriate print model, we refine direct measurements to an accuracy beyond the size of a scanning pixel, so we can then determine expected distances, periodicity, sharpness and print quality of known characters, symbols and other graphic elements in a document image. Finally we apply our method to fraud detection of documents after gray-scale scanning at 300 dpi resolution.
We show in particular that alterations on legitimate checks or copies of checks can be successfully detected by measuring with sub-pixel accuracy the irregularities inherently introduced
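
    The centroid idea behind such sub-pixel measurements can be illustrated on a synthetic one-dimensional scan profile; this sketch is an assumption for illustration, not the print-model-based method of the paper:

```python
# Hedged illustration of sub-pixel measurement: locate the centre of a dark
# printed line in a 1-D scan profile to finer-than-pixel accuracy using an
# intensity-weighted centroid of the "ink" (darkness) values.

def line_center(profile, white=255):
    ink = [white - v for v in profile]          # darkness weight per pixel
    total = sum(ink)
    return sum(i * w for i, w in zip(range(len(profile)), ink)) / total

# A line whose true centre lies between pixels 3 and 4:
profile = [255, 255, 200, 60, 60, 200, 255, 255]
print(line_center(profile))                     # -> 3.5, not an integer pixel
```

    Comparing such sub-pixel line positions against the expected distances and periodicity of known graphic elements is one way irregularities introduced by alteration could be flagged.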

  1. Document Level Assessment of Document Retrieval Systems in a Pairwise System Evaluation

    Science.gov (United States)

    Rajagopal, Prabha; Ravana, Sri Devi

    2017-01-01

    Introduction: The use of averaged topic-level scores can result in the loss of valuable data and can cause misinterpretation of the effectiveness of system performance. This study aims to use the scores of each document to evaluate document retrieval systems in a pairwise system evaluation. Method: The chosen evaluation metrics are document-level…

  2. Shoulder dystocia documentation: an evaluation of a documentation training intervention.

    Science.gov (United States)

    LeRiche, Tammy; Oppenheimer, Lawrence; Caughey, Sharon; Fell, Deshayne; Walker, Mark

    2015-03-01

    To evaluate the quality and content of nurse and physician shoulder dystocia delivery documentation before and after MORE training in shoulder dystocia management skills and documentation. Approximately 384 charts at the Ottawa Hospital General Campus involving a diagnosis of shoulder dystocia between the years of 2000 and 2006 excluding the training year of 2003 were identified. The charts were evaluated for 14 key components derived from a validated instrument. The delivery notes were then scored based on these components by 2 separate investigators who were blinded to delivery note author, date, and patient identification to further quantify delivery record quality. Approximately 346 charts were reviewed for physician and nurse delivery documentation. The average score for physician notes was 6 (maximum possible score of 14) both before and after the training intervention. The nurses' average score was 5 before and after the training intervention. Negligible improvement was observed in the content and quality of shoulder dystocia documentation before and after nurse and physician training.

  3. The National Library of Kosovo "PJETER Bogdani" Rapid Condition Assessment and Documentation

    Science.gov (United States)

    Eppich, R.; Ramku, B.; Binakaj, N.

    2017-08-01

    The National Library of Kosovo "Pjetër Bogdani" is a symbol of Prishtina, Kosovo and the quest for knowledge. It is simultaneously an icon of modernity and symbol of the past. Unfortunately, it suffered through the Kosovo war and neglect in times of economic difficulty. It was also unfortunately featured in the British newspaper The Telegraph in their travel section: "One of the world's 30 ugliest buildings?" In late 2015 the Kosovo Architectural Foundation, a non-profit dedicated to spirit of creating and preserving unique architecture, became concerned with the reputation and condition of the Library and contacted the Kosovo Ministry of Culture, visited the site and initiated a project to raise awareness and document this modern masterpiece. The Getty Foundation and their Keeping it Modern grant program awarded funding for initial condition assessment, documentation, capacity building and investigations. This paper discusses the project to document and improve the image and awareness of this important structure and set priorities for its future.

  4. Classification of videocapsule endoscopy image patterns: comparative analysis between patients with celiac disease and normal individuals

    Directory of Open Access Journals (Sweden)

    Ciaccio Edward J

    2010-09-01

    Full Text Available Abstract Background Quantitative disease markers were developed to assess videocapsule images acquired from celiac disease patients with villous atrophy and from control patients. Method Capsule endoscopy videoclip images (576 × 576 pixels) were acquired at a 2/second frame rate (11 celiacs, 10 controls) at regions: 1. bulb, 2. duodenum, 3. jejunum, 4. ileum and 5. distal ileum. Each of 200 images per videoclip (= 100 s) was subdivided into 10 × 10 pixel subimages, for which the mean grayscale brightness level and its standard deviation (texture) were calculated. Pooled subimage values were grouped into low, intermediate, and high texture bands, and the mean brightness, texture, and number of subimages in each band (nine features in all) were used for quantifying regions 1-5 and to determine the three best features for threshold and incremental learning classification. Classifiers were developed using 6 celiac and 5 control patients' data as exemplars, and tested on 5 celiacs and 5 controls. Results Pooled from all regions, the threshold classifier had 80% sensitivity and 96% specificity, and the incremental classifier had 88% sensitivity and 80% specificity, for predicting celiac versus control videoclips in the test set. Trends of increasing texture from regions 1 to 5 occurred in the low and high texture bands in celiacs, and the number of subimages in the low texture band diminished (r2 > 0.5). No trends occurred in controls. Conclusions Celiac videocapsule images have textural properties that vary linearly along the small intestine. Quantitative markers can assist in screening for celiac disease and can localize the extent and degree of pathology throughout the small intestine.
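
    The subimage feature step described above can be sketched as follows; the block size matches the abstract, while the toy image and function names are illustrative assumptions:

```python
import statistics

# Sketch of the feature extraction: split an image into 10x10-pixel
# subimages and compute each subimage's mean grayscale brightness and
# standard deviation ("texture"). The tiny 20x20 test image is synthetic.

def subimage_features(image, block=10):
    feats = []
    for r in range(0, len(image) - block + 1, block):
        for c in range(0, len(image[0]) - block + 1, block):
            vals = [image[r + i][c + j]
                    for i in range(block) for j in range(block)]
            feats.append((statistics.mean(vals), statistics.pstdev(vals)))
    return feats

flat = [[100] * 20 for _ in range(20)]   # uniform image: zero texture
feats = subimage_features(flat)          # four 10x10 blocks
```

    In the study, the pooled (mean, texture) pairs would then be binned into low, intermediate and high texture bands before classification.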

  5. Tank Monitoring and Document control System (TMACS) As Built Software Design Document

    Energy Technology Data Exchange (ETDEWEB)

    GLASSCOCK, J.A.

    2000-01-27

    This document describes the software design for the Tank Monitor and Control System (TMACS). This document captures the existing as-built design of TMACS as of November 1999. It will be used as a reference document by the system maintainers, who will maintain and modify the TMACS functions as necessary. The heart of the TMACS system is the ''point-processing'' functionality, where a sample value is received from the field sensors and the value is analyzed, logged, or alarmed as required. This Software Design Document focuses on the point-processing functions.

  6. Toward Documentation of Program Evolution

    DEFF Research Database (Denmark)

    Vestdam, Thomas; Nørmark, Kurt

    2005-01-01

    The documentation of a program often falls behind the evolution of the program source files. When this happens it may be attractive to shift the documentation mode from updating the documentation to documenting the evolution of the program. This paper describes tools that support the documentation of program evolution... It is concluded that our approach can help revitalize older documentation, and that discovery of the fine-grained program evolution steps helps the programmer in documenting the evolution of the program.

  7. Web document engineering

    International Nuclear Information System (INIS)

    White, B.

    1996-05-01

    This tutorial provides an overview of several document engineering techniques that are applicable to the authoring of World Wide Web documents. It illustrates how pre-WWW hypertext research is applicable to the development of WWW information resources.

  8. The Application Of Open-Source And Free Photogrammetric Software For The Purposes Of Cultural Heritage Documentation

    Directory of Open Access Journals (Sweden)

    Bartoš Karol

    2014-07-01

    Full Text Available The documentation of cultural heritage is an essential part of appropriate care of historical monuments, which represent a part of our history. It is currently a topical issue for which considerable funds are being spent, including for the documentation of immovable historical monuments such as castle ruins. Non-contact surveying technologies, terrestrial laser scanning and digital photogrammetry, are among the most commonly used technologies by which suitable documentation can be obtained; however, their use may be very costly. In recent years, various types of software products and web services based on the SfM (or MVS) method, developed as open-source software or as freely available services and relying on the basic principles of photogrammetry and computer vision, have started to come into the spotlight. Using these services and software, acquired digital images of a given object can be processed into a point cloud, serving directly as a final output or as a basis for further processing. The aim of this paper, based on images of various objects of the Slanec castle ruins obtained by the DSLR Pentax K5, is to assess the suitability of different types of open-source and free software and free web services, and their reliability in terms of surface reconstruction and photo-texture quality, for the purposes of castle ruins documentation.

  9. Enabling outsourcing XDS for imaging on the public cloud.

    Science.gov (United States)

    Ribeiro, Luís S; Rodrigues, Renato P; Costa, Carlos; Oliveira, José Luís

    2013-01-01

    Picture Archiving and Communication System (PACS) has been the main paradigm in supporting medical imaging workflows during the last decades. Despite its consolidation, the appearance of Cross-Enterprise Document Sharing for imaging (XDS-I), within IHE initiative, constitutes a great opportunity to readapt PACS workflow for inter-institutional data exchange. XDS-I provides a centralized discovery of medical imaging and associated reports. However, the centralized XDS-I actors (document registry and repository) must be deployed in a trustworthy node in order to safeguard patient privacy, data confidentiality and integrity. This paper presents XDS for Protected Imaging (XDS-p), a new approach to XDS-I that is capable of being outsourced (e.g. Cloud Computing) while maintaining privacy, confidentiality, integrity and legal concerns about patients' medical information.

  10. Issues and Images: New Yorkers during the Thirties. A Teaching Packet of Historical Documents.

    Science.gov (United States)

    New York State Education Dept., Albany. Cultural Education Center.

    Derived from an exhibit produced cooperatively by the New York State Archives and the New York State Museum for the Franklin D. Roosevelt Centennial, and designed to provide secondary students with first-hand exposure to New York during the Great Depression, this packet contains a teacher's guide and 22 facsimile documents, including historic…

  11. Documents preparation and review

    International Nuclear Information System (INIS)

    1999-01-01

    The Ignalina Safety Analysis Group takes an active role in assisting the regulatory body VATESI to prepare various regulatory documents and in reviewing safety reports and other documentation presented by the Ignalina NPP in the process of licensing unit 1. The list of the main documents prepared and reviewed is presented.

  12. IMAGE User Manual

    Energy Technology Data Exchange (ETDEWEB)

    Stehfest, E; De Waal, L; Oostenrijk, R.

    2010-09-15

    This user manual contains the basic information for running the simulation model IMAGE ('Integrated Model to Assess the Global Environment') of PBL. The motivation for this report was a substantial restructuring of the source code for IMAGE version 2.5. The document gives concise information about the content of the submodels, tells the user how to install the program, describes the directory structure of the run environment, shows how scenarios have to be prepared and run, and gives insight into the restart functionality.

  13. Documentation of Cultural Heritage Objects

    Directory of Open Access Journals (Sweden)

    Jon Grobovšek

    2013-09-01

    Full Text Available EXTENDED ABSTRACT: The first and important phase of documentation of cultural heritage objects is to understand which objects need to be documented. The entire documentation process is determined by the characteristics and scope of the cultural heritage object. The next question to be considered is the expected outcome of the documentation process and the purpose for which it will be used. These two essential guidelines determine each stage of the documentation workflow: the choice of the most appropriate data capturing technology and data processing method, how detailed the documentation should be, what problems may occur, what the expected outcome is, what it will be used for, and the plan for storing data and results. Cultural heritage objects require diverse data capturing and data processing methods. It is important that even the first stages of raw data capturing are oriented towards the applicability of results. Selecting the appropriate working method can facilitate the data processing and the preparation of the final documentation. Documentation of paintings requires a different data capturing method than documentation of buildings or building areas. The purpose of documentation can also be the preservation of contemporary cultural heritage for posterity, or to serve as the basis for future projects and activities on threatened objects. Documentation procedures should be adapted to our needs and capabilities. Captured but unprocessed data are lost unless accompanied by additional analyses and interpretations. Information on tools, procedures and outcomes must be included in the documentation. A thorough analysis of unprocessed but accessible documentation, if adequately stored and accompanied by additional information, enables us to gather useful data. In this way it is possible to upgrade the existing documentation and to avoid data duplication or unintentional misleading of users. The documentation should be archived safely and in a way to meet

  14. Clinically relevant magnetic resonance imaging (MRI) findings in ...

    African Journals Online (AJOL)

    Background: Shoulder pain is the most common and well-documented site of musculoskeletal pain in elite swimmers. Structural abnormalities on magnetic resonance imaging (MRI) of elite swimmers' symptomatic shoulders are common. Little has been documented about the association between MRI findings in the ...

  15. Documentation: Records and Reports.

    Science.gov (United States)

    Akers, Michael J

    2017-01-01

    This article deals with documentation to include the beginning of documentation, the requirements of Good Manufacturing Practice reports and records, and the steps that can be taken to minimize Good Manufacturing Practice documentation problems. It is important to remember that documentation for 503a compounding involves the Formulation Record, Compounding Record, Standard Operating Procedures, Safety Data Sheets, etc. For 503b outsourcing facilities, compliance with Current Good Manufacturing Practices is required, so this article is applicable to them. For 503a pharmacies, one can see the development and modification of Good Manufacturing Practice and even observe changes as they are occurring in 503a documentation requirements and anticipate that changes will probably continue to occur. Copyright© by International Journal of Pharmaceutical Compounding, Inc.

  16. Attractive celebrity and peer images on Instagram: Effect on women's mood and body image.

    Science.gov (United States)

    Brown, Zoe; Tiggemann, Marika

    2016-12-01

    A large body of research has documented that exposure to images of thin fashion models contributes to women's body dissatisfaction. The present study aimed to experimentally investigate the impact of attractive celebrity and peer images on women's body image. Participants were 138 female undergraduate students who were randomly assigned to view either a set of celebrity images, a set of equally attractive unknown peer images, or a control set of travel images. All images were sourced from public Instagram profiles. Results showed that exposure to celebrity and peer images increased negative mood and body dissatisfaction relative to travel images, with no significant difference between celebrity and peer images. This effect was mediated by state appearance comparison. In addition, celebrity worship moderated an increased effect of celebrity images on body dissatisfaction. It was concluded that exposure to attractive celebrity and peer images can be detrimental to women's body image. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Using Color and Grayscale Images to Teach Histology to Color-Deficient Medical Students

    Science.gov (United States)

    Rubin, Lindsay R.; Lackey, Wendy L.; Kennedy, Frances A.; Stephenson, Robert B.

    2009-01-01

    Examination of histologic and histopathologic microscopic sections relies upon differential colors provided by staining techniques, such as hematoxylin and eosin, to delineate normal tissue components and to identify pathologic alterations in these components. Given the prevalence of color deficiency (commonly called "color blindness")…

  18. Instructions for submittal and control of FFTF design documents and design related documentation

    International Nuclear Information System (INIS)

    Grush, R.E.

    1976-10-01

    This document provides the system and requirements for management of FFTF technical data prepared by Westinghouse Hanford (HEDL), the design contractors, the construction contractor, and lower-tier equipment suppliers. Included in this document are provisions for the review, approval, release, change control, and accounting of FFTF design disclosure and base documentation. Also included are provisions for submittal of other design-related documents for review and approval consistent with the applicable requirements of RDT Standard F 2-2, "Quality Assurance Program Requirements."

  19. Visual tool for estimating the fractal dimension of images

    Science.gov (United States)

    Grossu, I. V.; Besliu, C.; Rusu, M. V.; Jipa, Al.; Bordeianu, C. C.; Felea, D.

    2009-10-01

    This work presents a new Visual Basic 6.0 application for estimating the fractal dimension of images, based on an optimized version of the box-counting algorithm. In an attempt to separate the real information from "noise", we also considered the family of all band-pass filters with the same band-width (specified as a parameter). The fractal dimension can thus be represented as a function of the pixel color code. The program was used for the study of cracks in paintings, as an additional tool which can help the critic decide whether an artistic work is original or not.
    Program summary
    Program title: Fractal Analysis v01
    Catalogue identifier: AEEG_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 29 690
    No. of bytes in distributed program, including test data, etc.: 4 967 319
    Distribution format: tar.gz
    Programming language: MS Visual Basic 6.0
    Computer: PC
    Operating system: MS Windows 98 or later
    RAM: 30M
    Classification: 14
    Nature of problem: Estimating the fractal dimension of images.
    Solution method: Optimized implementation of the box-counting algorithm. Use of a band-pass filter for separating the real information from "noise". User-friendly graphical interface.
    Restrictions: Although various file types can be used, the application was mainly conceived for the 8-bit grayscale Windows bitmap file format.
    Running time: In a first approximation, the algorithm is linear.
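    The box-counting algorithm named in the program summary can be illustrated with a short sketch: count the boxes N(s) occupied by the structure at several box sizes s, then fit log N(s) against log s; the negated slope estimates the fractal dimension D. This is a generic NumPy reconstruction of the technique for a binary image, not the distributed Visual Basic code, and the function name is hypothetical.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting (fractal) dimension of a 2-D boolean mask."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        trimmed = mask[:h - h % s, :w - w % s]
        # group pixels into s x s boxes; count boxes containing any set pixel
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, -1, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    # least-squares fit of log N(s) = -D * log s + c
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity check: a completely filled square is 2-dimensional
square = np.ones((128, 128), dtype=bool)
print(round(box_counting_dimension(square), 2))  # → 2.0
```

A fractal boundary such as a network of paint cracks would yield a non-integer value between 1 and 2, which is what makes the estimate useful as a discriminating feature.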

  20. Images crossing borders: image and workflow sharing on multiple levels.

    Science.gov (United States)

    Ross, Peeter; Pohjonen, Hanna

    2011-04-01

    Digitalisation of medical data makes it possible to share images and workflows between related parties. In addition to linear data flow where healthcare professionals or patients are the information carriers, a new type of matrix of many-to-many connections is emerging. Implementation of shared workflow brings challenges of interoperability and legal clarity. Sharing images or workflows can be implemented on different levels with different challenges: inside the organisation, between organisations, across country borders, or between healthcare institutions and citizens. Interoperability issues vary according to the level of sharing and are either technical or semantic, including language. Legal uncertainty increases when crossing national borders. Teleradiology is regulated by multiple European Union (EU) directives and legal documents, which makes interpretation of the legal system complex. To achieve wider use of eHealth and teleradiology several strategic documents were published recently by the EU. Despite EU activities, responsibility for organising, providing and funding healthcare systems remains with the Member States. Therefore, the implementation of new solutions requires strong co-operation between radiologists, societies of radiology, healthcare administrators, politicians and relevant EU authorities. The aim of this article is to describe different dimensions of image and workflow sharing and to analyse legal acts concerning teleradiology in the EU.