WorldWideScience

Sample records for computer image generation

  1. Colour vision and computer-generated images

    International Nuclear Information System (INIS)

    Ramek, Michael

    2010-01-01

Colour vision deficiencies affect approximately 8% of the male and approximately 0.4% of the female population. In this work, it is demonstrated that computer-generated images oftentimes pose unnecessary problems for colour-deficient viewers. Three examples, the visualization of molecular structures, graphs of mathematical functions, and colour-coded images from numerical data, are used to identify problematic colour combinations: red/black, green/black, red/yellow, yellow/white, fuchsia/white, and aqua/white. Alternatives for these combinations are discussed.

  2. CGI delay compensation [Computer Generated Image]

    Science.gov (United States)

    Mcfarland, R. E.

    1986-01-01

Computer-generated graphics in real-time helicopter simulation produce objectionable scene-presentation time delays. In the flight simulation laboratory at Ames Research Center, it has been determined that these delays have an adverse influence on pilot performance during aggressive tasks such as nap-of-the-earth (NOE) maneuvers. With contemporary equipment, computer-generated image (CGI) time delays are an unavoidable consequence of the operations required for scene generation. However, provided that magnitude distortions at higher frequencies are tolerable, delay compensation is possible over a restricted frequency range. This range, assumed to have an upper limit of perhaps 10 or 15 rad/s, conforms approximately to the bandwidth associated with helicopter handling-qualities research. A compensation algorithm is introduced here and evaluated in terms of tradeoffs in frequency response. The algorithm has a discrete basis and accommodates both a large, constant transport delay interval and a periodic delay interval, as associated with asynchronous operations.

  3. Computing Homology Group Generators of Images Using Irregular Graph Pyramids

    OpenAIRE

    Peltier , Samuel; Ion , Adrian; Haxhimusa , Yll; Kropatsch , Walter; Damiand , Guillaume

    2007-01-01

International audience; We introduce a method for computing the homology groups and their generators of a 2D image using a hierarchical structure, i.e. an irregular graph pyramid. Starting from an image, a hierarchy of the image is built by two operations that preserve the homology of each region. Instead of computing homology generators in the base level, where the number of entities (cells) is large, we first reduce the number of cells by a graph pyramid. Then homology generators are computed efficiently on...

  4. Raster Scan Computer Image Generation (CIG) System Based On Refresh Memory

    Science.gov (United States)

    Dichter, W.; Doris, K.; Conkling, C.

    1982-06-01

    A full color, Computer Image Generation (CIG) raster visual system has been developed which provides a high level of training sophistication by utilizing advanced semiconductor technology and innovative hardware and firmware techniques. Double buffered refresh memory and efficient algorithms eliminate the problem of conventional raster line ordering by allowing the generated image to be stored in a random fashion. Modular design techniques and simplified architecture provide significant advantages in reduced system cost, standardization of parts, and high reliability. The major system components are a general purpose computer to perform interfacing and data base functions; a geometric processor to define the instantaneous scene image; a display generator to convert the image to a video signal; an illumination control unit which provides final image processing; and a CRT monitor for display of the completed image. Additional optional enhancements include texture generators, increased edge and occultation capability, curved surface shading, and data base extensions.

  5. Image communication scheme based on dynamic visual cryptography and computer generated holography

    Science.gov (United States)

    Palevicius, Paulius; Ragulskis, Minvydas

    2015-01-01

Computer-generated holograms are often exploited to implement optical encryption schemes. This paper proposes the integration of dynamic visual cryptography (an optical technique based on the interplay of visual cryptography and time-averaging geometric moiré) with the Gerchberg-Saxton algorithm. A stochastic moiré grating is used to embed the secret into a single cover image. The secret can be visually decoded by the naked eye only if the amplitude of harmonic oscillations corresponds to an accurately preselected value. The proposed visual image encryption scheme is based on computer-generated holography, optical time-averaging moiré, and the principles of dynamic visual cryptography. Dynamic visual cryptography is used both for the initial encryption of the secret image and for the final decryption. Phase data of the encrypted image are computed using the Gerchberg-Saxton algorithm. The optical image is decrypted using the computationally reconstructed field of amplitudes.

  6. Single-Frame Cinema. Three Dimensional Computer-Generated Imaging.

    Science.gov (United States)

    Cheetham, Edward Joseph, II

    This master's thesis provides a description of the proposed art form called single-frame cinema, which is a category of computer imagery that takes the temporal polarities of photography and cinema and unites them into a single visual vignette of time. Following introductory comments, individual chapters discuss (1) the essential physical…

  7. Identification of natural images and computer-generated graphics based on statistical and textural features.

    Science.gov (United States)

    Peng, Fei; Li, Jiao-ting; Long, Min

    2015-03-01

To discriminate the acquisition pipelines of digital images, a novel scheme for the identification of natural images and computer-generated graphics is proposed based on statistical and textural features. First, the differences between them are investigated from the viewpoint of statistics and texture, and a 31-dimensional feature vector is extracted for identification. Then, LIBSVM is used for the classification. Finally, the experimental results are presented. The results show that the scheme achieves an identification accuracy of 97.89% for computer-generated graphics and 97.75% for natural images. The analyses also demonstrate that the proposed method has excellent performance compared with some existing methods based only on statistical features or other features. The method has great potential to be implemented for the identification of natural images and computer-generated graphics. © 2014 American Academy of Forensic Sciences.
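The record's pipeline (extract a feature vector per image, then classify) can be illustrated with a minimal stand-in: a nearest-centroid classifier over toy 3-D feature vectors. The feature values, labels, and classifier below are invented for illustration; the paper's actual 31-D features and LIBSVM classifier are not reproduced here.

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(features, centroids):
    """Assign the label of the nearest centroid (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(features, centroids[label]))

# Toy 3-D stand-ins for the paper's 31-D statistical/textural features.
train = {
    "natural": [[0.9, 0.2, 0.5], [0.8, 0.3, 0.6], [0.85, 0.25, 0.55]],
    "computer-generated": [[0.1, 0.8, 0.2], [0.2, 0.9, 0.1], [0.15, 0.85, 0.15]],
}
centroids = {label: centroid(vecs) for label, vecs in train.items()}
print(classify([0.88, 0.22, 0.5], centroids))  # -> natural
```

A real reproduction would swap the nearest-centroid step for an SVM with a suitable kernel, as the paper does with LIBSVM.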

  8. Brain-computer interface based on generation of visual images.

    Directory of Open Access Journals (Sweden)

    Pavel Bobrov

Full Text Available This paper examines the task of recognizing EEG patterns that correspond to performing three mental tasks: relaxation and imagining of two types of pictures: faces and houses. The experiments were performed using two EEG headsets: BrainProducts ActiCap and Emotiv EPOC. The Emotiv headset is becoming widely used in consumer BCI applications, allowing large-scale EEG experiments to be conducted in the future. Since classification accuracy significantly exceeded the level of random classification during the first three days of the experiment with the EPOC headset, a control experiment was performed on the fourth day using the ActiCap. The control experiment showed that utilization of high-quality research equipment can enhance classification accuracy (up to 68% in some subjects) and that the accuracy is independent of the presence of EEG artifacts related to blinking and eye movement. This study also shows that a computationally inexpensive Bayesian classifier based on covariance matrix analysis yields classification accuracy in this problem similar to that of a more sophisticated Multi-class Common Spatial Patterns (MCSP) classifier.

  9. Distinguishing Computer-Generated Graphics from Natural Images Based on Sensor Pattern Noise and Deep Learning

    Directory of Open Access Journals (Sweden)

    Ye Yao

    2018-04-01

Full Text Available Computer-generated graphics (CGs) are images generated by computer software. The rapid development of computer graphics technologies has made it easier to generate photorealistic computer graphics, and these graphics are quite difficult to distinguish from natural images (NIs) with the naked eye. In this paper, we propose a method based on sensor pattern noise (SPN) and deep learning to distinguish CGs from NIs. Before being fed into our convolutional neural network (CNN)-based model, these images (CGs and NIs) are clipped into image patches. Furthermore, three high-pass filters (HPFs) are used to remove low-frequency signals, which represent the image content. These filters are also used to reveal the residual signal as well as the SPN introduced by the digital camera device. Different from the traditional methods of distinguishing CGs from NIs, the proposed method utilizes a five-layer CNN to classify the input image patches. Based on the classification results of the image patches, we deploy a majority vote scheme to obtain the classification results for the full-size images. The experiments have demonstrated that (1) the proposed method with three HPFs can achieve better results than that with only one HPF or no HPF and that (2) the proposed method with three HPFs achieves 100% accuracy, even when the NIs undergo a JPEG compression with a quality factor of 75.
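The final fusion step described above, aggregating per-patch CNN decisions into one full-size-image label by majority vote, can be sketched in a few lines (the label strings are illustrative):

```python
from collections import Counter

def majority_vote(patch_labels):
    """Fuse per-patch classifier decisions into one full-image label
    by taking the most common patch label."""
    return Counter(patch_labels).most_common(1)[0][0]

# e.g. 7 patches from one image: 5 classified natural (NI), 2 computer-generated (CG)
print(majority_vote(["NI", "CG", "NI", "NI", "CG", "NI", "NI"]))  # -> NI
```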

  10. CG2Real: Improving the Realism of Computer Generated Images Using a Large Collection of Photographs.

    Science.gov (United States)

    Johnson, Micah K; Dale, Kevin; Avidan, Shai; Pfister, Hanspeter; Freeman, William T; Matusik, Wojciech

    2011-09-01

    Computer-generated (CG) images have achieved high levels of realism. This realism, however, comes at the cost of long and expensive manual modeling, and often humans can still distinguish between CG and real images. We introduce a new data-driven approach for rendering realistic imagery that uses a large collection of photographs gathered from online repositories. Given a CG image, we retrieve a small number of real images with similar global structure. We identify corresponding regions between the CG and real images using a mean-shift cosegmentation algorithm. The user can then automatically transfer color, tone, and texture from matching regions to the CG image. Our system only uses image processing operations and does not require a 3D model of the scene, making it fast and easy to integrate into digital content creation workflows. Results of a user study show that our hybrid images appear more realistic than the originals.

  11. Three-dimensional imaging using computer-generated holograms synthesized from 3-D Fourier spectra

    International Nuclear Information System (INIS)

    Yatagai, Toyohiko; Miura, Ken-ichi; Sando, Yusuke; Itoh, Masahide

    2008-01-01

Computer-generated holograms (CGHs) synthesized from projection images of real existing objects are considered. A series of projection images is recorded both vertically and horizontally with an incoherent light source and a color CCD. Following the principles of computed tomography (CT), the 3-D Fourier spectrum is calculated from several projection images of the objects, and the Fresnel CGH is synthesized using part of the 3-D Fourier spectrum. This method has the following advantages. First, reconstructed images that are blur-free in any direction are obtained, owing to the two-dimensional scanning used in recording. Second, since simple projection images of the objects, rather than interference fringes, are recorded, a coherent light source is not necessary. Moreover, when a color CCD is used in recording, colorful objects can easily be recorded and reconstructed. Finally, we demonstrate reconstruction of biological objects.

  12. Three-dimensional imaging using computer-generated holograms synthesized from 3-D Fourier spectra

    Energy Technology Data Exchange (ETDEWEB)

Yatagai, Toyohiko; Miura, Ken-ichi; Sando, Yusuke; Itoh, Masahide [University of Tsukuba, Institute of Applied Physics, Tennoudai 1-1-1, Tsukuba, Ibaraki 305-8571 (Japan)], E-mail: yatagai@cc.utsunomiya-u.ac.jp

    2008-11-01

Computer-generated holograms (CGHs) synthesized from projection images of real existing objects are considered. A series of projection images is recorded both vertically and horizontally with an incoherent light source and a color CCD. Following the principles of computed tomography (CT), the 3-D Fourier spectrum is calculated from several projection images of the objects, and the Fresnel CGH is synthesized using part of the 3-D Fourier spectrum. This method has the following advantages. First, reconstructed images that are blur-free in any direction are obtained, owing to the two-dimensional scanning used in recording. Second, since simple projection images of the objects, rather than interference fringes, are recorded, a coherent light source is not necessary. Moreover, when a color CCD is used in recording, colorful objects can easily be recorded and reconstructed. Finally, we demonstrate reconstruction of biological objects.

  13. Computer-Generated Abstract Paintings Oriented by the Color Composition of Images

    Directory of Open Access Journals (Sweden)

    Mao Li

    2017-06-01

Full Text Available Designers and artists often require reference images at authoring time. The emergence of computer technology has provided new conditions and possibilities for artistic creation and research. It has also expanded the forms of artistic expression and attracted many artists, designers, and computer experts to explore different artistic directions and collaborate with one another. In this paper, we present an efficient k-means-based method to segment the colors of an original picture, analyze the composition ratio of the color information, and calculate the individual color areas together with their sizes. This information is transformed into regular geometries to reconstruct the colors of the picture and generate abstract images. Furthermore, we designed an application system using the proposed method and generated many works; some artists and designers have used it as an auxiliary tool for art and design creation. The experimental results demonstrate the effectiveness of our method and can offer inspiration for further work.
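A minimal, pure-Python sketch of the k-means step described above, assuming the goal is the cluster-centroid colours plus each cluster's area ratio; the toy pixel data and parameters are illustrative, not the authors' implementation.

```python
import random

def kmeans_colors(pixels, k=2, n_iter=20, seed=1):
    """Cluster RGB pixels with plain k-means; return each cluster's
    centroid colour and its area fraction of the image."""
    rng = random.Random(seed)
    centers = rng.sample(sorted(set(pixels)), k)  # k distinct starting colours
    groups = [[] for _ in range(k)]
    for _ in range(n_iter):
        groups = [[] for _ in range(k)]
        for p in pixels:                      # assign each pixel to nearest centre
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            groups[nearest].append(p)
        centers = [                           # recompute centroids (keep old if empty)
            tuple(sum(c[d] for c in g) / len(g) for d in range(3)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    n = len(pixels)
    return [(centers[i], len(g) / n) for i, g in enumerate(groups)]

# Toy "image": 6 reddish pixels and 2 bluish pixels -> area ratios 0.75 and 0.25
img = [(250, 10, 10)] * 6 + [(10, 10, 240)] * 2
composition = kmeans_colors(img, k=2)
```

In the paper's system, each (colour, area) pair would then be mapped to a regular geometric shape sized by its ratio to form the abstract image.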

  14. Three-dimensional image acquisition and reconstruction system on a mobile device based on computer-generated integral imaging.

    Science.gov (United States)

    Erdenebat, Munkh-Uchral; Kim, Byeong-Jun; Piao, Yan-Ling; Park, Seo-Yeon; Kwon, Ki-Chul; Piao, Mei-Lan; Yoo, Kwan-Hee; Kim, Nam

    2017-10-01

    A mobile three-dimensional image acquisition and reconstruction system using a computer-generated integral imaging technique is proposed. A depth camera connected to the mobile device acquires the color and depth data of a real object simultaneously, and an elemental image array is generated based on the original three-dimensional information for the object, with lens array specifications input into the mobile device. The three-dimensional visualization of the real object is reconstructed on the mobile display through optical or digital reconstruction methods. The proposed system is implemented successfully and the experimental results certify that the system is an effective and interesting method of displaying real three-dimensional content on a mobile device.

  15. Evaluation of artifacts generated by zirconium implants in cone-beam computed tomography images.

    Science.gov (United States)

    Vasconcelos, Taruska Ventorini; Bechara, Boulos B; McMahan, Clyde Alex; Freitas, Deborah Queiroz; Noujeim, Marcel

    2017-02-01

To evaluate zirconium implant artifact production in cone-beam computed tomography images obtained with different protocols. One zirconium implant was inserted in an edentulous mandible. Twenty scans were acquired with a ProMax 3D unit (Planmeca Oy, Helsinki, Finland), with acquisition settings ranging from 70 to 90 peak kilovoltage (kVp) and voxel sizes of 0.32 and 0.16 mm. A metal artifact reduction (MAR) tool was activated in half of the scans. An axial slice through the middle region of the implant was selected for each dataset. Gray values (mean ± standard deviation) were measured in two regions of interest, one close to and the other distant from the implant (control area). The contrast-to-noise ratio was also calculated. Standard deviation decreased with greater kVp and when the MAR tool was used. The contrast-to-noise ratio was significantly higher when the MAR tool was turned off, except for low resolution with kVp values above 80. Selection of the MAR tool and greater kVp resulted in an overall reduction of artifacts in images acquired with low resolution. Although zirconium implants do produce image artifacts in cone-beam computed tomography scans, the setting that best controlled artifact generation by zirconium implants was 90 kVp at low resolution and with the MAR tool turned on. Copyright © 2016 Elsevier Inc. All rights reserved.
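The contrast-to-noise ratio used above can be computed, under one common definition (an assumption, since the record does not give the formula), as the absolute difference of the two regions' mean gray values divided by the control region's standard deviation:

```python
import math

def contrast_to_noise(roi, control):
    """CNR between a region of interest near the implant and a distant
    control region: |mean difference| / std of the control region."""
    def mean(v):
        return sum(v) / len(v)
    def std(v):
        m = mean(v)
        return math.sqrt(sum((x - m) ** 2 for x in v) / len(v))
    return abs(mean(roi) - mean(control)) / std(control)

# Toy gray values: artifact-brightened ROI vs a quiet control area
print(contrast_to_noise([140, 150, 160], [100, 102, 98]))
```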

  16. Computer Software Configuration Item-Specific Flight Software Image Transfer Script Generator

    Science.gov (United States)

    Bolen, Kenny; Greenlaw, Ronald

    2010-01-01

A K-shell UNIX script enables the International Space Station (ISS) Flight Control Team (FCT) operators in NASA's Mission Control Center (MCC) in Houston to transfer an entire or partial computer software configuration item (CSCI) from a flight software compact disk (CD) to the onboard Portable Computer System (PCS). The tool is designed to read the content stored on a flight software CD and generate individual CSCI transfer scripts that are capable of transferring the flight software content in a given subdirectory on the CD to the scratch directory on the PCS. The flight control team can then transfer the flight software from the PCS scratch directory to the Electronically Erasable Programmable Read Only Memory (EEPROM) of an ISS Multiplexer/ Demultiplexer (MDM) via the Indirect File Transfer capability. The individual CSCI scripts and the CSCI Specific Flight Software Image Transfer Script Generator (CFITSG), when executed a second time, will remove all components from their original execution. The tool will identify errors in the transfer process and create logs of the transferred software for the purposes of configuration management.

  17. Encryption and display of multiple-image information using computer-generated holography with modified GS iterative algorithm

    Science.gov (United States)

    Xiao, Dan; Li, Xiaowei; Liu, Su-Juan; Wang, Qiong-Hua

    2018-03-01

In this paper, a new scheme of multiple-image encryption and display based on computer-generated holography (CGH) and maximum-length cellular automata (MLCA) is presented. In this scheme, a computer-generated hologram containing the information of three primitive images is first generated by a modified Gerchberg-Saxton (GS) iterative algorithm using three different fractional orders in the fractional Fourier domain. The hologram is then encrypted using an MLCA mask. The ciphertext can be decrypted by combining the fractional orders with the rules of the MLCA. Numerical simulations and experimental display results have been carried out to verify the validity and feasibility of the proposed scheme.
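As a rough sketch of the phase-retrieval core, the classical Gerchberg-Saxton loop with ordinary FFTs is shown below; the record's modification (three fractional Fourier orders) and the MLCA encryption stage are not reproduced.

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=100, seed=0):
    """Classical GS loop: find a phase-only hologram whose inverse FFT
    reproduces the target amplitude in the image plane."""
    rng = np.random.default_rng(seed)
    field = target_amp * np.exp(1j * rng.uniform(0, 2 * np.pi, target_amp.shape))
    holo_phase = np.zeros_like(target_amp)
    for _ in range(n_iter):
        holo = np.fft.fft2(field)                          # image -> hologram plane
        holo_phase = np.angle(holo)                        # keep phase only
        field = np.fft.ifft2(np.exp(1j * holo_phase))      # back to image plane
        field = target_amp * np.exp(1j * np.angle(field))  # enforce target amplitude
    return holo_phase

# Reconstruction: illuminate the phase hologram and transform back
target = np.zeros((32, 32))
target[12:20, 12:20] = 1.0        # toy binary target image
phase = gerchberg_saxton(target)
recon = np.abs(np.fft.ifft2(np.exp(1j * phase)))
```

The reconstruction is speckled but strongly correlated with the target; a fractional-order variant would replace `fft2`/`ifft2` with fractional Fourier transforms of the chosen orders.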

  18. Computed tomography image using sub-terahertz waves generated from a high-Tc superconducting intrinsic Josephson junction oscillator

    International Nuclear Information System (INIS)

    Kashiwagi, T.; Minami, H.; Kadowaki, K.; Nakade, K.; Saiwai, Y.; Kitamura, T.; Watanabe, C.; Ishida, K.; Sekimoto, S.; Asanuma, K.; Yasui, T.; Shibano, Y.; Tsujimoto, M.; Yamamoto, T.; Marković, B.; Mirković, J.; Klemm, R. A.

    2014-01-01

A computed tomography (CT) imaging system using monochromatic sub-terahertz coherent electromagnetic waves generated from a device constructed from the intrinsic Josephson junctions in a single crystalline mesa structure of the high-Tc superconductor Bi2Sr2CaCu2O8+δ was developed and tested on three samples: standing metallic rods supported by styrofoam, a dried plant (heart pea) containing seeds, and a plastic doll inside an egg shell. The images obtained strongly suggest that this CT imaging system may be useful for a variety of practical applications.

  19. Adult congenital heart disease imaging with second-generation dual-source computed tomography: initial experiences and findings.

    Science.gov (United States)

    Ghoshhajra, Brian B; Sidhu, Manavjot S; El-Sherief, Ahmed; Rojas, Carlos; Yeh, Doreen Defaria; Engel, Leif-Christopher; Liberthson, Richard; Abbara, Suhny; Bhatt, Ami

    2012-01-01

    Adult congenital heart disease patients present a unique challenge to the cardiac imager. Patients may present with both acute and chronic manifestations of their complex congenital heart disease and also require surveillance for sequelae of their medical and surgical interventions. Multimodality imaging is often required to clarify their anatomy and physiology. Radiation dose is of particular concern in these patients with lifelong imaging needs for their chronic disease. The second-generation dual-source scanner is a recently available advanced clinical cardiac computed tomography (CT) scanner. It offers a combination of the high-spatial resolution of modern CT, the high-temporal resolution of dual-source technology, and the wide z-axis coverage of modern cone-beam geometry CT scanners. These advances in technology allow novel protocols that markedly reduce scan time, significantly reduce radiation exposure, and expand the physiologic imaging capabilities of cardiac CT. We present a case series of complicated adult congenital heart disease patients imaged by the second-generation dual-source CT scanner with extremely low-radiation doses and excellent image quality. © 2012 Wiley Periodicals, Inc.

  20. Applying a new computer-aided detection scheme generated imaging marker to predict short-term breast cancer risk

    Science.gov (United States)

    Mirniaharikandehei, Seyedehnafiseh; Hollingsworth, Alan B.; Patel, Bhavika; Heidari, Morteza; Liu, Hong; Zheng, Bin

    2018-05-01

This study aims to investigate the feasibility of identifying a new quantitative imaging marker based on false-positives generated by a computer-aided detection (CAD) scheme to help predict short-term breast cancer risk. An image dataset including four-view mammograms acquired from 1044 women was retrospectively assembled. All mammograms were originally interpreted as negative by radiologists. In the next subsequent mammography screening, 402 women were diagnosed with breast cancer and 642 remained negative. An existing CAD scheme was applied 'as is' to process each image. From CAD-generated results, four detection features, including the total number of (1) initial detection seeds and (2) final detected false-positive regions, and the (3) average and (4) sum of detection scores, were computed from each image. Then, by combining the features computed from the two bilateral images of the left and right breasts from either the craniocaudal or mediolateral oblique view, two logistic regression models were trained and tested using a leave-one-case-out cross-validation method to predict the likelihood of each testing case being positive in the next subsequent screening. The new prediction model yielded a maximum prediction accuracy with an area under the ROC curve of AUC = 0.65 ± 0.017 and a maximum adjusted odds ratio of 4.49 with a 95% confidence interval of (2.95, 6.83). The results also showed an increasing trend in the adjusted odds ratio and risk prediction scores (p  breast cancer risk.

  1. The use of computer-generated color graphic images for transient thermal analysis. [for hypersonic aircraft

    Science.gov (United States)

    Edwards, C. L. W.; Meissner, F. T.; Hall, J. B.

    1979-01-01

    Color computer graphics techniques were investigated as a means of rapidly scanning and interpreting large sets of transient heating data. The data presented were generated to support the conceptual design of a heat-sink thermal protection system (TPS) for a hypersonic research airplane. Color-coded vector and raster displays of the numerical geometry used in the heating calculations were employed to analyze skin thicknesses and surface temperatures of the heat-sink TPS under a variety of trajectory flight profiles. Both vector and raster displays proved to be effective means for rapidly identifying heat-sink mass concentrations, regions of high heating, and potentially adverse thermal gradients. The color-coded (raster) surface displays are a very efficient means for displaying surface-temperature and heating histories, and thereby the more stringent design requirements can quickly be identified. The related hardware and software developments required to implement both the vector and the raster displays for this application are also discussed.

  2. Needs assessment for next generation computer-aided mammography reference image databases and evaluation studies.

    Science.gov (United States)

    Horsch, Alexander; Hapfelmeier, Alexander; Elter, Matthias

    2011-11-01

    Breast cancer is globally a major threat for women's health. Screening and adequate follow-up can significantly reduce the mortality from breast cancer. Human second reading of screening mammograms can increase breast cancer detection rates, whereas this has not been proven for current computer-aided detection systems as "second reader". Critical factors include the detection accuracy of the systems and the screening experience and training of the radiologist with the system. When assessing the performance of systems and system components, the choice of evaluation methods is particularly critical. Core assets herein are reference image databases and statistical methods. We have analyzed characteristics and usage of the currently largest publicly available mammography database, the Digital Database for Screening Mammography (DDSM) from the University of South Florida, in literature indexed in Medline, IEEE Xplore, SpringerLink, and SPIE, with respect to type of computer-aided diagnosis (CAD) (detection, CADe, or diagnostics, CADx), selection of database subsets, choice of evaluation method, and quality of descriptions. 59 publications presenting 106 evaluation studies met our selection criteria. In 54 studies (50.9%), the selection of test items (cases, images, regions of interest) extracted from the DDSM was not reproducible. Only 2 CADx studies, not any CADe studies, used the entire DDSM. The number of test items varies from 100 to 6000. Different statistical evaluation methods are chosen. Most common are train/test (34.9% of the studies), leave-one-out (23.6%), and N-fold cross-validation (18.9%). Database-related terminology tends to be imprecise or ambiguous, especially regarding the term "case". Overall, both the use of the DDSM as data source for evaluation of mammography CAD systems, and the application of statistical evaluation methods were found highly diverse. Results reported from different studies are therefore hardly comparable. 

  3. Image scaling curve generation

    NARCIS (Netherlands)

    2012-01-01

    The present invention relates to a method of generating an image scaling curve, where local saliency is detected in a received image. The detected local saliency is then accumulated in the first direction. A final scaling curve is derived from the detected local saliency and the image is then

  4. Image scaling curve generation.

    NARCIS (Netherlands)

    2011-01-01

    The present invention relates to a method of generating an image scaling curve, where local saliency is detected in a received image. The detected local saliency is then accumulated in the first direction. A final scaling curve is derived from the detected local saliency and the image is then

  5. Computer generated holographic microtags

    International Nuclear Information System (INIS)

    Sweatt, W.C.

    1998-01-01

    A microlithographic tag comprising an array of individual computer generated holographic patches having feature sizes between 250 and 75 nanometers is disclosed. The tag is a composite hologram made up of the individual holographic patches and contains identifying information when read out with a laser of the proper wavelength and at the proper angles of probing and reading. The patches are fabricated in a steep angle Littrow readout geometry to maximize returns in the -1 diffracted order. The tags are useful as anti-counterfeiting markers because of the extreme difficulty in reproducing them. 5 figs

  6. Second harmonic generation imaging

    CERN Document Server

    2013-01-01

    Second-harmonic generation (SHG) microscopy has shown great promise for imaging live cells and tissues, with applications in basic science, medical research, and tissue engineering. Second Harmonic Generation Imaging offers a complete guide to this optical modality, from basic principles, instrumentation, methods, and image analysis to biomedical applications. The book features contributions by experts in second-harmonic imaging, including many pioneering researchers in the field. Written for researchers at all levels, it takes an in-depth look at the current state of the art and possibilities of SHG microscopy. Organized into three sections, the book: Provides an introduction to the physics of the process, step-by-step instructions on how to build an SHG microscope, and comparisons with related imaging techniques Gives an overview of the capabilities of SHG microscopy for imaging tissues and cells—including cell membranes, muscle, collagen in tissues, and microtubules in live cells—by summarizing experi...

  7. Designing the next generation (fifth generation computers)

    International Nuclear Information System (INIS)

    Wallich, P.

    1983-01-01

    A description is given of the designs necessary to develop fifth generation computers. An analysis is offered of problems and developments in parallelism, VLSI, artificial intelligence, knowledge engineering and natural language processing. Software developments are outlined including logic programming, object-oriented programming and exploratory programming. Computer architecture is detailed including concurrent computer architecture

  8. Computer generation of random deviates

    International Nuclear Information System (INIS)

    Cormack, John

    1991-01-01

    The need for random deviates arises in many scientific applications. In medical physics, Monte Carlo simulations have been used in radiology, radiation therapy and nuclear medicine. Specific instances include the modelling of x-ray scattering processes and the addition of random noise to images or curves in order to assess the effects of various processing procedures. Reliable sources of random deviates with statistical properties indistinguishable from true random deviates are a fundamental necessity for such tasks. This paper provides a review of computer algorithms which can be used to generate uniform random deviates and other distributions of interest to medical physicists, along with a few caveats relating to various problems and pitfalls which can occur. Source code listings for the generators discussed (in FORTRAN, Turbo-PASCAL and Data General ASSEMBLER) are available on request from the authors. 27 refs., 3 tabs., 5 figs
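One classic generator of the kind this review surveys (an illustrative choice, not necessarily one of the paper's specific algorithms) is the Box-Muller transform, which turns pairs of uniform deviates into standard-normal deviates:

```python
import math
import random

def box_muller(n, seed=42):
    """Return n standard-normal deviates built from pairs of uniform
    deviates via the Box-Muller transform."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        u1 = 1.0 - rng.random()  # shift into (0, 1] so log(u1) is finite
        u2 = rng.random()
        r = math.sqrt(-2.0 * math.log(u1))
        out.append(r * math.cos(2.0 * math.pi * u2))
        out.append(r * math.sin(2.0 * math.pi * u2))
    return out[:n]

sample = box_muller(10000)
mean = sum(sample) / len(sample)
var = sum((x - mean) ** 2 for x in sample) / len(sample)
```

As the review's caveats suggest, production Monte Carlo work should prefer a well-vetted library generator over hand-rolled code like this sketch.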

  9. A new generation in computing

    International Nuclear Information System (INIS)

    Kahn, R.E.

    1983-01-01

The fifth generation of computers is described. The three disciplines involved in bringing such a new generation to reality are microelectronics; artificial intelligence; and computer systems and architecture. Applications in industry, offices, aerospace, education, health care and retailing are outlined. An analysis is given of research efforts in the US, Japan, UK, and Europe. Fifth-generation programming languages are detailed

  10. Computed tomography image using sub-terahertz waves generated from a high-T{sub c} superconducting intrinsic Josephson junction oscillator

    Energy Technology Data Exchange (ETDEWEB)

    Kashiwagi, T., E-mail: kashiwagi@ims.tsukuba.ac.jp; Minami, H.; Kadowaki, K. [Graduate School of Pure and Applied Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba (Japan); Division of Materials Science, Faculty of Pure and Applied Sciences, University of Tsukuba, 1-1-1, Tennodai, Tsukuba, Ibaraki 305-8573 (Japan); Nakade, K.; Saiwai, Y.; Kitamura, T.; Watanabe, C.; Ishida, K.; Sekimoto, S.; Asanuma, K.; Yasui, T.; Shibano, Y. [Graduate School of Pure and Applied Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba (Japan); Tsujimoto, M. [Department of Electronic Science and Engineering, Kyoto University, Nishikyo-ku, Kyoto 615-8510 (Japan); Yamamoto, T. [Wide Bandgap Materials Group, Optical and Electronic Materials Unit, Environment and Energy Materials Division, National Institute for Materials Science, 1-1 Namiki, Tsukuba, Ibaraki 305-0044 (Japan); Marković, B. [Faculty of Sciences, University of Montenegro, George Washington Str., 81000 Podgorica (Montenegro); Mirković, J. [Faculty of Science, University of Montenegro, and CETI, Put Radomira Ivanovica, 81000 Podgorica (Montenegro); Klemm, R. A. [Department of Physics, University of Central Florida, 4000 Central Florida Blvd., Orlando, Florida 32816-2385 (United States)

    2014-02-24

    A computed tomography (CT) imaging system using monochromatic sub-terahertz coherent electromagnetic waves generated from a device constructed from the intrinsic Josephson junctions in a single crystalline mesa structure of the high-T{sub c} superconductor Bi{sub 2}Sr{sub 2}CaCu{sub 2}O{sub 8+δ} was developed and tested on three samples: Standing metallic rods supported by styrofoam, a dried plant (heart pea) containing seeds, and a plastic doll inside an egg shell. The images obtained strongly suggest that this CT imaging system may be useful for a variety of practical applications.

  11. Computational fluid dynamics and particle image velocimetry assisted design tools for a new generation of trochoidal gear pumps

    Directory of Open Access Journals (Sweden)

    M Garcia-Vilchez

    2015-06-01

    Full Text Available Trochoidal gear pumps produce significant flow pulsations that result in pressure pulsations, which interact with the system to which they are connected, shortening the life of both the pump and circuit components. The complicated aspects of the operation of a gerotor pump make computational fluid dynamics the proper tool for modelling and simulating its flow characteristics. A three-dimensional computational fluid dynamics model with a deforming mesh is presented, including the effects of manufacturing tolerance and leakage inside the pump. A new boundary condition is created for the simulation of solid contact in the interteeth radial clearance. The experimental study of the pump is carried out by means of time-resolved particle image velocimetry, and the results are evaluated qualitatively with the aid of the numerical simulation results. Time-resolved particle image velocimetry is adapted to the gerotor pump and proves a feasible way to obtain the instantaneous flow of the pump directly, which would allow the determination of geometries that minimize undesired flow pulsations. Thus, a new methodology involving computational fluid dynamics and time-resolved particle image velocimetry is presented, which allows the instantaneous flow of the pump to be obtained directly without significantly altering its behaviour.

  12. Knowledge Generation as Natural Computation

    Directory of Open Access Journals (Sweden)

    Gordana Dodig-Crnkovic

    2008-04-01

    Full Text Available Knowledge generation can be naturalized by adopting a computational model of cognition and an evolutionary approach. In this framework, knowledge is seen as a result of the structuring of input data (data → information → knowledge) by an interactive computational process going on in the agent during the adaptive interplay with the environment, which presents a clear developmental advantage by increasing the agent's ability to cope with the dynamics of the situation. This paper addresses the mechanism of knowledge generation, a process that may be modeled as natural computation in order to be better understood and improved.

  13. The influence of leg-to-body ratio (LBR) on judgments of female physical attractiveness: assessments of computer-generated images varying in LBR.

    Science.gov (United States)

    Frederick, David A; Hadji-Michael, Maria; Furnham, Adrian; Swami, Viren

    2010-01-01

    The leg-to-body ratio (LBR), which is reliably associated with developmental stability and health outcomes, is an understudied component of human physical attractiveness. Several studies examining the effects of LBR on aesthetic judgments have been limited by the reliance on stimuli composed of hand-drawn silhouettes. In the present study, we developed a new set of female computer-generated images portraying eight levels of LBR that fell within the typical range of human variation. A community sample of 207 Britons in London and students from two samples drawn from a US university (Ns=940, 114) rated the physical attractiveness of the images. We found that mid-ranging female LBRs were perceived as maximally attractive. The present research overcomes some of the problems associated with past work on LBR and aesthetic preferences through use of computer-generated images rather than hand-drawn images and provides an instrument that may be useful in future investigations of LBR preferences. Copyright 2009 Elsevier Ltd. All rights reserved.

  14. Architectures for single-chip image computing

    Science.gov (United States)

    Gove, Robert J.

    1992-04-01

    This paper will focus on the architectures of VLSI programmable processing components for image computing applications. TI, the maker of industry-leading RISC, DSP, and graphics components, has developed an architecture for a new generation of image processors capable of implementing a plurality of image, graphics, video, and audio computing functions. We will show that a single-chip heterogeneous MIMD parallel architecture best suits this class of processors--those which will dominate the desktop multimedia, document imaging, computer graphics, and visualization systems of this decade.

  15. Next generation thermal imaging

    International Nuclear Information System (INIS)

    Marche, P.P.

    1988-01-01

    The best design of high performance thermal imagers for the 1990s will use horizontal quasi-linear arrays with focal plane processing associated with a simple vertical mechanical scanner. These imagers will have performance that is greatly improved compared to that of present-day devices (50 to 100 percent range and resolution improvement). 5 references

  16. Image quality and artefact generation post-cerebral aneurysm clipping using a 64-row multislice computer tomography angiography (MSCTA) technology: A retrospective study and review of the literature.

    Science.gov (United States)

    Zachenhofer, Iris; Cejna, Manfred; Schuster, Antonius; Donat, Markus; Roessler, Karl

    2010-06-01

    Computed tomography angiography (CTA) is a time- and cost-saving investigation for the postoperative evaluation of patients with clipped cerebral aneurysms. A retrospective study was conducted to analyse image quality and artefact generation due to implanted aneurysm clips using a new technology. MSCTA was performed pre- and postoperatively using a Philips Brilliance 64-detector-row CT scanner. Altogether, 32 clipping sites were analysed in 27 patients (11 female and 16 male; mean age 52 years, range 24 to 72 years). The mean number of clips per aneurysm was 2.3 (range 1 to 4); 54 clips were made of titanium alloy and 5 of cobalt alloy. Overall, mean image quality was rated 1.8 on a scale from 1 (very good) to 5 (unserviceable), and clip artefacts were rated 2.4 mean on a 5-point rating scale (1 no artefacts, 5 unserviceable due to artefacts). A significant loss of image quality and rise in artefacts was found when using cobalt alloy clips (1.4 versus 4.2 and 2.1 versus 4.0). In 72% of all investigations, excellent image quality was found. Excluding the cobalt clip group, 85% of scans showed excellent image quality. Artefacts were absent or minimal (grade 1 or 2) in 69% of all investigations and in 81% in the pure titanium clip group. In 64-row MSCTA of good image quality with low artefacts, it was possible to detect small aneurysm remnants of 2 mm size in individual patients. By using titanium alloy clips, up to 85% of the postoperative CTA images in our study were of excellent quality, with absent or minimal artefacts in 81%, and seem adequate to detect small aneurysm remnants. Copyright 2010 Elsevier B.V. All rights reserved.

  17. Rotational control of computer generated holograms.

    Science.gov (United States)

    Preece, Daryl; Rubinsztein-Dunlop, Halina

    2017-11-15

    We develop a basis for three-dimensional rotation of arbitrary light fields created by computer generated holograms. By adding an extra phase function into the kinoform, any light field or holographic image can be tilted in the focal plane with minimized distortion. We present two different approaches to rotate an arbitrary hologram: the Scheimpflug method and a novel coordinate transformation method. Experimental results are presented to demonstrate the validity of both proposed methods.
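The mechanism of adding an extra phase function to a kinoform can be illustrated with the simplest case, a linear phase ramp, which translates the reconstruction in the focal (Fourier) plane; the paper's Scheimpflug and coordinate-transformation methods generalise this idea to out-of-plane tilts. A hedged numpy sketch, not the authors' code:

```python
# Sketch: a phase-only hologram (kinoform) of a simple target, then the
# same kinoform with a linear phase ramp added. By the DFT shift theorem,
# the ramp circularly shifts the reconstructed intensity along x.
import numpy as np

N = 64
target = np.zeros((N, N))
target[20:30, 20:30] = 1.0                       # simple square "image"

kinoform = np.angle(np.fft.ifft2(target))        # phase-only hologram

y, x = np.mgrid[0:N, 0:N]
shift = 10                                       # pixels to move the image
ramp = 2 * np.pi * shift * x / N                 # the extra phase function

recon0 = np.abs(np.fft.fft2(np.exp(1j * kinoform)))          # original
recon1 = np.abs(np.fft.fft2(np.exp(1j * (kinoform + ramp)))) # shifted copy
```

Here `recon1` equals `recon0` circularly rolled by `shift` pixels; replacing the ramp with a more elaborate phase function is what allows tilting rather than merely translating the field.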

  18. Computer-Generated Feedback on Student Writing

    Science.gov (United States)

    Ware, Paige

    2011-01-01

    A distinction must be made between "computer-generated scoring" and "computer-generated feedback". Computer-generated scoring refers to the provision of automated scores derived from mathematical models built on organizational, syntactic, and mechanical aspects of writing. In contrast, computer-generated feedback, the focus of this article, refers…

  19. A Lightweight Compact Multi-Spectral Imager Using Novel Computer-Generated Micro-Optics and Spectral-Extraction Algorithms

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this NASA Early-stage research proposal is to demonstrate an ultra-compact, lightweight broadband hyper- and multi-spectral imaging system that is...

  20. Computational intelligence in biomedical imaging

    CERN Document Server

    2014-01-01

    This book provides a comprehensive overview of state-of-the-art computational intelligence research and technologies in biomedical imaging, with emphasis on biomedical decision making. Biomedical imaging offers useful information on patients' medical conditions and clues to the causes of their symptoms and diseases. Biomedical imaging, however, produces large numbers of images that physicians must interpret. Computer aids are therefore in demand and have become indispensable in physicians' decision making. This book discusses major technical advancements and research findings in the field of computational intelligence in biomedical imaging, for example, computational intelligence in computer-aided diagnosis for breast cancer, prostate cancer, and brain disease, in lung function analysis, and in radiation therapy. The book examines technologies and studies that have reached the practical level, and those technologies that are rapidly becoming available in clinical practice in hospitals, such as computational inte...

  1. Generation connected with images

    Directory of Open Access Journals (Sweden)

    Adriana RECAMÁN PAYO

    2011-12-01

    Full Text Available In learning contexts, studying the image as a focus of sensitive knowledge and formative purposes is crucial to achieving high levels of quality and educational excellence. As Renobell (2005) stated, image analysis encourages the development of critical capacity and contributes to developing a personal style for the gradual acquisition of a visual culture. Images educate and, consequently, their presence in the field of education should not be a mere accompaniment to the text. They should not be limited to adorning or illustrating a linguistic content but should complement and deepen it, activating the thought and reflection of the reader. In Internet culture, images as a focus of knowledge, of shared use, of social content and educational purposes, help to explain the implications and vivacity around this technological environment, which plays a leading role in current social changes and movements. The culture of the network has changed our perceptual sensitivity to interpret images, which are now more complex, integrated, multidimensional and dynamic than ever. The interactivity, the strong relationship with the text content, the graphic sequentiality, the associated sound effects or the iconical text design reveal the

  2. Computational methods for molecular imaging

    CERN Document Server

    Shi, Kuangyu; Li, Shuo

    2015-01-01

    This volume contains original submissions on the development and application of molecular imaging computing. The editors invited authors to submit high-quality contributions on a wide range of topics including, but not limited to: • Image Synthesis & Reconstruction of Emission Tomography (PET, SPECT) and other Molecular Imaging Modalities • Molecular Imaging Enhancement • Data Analysis of Clinical & Pre-clinical Molecular Imaging • Multi-Modal Image Processing (PET/CT, PET/MR, SPECT/CT, etc.) • Machine Learning and Data Mining in Molecular Imaging. Molecular imaging is an evolving clinical and research discipline enabling the visualization, characterization and quantification of biological processes taking place at the cellular and subcellular levels within intact living subjects. Computational methods play an important role in the development of molecular imaging, from image synthesis to data analysis and from clinical diagnosis to therapy individualization. This work will bring readers fro...

  3. Computational scalability of large size image dissemination

    Science.gov (United States)

    Kooper, Rob; Bajcsy, Peter

    2011-01-01

    We have investigated the computational scalability of the image pyramid building needed for the dissemination of very large image data. The sources of large images include high resolution microscopes and telescopes, remote sensing and airborne imaging, and high resolution scanners. The term 'large' is understood from a user perspective, meaning either larger than a display size or larger than the memory/disk holding the image data. The application drivers for our work are digitization projects such as the Lincoln Papers project (each image scan is about 100-150MB or about 5000x8000 pixels, with the total number to be around 200,000) and the UIUC library scanning project for historical maps from the 17th and 18th century (a smaller number of larger images). The goal of our work is to understand the computational scalability of web-based dissemination using image pyramids for these large image scans, as well as the preservation aspects of the data. We report our computational benchmarks for (a) building image pyramids to be disseminated using the Microsoft Seadragon library, (b) a computation execution approach using hyper-threading to generate image pyramids and to utilize the underlying hardware, and (c) an image pyramid preservation approach using various hard drive configurations of Redundant Array of Independent Disks (RAID) drives for input/output operations. The benchmarks are obtained with a map (334.61 MB, JPEG format, 17591x15014 pixels). The discussion combines the speed and preservation objectives.
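The pyramid construction benchmarked above can be sketched as repeated 2x2 block averaging until a smallest level remains, which is the structure a tiled web viewer streams. This is an illustration only; the paper's actual pipeline targets the Microsoft Seadragon tile format and is not reproduced here:

```python
# Sketch: build an image pyramid by halving resolution via 2x2 block
# averaging at each level, coarsest level last.
import numpy as np

def build_pyramid(img, min_size=1):
    """Return a list of levels, full resolution first."""
    levels = [img]
    while min(img.shape[:2]) > min_size:
        h, w = img.shape[:2]
        img = img[:h // 2 * 2, :w // 2 * 2]      # crop to even dimensions
        img = (img[0::2, 0::2] + img[1::2, 0::2] +
               img[0::2, 1::2] + img[1::2, 1::2]) / 4.0
        levels.append(img)
    return levels

pyramid = build_pyramid(np.ones((128, 96)))
shapes = [lvl.shape for lvl in pyramid]          # (128, 96), (64, 48), ...
```

Each level costs a quarter of the previous one, so the whole pyramid adds only about a third to the storage of the base image, which is what makes pyramid dissemination tractable at scale.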

  4. Comparison of Intensity-Modulated Radiotherapy Planning Based on Manual and Automatically Generated Contours Using Deformable Image Registration in Four-Dimensional Computed Tomography of Lung Cancer Patients

    International Nuclear Information System (INIS)

    Weiss, Elisabeth; Wijesooriya, Krishni; Ramakrishnan, Viswanathan; Keall, Paul J.

    2008-01-01

    Purpose: To evaluate the implications of differences between contours drawn manually and contours generated automatically by deformable image registration for four-dimensional (4D) treatment planning. Methods and Materials: In 12 lung cancer patients intensity-modulated radiotherapy (IMRT) planning was performed for both manual contours and automatically generated ('auto') contours in mid and peak expiration of 4D computed tomography scans, with the manual contours in peak inspiration serving as the reference for the displacement vector fields. Manual and auto plans were analyzed with respect to their coverage of the manual contours, which were assumed to represent the anatomically correct volumes. Results: Auto contours were on average larger than manual contours by up to 9%. Objective scores, D2% and D98% of the planning target volume, homogeneity and conformity indices, and coverage of normal tissue structures (lungs, heart, esophagus, spinal cord) at defined dose levels were not significantly different between plans (p = 0.22-0.94). Differences were statistically insignificant for the generalized equivalent uniform dose of the planning target volume (p = 0.19-0.94) and normal tissue complication probabilities for lung and esophagus (p = 0.13-0.47). Dosimetric differences >2% or >1 Gy were more frequent in patients with auto/manual volume differences ≥10% (p = 0.04). Conclusions: The applied deformable image registration algorithm produces clinically plausible auto contours in the majority of structures. At this stage clinical supervision of the auto contouring process is required, and manual interventions may become necessary. Before routine use, further investigations are required, particularly to reduce imaging artifacts

  5. Generative Interpretation of Medical Images

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille

    2004-01-01

    This thesis describes, proposes and evaluates methods for automated analysis and quantification of medical images. A common theme is the usage of generative methods, which draw inference from unknown images by synthesising new images having shape, pose and appearance similar to the analysed images..., handling of non-Gaussian variation by means of cluster analysis, correction of respiratory noise in cardiac MRI, and the extensions to multi-slice two-dimensional time-series and bi-temporal three-dimensional models. The medical applications include automated estimation of: left ventricular ejection...

  6. Automatic Substitute Computed Tomography Generation and Contouring for Magnetic Resonance Imaging (MRI)-Alone External Beam Radiation Therapy From Standard MRI Sequences

    Energy Technology Data Exchange (ETDEWEB)

    Dowling, Jason A., E-mail: jason.dowling@csiro.au [CSIRO Australian e-Health Research Centre, Herston, Queensland (Australia); University of Newcastle, Callaghan, New South Wales (Australia); Sun, Jidi [University of Newcastle, Callaghan, New South Wales (Australia); Pichler, Peter [Calvary Mater Newcastle Hospital, Waratah, New South Wales (Australia); Rivest-Hénault, David; Ghose, Soumya [CSIRO Australian e-Health Research Centre, Herston, Queensland (Australia); Richardson, Haylea [Calvary Mater Newcastle Hospital, Waratah, New South Wales (Australia); Wratten, Chris; Martin, Jarad [University of Newcastle, Callaghan, New South Wales (Australia); Calvary Mater Newcastle Hospital, Waratah, New South Wales (Australia); Arm, Jameen [Calvary Mater Newcastle Hospital, Waratah, New South Wales (Australia); Best, Leah [Department of Radiology, Hunter New England Health, New Lambton, New South Wales (Australia); Chandra, Shekhar S. [School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland (Australia); Fripp, Jurgen [CSIRO Australian e-Health Research Centre, Herston, Queensland (Australia); Menk, Frederick W. [University of Newcastle, Callaghan, New South Wales (Australia); Greer, Peter B. [University of Newcastle, Callaghan, New South Wales (Australia); Calvary Mater Newcastle Hospital, Waratah, New South Wales (Australia)

    2015-12-01

    Purpose: To validate automatic substitute computed tomography CT (sCT) scans generated from standard T2-weighted (T2w) magnetic resonance (MR) pelvic scans for MR-Sim prostate treatment planning. Patients and Methods: A Siemens Skyra 3T MR imaging (MRI) scanner with laser bridge, flat couch, and pelvic coil mounts was used to scan 39 patients scheduled for external beam radiation therapy for localized prostate cancer. For sCT generation a whole-pelvis MRI scan (1.6 mm 3-dimensional isotropic T2w SPACE [Sampling Perfection with Application optimized Contrasts using different flip angle Evolution] sequence) was acquired. Three additional small field of view scans were acquired: T2w, T2*w, and T1w flip angle 80° for gold fiducials. Patients received a routine planning CT scan. Manual contouring of the prostate, rectum, bladder, and bones was performed independently on the CT and MR scans. Three experienced observers contoured each organ on MRI, allowing interobserver quantification. To generate a training database, each patient CT scan was coregistered to their whole-pelvis T2w using symmetric rigid registration and structure-guided deformable registration. A new multi-atlas local weighted voting method was used to generate automatic contours and sCT results. Results: The mean error in Hounsfield units between the sCT and corresponding patient CT (within the body contour) was 0.6 ± 14.7 (mean ± 1 SD), with a mean absolute error of 40.5 ± 8.2 Hounsfield units. Automatic contouring results were very close to the expert interobserver level (Dice similarity coefficient): prostate 0.80 ± 0.08, bladder 0.86 ± 0.12, rectum 0.84 ± 0.06, bones 0.91 ± 0.03, and body 1.00 ± 0.003. The change in monitor units between the sCT-based plans relative to the gold standard CT plan for the same dose prescription was found to be 0.3% ± 0.8%. The 3-dimensional γ pass rate was 1.00 ± 0.00 (2 mm/2%). Conclusions: The MR-Sim setup and automatic s
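The contouring accuracy above is reported as the Dice similarity coefficient, the standard overlap measure DSC = 2|A∩B| / (|A| + |B|) on binary masks. A minimal sketch of that metric (not the study's code; the masks below are made-up toy data):

```python
# Sketch: Dice similarity coefficient between two binary segmentation masks.
import numpy as np

def dice(a, b):
    """DSC = 2 * |A intersect B| / (|A| + |B|); 1.0 for identical masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0                               # both masks empty
    return 2.0 * np.logical_and(a, b).sum() / denom

manual = np.zeros((10, 10)); manual[2:8, 2:8] = 1   # 36 "voxels"
auto   = np.zeros((10, 10)); auto[3:8, 2:8]  = 1    # 30 voxels, all inside
# overlap = 30, so DSC = 2*30 / (36 + 30) = 60/66, about 0.91
```

Values in the 0.8-0.9 range, as reported for the prostate and rectum above, therefore indicate substantial but not perfect voxel-level agreement with the expert contours.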

  7. Automatic Substitute Computed Tomography Generation and Contouring for Magnetic Resonance Imaging (MRI)-Alone External Beam Radiation Therapy From Standard MRI Sequences

    International Nuclear Information System (INIS)

    Dowling, Jason A.; Sun, Jidi; Pichler, Peter; Rivest-Hénault, David; Ghose, Soumya; Richardson, Haylea; Wratten, Chris; Martin, Jarad; Arm, Jameen; Best, Leah; Chandra, Shekhar S.; Fripp, Jurgen; Menk, Frederick W.; Greer, Peter B.

    2015-01-01

    Purpose: To validate automatic substitute computed tomography CT (sCT) scans generated from standard T2-weighted (T2w) magnetic resonance (MR) pelvic scans for MR-Sim prostate treatment planning. Patients and Methods: A Siemens Skyra 3T MR imaging (MRI) scanner with laser bridge, flat couch, and pelvic coil mounts was used to scan 39 patients scheduled for external beam radiation therapy for localized prostate cancer. For sCT generation a whole-pelvis MRI scan (1.6 mm 3-dimensional isotropic T2w SPACE [Sampling Perfection with Application optimized Contrasts using different flip angle Evolution] sequence) was acquired. Three additional small field of view scans were acquired: T2w, T2*w, and T1w flip angle 80° for gold fiducials. Patients received a routine planning CT scan. Manual contouring of the prostate, rectum, bladder, and bones was performed independently on the CT and MR scans. Three experienced observers contoured each organ on MRI, allowing interobserver quantification. To generate a training database, each patient CT scan was coregistered to their whole-pelvis T2w using symmetric rigid registration and structure-guided deformable registration. A new multi-atlas local weighted voting method was used to generate automatic contours and sCT results. Results: The mean error in Hounsfield units between the sCT and corresponding patient CT (within the body contour) was 0.6 ± 14.7 (mean ± 1 SD), with a mean absolute error of 40.5 ± 8.2 Hounsfield units. Automatic contouring results were very close to the expert interobserver level (Dice similarity coefficient): prostate 0.80 ± 0.08, bladder 0.86 ± 0.12, rectum 0.84 ± 0.06, bones 0.91 ± 0.03, and body 1.00 ± 0.003. The change in monitor units between the sCT-based plans relative to the gold standard CT plan for the same dose prescription was found to be 0.3% ± 0.8%. The 3-dimensional γ pass rate was 1.00 ± 0.00 (2 mm/2%). Conclusions: The MR-Sim setup and automatic s

  8. Advances in medical image computing.

    Science.gov (United States)

    Tolxdorff, T; Deserno, T M; Handels, H; Meinzer, H-P

    2009-01-01

    Medical image computing has become a key technology in high-tech applications in medicine and a ubiquitous part of modern imaging systems and the related processes of clinical diagnosis and intervention. Over the past years significant progress has been made in the field, both on the methodological and on the application level. Despite this progress, there are still big challenges to meet in order to establish image processing routinely in health care. In this issue, selected contributions of the German Conference on Medical Image Processing (BVM) are assembled to present the latest advances in the field of medical image computing. The winners of the scientific awards of the German Conference on Medical Image Processing (BVM) 2008 were invited to submit a manuscript on their latest developments and results for possible publication in Methods of Information in Medicine. Finally, seven excellent papers were selected to describe important aspects of recent advances in the field of medical image processing. The selected papers give an impression of the breadth and heterogeneity of new developments. New methods for improved image segmentation, non-linear image registration and modeling of organs are presented together with applications of image analysis methods in different medical disciplines. Furthermore, state-of-the-art tools and techniques to support the development and evaluation of medical image processing systems in practice are described. The selected articles describe different aspects of the intense development in medical image computing. The image processing methods presented enable new insights into the patient's image data and have the future potential to improve medical diagnostics and patient treatment.

  9. Computational Intelligence in Image Processing

    CERN Document Server

    Siarry, Patrick

    2013-01-01

    Computational intelligence based techniques have firmly established themselves as viable alternative mathematical tools for more than a decade. They have been extensively employed in many systems and application domains, among these signal processing, automatic control, industrial and consumer electronics, robotics, finance, manufacturing systems, electric power systems, and power electronics. Image processing is also an extremely potent area which has attracted the attention of many researchers who are interested in the development of new computational intelligence-based techniques and their suitable applications, in both research problems and in real-world problems. Part I of the book discusses several image preprocessing algorithms; Part II broadly covers image compression algorithms; Part III demonstrates how computational intelligence-based techniques can be effectively utilized for image analysis purposes; and Part IV shows how pattern recognition, classification and clustering-based techniques can ...

  10. Computational Ghost Imaging for Remote Sensing

    Science.gov (United States)

    Erkmen, Baris I.

    2012-01-01

    This work relates to the generic problem of remote active imaging; that is, a source illuminates a target of interest and a receiver collects the scattered light off the target to obtain an image. Conventional imaging systems consist of an imaging lens and a high-resolution detector array [e.g., a CCD (charge coupled device) array] to register the image. However, conventional imaging systems for remote sensing require high-quality optics and need to support large detector arrays and associated electronics. This results in suboptimal size, weight, and power consumption. Computational ghost imaging (CGI) is a computational alternative to this traditional imaging concept that has a very simple receiver structure. In CGI, the transmitter illuminates the target with a modulated light source. A single-pixel (bucket) detector collects the scattered light. Then, via computation (i.e., postprocessing), the receiver can reconstruct the image using the knowledge of the modulation that was projected onto the target by the transmitter. This way, one can construct a very simple receiver that, in principle, requires no lens to image a target. Ghost imaging is a transverse imaging modality that has been receiving much attention owing to a rich interconnection of novel physical characteristics and novel signal processing algorithms suitable for active computational imaging. The original ghost imaging experiments consisted of two correlated optical beams traversing distinct paths and impinging on two spatially-separated photodetectors: one beam interacts with the target and then illuminates on a single-pixel (bucket) detector that provides no spatial resolution, whereas the other beam traverses an independent path and impinges on a high-resolution camera without any interaction with the target. 
The term ghost imaging was coined soon after the initial experiments were reported, to emphasize the fact that by cross-correlating two photocurrents, one generates an image of the target. In
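The CGI reconstruction described above, cross-correlating the bucket photocurrent with the known modulation patterns, can be sketched in a few lines (an illustrative toy with a simulated scene, not the cited system):

```python
# Sketch of computational ghost imaging: illuminate with known random
# patterns, record only the total ("bucket") intensity per pattern, then
# reconstruct via the correlation estimate G(x) = <(b - <b>) P(x)>.
import numpy as np

rng = np.random.default_rng(0)
N = 16
scene = np.zeros((N, N)); scene[4:12, 6:10] = 1.0    # "unknown" target

M = 20000                                            # number of patterns
patterns = rng.random((M, N, N))                     # known modulations
buckets = (patterns * scene).sum(axis=(1, 2))        # single-pixel detector

recon = np.tensordot(buckets - buckets.mean(), patterns, axes=(0, 0)) / M
recon -= recon.min()
recon /= recon.max()                                 # normalise to [0, 1]
```

Because the receiver never forms an image optically, only the known patterns and the scalar bucket values are needed, which is exactly what makes the lensless single-pixel receiver possible.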

  11. Quantitative skeletal maturation estimation using cone-beam computed tomography-generated cervical vertebral images: a pilot study in 5- to 18-year-old Japanese children.

    Science.gov (United States)

    Byun, Bo-Ram; Kim, Yong-Il; Yamaguchi, Tetsutaro; Maki, Koutaro; Ko, Ching-Chang; Hwang, Dea-Seok; Park, Soo-Byung; Son, Woo-Sung

    2015-11-01

    The purpose of this study was to establish multivariable regression models for the estimation of skeletal maturation status in Japanese boys and girls using the cone-beam computed tomography (CBCT)-based cervical vertebral maturation (CVM) assessment method and hand-wrist radiography. The analyzed sample consisted of hand-wrist radiographs and CBCT images from 47 boys and 57 girls. To quantitatively evaluate the correlation between skeletal maturation status and measurement ratios, a CBCT-based CVM assessment method was applied to the second, third, and fourth cervical vertebrae. Pearson's correlation coefficient analysis and multivariable regression analysis were used to determine the ratios for each of the cervical vertebrae (p < 0.05). For the Japanese boys, the estimated skeletal maturation status according to the CBCT-based quantitative cervical vertebral maturation (QCVM) assessment was 5.90 + 99.11 × AH3/W3 - 14.88 × (OH2 + AH2)/W2 + 13.24 × D2; for the Japanese girls, it was 41.39 + 59.52 × AH3/W3 - 15.88 × (OH2 + PH2)/W2 + 10.93 × D2. The CBCT-generated CVM images proved very useful for delineating the cervical vertebral body and the odontoid process. The newly developed CBCT-based QCVM assessment method showed a high correlation between the derived ratios from the second cervical vertebral body and odontoid process. There are high correlations between skeletal maturation status and the ratios of the second cervical vertebra based on the remnant of the dentocentral synchondrosis.
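The two regression equations quoted above are straightforward to evaluate. The sketch below implements them verbatim; the variable names follow the abstract's vertebral measurement ratios (AH3/W3 etc.), and the numeric inputs are hypothetical, purely to show usage:

```python
# Sketch: the abstract's QCVM regression equations for estimating skeletal
# maturation status from CBCT cervical-vertebra measurement ratios.
def qcvm_boys(ah3_w3, oh2_ah2_w2, d2):
    """Boys: 5.90 + 99.11*(AH3/W3) - 14.88*((OH2+AH2)/W2) + 13.24*D2."""
    return 5.90 + 99.11 * ah3_w3 - 14.88 * oh2_ah2_w2 + 13.24 * d2

def qcvm_girls(ah3_w3, oh2_ph2_w2, d2):
    """Girls: 41.39 + 59.52*(AH3/W3) - 15.88*((OH2+PH2)/W2) + 10.93*D2."""
    return 41.39 + 59.52 * ah3_w3 - 15.88 * oh2_ph2_w2 + 10.93 * d2

# Hypothetical measurement ratios, not patient data:
estimate = qcvm_boys(ah3_w3=0.6, oh2_ah2_w2=1.1, d2=0.9)
```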

  12. Computer generated movies at Hanford

    International Nuclear Information System (INIS)

    Lewis, C.H.; Fox, G.L.

    1979-10-01

The message contained in the results of a large computer program is often difficult to present to large groups of people. This difficulty may be overcome by using 16mm color movie techniques. This presentation shows the results of directly using computer output to tell a story about fuel behavior during a power transient.

  13. Fast generation of computer-generated holograms using wavelet shrinkage.

    Science.gov (United States)

    Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2017-01-09

    Computer-generated holograms (CGHs) are generated by superimposing complex amplitudes emitted from a number of object points. However, this superposition process remains very time-consuming even when using the latest computers. We propose a fast calculation algorithm for CGHs that uses a wavelet shrinkage method, eliminating small wavelet coefficient values to express approximated complex amplitudes using only a few representative wavelet coefficients.
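The core idea, approximating a field with only a few large wavelet coefficients, can be sketched with a single-level Haar transform in NumPy. This is a toy 1-D illustration of wavelet shrinkage, not the authors' CGH algorithm:

```python
import numpy as np

def haar_forward(x):
    # Single-level 1-D Haar transform (x must have even length).
    avg = (x[0::2] + x[1::2]) / np.sqrt(2)
    dif = (x[0::2] - x[1::2]) / np.sqrt(2)
    return np.concatenate([avg, dif])

def haar_inverse(c):
    n = len(c) // 2
    avg, dif = c[:n], c[n:]
    x = np.empty(2 * n, dtype=c.dtype)
    x[0::2] = (avg + dif) / np.sqrt(2)
    x[1::2] = (avg - dif) / np.sqrt(2)
    return x

def shrink(coeffs, keep):
    # Zero all but the `keep` largest-magnitude coefficients.
    out = np.zeros_like(coeffs)
    idx = np.argsort(np.abs(coeffs))[-keep:]
    out[idx] = coeffs[idx]
    return out

field = np.array([1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 1.0, 0.9])
approx = haar_inverse(shrink(haar_forward(field), keep=4))
```

Keeping half the coefficients reproduces the field to within a few percent here; the speed-up in the paper comes from superposing only the surviving coefficients instead of every object point.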

  14. Computer model for harmonic ultrasound imaging.

    Science.gov (United States)

    Li, Y; Zagzebski, J A

    2000-01-01

    Harmonic ultrasound imaging has received great attention from ultrasound scanner manufacturers and researchers. In this paper, we present a computer model that can generate realistic harmonic images. In this model, the incident ultrasound is modeled after the "KZK" equation, and the echo signal is modeled using linear propagation theory because the echo signal is much weaker than the incident pulse. Both time domain and frequency domain numerical solutions to the "KZK" equation were studied. Realistic harmonic images of spherical lesion phantoms were generated for scans by a circular transducer. This model can be a very useful tool for studying the harmonic buildup and dissipation processes in a nonlinear medium, and it can be used to investigate a wide variety of topics related to B-mode harmonic imaging.

  15. Computational multispectral video imaging [Invited].

    Science.gov (United States)

    Wang, Peng; Menon, Rajesh

    2018-01-01

    Multispectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information to a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrated spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
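The "regularization-based linear algebra" inversion step can be illustrated with a ridge (Tikhonov) inverse: given a calibrated sensing matrix A that maps a spectrum to the spatial code on the sensor, the spectrum is recovered from the coded measurement b. The matrix and data below are synthetic stand-ins, not the authors' calibration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(64, 16))                  # calibration: spectrum -> sensor code
x_true = rng.uniform(size=16)                  # unknown spectral intensities
b = A @ x_true + 0.01 * rng.normal(size=64)    # coded, slightly noisy measurement

lam = 1e-3  # regularization weight
# Tikhonov / ridge solution: x = (A^T A + lam * I)^(-1) A^T b
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(16), A.T @ b)
```

The regularizer stabilizes the inverse when A is ill-conditioned, at the cost of a small bias in the recovered spectrum.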

  16. Cardiac dosimetric evaluation of deep inspiration breath-hold level variances using computed tomography scans generated from deformable image registration displacement vectors

    International Nuclear Information System (INIS)

    Harry, Taylor; Rahn, Doug; Semenov, Denis; Gu, Xuejun; Yashar, Catheryn; Einck, John; Jiang, Steve; Cerviño, Laura

    2016-01-01

There is a reduction in cardiac dose for left-sided breast radiotherapy during treatment with deep inspiration breath-hold (DIBH) when compared with treatment with free breathing (FB). Various levels of DIBH may occur for different treatment fractions. Dosimetric effects due to this and other motions are a major component of uncertainty in radiotherapy in this setting. Recent developments in deformable registration techniques allow displacement vectors between various temporal and spatial patient representations to be digitally quantified. We propose a method to evaluate the dosimetric effect to the heart from variable reproducibility of DIBH by using deformable registration to create new anatomical computed tomography (CT) scans. From deformable registration, 3-dimensional deformation vectors are generated with FB and DIBH. The obtained deformation vectors are scaled to 75%, 90%, and 110% and are applied to the reference image to create new CT scans at these inspirational levels. The scans are then imported into the treatment planning system and dose calculations are performed. The average mean dose to the heart was 2.5 Gy (0.7 to 9.6 Gy) at FB, 1.2 Gy (0.6 to 3.8 Gy, p < 0.001) at 75% inspiration, 1.1 Gy (0.6 to 3.1 Gy, p = 0.004) at 90% inspiration, 1.0 Gy (0.6 to 3.0 Gy) at 100% inspiration or DIBH, and 1.0 Gy (0.6 to 2.8 Gy, p = 0.019) at 110% inspiration. The average mean dose to the left anterior descending artery (LAD) was 19.9 Gy (2.4 to 46.4 Gy), 8.6 Gy (2.0 to 43.8 Gy, p < 0.001), 7.2 Gy (1.9 to 40.1 Gy, p = 0.035), 6.5 Gy (1.8 to 34.7 Gy), and 5.3 Gy (1.5 to 31.5 Gy, p < 0.001), respectively. This novel method enables numerous anatomical situations to be mimicked and quantifies the dosimetric effect they have on a treatment plan.
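Generating the intermediate-inspiration CTs amounts to scaling a deformation field before applying it. A minimal 1-D NumPy sketch of that idea follows, with a synthetic intensity profile and displacement field and nearest-neighbour warping; the clinical pipeline uses full 3-D deformable registration and interpolation:

```python
import numpy as np

def warp(image, displacement):
    # Pull-back warp: each output voxel samples the input at (index - displacement),
    # with nearest-neighbour interpolation and clamping at the edges.
    n = len(image)
    src = np.clip(np.round(np.arange(n) - displacement).astype(int), 0, n - 1)
    return image[src]

profile_fb = np.array([0, 0, 1, 5, 5, 1, 0, 0], dtype=float)  # free-breathing "anatomy"
dvf = np.array([0, 0, 1, 1, 1, 1, 0, 0], dtype=float)          # FB -> DIBH displacement

# Scale the displacement vectors to mimic partial breath-hold levels.
for level in (0.75, 0.90, 1.10):
    partial = warp(profile_fb, level * dvf)
```

Each scaled field yields a new synthetic scan on which dose can then be recomputed.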

  17. Cardiac dosimetric evaluation of deep inspiration breath-hold level variances using computed tomography scans generated from deformable image registration displacement vectors

    Energy Technology Data Exchange (ETDEWEB)

    Harry, Taylor [Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, CA (United States); Department of Radiation Medicine, Oregon Health and Science University, Portland, OR (United States); Department of Nuclear Engineering and Radiation Health Physics, Oregon State University, Corvallis, OR (United States); Rahn, Doug; Semenov, Denis [Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, CA (United States); Gu, Xuejun [Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX (United States); Yashar, Catheryn; Einck, John [Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, CA (United States); Jiang, Steve [Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX (United States); Cerviño, Laura, E-mail: lcervino@ucsd.edu [Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, CA (United States)

    2016-04-01

There is a reduction in cardiac dose for left-sided breast radiotherapy during treatment with deep inspiration breath-hold (DIBH) when compared with treatment with free breathing (FB). Various levels of DIBH may occur for different treatment fractions. Dosimetric effects due to this and other motions are a major component of uncertainty in radiotherapy in this setting. Recent developments in deformable registration techniques allow displacement vectors between various temporal and spatial patient representations to be digitally quantified. We propose a method to evaluate the dosimetric effect to the heart from variable reproducibility of DIBH by using deformable registration to create new anatomical computed tomography (CT) scans. From deformable registration, 3-dimensional deformation vectors are generated with FB and DIBH. The obtained deformation vectors are scaled to 75%, 90%, and 110% and are applied to the reference image to create new CT scans at these inspirational levels. The scans are then imported into the treatment planning system and dose calculations are performed. The average mean dose to the heart was 2.5 Gy (0.7 to 9.6 Gy) at FB, 1.2 Gy (0.6 to 3.8 Gy, p < 0.001) at 75% inspiration, 1.1 Gy (0.6 to 3.1 Gy, p = 0.004) at 90% inspiration, 1.0 Gy (0.6 to 3.0 Gy) at 100% inspiration or DIBH, and 1.0 Gy (0.6 to 2.8 Gy, p = 0.019) at 110% inspiration. The average mean dose to the left anterior descending artery (LAD) was 19.9 Gy (2.4 to 46.4 Gy), 8.6 Gy (2.0 to 43.8 Gy, p < 0.001), 7.2 Gy (1.9 to 40.1 Gy, p = 0.035), 6.5 Gy (1.8 to 34.7 Gy), and 5.3 Gy (1.5 to 31.5 Gy, p < 0.001), respectively. This novel method enables numerous anatomical situations to be mimicked and quantifies the dosimetric effect they have on a treatment plan.

  18. Cloud computing in medical imaging.

    Science.gov (United States)

    Kagadis, George C; Kloukinas, Christos; Moore, Kevin; Philbin, Jim; Papadimitroulas, Panagiotis; Alexakos, Christos; Nagy, Paul G; Visvikis, Dimitris; Hendee, William R

    2013-07-01

    Over the past century technology has played a decisive role in defining, driving, and reinventing procedures, devices, and pharmaceuticals in healthcare. Cloud computing has been introduced only recently but is already one of the major topics of discussion in research and clinical settings. The provision of extensive, easily accessible, and reconfigurable resources such as virtual systems, platforms, and applications with low service cost has caught the attention of many researchers and clinicians. Healthcare researchers are moving their efforts to the cloud, because they need adequate resources to process, store, exchange, and use large quantities of medical data. This Vision 20/20 paper addresses major questions related to the applicability of advanced cloud computing in medical imaging. The paper also considers security and ethical issues that accompany cloud computing.

  19. Computer applications in diagnostic imaging.

    Science.gov (United States)

    Horii, S C

    1991-03-01

This article has introduced the nature, generation, use, and future of digital imaging. As digital technology has transformed other aspects of our lives--has the reader tried to buy a conventional record album recently? Almost all music store stock is now compact disks--it is sure to continue to transform medicine as well. Whether that transformation will be to our liking as physicians or a source of frustration and disappointment depends on understanding the issues involved.

  20. Color evaluation of computer-generated color rainbow holography

    International Nuclear Information System (INIS)

    Shi, Yile; Wang, Hui; Wu, Qiong

    2013-01-01

    A color evaluation approach for computer-generated color rainbow holography (CGCRH) is presented. Firstly, the relationship between color quantities of a computer display and a color computer-generated holography (CCGH) colorimetric system is discussed based on color matching theory. An isochromatic transfer relationship of color quantity and amplitude of object light field is proposed. Secondly, the color reproduction mechanism and factors leading to the color difference between the color object and the holographic image that is reconstructed by CGCRH are analyzed in detail. A quantitative color calculation method for the holographic image reconstructed by CGCRH is given. Finally, general color samples are selected as numerical calculation test targets and the color differences between holographic images and test targets are calculated based on our proposed method. (paper)
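The paper's own color-difference formula is not reproduced here; the standard CIE76 ΔE metric in CIELAB space illustrates the kind of quantitative comparison between a holographic image and its test target. The CIELAB values below are hypothetical:

```python
import math

def delta_e_cie76(lab1, lab2):
    """Euclidean color difference between two CIELAB colors (CIE76 formula)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

target = (52.0, 42.5, 20.1)         # hypothetical CIELAB value of a color test target
reconstructed = (50.0, 40.0, 21.0)  # hypothetical value measured from the holographic image
dE = delta_e_cie76(target, reconstructed)
```

A ΔE near 1 is roughly a just-noticeable difference, so the metric gives the evaluation a perceptual anchor.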

  1. Computational morphology of the lung and its virtual imaging

    International Nuclear Information System (INIS)

    Kitaoka, Hiroko

    2002-01-01

The author proposes an entirely new approach called 'virtual imaging' of an organ, based on 'computational morphology'. Computational morphology mathematically describes the design principles of an organ's structure in order to generate an organ model via computer, which can be called a virtual organ. Virtual imaging simulates image data using the virtual organ. The virtual organ is divided into cubic voxels, and the CT value or other intensity value for each voxel is calculated according to the tissue properties within the voxel. The validity of the model is examined by comparing virtual images with clinical images. Computational image analysis methods can be developed based on validated models. In this paper, the computational anatomy of the lung and its virtual X-ray imaging are introduced

  2. Processing computed tomography images by using personal computer

    International Nuclear Information System (INIS)

    Seto, Kazuhiko; Fujishiro, Kazuo; Seki, Hirofumi; Yamamoto, Tetsuo.

    1994-01-01

Processing of CT images was attempted using a popular personal computer. The image-processing program was written with a C compiler. The original images, acquired with a CT scanner (TCT-60A, Toshiba), were transferred to the computer on an 8-inch flexible diskette. Many fundamental image-processing operations were implemented, such as displaying the image on the monitor, calculating CT values and drawing profile curves. The results showed that a popular personal computer has the ability to process CT images, and that the 8-inch flexible diskette was still a useful medium for transferring image data. (author)

  3. Computer-generated diagram of an LHC dipole

    CERN Multimedia

    AC Team

    1998-01-01

    This computer-generated image of an LHC dipole magnet shows some of the parts vital for the operation of these components. The magnets must be cooled to 1.9 K (less than –270.3°C) so that the superconducting coils can produce the required 8 T magnetic field strength.

  4. System Matrix Analysis for Computed Tomography Imaging

    Science.gov (United States)

    Flores, Liubov; Vidal, Vicent; Verdú, Gumersindo

    2015-01-01

    In practical applications of computed tomography imaging (CT), it is often the case that the set of projection data is incomplete owing to the physical conditions of the data acquisition process. On the other hand, the high radiation dose imposed on patients is also undesired. These issues demand that high quality CT images can be reconstructed from limited projection data. For this reason, iterative methods of image reconstruction have become a topic of increased research interest. Several algorithms have been proposed for few-view CT. We consider that the accurate solution of the reconstruction problem also depends on the system matrix that simulates the scanning process. In this work, we analyze the application of the Siddon method to generate elements of the matrix and we present results based on real projection data. PMID:26575482
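The Siddon method mentioned above computes each system-matrix element as the intersection length of a ray with a voxel. A compact 2-D NumPy sketch of the idea (unit cells, exhaustive grid-line crossings rather than Siddon's incremental update, so a simplification of the real algorithm):

```python
import numpy as np

def siddon_2d(p0, p1, nx, ny):
    """Intersection lengths of the ray p0->p1 with an nx x ny grid of
    unit cells spanning [0, nx] x [0, ny] (Siddon-style weights)."""
    p0 = np.asarray(p0, dtype=float)
    p1 = np.asarray(p1, dtype=float)
    d = p1 - p0
    # Parametric values where the ray crosses grid lines of each axis.
    alphas = [0.0, 1.0]
    for axis, n in ((0, nx), (1, ny)):
        if d[axis] != 0.0:
            for k in range(n + 1):
                a = (k - p0[axis]) / d[axis]
                if 0.0 < a < 1.0:
                    alphas.append(a)
    alphas = np.unique(np.clip(alphas, 0.0, 1.0))
    length = np.hypot(d[0], d[1])
    weights = {}
    # Consecutive alphas bound one cell crossing each; the segment midpoint
    # identifies which cell the segment lies in.
    for a0, a1 in zip(alphas[:-1], alphas[1:]):
        mid = p0 + 0.5 * (a0 + a1) * d
        i, j = int(mid[0]), int(mid[1])
        if 0 <= i < nx and 0 <= j < ny:
            weights[(i, j)] = weights.get((i, j), 0.0) + (a1 - a0) * length
    return weights
```

One such weight dictionary per detector ray, stacked row by row, is exactly the sparse system matrix the iterative reconstruction methods operate on.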

  5. Image-Based Geometric Modeling and Mesh Generation

    CERN Document Server

    2013-01-01

    As a new interdisciplinary research area, “image-based geometric modeling and mesh generation” integrates image processing, geometric modeling and mesh generation with finite element method (FEM) to solve problems in computational biomedicine, materials sciences and engineering. It is well known that FEM is currently well-developed and efficient, but mesh generation for complex geometries (e.g., the human body) still takes about 80% of the total analysis time and is the major obstacle to reduce the total computation time. It is mainly because none of the traditional approaches is sufficient to effectively construct finite element meshes for arbitrarily complicated domains, and generally a great deal of manual interaction is involved in mesh generation. This contributed volume, the first for such an interdisciplinary topic, collects the latest research by experts in this area. These papers cover a broad range of topics, including medical imaging, image alignment and segmentation, image-to-mesh conversion,...

  6. Image processing with personal computer

    International Nuclear Information System (INIS)

    Hara, Hiroshi; Handa, Madoka; Watanabe, Yoshihiko

    1990-01-01

A method of automating judgement work on photographs in radiation nondestructive inspection, using a simple commercial image processor, was examined. Trial software for defect extraction and binarization and for automatic judgement was written, and its accuracy and problem points were tested against various photographs that had already been judged visually. Depending on the state of the photographed objects and the inspection conditions, judgement accuracies from 100% down to 45% were obtained. The criteria for judgement conformed to the collection of reference photographs produced by the Japan Cast Steel Association. In non-destructive inspection by radiography, the number and size of defect images in photographs are judged visually, the results are collated with the standard, and the quality is decided. Recently, image-processing technology on personal computers has advanced, so automating the judgement of photographs was attempted in order to improve accuracy, increase inspection efficiency and save labor. (K.I.)

  7. An investigation of the potential of optical computed tomography for imaging of synchrotron-generated x-rays at high spatial resolution

    International Nuclear Information System (INIS)

    Doran, Simon J; Brochard, Thierry; Braeuer-Krisch, Elke; Adamovics, John; Krstajic, Nikola

    2010-01-01

X-ray microbeam radiation therapy (MRT) is a novel form of treatment, currently in its preclinical stage, which uses microplanar x-ray beams from a synchrotron radiation source. It is important to perform accurate dosimetry on these microbeams, but, to date, there has been no sufficiently accurate method for making 3D dose measurements with isotropic, high spatial resolution to verify the results of Monte Carlo dose simulations. Here, we investigate the potential of optical computed tomography for satisfying these requirements. The construction of a simple optical CT microscopy (optical projection tomography) system from standard commercially available hardware is described. The measurement of optical densities in projection data is shown to be highly linear (r² = 0.999). The depth-of-field (DOF) of the imaging system is calculated based on the previous literature and measured experimentally using a commercial DOF target. It is shown that high quality images can be acquired despite the evident lack of telecentricity and despite the DOF of the system being much lower than the sample diameter. Possible reasons for this are discussed. Results are presented for a complex irradiation of a 22 mm diameter cylinder of the radiochromic polymer PRESAGE(TM), demonstrating the exquisite 'dose-painting' abilities available in the MRT hutch of beamline ID-17 at the European Synchrotron Radiation Facility. Dose distributions in this initial experiment are equally well resolved on both an optical CT scan and a corresponding transmission image of radiochromic film, down to a line width of 83 μm (6 lp mm⁻¹) with an MTF value of 0.40. A group of 33 μm wide lines was poorly resolved on both the optical CT and film images, and this is attributed to an incorrect exposure time calculation, leading to under-delivery of dose. Image artefacts in the optical CT scan are discussed. PRESAGE(TM) irradiated using the microbeam facility is proposed as a suitable material for producing

  8. New coding technique for computer generated holograms.

    Science.gov (United States)

    Haskell, R. E.; Culver, B. C.

    1972-01-01

    A coding technique is developed for recording computer generated holograms on a computer controlled CRT in which each resolution cell contains two beam spots of equal size and equal intensity. This provides a binary hologram in which only the position of the two dots is varied from cell to cell. The amplitude associated with each resolution cell is controlled by selectively diffracting unwanted light into a higher diffraction order. The recording of the holograms is fast and simple.

  9. Metasurface optics for full-color computational imaging.

    Science.gov (United States)

    Colburn, Shane; Zhan, Alan; Majumdar, Arka

    2018-02-01

    Conventional imaging systems comprise large and expensive optical components that successively mitigate aberrations. Metasurface optics offers a route to miniaturize imaging systems by replacing bulky components with flat and compact implementations. The diffractive nature of these devices, however, induces severe chromatic aberrations, and current multiwavelength and narrowband achromatic metasurfaces cannot support full visible spectrum imaging (400 to 700 nm). We combine principles of both computational imaging and metasurface optics to build a system with a single metalens of numerical aperture ~0.45, which generates in-focus images under white light illumination. Our metalens exhibits a spectrally invariant point spread function that enables computational reconstruction of captured images with a single digital filter. This work connects computational imaging and metasurface optics and demonstrates the capabilities of combining these disciplines by simultaneously reducing aberrations and downsizing imaging systems using simpler optics.
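Because the point spread function is spectrally invariant, every color channel can be restored with the same single digital filter. A minimal Wiener-deconvolution sketch in NumPy, with a synthetic Gaussian PSF and scene standing in for the metalens system:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
scene = rng.uniform(size=(n, n))

# Synthetic shift-invariant PSF (Gaussian blob), shared by all wavelengths.
y, x = np.mgrid[:n, :n]
psf = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / 8.0)
psf /= psf.sum()

H = np.fft.fft2(np.fft.ifftshift(psf))                  # optical transfer function
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))  # image formed on the sensor

# One Wiener filter, applied in the Fourier domain, restores the capture.
nsr = 1e-3                                 # assumed noise-to-signal ratio
W = np.conj(H) / (np.abs(H) ** 2 + nsr)
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))
```

The spectral invariance is what lets a single W serve all wavelengths; a chromatically varying PSF would require one filter per band.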

  10. Computational methods in molecular imaging technologies

    CERN Document Server

    Gunjan, Vinit Kumar; Venkatesh, C; Amarnath, M

    2017-01-01

    This book highlights the experimental investigations that have been carried out on magnetic resonance imaging and computed tomography (MRI & CT) images using state-of-the-art Computational Image processing techniques, and tabulates the statistical values wherever necessary. In a very simple and straightforward way, it explains how image processing methods are used to improve the quality of medical images and facilitate analysis. It offers a valuable resource for researchers, engineers, medical doctors and bioinformatics experts alike.

  11. Optical Interconnection Via Computer-Generated Holograms

    Science.gov (United States)

    Liu, Hua-Kuang; Zhou, Shaomin

    1995-01-01

    Method of free-space optical interconnection developed for data-processing applications like parallel optical computing, neural-network computing, and switching in optical communication networks. In method, multiple optical connections between multiple sources of light in one array and multiple photodetectors in another array made via computer-generated holograms in electrically addressed spatial light modulators (ESLMs). Offers potential advantages of massive parallelism, high space-bandwidth product, high time-bandwidth product, low power consumption, low cross talk, and low time skew. Also offers advantage of programmability with flexibility of reconfiguration, including variation of strengths of optical connections in real time.

  12. Prior image constrained image reconstruction in emerging computed tomography applications

    Science.gov (United States)

    Brunner, Stephen T.

    Advances have been made in computed tomography (CT), especially in the past five years, by incorporating prior images into the image reconstruction process. In this dissertation, we investigate prior image constrained image reconstruction in three emerging CT applications: dual-energy CT, multi-energy photon-counting CT, and cone-beam CT in image-guided radiation therapy. First, we investigate the application of Prior Image Constrained Compressed Sensing (PICCS) in dual-energy CT, which has been called "one of the hottest research areas in CT." Phantom and animal studies are conducted using a state-of-the-art 64-slice GE Discovery 750 HD CT scanner to investigate the extent to which PICCS can enable radiation dose reduction in material density and virtual monochromatic imaging. Second, we extend the application of PICCS from dual-energy CT to multi-energy photon-counting CT, which has been called "one of the 12 topics in CT to be critical in the next decade." Numerical simulations are conducted to generate multiple energy bin images for a photon-counting CT acquisition and to investigate the extent to which PICCS can enable radiation dose efficiency improvement. Third, we investigate the performance of a newly proposed prior image constrained scatter correction technique to correct scatter-induced shading artifacts in cone-beam CT, which, when used in image-guided radiation therapy procedures, can assist in patient localization, and potentially, dose verification and adaptive radiation therapy. Phantom studies are conducted using a Varian 2100 EX system with an on-board imager to investigate the extent to which the prior image constrained scatter correction technique can mitigate scatter-induced shading artifacts in cone-beam CT. 
Results show that these prior image constrained image reconstruction techniques can reduce radiation dose in dual-energy CT by 50% in phantom and animal studies in material density and virtual monochromatic imaging, can lead to radiation
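PICCS reconstructs an image x by balancing sparsity of its difference from a prior image against sparsity of the image itself. A toy NumPy evaluation of a PICCS-style objective, using anisotropic total variation as the sparsifying transform and synthetic images (the full method additionally enforces consistency with the projection data):

```python
import numpy as np

def total_variation(img):
    # Anisotropic total variation: sum of absolute finite differences.
    return (np.abs(np.diff(img, axis=0)).sum()
            + np.abs(np.diff(img, axis=1)).sum())

def piccs_objective(x, x_prior, alpha):
    # alpha weights the prior-image term against the plain compressed-sensing term.
    return alpha * total_variation(x - x_prior) + (1 - alpha) * total_variation(x)

prior = np.zeros((8, 8)); prior[2:6, 2:6] = 1.0     # prior image
current = prior.copy(); current[4, 4] = 1.2          # slight anatomical change
cost = piccs_objective(current, prior, alpha=0.5)
```

When the current image closely matches the prior, the first term is nearly zero, which is what allows accurate reconstruction from far fewer (or noisier) projections.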

  13. Computers are stepping stones to improved imaging.

    Science.gov (United States)

    Freiherr, G

    1991-02-01

    Never before has the radiology industry embraced the computer with such enthusiasm. Graphics supercomputers as well as UNIX- and RISC-based computing platforms are turning up in every digital imaging modality and especially in systems designed to enhance and transmit images, says author Greg Freiherr on assignment for Computers in Healthcare at the Radiological Society of North America conference in Chicago.

  14. Introduction to computer image processing

    Science.gov (United States)

    Moik, J. G.

    1973-01-01

Theoretical backgrounds and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, and mathematical operations on images and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.

  15. Image Visual Realism: From Human Perception to Machine Computation.

    Science.gov (United States)

    Fan, Shaojing; Ng, Tian-Tsong; Koenig, Bryan L; Herberg, Jonathan S; Jiang, Ming; Shen, Zhiqi; Zhao, Qi

    2017-08-30

    Visual realism is defined as the extent to which an image appears to people as a photo rather than computer generated. Assessing visual realism is important in applications like computer graphics rendering and photo retouching. However, current realism evaluation approaches use either labor-intensive human judgments or automated algorithms largely dependent on comparing renderings to reference images. We develop a reference-free computational framework for visual realism prediction to overcome these constraints. First, we construct a benchmark dataset of 2520 images with comprehensive human annotated attributes. From statistical modeling on this data, we identify image attributes most relevant for visual realism. We propose both empirically-based (guided by our statistical modeling of human data) and CNN-learned features to predict visual realism of images. Our framework has the following advantages: (1) it creates an interpretable and concise empirical model that characterizes human perception of visual realism; (2) it links computational features to latent factors of human image perception.

  16. Fast calculation method for computer-generated cylindrical holograms.

    Science.gov (United States)

    Yamaguchi, Takeshi; Fujii, Tomohiko; Yoshikawa, Hiroshi

    2008-07-01

Since a general flat hologram has a limited viewable area, we usually cannot see the other side of a reconstructed object. There are some holograms that can solve this problem. A cylindrical hologram is well known to be viewable over 360 deg. Most cylindrical holograms are optical holograms, and there are few reports of computer-generated cylindrical holograms, because the spatial resolution of output devices is not high enough; we therefore have to make a large hologram or use a small object to fulfill the sampling theorem. In addition, in calculating such a large fringe pattern, the calculation amount increases in proportion to the hologram size. Therefore, we propose what we believe to be a new method for fast calculation. We then print these fringes with our prototype fringe printer. As a result, we obtain a good reconstructed image from a computer-generated cylindrical hologram.

  17. Computer generated movies to display biotelemetry data

    International Nuclear Information System (INIS)

    White, G.C.

    1979-01-01

The three dimensional nature of biotelemetry data (x, y, time) makes them difficult to comprehend. Graphic displays provide a means of extracting information and analyzing biotelemetry data. The extensive computer graphics facilities at Los Alamos Scientific Laboratory have been utilized to analyze elk biotelemetry data. Fixes have been taken weekly for 14 months on 14 animals. The inadequacy of still graphic displays for portraying the time dimension of these data has led to the use of computer generated movies to help grasp time relationships. A computer movie of the data from one animal demonstrates habitat use as a function of time, while a movie of two or more animals illustrates the correlations between the animals' movements. About 2 hours of computer time were required to generate the movies for each animal for 1 year of data. The cost of the movies is quite small relative to the cost of collecting the data, so computer generated movies are a reasonable method to depict biotelemetry data

  18. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  19. Optoelectronic Computer Architecture Development for Image Reconstruction

    National Research Council Canada - National Science Library

    Forber, Richard

    1996-01-01

    .... Specifically, we collaborated with UCSD and ERIM on the development of an optically augmented electronic computer for high speed inverse transform calculations to enable real time image reconstruction...

  20. Generating realistic images using Kray

    Science.gov (United States)

    Tanski, Grzegorz

    2004-07-01

Kray is an application for creating realistic images. It is written in the C++ programming language, has a text-based interface, and solves the global illumination problem using techniques such as radiosity, path tracing and photon mapping.

  1. Efficient generation of image chips for training deep learning algorithms

    Science.gov (United States)

    Han, Sanghui; Fafard, Alex; Kerekes, John; Gartley, Michael; Ientilucci, Emmett; Savakis, Andreas; Law, Charles; Parhan, Jason; Turek, Matt; Fieldhouse, Keith; Rovito, Todd

    2017-05-01

Training deep convolutional networks for satellite or aerial image analysis often requires a large amount of training data. For a more robust algorithm, training data need to have variations not only in the background and target, but also radiometric variations in the image such as shadowing, illumination changes, atmospheric conditions, and imaging platforms with different collection geometry. Data augmentation is a commonly used approach to generating additional training data. However, this approach is often insufficient in accounting for real world changes in lighting, location or viewpoint outside of the collection geometry. Alternatively, image simulation can be an efficient way to augment training data that incorporates all these variations, such as changing backgrounds, that may be encountered in real data. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is a tool that produces synthetic imagery using a suite of physics-based radiation propagation modules. DIRSIG can simulate images taken from different sensors with variation in collection geometry, spectral response, solar elevation and angle, atmospheric models, target, and background. Simulation of Urban Mobility (SUMO) is a multi-modal traffic simulation tool that explicitly models vehicles that move through a given road network. The output of the SUMO model was incorporated into DIRSIG to generate scenes with moving vehicles. The same approach was used when using helicopters as targets, but with slight modifications. Using the combination of DIRSIG and SUMO, we quickly generated many small images, with the target at the center with different backgrounds. The simulations generated images with vehicles and helicopters as targets, and corresponding images without targets. Using parallel computing, 120,000 training images were generated in about an hour.
Some preliminary results show an improvement in the deep learning algorithm when real image training data are augmented with
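DIRSIG itself is not shown in the record, but the core idea — compositing a target into varied backgrounds with illumination and noise variation to mass-produce labeled training chips — can be sketched in a few lines of NumPy. All parameters, and the bright square standing in for a vehicle, are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_chip(size=64, with_target=True):
    """Compose one training chip: random background texture, an optional
    bright square standing in for a vehicle, plus illumination scaling and
    additive sensor noise (all grossly simplified relative to DIRSIG)."""
    chip = rng.uniform(0.2, 0.6, (size, size))        # background variation
    if with_target:
        c = size // 2
        chip[c - 4:c + 4, c - 4:c + 4] += 0.5         # centered target
    chip *= rng.uniform(0.5, 1.5)                     # illumination change
    chip += rng.normal(0.0, 0.02, chip.shape)         # additive noise
    return np.clip(chip, 0.0, 1.0)

# Balanced target / no-target chips, mirroring the paper's training setup.
images = [make_chip(with_target=(i % 2 == 0)) for i in range(100)]
labels = [i % 2 == 0 for i in range(100)]
```

Because each chip is generated independently, this loop parallelizes trivially, which is how the authors reached 120,000 images in about an hour.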

  2. Noise simulation in cone beam CT imaging with parallel computing

    International Nuclear Information System (INIS)

    Tu, S.-J.; Shaw, Chris C; Chen, Lingyun

    2006-01-01

    We developed a computer noise simulation model for cone beam computed tomography imaging using a general purpose PC cluster. This model uses a mono-energetic x-ray approximation and allows us to investigate three primary performance components, specifically quantum noise, detector blurring and additive system noise. A parallel random number generator based on the Weyl sequence was implemented in the noise simulation and a visualization technique was accordingly developed to validate the quality of the parallel random number generator. In our computer simulation model, three-dimensional (3D) phantoms were mathematically modelled and used to create 450 analytical projections, which were then sampled into digital image data. Quantum noise was simulated and added to the analytical projection image data, which were then filtered to incorporate flat panel detector blurring. Additive system noise was generated and added to form the final projection images. The Feldkamp algorithm was implemented and used to reconstruct the 3D images of the phantoms. A 24 dual-Xeon PC cluster was used to compute the projections and reconstructed images in parallel with each CPU processing 10 projection views for a total of 450 views. Based on this computer simulation system, simulated cone beam CT images were generated for various phantoms and technique settings. Noise power spectra for the flat panel x-ray detector and reconstructed images were then computed to characterize the noise properties. As an example among the potential applications of our noise simulation model, we showed that images of low contrast objects can be produced and used for image quality evaluation
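The three-component noise chain described above can be sketched with NumPy on a single projection. A 3x3 box blur stands in for the flat-panel detector blurring, and the photon flux and noise parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def box_blur(img):
    """3x3 box blur as a crude stand-in for flat-panel detector blurring."""
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def noisy_projection(mu_t, n0=1e4, sys_sigma=2.0):
    """Chain the three simulated components onto an analytical projection:
    quantum (Poisson) noise, detector blurring, additive system noise."""
    expected = n0 * np.exp(-mu_t)                  # Beer-Lambert, mono-energetic
    quantum = rng.poisson(expected).astype(float)  # quantum noise
    blurred = box_blur(quantum)                    # detector blurring
    return blurred + rng.normal(0.0, sys_sigma, mu_t.shape)

# Line integrals of a simple disc phantom as the analytical projection.
x = np.linspace(-1, 1, 128)
xx, yy = np.meshgrid(x, x)
mu_t = np.where(xx**2 + yy**2 < 0.25, 1.0, 0.0)
proj = noisy_projection(mu_t)
```

In the paper this step is repeated for 450 projection views and followed by Feldkamp reconstruction; only the per-projection noise model is sketched here.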

  3. Interpretation of computed tomographic images

    International Nuclear Information System (INIS)

    Stickle, R.L.; Hathcock, J.T.

    1993-01-01

    This article discusses the production of optimal CT images in small animal patients as well as principles of radiographic interpretation. Technical factors affecting image quality and aiding image interpretation are included. Specific considerations for scanning various anatomic areas are given, including indications and potential pitfalls. Principles of radiographic interpretation are discussed. Selected patient images are illustrated

  4. A study on NMI report generation with computer aid diagnosis

    International Nuclear Information System (INIS)

    Yang Xiaona; Li Zhimin; Zhao Xiangjun; Qiu Jinping

    1994-01-01

    An expert system providing intelligent diagnosis, computer-aided diagnosis, and computerized report generation and management for nuclear medicine imaging (NMI) was developed. A mathematical model based on finite set mapping was evaluated for the diagnosis. In clinical application, the diagnostic sensitivity and specificity were 85.7% ∼ 93.4% and 92% ∼ 95.6%, respectively; the system's application may therefore be extended

  5. The vectorization of a ray tracing program for image generation

    Science.gov (United States)

    Plunkett, D. J.; Cychosz, J. M.; Bailey, M. J.

    1984-01-01

    Ray tracing is a widely used method for producing realistic computer generated images. Ray tracing involves firing an imaginary ray from a view point, through a point on an image plane, into a three dimensional scene. The intersections of the ray with the objects in the scene determine what is visible at that point on the image plane. This process must be repeated many times, once for each point (commonly called a pixel) in the image plane. A typical image contains more than a million pixels, making this process computationally expensive. A traditional ray tracing program processes one ray at a time. In such a serial approach, as much as ninety percent of the execution time is spent computing the intersection of a ray with the surfaces in the scene. With the CYBER 205, many rays can be intersected with all the bodies in the scene with a single series of vector operations. Vectorization of this intersection process results in large decreases in computation time. The CADLAB's interest in ray tracing stems from the need to produce realistic images of mechanical parts. A high quality image of a part during the design process can increase the productivity of the designer by helping him visualize the results of his work. To be useful in the design process, these images must be produced in a reasonable amount of time. This discussion explains how the ray tracing process was vectorized and gives examples of the images obtained.
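The vectorized intersection idea carries over directly to NumPy, where one batch of array operations intersects every ray with a sphere at once, much as the CYBER 205 vector code did. The geometry below is chosen purely for illustration:

```python
import numpy as np

def intersect_sphere(origins, dirs, center, radius):
    """Intersect many rays with one sphere in a single batch of array ops.
    Rays are p(t) = o + t*d with unit directions d; returns the nearest
    positive hit distance per ray (inf for a miss)."""
    oc = origins - center
    b = np.einsum('ij,ij->i', dirs, oc)            # per-ray dot products
    c = np.einsum('ij,ij->i', oc, oc) - radius**2
    disc = b * b - c                               # discriminant (unit dirs)
    t = -b - np.sqrt(np.where(disc >= 0, disc, 0.0))
    return np.where((disc >= 0) & (t > 0), t, np.inf)

# One ray per pixel of a 4x4 image plane, all fired along +z.
n = 16
origins = np.zeros((n, 3))
origins[:, :2] = np.stack(np.meshgrid(np.linspace(-1, 1, 4),
                                      np.linspace(-1, 1, 4)), -1).reshape(n, 2)
dirs = np.tile([0.0, 0.0, 1.0], (n, 1))
t = intersect_sphere(origins, dirs, center=np.array([0.0, 0.0, 5.0]), radius=1.0)
```

The four central rays hit the unit sphere at z = 5; the outer rays miss, exactly as the per-pixel visibility test in the abstract describes.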

  6. Tensors in image processing and computer vision

    CERN Document Server

    De Luis García, Rodrigo; Tao, Dacheng; Li, Xuelong

    2009-01-01

    Tensor signal processing is an emerging field with important applications to computer vision and image processing. This book presents the developments in this branch of signal processing, offering research and discussions by experts in the area. It is suitable for advanced students working in the area of computer vision and image processing.

  7. Correction for polychromatic aberration in computed tomography images

    International Nuclear Information System (INIS)

    Naparstek, A.

    1979-01-01

    A method and apparatus for correcting a computed tomography image for polychromatic aberration caused by the non-linear interaction (i.e. the energy dependent attenuation characteristics) of different body constituents, such as bone and soft tissue, with a polychromatic X-ray beam are described in detail. An initial image is conventionally computed from path measurements made as source and detector assembly scan a body section. In the improvement, each image element of the initial computed image representing attenuation is recorded in a store and is compared with two thresholds, one representing bone and the other soft tissue. Depending on the element value relative to the thresholds, a proportion of the respective constituent is allocated to that element location and corresponding bone and soft tissue projections are determined and stored. An error projection generator calculates projections of polychromatic aberration errors in the raw image data from recalled bone and tissue projections using a multidimensional polynomial function which approximates the non-linear interaction involved. After filtering, these are supplied to an image reconstruction computer to compute image element correction values which are subtracted from raw image element values to provide a corrected reconstructed image for display. (author)
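A minimal sketch of that correction loop, with invented thresholds and an invented second-order error polynomial (the patent fits a multidimensional polynomial to the measured non-linearity), and with row sums standing in for the scanner's projection geometry:

```python
import numpy as np

T_TISSUE, T_BONE = 0.1, 0.4            # hypothetical attenuation thresholds

def error_projection(initial_image):
    """Sketch of the correction loop: allocate bone / soft tissue from the
    initial reconstruction by thresholding, forward-project each constituent
    (row sums stand in for the scanner geometry), and evaluate a polynomial
    model of the polychromatic (beam-hardening) error per ray."""
    bone = (initial_image >= T_BONE).astype(float)
    tissue = ((initial_image >= T_TISSUE) &
              (initial_image < T_BONE)).astype(float)
    b = bone.sum(axis=1)               # bone path length per ray
    s = tissue.sum(axis=1)             # soft-tissue path length per ray
    # Hypothetical 2nd-order polynomial approximating the non-linear interaction.
    return 0.01 * b * b + 0.004 * b * s + 0.001 * s * s

# Toy initial image: a soft-tissue block containing a bone insert.
img = np.zeros((8, 8))
img[2:6, 2:6] = 0.2
img[3:5, 3:5] = 0.6
err = error_projection(img)
```

In the patented scheme these error projections are filtered and reconstructed into correction values that are subtracted from the raw image; only the error-projection step is sketched here.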

  8. CERPHASE: Computer-generated phase diagrams

    International Nuclear Information System (INIS)

    Ruys, A.J.; Sorrell, C.C.; Scott, F.H.

    1990-01-01

    CERPHASE is a collection of computer programs written in the BASIC programming language and developed for the purpose of teaching the principles of phase diagram generation from the ideal solution model of thermodynamics. Two approaches are used in the generation of the phase diagrams: freezing point depression and minimization of the free energy of mixing. Binary and ternary phase diagrams can be generated, as can diagrams containing the ideal solution parameters used to generate the actual phase diagrams. Since the diagrams utilize the ideal solution model, the data input required from the operator is minimal: only the heat of fusion and melting point of each component. CERPHASE is menu-driven and user-friendly, containing simple instructions in the form of screen prompts as well as a HELP file to guide the operator. A second purpose of CERPHASE is the prediction of phase diagrams in systems for which no experimentally determined phase diagrams are available, enabling the estimation of suitable firing or sintering temperatures for otherwise unknown systems. Since CERPHASE utilizes ideal solution theory, there are certain limitations imposed on the types of systems that can be predicted reliably. 6 refs., 13 figs.
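The freezing-point-depression approach reduces to one closed-form liquidus expression per component, needing exactly the two inputs CERPHASE asks for. A sketch for a hypothetical binary system with made-up heats of fusion and melting points:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def liquidus_T(x, dH, Tm):
    """Ideal-solution liquidus via freezing point depression:
    ln x = -(dH/R) * (1/T - 1/Tm), solved for T. Only the heat of fusion
    dH and the melting point Tm are needed, as in CERPHASE."""
    return dH / (dH / Tm - R * np.log(x))

# Hypothetical binary system A-B (invented dH and Tm values).
xA = np.linspace(0.01, 0.99, 200)
TA = liquidus_T(xA, dH=30e3, Tm=1500.0)        # A-rich liquidus
TB = liquidus_T(1.0 - xA, dH=20e3, Tm=1200.0)  # B-rich liquidus
T_eut = TA[np.argmin(np.abs(TA - TB))]         # crude eutectic estimate
```

The two liquidus curves fall from the pure-component melting points and cross at the eutectic, which is how a simple ideal-solution binary diagram is assembled.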

  9. Automatic caption generation for news images.

    Science.gov (United States)

    Feng, Yansong; Lapata, Mirella

    2013-04-01

    This paper is concerned with the task of automatically generating captions for images, which is important for many image-related applications. Examples include video and image retrieval as well as the development of tools that aid visually impaired individuals to access pictorial information. Our approach leverages the vast resource of pictures available on the web and the fact that many of them are captioned and colocated with thematically related documents. Our model learns to create captions from a database of news articles, the pictures embedded in them, and their captions, and consists of two stages. Content selection identifies what the image and accompanying article are about, whereas surface realization determines how to verbalize the chosen content. We approximate content selection with a probabilistic image annotation model that suggests keywords for an image. The model postulates that images and their textual descriptions are generated by a shared set of latent variables (topics) and is trained on a weakly labeled dataset (which treats the captions and associated news articles as image labels). Inspired by recent work in summarization, we propose extractive and abstractive surface realization models. Experimental results show that it is viable to generate captions that are pertinent to the specific content of an image and its associated article, while permitting creativity in the description. Indeed, the output of our abstractive model compares favorably to handwritten captions and is often superior to extractive methods.

  10. E-Beam Written Computer Generated Holograms.

    Science.gov (United States)

    1983-08-01

    the V2 parabola is tested without aid of the computer generated hologram, and the interferogram of Figure 3-5a results. It shows about 40 waves of... Achieving a spacing of 0.5 has proven to be a greater challenge than achieving the correct milling depth, particularly for higher spatial frequency patterns.

  11. Generating region proposals for histopathological whole slide image retrieval.

    Science.gov (United States)

    Ma, Yibing; Jiang, Zhiguo; Zhang, Haopeng; Xie, Fengying; Zheng, Yushan; Shi, Huaqiang; Zhao, Yu; Shi, Jun

    2018-06-01

    Content-based image retrieval is an effective method for histopathological image analysis. However, given a database of huge whole slide images (WSIs), acquiring appropriate regions of interest (ROIs) for training is important and difficult. Moreover, histopathological images can only be annotated by pathologists, resulting in a lack of labeling information. Therefore, generating ROIs from WSIs and retrieving images with few labels is an important and challenging task. This paper presents a novel unsupervised region proposing method for histopathological WSI based on Selective Search. Specifically, the WSI is over-segmented into regions which are hierarchically merged until the WSI becomes a single region. Nucleus-oriented similarity measures for region mergence and a Nucleus-Cytoplasm color space for histopathological images are specially defined to generate accurate region proposals. Additionally, we propose a new semi-supervised hashing method for image retrieval. The semantic features of images are extracted with Latent Dirichlet Allocation and transformed into binary hashing codes with Supervised Hashing. The methods are tested on a large-scale multi-class database of breast histopathological WSIs. The results demonstrate that for one WSI, our region proposing method can generate 7.3 thousand contoured regions which fit well with 95.8% of the ROIs annotated by pathologists. The proposed hashing method can retrieve a query image among 136 thousand images in 0.29 s and reach a precision of 91% with only 10% of images labeled. The unsupervised region proposing method can generate regions as predictions of lesions in histopathological WSI. The region proposals can also serve as the training samples to train machine-learning models for image retrieval. The proposed hashing method can achieve fast and precise image retrieval with a small amount of labels. Furthermore, the proposed methods can be potentially applied in online computer-aided-diagnosis systems.
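The hashing-based retrieval step can be illustrated with its simplest unsupervised cousin, random-hyperplane hashing followed by Hamming-distance ranking. The toy features below stand in for the paper's LDA-derived semantic features; this is not the paper's supervised hashing method:

```python
import numpy as np

rng = np.random.default_rng(42)

def train_hasher(dim, n_bits):
    """Random-hyperplane hashing: features on the same side of each random
    hyperplane share a bit, so nearby features get similar binary codes."""
    return rng.normal(size=(dim, n_bits))

def hash_codes(features, planes):
    return (features @ planes > 0).astype(np.uint8)   # binary codes

def retrieve(query_code, db_codes, k=5):
    """Rank the database by Hamming distance to the query code."""
    d = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(d)[:k]

# Toy "semantic features": two clusters standing in for two tissue classes.
db = np.vstack([rng.normal(0, 1, (50, 16)), rng.normal(5, 1, (50, 16))])
planes = train_hasher(16, 32)
codes = hash_codes(db, planes)
hits = retrieve(hash_codes(db[0:1], planes)[0], codes)
```

Because comparison happens on compact binary codes rather than raw features, this style of search is what makes sub-second retrieval over 136 thousand images feasible.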

  12. Automatic Model Generation Framework for Computational Simulation of Cochlear Implantation

    DEFF Research Database (Denmark)

    Mangado Lopez, Nerea; Ceresa, Mario; Duchateau, Nicolas

    2016-01-01

    Recent developments in computational modeling of cochlear implantation are promising to study in silico the performance of the implant before surgery. However, creating a complete computational model of the patient's anatomy while including an external device geometry remains challenging. To address such a challenge, we propose an automatic framework for the generation of patient-specific meshes for finite element modeling of the implanted cochlea. First, a statistical shape model is constructed from high-resolution anatomical μCT images. Then, by fitting the statistical model to a patient's CT image, an accurate model of the patient-specific cochlea anatomy is obtained. An algorithm based on the parallel transport frame is employed to perform the virtual insertion of the cochlear implant. Our automatic framework also incorporates the surrounding bone and nerve fibers and assigns...

  13. Method of generating a computer readable model

    DEFF Research Database (Denmark)

    2008-01-01

    A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element. The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method further comprises determining a first connection element of the first construction element and a second connection element of the second construction element located in a predetermined proximity of each other; and retrieving connectivity information of the corresponding connection types of the first...

  14. Computer generated holography with intensity-graded patterns

    Directory of Open Access Journals (Sweden)

    Rossella Conti

    2016-10-01

    Computer Generated Holography achieves patterned illumination at the sample plane through phase modulation of the laser beam at the objective back aperture. This is obtained by using liquid crystal-based spatial light modulators (LC-SLMs), which modulate the spatial phase of the incident laser beam. A variety of algorithms are employed to calculate the phase modulation masks addressed to the LC-SLM. These algorithms range from simple gratings-and-lenses approaches that generate multiple diffraction-limited spots, to iterative Fourier-transform algorithms capable of generating arbitrary illumination shapes tailored to the target contour. Applications for holographic light patterning include multi-trap optical tweezers, patterned voltage imaging and optical control of neuronal excitation using uncaging or optogenetics. Past implementations of computer generated holography used binary input profiles to generate binary light distributions at the sample plane. Here we demonstrate that using graded input sources enables generating intensity-graded light patterns and extends the range of application of holographic light illumination. First, we use intensity-graded holograms to compensate for the LC-SLM's position-dependent diffraction efficiency or for sample fluorescence inhomogeneity. Finally, we show that intensity-graded holography can be used to equalize photo-evoked currents from cells expressing different levels of channelrhodopsin-2 (ChR2), one of the most commonly used optogenetic light-gated channels, taking into account the non-linear dependence of channel opening on incident light.
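The iterative Fourier-transform algorithm mentioned above is typically a Gerchberg-Saxton-style loop. A NumPy sketch that computes a phase-only mask for an intensity-graded two-spot target (sizes and the 2:1 grading are invented for illustration):

```python
import numpy as np

def gerchberg_saxton(target, n_iter=50, seed=0):
    """Iterative Fourier-transform (Gerchberg-Saxton) algorithm: find a
    phase-only SLM mask whose far-field intensity approximates the target.
    A graded target directly yields intensity-graded spots."""
    rng = np.random.default_rng(seed)
    amp = np.sqrt(target)                       # target field amplitude
    phase = rng.uniform(0.0, 2 * np.pi, target.shape)
    slm = np.exp(1j * phase)
    for _ in range(n_iter):
        far = amp * np.exp(1j * phase)          # impose target amplitude
        slm = np.exp(1j * np.angle(np.fft.ifft2(far)))  # phase-only constraint
        phase = np.angle(np.fft.fft2(slm))      # keep the achieved phase
    return np.angle(slm), np.abs(np.fft.fft2(slm))**2

# Intensity-graded target: two spots with a 2:1 intensity ratio.
target = np.zeros((32, 32))
target[8, 8], target[24, 24] = 1.0, 0.5
mask, intensity = gerchberg_saxton(target)
```

The same loop with a graded target is, in spirit, how the intensity-graded holograms of the paper equalize spot intensities across the field.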

  15. Topics in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2013-01-01

      The sixteen chapters included in this book were written by invited experts of international recognition and address important issues in Medical Image Processing and Computational Vision, including: Object Recognition, Object Detection, Object Tracking, Pose Estimation, Facial Expression Recognition, Image Retrieval, Data Mining, Automatic Video Understanding and Management, Edges Detection, Image Segmentation, Modelling and Simulation, Medical thermography, Database Systems, Synthetic Aperture Radar and Satellite Imagery.   Different applications are addressed and described throughout the book, comprising: Object Recognition and Tracking, Facial Expression Recognition, Image Database, Plant Disease Classification, Video Understanding and Management, Image Processing, Image Segmentation, Bio-structure Modelling and Simulation, Medical Imaging, Image Classification, Medical Diagnosis, Urban Areas Classification, Land Map Generation.   The book brings together the current state-of-the-art in the various mul...

  16. Parotid lymphomas - clinical and computed tomographic imaging ...

    African Journals Online (AJOL)

    Parotid lymphomas - clinical and computed tomographic imaging features. ... South African Journal of Surgery ... Lymphoma has a clinical presentation similar ... CT scanning is a useful adjunctive investigation to determine the site and extent of ...

  17. Image quality in coronary computed tomography angiography

    DEFF Research Database (Denmark)

    Precht, Helle; Gerke, Oke; Thygesen, Jesper

    2018-01-01

    Background Computed tomography (CT) technology is rapidly evolving and software solution developed to optimize image quality and/or lower radiation dose. Purpose To investigate the influence of adaptive statistical iterative reconstruction (ASIR) at different radiation doses in coronary CT...

  18. Computer vision for biomedical image applications. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yanxi [Carnegie Mellon Univ., Pittsburgh, PA (United States). School of Computer Science, The Robotics Institute; Jiang, Tianzi [Chinese Academy of Sciences, Beijing (China). National Lab. of Pattern Recognition, Inst. of Automation; Zhang, Changshui (eds.) [Tsinghua Univ., Beijing, BJ (China). Dept. of Automation

    2005-07-01

    This book constitutes the refereed proceedings of the First International Workshop on Computer Vision for Biomedical Image Applications: Current Techniques and Future Trends, CVBIA 2005, held in Beijing, China, in October 2005 within the scope of ICCV 2005. (orig.)

  19. Computational ghost imaging using deep learning

    Science.gov (United States)

    Shimobaba, Tomoyoshi; Endo, Yutaka; Nishitsuji, Takashi; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Shiraki, Atsushi; Ito, Tomoyoshi

    2018-04-01

    Computational ghost imaging (CGI) is a single-pixel imaging technique that exploits the correlation between known random patterns and the measured intensity of light transmitted (or reflected) by an object. Although CGI can obtain two- or three-dimensional images with a single or a few bucket detectors, the quality of the reconstructed images is reduced by noise due to the reconstruction of images from random patterns. In this study, we improve the quality of CGI images using deep learning. A deep neural network is used to automatically learn the features of noise-contaminated CGI images. After training, the network is able to predict low-noise images from new noise-contaminated CGI images.
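The correlation at the heart of CGI can be reproduced in a few lines: known random patterns, simulated bucket measurements, and the per-pixel covariance reconstruction. The object and sizes below are toy choices:

```python
import numpy as np

rng = np.random.default_rng(7)

# Unknown object (a "T" shape) probed by known random illumination patterns;
# a single-pixel bucket detector records the total transmitted light.
obj = np.zeros((16, 16))
obj[3, 4:12] = 1.0
obj[3:12, 7] = 1.0
n_meas = 4000
patterns = rng.random((n_meas, 16, 16))
bucket = (patterns * obj).sum(axis=(1, 2))

# Correlation reconstruction: <I*B> - <I><B>, pixel by pixel. The residual
# fluctuations are the noise that the paper's deep network learns to remove.
recon = (patterns * bucket[:, None, None]).mean(axis=0) \
        - patterns.mean(axis=0) * bucket.mean()
```

With fewer measurements the reconstruction becomes visibly noisier, which is exactly the regime where the learned denoiser described in the abstract pays off.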

  20. From Digital Imaging to Computer Image Analysis of Fine Art

    Science.gov (United States)

    Stork, David G.

    An expanding range of techniques from computer vision, pattern recognition, image analysis, and computer graphics are being applied to problems in the history of art. The success of these efforts is enabled by the growing corpus of high-resolution multi-spectral digital images of art (primarily paintings and drawings), sophisticated computer vision methods, and most importantly the engagement of some art scholars who bring questions that may be addressed through computer methods. This paper outlines some general problem areas and opportunities in this new inter-disciplinary research program.

  1. Computational acceleration for MR image reconstruction in partially parallel imaging.

    Science.gov (United States)

    Ye, Xiaojing; Chen, Yunmei; Huang, Feng

    2011-05-01

    In this paper, we present a fast numerical algorithm for solving total variation and L1 (TVL1) based image reconstruction with application in partially parallel magnetic resonance imaging. Our algorithm uses a variable splitting method to reduce computational cost. Moreover, the Barzilai-Borwein step size selection method is adopted in our algorithm for much faster convergence. Experimental results on clinical partially parallel imaging data demonstrate that the proposed algorithm requires many fewer iterations and/or less computational cost than recently developed operator splitting and Bregman operator splitting methods, which can deal with a general sensing matrix in the reconstruction framework, to achieve similar or even better quality of reconstructed images.

  2. Image characterization of computed radiography

    International Nuclear Information System (INIS)

    Candeias, Janaina P.; Saddock, Aline; Oliveira, Davi F.; Lopes, Ricardo T.

    2007-01-01

    The digital radiographic image became a reality in the 1980s. Since then, several works have been developed with the aim of reducing the exposure time to ionizing radiation while obtaining excellent image quality with minimum exposure. In computed radiography, the conventional film is replaced by an Image Plate (IP), which consists of a radiosensitive layer of phosphor crystals on a polyester backing plate. The unique design makes it reusable and easy to handle. When exposed, the IP accumulates and stores the absorbed radiation energy. In order to qualify a computed radiography system it is necessary to evaluate the Image Plate. In this work a series of experimental procedures was performed with the aim of evaluating the response characteristics of different plates. For this purpose a computed radiography system, CR Tower Scanner - GE, was used with three different types of IPs, all manufactured by GE, whose designations are IPC, IPX and IPS. The Rhythm Acquire and Review programs were used for image acquisition and treatment, respectively. (author)

  3. Fifth generation computer systems. Proceedings of the International conference

    Energy Technology Data Exchange (ETDEWEB)

    Moto-oka, T

    1982-01-01

    The following topics were dealt with: Fifth Generation Computer Project-social needs and impact; knowledge information processing research plan; architecture research plan; knowledge information processing topics; fifth generation computer architecture considerations.

  4. Optical encryption with selective computational ghost imaging

    International Nuclear Information System (INIS)

    Zafari, Mohammad; Kheradmand, Reza; Ahmadi-Kandjani, Sohrab

    2014-01-01

    Selective computational ghost imaging (SCGI) is a technique which enables the reconstruction of an N-pixel image from N measurements or less. In this paper we propose an optical encryption method based on SCGI and experimentally demonstrate that this method has much higher security under eavesdropping and unauthorized accesses compared with previous reported methods. (paper)

  5. Biomedical Imaging and Computational Modeling in Biomechanics

    CERN Document Server

    Iacoviello, Daniela

    2013-01-01

    This book collects the state of the art and new trends in image analysis and biomechanics. It covers a wide field of scientific and cultural topics, ranging from remodeling of bone tissue under mechanical stimulus up to optimizing the performance of sports equipment, through patient-specific modeling in orthopedics, microtomography and its application in oral and implant research, computational modeling in the field of hip prostheses, image based model development and analysis of the human knee joint, kinematics of the hip joint, micro-scale analysis of compositional and mechanical properties of dentin, automated techniques for cervical cell image analysis, and biomedical imaging and computational modeling in cardiovascular disease.   The book will be of interest to researchers, PhD students, and graduate students with multidisciplinary interests related to image analysis and understanding, medical imaging, biomechanics, simulation and modeling, experimental analysis.

  6. Generation of nuclear magnetic resonance images

    International Nuclear Information System (INIS)

    Beckmann, N.X.

    1986-01-01

    Two techniques for generating nuclear magnetic resonance images, retro-projection and the direct transformation method, are studied. These techniques are based on the acquisition of NMR signals whose phase and frequency components are encoded in space by the application of magnetic field gradients. The construction of magnet coils is discussed, in particular a suitable magnet geometry with polar pieces and air gap. Obtaining image contrast from the T1 and T2 relaxation times, reconstructed from signals generated with sequences such as spin-echo, inversion-recovery and stimulated echo, is discussed. The mathematical formalism of the matrix solution of the Bloch equations is also presented. (M.C.K.)
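The retro-projection idea can be shown in its crudest form with just two projections; a real reconstruction uses many angles and a ramp filter, so this NumPy sketch only conveys the smearing step:

```python
import numpy as np

# A phantom and its projections at 0 and 90 degrees (column and row sums),
# the kind of data a retro-projection reconstruction starts from.
phantom = np.zeros((32, 32))
phantom[10:22, 14:18] = 1.0
p0 = phantom.sum(axis=0)       # projection onto the x axis
p90 = phantom.sum(axis=1)      # projection onto the y axis

# Unfiltered retro-projection: smear each projection back across the image
# along its acquisition direction and average.
recon = (p0[None, :] + p90[:, None]) / 2.0
```

Even with two views the backprojection peaks over the object; adding more angles and filtering sharpens this blurred estimate into the true image.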

  7. Computer Generated Inputs for NMIS Processor Verification

    International Nuclear Information System (INIS)

    J. A. Mullens; J. E. Breeding; J. A. McEvers; R. W. Wysor; L. G. Chiang; J. R. Lenarduzzi; J. T. Mihalczo; J. K. Mattingly

    2001-01-01

    Proper operation of the Nuclear Materials Identification System (NMIS) processor can be verified using computer-generated inputs [BIST (Built-In-Self-Test)] at the digital inputs. Preselected sequences of input pulses to all channels with known correlation functions are compared to the output of the processor. These types of verifications have been utilized in NMIS type correlation processors at the Oak Ridge National Laboratory since 1984. The use of this test confirmed a malfunction in a NMIS processor at the All-Russian Scientific Research Institute of Experimental Physics (VNIIEF) in 1998. The NMIS processor boards were returned to the U.S. for repair and subsequently used in NMIS passive and active measurements with Pu at VNIIEF in 1999
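The BIST idea, feeding a pulse sequence whose correlation functions are known exactly and comparing against the processor output, can be sketched with a simple autocorrelation (the periodic test input here is invented for illustration):

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Unnormalized autocorrelation of a pulse train, the kind of signature
    a correlation processor computes from its digital inputs."""
    return np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(max_lag)])

# Known test input: a pulse every 8 samples. Its autocorrelation is exactly
# predictable, so a processor's output can be checked against it bit for bit.
x = np.zeros(64)
x[::8] = 1.0
measured = autocorrelation(x, 16)
expected = np.array([8 - k // 8 if k % 8 == 0 else 0 for k in range(16)])
```

Any mismatch between the computed and predicted correlation flags a processor fault, which is how the malfunction at VNIIEF was caught.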

  8. Computational biomechanics for medicine imaging, modeling and computing

    CERN Document Server

    Doyle, Barry; Wittek, Adam; Nielsen, Poul; Miller, Karol

    2016-01-01

    The Computational Biomechanics for Medicine titles provide an opportunity for specialists in computational biomechanics to present their latest methodologies and advancements. This volume comprises eighteen of the newest approaches and applications of computational biomechanics, from researchers in Australia, New Zealand, USA, UK, Switzerland, Scotland, France and Russia. Some of the interesting topics discussed are: tailored computational models; traumatic brain injury; soft-tissue mechanics; medical image analysis; and clinically-relevant simulations. One of the greatest challenges facing the computational engineering community is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. We hope the research presented within this book series will contribute to overcoming this grand challenge.

  9. Sparse Image Reconstruction in Computed Tomography

    DEFF Research Database (Denmark)

    Jørgensen, Jakob Sauer

    In recent years, increased focus on the potentially harmful effects of x-ray computed tomography (CT) scans, such as radiation-induced cancer, has motivated research on new low-dose imaging techniques. Sparse image reconstruction methods, as studied for instance in the field of compressed sensing...... applications. This thesis takes a systematic approach toward establishing quantitative understanding of conditions for sparse reconstruction to work well in CT. A general framework for analyzing sparse reconstruction methods in CT is introduced and two sets of computational tools are proposed: 1...... contributions to a general set of computational characterization tools. Thus, the thesis contributions help advance sparse reconstruction methods toward routine use in...

  10. Computed tomography and three-dimensional imaging

    International Nuclear Information System (INIS)

    Harris, L.D.; Ritman, E.L.; Robb, R.A.

    1987-01-01

    Presented here is a brief introduction to two-, three-, and four-dimensional computed tomography. More detailed descriptions of the mathematics of reconstruction and of CT scanner operation are presented elsewhere. The complementary tomographic imaging methods of single-photon-emission tomography (SPECT), positron-emission tomography (PET), nuclear magnetic resonance (NMR) imaging, ultrasound sector scanning, and ultrasound computer-assisted tomography (UCAT) are only named here. Each imaging modality ''probes'' the body with a different energy form, yielding unique and useful information about tomographic sections through the body

  11. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R

    2010-01-01

    A cookbook of algorithms for common image processing applications Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists wh

  12. Study of check image using computed radiography

    International Nuclear Information System (INIS)

    Sato, Hiroshi

    2002-01-01

    In linacography there are two image-forming methods, the check image and the portal image. An image-forming method for the check image using computed radiography (CR) has been established; an image-forming method for the portal image using CR has not. Usually, an electronic portal imaging device (EPID) is used just before radiotherapy starts. A portal image-forming method using CR in place of the EPID would make it possible to confirm the precision of the irradiation position and the irradiation method for the relevant organs. A long-standing technical problem is that linac graphy (LG) images have low resolution. In order to improve the resolution of LG images, CR imaging techniques were introduced into the check-image-forming method: a heavy metallic sheet (HMS) is placed on the front side of the CR imaging plate (IP) cassette, and a high-contactness sponge on the back side. The improved contact between the HMS and the IP provided by the sponge improves the resolution of the check images, as reported in many related papers. The ST-III imaging plate should be used to maintain high sensitivity in the check-image-forming method. The same image-forming method was then introduced into the portal-image-forming method to improve its resolution. However, high-resolution portal images could not be acquired with the combination of the ST-III plate and radiotherapy doses. After several trials, the HR-V imaging plate for mammography was found to be the most useful for maintaining high resolution in portal images. It is also possible to modify the image quality by changing the GS parameter, one of the image-processing parameters in CR.
Furthermore, in case

  13. Computational mesh generation for vascular structures with deformable surfaces

    International Nuclear Information System (INIS)

    Putter, S. de; Laffargue, F.; Breeuwer, M.; Vosse, F.N. van de; Gerritsen, F.A.; Philips Medical Systems, Best

    2006-01-01

    Computational blood flow and vessel wall mechanics simulations for vascular structures are becoming an important research tool for patient-specific surgical planning and intervention. An important step in the modelling process for patient-specific simulations is the creation of the computational mesh from the segmented geometry. Most known solutions either require a large amount of manual processing or lead to a substantial difference between the segmented object and the actual computational domain. We have developed a chain of algorithms that closely couples image segmentation with deformable models and 3D mesh generation. The resulting processing chain is very robust and leads both to an accurate geometrical representation of the vascular structure and to high-quality computational meshes. The chain of algorithms has been tested on a wide variety of shapes. A benchmark comparison of our mesh generation application with five other available meshing applications clearly indicates that the new approach outperforms the existing methods in the majority of cases. (orig.)

  14. Computer assisted visualization of digital mammography images

    International Nuclear Information System (INIS)

    Funke, M.; Breiter, N.; Grabbe, E.; Netsch, T.; Biehl, M.; Peitgen, H.O.

    1999-01-01

    Purpose: In a clinical study, the feasibility of using a mammography workstation for the display and interpretation of digital mammography images was evaluated, and the results were compared with the corresponding laser film hard copies. Materials and Methods: Digital phosphorous plate radiographs of the entire breast were obtained in 30 patients using a direct magnification mammography system. The images were displayed for interpretation on the computer monitor of a dedicated mammography workstation and also presented as laser film hard copies on a film view box for comparison. Three readers evaluated the images with respect to image handling, image quality, and the visualization of relevant structures. Results: Handling and contrast of the monitor-displayed images were found to be superior to the film hard copies. Image noise was found in some cases but did not compromise the interpretation of the monitor images. The visualization of relevant structures was equal with both modalities. Altogether, image interpretation with the mammography workstation was considered to be easy, quick, and confident. Conclusions: Computer-assisted visualization and interpretation of digital mammography images using a dedicated workstation can be performed with sufficiently high diagnostic accuracy. (orig.) [de

  15. Computing Hypercrossed Complex Pairings in Digital Images

    Directory of Open Access Journals (Sweden)

    Simge Öztunç

    2013-01-01

    Full Text Available We consider an additive group structure on digital images and introduce the commutator of digital images. We then calculate the hypercrossed complex pairings, which generate normal subgroups in dimensions 2 and 3, using 8-adjacency and 26-adjacency.

  16. EXPRESS METHOD OF BARCODE GENERATION FROM FACIAL IMAGES

    Directory of Open Access Journals (Sweden)

    G. A. Kukharev

    2014-03-01

    Full Text Available In this paper a method for generating standard linear barcodes from facial images is proposed. The method is based on the histogram of facial image brightness: the histogram is averaged over a limited number of intervals, the results are quantized into the decimal range 0 to 9, and a table conversion yields the final barcode. The proposed solution is computationally cheap and does not require specialized image-processing software, which allows facial barcodes to be generated on mobile systems; the method can therefore be regarded as an express method. Tests on the Face94 and CUHK Face Sketch FERET databases showed that the proposed method is a new solution suitable for real-world practice and that the generated barcodes are stable under changes of scale, pose and mirroring of the facial image, as well as changes of facial expression and shadows on the face from local lighting. Because the barcode is generated directly from the facial image, it carries subjective information about the person's face.
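The pipeline in this abstract (brightness histogram, interval averaging, quantization to 0-9) can be sketched as follows. This is an illustrative reading, not the authors' code; the function name, the choice of 13 intervals, and the max-normalized quantization rule are assumptions:

```python
import numpy as np

def face_to_barcode_digits(gray_image, n_intervals=13):
    """Sketch: brightness histogram -> averages over a limited number of
    intervals -> quantization into the decimal range 0..9. The resulting
    digits would then be table-converted into a linear barcode."""
    # 256-bin brightness histogram of the (grayscale, uint8) face image
    hist, _ = np.histogram(gray_image, bins=256, range=(0, 256))
    # Average the histogram over a limited number of equal intervals
    chunks = np.array_split(hist, n_intervals)
    means = np.array([c.mean() for c in chunks])
    # Quantize the averaged values into decimal digits 0..9
    digits = np.floor(9 * means / means.max()).astype(int)
    return digits

rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
digits = face_to_barcode_digits(face)
print(digits)
```

The 13-digit output is deliberately sized like an EAN-13 payload, but the abstract does not specify which standard barcode symbology is used.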

  17. Real-time Image Generation for Compressive Light Field Displays

    International Nuclear Information System (INIS)

    Wetzstein, G; Lanman, D; Hirsch, M; Raskar, R

    2013-01-01

    With the invention of integral imaging and parallax barriers in the beginning of the 20th century, glasses-free 3D displays have become feasible. Only today—more than a century later—glasses-free 3D displays are finally emerging in the consumer market. The technologies being employed in current-generation devices, however, are fundamentally the same as what was invented 100 years ago. With rapid advances in optical fabrication, digital processing power, and computational perception, a new generation of display technology is emerging: compressive displays exploring the co-design of optical elements and computational processing while taking particular characteristics of the human visual system into account. In this paper, we discuss real-time implementation strategies for emerging compressive light field displays. We consider displays composed of multiple stacked layers of light-attenuating or polarization-rotating layers, such as LCDs. The involved image generation requires iterative tomographic image synthesis. We demonstrate that, for the case of light field display, computed tomographic light field synthesis maps well to operations included in the standard graphics pipeline, facilitating efficient GPU-based implementations with real-time framerates.
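The iterative tomographic synthesis for stacked light-attenuating layers can be illustrated with a toy rank-1 nonnegative factorization solved by multiplicative updates. This is a simplified stand-in for the GPU solvers the paper discusses, not their implementation; all names are hypothetical:

```python
import numpy as np

def two_layer_factorization(L, iters=200, eps=1e-12):
    """Illustrative rank-1 NMF: approximate a nonnegative light-field matrix
    L[i, j] by the product of two layer transmittances a[i] * b[j], using
    standard multiplicative updates for || L - a b^T ||^2."""
    m, n = L.shape
    a = np.ones(m)
    b = np.ones(n)
    for _ in range(iters):
        a *= (L @ b) / (a * (b @ b) + eps)
        b *= (L.T @ a) / (b * (a @ a) + eps)
    return a, b

# A light field that is exactly the product of two layers is recovered closely
a_true = np.array([0.2, 0.5, 1.0])
b_true = np.array([1.0, 0.3, 0.6, 0.9])
L = np.outer(a_true, b_true)
a, b = two_layer_factorization(L)
err = np.abs(np.outer(a, b) - L).max()
print(err)  # small residual
```

The multiplicative form keeps both layers nonnegative, which matches the physical constraint that transmittances cannot be negative.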

  18. Wavefront reconstruction using computer-generated holograms

    Science.gov (United States)

    Schulze, Christian; Flamm, Daniel; Schmidt, Oliver A.; Duparré, Michael

    2012-02-01

    We propose a new method to determine the wavefront of a laser beam, based on modal decomposition using computer-generated holograms (CGHs). Thereby the beam under test illuminates the CGH with a specific, inscribed transmission function that enables the measurement of modal amplitudes and phases by evaluating the first diffraction order of the hologram. Since we use an angular multiplexing technique, our method is innately capable of real-time measurements of amplitude and phase, yielding the complete information about the optical field. A measurement of the Stokes parameters, respectively of the polarization state, provides the possibility to calculate the Poynting vector. Two wavefront reconstruction possibilities are outlined: reconstruction from the phase for scalar beams and reconstruction from the Poynting vector for inhomogeneously polarized beams. To quantify single aberrations, the reconstructed wavefront is decomposed into Zernike polynomials. Our technique is applied to beams emerging from different kinds of multimode optical fibers, such as step-index, photonic crystal and multicore fibers, whereas in this work results are exemplarily shown for a step-index fiber and compared to a Shack-Hartmann measurement that serves as a reference.
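The final step, decomposing the reconstructed wavefront into Zernike polynomials, amounts to a linear least-squares fit over the unit disk. A minimal sketch with just the piston and tilt terms (function name and normalization are assumptions, not the authors' code):

```python
import numpy as np

def zernike_fit(wavefront, mask, x, y):
    """Least-squares fit of a wavefront (sampled on the unit disk) to three
    low-order Zernike terms: piston (1), tilt-x (2x), tilt-y (2y).
    Illustrative only; a real decomposition would use many more terms."""
    basis = np.stack([np.ones_like(x), 2 * x, 2 * y], axis=-1)
    A = basis[mask]                      # (n_pixels, 3) design matrix
    coeffs, *_ = np.linalg.lstsq(A, wavefront[mask], rcond=None)
    return coeffs

# Synthesize a tilted wavefront and recover its coefficients
n = 64
yy, xx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
disk = xx**2 + yy**2 <= 1.0
W = 0.5 + 2 * (0.3 * xx) + 2 * (-0.1 * yy)   # piston 0.5, tilts 0.3 and -0.1
c = zernike_fit(W, disk, xx, yy)
print(np.round(c, 3))  # → [ 0.5  0.3 -0.1]
```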

  19. Speeding up image reconstruction in computed tomography

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Computed tomography (CT) is a technique for imaging cross-sections of an object using X-ray measurements taken from different angles. In recent decades significant progress has been made: today advanced algorithms allow fast image reconstruction and high-quality images even with missing or noisy data, modern detectors provide high resolution without increasing radiation dose, and high-performance multi-core computing devices help us solve such tasks even faster. I will start with CT basics, then briefly present the existing classes of reconstruction algorithms and their differences. After that I will proceed to employing distinctive architectural features of modern multi-core devices (CPUs and GPUs) and popular programming interfaces (OpenMP, MPI, CUDA, OpenCL) to develop effective parallel implementations of image reconstruction algorithms. Decreasing full reconstruction time from long hours to minutes or even seconds has a revolutionary impact in diagnostic medicine and industria...
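The computational core shared by most of these algorithms, backprojection, is independent per pixel and therefore parallelizes naturally across the multi-core devices mentioned above. A toy unfiltered parallel-beam backprojection (illustrative only; the names and the nearest-neighbor discretization are assumptions):

```python
import numpy as np

def backproject(sinogram, thetas, n):
    """Unfiltered parallel-beam backprojection onto an n x n grid.
    The work per pixel is independent, so the loops parallelize naturally
    (e.g. one GPU thread per pixel, or an OpenMP loop over rows)."""
    recon = np.zeros((n, n))
    ys, xs = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    for sino_row, t in zip(sinogram, thetas):
        # detector coordinate that each pixel projects onto for this view
        s = xs * np.cos(t) + ys * np.sin(t) + (n - 1) / 2.0
        idx = np.clip(np.round(s).astype(int), 0, n - 1)
        recon += sino_row[idx]
    return recon / len(thetas)

# Forward-project a single bright point, then reconstruct: the result
# peaks at the original point location.
n = 65
phantom = np.zeros((n, n))
phantom[40, 20] = 1.0
thetas = np.linspace(0, np.pi, 90, endpoint=False)
ys, xs = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
sino = np.zeros((len(thetas), n))
for k, t in enumerate(thetas):
    s = np.clip(np.round(xs * np.cos(t) + ys * np.sin(t) + (n - 1) / 2.0).astype(int), 0, n - 1)
    np.add.at(sino[k], s.ravel(), phantom.ravel())
recon = backproject(sino, thetas, n)
peak = tuple(int(v) for v in np.unravel_index(recon.argmax(), recon.shape))
print(peak)  # → (40, 20)
```

A real reconstruction would filter the sinogram first (filtered backprojection) or iterate (algebraic methods); the unfiltered version is kept only to show the parallel structure.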

  20. On the pinned field image binarization for signature generation in image ownership verification method

    Directory of Open Access Journals (Sweden)

    Chang Hsuan

    2011-01-01

    Full Text Available Abstract The issue of pinned field image binarization for signature generation in the ownership verification of a protected image is investigated. The pinned field captures the texture information of the protected image and can be employed to enhance watermark robustness. In the proposed method, four optimization schemes are utilized to determine the threshold values for transforming the pinned field into a binary feature image, which is then used to generate an effective signature image. Experimental results show that the optimization schemes significantly improve signature robustness over the previous method (Lee and Chang, Opt. Eng. 49(9), 097005, 2010). Considering both the watermark retrieval rate and computation speed, the genetic algorithm is strongly recommended. In addition, compared with Chang and Lin's scheme (J. Syst. Softw. 81(7), 1118-1129, 2008), the proposed scheme also has better performance.

  1. Accuracy in Robot Generated Image Data Sets

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Dahl, Anders Bjorholm

    2015-01-01

    In this paper we present a practical innovation concerning how to achieve high accuracy of camera positioning when using a 6-axis industrial robot to generate high-quality data sets for computer vision. The innovation is based on the realization that, to a very large extent, the robot's positioning error is deterministic and can as such be calibrated away. We have successfully used this innovation in our efforts to create data sets for computer vision. Since the use of this innovation has a significant effect on data set quality, we here present it in some detail, to better aid others...

  2. 2nd Generation QUATARA Flight Computer

    Data.gov (United States)

    National Aeronautics and Space Administration — The primary objective of this activity is to develop, design, and test (DD&T) the QUAD-core siTARA (QUATARA) computer to distribute computationally intensive...

  3. Affective Computing used in an imaging interaction paradigm

    DEFF Research Database (Denmark)

    Schultz, Nette

    2003-01-01

    This paper combines affective computing with an imaging interaction paradigm. An imaging interaction paradigm means that human and computer communicate primarily by images. Images evoke emotions in humans, so the computer must be able to behave in an emotionally intelligent way. An affective image selection...

  4. Computational surgery and dual training computing, robotics and imaging

    CERN Document Server

    Bass, Barbara; Berceli, Scott; Collet, Christophe; Cerveri, Pietro

    2014-01-01

    This critical volume focuses on the use of medical imaging, medical robotics, simulation, and information technology in surgery. It offers a road map for computational surgery success, discusses the computer-assisted management of disease and surgery, and provides a rationale for image processing and diagnostics. This book also presents advances in image-driven intervention and robotics, and evaluates models and simulations for a broad spectrum of cancers as well as cardiovascular, neurological, and bone diseases. Training and performance analysis in surgery assisted by robotic systems is also covered. This book also: ·         Provides a comprehensive overview of the use of computational surgery and disease management ·         Discusses the design and use of medical robotic tools for orthopedic surgery, endoscopic surgery, and prostate surgery ·         Provides practical examples and case studies in the areas of image processing, virtual surgery, and simulation traini...

  5. Patient Dose From Megavoltage Computed Tomography Imaging

    International Nuclear Information System (INIS)

    Shah, Amish P.; Langen, Katja M.; Ruchala, Kenneth J.; Cox, Andrea; Kupelian, Patrick A.; Meeks, Sanford L.

    2008-01-01

    Purpose: Megavoltage computed tomography (MVCT) can be used daily for imaging with a helical tomotherapy unit for patient alignment before treatment delivery. The purpose of this investigation was to show that the MVCT dose can be computed in phantoms, and further, that the dose can be reported for actual patients from MVCT on a helical tomotherapy unit. Methods and Materials: An MVCT beam model was commissioned and verified through a series of absorbed dose measurements in phantoms. This model was then used to retrospectively calculate the imaging doses to the patients. The MVCT dose was computed for five clinical cases: prostate, breast, head/neck, lung, and craniospinal axis. Results: Validation measurements in phantoms verified that the computed dose can be reported to within 5% of the measured dose delivered at the helical tomotherapy unit. The imaging dose scaled inversely with changes to the CT pitch. Relative to a normal pitch of 2.0, the organ dose can be scaled by 0.67 and 2.0 for scans done with a pitch of 3.0 and 1.0, respectively. Typical doses were in the range of 1.0-2.0 cGy, if imaged with a normal pitch. The maximal organ dose calculated was 3.6 cGy in the neck region of the craniospinal patient, if imaged with a pitch of 1.0. Conclusion: Calculation of the MVCT dose has shown that the typical imaging dose is approximately 1.5 cGy per image. The uniform MVCT dose delivered using helical tomotherapy is greatest when the anatomic thickness is the smallest and the pitch is set to the lowest value
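The inverse pitch scaling reported above can be written as a one-line helper. The numbers come from the abstract (relative to a normal pitch of 2.0, factors of 0.67 and 2.0 at pitches of 3.0 and 1.0); the function itself is only an illustration:

```python
def mvct_dose(dose_at_ref_pitch, pitch, ref_pitch=2.0):
    """Scale an MVCT organ dose with pitch: the dose is inversely
    proportional to the pitch, per the study's phantom validation."""
    return dose_at_ref_pitch * ref_pitch / pitch

# A 1.5 cGy dose at the normal pitch of 2.0:
print(mvct_dose(1.5, 3.0))  # → 1.0 cGy (factor 2/3 ≈ 0.67)
print(mvct_dose(1.5, 1.0))  # → 3.0 cGy (factor 2.0)
```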

  6. The Computer Image Generation Applications Study.

    Science.gov (United States)

    1980-07-01

    [Abstract not available: the record text is a garbled fragment of the report's tables, listing computer image generation visual-database models (T62 Tank, Lexington Carrier, Sea Scape, Fresnel Lens Optical Landing System (FLOLS), Meatball, T37 Aircraft, NATO) with numeric counts, plus a cross-reference to section 7.1.5.5 for the definition of monocular movement parallax and a heading "(g) Multiple Simulations".]

  7. Dense image correspondences for computer vision

    CERN Document Server

    Liu, Ce

    2016-01-01

    This book describes the fundamental building-block of many new computer vision systems: dense and robust correspondence estimation. Dense correspondence estimation techniques are now successfully being used to solve a wide range of computer vision problems, very different from the traditional applications such techniques were originally developed to solve. This book introduces the techniques used for establishing correspondences between challenging image pairs, the novel features used to make these techniques robust, and the many problems dense correspondences are now being used to solve. The book provides information to anyone attempting to utilize dense correspondences in order to solve new or existing computer vision problems. The editors describe how to solve many computer vision problems by using dense correspondence estimation. Finally, it surveys resources, code, and data necessary for expediting the development of effective correspondence-based computer vision systems.   ·         Provides i...

  8. Computer Generated Hologram System for Wavefront Measurement System Calibration

    Science.gov (United States)

    Olczak, Gene

    2011-01-01

    Computer Generated Holograms (CGHs) have been used for some time to calibrate interferometers that require nulling optics. A typical scenario is the testing of aspheric surfaces with an interferometer placed near the paraxial center of curvature. Existing CGH technology suffers from a reduced capacity to calibrate middle and high spatial frequencies. The root cause of this shortcoming is as follows: the CGH is not placed at an image conjugate of the asphere due to limitations imposed by the geometry of the test and the allowable size of the CGH. This innovation provides a calibration system where the imaging properties in calibration can be made comparable to the test configuration. Thus, if the test is designed to have good imaging properties, then middle and high spatial frequency errors in the test system can be well calibrated. The improved imaging properties are provided by a rudimentary auxiliary optic as part of the calibration system. The auxiliary optic is simple to characterize and align to the CGH. Use of the auxiliary optic also reduces the size of the CGH required for calibration and the density of the lines required for the CGH. The resulting CGH is less expensive than the existing technology and has reduced write error and alignment error sensitivities. This CGH system is suitable for any kind of calibration using an interferometer when high spatial resolution is required. It is especially well suited for tests that include segmented optical components or large apertures.

  9. Nuclear imaging using Fuji Computed Radiography

    International Nuclear Information System (INIS)

    Yodono, Hiraku; Tarusawa, Nobuko; Katto, Keiichi; Miyakawa, Takayoshi; Watanabe, Sadao; Shinozaki, Tatsuyo

    1988-01-01

    We studied the feasibility of the Fuji Computed Radiography (FCR) system in nuclear medicine. The basic principle of the system is the conversion of the X-ray energy pattern into digital signals utilizing scanning-laser-stimulated luminescence. A Rollo phantom filled with 12 mCi of Tc-99m pertechnetate was used in this study. For imaging with the FCR, a low-energy high-resolution parallel-hole collimator for a gamma camera was placed over the phantom, and photons passing through the collimator were stored on a single imaging plate (IP) or on 3 IPs covered by a lead plate 0.3 mm in thickness. Imaging took 30 minutes with a single IP and 20 minutes with 3 IPs plus the lead plate. Each FCR image of the phantom was compared with one obtained by a gamma camera. The image from a single IP was inferior in quality to the gamma camera image; however, using 3 IPs with the lead plate, an image of the same quality as the gamma camera's was obtained. The image from 3 IPs alone is similar to that from 3 IPs with the lead plate. Based on these results, we performed liver and lung imaging by FCR using 3 IPs, with an imaging time of twenty minutes. The images obtained with FCR are as good as the scinticamera images. However, the method has two major flaws: the sensitivity is poor and the imaging time is long. Furthermore, at present this method can only be employed for static imaging. We feel that future improvements in the FCR system will overcome these problems. (author)

  10. Soil structure characterized using computed tomographic images

    Science.gov (United States)

    Zhanqi Cheng; Stephen H. Anderson; Clark J. Gantzer; J. W. Van Sambeek

    2003-01-01

    Fractal analysis of soil structure is a relatively new method for quantifying the effects of management systems on soil properties and quality. The objective of this work was to explore several methods of studying images to describe and quantify structure of soils under forest management. This research uses computed tomography and a topological method called Multiple...

  11. Parotid lymphomas - clinical and computed tomographic imaging ...

    African Journals Online (AJOL)

    Objective. To review the clinical presentation and computed tomography (CT) imaging characteristics of all parotid lymphomas diagnosed at the study institution over a 7-year period. Design. Retrospective chart review of parotid lymphomas diagnosed between 1997 and 2004. Subjects. A total of 121 patients with parotid ...

  12. Computer-Controlled Force Generator, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — TDA Research, Inc. is developing a compact, low power, Next-Generation Exercise Device (NGRED) that can generate any force between 5 and 600 lbf. We use a closed...

  13. Prior image constrained scatter correction in cone-beam computed tomography image-guided radiation therapy.

    Science.gov (United States)

    Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong

    2011-02-21

    X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with an empirical hypothesis that the target image can be written as a weighted summation of a series of basis images that are generated by raising the raw cone-beam projection data to different powers, and then, reconstructing using the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
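The weighted-summation hypothesis reduces to a linear least-squares problem for the weights. A sketch with synthetic basis images (in the paper the basis images come from filtered backprojection of the projection data raised to different powers; everything below is illustrative, not the authors' code):

```python
import numpy as np

def fit_basis_weights(basis_images, prior_image):
    """Least-squares weights w minimizing || sum_k w_k B_k - prior ||^2,
    mirroring the weighted-summation hypothesis of the correction algorithm."""
    A = np.stack([b.ravel() for b in basis_images], axis=1)  # pixels x K
    w, *_ = np.linalg.lstsq(A, prior_image.ravel(), rcond=None)
    return w

# Synthetic check: a prior built from known weights is recovered exactly
rng = np.random.default_rng(1)
basis = [rng.random((32, 32)) for _ in range(3)]
true_w = np.array([0.8, -0.3, 0.5])
prior = sum(w * b for w, b in zip(true_w, basis))
w = fit_basis_weights(basis, prior)
print(np.round(w, 3))  # → [ 0.8 -0.3  0.5]
```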

  14. Computer generation of integrands for Feynman parametric integrals

    International Nuclear Information System (INIS)

    Cvitanovic, Predrag

    1973-01-01

    TECO text editing language, available on PDP-10 computers, is used for the generation and simplification of Feynman integrals. This example shows that TECO can be a useful computational tool in complicated calculations where similar algebraic structures recur many times

  15. Synchrotron Imaging Computations on the Grid without the Computing Element

    International Nuclear Information System (INIS)

    Curri, A; Pugliese, R; Borghes, R; Kourousias, G

    2011-01-01

    Besides the heavy use of the Grid in the Synchrotron Radiation Facility (SRF) Elettra, additional special requirements from the beamlines had to be satisfied through a novel solution that we present in this work. In the traditional Grid computing paradigm the computations are performed on the Worker Nodes of the grid element known as the Computing Element. A Grid middleware extension that our team has been working on is the Instrument Element. In general it is used to Grid-enable instrumentation, and it can be seen as a concept neighbouring that of traditional Control Systems. As a further extension we demonstrate the Instrument Element as the steering mechanism for a series of computations. In our deployment it interfaces a Control System that manages a series of computationally demanding scientific imaging tasks in an online manner. Instrument control in Elettra is done through a suitable Distributed Control System, a common approach in the SRF community. The applications that we present are for a beamline working in medical imaging. The solution resulted in a substantial improvement of a Computed Tomography workflow. The near-real-time requirements could not have been easily satisfied by our Grid middleware (gLite) due to the various latencies that often occurred during the job submission and queuing phases. Moreover, the required deployment of a set of TANGO devices could not have been done on a standard gLite WN. Besides the avoidance of certain core Grid components, the Grid Security infrastructure has been utilised in the final solution.

  16. Automatic Model Generation Framework for Computational Simulation of Cochlear Implantation.

    Science.gov (United States)

    Mangado, Nerea; Ceresa, Mario; Duchateau, Nicolas; Kjer, Hans Martin; Vera, Sergio; Dejea Velardo, Hector; Mistrik, Pavel; Paulsen, Rasmus R; Fagertun, Jens; Noailly, Jérôme; Piella, Gemma; González Ballester, Miguel Ángel

    2016-08-01

    Recent developments in computational modeling of cochlear implantation promise to enable in silico study of the implant's performance before surgery. However, creating a complete computational model of the patient's anatomy that also includes an external device geometry remains challenging. To address this challenge, we propose an automatic framework for the generation of patient-specific meshes for finite element modeling of the implanted cochlea. First, a statistical shape model is constructed from high-resolution anatomical μCT images. Then, by fitting the statistical model to a patient's CT image, an accurate model of the patient-specific cochlea anatomy is obtained. An algorithm based on the parallel transport frame is employed to perform the virtual insertion of the cochlear implant. Our automatic framework also incorporates the surrounding bone and nerve fibers and assigns constitutive parameters to all components of the finite element model. This model can then be used to study in silico the effects of the electrical stimulation of the cochlear implant. Results are shown for a total of 25 patient models. In all cases, a final mesh suitable for finite element simulations was obtained, in an average time of 94 s. The framework has proven to be fast and robust, and is promising for a detailed prognosis of cochlear implantation surgery.
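The model-fitting step, projecting a target anatomy onto a statistical shape model (mean shape plus PCA modes), can be sketched as follows. This is a generic SSM projection, not the framework's actual fitting code; all names are hypothetical:

```python
import numpy as np

def fit_ssm(mean_shape, modes, target):
    """Project a target shape vector onto a PCA statistical shape model:
    target ≈ mean + modes @ b. With orthonormal mode vectors the optimal
    coefficients are simply b = modes.T @ (target - mean)."""
    b = modes.T @ (target - mean_shape)
    fitted = mean_shape + modes @ b
    return b, fitted

# Two orthonormal modes over a 6-dimensional (3 landmarks x 2D) shape vector
mean = np.zeros(6)
modes = np.linalg.qr(np.random.default_rng(2).random((6, 2)))[0]
b_true = np.array([1.5, -0.7])
target = mean + modes @ b_true
b, fitted = fit_ssm(mean, modes, target)
print(np.round(b, 3))  # → [ 1.5 -0.7]
```

In practice the fit is done over dense image data with regularization on b rather than over clean landmark vectors, but the projection structure is the same.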

  17. Visual computing scientific visualization and imaging systems

    CERN Document Server

    2014-01-01

    This volume aims to stimulate discussions on research involving the use of data and digital images as an understanding approach for analysis and visualization of phenomena and experiments. The emphasis is put not only on graphically representing data as a way of increasing its visual analysis, but also on the imaging systems which contribute greatly to the comprehension of real cases. Scientific Visualization and Imaging Systems encompass multidisciplinary areas, with applications in many knowledge fields such as Engineering, Medicine, Material Science, Physics, Geology, Geographic Information Systems, among others. This book is a selection of 13 revised and extended research papers presented in the International Conference on Advanced Computational Engineering and Experimenting -ACE-X conferences 2010 (Paris), 2011 (Algarve), 2012 (Istanbul) and 2013 (Madrid). The examples were particularly chosen from materials research, medical applications, general concepts applied in simulations and image analysis and ot...

  18. Automatic speech recognition for report generation in computed tomography

    International Nuclear Information System (INIS)

    Teichgraeber, U.K.M.; Ehrenstein, T.; Lemke, M.; Liebig, T.; Stobbe, H.; Hosten, N.; Keske, U.; Felix, R.

    1999-01-01

    Purpose: A study was performed to compare the performance of automatic speech recognition (ASR) with conventional transcription. Materials and Methods: 100 CT reports were generated using ASR and 100 CT reports were dictated and written by medical transcriptionists. The time for dictation and correction of errors by the radiologist was assessed and the types of mistakes were analysed. The text recognition rate was calculated in both groups, and the average time between completion of the imaging study by the technologist and generation of the written report was assessed. A commercially available speech recognition technology (ASKA Software, IBM Via Voice) running on a personal computer was used. Results: The time for dictation using digital voice recognition was 9.4±2.3 min compared to 4.5±3.6 min with an ordinary Dictaphone. The text recognition rate was 97% with digital voice recognition and 99% with medical transcriptionists. The average time from imaging completion to written report finalisation was reduced from 47.3 hours with medical transcriptionists to 12.7 hours with ASR. The analysis of misspellings demonstrated (ASR vs. medical transcriptionists): 3 vs. 4 syntax errors, 0 vs. 37 orthographic mistakes, 16 vs. 22 mistakes in substance and 47 vs. erroneously applied terms. Conclusions: The use of digital voice recognition as a replacement for medical transcription is to be recommended when immediate availability of written reports is necessary. (orig.) [de

  19. Review methods for image segmentation from computed tomography images

    International Nuclear Information System (INIS)

    Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik; Mahmud, Rozi

    2014-01-01

    Image segmentation is a challenging process when accuracy, automation, and robustness are required, especially for medical images. Many segmentation methods exist, but not all of them are suitable for medical images. For medical purposes, the aims of image segmentation are to study the anatomical structure, identify the region of interest, measure tissue volume to track tumor growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of segmentation methods for Computed Tomography (CT) images. CT images have characteristics that affect the ability to visualize anatomic structures and pathologic features, such as blurring and visual noise. The methods, their strengths, and the problems they incur are defined and explained. Knowing the suitable segmentation method is necessary to obtain an accurate segmentation. This paper can guide researchers in choosing a suitable segmentation method, especially for segmenting images from CT scans.

  20. The RANDOM computer program: A linear congruential random number generator

    Science.gov (United States)

    Miles, R. F., Jr.

    1986-01-01

    The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) the RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) the RANCYCLE and ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
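The linear congruential form discussed in the document is x_{n+1} = (a*x_n + c) mod m. A sketch using the well-known Park-Miller "minimal standard" parameters, which are not necessarily those selected by the RANDOM program:

```python
def lcg(seed, a=16807, c=0, m=2**31 - 1):
    """Linear congruential generator x_{n+1} = (a*x_n + c) mod m, with the
    Park-Miller 'minimal standard' parameters (a=16807, m=2^31-1) as an
    example; a nonzero seed is required when c = 0."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m  # uniform deviate in (0, 1)

gen = lcg(seed=1)
sample = [next(gen) for _ in range(3)]
print(sample)
```

Testing an LCG, as the RANCYCLE and ARITH programs assist with, amounts to checking the period and the statistical quality of such sequences for a given (a, c, m).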

  1. Computed image analysis of neutron radiographs

    International Nuclear Information System (INIS)

    Dinca, M.; Anghel, E.; Preda, M.; Pavelescu, M.

    2008-01-01

    As with X-radiography, there is in practice a nondestructive technique that uses neutrons as the penetrating particles, named neutron radiology. When the information is registered on a film with the help of a conversion foil (with a high cross section for neutrons) that emits secondary radiation (β, γ) creating a latent image, the technique is named neutron radiography. A radiographic industrial film that contains the image of the internal structure of an object, obtained by neutron radiography, must subsequently be analyzed to obtain qualitative and quantitative information about the structural integrity of that object. A computed analysis of a film is possible using a facility with the following main components: a film illuminator, a CCD video camera, and a computer (PC) with suitable software. The qualitative analysis aims to reveal possible anomalies of the structure due to manufacturing processes or induced by working processes (for example, irradiation in the case of nuclear fuel). The quantitative determination is based on measurements of image parameters: dimensions and optical densities. The illuminator was built specially for this application but can also be used for simple visual observation; the illuminated area is 9x40 cm. The frame of the system is an Abbe comparator of Carl Zeiss Jena type, which was adapted for this application. The video camera captures the image, which is stored and processed by the computer. A special program, SIMAG-NG, was developed at INR Pitesti which, alongside the program SMTV II of the special acquisition module SM 5010, can analyze the images of a film. The major application of the system was the quantitative analysis of a film containing the images of nuclear fuel pins beside a dimensional standard. The system was used to measure the length of the pellets of the TRIGA nuclear fuel. (authors)

  2. 2nd Generation QUATARA Flight Computer Project

    Science.gov (United States)

    Falker, Jay; Keys, Andrew; Fraticelli, Jose Molina; Capo-Iugo, Pedro; Peeples, Steven

    2015-01-01

    Single core flight computer boards have been designed, developed, and tested (DD&T) to be flown in small satellites for the last few years. In this project, a prototype flight computer will be designed as a distributed multi-core system containing four microprocessors running code in parallel. This flight computer will be capable of performing multiple computationally intensive tasks such as processing digital and/or analog data, controlling actuator systems, managing cameras, operating robotic manipulators and transmitting/receiving from/to a ground station. In addition, this flight computer will be designed to be fault tolerant both by creating a robust physical hardware connection and by using a software voting scheme to determine each processor's performance. This voting scheme will leverage the work done for the Space Launch System (SLS) flight software. The prototype flight computer will be constructed with Commercial Off-The-Shelf (COTS) components which are estimated to survive for two years in a low-Earth orbit.
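
The software voting scheme described above can be illustrated with a small sketch. This is a generic majority vote among redundant cores, not the actual SLS-derived algorithm (whose details the abstract does not give):

```python
from collections import Counter

def vote(outputs):
    """Majority vote over redundant processor outputs; returns the agreed
    value and the indices of dissenting cores, for fault flagging."""
    value, n = Counter(outputs).most_common(1)[0]
    if n <= len(outputs) // 2:
        raise RuntimeError(f"no majority among outputs: {outputs!r}")
    return value, [i for i, v in enumerate(outputs) if v != value]

# Four cores compute the same control quantity; core 2 returns a bad value.
print(vote([42, 42, 17, 42]))  # (42, [2])
```

A supervisor process could use the dissenter list to take a faulty core offline while the remaining majority keeps the spacecraft operating.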

  3. Proton computed tomography images with algebraic reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Bruzzi, M. [Physics and Astronomy Department, University of Florence, Florence (Italy); Civinini, C.; Scaringella, M. [INFN - Florence Division, Florence (Italy); Bonanno, D. [INFN - Catania Division, Catania (Italy); Brianzi, M. [INFN - Florence Division, Florence (Italy); Carpinelli, M. [INFN - Laboratori Nazionali del Sud, Catania (Italy); Chemistry and Pharmacy Department, University of Sassari, Sassari (Italy); Cirrone, G.A.P.; Cuttone, G. [INFN - Laboratori Nazionali del Sud, Catania (Italy); Presti, D. Lo [INFN - Catania Division, Catania (Italy); Physics and Astronomy Department, University of Catania, Catania (Italy); Maccioni, G. [INFN – Cagliari Division, Cagliari (Italy); Pallotta, S. [INFN - Florence Division, Florence (Italy); Department of Biomedical, Experimental and Clinical Sciences, University of Florence, Florence (Italy); SOD Fisica Medica, Azienda Ospedaliero-Universitaria Careggi, Firenze (Italy); Randazzo, N. [INFN - Catania Division, Catania (Italy); Romano, F. [INFN - Laboratori Nazionali del Sud, Catania (Italy); Sipala, V. [INFN - Laboratori Nazionali del Sud, Catania (Italy); Chemistry and Pharmacy Department, University of Sassari, Sassari (Italy); Talamonti, C. [INFN - Florence Division, Florence (Italy); Department of Biomedical, Experimental and Clinical Sciences, University of Florence, Florence (Italy); SOD Fisica Medica, Azienda Ospedaliero-Universitaria Careggi, Firenze (Italy); Vanzi, E. [Fisica Sanitaria, Azienda Ospedaliero-Universitaria Senese, Siena (Italy)

    2017-02-11

    A prototype proton Computed Tomography (pCT) system for hadron therapy has been manufactured and tested in a 175 MeV proton beam with a non-homogeneous phantom designed to simulate high-contrast material. BI-SART reconstruction algorithms have been implemented with GPU parallelism, taking into account the most likely paths of protons in matter. Reconstructed tomography images with r.m.s. density resolutions down to ~1% and spatial resolutions <1 mm, achieved within processing times of ~15 min for a 512×512 pixel image, prove that this technique will be beneficial if used instead of X-ray CT in hadron therapy.
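
The SART family of algebraic reconstruction methods referred to above updates the image with the back-projected, normalised residual between measured and predicted projections. Below is a toy sketch of one such update on a two-pixel "phantom"; it omits the most-likely-path modelling and GPU parallelism of the actual BI-SART code, and the relaxation factor is an assumption:

```python
import numpy as np

def sart_step(A, b, x, lam=0.5):
    """One SART update: correct x by the back-projected, normalised residual."""
    row_sum = A.sum(axis=1)          # per-ray path lengths
    col_sum = A.sum(axis=0)          # per-pixel ray coverage
    resid = (b - A @ x) / np.where(row_sum == 0, 1, row_sum)
    return x + lam * (A.T @ resid) / np.where(col_sum == 0, 1, col_sum)

# Toy system: two pixels seen by three rays.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
b = A @ x_true                       # noiseless projections
x = np.zeros(2)
for _ in range(200):
    x = sart_step(A, b, x)
print(np.round(x, 3))  # [2. 3.] -- the phantom is recovered
```

With consistent, noiseless data the iteration converges to the true image; real pCT data add noise and path uncertainty, which is where the most-likely-path weighting enters.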

  4. X-ray image signal generator

    International Nuclear Information System (INIS)

    Dalton, B.L.; Lill, B.H.

    1981-01-01

    This patent claim on behalf of EMI Ltd. relates to a flat-plate X-ray detector which uses a plate exhibiting so-called permanent induced electric polarization: in response to the pattern of radiation emerging from a patient it forms a polarization pattern, which is scanned by means of a laser to discharge the polarization through the plate and so generate electric signals representative of the X-ray image of the patient. In addition, a second laser operating at a different wavelength, e.g. infra-red, also scans or floods the plate detector to remove 'dark polarisation'. The plate detector may be a phosphor screen or a phosphor screen in combination with a scintillator. (author)

  5. Evaluation of the Next Generation Gamma Imager

    International Nuclear Information System (INIS)

    Amgarou, Khalil; Timi, Tebug; Blanc de Lanaute, Nicolas; Patoz, Audrey; Talent, Philippe; Menaa, Nabil; Carrel, Frederick; Schoepff, Vincent; Lemaire, Hermine; Gmar, Mehdi; Abou Khalil, Roger; Dogny, Stephane; Varet, Thierry

    2013-06-01

    Towards the end of their life cycle, nuclear facilities are generally associated with high levels of radiation exposure. The implementation of the ALARA principle requires limiting the radiation exposure of operating personnel during the different tasks of maintenance, decontamination and decommissioning. Canberra's latest involvement in the provision of nuclear measurement solutions has led, in the framework of a partnership agreement with CEA LIST, to the development of a new-generation gamma imager. The latter, designed for accurate localization of radioactive hotspots, consists of a pixelated chip hybridized to a 1 mm thick CdTe substrate to record photon pulses, and a coded mask aperture allowing background-noise subtraction by means of a technique called the mask/anti-mask procedure. This greatly contributes to the reduced size and weight of the gamma imager, as less gamma shielding is required around the detector. The spatial radioactivity map is automatically superimposed onto a pre-recorded photographic (visible) image of the scene of interest. To evaluate the performance of the new gamma imager, several experimental tests have been performed on an industrial prototype to investigate its detection response, including gamma imaging sensitivity and angular resolution, over a wide energy range (at least from 59 keV to 1330 keV). The impact of background noise was also evaluated, together with some future features such as energy discrimination and parallax correction. This paper presents and discusses the main results obtained in the above experimental study. A comparison with Monte Carlo simulations using the MCNP code is provided as well. (authors)
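
The mask/anti-mask idea can be shown with a deliberately simplified sketch: terms that follow the mask flip sign between the two decoded acquisitions, while systematic detector background does not, so their half-difference retains the hotspot and cancels the background. The real procedure acquires two images through complementary mask patterns; the decoding step itself is omitted here and the numbers are invented:

```python
import numpy as np

def mask_antimask(decoded_mask, decoded_antimask):
    """Half-difference of the two decoded acquisitions: the source image
    flips sign between them, the detector background does not."""
    return 0.5 * (decoded_mask - decoded_antimask)

rng = np.random.default_rng(0)
background = rng.normal(5.0, 0.1, size=(8, 8))   # systematic detector noise
source = np.zeros((8, 8))
source[3, 4] = 10.0                              # one radioactive hotspot

img = mask_antimask(source + background, -source + background)
print(tuple(int(i) for i in np.unravel_index(np.argmax(img), img.shape)))  # (3, 4)
```

The background cancels element by element, which is why the technique lets the imager get by with so little shielding around the detector.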

  6. Generating Computational Models for Serious Gaming

    NARCIS (Netherlands)

    Westera, Wim

    2018-01-01

    Many serious games include computational models that simulate dynamic systems. These models promote enhanced interaction and responsiveness. Under the social web paradigm more and more usable game authoring tools become available that enable prosumers to create their own games, but the inclusion of

  7. Examination of concept of next generation computer. Progress report 1999

    Energy Technology Data Exchange (ETDEWEB)

    Higuchi, Kenji; Hasegawa, Yukihiro; Hirayama, Toshio

    2000-12-01

    The Center for Promotion of Computational Science and Engineering has conducted R and D work on parallel processing technology and in 1999 began examining the concept of the next generation computer. This report describes behavior analyses of quantum calculation codes, discusses these analyses, and presents examination results for methods to reduce cache misses. Furthermore, it describes a performance simulator that is being developed to quantitatively examine the concept of the next generation computer. (author)

  8. Computer-generated movies as an analytic tool

    International Nuclear Information System (INIS)

    Elliott, R.L.

    1978-01-01

    One of the problems faced by the users of large, sophisticated modeling programs at the Los Alamos Scientific Laboratory (LASL) is the analysis of the results of their calculations. One of the more productive and frequently spectacular methods is the production of computer-generated movies. An overview of the generation of computer movies at LASL is presented. The hardware, software, and generation techniques are briefly discussed.

  9. Cone Beam Computed Tomographic imaging in orthodontics.

    Science.gov (United States)

    Scarfe, W C; Azevedo, B; Toghyani, S; Farman, A G

    2017-03-01

    Over the last 15 years, cone beam computed tomographic (CBCT) imaging has emerged as an important supplemental radiographic technique for orthodontic diagnosis and treatment planning, especially in situations which require an understanding of the complex anatomic relationships and surrounding structures of the maxillofacial skeleton. CBCT imaging provides unique features and advantages to enhance orthodontic practice over conventional extraoral radiographic imaging. While it is the responsibility of each practitioner to make a decision, in tandem with the patient/family, consensus-derived, evidence-based clinical guidelines are available to assist the clinician in the decision-making process. Specific recommendations provide selection guidance based on variables such as phase of treatment, clinically-assessed treatment difficulty, the presence of dental and/or skeletal modifying conditions, and pathology. CBCT imaging in orthodontics should always be considered wisely as children have conservatively, on average, a three to five times greater radiation risk compared with adults for the same exposure. The purpose of this paper is to provide an understanding of the operation of CBCT equipment as it relates to image quality and dose, highlight the benefits of the technique in orthodontic practice, and provide guidance on appropriate clinical use with respect to radiation dose and relative risk, particularly for the paediatric patient. © 2017 Australian Dental Association.

  10. Causal Set Generator and Action Computer

    OpenAIRE

    Cunningham, William; Krioukov, Dmitri

    2017-01-01

    The causal set approach to quantum gravity has gained traction over the past three decades, but numerical experiments involving causal sets have been limited to relatively small scales. The software suite presented here provides a new framework for the generation and study of causal sets. Its efficiency surpasses previous implementations by several orders of magnitude. We highlight several important features of the code, including the compact data structures, the $O(N^2)$ causal set generatio...

  11. Computer generation and manipulation of sounds

    DEFF Research Database (Denmark)

    Serafin, Stefania

    2007-01-01

    Musicians are always quick to adopt and explore new technologies. The fast-paced changes wrought by electrification, from the microphone via the analogue synthesiser to the laptop computer, have led to a wide diversity of new musical styles and techniques. Electronic music has grown to a broad field of investigation, taking in historical movements like musique concrète and elektronische Musik, and contemporary trends such as electronic dance music. A fascinating array of composers and inventors have contributed to a diverse set of technologies, practices and music. This book brings together... Recent areas of intense activity such as audiovisuals, live electronic music, interactivity and network music are actively promoted.

  12. BLAST Ring Image Generator (BRIG): simple prokaryote genome comparisons

    Directory of Open Access Journals (Sweden)

    Beatson Scott A

    2011-08-01

    Background: Visualisation of genome comparisons is invaluable for helping to determine genotypic differences between closely related prokaryotes. New visualisation and abstraction methods are required in order to improve the validation, interpretation and communication of genome sequence information, especially with the increasing amount of data arising from next-generation sequencing projects. Visualising a prokaryote genome as a circular image has become a powerful means of displaying informative comparisons of one genome to a number of others. Several programs, imaging libraries and internet resources already exist for this purpose; however, most are either limited in the number of comparisons they can show, are unable to adequately utilise draft genome sequence data, or require a knowledge of command-line scripting for implementation. Currently, there is no freely available desktop application that enables users to rapidly visualise comparisons between hundreds of draft or complete genomes in a single image. Results: BLAST Ring Image Generator (BRIG) can generate images that show multiple prokaryote genome comparisons, without an arbitrary limit on the number of genomes compared. The output image shows similarity between a central reference sequence and other sequences as a set of concentric rings, where BLAST matches are coloured on a sliding scale indicating a defined percentage identity. Images can also include draft genome assembly information to show read coverage, assembly breakpoints and collapsed repeats. In addition, BRIG supports the mapping of unassembled sequencing reads against one or more central reference sequences. Many types of custom data and annotations can be shown using BRIG, making it a versatile approach for visualising a range of genomic comparison data. BRIG is readily accessible to any user, as it assumes no specialist computational knowledge and will perform all required file parsing and BLAST comparisons
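
The "sliding scale" colouring of BLAST matches can be sketched as a simple linear fade. This is an illustrative scheme, not BRIG's actual palette code; the identity cutoffs and the fade-to-white choice are assumptions:

```python
def ring_colour(identity, base=(0, 0, 255), lower=70.0, upper=100.0):
    """Map a BLAST percent identity onto a sliding colour scale:
    full base colour at `upper` identity, fading to white at `lower`."""
    t = max(0.0, min(1.0, (identity - lower) / (upper - lower)))
    return tuple(round(255 + (c - 255) * t) for c in base)

print(ring_colour(100))  # (0, 0, 255)      full-identity hit
print(ring_colour(85))   # (128, 128, 255)  half-faded match
print(ring_colour(70))   # (255, 255, 255)  at/below the lower cutoff
```

Each ring segment in the output image would then be painted with the colour returned for its BLAST hit's percent identity.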

  13. BLAST Ring Image Generator (BRIG): simple prokaryote genome comparisons.

    Science.gov (United States)

    Alikhan, Nabil-Fareed; Petty, Nicola K; Ben Zakour, Nouri L; Beatson, Scott A

    2011-08-08

    Visualisation of genome comparisons is invaluable for helping to determine genotypic differences between closely related prokaryotes. New visualisation and abstraction methods are required in order to improve the validation, interpretation and communication of genome sequence information; especially with the increasing amount of data arising from next-generation sequencing projects. Visualising a prokaryote genome as a circular image has become a powerful means of displaying informative comparisons of one genome to a number of others. Several programs, imaging libraries and internet resources already exist for this purpose, however, most are either limited in the number of comparisons they can show, are unable to adequately utilise draft genome sequence data, or require a knowledge of command-line scripting for implementation. Currently, there is no freely available desktop application that enables users to rapidly visualise comparisons between hundreds of draft or complete genomes in a single image. BLAST Ring Image Generator (BRIG) can generate images that show multiple prokaryote genome comparisons, without an arbitrary limit on the number of genomes compared. The output image shows similarity between a central reference sequence and other sequences as a set of concentric rings, where BLAST matches are coloured on a sliding scale indicating a defined percentage identity. Images can also include draft genome assembly information to show read coverage, assembly breakpoints and collapsed repeats. In addition, BRIG supports the mapping of unassembled sequencing reads against one or more central reference sequences. Many types of custom data and annotations can be shown using BRIG, making it a versatile approach for visualising a range of genomic comparison data. BRIG is readily accessible to any user, as it assumes no specialist computational knowledge and will perform all required file parsing and BLAST comparisons automatically. There is a clear need for a user

  14. ADGEN: ADjoint GENerator for computer models

    Energy Technology Data Exchange (ETDEWEB)

    Worley, B.A.; Pin, F.G.; Horwedel, J.E.; Oblow, E.M.

    1989-05-01

    This paper presents the development of a FORTRAN compiler and an associated supporting software library called ADGEN. ADGEN reads FORTRAN models as input and produces an enhanced version of the input model. The enhanced version reproduces the original model calculations but also has the capability to calculate derivatives of model results of interest with respect to any and all of the model data and input parameters. The method for calculating the derivatives and sensitivities is the adjoint method. Partial derivatives are calculated analytically using computer calculus and saved as elements of an adjoint matrix on direct access storage. The total derivatives are calculated by solving an appropriate adjoint equation. ADGEN is applied to a major computer model of interest to the Low-Level Waste Community, the PRESTO-II model. PRESTO-II sample problem results reveal that ADGEN correctly calculates derivatives of responses of interest with respect to 300 parameters. The execution time to create the adjoint matrix is a factor of 45 times the execution time of the reference sample problem. Once this matrix is determined, the derivatives with respect to 3000 parameters are calculated in a factor of 6.8 times that of the reference model for each response of interest; for a single response this compares with a factor of roughly 3000 for determining these derivatives by parameter perturbations. The automation of the implementation of the adjoint technique for calculating derivatives and sensitivities eliminates the costly and manpower-intensive task of direct hand-implementation by reprogramming, and thus makes the powerful adjoint technique more amenable for use in sensitivity analysis of existing models. 20 refs., 1 fig., 5 tabs.
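
The cost contrast above (one adjoint pass yields all parameter derivatives at once, versus one perturbed model run per parameter) can be illustrated on a toy model. This is generic analytic/reverse-mode differentiation, not ADGEN's FORTRAN machinery; the model and numbers are invented, with n = 300 echoing the PRESTO-II example:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300                       # parameter count
p = rng.normal(size=n)
c = rng.normal(size=n)

def model(p):
    """Toy response: r(p) = sum_i c_i * sin(p_i)."""
    return float(np.sum(c * np.sin(p)))

# Adjoint/analytic pass: all n derivatives dr/dp_i = c_i * cos(p_i) at once.
grad_adjoint = c * np.cos(p)

# Perturbation approach: one extra model run per parameter.
h = 1e-6
grad_fd = np.empty(n)
for i in range(n):
    dp = p.copy()
    dp[i] += h
    grad_fd[i] = (model(dp) - model(p)) / h

print(bool(np.max(np.abs(grad_adjoint - grad_fd)) < 1e-4))  # True
```

The perturbation loop ran the model 300 extra times; the adjoint line ran it zero extra times, which is exactly the economy the abstract quantifies.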

  15. ADGEN: ADjoint GENerator for computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Horwedel, J.E.; Oblow, E.M.

    1989-05-01

    This paper presents the development of a FORTRAN compiler and an associated supporting software library called ADGEN. ADGEN reads FORTRAN models as input and produces an enhanced version of the input model. The enhanced version reproduces the original model calculations but also has the capability to calculate derivatives of model results of interest with respect to any and all of the model data and input parameters. The method for calculating the derivatives and sensitivities is the adjoint method. Partial derivatives are calculated analytically using computer calculus and saved as elements of an adjoint matrix on direct access storage. The total derivatives are calculated by solving an appropriate adjoint equation. ADGEN is applied to a major computer model of interest to the Low-Level Waste Community, the PRESTO-II model. PRESTO-II sample problem results reveal that ADGEN correctly calculates derivatives of responses of interest with respect to 300 parameters. The execution time to create the adjoint matrix is a factor of 45 times the execution time of the reference sample problem. Once this matrix is determined, the derivatives with respect to 3000 parameters are calculated in a factor of 6.8 times that of the reference model for each response of interest; for a single response this compares with a factor of roughly 3000 for determining these derivatives by parameter perturbations. The automation of the implementation of the adjoint technique for calculating derivatives and sensitivities eliminates the costly and manpower-intensive task of direct hand-implementation by reprogramming, and thus makes the powerful adjoint technique more amenable for use in sensitivity analysis of existing models. 20 refs., 1 fig., 5 tabs.

  16. Computing volume potentials for noninvasive imaging of cardiac excitation.

    Science.gov (United States)

    van der Graaf, A W Maurits; Bhagirath, Pranav; van Driel, Vincent J H M; Ramanna, Hemanth; de Hooge, Jacques; de Groot, Natasja M S; Götte, Marco J W

    2015-03-01

    In noninvasive imaging of cardiac excitation, the use of body surface potentials (BSP) rather than body volume potentials (BVP) has been favored due to enhanced computational efficiency and reduced modeling effort. Nowadays, increased computational power and the availability of open source software enable the calculation of BVP for clinical purposes. In order to illustrate the possible advantages of this approach, the explanatory power of BVP is investigated using a rectangular tank filled with an electrolytic conductor and a patient-specific three-dimensional model. MRI images of the tank and of a patient were obtained in three orthogonal directions using a turbo spin echo MRI sequence. The MRI images were segmented in three dimensions using custom-written software. Gmsh software was used for mesh generation. BVP were computed using a transfer matrix and FEniCS software. The solution for 240,000 nodes, corresponding to a resolution of 5 mm throughout the thorax volume, was computed in 3 minutes. The tank experiment revealed that an increased electrode surface renders the position of the 4 V equipotential plane insensitive to mesh cell size and reduces simulated deviations. In the patient-specific model, the impact of assigning a different conductivity to lung tissue on the distribution of volume potentials could be visualized. Generation of high-quality volume meshes and computation of BVP with a resolution of 5 mm is feasible using generally available software and hardware. Estimation of BVP may lead to an improved understanding of the genesis of BSP and of sources of local inaccuracies. © 2014 Wiley Periodicals, Inc.
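
The computation of volume potentials can be sketched with a much cruder stand-in for the study's finite-element pipeline: a Jacobi relaxation of Laplace's equation on a uniform voxel grid, with two electrode faces at fixed potential and insulating side walls. The actual work used Gmsh meshes, a transfer matrix and FEniCS; the grid size and the 4 V value (echoing the tank experiment) are assumptions:

```python
import numpy as np

n = 12
v = np.zeros((n, n, n))
v[0, :, :] = 4.0    # electrode face held at 4 V
v[-1, :, :] = 0.0   # grounded face
for _ in range(3000):
    # Jacobi sweep: each interior voxel becomes the mean of its 6 neighbours.
    v[1:-1, 1:-1, 1:-1] = (v[:-2, 1:-1, 1:-1] + v[2:, 1:-1, 1:-1] +
                           v[1:-1, :-2, 1:-1] + v[1:-1, 2:, 1:-1] +
                           v[1:-1, 1:-1, :-2] + v[1:-1, 1:-1, 2:]) / 6.0
    # Insulating tank walls: zero normal gradient on the four side faces.
    v[:, 0, :] = v[:, 1, :]
    v[:, -1, :] = v[:, -2, :]
    v[:, :, 0] = v[:, :, 1]
    v[:, :, -1] = v[:, :, -2]

# With insulating sides the potential drops linearly between the plates.
centre = v[n // 2, n // 2, n // 2]
print(round(float(centre), 2))  # 1.82, i.e. 4 * (1 - 6/11)
```

A finite-element solver on an unstructured mesh does the same job far more efficiently on anatomical geometry, but the boundary-value structure of the problem is identical.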

  17. Automated quadrilateral mesh generation for digital image structures

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    With the development of advanced imaging technology, digital images are widely used. This paper proposes an automatic quadrilateral mesh generation algorithm for multi-colour imaged structures. It takes an arbitrary digital image as input for automatic quadrilateral mesh generation; this includes removing the noise, extracting and smoothing the boundary geometries between different colours, and automatic all-quad mesh generation with the above boundaries as constraints. An application example is...

  18. Advanced proton imaging in computed tomography

    CERN Document Server

    Mattiazzo, S; Giubilato, P; Pantano, D; Pozzobon, N; Snoeys, W; Wyss, J

    2015-01-01

    In recent years the use of hadrons for cancer radiation treatment has grown in importance, and many facilities are currently operational or under construction worldwide. To fully exploit the therapeutic advantages offered by hadron therapy, precise body imaging for accurate beam delivery is decisive. Proton computed tomography (pCT) scanners, currently in their R&D phase, provide the ultimate 3D imaging for hadron treatment guidance. A key component of a pCT scanner is the detector used to track the protons, which has great impact on the scanner performance and ultimately limits its maximum speed. In this article, a novel proton-tracking detector is presented that offers higher scanning speed, better spatial resolution and lower material budget with respect to present state-of-the-art detectors, leading to enhanced performance. This advancement is achieved by employing the very latest developments in monolithic active pixel detectors (to build high granularity, low material budget, ...

  19. Feature extraction & image processing for computer vision

    CERN Document Server

    Nixon, Mark

    2012-01-01

    This book is an essential guide to the implementation of image processing and computer vision techniques, with tutorial introductions and sample code in Matlab. Algorithms are presented and fully explained to enable complete understanding of the methods and techniques demonstrated. As one reviewer noted, "The main strength of the proposed book is the exemplar code of the algorithms." Fully updated with the latest developments in feature extraction, including expanded tutorials and new techniques, this new edition contains extensive new material on Haar wavelets, Viola-Jones, bilateral filt

  20. Computing Challenges in Coded Mask Imaging

    Science.gov (United States)

    Skinner, Gerald

    2009-01-01

    This slide presentation reviews the complications and challenges in developing computer systems for coded mask imaging telescopes. The coded mask technique is used when there is no other way to build the telescope, i.e., when there are wide fields of view, energies too high for focusing or too low for Compton/tracker techniques, and very good angular resolution is required. The coded mask telescope is described and the mask is reviewed. The coded masks for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) instruments are shown, and a chart of the types of position-sensitive detectors used for coded mask telescopes is also reviewed. Slides describe the mechanism of recovering an image from the masked pattern, and the correlation with the mask pattern is described. The matrix approach is reviewed, and other approaches to image reconstruction are described. Included in the presentation is a review of the Energetic X-ray Imaging Survey Telescope (EXIST) / High Energy Telescope (HET), with information about the mission, the operation of the telescope, a comparison of the EXIST/HET with the SWIFT/BAT, and details of the design of the EXIST/HET.
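
The correlation step in recovering an image from the masked pattern can be illustrated in one dimension. A hypothetical random mask stands in for a real URA pattern, and decoding cross-correlates the shadowgram with the balanced mask (2M - 1), whose correlation with M peaks sharply at zero lag:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 101
mask = rng.integers(0, 2, n).astype(float)   # 1 = open element, 0 = opaque
sky = np.zeros(n)
sky[30] = 100.0                              # a single point source

# Each sky position casts a cyclically shifted mask shadow on the detector.
detector = np.zeros(n)
for s in range(n):
    detector += sky[s] * np.roll(mask, s)

# Decode: cross-correlate the shadowgram with the balanced mask 2M - 1.
g = 2.0 * mask - 1.0
recon = np.array([np.dot(detector, np.roll(g, s)) for s in range(n)])
print(int(np.argmax(recon)))  # 30: the source position is recovered
```

Real instruments use carefully designed patterns (URAs) whose autocorrelation sidelobes are exactly flat, and 2D masks with matrix or iterative decoding, but the correlation principle is the same.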

  1. A three-dimensional computer graphic imaging for neurosurgery

    International Nuclear Information System (INIS)

    Uchino, Masafumi; Onagi, Atsuo; Seiki, Yoshikatsu

    1987-01-01

    Information offered by conventional diagnostic tools for medical use, including X-ray films, CT, MRI, RI images and PET, is usually two-dimensional. However, the human body and pathological lesions extend in three dimensions. Interpreters have to reconstruct an imagined, three-dimensional configuration of lesions from two-dimensional information on many films, according to their knowledge and experience. This sometimes wastes a lot of time and gives rise to inconclusive discussion among interpreters. The advent and rapid progress of new computer graphic techniques, however, makes it possible to draw an apparently three-dimensional image of a lesion on the basis of a two-dimensional display; this is named a pseudo-3-dimensional image. After the region of interest of the CT-sliced image has been extracted by means of a semi-automatic contour extraction algorithm, multi-slice CT images are constructed by the voxel method. A 3-dimensional image is then generated by use of the Z-buffer. Subsequently, transparent, semi-transparent, and color displays are provided. This new method of display was used for CT-scan films of various intracerebral pathological lesions, including tumors, hematomas, and congenital anomalies. The benefits, prospects, and technical limits of this imaging technique for clinical use are discussed. (author)

  2. Computer simulation on generation of nitrogen clusters

    International Nuclear Information System (INIS)

    Yano, Katsuki

    1975-01-01

    Numerical calculations were made for supersonic flow of nitrogen gas accompanied by homogeneous condensation through a nozzle. It was demonstrated that nitrogen clusters are generated in a nozzle and, by comparison with the experimental results, the surface tension of the clusters was obtained as 0.68 σ∞ and the condensation coefficient as 0.1-0.2, where σ∞ is the surface tension of a plane surface of liquid nitrogen. Numerical results calculated with the above values show that large clusters are produced under conditions of high degree of saturation and high temperature in the gas reservoir, and also when a nozzle with small open angle and/or large throat is used. These results agree well with the experimental results. (auth.)

  3. Medical image computing for computer-supported diagnostics and therapy. Advances and perspectives.

    Science.gov (United States)

    Handels, H; Ehrhardt, J

    2009-01-01

    Medical image computing has become one of the most challenging fields in medical informatics. In image-based diagnostics of the future, software assistance will become more and more important, and image analysis systems integrating advanced image computing methods are needed to extract quantitative image parameters that characterize the state and changes of image structures of interest (e.g. tumors, organs, vessels, bones etc.) in a reproducible and objective way. Furthermore, in the field of software-assisted and navigated surgery, medical image computing methods play a key role and have opened up new perspectives for patient treatment. However, further developments are needed to increase the degree of automation, accuracy, reproducibility and robustness, and the systems developed have to be integrated into the clinical workflow. For the development of advanced image computing systems, methods from different scientific fields have to be adapted and used in combination. The principal methodologies in medical image computing are the following: image segmentation, image registration, image analysis for quantification and computer-assisted image interpretation, modeling and simulation, as well as visualization and virtual reality. In particular, model-based image computing techniques open up new perspectives for the prediction of organ changes and risk analysis of patients, and will gain importance in the diagnostics and therapy of the future. From a methodological point of view, the authors identify the following future trends and perspectives in medical image computing: development of optimized application-specific systems and integration into the clinical workflow, enhanced computational models for image analysis and virtual reality training systems, integration of different image computing methods, further integration of multimodal image data and biosignals, and advanced methods for 4D medical image computing. The development of image analysis systems for diagnostic support or

  4. Crowdsourcing for reference correspondence generation in endoscopic images.

    Science.gov (United States)

    Maier-Hein, Lena; Mersmann, Sven; Kondermann, Daniel; Stock, Christian; Kenngott, Hannes Gotz; Sanchez, Alexandro; Wagner, Martin; Preukschas, Anas; Wekerle, Anna-Laura; Helfert, Stefanie; Bodenstedt, Sebastian; Speidel, Stefanie

    2014-01-01

    Computer-assisted minimally-invasive surgery (MIS) is often based on algorithms that require establishing correspondences between endoscopic images. However, reference annotations frequently required to train or validate a method are extremely difficult to obtain because they are typically made by a medical expert with very limited resources, and publicly available data sets are still far too small to capture the wide range of anatomical/scene variance. Crowdsourcing is a new trend that is based on outsourcing cognitive tasks to many anonymous untrained individuals from an online community. To our knowledge, this paper is the first to investigate the concept of crowdsourcing in the context of endoscopic video image annotation for computer-assisted MIS. According to our study on publicly available in vivo data with manual reference annotations, anonymous non-experts obtain a median annotation error of 2 px (n = 10,000). By applying cluster analysis to multiple annotations per correspondence, this error can be reduced to about 1 px, which is comparable to that obtained by medical experts (n = 500). We conclude that crowdsourcing is a viable method for generating high quality reference correspondences in endoscopic video images.
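
Combining several noisy annotations per correspondence, as in the cluster analysis above, can be sketched with a robust component-wise median (a simple stand-in for the paper's clustering step; the coordinates and noise levels are invented):

```python
import numpy as np

true_pt = np.array([120.0, 85.0])   # expert reference correspondence (pixels)

# Ten anonymous clicks on the same image point; the last annotator
# clicked the wrong structure entirely.
clicks = np.array([
    [121.2, 84.1], [119.3, 86.0], [120.8, 85.9], [118.9, 84.4],
    [120.2, 83.8], [121.0, 85.5], [119.5, 84.9], [120.6, 86.3],
    [118.7, 85.2], [145.0, 67.0],
])

combined = np.median(clicks, axis=0)          # robust component-wise centre
err_single = np.linalg.norm(clicks - true_pt, axis=1)
err_combined = float(np.linalg.norm(combined - true_pt))

print(round(float(np.median(err_single)), 2))  # 1.24  typical single click
print(round(err_combined, 2))                  # 0.4   pooled annotation
```

The median ignores the gross outlier and pools the inliers, which mirrors the study's finding that clustering multiple crowd annotations cuts the error from about 2 px to about 1 px.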

  5. Innovations in Computer Generated Autonomy at the MOVES Institute

    National Research Council Canada - National Science Library

    Hiles, John

    2001-01-01

    The MOVES Institute's Computer-Generated Autonomy Group has focused on a research goal of modeling intensely complex and adaptive behavior while at the same time making the behavior far easier to create and control...

  6. New Generation General Purpose Computer (GPC) compact IBM unit

    Science.gov (United States)

    1991-01-01

    New Generation General Purpose Computer (GPC) compact IBM unit replaces a two-unit earlier generation computer. The new IBM unit is documented in table top views alone (S91-26867, S91-26868), with the onboard equipment it supports including the flight deck CRT screen and keypad (S91-26866), and next to the two earlier versions it replaces (S91-26869).

  7. Feasible Dose Reduction in Routine Chest Computed Tomography Maintaining Constant Image Quality Using the Last Three Scanner Generations: From Filtered Back Projection to Sinogram-affirmed Iterative Reconstruction and Impact of the Novel Fully Integrated Detector Design Minimizing Electronic Noise

    Directory of Open Access Journals (Sweden)

    Lukas Ebner

    2014-01-01

    Objective: The aim of the present study was to evaluate a dose reduction in contrast-enhanced chest computed tomography (CT) by comparing the three latest generations of Siemens CT scanners used in clinical practice. We analyzed the amount of radiation used with filtered back projection (FBP) and an iterative reconstruction (IR) algorithm to yield the same image quality. Furthermore, the influence on the radiation dose of the most recent integrated circuit detector (ICD; Stellar detector, Siemens Healthcare, Erlangen, Germany) was investigated. Materials and Methods: 136 patients were included. Scan parameters were set to a thorax routine: SOMATOM Sensation 64 (FBP), SOMATOM Definition Flash (IR), and SOMATOM Definition Edge (ICD and IR). Tube current was set constantly to the reference level of 100 mAs, with automated tube current modulation using reference milliamperes. CARE kV was used on the Flash and Edge scanners, while tube potential was individually selected between 100 and 140 kVp by the medical technologists at the SOMATOM Sensation. Quality assessment was performed on soft-tissue kernel reconstructions. Dose was represented by the dose-length product (DLP). Results: The DLP with FBP for the average chest CT was 308 mGy·cm ± 99.6. In contrast, the DLP for chest CT with the IR algorithm was 196.8 mGy·cm ± 68.8 (P = 0.0001). A further decline in dose was noted with IR and the ICD (DLP: 166.4 mGy·cm ± 54.5; P = 0.033). The dose reduction compared to FBP was 36.1% with IR and 45.6% with IR/ICD. The signal-to-noise ratio (SNR) was more favorable in the aorta, bone, and soft tissue for IR/ICD than for FBP (P values ranged from 0.003 to 0.048). The overall contrast-to-noise ratio (CNR) improved with declining DLP. Conclusion: The most recent technical developments, namely IR in combination with integrated circuit detectors, can significantly lower the radiation dose in chest CT examinations.

  8. A computational note on finite groups with two generators

    International Nuclear Information System (INIS)

    Saeed-ul-Islam, M.

    1983-12-01

    Finite groups with two independent generators attracted the attention of mathematicians during 1940-1959. These groups are subgroups of SU(n) and an interest is now being shown in these groups by particle physicists. In this note we give a brief history of these groups and announce some of the computations done by using a computer. (author)

  9. Computed tomography with selectable image resolution

    International Nuclear Information System (INIS)

    Dibianca, F.A.; Dallapiazza, D.G.

    1981-01-01

    A computed tomography system x-ray detector has a central group of half-width detector elements and groups of full-width elements on each side of the central group. To obtain x-ray attenuation data for whole body layers, the half-width elements are switched effectively into paralleled pairs so all elements act like full-width elements and an image of normal resolution is obtained. For narrower head layers, the elements in the central group are used as half-width elements so resolution which is twice as great as normal is obtained. The central group is also used in the half-width mode and the outside groups are used in the full-width mode to obtain a high resolution image of a body zone within a full body layer. In one embodiment data signals from the detector are switched by electronic multiplexing and in another embodiment a processor chooses the signals for the various kinds of images that are to be reconstructed. (author)
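The switchable readout can be sketched as follows, assuming the paired mode simply sums adjacent half-width signals; the function and mode names are invented for illustration and are not from the patent text:

```python
def detector_readout(elements, mode="body"):
    """Simulate the selectable-resolution readout described above.

    `elements` holds the half-width central detector signals.  In
    "body" mode adjacent pairs are summed so the central group behaves
    like full-width elements (normal resolution); in "head" mode each
    half-width element is read out individually (doubled resolution).
    """
    if mode == "head":
        return list(elements)
    if len(elements) % 2:
        raise ValueError("need an even number of half-width elements")
    return [elements[i] + elements[i + 1] for i in range(0, len(elements), 2)]

signals = [3, 5, 2, 4, 7, 1]
print(detector_readout(signals))          # paired: [8, 6, 8]
print(detector_readout(signals, "head"))  # half-width: [3, 5, 2, 4, 7, 1]
```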

  10. A computer program for the pointwise functions generation

    International Nuclear Information System (INIS)

    Caldeira, Alexandre D.

    1995-01-01

    A computer program that was developed with the objective of generating pointwise functions, by a combination of tabulated values and/or mathematical expressions, to be used as weighting functions for nuclear data is presented. This simple program can be an important tool for researchers involved in group constants generation. (author). 5 refs, 2 figs
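The combination of tabulated values and mathematical expressions into one pointwise weighting function might look like the following; this is a hypothetical sketch, since the program's actual interface is not described in the abstract:

```python
import math

def make_weighting_function(table, expr=None, cutoff=None):
    """Build a pointwise function from tabulated values and,
    optionally, a mathematical expression above a cutoff.

    `table` is a sorted list of (x, y) pairs interpolated linearly;
    the callable `expr` takes over for x >= cutoff.
    """
    def f(x):
        if expr is not None and cutoff is not None and x >= cutoff:
            return expr(x)
        xs = [p[0] for p in table]
        ys = [p[1] for p in table]
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        for i in range(len(xs) - 1):
            if xs[i] <= x <= xs[i + 1]:
                t = (x - xs[i]) / (xs[i + 1] - xs[i])
                return ys[i] + t * (ys[i + 1] - ys[i])
    return f

# A 1/E-like weighting tabulated at a few points, with an exponential
# tail supplied as an expression above the cutoff.
w = make_weighting_function([(1.0, 1.0), (2.0, 0.5), (4.0, 0.25)],
                            expr=lambda e: math.exp(-e), cutoff=10.0)
print(w(3.0))  # 0.375, linear interpolation between table points
```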

  11. Generation of synthetic Kinect depth images based on empirical noise model

    DEFF Research Database (Denmark)

    Iversen, Thorbjørn Mosekjær; Kraft, Dirk

    2017-01-01

    The development, training and evaluation of computer vision algorithms rely on the availability of a large number of images. The acquisition of these images can be time-consuming if they are recorded using real sensors. An alternative is to rely on synthetic images, which can be rapidly generated. This Letter describes a novel method for the simulation of Kinect v1 depth images. The method is based on an existing empirical noise model from the literature. The authors show that their relatively simple method is able to provide depth images which have a high similarity with real depth images.
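An empirical noise model of this kind can be sketched as a depth-dependent perturbation of an ideal depth map. The quadratic growth of axial noise with distance is a commonly cited Kinect v1 observation, but the coefficients below are illustrative placeholders, not the values used in the Letter:

```python
import random

def add_kinect_noise(depth_map, rng=None):
    """Perturb an ideal depth map (in metres) with depth-dependent
    Gaussian noise.  The noise standard deviation grows roughly
    quadratically with distance; coefficients are illustrative only.
    """
    rng = rng or random.Random(0)
    return [[z + rng.gauss(0.0, 0.0012 + 0.0019 * (z - 0.4) ** 2) for z in row]
            for row in depth_map]

ideal = [[1.0, 2.0], [3.0, 4.0]]
noisy = add_kinect_noise(ideal)
```

Rendering the ideal map with a graphics pipeline and then applying such a perturbation is what makes large synthetic training sets cheap to produce.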

  12. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing

    Directory of Open Access Journals (Sweden)

    Fan Zhang

    2016-04-01

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU based methods. In the classical GPU based imaging algorithm, the GPU is employed to accelerate image processing by massively parallel computing, while the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multiple CPU/GPU computing. In the CPU parallel imaging part, the advanced vector extension (AVX) method is first introduced into the multi-core CPU parallel method for higher efficiency. As for GPU parallel imaging, not only are the bottlenecks of memory limitation and frequent data transfers broken, but several optimization strategies, such as streaming and parallel pipelining, are also applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method enhances the efficiency of SAR imaging on a single-core CPU by 270 times and realizes real-time imaging, in that the imaging rate outperforms the raw data generation rate.

  13. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing.

    Science.gov (United States)

    Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin

    2016-04-07

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU based methods. In the classical GPU based imaging algorithm, the GPU is employed to accelerate image processing by massively parallel computing, while the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multiple CPU/GPU computing. In the CPU parallel imaging part, the advanced vector extension (AVX) method is first introduced into the multi-core CPU parallel method for higher efficiency. As for GPU parallel imaging, not only are the bottlenecks of memory limitation and frequent data transfers broken, but several optimization strategies, such as streaming and parallel pipelining, are also applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method enhances the efficiency of SAR imaging on a single-core CPU by 270 times and realizes real-time imaging, in that the imaging rate outperforms the raw data generation rate.
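The task-partitioning idea, splitting range lines among devices in proportion to their measured throughput before processing the blocks concurrently, can be sketched as follows; the device names and throughput figures are invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def partition(n_lines, throughputs):
    """Split n_lines range lines among devices proportionally to their
    measured throughput (lines/s), the idea behind collaborative
    CPU/GPU task partitioning.  Any remainder goes to the fastest
    device."""
    total = sum(throughputs.values())
    shares, assigned = {}, 0
    for dev, rate in sorted(throughputs.items()):
        shares[dev] = int(n_lines * rate / total)
        assigned += shares[dev]
    fastest = max(throughputs, key=throughputs.get)
    shares[fastest] += n_lines - assigned
    return shares

def process_block(args):
    dev, n = args
    # stand-in for range/azimuth compression of n lines on that device
    return dev, n

shares = partition(4096, {"gpu0": 900.0, "cpu": 100.0})
with ThreadPoolExecutor() as ex:
    results = dict(ex.map(process_block, shares.items()))
print(shares)  # {'cpu': 409, 'gpu0': 3687}
```

With real workloads, the per-device rates would be measured on a calibration block first, so the partition adapts to the actual hardware mix.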

  14. Calculating computer-generated optical elements to produce arbitrary intensity distributions

    International Nuclear Information System (INIS)

    Findlay, S.; Nugent, K.A.; Scholten, R.E.

    2000-01-01

    Full text: We describe a preliminary investigation into using a computer to generate optical elements (CGOEs) with phase-only variation that will produce an arbitrary intensity distribution in a given image plane. An iterative calculation cycles between the CGOE and the image plane and modifies each according to the appropriate constraints. We extend this to the calculation of defined intensity distributions in two separated planes by modifying both phase and intensity at the CGOE.
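The iterative cycle between the CGOE plane and the image plane is essentially a Gerchberg-Saxton loop; a sketch under that assumption, for the single-plane, phase-only case:

```python
import numpy as np

def design_phase_element(target_intensity, n_iter=200, seed=0):
    """Gerchberg-Saxton-style design of a phase-only element whose far
    field approximates target_intensity: cycle between the CGOE plane
    and the image plane, enforcing the constraint in each."""
    rng = np.random.default_rng(seed)
    target_amp = np.sqrt(target_intensity)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    for _ in range(n_iter):
        field = np.exp(1j * phase)                     # CGOE: unit amplitude
        img = np.fft.fft2(field)                       # propagate to image plane
        img = target_amp * np.exp(1j * np.angle(img))  # impose target amplitude
        phase = np.angle(np.fft.ifft2(img))            # propagate back, keep phase
    return phase

target = np.zeros((32, 32))
target[12:20, 12:20] = 1.0     # desired flat-top square
phase = design_phase_element(target)
achieved = np.abs(np.fft.fft2(np.exp(1j * phase))) ** 2
```

After a few hundred iterations most of the diffracted energy sits inside the target square; the two-plane extension mentioned in the abstract amounts to adding a further propagation step and constraint per cycle.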

  15. Teaching French Transformational Grammar by Means of Computer-Generated Video-Tapes.

    Science.gov (United States)

    Adler, Alfred; Thomas, Jean Jacques

    This paper describes a pilot program in an integrated media presentation of foreign languages and the production and usage of seven computer-generated video tapes which demonstrate various aspects of French syntax. This instructional set could form the basis for CAI lessons in which the student is presented images identical to those on the video…

  16. Computational Phase Imaging for Biomedical Applications

    Science.gov (United States)

    Nguyen, Tan Huu

    When a sample is illuminated by an imaging field, its fingerprints are left on the amplitude and the phase of the emerging wave. Capturing the information of the wavefront grants us a deeper understanding of the optical properties of the sample, and of the light-matter interaction. While the amplitude information has been intensively studied, the use of the phase information has been less common. Because all detectors are sensitive to intensity, not phase, wavefront measurements are significantly more challenging. Deploying optical interferometry to measure phase through phase-intensity conversion, quantitative phase imaging (QPI) has recently gained tremendous success in material and life sciences. The first topic of this dissertation describes our effort to develop a new QPI setup, named transmission Spatial Light Interference Microscopy (tSLIM), that uses the twisted nematic liquid-crystal (TNLC) modulators. Compared to the established SLIM technique, tSLIM is much less expensive to build than its predecessor (SLIM) while maintaining significant performance. The tSLIM system uses parallel aligned liquid-crystal (PANLC) modulators, has a slightly smaller signal-to-noise Ratio (SNR), and a more complicated model for the image formation. However, such complexity is well addressed by computing. Most importantly, tSLIM uses TNLC modulators that are popular in display LCDs. Therefore, the total cost of the system is significantly reduced. Alongside developing new imaging modalities, we also improved current QPI imaging systems. In practice, an incident field to the sample is rarely perfectly spatially coherent, i.e., plane wave. It is generally partially coherent; i.e., it comprises of many incoherent plane waves coming from multiple directions. This illumination yields artifacts in the phase measurement results, e.g., halo and phase-underestimation. One solution is using a very bright source, e.g., a laser, which can be spatially filtered very well. However, the

  17. Computed tomography of x-ray images using neural networks

    Science.gov (United States)

    Allred, Lloyd G.; Jones, Martin H.; Sheats, Matthew J.; Davis, Anthony W.

    2000-03-01

    Traditional CT reconstruction is done using the technique of Filtered Backprojection (FB). While this technique is widely employed in industrial and medical applications, it is not generally understood that FB has a fundamental flaw: the Gibbs phenomenon states that any Fourier reconstruction will produce errors in the vicinity of all discontinuities, and that the error will equal 28 percent of the discontinuity. A number of years back, one of the authors proposed a biological perception model whereby biological neural networks perceive 3D images from stereo vision. The perception model posits an internal hard-wired neural network which emulates the external physical process. A process is repeated whereby erroneous unknown internal values are used to generate an emulated signal, which is compared to externally sensed data, generating an error signal. Feedback from the error signal is then used to update the erroneous internal values. The process is repeated until the error signal no longer decreases. It was soon realized that the same method could be used to obtain CT from x-rays without having to do Fourier transforms. Neural networks have the additional potential for handling non-linearities and missing data. The technique has been applied to some coral images collected at the Los Alamos high-energy x-ray facility. The initial images show considerable promise, in some instances showing more detail than the FB images obtained from the same data. Although routine production using this new method would require a massively parallel computer, the method shows promise, especially where refined detail is required.
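The feedback loop described above, emulating the measurement from the current internal values, comparing with the sensed data, and feeding the error back, can be sketched as a Landweber/SIRT-style iteration; this is an interpretation of the abstract, not the authors' network:

```python
import numpy as np

def feedback_reconstruct(A, measured, n_iter=2000, lr=0.1):
    """Iteratively update unknown internal values x: emulate the
    signal A @ x, compare with the sensed data to form an error
    signal, and feed the error back until it stops decreasing."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        error = measured - A @ x   # emulated signal vs. sensed data
        x += lr * A.T @ error      # feedback updates the internal values
    return x

# Toy "object" observed through a small invertible projection operator.
true_x = np.array([1.0, 0.0, 2.0, 0.5])
A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]], dtype=float)
recon = feedback_reconstruct(A, A @ true_x)
```

Because no Fourier transform is involved, the same loop works with incomplete or non-linear measurement operators, which matches the flexibility claimed for the neural approach.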

  18. High-efficiency photorealistic computer-generated holograms based on the backward ray-tracing technique

    Science.gov (United States)

    Wang, Yuan; Chen, Zhidong; Sang, Xinzhu; Li, Hui; Zhao, Linmin

    2018-03-01

    Holographic displays can provide the complete optical wave field of a three-dimensional (3D) scene, including the depth perception. However, it often takes a long computation time to produce traditional computer-generated holograms (CGHs), even without complex and photorealistic rendering. The backward ray-tracing technique is able to render photorealistic high-quality images, and its high degree of parallelism noticeably reduces the computation time. Here, a high-efficiency photorealistic computer-generated hologram method is presented based on the ray-tracing technique. Rays are launched and traced in parallel under different illuminations and circumstances. Experimental results demonstrate the effectiveness of the proposed method. Compared with the traditional point-cloud CGH, the computation time is decreased to 24 s to reconstruct a 3D object of 100 × 100 rays with continuous depth change.

  19. Kimura's disease: imaging patterns on computed tomography

    International Nuclear Information System (INIS)

    Gopinathan, Anil; Tan, T.Y.

    2009-01-01

    Aim: To define the role of computed tomography (CT) in identifying and classifying the imaging patterns of diagnostic value in Kimura's disease of the head and neck. Methods: A retrospective study was undertaken comprising 13 patients with histopathological evidence of Kimura's disease. The patients' clinical and pathological records were reviewed against a detailed analysis of their CT images performed from the base of the skull to the arch of the aorta. Results: Both well-defined, nodular masses, as well as ill-defined plaque-like infiltrative masses were seen in the subcutaneous tissue of the head and neck region. All patients had lesions adjacent to the major salivary glands. The parotid gland was affected in 10 of the 13 cases and the submandibular gland was affected in the rest. Contrast enhancement was variable. More than half of the cases had associated lymphadenopathy. Some of them showed atrophy of the skin and subcutaneous fat overlying the subcutaneous masses. Blood eosinophilia was a consistent feature in all the cases. Conclusion: The patterns of distribution, morphology, and enhancement of the lesions in Kimura's disease that can be demonstrated at CT, enables a confident, non-invasive diagnosis of this condition, in an appropriate clinical context.

  20. Edge detection based on computational ghost imaging with structured illuminations

    Science.gov (United States)

    Yuan, Sheng; Xiang, Dong; Liu, Xuemei; Zhou, Xin; Bing, Pibin

    2018-03-01

    Edge detection is one of the most important tools to recognize the features of an object. In this paper, we propose an optical edge detection method based on computational ghost imaging (CGI) with structured illuminations which are generated by an interference system. The structured intensity patterns are designed to make the edge of an object be directly imaged from detected data in CGI. This edge detection method can extract the boundaries for both binary and grayscale objects in any direction at one time. We also numerically test the influence of distance deviations in the interference system on edge extraction, i.e., the tolerance of the optical edge detection system to distance deviation. Hopefully, it may provide a guideline for scholars to build an experimental system.
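The idea of making the edge appear directly from the detected data can be sketched with pattern pairs: each random pattern is reused shifted by one pixel, and correlating the difference of the two bucket signals with the patterns yields a horizontal edge map. This is an illustrative sketch only; the paper generates its structured patterns with an interference system rather than by shifting:

```python
import numpy as np

def cgi_edge(obj, n_patterns=4000, seed=1):
    """Computational ghost imaging edge detection: for each random
    pattern, record bucket values for the pattern and a one-pixel
    shifted copy; correlating their difference with the patterns
    recovers the horizontal gradient of the object directly."""
    rng = np.random.default_rng(seed)
    edge = np.zeros_like(obj, dtype=float)
    for _ in range(n_patterns):
        p = rng.integers(0, 2, obj.shape).astype(float)
        p_shift = np.roll(p, 1, axis=1)
        b = (p * obj).sum()              # bucket detector, pattern
        b_shift = (p_shift * obj).sum()  # bucket detector, shifted pattern
        edge += (b - b_shift) * p
    return edge / n_patterns

obj = np.zeros((16, 16))
obj[4:12, 4:12] = 1.0
e = cgi_edge(obj)
```

The expectation of `e[i, j]` is proportional to the horizontal difference `obj[i, j] - obj[i, j+1]`, so the map is large (with opposite signs) at the left and right boundaries of the square and near zero inside it.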

  1. New challenges in grid generation and adaptivity for scientific computing

    CERN Document Server

    Formaggia, Luca

    2015-01-01

    This volume collects selected contributions from the “Fourth Tetrahedron Workshop on Grid Generation for Numerical Computations”, which was held in Verbania, Italy in July 2013. The previous editions of this Workshop were hosted by the Weierstrass Institute in Berlin (2005), by INRIA Rocquencourt in Paris (2007), and by Swansea University (2010). This book covers different, though related, aspects of the field: the generation of quality grids for complex three-dimensional geometries; parallel mesh generation algorithms; mesh adaptation, including both theoretical and implementation aspects; grid generation and adaptation on surfaces – all with an interesting mix of numerical analysis, computer science and strongly application-oriented problems.

  2. Developments in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2015-01-01

    This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image, which took place in Funchal, Madeira, Portugal, 14-16 October 2013.  The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimat...

  3. Pseudo-random number generator for the Sigma 5 computer

    Science.gov (United States)

    Carroll, S. N.

    1983-01-01

    A technique is presented for developing a pseudo-random number generator based on the linear congruential form. The two numbers used for the generator are a prime number and a corresponding primitive root, where the prime is the largest prime number that can be accurately represented on a particular computer. The primitive root is selected by applying Marsaglia's lattice test. The technique presented was applied to write a random number program for the Sigma 5 computer. The new program, named S:RANDOM1, is judged to be superior to the older program named S:RANDOM. For applications requiring several independent random number generators, a table is included showing several acceptable primitive roots. The technique and programs described can be applied to any computer having word length different from that of the Sigma 5.
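The described generator, with the largest representable prime as modulus and a primitive root as multiplier, is exemplified below using the classic "minimal standard" constants; the Sigma 5's own prime and root are not given in the abstract:

```python
def lehmer(seed, m=2**31 - 1, a=16807):
    """Linear congruential generator x -> a*x mod m, where m is the
    largest 31-bit prime and a = 7**5 is a primitive root modulo m
    (Park and Miller's "minimal standard" constants; the Sigma 5
    program would use the largest prime for that machine's word)."""
    x = seed
    while True:
        x = (a * x) % m
        yield x

# Park and Miller's well-known self-check: starting from seed 1,
# the 10,000th value of this generator is 1043618065.
g = lehmer(1)
for _ in range(9999):
    next(g)
v = next(g)
print(v)  # 1043618065
```

Choosing a different primitive root of the same prime, as the abstract's table of acceptable roots suggests, yields an independent full-period sequence with the same modulus.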

  4. Parallel grid generation algorithm for distributed memory computers

    Science.gov (United States)

    Moitra, Stuti; Moitra, Anutosh

    1994-01-01

    A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and implementation of multiple levels of parallelism on multiple instruction multiple data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.
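The algebraic (homotopic) formulation can be sketched as a blend between two boundary curves; because each grid line depends only on the boundary data, lines can be distributed across processors with minimal communication, which is the parallelism the algorithm exploits. A simplified sketch, not the NASA implementation:

```python
def algebraic_grid(bottom, top, n_eta):
    """Generate interior grid points by algebraic (homotopic) blending
    of two boundary curves.  `bottom` and `top` are equal-length lists
    of (x, y) points; each of the n_eta grid lines is computed
    independently, so lines parallelise trivially."""
    grid = []
    for j in range(n_eta):
        s = j / (n_eta - 1)          # homotopy parameter in [0, 1]
        row = [((1 - s) * xb + s * xt, (1 - s) * yb + s * yt)
               for (xb, yb), (xt, yt) in zip(bottom, top)]
        grid.append(row)
    return grid

inner = [(x / 4, 0.0) for x in range(5)]            # flat lower boundary
outer = [(x / 4, 1.0 + 0.1 * x) for x in range(5)]  # sloped upper boundary
mesh = algebraic_grid(inner, outer, n_eta=3)
```

Production schemes add blending functions for boundary orthogonality and spacing control, but the independence of the grid lines, and hence the minimal interprocessor communication, is already visible here.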

  5. Developing the next generation of diverse computer scientists: the need for enhanced, intersectional computing identity theory

    Science.gov (United States)

    Rodriguez, Sarah L.; Lehman, Kathleen

    2017-10-01

    This theoretical paper explores the need for enhanced, intersectional computing identity theory for the purpose of developing a diverse group of computer scientists for the future. Greater theoretical understanding of the identity formation process specifically for computing is needed in order to understand how students come to understand themselves as computer scientists. To ensure that the next generation of computer scientists is diverse, this paper presents a case for examining identity development intersectionally, understanding the ways in which women and underrepresented students may have difficulty identifying as computer scientists and be systematically oppressed in their pursuit of computer science careers. Through a review of the available scholarship, this paper suggests that creating greater theoretical understanding of the computing identity development process will inform the way in which educational stakeholders consider computer science practices and policies.

  6. Imaging workstations for computer-aided primatology: promises and pitfalls.

    Science.gov (United States)

    Vannier, M W; Conroy, G C

    1989-01-01

    In this paper, the application of biomedical imaging workstations to primatology will be explained and evaluated. The technological basis, computer hardware and software aspects, and the various uses of several types of workstations will all be discussed. The types of workstations include: (1) Simple - these display-only workstations, which function as electronic light boxes, have applications as terminals to picture archiving and communication (PAC) systems. (2) Diagnostic reporting - image-processing workstations that include the ability to perform straightforward manipulations of gray scale and raw data values will be considered for operations such as histogram equalization (whether adaptive or global), gradient edge finders, contour generation, and region of interest, as well as other related functions. (3) Manipulation systems - three-dimensional modeling and computer graphics with application to radiation therapy treatment planning, and surgical planning and evaluation will be considered. A technology of prime importance in the function of these workstations lies in communications and networking. The hierarchical organization of an electronic computer network and workstation environment with the interrelationship of simple, diagnostic reporting, and manipulation workstations to a coaxial or fiber optic network will be analyzed.
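One of the gray-scale operations listed for the diagnostic-reporting workstations, global histogram equalization, can be sketched as follows (a plain textbook formulation, not any particular workstation's implementation):

```python
def equalize(image, levels=256):
    """Global histogram equalization of an image given as a list of
    rows of integer gray values in [0, levels): build the cumulative
    distribution of gray values and use it as a look-up table that
    spreads the occupied levels over the full range."""
    flat = [v for row in image for v in row]
    hist = [0] * levels
    for v in flat:
        hist[v] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(flat)
    if n == cdf_min:          # constant image: nothing to equalize
        return [row[:] for row in image]
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1)) for c in cdf]
    return [[lut[v] for v in row] for row in image]

dark = [[10, 10, 12], [12, 14, 14]]
print(equalize(dark))  # [[0, 0, 128], [128, 255, 255]]
```

The adaptive variant mentioned in the paper applies the same mapping per local window instead of globally.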

  7. Medical Imaging Informatics: Towards a Personalized Computational Patient.

    Science.gov (United States)

    Ayache, N

    2016-05-20

    Medical Imaging Informatics has become a fast evolving discipline at the crossing of Informatics, Computational Sciences, and Medicine that is profoundly changing medical practices, for the patients' benefit.

  8. Automated Generation of User Guidance by Combining Computation and Deduction

    Directory of Open Access Journals (Sweden)

    Walther Neuper

    2012-02-01

    Herewith, a fairly old concept is published for the first time and named "Lucas Interpretation". This has been implemented in a prototype, which has proved useful in educational practice and has gained academic relevance with an emerging generation of educational mathematics assistants (EMAs) based on Computer Theorem Proving (CTP). Automated Theorem Proving (ATP), i.e. deduction, is the most reliable technology used to check user input. However, ATP is inherently weak in automatically generating solutions for arbitrary problems in applied mathematics. This weakness is crucial for EMAs: when ATP checks user input as incorrect and the learner gets stuck, the system should be able to suggest possible next steps. The key idea of Lucas Interpretation is to compute the steps of a calculation following a program written in a novel CTP-based programming language, i.e. computation provides the next steps. User guidance is generated by combining deduction and computation: the latter is performed by a specific language interpreter, which works like a debugger and hands over control to the learner at breakpoints, i.e. tactics generating the steps of calculation. The interpreter also builds up logical contexts providing ATP with the data required for checking user input, thus combining computation and deduction. The paper describes the concepts underlying Lucas Interpretation so that open questions can adequately be addressed, and prerequisites for further work are provided.

  9. Legal issues of computer imaging in plastic surgery: a primer.

    Science.gov (United States)

    Chávez, A E; Dagum, P; Koch, R J; Newman, J P

    1997-11-01

    Although plastic surgeons are increasingly incorporating computer imaging techniques into their practices, many fear the possibility of legally binding themselves to achieve surgical results identical to those reflected in computer images. Computer imaging allows surgeons to manipulate digital photographs of patients to project possible surgical outcomes. Some of the many benefits imaging techniques pose include improving doctor-patient communication, facilitating the education and training of residents, and reducing administrative and storage costs. Despite the many advantages computer imaging systems offer, however, surgeons understandably worry that imaging systems expose them to immense legal liability. The possible exploitation of computer imaging by novice surgeons as a marketing tool, coupled with the lack of consensus regarding the treatment of computer images, adds to the concern of surgeons. A careful analysis of the law, however, reveals that surgeons who use computer imaging carefully and conservatively, and adopt a few simple precautions, substantially reduce their vulnerability to legal claims. In particular, surgeons face possible claims of implied contract, failure to instruct, and malpractice from their use or failure to use computer imaging. Nevertheless, legal and practical obstacles frustrate each of those causes of actions. Moreover, surgeons who incorporate a few simple safeguards into their practice may further reduce their legal susceptibility.

  10. Installation of new Generation General Purpose Computer (GPC) compact unit

    Science.gov (United States)

    1991-01-01

    In the Kennedy Space Center's (KSC's) Orbiter Processing Facility (OPF) high bay 2, Spacecraft Electronics technician Ed Carter (right), wearing clean suit, prepares for (26864) and installs (26865) the new Generation General Purpose Computer (GPC) compact IBM unit in Atlantis', Orbiter Vehicle (OV) 104's, middeck avionics bay as Orbiter Systems Quality Control technician Doug Snider looks on. Both men work for NASA contractor Lockheed Space Operations Company. All three orbiters are being outfitted with the compact IBM unit, which replaces a two-unit earlier generation computer.

  11. Synthetic SAR Image Generation using Sensor, Terrain and Target Models

    DEFF Research Database (Denmark)

    Kusk, Anders; Abulaitijiang, Adili; Dall, Jørgen

    2016-01-01

    A tool to generate synthetic SAR images of objects set on a clutter background is described. The purpose is to generate images for training Automatic Target Recognition and Identification algorithms. The tool employs a commercial electromagnetic simulation program to calculate radar cross section...

  12. Challenge for knowledge information processing systems (preliminary report on Fifth Generation Computer Systems)

    Energy Technology Data Exchange (ETDEWEB)

    Moto-oka, T

    1982-01-01

    The author explains the reasons, aims and strategies for the Fifth Generation Computer Project in Japan. The project aims to introduce a radical new breed of computer by 1990. This article outlines the economic and social reasons for the project. It describes the impacts and effects that these computers are expected to have. The areas of technology which will form the contents of the research and development are highlighted. These are areas such as VLSI technology, speech and image understanding systems, artificial intelligence and advanced architecture design. Finally a schedule for completion of research is given which aims for a completed project by 1990.

  13. 16th International Conference on Medical Image Computing and Computer Assisted Intervention

    CERN Document Server

    Klinder, Tobias; Li, Shuo

    2014-01-01

    This book contains the full papers presented at the MICCAI 2013 workshop Computational Methods and Clinical Applications for Spine Imaging. The workshop brought together researchers representing several fields, such as Biomechanics, Engineering, Medicine, Mathematics, Physics and Statistic. The works included in this book present and discuss new trends in those fields, using several methods and techniques in order to address more efficiently different and timely applications involving signal and image acquisition, image processing and analysis, image segmentation, image registration and fusion, computer simulation, image based modelling, simulation and surgical planning, image guided robot assisted surgical and image based diagnosis.

  14. Micro-computer control for super-critical He generation

    International Nuclear Information System (INIS)

    Tamada, Noriharu; Sekine, Takehiro; Tomiyama, Sakutaro

    1979-01-01

    The development of a large scale refrigeration system is being stimulated by new superconducting techniques, represented by superconducting power cables and magnets. For the practical operation of such a large system, an automatic control system with a computer is required, because it can attain effective and systematic operation. For this reason, we examined and developed micro-computer control techniques for supercritical He generation, as a simplified control model of the refrigeration system. The experimental results showed that the computer control system can attain fine controllability even if the control element is only one magnetic valve, but the BASIC programming language of the micro-computer, which is convenient and generally used, is not sufficient to control a more complicated system because of its low calculating speed. We therefore conclude that a more effective programming language for micro-computers must be developed to realize practical refrigeration control. (author)

  15. Generating and analyzing synthetic finger vein images

    OpenAIRE

    Hillerström, Fieke; Kumar, Ajay; Veldhuis, Raymond N.J.

    2014-01-01

    Abstract: The finger-vein biometric offers higher degree of security, personal privacy and strong anti-spoofing capabilities than most other biometric modalities employed today. Emerging privacy concerns with the database acquisition and lack of availability of large scale finger-vein database have posed challenges in exploring this technology for large scale applications. This paper details the first such attempt to synthesize finger-vein images and presents analysis of synthesized images fo...

  16. Third-generation imaging sensor system concepts

    Science.gov (United States)

    Reago, Donald A.; Horn, Stuart B.; Campbell, James, Jr.; Vollmerhausen, Richard H.

    1999-07-01

    Second generation forward looking infrared sensors, based on either parallel scanning, long wave (8 - 12 um) time delay and integration HgCdTe detectors or mid wave (3 - 5 um), medium format staring (640 × 480 pixels) InSb detectors, are being fielded. The science and technology community is now turning its attention toward the definition of a future third generation of FLIR sensors, based on emerging research and development efforts. Modeled third generation sensor performance demonstrates a significant improvement in performance over second generation, resulting in enhanced lethality and survivability on the future battlefield. In this paper we present the current thinking on what third generation sensor systems will be and the resulting requirements for third generation focal plane array detectors. Three classes of sensors have been identified. The high performance sensor will contain a megapixel or larger array with at least two colors. Higher operating temperatures will also be the goal here so that power and weight can be reduced. A high performance uncooled sensor is also envisioned that will perform somewhere between first and second generation cooled detectors, but at significantly lower cost, weight, and power. The final third generation sensor is a very low cost micro sensor. This sensor can open up a whole new IR market because of its small size, weight, and cost. Future unattended throwaway sensors, micro UAVs, and helmet mounted IR cameras will be the result of this new class.

  17. Imaging and computational considerations for image computed permeability: Operating envelope of Digital Rock Physics

    Science.gov (United States)

    Saxena, Nishank; Hows, Amie; Hofmann, Ronny; Alpak, Faruk O.; Freeman, Justin; Hunter, Sander; Appel, Matthias

    2018-06-01

    This study defines the optimal operating envelope of the Digital Rock technology from the perspective of imaging and numerical simulations of transport properties. Imaging larger volumes of rocks for Digital Rock Physics (DRP) analysis improves the chances of achieving a Representative Elementary Volume (REV) at which flow-based simulations (1) do not vary with change in rock volume, and (2) are insensitive to the choice of boundary conditions. However, this often comes at the expense of image resolution. This trade-off exists due to the finiteness of current state-of-the-art imaging detectors. Imaging and analyzing digital rocks that sample the REV and still sufficiently resolve pore throats is critical to ensure simulation quality and robustness of rock property trends for further analysis. We find that at least 10 voxels are needed to sufficiently resolve pore throats for single phase fluid flow simulations. If this condition is not met, additional analyses and corrections may allow for meaningful comparisons between simulation results and laboratory measurements of permeability, but some cases may fall outside the current technical feasibility of DRP. On the other hand, we find that the ratio of field of view and effective grain size provides a reliable measure of the REV for siliciclastic rocks. If this ratio is greater than 5, the coefficient of variation for single-phase permeability simulations drops below 15%. These imaging considerations are crucial when comparing digitally computed rock flow properties with those measured in the laboratory. We find that the current imaging methods are sufficient to achieve both REV (with respect to numerical boundary conditions) and required image resolution to perform digital core analysis for coarse to fine-grained sandstones.
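    The two operating-envelope criteria quoted in this abstract (at least 10 voxels across a pore throat, and a field of view more than 5 effective grain diameters) can be expressed as a simple feasibility screen. The function and argument names below are ours, not the paper's:

```python
# A minimal screening of the two operating-envelope criteria reported in
# the abstract: pore throats spanning at least 10 voxels, and a field of
# view more than 5x the effective grain size. Names are illustrative.

def drp_feasible(throat_um, grain_um, fov_um, voxel_um):
    """Return (ok, reasons) for a proposed Digital Rock imaging setup."""
    reasons = []
    if throat_um / voxel_um < 10:          # resolution criterion
        reasons.append("pore throats resolved by < 10 voxels")
    if fov_um / grain_um <= 5:             # REV criterion
        reasons.append("field of view <= 5 effective grain diameters")
    return (not reasons), reasons
```

    For example, a 20 um throat imaged at 1.5 um voxels within a 1000 um field of view of a 150 um grain-size rock passes both checks, while the same rock imaged at 3 um voxels fails the resolution criterion.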

  18. Medical image computing and computer-assisted intervention - MICCAI 2005. Proceedings; Pt. 1

    International Nuclear Information System (INIS)

    Duncan, J.S.; Gerig, G.

    2005-01-01

    The two-volume set LNCS 3749 and LNCS 3750 constitutes the refereed proceedings of the 8th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2005, held in Palm Springs, CA, USA, in October 2005. Based on rigorous peer reviews the program committee selected 237 carefully revised full papers from 632 submissions for presentation in two volumes. The first volume includes all the contributions related to image analysis and validation, vascular image segmentation, image registration, diffusion tensor image analysis, image segmentation and analysis, clinical applications - validation, imaging systems - visualization, computer assisted diagnosis, cellular and molecular image analysis, physically-based modeling, robotics and intervention, medical image computing for clinical applications, and biological imaging - simulation and modeling. The second volume collects the papers related to robotics, image-guided surgery and interventions, image registration, medical image computing, structural and functional brain analysis, model-based image analysis, image-guided intervention: simulation, modeling and display, and image segmentation and analysis. (orig.)

  19. Medical image computing and computer-assisted intervention - MICCAI 2005. Proceedings; Pt. 2

    International Nuclear Information System (INIS)

    Duncan, J.S.; Yale Univ., New Haven, CT; Gerig, G.

    2005-01-01

    The two-volume set LNCS 3749 and LNCS 3750 constitutes the refereed proceedings of the 8th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2005, held in Palm Springs, CA, USA, in October 2005. Based on rigorous peer reviews the program committee selected 237 carefully revised full papers from 632 submissions for presentation in two volumes. The first volume includes all the contributions related to image analysis and validation, vascular image segmentation, image registration, diffusion tensor image analysis, image segmentation and analysis, clinical applications - validation, imaging systems - visualization, computer assisted diagnosis, cellular and molecular image analysis, physically-based modeling, robotics and intervention, medical image computing for clinical applications, and biological imaging - simulation and modeling. The second volume collects the papers related to robotics, image-guided surgery and interventions, image registration, medical image computing, structural and functional brain analysis, model-based image analysis, image-guided intervention: simulation, modeling and display, and image segmentation and analysis. (orig.)

  20. Medical image computing and computer-assisted intervention - MICCAI 2005. Proceedings; Pt. 1

    Energy Technology Data Exchange (ETDEWEB)

    Duncan, J.S. [Yale Univ., New Haven, CT (United States). Dept. of Biomedical Engineering and Diagnostic Radiology; Gerig, G. (eds.) [North Carolina Univ., Chapel Hill (United States). Dept. of Computer Science

    2005-07-01

    The two-volume set LNCS 3749 and LNCS 3750 constitutes the refereed proceedings of the 8th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2005, held in Palm Springs, CA, USA, in October 2005. Based on rigorous peer reviews the program committee selected 237 carefully revised full papers from 632 submissions for presentation in two volumes. The first volume includes all the contributions related to image analysis and validation, vascular image segmentation, image registration, diffusion tensor image analysis, image segmentation and analysis, clinical applications - validation, imaging systems - visualization, computer assisted diagnosis, cellular and molecular image analysis, physically-based modeling, robotics and intervention, medical image computing for clinical applications, and biological imaging - simulation and modeling. The second volume collects the papers related to robotics, image-guided surgery and interventions, image registration, medical image computing, structural and functional brain analysis, model-based image analysis, image-guided intervention: simulation, modeling and display, and image segmentation and analysis. (orig.)

  1. Medical image computing and computer-assisted intervention - MICCAI 2005. Proceedings; Pt. 2

    Energy Technology Data Exchange (ETDEWEB)

    Duncan, J.S. [Yale Univ., New Haven, CT (United States). Dept. of Biomedical Engineering]|[Yale Univ., New Haven, CT (United States). Dept. of Diagnostic Radiology; Gerig, G. (eds.) [North Carolina Univ., Chapel Hill, NC (United States). Dept. of Computer Science

    2005-07-01

    The two-volume set LNCS 3749 and LNCS 3750 constitutes the refereed proceedings of the 8th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2005, held in Palm Springs, CA, USA, in October 2005. Based on rigorous peer reviews the program committee selected 237 carefully revised full papers from 632 submissions for presentation in two volumes. The first volume includes all the contributions related to image analysis and validation, vascular image segmentation, image registration, diffusion tensor image analysis, image segmentation and analysis, clinical applications - validation, imaging systems - visualization, computer assisted diagnosis, cellular and molecular image analysis, physically-based modeling, robotics and intervention, medical image computing for clinical applications, and biological imaging - simulation and modeling. The second volume collects the papers related to robotics, image-guided surgery and interventions, image registration, medical image computing, structural and functional brain analysis, model-based image analysis, image-guided intervention: simulation, modeling and display, and image segmentation and analysis. (orig.)

  2. An Empirical Generative Framework for Computational Modeling of Language Acquisition

    Science.gov (United States)

    Waterfall, Heidi R.; Sandbank, Ben; Onnis, Luca; Edelman, Shimon

    2010-01-01

    This paper reports progress in developing a computer model of language acquisition in the form of (1) a generative grammar that is (2) algorithmically learnable from realistic corpus data, (3) viable in its large-scale quantitative performance and (4) psychologically real. First, we describe new algorithmic methods for unsupervised learning of…

  3. Short generators without quantum computers : the case of multiquadratics

    NARCIS (Netherlands)

    Bauch, J.; Bernstein, D.J.; de Valence, H.; Lange, T.; van Vredendaal, C.; Coron, J.-S.; Nielsen, J.B.

    2017-01-01

    Finding a short element g of a number field, given the ideal generated by g, is a classic problem in computational algebraic number theory. Solving this problem recovers the private key in cryptosystems introduced by Gentry, Smart–Vercauteren, Gentry–Halevi, Garg–Gentry–Halevi, et al. Work over the

  4. Challenges in scaling NLO generators to leadership computers

    Science.gov (United States)

    Benjamin, D.; Childers, JT; Hoeche, S.; LeCompte, T.; Uram, T.

    2017-10-01

    Exascale computing resources are roughly a decade away and will be capable of 100 times more computing than current supercomputers. In the last year, Energy Frontier experiments crossed a milestone of 100 million core-hours used at the Argonne Leadership Computing Facility, Oak Ridge Leadership Computing Facility, and NERSC. The Fortran-based leading-order parton generator called Alpgen was successfully scaled to millions of threads to achieve this level of usage on Mira. Sherpa and MadGraph are next-to-leading order generators used heavily by LHC experiments for simulation. Integration times for high-multiplicity or rare processes can take a week or more on standard Grid machines, even when using all 16 cores. We will describe our ongoing work to scale the Sherpa generator to thousands of threads on leadership-class machines and reduce run-times to less than a day. This work allows the experiments to leverage large-scale parallel supercomputers for event generation today, freeing tens of millions of grid hours for other work, and paving the way for future applications (simulation, reconstruction) on these and future supercomputers.

  5. Student Engagement with Computer-Generated Feedback: A Case Study

    Science.gov (United States)

    Zhang, Zhe

    2017-01-01

    In order to benefit from feedback on their writing, students need to engage effectively with it. This article reports a case study on student engagement with computer-generated feedback, known as automated writing evaluation (AWE) feedback, in an EFL context. Differing from previous studies that explored commercially available AWE programs, this…

  6. Use of Computer-Generated Holograms in Security Hologram Applications

    Directory of Open Access Journals (Sweden)

    Bulanovs A.

    2016-10-01

    The article discusses the use of computer-generated holograms (CGHs) as one of the security features in relief-phase protective holograms. An improved method of calculating CGHs is presented, based on a ray-tracing approach for the case of interference of parallel rays.
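    As a rough illustration of the ray-tracing idea (not the authors' improved method), the contribution of a single parallel object ray to a relief-phase CGH can be modeled as the binarized interference of a tilted plane wave with an on-axis reference wave. All parameter values below are arbitrary:

```python
import numpy as np

# Toy sketch: a hologram line records the interference of a parallel
# (plane-wave) object ray, tilted by theta, with an on-axis reference
# wave; the fringes are then binarized for a relief-phase master.
wavelength = 0.5e-6                  # 500 nm, illustrative
theta = np.deg2rad(10.0)             # object-ray tilt, illustrative
pitch = 0.1e-6                       # sampling pitch, 100 nm
n = 512
x = np.arange(n) * pitch

phase = 2 * np.pi * np.sin(theta) * x / wavelength  # relative phase of the rays
fringes = 0.5 * (1 + np.cos(phase))                 # interference intensity
mask = (fringes > 0.5).astype(np.uint8)             # binary relief pattern
```

    The resulting fringe period is the classic d = wavelength / sin(theta), about 2.9 um here, so the binary mask alternates roughly every 14 samples.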

  7. Inkjet printing of transparent sol-gel computer generated holograms

    NARCIS (Netherlands)

    Yakovlev, A.; Pidko, E.A.; Vinogradov, A.

    2016-01-01

    In this paper we report for the first time a method for the production of transparent computer generated holograms by desktop inkjet printing. Here we demonstrate a methodology suitable for the development of a practical approach towards fabrication of diffraction patterns using a desktop inkjet

  8. Computer Generated Optical Illusions: A Teaching and Research Tool.

    Science.gov (United States)

    Bailey, Bruce; Harman, Wade

    Interactive computer-generated simulations that highlight psychological principles were investigated in this study in which 33 female and 19 male undergraduate college student volunteers of median age 21 matched line and circle sizes in six variations of Ponzo's illusion. Prior to working with the illusions, data were collected based on subjects'…

  9. Student generated assignments about electrical circuits in a computer simulation

    NARCIS (Netherlands)

    Vreman-de Olde, Cornelise; de Jong, Anthonius J.M.

    2004-01-01

    In this study we investigated the design of assignments by students as a knowledge-generating activity. Students were required to design assignments for 'other students' in a computer simulation environment about electrical circuits. Assignments consisted of a question, alternatives, and feedback on

  10. 3D computer generated medical holograms using spatial light modulators

    Directory of Open Access Journals (Sweden)

    Ahmed Sheet

    2014-09-01

    The aim of this work is to electronically generate the diffraction patterns of medical images and then optically reconstruct the corresponding holograms for display in space. This method is proposed as an attempt to find a practical alternative to the expensive and perishable recording plates.

  11. Design and applications of Computed Industrial Tomographic Imaging System (CITIS)

    Energy Technology Data Exchange (ETDEWEB)

    Ramakrishna, G S; Kumar, Umesh; Datta, S S [Bhabha Atomic Research Centre, Bombay (India). Isotope Div.

    1994-12-31

    This paper highlights the design and development of a prototype Computed Tomographic (CT) imaging system and its software for image reconstruction, simulation and display. It also describes results obtained with several test specimens including Dhruva reactor uranium fuel assembly and possibility of using neutrons as well as high energy x-rays in computed tomography. 5 refs., 4 figs.

  12. Image analysis and modeling in medical image computing. Recent developments and advances.

    Science.gov (United States)

    Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T

    2012-01-01

    Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice e.g. to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the grade of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements in clinical routine. In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models in the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications and medical images like radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body

  13. Imaging in hematology. Part 2: Computed tomography, magnetic resonance imaging and nuclear imaging

    International Nuclear Information System (INIS)

    Zhechev, Y.

    2003-01-01

    A dramatic increase of the role of imaging in diagnosis of blood diseases occurred with the development of computed tomography (CT) and magnetic resonance imaging (MRI). At present CT of the chest, abdomen, and pelvis is routinely employed in diagnostic and staging evaluation. The bone marrow may be imaged by one of several methods, including scintigraphy, CT and MRI. Nuclear imaging at diagnosis can clarify findings of uncertain significance on conventional staging and may be very useful in the setting of large masses to follow responses to therapy and to evaluate the residual tumor in a large mass that has responded to treatment. Recent developments such as helical CT, single photon emission computed tomography (SPECT) and positron-emission tomography (PET) have continued to advance diagnosis and therapy.

  14. The Next Generation ARC Middleware and ATLAS Computing Model

    CERN Document Server

    Filipcic, A; The ATLAS collaboration; Smirnova, O; Konstantinov, A; Karpenko, D

    2012-01-01

    The distributed NDGF Tier-1 and associated Nordugrid clusters are well integrated into the ATLAS computing model but follow a slightly different paradigm than other ATLAS resources. The current strategy does not divide the sites as in the commonly used hierarchical model, but rather treats them as a single storage endpoint and a pool of distributed computing nodes. The next generation ARC middleware with its several new technologies provides new possibilities in development of the ATLAS computing model, such as pilot jobs with pre-cached input files, automatic job migration between the sites, integration of remote sites without connected storage elements, and automatic brokering for jobs with non-standard resource requirements. ARC's data transfer model provides an automatic way for the computing sites to participate in ATLAS' global task management system without requiring centralised brokering or data transfer services. The powerful API combined with Python and Java bindings can easily be used to build new ...

  15. Efficient computation of clipped Voronoi diagram for mesh generation

    KAUST Repository

    Yan, Dongming

    2013-04-01

    The Voronoi diagram is a fundamental geometric structure widely used in various fields, especially in computer graphics and geometry computing. For a set of points in a compact domain (i.e. a bounded and closed 2D region or a 3D volume), some Voronoi cells of their Voronoi diagram are infinite or partially outside of the domain, but in practice only the parts of the cells inside the domain are needed, as when computing the centroidal Voronoi tessellation. Such a Voronoi diagram confined to a compact domain is called a clipped Voronoi diagram. We present an efficient algorithm to compute the clipped Voronoi diagram for a set of sites with respect to a compact 2D region or a 3D volume. We also apply the proposed method to optimal mesh generation based on the centroidal Voronoi tessellation. Crown Copyright © 2011 Published by Elsevier Ltd. All rights reserved.
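    A minimal 2D sketch of the concept (not the paper's efficient algorithm): each clipped Voronoi cell can be obtained by cutting the domain polygon with the bisector half-plane toward every other site, here via Sutherland-Hodgman polygon clipping:

```python
# Each clipped Voronoi cell = the domain polygon intersected with the
# half-plane "closer to this site than to site t" for every other t:
# 2(t - s) . x <= |t|^2 - |s|^2, i.e. a*x + b*y <= c after dividing by 2.

def clip(poly, a, b, c):
    """Keep the part of polygon `poly` where a*x + b*y <= c."""
    out = []
    for i, p in enumerate(poly):
        q = poly[(i + 1) % len(poly)]
        pin = a * p[0] + b * p[1] <= c
        qin = a * q[0] + b * q[1] <= c
        if pin:
            out.append(p)
        if pin != qin:  # edge crosses the bisector: add intersection point
            t = (c - a * p[0] - b * p[1]) / (a * (q[0] - p[0]) + b * (q[1] - p[1]))
            out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    return out

def clipped_voronoi(sites, domain):
    cells = []
    for i, s in enumerate(sites):
        cell = domain
        for j, t in enumerate(sites):
            if i != j:
                a, b = t[0] - s[0], t[1] - s[1]
                c = (t[0] ** 2 + t[1] ** 2 - s[0] ** 2 - s[1] ** 2) / 2
                cell = clip(cell, a, b, c)
        cells.append(cell)
    return cells

def area(poly):
    """Polygon area via the shoelace formula."""
    return abs(sum(p[0] * q[1] - q[0] * p[1]
                   for p, q in zip(poly, poly[1:] + poly[:1]))) / 2

cells = clipped_voronoi([(0.2, 0.5), (0.8, 0.5)],
                        [(0, 0), (1, 0), (1, 1), (0, 1)])
```

    This brute-force version costs O(n) clips per cell, which is exactly the inefficiency the paper's algorithm avoids; it does, however, make the defining property visible: the clipped cells tile the domain, so their areas sum to the domain area.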

  16. Efficient computation of clipped Voronoi diagram for mesh generation

    KAUST Repository

    Yan, Dongming; Wang, Wen Ping; Lévy, Bruno L.; Liu, Yang

    2013-01-01

    The Voronoi diagram is a fundamental geometric structure widely used in various fields, especially in computer graphics and geometry computing. For a set of points in a compact domain (i.e. a bounded and closed 2D region or a 3D volume), some Voronoi cells of their Voronoi diagram are infinite or partially outside of the domain, but in practice only the parts of the cells inside the domain are needed, as when computing the centroidal Voronoi tessellation. Such a Voronoi diagram confined to a compact domain is called a clipped Voronoi diagram. We present an efficient algorithm to compute the clipped Voronoi diagram for a set of sites with respect to a compact 2D region or a 3D volume. We also apply the proposed method to optimal mesh generation based on the centroidal Voronoi tessellation. Crown Copyright © 2011 Published by Elsevier Ltd. All rights reserved.

  17. Computing exact bundle compliance control charts via probability generating functions.

    Science.gov (United States)

    Chen, Binchao; Matis, Timothy; Benneyan, James

    2016-06-01

    Compliance with evidence-based practices, individually and in 'bundles', remains an important focus of healthcare quality improvement for many clinical conditions. The exact probability distribution of composite bundle compliance measures used to develop corresponding control charts and other statistical tests is based on a fairly large convolution whose direct calculation can be computationally prohibitive. Various series expansions and other approximation approaches have been proposed, each with computational and accuracy tradeoffs, especially in the tails. This same probability distribution also arises in other important healthcare applications, such as for risk-adjusted outcomes and bed demand prediction, with the same computational difficulties. As an alternative, we use probability generating functions to rapidly obtain exact results and illustrate the improved accuracy and detection over other methods. Numerical testing across a wide range of applications demonstrates the computational efficiency and accuracy of this approach.
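    The PGF construction described in the abstract amounts to multiplying one linear polynomial per practice: if practice i is complied with independently with probability p_i, the generating function of the compliance count is the product of (1 - p_i + p_i*x), and its coefficients are the exact (Poisson-binomial) PMF. A minimal sketch, with made-up probabilities:

```python
# Exact PMF of the bundle-compliance count via probability generating
# functions: multiply out prod_i (1 - p_i + p_i * x) coefficient by
# coefficient, avoiding any series approximation.

def exact_compliance_pmf(probs):
    """Exact PMF of the number of compliant items (Poisson-binomial)."""
    pmf = [1.0]                            # PGF of the constant 0
    for p in probs:
        nxt = [0.0] * (len(pmf) + 1)
        for k, coeff in enumerate(pmf):
            nxt[k] += coeff * (1 - p)      # this item non-compliant
            nxt[k + 1] += coeff * p        # this item compliant
        pmf = nxt
    return pmf

pmf = exact_compliance_pmf([0.9, 0.8, 0.95])   # illustrative probabilities
# pmf[k] = P(exactly k of the 3 practices complied with)
```

    Exact control limits then follow from tail sums of this PMF rather than from an approximation, which is the accuracy advantage the abstract refers to.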

  18. Multimedia Image Technology and Computer Aided Manufacturing Engineering Analysis

    Science.gov (United States)

    Nan, Song

    2018-03-01

    Since the reform and opening up, with the continuous development of science and technology in China, increasingly advanced technologies have emerged under a trend of diversification. Multimedia image technology, for example, has had a significant and positive impact on computer-aided manufacturing engineering in China, both in the functions it provides and in the way those functions are applied. Starting from the concept of multimedia image technology, this paper therefore analyzes its application in computer-aided manufacturing engineering.

  19. The Computer Generated Art/Contemporary Cinematography And The Remainder Of The Art History. A Critical Approach

    Directory of Open Access Journals (Sweden)

    Modesta Lupașcu

    2016-11-01

    The paper analyses the re-conceptualization of the intermedial trope of computer-generated images/VFX in recent 3D works and cinema scenes through several examples from art history to which they are connected. The evident connections between art history and these images are not conceived primarily as the embodiment of a painting, the introduction of the real into the image, but rather demonstrate the reconstructive tendencies of contemporary post-postmodern art. The intellectual, casual, or obsessive interaction with art history shown by the new film culture is already celebrated through 3D computer-generated art focused on a consistently pictorialist cinematography.

  20. DISTRIBUTED GENERATION OF COMPUTER MUSIC IN THE INTERNET OF THINGS

    Directory of Open Access Journals (Sweden)

    G. G. Rogozinsky

    2015-07-01

    Problem Statement. The paper deals with a distributed intelligent multi-agent system for computer music generation. A mathematical model for extracting data from the environment and applying them in the music generation process is proposed. Methods. We use the Resource Description Framework for the representation of timbre data. The musical programming language Csound is used for the synthesis and sound-processing subsystem. Sound generation proceeds according to the parameters of the compositional model, using data obtained from the outside world. Results. We propose an architecture for a potential distributed system for computer music generation. An example of core sound synthesis is presented. We also propose a method for mapping real-world parameters onto the plane of the compositional model, in an attempt to imitate elements and aspects of creative inspiration. The music generation system was presented as an exhibit at the A.S. Popov Central Museum of Communication during the «Night of Museums» event. In the course of this public experiment it was observed that the system as a whole tends to settle quickly into a neutral state in which no musical events are generated. This demonstrates the need to design algorithms that keep the agent network in an active state. Practical Relevance. Realization of the proposed system would provide a technological platform for a whole new class of applications, including augmented acoustic reality and algorithmic composition.

  1. Tolerance analysis through computational imaging simulations

    Science.gov (United States)

    Birch, Gabriel C.; LaCasse, Charles F.; Stubbs, Jaclynn J.; Dagel, Amber L.; Bradley, Jon

    2017-11-01

    The modeling and simulation of non-traditional imaging systems require holistic consideration of the end-to-end system. We demonstrate this approach through a tolerance analysis of a random scattering lensless imaging system.

  2. A computational model to generate simulated three-dimensional breast masses

    Energy Technology Data Exchange (ETDEWEB)

    Sisternes, Luis de; Brankov, Jovan G.; Zysk, Adam M.; Wernick, Miles N., E-mail: wernick@iit.edu [Medical Imaging Research Center, Department of Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, Illinois 60616 (United States); Schmidt, Robert A. [Kurt Rossmann Laboratories for Radiologic Image Research, Department of Radiology, The University of Chicago, Chicago, Illinois 60637 (United States); Nishikawa, Robert M. [Department of Radiology, University of Pittsburgh, Pittsburgh, Pennsylvania 15213 (United States)

    2015-02-15

    Purpose: To develop algorithms for creating realistic three-dimensional (3D) simulated breast masses and embedding them within actual clinical mammograms. The proposed techniques yield high-resolution simulated breast masses having randomized shapes, with user-defined mass type, size, location, and shape characteristics. Methods: The authors describe a method of producing 3D digital simulations of breast masses and a technique for embedding these simulated masses within actual digitized mammograms. Simulated 3D breast masses were generated by using a modified stochastic Gaussian random sphere model to generate a central tumor mass, and an iterative fractal branching algorithm to add complex spicule structures. The simulated masses were embedded within actual digitized mammograms. The authors evaluated the realism of the resulting hybrid phantoms by generating corresponding left- and right-breast image pairs, consisting of one breast image containing a real mass, and the opposite breast image of the same patient containing a similar simulated mass. The authors then used computer-aided diagnosis (CAD) methods and expert radiologist readers to determine whether significant differences can be observed between the real and hybrid images. Results: The authors found no statistically significant difference between the CAD features obtained from the real and simulated images of masses with either spiculated or nonspiculated margins. Likewise, the authors found that expert human readers performed very poorly in discriminating their hybrid images from real mammograms. Conclusions: The authors’ proposed method permits the realistic simulation of 3D breast masses having user-defined characteristics, enabling the creation of a large set of hybrid breast images containing a well-characterized mass, embedded within real breast background. The computational nature of the model makes it suitable for detectability studies, evaluation of computer aided diagnosis algorithms, and

  3. A computational model to generate simulated three-dimensional breast masses

    International Nuclear Information System (INIS)

    Sisternes, Luis de; Brankov, Jovan G.; Zysk, Adam M.; Wernick, Miles N.; Schmidt, Robert A.; Nishikawa, Robert M.

    2015-01-01

    Purpose: To develop algorithms for creating realistic three-dimensional (3D) simulated breast masses and embedding them within actual clinical mammograms. The proposed techniques yield high-resolution simulated breast masses having randomized shapes, with user-defined mass type, size, location, and shape characteristics. Methods: The authors describe a method of producing 3D digital simulations of breast masses and a technique for embedding these simulated masses within actual digitized mammograms. Simulated 3D breast masses were generated by using a modified stochastic Gaussian random sphere model to generate a central tumor mass, and an iterative fractal branching algorithm to add complex spicule structures. The simulated masses were embedded within actual digitized mammograms. The authors evaluated the realism of the resulting hybrid phantoms by generating corresponding left- and right-breast image pairs, consisting of one breast image containing a real mass, and the opposite breast image of the same patient containing a similar simulated mass. The authors then used computer-aided diagnosis (CAD) methods and expert radiologist readers to determine whether significant differences can be observed between the real and hybrid images. Results: The authors found no statistically significant difference between the CAD features obtained from the real and simulated images of masses with either spiculated or nonspiculated margins. Likewise, the authors found that expert human readers performed very poorly in discriminating their hybrid images from real mammograms. Conclusions: The authors’ proposed method permits the realistic simulation of 3D breast masses having user-defined characteristics, enabling the creation of a large set of hybrid breast images containing a well-characterized mass, embedded within real breast background. The computational nature of the model makes it suitable for detectability studies, evaluation of computer aided diagnosis algorithms, and
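    A 2D toy analog of the stochastic Gaussian random sphere idea above (our own construction, not the authors' 3D model or their spicule algorithm): perturb a mean radius with a random Fourier series whose Gaussian coefficients decay with angular frequency, yielding a smooth but randomized mass outline:

```python
import math
import random

# 2D analog of a Gaussian random sphere: the boundary radius r(theta)
# is a mean radius modulated by a random Fourier series with Gaussian
# coefficients whose standard deviation decays as 1/k. All parameter
# values are illustrative.

def random_mass_outline(r0=1.0, modes=8, sigma=0.15, points=360, seed=1):
    rng = random.Random(seed)
    coeffs = [(rng.gauss(0, sigma / k), rng.gauss(0, sigma / k))
              for k in range(1, modes + 1)]
    outline = []
    for i in range(points):
        theta = 2 * math.pi * i / points
        s = sum(a * math.cos(k * theta) + b * math.sin(k * theta)
                for k, (a, b) in enumerate(coeffs, start=1))
        r = r0 * math.exp(s)            # log-radius perturbation keeps r > 0
        outline.append((r * math.cos(theta), r * math.sin(theta)))
    return outline

outline = random_mass_outline()
```

    Using exp(s) rather than 1 + s guarantees a positive radius for any draw, and lowering sigma or the number of modes yields smoother, more "nonspiculated" outlines.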

  4. Generation of a suite of 3D computer-generated breast phantoms from a limited set of human subject data

    International Nuclear Information System (INIS)

    Hsu, Christina M. L.; Palmeri, Mark L.; Segars, W. Paul; Veress, Alexander I.; Dobbins, James T. III

    2013-01-01

Purpose: The authors previously reported on a three-dimensional computer-generated breast phantom, based on empirical human image data, including a realistic finite-element based compression model that was capable of simulating multimodality imaging data. The computerized breast phantoms are a hybrid of two phantom generation techniques, combining empirical breast CT (bCT) data with flexible computer graphics techniques. However, to date, these phantoms have been based on single human subjects. In this paper, the authors report on a new method to generate multiple phantoms, simulating additional subjects from the limited set of original dedicated breast CT data. The authors developed an image morphing technique to construct new phantoms by gradually transitioning between two human subject datasets, with the potential to generate hundreds of additional pseudoindependent phantoms from the limited bCT cases. The authors conducted a preliminary subjective assessment with a limited number of observers (n = 4) to illustrate how realistic the simulated images generated with the pseudoindependent phantoms appeared. Methods: Several mesh-based geometric transformations were developed to generate distorted breast datasets from the original human subject data. Segmented bCT data from two different human subjects were used as the “base” and “target” for morphing. Several combinations of transformations were applied to morph between the “base” and “target” datasets, such as changing the breast shape, rotating the glandular data, and changing the distribution of the glandular tissue. Following the morphing, regions of skin and fat were assigned to the morphed dataset in order to appropriately assign mechanical properties during the compression simulation. The resulting morphed breast was compressed using a finite element algorithm and simulated mammograms were generated using techniques described previously. Sixty-two simulated mammograms, generated from morphing
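The morphing idea, stripped to its essentials, is to sample intermediate phantoms along a path from a "base" to a "target" subject. The paper uses mesh-based geometric transformations; the sketch below substitutes a simple linear cross-dissolve of co-registered volumes purely to illustrate how a family of intermediate datasets could be generated:

```python
import numpy as np

def morph_volumes(base, target, alpha):
    """Blend two co-registered density volumes; alpha=0 gives base, 1 gives target.

    A linear cross-dissolve is only a stand-in for the paper's mesh-based
    geometric transformations, but it illustrates how intermediate
    'pseudoindependent' phantoms can be sampled along a morphing path.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return (1.0 - alpha) * base + alpha * target

# Generate a small family of intermediate phantoms from two toy "subjects".
base = np.zeros((4, 4)); base[1:3, 1:3] = 1.0      # toy "subject A" density
target = np.ones((4, 4)) * 0.5                     # toy "subject B" density
family = [morph_volumes(base, target, a) for a in (0.0, 0.25, 0.5, 0.75, 1.0)]
```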

  5. Mathematics and computer science in medical imaging

    International Nuclear Information System (INIS)

Viergever, M.A.; Todd-Pokropek, A.E.

    1987-01-01

The book is divided into two parts. Part 1 gives an introduction to and an overview of the field in ten tutorial chapters. Part 2 contains a selection of invited and proffered papers reporting on current research. Subjects covered in depth are: analytical image reconstruction, regularization, iterative methods, image structure, 3-D display, compression, architectures for image processing, statistical pattern recognition, and expert systems in medical imaging.

  6. Medical imaging technology reviews and computational applications

    CERN Document Server

    Dewi, Dyah

    2015-01-01

    This book presents the latest research findings and reviews in the field of medical imaging technology, covering ultrasound diagnostics approaches for detecting osteoarthritis, breast carcinoma and cardiovascular conditions, image guided biopsy and segmentation techniques for detecting lung cancer, image fusion, and simulating fluid flows for cardiovascular applications. It offers a useful guide for students, lecturers and professional researchers in the fields of biomedical engineering and image processing.

  7. Energy expenditure in adolescents playing new generation computer games.

    Science.gov (United States)

    Graves, Lee; Stratton, Gareth; Ridgers, N D; Cable, N T

    2008-07-01

To compare the energy expenditure of adolescents when playing sedentary and new generation active computer games. Design: Cross sectional comparison of four computer games. Setting: Research laboratories. Participants: Six boys and five girls aged 13-15 years. Participants were fitted with a monitoring device validated to predict energy expenditure. They played four computer games for 15 minutes each. One of the games was sedentary (XBOX 360) and the other three were active (Wii Sports). Predicted energy expenditure was compared using repeated measures analysis of variance. Mean (standard deviation) predicted energy expenditure when playing Wii Sports bowling (190.6 (22.2) kJ/kg/min), tennis (202.5 (31.5) kJ/kg/min), and boxing (198.1 (33.9) kJ/kg/min) was significantly greater than when playing the sedentary game (125.5 (13.7) kJ/kg/min). Playing new generation active computer games uses significantly more energy than playing sedentary computer games, but not as much energy as playing the sport itself. The energy used when playing active Wii Sports games was not of high enough intensity to contribute towards the recommended daily amount of exercise in children.

  8. A Medical Image Backup Architecture Based on a NoSQL Database and Cloud Computing Services.

    Science.gov (United States)

    Santos Simões de Almeida, Luan Henrique; Costa Oliveira, Marcelo

    2015-01-01

The use of digital systems for storing medical images generates a huge volume of data. Digital images are commonly stored and managed on a Picture Archiving and Communication System (PACS), under the DICOM standard. However, PACS is limited because it is strongly dependent on the server's physical space. Alternatively, Cloud Computing arises as an extensive, low-cost, and reconfigurable resource. However, medical images contain patient information that cannot be made available in a public cloud. Therefore, a mechanism to anonymize these images is needed. This poster presents a solution for this issue by taking digital images from PACS, converting the information contained in each image file to a NoSQL database, and using cloud computing to store digital images.

  9. Computer codes for simulation of Angra 1 reactor steam generator

    International Nuclear Information System (INIS)

    Pinto, A.C.

    1978-01-01

A digital computer code is developed for the simulation of the steady-state operation of a U-tube steam generator with natural recirculation used in Pressurized Water Reactors. The steam generator is simulated with two flow channels separated by a metallic wall, with a preheating section with counter flow and a vaporizing section with parallel flow. The program permits changes in flow patterns and heat transfer correlations, in accordance with the local conditions along the vaporizing section. Various subroutines are developed for the determination of steam and water properties, and a mathematical model is established for the simulation of transients in the same steam generator. The steady-state operating conditions in one of the steam generators of the ANGRA 1 reactor are determined utilizing this program. Global results obtained agree with published values

  10. Computation of Superconducting Generators for Wind Turbine Applications

    DEFF Research Database (Denmark)

    Rodriguez Zermeno, Victor Manuel

The idea of introducing a superconducting generator for offshore wind turbine applications has received increasing support. It has been proposed as a way to meet energy market requirements and policies demanding clean energy sources in the near future. However, design considerations have to take ..., to the actual generators in the kW (MW) class with an expected cross section in the order of decimeters (meters). This thesis work presents cumulative results intended to create a bottom-up model of a synchronous generator with superconducting rotor windings. In a first approach, multiscale meshes with large ... of the generator including ramp-up of rotor coils, load connection and change was simulated. Hence, transient hysteresis losses in the superconducting coils were computed. This allowed addressing several important design and performance issues such as critical current of the superconducting coils, electric load ...

  11. An integrated compact airborne multispectral imaging system using embedded computer

    Science.gov (United States)

    Zhang, Yuedong; Wang, Li; Zhang, Xuguo

    2015-08-01

An integrated compact airborne multispectral imaging system with an embedded-computer-based control system was developed for small-aircraft multispectral imaging applications. The system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system) and an embedded computer. The embedded computer has excellent universality and expansibility, and has advantages in volume and weight for an airborne platform, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer controls the camera parameter settings, the operation of the filter wheel and the stabilized platform, and image and POS data acquisition, and stores the image data. The system can connect peripheral devices through the ports of the embedded computer, so system operation and management of the stored image data are easy. This airborne multispectral imaging system has the advantages of small volume, multiple functions, and good expansibility. Imaging experiment results show that the system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.

  12. Computational anatomy based on whole body imaging basic principles of computer-assisted diagnosis and therapy

    CERN Document Server

    Masutani, Yoshitaka

    2017-01-01

This book deals with computational anatomy, an emerging discipline recognized in medical science as a derivative of conventional anatomy. It is also a completely new research area on the boundaries of several sciences and technologies, such as medical imaging, computer vision, and applied mathematics. Computational Anatomy Based on Whole Body Imaging highlights the underlying principles, basic theories, and fundamental techniques in computational anatomy, which are derived from conventional anatomy, medical imaging, computer vision, and applied mathematics, in addition to various examples of applications in clinical data. The book covers topics on the basics and applications of the new discipline. Drawing from areas in multidisciplinary fields, it provides comprehensive, integrated coverage of innovative approaches to computational anatomy. As well, Computational Anatomy Based on Whole Body Imaging serves as a valuable resource for researchers including graduate students in the field and a connection with ...

  13. Computer screen photo-excited surface plasmon resonance imaging.

    Science.gov (United States)

    Filippini, Daniel; Winquist, Fredrik; Lundström, Ingemar

    2008-09-12

Angle- and spectra-resolved surface plasmon resonance (SPR) images of gold and silver thin films with protein deposits are demonstrated using a regular computer screen as light source and a web camera as detector. The screen provides multiple-angle illumination, p-polarized light and controlled spectral radiances to excite surface plasmons in a Kretschmann configuration. A model of the SPR reflectances incorporating the particularities of the source and detector explains the observed signals, and the generation of distinctive SPR landscapes is demonstrated. The sensitivity and resolution of the method, determined in air and solution, are 0.145 nm pixel⁻¹, 0.523 nm, 5.13 × 10⁻³ RIU degree⁻¹ and 6.014 × 10⁻⁴ RIU, respectively; these are encouraging results at this proof-of-concept stage, considering the ubiquity of the instrumentation.

  14. Automatic Description Generation from Images : A Survey of Models, Datasets, and Evaluation Measures

    NARCIS (Netherlands)

    Bernardi, Raffaella; Cakici, Ruket; Elliott, Desmond; Erdem, Aykut; Erdem, Erkut; Ikizler-Cinbis, Nazli; Keller, Frank; Muscat, Adrian; Plank, Barbara

    2016-01-01

    Automatic description generation from natural images is a challenging problem that has recently received a large amount of interest from the computer vision and natural language processing communities. In this survey, we classify the existing approaches based on how they conceptualize this problem,

  15. Integral computer-generated hologram via a modified Gerchberg-Saxton algorithm

    International Nuclear Information System (INIS)

    Wu, Pei-Jung; Lin, Bor-Shyh; Chen, Chien-Yue; Huang, Guan-Syun; Deng, Qing-Long; Chang, Hsuan T

    2015-01-01

An integral computer-generated hologram, which modulates the phase function of an object based on a modified Gerchberg–Saxton algorithm and compiles a digital cryptographic diagram with phase synthesis, is proposed in this study. When the diagram completes position demultiplexing decipherment, multi-angle elemental images can be reconstructed. Furthermore, an integral CGH with a depth of 225 mm and a visual angle of ±11° is projected through the lens array.
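The Gerchberg-Saxton iteration underlying such holograms alternates between the hologram and image planes, enforcing the known amplitude constraint in each. A minimal sketch of plain GS with an FFT propagation model (not the paper's modified variant or its phase-synthesis encryption step):

```python
import numpy as np

def gerchberg_saxton(target_amp, iterations=50, seed=0):
    """Phase-only hologram via the classical Gerchberg-Saxton iteration.

    Repeatedly propagate between hologram and image planes (modelled here
    by an FFT pair), keeping the computed phase but replacing the amplitude
    with the known constraint in each plane: unit amplitude at the hologram,
    the target amplitude at the image.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amp.shape)
    field = np.exp(1j * phase)                             # unit-amplitude hologram guess
    for _ in range(iterations):
        image = np.fft.fft2(field)                         # propagate to image plane
        image = target_amp * np.exp(1j * np.angle(image))  # impose target amplitude
        field = np.fft.ifft2(image)                        # propagate back
        field = np.exp(1j * np.angle(field))               # phase-only constraint
    return np.angle(field)

# Reconstruct a simple square target and inspect the result.
target = np.zeros((32, 32)); target[12:20, 12:20] = 1.0
holo_phase = gerchberg_saxton(target, iterations=200)
recon = np.abs(np.fft.fft2(np.exp(1j * holo_phase)))
```

The reconstruction carries the usual phase-only speckle, but its amplitude concentrates in the target region as the iteration converges.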

  16. Future trends in computer waste generation in India.

    Science.gov (United States)

    Dwivedy, Maheshwar; Mittal, R K

    2010-11-01

The objective of this paper is to estimate the future projection of computer waste in India and to subsequently analyze its flow at the end of the useful phase. For this purpose, the study utilizes the logistic model-based approach proposed by Yang and Williams to forecast future trends in computer waste. The model estimates the future projection of the computer penetration rate utilizing the first lifespan distribution and historical sales data. A bounding analysis of the future carrying capacity was simulated using the three-parameter logistic curve. The obsolete generation quantities observed from the extrapolated penetration rates are then used to model the disposal phase. The results of the bounding analysis indicate that in the year 2020, around 41-152 million units of computers will become obsolete. The obsolete computer generation quantities are then used to estimate the End-of-Life outflows by utilizing a time-series multiple lifespan model. Even a conservative estimate of the future recycling capacity of PCs will reach upwards of 30 million units during 2025. Apparently, more than 150 million units could be potentially recycled in the upper bound case. However, considering significant future investment in the e-waste recycling sector from all stakeholders in India, we propose a logistic growth in the recycling rate and estimate the required recycling capacity at between 60 and 400 million units for the lower and upper bound cases during 2025. Finally, we compare the future obsolete PC generation amounts of the US and India.
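The forecasting approach rests on a three-parameter logistic curve for the installed base, from which obsolete quantities follow via a lifespan offset. A toy sketch of the bounding analysis (all parameter values are illustrative, not the paper's fitted ones):

```python
import math

def logistic(t, K, r, t0):
    """Three-parameter logistic curve: carrying capacity K, growth rate r,
    inflection year t0 (all values used here are illustrative)."""
    return K / (1.0 + math.exp(-r * (t - t0)))

def obsolete_units(year, K, r, t0, mean_lifespan=5):
    """Crude obsolescence estimate: units sold 'mean_lifespan' years ago retire now.
    Annual sales are approximated by the year-on-year increase of the installed base."""
    sale_year = year - mean_lifespan
    return logistic(sale_year, K, r, t0) - logistic(sale_year - 1, K, r, t0)

# Bounding analysis: low and high assumed carrying capacities (millions of units).
for K in (150.0, 600.0):
    waste_2020 = obsolete_units(2020, K=K, r=0.3, t0=2015)
    print(f"K={K:.0f}M -> obsolete in 2020: {waste_2020:.1f}M units")
```

Varying K between a lower and an upper bound reproduces the bracketing logic of the bounding analysis; a real study would fit K, r, and t0 to historical sales data.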

  17. Full parallax three-dimensional computer generated hologram with occlusion effect using ray casting technique

    International Nuclear Information System (INIS)

    Zhang, Hao; Tan, Qiaofeng; Jin, Guofan

    2013-01-01

Holographic display is capable of reconstructing the whole optical wave field of a three-dimensional (3D) scene. It is the only one among all the 3D display techniques that can produce all the depth cues. With the development of computing technology and spatial light modulators, computer generated holograms (CGHs) can now be used to produce dynamic 3D images of synthetic objects. Computational holography becomes highly complicated and demanding when it is employed to produce real 3D images. Here we present a novel algorithm for generating a full parallax 3D CGH with occlusion effect, which is an important property of 3D perception but has often been neglected in fully computed hologram synthesis. The ray casting technique, which is widely used in computer graphics, is introduced to handle the occlusion issue of CGH computation. Horizontally and vertically distributed rays are projected from each hologram sample to the 3D objects to obtain the complex amplitude distribution. The occlusion issue is handled by performing ray casting calculations for all the hologram samples. The proposed algorithm has no restriction on or approximation to the 3D objects, and hence it can produce reconstructed images with correct shading effect and no visible artifacts. A programmable graphics processing unit (GPU) is used to perform parallel calculation. This is made possible because the computation for each hologram sample is an independent operation. To demonstrate the performance of our proposed algorithm, an optical experiment is performed to reconstruct the 3D scene by using a phase-only spatial light modulator. We can easily perceive the accommodation cue by focusing our eyes on different depths of the scene and the motion parallax cue with occlusion effect by moving our eyes around. The experiment result confirms that the CGHs produced by our algorithm can successfully reconstruct 3D images with all the depth cues.
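The occlusion handling can be illustrated with a toy point-cloud version: each hologram sample accumulates spherical waves only from object points that are not blocked along the sample-to-point ray. This sketch uses a crude perpendicular-distance test and invented geometry; the paper's algorithm operates on full 3D scenes with GPU parallelism:

```python
import numpy as np

def cgh_with_occlusion(samples, points, wavelength=633e-9, tol=1e-4):
    """Toy CGH: accumulate spherical waves at each hologram sample from every
    object point that is not occluded along the sample-to-point ray.

    A point Q occludes P for a given sample S if Q is closer to S and lies
    within 'tol' of the ray S->P. This is a drastically simplified stand-in
    for per-sample ray casting; geometry and units are illustrative.
    """
    k = 2 * np.pi / wavelength
    n_s, n_p = len(samples), len(points)
    field = np.zeros(n_s, dtype=complex)
    visible = np.ones((n_s, n_p), dtype=bool)
    for i, s in enumerate(samples):
        dirs = points - s
        dists = np.linalg.norm(dirs, axis=1)
        units = dirs / dists[:, None]
        for p in range(n_p):
            for q in range(n_p):
                if q == p or dists[q] >= dists[p]:
                    continue  # only nearer points can occlude
                # perpendicular distance of point q from the ray towards p
                perp = np.linalg.norm(dirs[q] - np.dot(dirs[q], units[p]) * units[p])
                if perp < tol:
                    visible[i, p] = False
                    break
            if visible[i, p]:
                field[i] += np.exp(1j * k * dists[p]) / dists[p]
    return field, visible

# 1D row of hologram samples; a rear point hidden directly behind a front one.
samples = np.array([[x, 0.0, 0.0] for x in np.linspace(-0.01, 0.01, 9)])
points = np.array([[0.0, 0.0, 0.1], [0.0, 0.0, 0.2]])
field, visible = cgh_with_occlusion(samples, points)
```

Only the on-axis sample loses sight of the rear point; off-axis samples see past the front one, which is exactly the parallax-dependent occlusion the algorithm reproduces.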

  18. Wavefront Control and Image Restoration with Less Computing

    Science.gov (United States)

    Lyon, Richard G.

    2010-01-01

PseudoDiversity is a method of recovering the wavefront in a sparse- or segmented-aperture optical system typified by an interferometer or a telescope equipped with an adaptive primary mirror consisting of controllably slightly moveable segments. (PseudoDiversity should not be confused with a radio-antenna-arraying method called pseudodiversity.) As in the cases of other wavefront-recovery methods, the streams of wavefront data generated by means of PseudoDiversity are used as feedback signals for controlling electromechanical actuators of the various segments so as to correct wavefront errors and thereby, for example, obtain a clearer, steadier image of a distant object in the presence of atmospheric turbulence. There are numerous potential applications in astronomy, remote sensing from aircraft and spacecraft, targeting missiles, sighting military targets, and medical imaging (including microscopy) through such intervening media as cells or water. In comparison with prior wavefront-recovery methods used in adaptive optics, PseudoDiversity involves considerably simpler equipment and procedures and less computation. For PseudoDiversity, there is no need to install separate metrological equipment or to use any optomechanical components beyond those that are already parts of the optical system to which the method is applied. In PseudoDiversity, the actuators of a subset of the segments or subapertures are driven to make the segments dither in the piston, tilt, and tip degrees of freedom. Each aperture is dithered at a unique frequency at an amplitude of a half wavelength of light. During the dithering, images on the focal plane are detected and digitized at a rate of at least four samples per dither period. In the processing of the image samples, the use of different dither frequencies makes it possible to determine the separate effects of the various dithered segments or apertures. The digitized image-detector outputs are processed in the spatial
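The dither-frequency multiplexing described above amounts to lock-in demodulation: because each segment dithers at a unique frequency, its contribution can be recovered by correlating the detector signal with a reference at that frequency. A sketch with a synthetic two-segment signal (frequencies, amplitudes, and phases are made up, not instrument parameters):

```python
import numpy as np

def demodulate(signal, t, freqs):
    """Separate per-segment contributions from a summed detector signal by
    lock-in demodulation at each segment's unique dither frequency.

    Returns the complex amplitude (magnitude and phase) recovered at each
    frequency.
    """
    out = []
    for f in freqs:
        ref = np.exp(-2j * np.pi * f * t)
        out.append(2.0 * np.mean(signal * ref))   # complex lock-in estimate
    return np.array(out)

# Two 'segments' dithered at distinct frequencies, sampled >= 4x per period.
t = np.linspace(0.0, 1.0, 4000, endpoint=False)
freqs = [40.0, 90.0]
amps, phases = [1.0, 0.5], [0.3, 1.1]
signal = sum(a * np.cos(2 * np.pi * f * t + p) for a, f, p in zip(amps, freqs, phases))
recovered = demodulate(signal, t, freqs)
```

With integer frequencies over a whole-second window the cross terms average out exactly, so each recovered complex amplitude matches its segment's amplitude and phase.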

  19. Foundations of computer vision computational geometry, visual image structures and object shape detection

    CERN Document Server

    Peters, James F

    2017-01-01

    This book introduces the fundamentals of computer vision (CV), with a focus on extracting useful information from digital images and videos. Including a wealth of methods used in detecting and classifying image objects and their shapes, it is the first book to apply a trio of tools (computational geometry, topology and algorithms) in solving CV problems, shape tracking in image object recognition and detecting the repetition of shapes in single images and video frames. Computational geometry provides a visualization of topological structures such as neighborhoods of points embedded in images, while image topology supplies us with structures useful in the analysis and classification of image regions. Algorithms provide a practical, step-by-step means of viewing image structures. The implementations of CV methods in Matlab and Mathematica, classification of chapter problems with the symbols (easily solved) and (challenging) and its extensive glossary of key words, examples and connections with the fabric of C...

  20. Computer assisted analysis of medical x-ray images

    Science.gov (United States)

    Bengtsson, Ewert

    1996-01-01

    X-rays were originally used to expose film. The early computers did not have enough capacity to handle images with useful resolution. The rapid development of computer technology over the last few decades has, however, led to the introduction of computers into radiology. In this overview paper, the various possible roles of computers in radiology are examined. The state of the art is briefly presented, and some predictions about the future are made.

  1. Generative adversarial networks for anomaly detection in images

    OpenAIRE

    Batiste Ros, Guillem

    2018-01-01

Anomaly detection is used to identify abnormal observations that don't follow a normal pattern. In this work, we use the power of Generative Adversarial Networks in sampling from image distributions to perform anomaly detection with images and to identify local anomalous segments within these images. We also explore the potential application of this method to support pathological analysis of biological tissues.

  2. Image quality and dose in computed tomography

    International Nuclear Information System (INIS)

    Jurik, A.G.; Jessen, K.A.; Hansen, J.

    1997-01-01

Radiation exposure to the patient during CT is relatively high, and it is therefore important to optimize the dose so that it is as low as possible but still consistent with the required diagnostic image quality. There is no established method for measuring diagnostic image quality; therefore, a set of image quality criteria which must be fulfilled for optimal image quality was defined for the retroperitoneal space and the mediastinum. The use of these criteria for assessment of image quality was tested based on 113 retroperitoneal and 68 mediastinal examinations performed in seven different CT units. All the criteria, except one, were found to be usable for measuring diagnostic image quality. The fulfilment of criteria was related to the radiation dose given in the different departments. For examination of the retroperitoneal space the effective dose varied between 5.1 and 20.0 mSv (millisievert), and there was a slight correlation between dose and a high percentage of 'yes' scores for the image quality criteria. For examination of the mediastinum the dose range was 4.4-26.5 mSv, and there was no significant increase of image quality at high doses. The great variation of dose at different CT units was due partly to differences in the examination procedure, especially the number of slices and the mAs (milliampere-seconds), but inherent dose variation between different scanners also played a part.

  3. The Computer Generated Art/Contemporary Cinematography And The Remainder Of The Art History. A Critical Approach

    OpenAIRE

    Modesta Lupașcu

    2016-01-01

The paper analyses the re-conceptualization of the intermedial trope of computer generated images/VFX in recent 3D works/cinema scenes through several examples from art history with which they are connected. The obvious connections between art history and images are not conceived primarily as an embodiment of a painting, the introduction of the real into the image, but prove the reconstructive tendencies of contemporary post-postmodern art. The intellectual, the casual, or the obsessive interacti...

  4. Automatic generation of computable implementation guides from clinical information models.

    Science.gov (United States)

    Boscá, Diego; Maldonado, José Alberto; Moner, David; Robles, Montserrat

    2015-06-01

Clinical information models are increasingly used to describe the contents of Electronic Health Records. Implementation guides are a common specification mechanism used to define such models. They contain, among other reference materials, all the constraints and rules that clinical information must obey. However, these implementation guides typically are oriented to human-readability, and thus cannot be processed by computers. As a consequence, they must be reinterpreted and transformed manually into an executable language such as Schematron or Object Constraint Language (OCL). This task can be difficult and error prone due to the big gap between both representations. The challenge is to develop a methodology for the specification of implementation guides in such a way that humans can read and understand easily and at the same time can be processed by computers. In this paper, we propose and describe a novel methodology that uses archetypes as basis for generation of implementation guides. We use archetypes to generate formal rules expressed in Natural Rule Language (NRL) and other reference materials usually included in implementation guides such as sample XML instances. We also generate Schematron rules from NRL rules to be used for the validation of data instances. We have implemented these methods in LinkEHR, an archetype editing platform, and exemplify our approach by generating NRL rules and implementation guides from EN ISO 13606, openEHR, and HL7 CDA archetypes.

  5. Mobile Imaging and Computing for Intelligent Structural Damage Inspection

    Directory of Open Access Journals (Sweden)

    ZhiQiang Chen

    2014-01-01

Optical imaging is a commonly used technique in civil engineering for aiding the archival of damage scenes and, more recently, for image analysis-based damage quantification. However, the limitations are evident when applying optical imaging in the field. The most significant one is the lack of real-time computing and processing capability. The advancement of mobile imaging and computing technologies provides a promising opportunity to change this norm. This paper first provides a timely introduction to the state-of-the-art mobile imaging and computing technologies for the purpose of engineering application development. Further, we propose a mobile imaging and computing (MIC) framework for conducting intelligent condition assessment of constructed objects, which features in situ imaging and real-time damage analysis. This framework synthesizes advanced mobile technologies with three innovative features: (i) context-enabled image collection, (ii) interactive image preprocessing, and (iii) real-time image analysis and analytics. Through performance evaluation and field experiments, this paper demonstrates the feasibility and efficiency of the proposed framework.

  6. Material decomposition and virtual non-contrast imaging in photon counting computed tomography: an animal study

    Science.gov (United States)

    Gutjahr, R.; Polster, C.; Kappler, S.; Pietsch, H.; Jost, G.; Hahn, K.; Schöck, F.; Sedlmair, M.; Allmendinger, T.; Schmidt, B.; Krauss, B.; Flohr, T. G.

    2016-03-01

The energy-resolving capabilities of Photon Counting Detectors (PCD) in Computed Tomography (CT) facilitate energy-sensitive measurements. The provided image information can be processed with Dual Energy and Multi Energy algorithms. A research PCD-CT system allows, for the first time, acquiring images with a close-to-clinical configuration of both the X-ray tube and the CT detector. In this study, two algorithms (Material Decomposition and Virtual Non-Contrast imaging (VNC)) are applied to a data set acquired from an anesthetized rabbit scanned using the PCD-CT system. Two contrast agents (CA) are applied: a gadolinium (Gd) based CA used to enhance contrast for vascular imaging, and xenon (Xe) and air as a CA used to evaluate local ventilation of the animal's lung. Four different images are generated: (a) a VNC image, suppressing any traces of the injected Gd to imitate a native scan; (b) a VNC image with a Gd image as an overlay, where contrast enhancements in the vascular system are highlighted using colored labels; (c) another VNC image with a Xe image as an overlay; and (d) a 3D rendered image of the animal's lung, filled with Xe, indicating local ventilation characteristics. All images are generated from two energy-bin images. It is shown that a modified version of a commercially available dual energy software framework is capable of providing images with diagnostic value obtained from the research PCD-CT system.
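In the simplest linear model, two-material decomposition from two energy bins reduces to solving a 2x2 system per pixel; the soft-tissue component alone is the VNC image. A sketch with invented basis attenuation values (a real decomposition works in a calibrated, spectrally weighted basis):

```python
import numpy as np

def material_decomposition(mu_low, mu_high, basis):
    """Two-material decomposition from two energy-bin images.

    'basis' is a 2x2 matrix whose columns are the (low, high) energy-bin
    attenuation of the two basis materials, e.g. soft tissue and gadolinium.
    Solving the per-pixel 2x2 linear system yields material-density maps;
    the soft-tissue map alone is the virtual non-contrast (VNC) image.
    Basis values below are made up for illustration.
    """
    meas = np.stack([mu_low.ravel(), mu_high.ravel()])   # shape (2, n_pixels)
    dens = np.linalg.solve(basis, meas)                  # shape (2, n_pixels)
    tissue = dens[0].reshape(mu_low.shape)
    gad = dens[1].reshape(mu_low.shape)
    return tissue, gad        # VNC image, contrast-agent map

# Synthetic example: uniform tissue with a small Gd-enhanced region.
basis = np.array([[0.20, 2.0],     # low-bin attenuation of tissue, Gd
                  [0.18, 0.9]])    # high-bin attenuation of tissue, Gd
tissue_true = np.ones((8, 8))
gd_true = np.zeros((8, 8)); gd_true[3:5, 3:5] = 0.1
mu_low = basis[0, 0] * tissue_true + basis[0, 1] * gd_true
mu_high = basis[1, 0] * tissue_true + basis[1, 1] * gd_true
vnc, gd_map = material_decomposition(mu_low, mu_high, basis)
```

On this noise-free synthetic data the decomposition is exact: the VNC image is flat tissue, and the Gd map isolates the enhanced region.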

  7. Comparing Generative Adversarial Network Techniques for Image Creation and Modification

    NARCIS (Netherlands)

    Pieters, Mathijs; Wiering, Marco

    2018-01-01

    Generative adversarial networks (GANs) have demonstrated to be successful at generating realistic real-world images. In this paper we compare various GAN techniques, both supervised and unsupervised. The effects on training stability of different objective functions are compared. We add an encoder

  8. Image viewing station for MR and SPECT : using personal computer

    International Nuclear Information System (INIS)

    Yim, Byung Il; Jeong, Eun Kee; Suh, Jin Suck; Kim, Myeong Joon

    1996-01-01

Macro language was programmed to analyze and process, on Macintosh personal computers, GE MR images digitally transferred from the MR main computer, with special interest in the interpretation of information such as patient data and imaging parameters under each image header. By this method, raw data (files) of certain patients may be digitally stored on a hard disk or CD-ROM, and quantitative analysis, interpretation and display are possible. Patients and images were randomly selected. 4.X MR images were transferred through FTP over the Ethernet network; 5.X and SPECT images were transferred using floppy diskettes. To process the transferred images, a freely distributed Macintosh program, NIH Image, with its macro language, was used to import images and translate header information. To display the necessary information, a separate window named 'info-txt' was made for each image series. MacLC, Centris650, and PowerMac6100/CD, 7100/CD, 8100/CD models with 256 colors and more than 8 Mbytes of RAM were used. Different versions of MR images and SPECT images were displayed simultaneously, and the 'info-txt' window showed all necessary information (name of the patient, unit number, date, TR, TE, FOV, etc.). Additional information (diagnosis, pathologic report, etc.) was stored in another text box in 'info-txt'. The size of the file for each image plane was about 149 Kbytes and the images were stored in step-like file folders. 4.X and 5.X GE Signa 1.5T images were successfully processed with Macintosh computers and NIH Image. This result may be applied to many fields, and there is hope of a broader area of application with the linkage of NIH Image and a database program

  9. Infrared Spectroscopic Imaging: The Next Generation

    Science.gov (United States)

    Bhargava, Rohit

    2013-01-01

    Infrared (IR) spectroscopic imaging seemingly matured as a technology in the mid-2000s, with commercially successful instrumentation and reports in numerous applications. Recent developments, however, have transformed our understanding of the recorded data, provided capability for new instrumentation, and greatly enhanced the ability to extract more useful information in less time. These developments are summarized here in three broad areas— data recording, interpretation of recorded data, and information extraction—and their critical review is employed to project emerging trends. Overall, the convergence of selected components from hardware, theory, algorithms, and applications is one trend. Instead of similar, general-purpose instrumentation, another trend is likely to be diverse and application-targeted designs of instrumentation driven by emerging component technologies. The recent renaissance in both fundamental science and instrumentation will likely spur investigations at the confluence of conventional spectroscopic analyses and optical physics for improved data interpretation. While chemometrics has dominated data processing, a trend will likely lie in the development of signal processing algorithms to optimally extract spectral and spatial information prior to conventional chemometric analyses. Finally, the sum of these recent advances is likely to provide unprecedented capability in measurement and scientific insight, which will present new opportunities for the applied spectroscopist. PMID:23031693

  10. A light weight secure image encryption scheme based on chaos & DNA computing

    Directory of Open Access Journals (Sweden)

    Bhaskar Mondal

    2017-10-01

    Full Text Available This paper proposes a new lightweight cryptographic scheme for secure image communication. In this scheme the plain image is first permuted using a sequence of pseudo-random numbers (PRNs) and then encrypted by DeoxyriboNucleic Acid (DNA) computation. Two PRN sequences are generated by a pseudo-random number generator (PRNG) based on a cross-coupled chaotic logistic map using two sets of keys. The first PRN sequence is used for permuting the plain image, whereas the second is used for generating a random DNA sequence. The number of rounds of permutation and encryption may be varied to increase security. The scheme is proposed for gray-level images but may be extended to color images and text data. Simulation results show that the proposed scheme can withstand attacks of various kinds.
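    As an illustration of the permutation stage, the sketch below scrambles pixel positions with a sort order derived from a single chaotic logistic map (the paper uses a cross-coupled pair of maps and adds a DNA encoding stage; the key values here are arbitrary placeholders, not taken from the paper):

```python
import numpy as np

def logistic_sequence(x0, r, n):
    """Iterate the chaotic logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    seq = np.empty(n)
    x = x0
    for k in range(n):
        x = r * x * (1.0 - x)
        seq[k] = x
    return seq

def permute_image(img, key=(0.3567, 3.99)):
    """Scramble pixel positions with a key-driven chaotic sort order."""
    order = np.argsort(logistic_sequence(key[0], key[1], img.size))
    return img.ravel()[order].reshape(img.shape), order

def unpermute_image(scrambled, order):
    """Invert the permutation to recover the original image."""
    flat = np.empty(scrambled.size, dtype=scrambled.dtype)
    flat[order] = scrambled.ravel()
    return flat.reshape(scrambled.shape)
```

    Because the scrambling is a pure permutation, it is exactly invertible given the key-derived order, which is what makes running several permutation rounds cheap.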

  11. Medical Image Processing for Fully Integrated Subject Specific Whole Brain Mesh Generation

    Directory of Open Access Journals (Sweden)

    Chih-Yang Hsu

    2015-05-01

    Full Text Available Currently, anatomically consistent segmentation of vascular trees acquired with magnetic resonance imaging requires multiple image processing steps, which, in turn, depend on manual intervention. In effect, segmentation of vascular trees from medical images is time consuming and error prone due to the tortuous geometry and weak signal in small blood vessels. To overcome errors and accelerate the image processing time, we introduce an automatic image processing pipeline for constructing subject-specific computational meshes of the entire cerebral vasculature, including segmentation of ancillary structures: the grey and white matter, cerebrospinal fluid space, skull, and scalp. To demonstrate the validity of the new pipeline, we segmented the entire intracranial compartment, with special attention to the angioarchitecture, from magnetic resonance images acquired for two healthy volunteers. The raw images were processed through our pipeline for automatic segmentation and mesh generation. Due to the partial volume effect and finite resolution, the computational meshes intersect with each other at their interfaces. To eliminate anatomically inconsistent overlap, we utilized morphological operations to separate the structures with physiologically sound gap spaces. The resulting meshes exhibit anatomically correct spatial extent and relative positions without intersections. For validation, we computed critical biometrics of the angioarchitecture, the cortical surfaces, the ventricular system, and the cerebrospinal fluid (CSF) spaces and compared them against literature values. Volumes and surface areas of the computational mesh were found to be in physiological ranges. In conclusion, we present an automatic image processing pipeline to automate the segmentation of the main intracranial compartments, including subject-specific vascular trees. These computational meshes can be used in 3D immersive visualization for diagnosis and in surgery planning with haptics.

  12. Potentially Low Cost Solution to Extend Use of Early Generation Computed Tomography

    Directory of Open Access Journals (Sweden)

    Tonna, Joseph E

    2010-12-01

    Full Text Available In preparing a case report on Brown-Séquard syndrome for publication, we made the incidental finding that the inexpensive, commercially available three-dimensional (3D) rendering software we were using could produce high quality 3D spinal cord reconstructions from any series of two-dimensional (2D) computed tomography (CT) images. This finding raises the possibility that spinal cord imaging capabilities can be expanded where bundled 2D multi-planar reformats and 3D reconstruction software for CT are not available, and in situations where magnetic resonance imaging (MRI) is either not available or not appropriate (e.g., metallic implants). Given the worldwide burden of trauma and considering the limited availability of MRI and advanced generation CT scanners, we propose an alternative, potentially useful approach to imaging the spinal cord that might be useful in areas where technical capabilities and support are limited. [West J Emerg Med. 2010; 11(5):463-466.]

  13. Generation of synthetic image sequences for the verification of matching and tracking algorithms for deformation analysis

    Science.gov (United States)

    Bethmann, F.; Jepping, C.; Luhmann, T.

    2013-04-01

    This paper reports on a method for the generation of synthetic image data for almost arbitrary static or dynamic 3D scenarios. Image data generation is based on pre-defined 3D objects, object textures, camera orientation data and their imaging properties. The procedure does not focus on the creation of photo-realistic images under consideration of complex imaging and reflection models as they are used by common computer graphics programs. In contrast, the method is designed with the main emphasis on geometrically correct synthetic images without radiometric impact. The calculation process includes photogrammetric distortion models, hence cameras with arbitrary geometric imaging characteristics can be applied. Consequently, image sets can be created that are consistent with mathematical photogrammetric models, to be used as sub-pixel accurate data for the assessment of high-precision photogrammetric processing methods. First, the paper describes the process of image simulation under consideration of colour value interpolation, MTF/PSF and so on. Subsequently the geometric quality of the synthetic images is evaluated with ellipse operators. Finally, simulated image sets are used to investigate matching and tracking algorithms as they have been developed at IAPG for deformation measurement in car safety testing.
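    The geometric core of such a simulator, projecting known 3D points through a camera model that includes a photogrammetric distortion term, can be sketched as follows (focal length, principal point and the single radial coefficient are illustrative placeholders; the paper's distortion models also cover further parameters such as decentring distortion):

```python
import numpy as np

def project_points(points_cam, f=50.0, cx=0.0, cy=0.0, k1=-1e-8):
    """Project 3D points given in the camera frame with a pinhole model plus
    a single radial distortion coefficient (a much-reduced Brown model)."""
    X, Y, Z = points_cam.T
    x = f * X / Z                      # ideal (undistorted) image coordinates
    y = f * Y / Z
    r2 = x * x + y * y                 # squared radial distance
    x_d = x * (1.0 + k1 * r2)          # apply radial distortion
    y_d = y * (1.0 + k1 * r2)
    return np.column_stack([x_d + cx, y_d + cy])
```

    With k1 = 0 this reduces to the pure pinhole model; a negative k1 pulls off-axis points slightly toward the principal point, which is exactly the kind of systematic effect the synthetic images are meant to carry.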

  14. Real-time computer treatment of THz passive device images with the high image quality

    Science.gov (United States)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2012-06-01

    We demonstrate real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not designed for one particular passive THz device only: it can be applied to any such device, as well as to active THz imaging systems. We applied our code to computer processing of images captured by four passive THz imaging devices manufactured by different companies. It should be stressed that computer processing of images produced by different devices usually requires different spatial filters. The performance of the current version of the computer code exceeds one image per second for a THz image with more than 5000 pixels and 24-bit number representation. Processing of a single THz image produces about 20 images simultaneously, corresponding to various spatial filters. The computer code allows increasing the number of pixels of processed images without noticeable reduction of image quality, and its performance can be increased many times using parallel algorithms. We develop original spatial filters which allow one to see objects with sizes of less than 2 cm. The imagery is produced by passive THz imaging devices which captured images of objects hidden under opaque clothes. For images with high noise we develop an approach which suppresses the noise during computer processing and yields a good quality image. To illustrate the efficiency of the developed approach, we demonstrate the detection of a liquid explosive, an ordinary explosive, a knife, a pistol, a metal plate, a CD, ceramics, chocolate and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects and are a very promising solution for the security problem.
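    The authors' spatial filters are not disclosed, but the general idea of device-specific spatial filtering can be illustrated with a minimal 3x3 median filter, a standard choice for suppressing impulsive noise in low-resolution imagery:

```python
import numpy as np

def median_filter_3x3(img):
    """3x3 median filter with periodic boundaries: a minimal spatial filter
    for suppressing impulsive noise in a low-resolution THz image."""
    shifted = [np.roll(np.roll(img, di, axis=0), dj, axis=1)
               for di in (-1, 0, 1) for dj in (-1, 0, 1)]
    return np.median(np.stack(shifted), axis=0)
```

    Applying several such filters to one captured frame, as the abstract describes, simply means evaluating a bank of kernels like this one on the same input and presenting all outputs side by side.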

  15. Seventh Medical Image Computing and Computer Assisted Intervention Conference (MICCAI 2012)

    CERN Document Server

    Miller, Karol; Nielsen, Poul; Computational Biomechanics for Medicine : Models, Algorithms and Implementation

    2013-01-01

    One of the greatest challenges for mechanical engineers is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, biomedical sciences, and medicine. This book is an opportunity for computational biomechanics specialists to present and exchange opinions on the opportunities of applying their techniques to computer-integrated medicine. Computational Biomechanics for Medicine: Models, Algorithms and Implementation collects the papers from the Seventh Computational Biomechanics for Medicine Workshop held in Nice in conjunction with the Medical Image Computing and Computer Assisted Intervention conference. The topics covered include: medical image analysis, image-guided surgery, surgical simulation, surgical intervention planning, disease prognosis and diagnostics, injury mechanism analysis, implant and prostheses design, and medical robotics.

  16. Dictionary of computer vision and image processing

    National Research Council Canada - National Science Library

    Fisher, R. B

    2014-01-01

    ... been identified for inclusion since the current edition was published. Revised to include an additional 1000 new terms to reflect current updates, which includes a significantly increased focus on image processing terms, as well as machine learning terms...

  17. Application of Generative Adversarial Networks (GANs) to jet images

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    https://arxiv.org/abs/1701.05927 We provide a bridge between generative modeling in the Machine Learning community and simulated physical processes in High Energy Particle Physics by applying a novel Generative Adversarial Network (GAN) architecture to the production of jet images -- 2D representations of energy depositions from particles interacting with a calorimeter. We propose a simple architecture, the Location-Aware Generative Adversarial Network, that learns to produce realistic radiation patterns from simulated high energy particle collisions. The pixel intensities of GAN-generated images faithfully span over many orders of magnitude and exhibit the desired low-dimensional physical properties (i.e., jet mass, n-subjettiness, etc.). We shed light on limitations, and provide a novel empirical validation of image quality and validity of GAN-produced simulations of the natural world. This work provides a base for further explorations of GANs for use in faster simulation in High Energy Particle Physics.

  18. Computed Tomography and Magnetic Resonance Imaging Features of the Temporomandibular Joint in Two Normal Camels

    Directory of Open Access Journals (Sweden)

    Alberto Arencibia

    2012-01-01

    Full Text Available Computed tomography (CT) and magnetic resonance (MR) image features of the temporomandibular joint (TMJ) and associated structures in two mature dromedary camels were obtained with third-generation CT equipment and a superconducting 1.5 Tesla MR magnet. Images were acquired in sagittal and transverse planes. Medical image processing software was applied to obtain post-processed CT and MR images. Relevant anatomic structures were identified and labelled. The resulting images provided excellent anatomic detail of the TMJ and associated structures. Annotated CT and MR images from this study are intended as an anatomical reference useful in the interpretation of clinical CT and MR imaging studies of the TMJ of dromedary camels.

  19. X-ray Computed Tomography Image Quality Indicator (IQI) Development

    Data.gov (United States)

    National Aeronautics and Space Administration — Phase one of the program is to identify suitable x-ray Computed Tomography (CT) Image Quality Indicator (IQI) design(s) that can be used to adequately capture CT...

  20. Integrating publicly-available data to generate computationally ...

    Science.gov (United States)

    The adverse outcome pathway (AOP) framework provides a way of organizing knowledge related to the key biological events that result in a particular health outcome. For the majority of environmental chemicals, the availability of curated pathways characterizing potential toxicity is limited. Methods are needed to assimilate large amounts of available molecular data and quickly generate putative AOPs for further testing and use in hazard assessment. A graph-based workflow was used to facilitate the integration of multiple data types to generate computationally-predicted (cp) AOPs. Edges between graph entities were identified through direct experimental or literature information or computationally inferred using frequent itemset mining. Data from the TG-GATEs and ToxCast programs were used to channel large-scale toxicogenomics information into a cpAOP network (cpAOPnet) of over 20,000 relationships describing connections between chemical treatments, phenotypes, and perturbed pathways measured by differential gene expression and high-throughput screening targets. Sub-networks of cpAOPs for a reference chemical (carbon tetrachloride, CCl4) and outcome (hepatic steatosis) were extracted using the network topology. Comparison of the cpAOP subnetworks to published mechanistic descriptions for both CCl4 toxicity and hepatic steatosis demonstrate that computational approaches can be used to replicate manually curated AOPs and identify pathway targets that lack genomic mar

  1. Network Restoration for Next-Generation Communication and Computing Networks

    Directory of Open Access Journals (Sweden)

    B. S. Awoyemi

    2018-01-01

    Full Text Available Network failures are undesirable but inevitable occurrences for most modern communication and computing networks. A good network design must be robust enough to handle sudden failures, maintain traffic flow, and restore failed parts of the network within a permissible time frame, at the lowest cost achievable and with as little extra complexity in the network as possible. Emerging next-generation (xG communication and computing networks such as fifth-generation networks, software-defined networks, and internet-of-things networks have promises of fast speeds, impressive data rates, and remarkable reliability. To achieve these promises, these complex and dynamic xG networks must be built with low failure possibilities, high network restoration capacity, and quick failure recovery capabilities. Hence, improved network restoration models have to be developed and incorporated in their design. In this paper, a comprehensive study on network restoration mechanisms that are being developed for addressing network failures in current and emerging xG networks is carried out. Open-ended problems are identified, while invaluable ideas for better adaptation of network restoration to evolving xG communication and computing paradigms are discussed.

  2. Design and applications of Computed Industrial Tomographic Imaging System (CITIS)

    International Nuclear Information System (INIS)

    Ramakrishna, G.S.; Umesh Kumar; Datta, S.S.; Rao, S.M.

    1996-01-01

    Computed tomographic imaging is an advanced technique for nondestructive testing (NDT) and examination. For the first time in India, a computer-aided tomography system for testing industrial components has been indigenously developed at BARC and successfully demonstrated. In addition to computed tomography (CT), the system can also perform digital radiography (DR), making it a powerful tool for NDT applications with wide uses in the nuclear, space and allied fields. The authors have developed a computed industrial tomographic imaging system with a cesium-137 gamma radiation source for nondestructive examination of engineering and industrial specimens. This presentation highlights the design and development of a prototype system and its software for image reconstruction, simulation and display. The paper also describes results obtained with several test specimens, current developments, and the possibility of using neutrons as well as high-energy X-rays in computed tomography. (author)

  3. Multi-Detector Computed Tomography Imaging Techniques in Arterial Injuries

    Directory of Open Access Journals (Sweden)

    Cameron Adler

    2018-04-01

    Full Text Available Cross-sectional imaging has become a critical tool in the evaluation of arterial injuries. In particular, angiography using computed tomography (CT) is the imaging modality of choice. A variety of techniques and options are available when evaluating for arterial injuries. Techniques involve contrast bolus timing, various phases of contrast enhancement, multiplanar reconstruction, volume rendering, and maximum intensity projection. After the images are rendered, a variety of features may be seen that establish the diagnosis of injury. This article provides a general overview of the techniques, important findings, and pitfalls in cross-sectional imaging of arterial injuries, particularly in relation to computed tomography. In addition, future directions of computed tomography, including a few techniques currently in development, are also discussed.

  4. Students’ needs of Computer Science: learning about image processing

    Directory of Open Access Journals (Sweden)

    Juana Marlen Tellez Reinoso

    2009-12-01

    Full Text Available Learning image processing, specifically in the Photoshop application, is one of the objectives of the Computer Science specialty of the Education degree, intended to guarantee the preparation of students as future professionals and to help every citizen of our country attain a comprehensive general culture. For that purpose, a tutorial-type computer application entitled "Learning Image Processing" is proposed.

  5. Use of personal computer image for processing a magnetic resonance image (MRI)

    International Nuclear Information System (INIS)

    Yamamoto, Tetsuo; Tanaka, Hitoshi

    1988-01-01

    Image processing of MR images was attempted using a popular 16-bit personal computer. The computer processed the images on 256 x 256 and 512 x 512 matrices. The software language for image processing was Macro-Assembler running under MS-DOS. The original images, acquired with a 0.5 T superconducting machine (VISTA MR 0.5 T, Picker International), were transferred to the computer on flexible diskettes. Image processing operations, such as display of the image on the monitor, contrast enhancement, unsharp-mask contrast enhancement, various filtering processes, edge detection, and the color histogram, were completed in 1.6 to 67 seconds, indicating that a commercial personal computer has sufficient capability for routine clinical MRI processing. (author)

  6. Factor Xa generation by computational modeling: an additional discriminator to thrombin generation evaluation.

    Directory of Open Access Journals (Sweden)

    Kathleen E Brummel-Ziedins

    Full Text Available Factor Xa (fXa) is a critical enzyme in blood coagulation that is responsible for the initiation and propagation of thrombin generation. Previously we have shown that analysis of computationally generated thrombin profiles is a tool to investigate hemostasis in various populations. In this study, we evaluate the potential of computationally derived time courses of fXa generation as another approach for investigating thrombotic risk. Utilizing the case (n = 473) and control (n = 426) populations from the Leiden Thrombophilia Study and each individual's plasma protein factor composition for fII, fV, fVII, fVIII, fIX, fX, antithrombin and tissue factor pathway inhibitor, tissue factor-initiated total active fXa generation was assessed using a mathematical model. FXa generation was evaluated by the area under the curve (AUC), the maximum rate (MaxR) and level (MaxL), and the times to reach these, TMaxR and TMaxL, respectively. FXa generation was analyzed in the entire populations and in defined subgroups (by sex, age, body mass index, oral contraceptive use). The maximum rates and levels of fXa generation occur over a 10- to 12-fold range in both cases and controls. This variation is larger than that observed with thrombin (3- to 6-fold) in the same population. The greatest risk association was obtained using either the MaxR or MaxL of fXa generation, with an approximately 2.2-fold increased risk for individuals exceeding the 90th percentile. This risk was similar to that of thrombin generation (MaxR OR 2.6). Grouping defined by oral contraceptive (OC) use in the control population showed the biggest differences in fXa generation: a >60% increase in the MaxR upon OC use. FXa generation can distinguish between a subset of individuals characterized by overlapping thrombin generation profiles. Analysis of fXa generation is a phenotypic characteristic which may prove to be a more sensitive discriminator than thrombin generation among all individuals.
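    The summary metrics named in the abstract (AUC, MaxL, MaxR, TMaxL, TMaxR) can be computed from any simulated fXa time course; the sketch below is a generic illustration of those definitions, not the study's coagulation model:

```python
import numpy as np

def generation_metrics(t, conc):
    """Summarise a simulated factor Xa generation time course by the metrics
    used in the study: AUC, maximum level/rate, and times to reach them."""
    auc = float(np.sum(0.5 * (conc[1:] + conc[:-1]) * np.diff(t)))  # trapezoid rule
    rate = np.gradient(conc, t)            # numerical d[fXa]/dt
    i_level, i_rate = int(np.argmax(conc)), int(np.argmax(rate))
    return {"AUC": auc,
            "MaxL": float(conc[i_level]), "TMaxL": float(t[i_level]),
            "MaxR": float(rate[i_rate]), "TMaxR": float(t[i_rate])}
```

    On a toy curve such as conc = t * exp(-t), the level peaks at t = 1 while the rate peaks earlier, reproducing the qualitative ordering TMaxR < TMaxL seen in generation profiles.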

  7. The Next Generation ARC Middleware and ATLAS Computing Model

    International Nuclear Information System (INIS)

    Filipčič, Andrej; Cameron, David; Konstantinov, Aleksandr; Karpenko, Dmytro; Smirnova, Oxana

    2012-01-01

    The distributed NDGF Tier-1 and associated NorduGrid clusters are well integrated into the ATLAS computing environment but follow a slightly different paradigm than other ATLAS resources. The current paradigm does not divide the sites as in the commonly used hierarchical model, but rather treats them as a single storage endpoint and a pool of distributed computing nodes. The next generation ARC middleware with its several new technologies provides new possibilities in development of the ATLAS computing model, such as pilot jobs with pre-cached input files, automatic job migration between the sites, integration of remote sites without connected storage elements, and automatic brokering for jobs with non-standard resource requirements. ARC's data transfer model provides an automatic way for the computing sites to participate in ATLAS’ global task management system without requiring centralised brokering or data transfer services. The powerful API combined with Python and Java bindings can easily be used to build new services for job control and data transfer. Integration of the ARC core into the EMI middleware provides a natural way to implement the new services using the ARC components

  8. Unconventional methods of imaging: computational microscopy and compact implementations

    Science.gov (United States)

    McLeod, Euan; Ozcan, Aydogan

    2016-07-01

    In the past two decades or so, there has been a renaissance of optical microscopy research and development. Much work has been done in an effort to improve the resolution and sensitivity of microscopes, while at the same time to introduce new imaging modalities, and make existing imaging systems more efficient and more accessible. In this review, we look at two particular aspects of this renaissance: computational imaging techniques and compact imaging platforms. In many cases, these aspects go hand-in-hand because the use of computational techniques can simplify the demands placed on optical hardware in obtaining a desired imaging performance. In the first main section, we cover lens-based computational imaging, in particular, light-field microscopy, structured illumination, synthetic aperture, Fourier ptychography, and compressive imaging. In the second main section, we review lensfree holographic on-chip imaging, including how images are reconstructed, phase recovery techniques, and integration with smart substrates for more advanced imaging tasks. In the third main section we describe how these and other microscopy modalities have been implemented in compact and field-portable devices, often based around smartphones. Finally, we conclude with some comments about opportunities and demand for better results, and where we believe the field is heading.

  9. Contemporary imaging: Magnetic resonance imaging, computed tomography, and interventional radiology

    International Nuclear Information System (INIS)

    Goldberg, H.I.; Higgins, C.; Ring, E.J.

    1985-01-01

    In addition to discussing the most recent advances in magnetic resonance imaging (MRI), computerized tomography (CT), and the vast array of interventional procedures, this book explores the appropriate clinical applications of each of these important modalities

  10. Pulmonary dynamics and functional imaging with krypton-81m as related to generator delivery characteristics

    International Nuclear Information System (INIS)

    Kaplan, E.

    1985-01-01

    Krypton-81m supplied from a generator by continuous elution with air is used with a gamma-camera computer system to produce a sequence of images from multiple breaths, which reconstructs the time-activity images of the breathing human lung. Functional images are produced by subsequent derivation to show specific variables of the dynamic sequences. The dynamic, quantitative, and regional aspects of the respiratory cycle are thus made available in a single study. Delivery of a constant ratio of 81mKr to air is required to accurately produce these various studies.
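    Deriving functional images from such a dynamic sequence can be sketched generically: each pixel's time-activity curve is collapsed into summary parameters (here peak activity and the frame index of that peak; the study's actual functional variables are specific to the respiratory cycle):

```python
import numpy as np

def functional_images(frames):
    """Collapse a dynamic sequence of frames (T, H, W) into two functional
    images: per-pixel peak activity and per-pixel frame index of that peak."""
    peak = frames.max(axis=0)
    time_to_peak = frames.argmax(axis=0)
    return peak, time_to_peak
```

    Any per-pixel derivation over the time axis (rate of washout, phase of the breathing cycle, etc.) follows the same pattern of reducing the T axis to a single parametric value.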

  11. Computer functions in overall plant control of candu generating stations

    International Nuclear Information System (INIS)

    Chou, Q.B.; Stokes, H.W.

    1976-01-01

    System Planning Specifications form the basic requirements for the performance of the plant, including its response to abnormal situations. The rules for the computer control programs are derived from these, taking into account limitations imposed by the reactor, heat transport and turbine-generator systems. The paper outlines these specifications and the limitations imposed by the major items of plant equipment. It describes the functions of each of the main programs, their interactions and the control modes used in Ontario Hydro's existing nuclear stations or proposed for future stations. Some simulation results showing the performance of the overall unit control system, and plans for future studies, are discussed. (orig.) [de

  12. 1st International Conference on Computer Vision and Image Processing

    CERN Document Server

    Kumar, Sanjeev; Roy, Partha; Sen, Debashis

    2017-01-01

    This edited volume contains technical contributions in the field of computer vision and image processing presented at the First International Conference on Computer Vision and Image Processing (CVIP 2016). The contributions are thematically divided based on their relation to operations at the lower, middle and higher levels of vision systems, and their applications. The technical contributions in the areas of sensors, acquisition, visualization and enhancement are classified as related to low-level operations. They discuss various modern topics – reconfigurable image system architecture, Scheimpflug camera calibration, real-time autofocusing, climate visualization, tone mapping, super-resolution and image resizing. The technical contributions in the areas of segmentation and retrieval are classified as related to mid-level operations. They discuss some state-of-the-art techniques – non-rigid image registration, iterative image partitioning, egocentric object detection and video shot boundary detection. Th...

  13. DE-BLURRING SINGLE PHOTON EMISSION COMPUTED TOMOGRAPHY IMAGES USING WAVELET DECOMPOSITION

    Directory of Open Access Journals (Sweden)

    Neethu M. Sasi

    2016-02-01

    Full Text Available Single photon emission computed tomography imaging is a popular nuclear medicine imaging technique which generates images by detecting radiations emitted by radioactive isotopes injected into the human body. Scattering of these emitted radiations introduces blur in this type of image. This paper proposes an image processing technique to enhance cardiac single photon emission computed tomography images by reducing the blur in the image. The algorithm works in two main stages. In the first stage a maximum likelihood estimate of the point spread function and the true image is obtained. In the second stage the Lucy-Richardson algorithm is applied to the selected wavelet coefficients of the true image estimate. The significant contribution of this paper is that processing of images is done in the wavelet domain. Pre-filtering is also done as a sub-stage to avoid unwanted ringing effects. Real cardiac images are used for the quantitative and qualitative evaluations of the algorithm. Blur metric, peak signal-to-noise ratio and the Tenengrad criterion are used as quantitative measures. Comparison against other existing de-blurring algorithms is also done. The simulation results indicate that the proposed method effectively reduces the blur present in the image.
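    The Lucy-Richardson update at the heart of the second stage can be sketched directly with FFTs (this minimal version omits the wavelet-coefficient selection and pre-filtering described in the paper, and assumes a known, image-sized PSF centred at the origin):

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=50, eps=1e-12):
    """Richardson-Lucy deconvolution with circular (FFT) boundary handling.
    `psf` must have the image's shape and be centred at the origin (wrapped)."""
    otf = np.fft.fft2(psf)
    estimate = np.full_like(blurred, blurred.mean())  # flat positive start
    for _ in range(n_iter):
        reblurred = np.real(np.fft.ifft2(np.fft.fft2(estimate) * otf))
        ratio = blurred / np.maximum(reblurred, eps)
        # Correlate the ratio with the PSF (conjugate OTF = flipped PSF).
        estimate = estimate * np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(otf)))
    return estimate
```

    The multiplicative update preserves non-negativity, and on a noiseless blurred point source it progressively re-concentrates the energy toward the original location.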

  14. Quantification of the radionuclide image: Theoretical concepts and the role of the computer

    International Nuclear Information System (INIS)

    Rabinowitz, A.; Wexler, J.P.; Blaufox, M.D.

    1984-01-01

    The purpose of this chapter is to provide the reader with the basic fundamentals for understanding dynamic and quantitative imaging studies. The computer, which is a basic requirement for the optimum generation and analysis of these data, is discussed here. These studies require an understanding of physiologic and mathematic principles and of the workings of the machine that is used to record them

  15. Application of cone beam computed tomography in facial imaging science

    Institute of Scientific and Technical Information of China (English)

    Zacharias Fourie; Janalt Damstra; Yijin Ren

    2012-01-01

    The use of three-dimensional (3D) methods for facial imaging has increased significantly over the past years. Traditional 2D imaging has gradually been replaced by 3D imaging in different disciplines, particularly in the fields of orthodontics, maxillofacial surgery, plastic and reconstructive surgery, neurosurgery and the forensic sciences. In most cases, 3D facial imaging overcomes the limitations of traditional 2D methods and provides the clinician with more accurate information regarding the soft tissues and the underlying skeleton. The aim of this study was to review the types of imaging methods used for facial imaging. It is important to recognize the differences between the types of 3D imaging methods, as their applications and indications may differ. Since 3D cone beam computed tomography (CBCT) imaging will play an increasingly important role in orthodontics and orthognathic surgery, special emphasis is placed on discussing CBCT applications in facial evaluations.

  16. A Generative Computer Model for Preliminary Design of Mass Housing

    Directory of Open Access Journals (Sweden)

    Ahmet Emre DİNÇER

    2014-05-01

    Full Text Available Today, we live in what we call the "Information Age", an age in which information technologies are constantly being renewed and developed. Out of this has emerged a new approach called "Computational Design" or "Digital Design". In addition to significantly influencing all fields of engineering, this approach has come to play a similar role in all stages of the design process in the architectural field. Besides providing solutions for analytical problems in design, such as cost estimation, circulation system evaluation and environmental effects, which are similar to engineering problems, the approach is also used in the evaluation, representation and presentation of traditionally designed buildings. With developments in software and hardware technology, it has evolved into the design of architectural products and their production with digital tools, starting from the preliminary design stages. This paper presents a digital model which may be used in the preliminary stage of mass housing design with Cellular Automata, one of the generative design systems based on computational design approaches. This computational model, developed with scripts in 3ds Max software, has been applied to the site plan design of mass housing, floor plan organizations made from user preferences, and facade designs. Using the developed computer model, many alternative housing types can be rapidly produced. The interactive design tool of this computational model allows the user to transfer dimensional and functional housing preferences by means of the interface prepared for the model. The results of the study are discussed in the light of innovative architectural approaches.
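    A Cellular Automaton of the kind used for such generative site-plan studies can be sketched with a simple growth rule over a binary occupancy grid (the rule and parameters below are illustrative, not the authors' 3ds Max rule set, which is driven by user preferences):

```python
import numpy as np

def evolve(grid, steps=1):
    """A simple 2D cellular automaton over a binary occupancy grid: an empty
    parcel becomes a housing unit with 1-2 built neighbours, and a built
    parcel survives unless it has more than 3 (overcrowding)."""
    g = grid.copy()
    for _ in range(steps):
        neighbours = sum(np.roll(np.roll(g, di, axis=0), dj, axis=1)
                         for di in (-1, 0, 1) for dj in (-1, 0, 1)
                         if (di, dj) != (0, 0))
        born = (g == 0) & ((neighbours == 1) | (neighbours == 2))
        survives = (g == 1) & (neighbours <= 3)
        g = (born | survives).astype(g.dtype)
    return g
```

    Each run from a different seed grid yields a different site layout, which is how such models generate many alternative housing configurations rapidly.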

  17. The diffractive achromat full spectrum computational imaging with diffractive optics

    KAUST Repository

    Peng, Yifan

    2016-07-11

    Diffractive optical elements (DOEs) have recently drawn great attention in computational imaging because they can drastically reduce the size and weight of imaging devices compared to their refractive counterparts. However, the inherent strong dispersion is a tremendous obstacle that limits the use of DOEs in full spectrum imaging, causing unacceptable loss of color fidelity in the images. In particular, metamerism introduces a data dependency in the image blur, which has been neglected in computational imaging methods so far. We introduce both a diffractive achromat based on computational optimization, as well as a corresponding algorithm for correction of residual aberrations. Using this approach, we demonstrate high fidelity color diffractive-only imaging over the full visible spectrum. In the optical design, the height profile of a diffractive lens is optimized to balance the focusing contributions of different wavelengths for a specific focal length. The spectral point spread functions (PSFs) become nearly identical to each other, creating approximately spectrally invariant blur kernels. This property guarantees good color preservation in the captured image and facilitates the correction of residual aberrations in our fast two-step deconvolution without additional color priors. We demonstrate our design of diffractive achromat on a 0.5mm ultrathin substrate by photolithography techniques. Experimental results show that our achromatic diffractive lens produces high color fidelity and better image quality in the full visible spectrum. © 2016 ACM.
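    Because the optimized DOE produces nearly spectrally invariant blur kernels, the residual blur can be removed by a single non-blind deconvolution. The sketch below uses a plain frequency-domain Wiener filter as a simplified stand-in for the paper's two-step method; the Gaussian PSF, test image, and SNR value are illustrative assumptions.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=1e4):
    """Frequency-domain Wiener deconvolution with a scalar SNR prior."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # Wiener filter
    return np.real(np.fft.ifft2(W * G))

# Illustrative PSF (centered Gaussian) and test image.
x = np.arange(-16, 16)
X, Y = np.meshgrid(x, x)
psf = np.exp(-(X**2 + Y**2) / (2 * 2.0**2))
psf /= psf.sum()
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0

H = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))  # simulated capture
restored = wiener_deconvolve(blurred, psf)
```

    With a spectrally invariant kernel, one such deconvolution suffices for all color channels, which is what removes the need for additional color priors.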

  18. Image processing and computer graphics in radiology. Pt. A

    International Nuclear Information System (INIS)

    Toennies, K.D.

    1993-01-01

    The reports give a full review of all aspects of digital imaging in radiology which are of significance to image processing and the subsequent picture archiving and communication techniques. The review is strongly practice-oriented and illustrates the various contributions from specialized areas of the computer sciences, such as computer vision, computer graphics, database systems and information and communication systems, man-machine interactions and software engineering. Methods and models available are explained and assessed for their respective performance and value, and basic principles are briefly explained. (DG) [de

  19. Image processing and computer graphics in radiology. Pt. B

    International Nuclear Information System (INIS)

    Toennies, K.D.

    1993-01-01

    The reports give a full review of all aspects of digital imaging in radiology which are of significance to image processing and the subsequent picture archiving and communication techniques. The review is strongly practice-oriented and illustrates the various contributions from specialized areas of the computer sciences, such as computer vision, computer graphics, database systems and information and communication systems, man-machine interactions and software engineering. Methods and models available are explained and assessed for their respective performance and value, and basic principles are briefly explained. (DG) [de

  20. Learning a generative model of images by factoring appearance and shape.

    Science.gov (United States)

    Le Roux, Nicolas; Heess, Nicolas; Shotton, Jamie; Winn, John

    2011-03-01

    Computer vision has grown tremendously in the past two decades. Despite all efforts, existing attempts at matching parts of the human visual system's extraordinary ability to understand visual scenes lack either scope or power. By combining the advantages of general low-level generative models and powerful layer-based and hierarchical models, this work aims at being a first step toward richer, more flexible models of images. After comparing various types of restricted Boltzmann machines (RBMs) able to model continuous-valued data, we introduce our basic model, the masked RBM, which explicitly models occlusion boundaries in image patches by factoring the appearance of any patch region from its shape. We then propose a generative model of larger images using a field of such RBMs. Finally, we discuss how masked RBMs could be stacked to form a deep model able to generate more complicated structures and suitable for various tasks such as segmentation or object recognition.
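    The building block that the masked RBM extends can be illustrated with a plain Bernoulli RBM trained by one-step contrastive divergence (CD-1). This sketch models tiny binary bar patterns rather than occlusion-aware image patches; the data, layer sizes, and learning rate are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal Bernoulli RBM trained with one-step contrastive divergence."""
    def __init__(self, n_vis, n_hid, rng):
        self.rng = rng
        self.W = 0.1 * rng.standard_normal((n_vis, n_hid))
        self.b = np.zeros(n_vis)   # visible biases
        self.c = np.zeros(n_hid)   # hidden biases

    def cd1(self, v0, lr=0.1):
        ph0 = sigmoid(v0 @ self.W + self.c)                 # hidden probabilities
        h0 = (self.rng.random(ph0.shape) < ph0).astype(float)
        pv1 = sigmoid(h0 @ self.W.T + self.b)               # reconstruction
        ph1 = sigmoid(pv1 @ self.W + self.c)
        self.W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
        self.b += lr * (v0 - pv1).mean(0)
        self.c += lr * (ph0 - ph1).mean(0)
        return np.abs(v0 - pv1).mean()                      # reconstruction error

rng = np.random.default_rng(1)
# Toy "patches": two 4-pixel bar patterns, 100 samples in total.
data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]] * 50, dtype=float)
rbm = RBM(4, 8, rng)
errs = [rbm.cd1(data) for _ in range(200)]
```

    The masked RBM of the paper adds a separate shape model gating which RBM "owns" each pixel, so appearance and occlusion boundaries are factored rather than entangled as they are in this plain model.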

  1. Computer Vision and Image Processing: A Paper Review

    Directory of Open Access Journals (Sweden)

    victor - wiley

    2018-02-01

    Full Text Available Computer vision has been studied from many perspectives. It expands from raw data recording into techniques and ideas combining digital image processing, pattern recognition, machine learning and computer graphics. Its wide usage has attracted many scholars to integrate it with many disciplines and fields. This paper provides a survey of recent technologies and theoretical concepts explaining the development of computer vision, especially related to image processing, across different areas of application. Computer vision helps scholars analyze images and video to obtain necessary information, understand information about events or descriptions, and recognize scenic patterns. It draws on methods from multiple application domains involving massive data analysis. This paper summarizes recent developments in computer vision, image processing, and related studies. We categorize the computer vision mainstream into groups such as image processing, object recognition, and machine learning, and also provide brief, up-to-date information about the techniques and their performance.

  2. Generative Topic Modeling in Image Data Mining and Bioinformatics Studies

    Science.gov (United States)

    Chen, Xin

    2012-01-01

    Probabilistic topic models have been developed for applications in various domains, such as text mining, information retrieval, computer vision, and bioinformatics. In this thesis, we focus on developing novel probabilistic topic models for image mining and bioinformatics studies. Specifically, a probabilistic topic-connection (PTC) model…

  3. PARAGON-IPS: A Portable Imaging Software System For Multiple Generations Of Image Processing Hardware

    Science.gov (United States)

    Montelione, John

    1989-07-01

    Paragon-IPS is a comprehensive software system which is available on virtually all generations of image processing hardware. It is designed for image processing departments and for scientists and engineers doing image processing full-time. It is being used by leading R&D labs in government agencies and Fortune 500 companies. Applications include reconnaissance, non-destructive testing, remote sensing, medical imaging, etc.

  4. Applications of evolutionary computation in image processing and pattern recognition

    CERN Document Server

    Cuevas, Erik; Perez-Cisneros, Marco

    2016-01-01

    This book presents the use of efficient Evolutionary Computation (EC) algorithms for solving diverse real-world image processing and pattern recognition problems. It provides an overview of the different aspects of evolutionary methods in order to enable the reader in reaching a global understanding of the field and, in conducting studies on specific evolutionary techniques that are related to applications in image processing and pattern recognition. It explains the basic ideas of the proposed applications in a way that can also be understood by readers outside of the field. Image processing and pattern recognition practitioners who are not evolutionary computation researchers will appreciate the discussed techniques beyond simple theoretical tools since they have been adapted to solve significant problems that commonly arise on such areas. On the other hand, members of the evolutionary computation community can learn the way in which image processing and pattern recognition problems can be translated into an...
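    As a toy instance of evolutionary computation applied to an image processing problem, a genetic algorithm can search for a segmentation threshold using Otsu's between-class variance as the fitness function. The population size, mutation scale, and synthetic bimodal data below are illustrative assumptions, not an algorithm from the book.

```python
import numpy as np

def between_class_variance(img, t):
    """Otsu's criterion: fitness of a candidate threshold t."""
    fg, bg = img[img >= t], img[img < t]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    w1, w2 = fg.size / img.size, bg.size / img.size
    return w1 * w2 * (fg.mean() - bg.mean()) ** 2

def ga_threshold(img, pop_size=20, gens=30, rng=None):
    """Evolve a population of candidate thresholds by selection + mutation."""
    if rng is None:
        rng = np.random.default_rng(0)
    pop = rng.uniform(img.min(), img.max(), pop_size)
    for _ in range(gens):
        fit = np.array([between_class_variance(img, t) for t in pop])
        parents = pop[np.argsort(fit)[-pop_size // 2:]]          # selection
        children = (rng.choice(parents, pop_size // 2)
                    + rng.normal(0, 0.02, pop_size // 2))         # mutation
        pop = np.concatenate([parents, children])
    fit = np.array([between_class_variance(img, t) for t in pop])
    return pop[np.argmax(fit)]

# Bimodal synthetic intensities: background near 0.2, object near 0.8.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(0.2, 0.05, 500), rng.normal(0.8, 0.05, 500)])
t = ga_threshold(img)  # converges near the valley between the two modes
```

    For a scalar parameter this is overkill, but the same select-mutate loop scales to the multi-parameter filter and template problems the book actually targets.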

  5. Symmetric and asymmetric hybrid cryptosystem based on compressive sensing and computer generated holography

    Science.gov (United States)

    Ma, Lihong; Jin, Weimin

    2018-01-01

    A novel symmetric and asymmetric hybrid optical cryptosystem is proposed based on compressive sensing combined with computer generated holography. In this method there are six encryption keys, among which the two decryption phase masks differ from the two random phase masks used in the encryption process; the encryption system therefore has features of both symmetric and asymmetric cryptography. On the other hand, because computer generated holography can flexibly digitalize the encrypted information, compressive sensing can significantly reduce the data volume, and the final encrypted image is a real-valued function obtained by phase truncation, the method favors the storage and transmission of the encrypted data. The experimental results demonstrate that the proposed encryption scheme boosts security and has high robustness against noise and occlusion attacks.
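    The optical core of such schemes is typically a variant of double random phase encoding (DRPE), in which one random phase mask is applied in the spatial domain and a second in the Fourier domain. The numerical sketch below shows plain symmetric DRPE only, without the compressive-sensing and phase-truncation stages of the proposed hybrid system.

```python
import numpy as np

rng = np.random.default_rng(42)
img = rng.random((64, 64))                       # plaintext image

# Two independent random phase masks act as the (symmetric) keys.
m1 = np.exp(2j * np.pi * rng.random(img.shape))  # spatial-domain mask
m2 = np.exp(2j * np.pi * rng.random(img.shape))  # Fourier-domain mask

# Encryption: 4f-style double random phase encoding.
cipher = np.fft.ifft2(np.fft.fft2(img * m1) * m2)

# Decryption undoes both masks with their complex conjugates.
recovered = np.real(np.fft.ifft2(np.fft.fft2(cipher) * np.conj(m2)) * np.conj(m1))
```

    In the asymmetric extension described in the abstract, phase truncation makes the decryption masks differ from the encryption masks, so this symmetric round trip is only the starting point.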

  6. Image encryption using random sequence generated from generalized information domain

    International Nuclear Information System (INIS)

    Zhang Xia-Yan; Wu Jie-Hua; Zhang Guo-Ji; Li Xuan; Ren Ya-Zhou

    2016-01-01

    A novel image encryption method based on the random sequence generated from the generalized information domain and permutation–diffusion architecture is proposed. The random sequence is generated by reconstruction from the generalized information file and discrete trajectory extraction from the data stream. The trajectory address sequence is used to generate a P-box to shuffle the plain image while random sequences are treated as keystreams. A new factor called drift factor is employed to accelerate and enhance the performance of the random sequence generator. An initial value is introduced to make the encryption method an approximately one-time pad. Experimental results show that the random sequences pass the NIST statistical test with a high ratio and extensive analysis demonstrates that the new encryption scheme has superior security. (paper)
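    The permutation-diffusion architecture can be sketched with a generic chaotic keystream. Here a logistic map stands in for the paper's generalized-information-domain generator (an assumption for illustration): the argsort of the keystream acts as the P-box that shuffles pixel positions, and a chained XOR provides diffusion.

```python
import numpy as np

def logistic_stream(x0, n, r=3.99):
    """Chaotic keystream; stand-in for the paper's sequence generator."""
    xs = np.empty(n)
    for i in range(n):
        x0 = r * x0 * (1.0 - x0)
        xs[i] = x0
    return xs

def encrypt(plain, x0=0.3617):
    flat = plain.ravel()
    ks = logistic_stream(x0, flat.size)
    perm = np.argsort(ks)                  # P-box: position shuffle
    shuffled = flat[perm]
    key = (ks * 256).astype(np.uint8)      # keystream bytes
    out = np.empty_like(shuffled)
    prev = np.uint8(0)
    for i in range(shuffled.size):         # diffusion: chain each byte
        prev = shuffled[i] ^ key[i] ^ prev
        out[i] = prev
    return out.reshape(plain.shape)

def decrypt(cipher, x0=0.3617):
    flat = cipher.ravel()
    ks = logistic_stream(x0, flat.size)
    perm = np.argsort(ks)
    key = (ks * 256).astype(np.uint8)
    shuffled = np.empty_like(flat)
    prev = np.uint8(0)
    for i in range(flat.size):             # undo the diffusion chain
        shuffled[i] = flat[i] ^ key[i] ^ prev
        prev = flat[i]
    return shuffled[np.argsort(perm)].reshape(cipher.shape)
```

    Making the initial value x0 depend on the plaintext, as the paper's drift factor and initial value do, is what turns such a scheme into an approximately one-time pad.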

  7. Fluorescence In Situ Hybridization (FISH Signal Analysis Using Automated Generated Projection Images

    Directory of Open Access Journals (Sweden)

    Xingwei Wang

    2012-01-01

    Full Text Available Fluorescence in situ hybridization (FISH) tests provide promising molecular imaging biomarkers to more accurately and reliably detect and diagnose cancers and genetic disorders. Since current manual FISH signal analysis is inefficient and inconsistent, which limits its clinical utility, developing automated FISH image scanning systems and computer-aided detection (CAD) schemes has attracted research interest. To acquire high-resolution FISH images in a multi-spectral scanning mode, a huge amount of image data, comprising a stack of multiple three-dimensional (3-D) image slices, is generated from a single specimen. Automated preprocessing of these scanned images to eliminate non-useful and redundant data is important to make automated FISH tests acceptable in clinical applications. In this study, a dual-detector fluorescence image scanning system was applied to scan four specimen slides with FISH-probed chromosome X. A CAD scheme was developed to map the multiple imaging slices that record FISH-probed signals into 2-D projection images. The CAD scheme was then applied to each projection image to detect analyzable interphase cells using an adaptive multiple-threshold algorithm, identify FISH-probed signals using a top-hat transform, and compute the ratios between normal and abnormal cells. To assess CAD performance, the FISH-probed signals were also independently detected visually by an observer. The Kappa coefficients for agreement between CAD and observer ranged from 0.69 to 1.0 in detecting/counting FISH signal spots in the four testing samples. The study demonstrated the feasibility of automated FISH signal analysis by applying a CAD scheme to automatically generated 2-D projection images.
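    The projection and signal-detection steps can be sketched as follows: a maximum-intensity projection collapses the 3-D stack into a single 2-D image, and a white top-hat transform then isolates compact bright FISH-like spots against the slowly varying background. The synthetic stack, structuring-element size, and threshold are illustrative; the paper's scheme uses an adaptive multiple-threshold algorithm.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
# Synthetic 3-D stack (8 slices) with two bright FISH-like spots.
stack = rng.normal(10, 1, (8, 64, 64))
stack[4, 20, 20] += 40
stack[5, 40, 45] += 40

projection = stack.max(axis=0)                     # 2-D maximum-intensity projection
tophat = ndimage.white_tophat(projection, size=5)  # remove smooth background
spots, n = ndimage.label(tophat > 20)              # count candidate FISH signals
print(n)  # → 2
```

    In a real CAD pipeline, the labeled spots would then be counted per detected interphase cell to estimate the normal/abnormal cell ratio.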

  8. Physical image quality of computed radiography in mammography system

    International Nuclear Information System (INIS)

    Norriza Mohd Isa; Muhammad Jamal Isa; Wan Muhamad Saridan Wan Hassan; Fatimah Othman

    2013-01-01

    Full-text: Mammography is a screening procedure mostly used for early detection of breast cancer. In digital imaging, computed radiography is a cost-effective technology that applies an indirect-conversion detector. This paper presents measurements of physical image quality parameters, namely the modulation transfer function (MTF), normalized noise power spectrum (NNPS) and detective quantum efficiency (DQE), of a computed radiography mammography system. The MTF was calculated from two different orientations of slanted-edge images of an edge test device, and the NNPS was estimated using a flat-field image. Both images were acquired using a standard mammography beam quality. The DQE was determined by applying the MTF and NNPS values in our in-house software program. Both orientations have similar DQE characteristics. (author)
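    A common formulation combines the three measured quantities as DQE(f) = MTF²(f) / (q · NNPS(f)), with q the incident photon fluence per unit area; normalization conventions vary between standards, and the curves and fluence below are purely illustrative.

```python
import numpy as np

def dqe(f, mtf, nnps, fluence):
    """DQE(f) = MTF(f)^2 / (q * NNPS(f)); q is photons per unit area."""
    return mtf**2 / (fluence * nnps)

f = np.linspace(0.0, 5.0, 11)        # spatial frequency, cycles/mm
mtf = np.exp(-f / 2.5)               # illustrative MTF curve
nnps = np.full_like(f, 2.0e-5)       # illustrative flat NNPS, mm^2
q = 1.0e5                            # assumed fluence, photons/mm^2

print(round(float(dqe(f, mtf, nnps, q)[0]), 3))  # → 0.5
```

    DQE(0) summarizes the fraction of incident quanta the detector effectively uses; the falling MTF makes DQE decrease toward higher spatial frequencies.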

  9. Development of computed tomography system and image reconstruction algorithm

    International Nuclear Information System (INIS)

    Khairiah Yazid; Mohd Ashhar Khalid; Azaman Ahmad; Khairul Anuar Mohd Salleh; Ab Razak Hamzah

    2006-01-01

    Computed tomography is one of the most advanced and powerful nondestructive inspection techniques and is currently used in many different industries. In several CT systems, detection has been performed by a combination of an X-ray image intensifier and a charge-coupled device (CCD) camera, or by using a line-array detector. The recent development of X-ray flat panel detectors has made fast CT imaging feasible and practical. This paper therefore describes the arrangement of a new detection system using the existing high-resolution (127 μm pixel size) flat panel detector at MINT and the image reconstruction technique developed. The aim of the project is to develop a prototype flat-panel-detector-based CT imaging system for NDE. The prototype consists of an X-ray tube, a flat panel detector system, a rotation table and a computer system to control the sample motion and image acquisition. The project is thus divided into two major tasks: first, developing the image reconstruction algorithm, and second, integrating the X-ray imaging components into one CT system. An image reconstruction algorithm using the filtered back-projection method was developed and compared to other techniques. MATLAB was used for the simulations and computations in this project. (Author)
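    Filtered back-projection can be sketched in a few lines: filter each projection with a ramp filter in the Fourier domain, then smear it back across the image at its acquisition angle. The numpy/scipy sketch below runs on a synthetic disc phantom and is not the authors' MATLAB implementation.

```python
import numpy as np
from scipy.ndimage import rotate

def radon(img, angles):
    """Parallel-beam sinogram: rotate the image and sum along columns."""
    return np.stack([rotate(img, a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

def ramp_filter(sino):
    """Apply the |f| ramp filter to each projection in the Fourier domain."""
    freqs = np.abs(np.fft.fftfreq(sino.shape[1]))
    return np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * freqs, axis=1))

def fbp(sino, angles):
    """Back-project each filtered projection at its acquisition angle."""
    n = sino.shape[1]
    recon = np.zeros((n, n))
    for proj, a in zip(ramp_filter(sino), angles):
        recon += rotate(np.tile(proj, (n, 1)), -a, reshape=False, order=1)
    return recon * np.pi / (2 * len(angles))

# Disc phantom reconstructed from 60 projections over 180 degrees.
n = 64
y, x = np.mgrid[:n, :n] - n / 2
phantom = (x**2 + y**2 < 15**2).astype(float)
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
recon = fbp(radon(phantom, angles), angles)
```

    The ramp filter is what distinguishes FBP from plain back-projection, which would otherwise produce a heavily blurred reconstruction.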

  10. Second-harmonic generation imaging of collagen in ancient bone.

    Science.gov (United States)

    Thomas, B; McIntosh, D; Fildes, T; Smith, L; Hargrave, F; Islam, M; Thompson, T; Layfield, R; Scott, D; Shaw, B; Burrell, C L; Gonzalez, S; Taylor, S

    2017-12-01

    Second-harmonic generation imaging (SHG) captures triple helical collagen molecules near tissue surfaces. Biomedical research routinely utilizes various imaging software packages to quantify SHG signals for collagen content and distribution estimates in modern tissue samples including bone. For the first time using SHG, samples of modern, medieval, and ice age bones were imaged to test the applicability of SHG to ancient bone from a variety of ages, settings, and taxa. Four independent techniques including Raman spectroscopy, FTIR spectroscopy, radiocarbon dating protocols, and mass spectrometry-based protein sequencing, confirm the presence of protein, consistent with the hypothesis that SHG imaging detects ancient bone collagen. These results suggest that future studies have the potential to use SHG imaging to provide new insights into the composition of ancient bone, to characterize ancient bone disorders, to investigate collagen preservation within and between various taxa, and to monitor collagen decay regimes in different depositional environments.

  11. Second-harmonic generation imaging of collagen in ancient bone

    Directory of Open Access Journals (Sweden)

    B. Thomas

    2017-12-01

    Full Text Available Second-harmonic generation imaging (SHG captures triple helical collagen molecules near tissue surfaces. Biomedical research routinely utilizes various imaging software packages to quantify SHG signals for collagen content and distribution estimates in modern tissue samples including bone. For the first time using SHG, samples of modern, medieval, and ice age bones were imaged to test the applicability of SHG to ancient bone from a variety of ages, settings, and taxa. Four independent techniques including Raman spectroscopy, FTIR spectroscopy, radiocarbon dating protocols, and mass spectrometry-based protein sequencing, confirm the presence of protein, consistent with the hypothesis that SHG imaging detects ancient bone collagen. These results suggest that future studies have the potential to use SHG imaging to provide new insights into the composition of ancient bone, to characterize ancient bone disorders, to investigate collagen preservation within and between various taxa, and to monitor collagen decay regimes in different depositional environments.

  12. Generating porosity spectrum of carbonate reservoirs using ultrasonic imaging log

    Science.gov (United States)

    Zhang, Jie; Nie, Xin; Xiao, Suyun; Zhang, Chong; Zhang, Chaomo; Zhang, Zhansong

    2018-03-01

    Imaging logging tools provide an image of the borehole wall. Micro-resistivity imaging logging has been used to obtain borehole porosity spectra; however, resistivity imaging logging cannot cover the whole borehole wall. In this paper, we propose a method to calculate the porosity spectrum using ultrasonic imaging logging data. Based on the amplitude attenuation equation, we analyze the factors affecting the propagation of the wave in the drilling fluid and formation, and based on the bulk-volume rock model, the Wyllie equation and the Raymer equation, we establish various conversion models between the reflection coefficient β and porosity ϕ. Then we use the ultrasonic imaging logging and conventional wireline logging data to calculate the near-borehole formation porosity distribution spectrum. The porosity spectrum obtained from ultrasonic imaging data is compared with the one from micro-resistivity imaging data; the two are similar, with discrepancies caused by differences in borehole coverage and input data. We separate the porosity types by threshold-value segmentation and generate porosity-depth distribution curves by counting with equal depth spacing on the porosity image. Field application results are good and demonstrate the effectiveness of our method.
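    Of the conversion models mentioned, the Wyllie time-average relation is the simplest; expressed in sonic transit times it gives porosity directly. The matrix and fluid slownesses below are typical limestone/mud textbook values, not the paper's calibrated parameters.

```python
def wyllie_porosity(dt, dt_matrix=47.6, dt_fluid=189.0):
    """Wyllie time-average porosity from sonic transit time (us/ft).
    Defaults are typical limestone matrix and drilling-mud slownesses."""
    return (dt - dt_matrix) / (dt_fluid - dt_matrix)

print(round(wyllie_porosity(61.7), 3))  # → 0.1 (about 10 % porosity)
```

    The paper's models play the analogous role for the ultrasonic reflection coefficient β, mapping a measured wall response to a porosity estimate point by point over the image.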

  13. Self-motion perception: assessment by computer-generated animations

    Science.gov (United States)

    Parker, D. E.; Harm, D. L.; Sandoz, G. R.; Skinner, N. C.

    1998-01-01

    The goal of this research is more precise description of adaptation to sensory rearrangements, including microgravity, by development of improved procedures for assessing spatial orientation perception. Thirty-six subjects reported perceived self-motion following exposure to complex inertial-visual motion. Twelve subjects were assigned to each of 3 perceptual reporting procedures: (a) animation movie selection, (b) written report selection and (c) verbal report generation. The question addressed was: do reports produced by these procedures differ with respect to complexity and reliability? Following repeated (within-day and across-day) exposures to 4 different "motion profiles," subjects either (a) selected movies presented on a laptop computer, or (b) selected written descriptions from a booklet, or (c) generated self-motion verbal descriptions that corresponded most closely with their motion experience. One "complexity" and 2 reliability "scores" were calculated. Contrary to expectations, reliability and complexity scores were essentially equivalent for the animation movie selection and written report selection procedures. Verbal report generation subjects exhibited less complexity than did subjects in the other conditions and their reports were often ambiguous. The results suggest that, when selecting from carefully written descriptions and following appropriate training, people may be better able to describe their self-motion experience with words than is usually believed.

  14. Computer vision applications for coronagraphic optical alignment and image processing.

    Science.gov (United States)

    Savransky, Dmitry; Thomas, Sandrine J; Poyneer, Lisa A; Macintosh, Bruce A

    2013-05-10

    Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.

  15. FAST: framework for heterogeneous medical image computing and visualization.

    Science.gov (United States)

    Smistad, Erik; Bozorgi, Mohammadmehdi; Lindseth, Frank

    2015-11-01

    Computer systems are becoming increasingly heterogeneous in the sense that they consist of different processors, such as multi-core CPUs and graphic processing units. As the amount of medical image data increases, it is crucial to exploit the computational power of these processors. However, this is currently difficult due to several factors, such as driver errors, processor differences, and the need for low-level memory handling. This paper presents a novel FrAmework for heterogeneouS medical image compuTing and visualization (FAST). The framework aims to make it easier to simultaneously process and visualize medical images efficiently on heterogeneous systems. FAST uses common image processing programming paradigms and hides the details of memory handling from the user, while enabling the use of all processors and cores on a system. The framework is open-source, cross-platform and available online. Code examples and performance measurements are presented to show the simplicity and efficiency of FAST. The results are compared to the Insight Toolkit (ITK) and the Visualization Toolkit (VTK) and show that the presented framework is faster, with up to 20 times speedup on several common medical imaging algorithms. FAST enables efficient medical image computing and visualization on heterogeneous systems. Code examples and performance evaluations have demonstrated that the toolkit is both easy to use and performs better than existing frameworks, such as ITK and VTK.

  16. Computational Needs for the Next Generation Electric Grid Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Birman, Kenneth; Ganesh, Lakshmi; Renessee, Robbert van; Ferris, Michael; Hofmann, Andreas; Williams, Brian; Sztipanovits, Janos; Hemingway, Graham; University, Vanderbilt; Bose, Anjan; Stivastava, Anurag; Grijalva, Santiago; Grijalva, Santiago; Ryan, Sarah M.; McCalley, James D.; Woodruff, David L.; Xiong, Jinjun; Acar, Emrah; Agrawal, Bhavna; Conn, Andrew R.; Ditlow, Gary; Feldmann, Peter; Finkler, Ulrich; Gaucher, Brian; Gupta, Anshul; Heng, Fook-Luen; Kalagnanam, Jayant R; Koc, Ali; Kung, David; Phan, Dung; Singhee, Amith; Smith, Basil

    2011-10-05

    The April 2011 DOE workshop, 'Computational Needs for the Next Generation Electric Grid', was the culmination of a year-long process to bring together some of the Nation's leading researchers and experts to identify computational challenges associated with the operation and planning of the electric power system. The attached papers provide a journey into these experts' insights, highlighting a class of mathematical and computational problems relevant for potential power systems research. While each paper defines a specific problem area, there were several recurrent themes. First, the breadth and depth of power system data has expanded tremendously over the past decade. This provides the potential for new control approaches and operator tools that can enhance system efficiencies and improve reliability. However, the large volume of data poses its own challenges, and could benefit from application of advances in computer networking and architecture, as well as data base structures. Second, the computational complexity of the underlying system problems is growing. Transmitting electricity from clean, domestic energy resources in remote regions to urban consumers, for example, requires broader, regional planning over multi-decade time horizons. Yet, it may also mean operational focus on local solutions and shorter timescales, as reactive power and system dynamics (including fast switching and controls) play an increasingly critical role in achieving stability and ultimately reliability. The expected growth in reliance on variable renewable sources of electricity generation places an exclamation point on both of these observations, and highlights the need for new focus in areas such as stochastic optimization to accommodate the increased uncertainty that is occurring in both planning and operations. Application of research advances in algorithms (especially related to optimization techniques and uncertainty quantification) could accelerate power

  17. Combined multi-kernel head computed tomography images optimized for depicting both brain parenchyma and bone.

    Science.gov (United States)

    Takagi, Satoshi; Nagase, Hiroyuki; Hayashi, Tatsuya; Kita, Tamotsu; Hayashi, Katsumi; Sanada, Shigeru; Koike, Masayuki

    2014-01-01

    The hybrid convolution kernel technique for computed tomography (CT) is known to enable the depiction of an image set using different window settings. Our purpose was to decrease the number of artifacts in the hybrid convolution kernel technique for head CT and to determine whether our improved combined multi-kernel head CT images enabled diagnosis as a substitute for both brain (low-pass kernel-reconstructed) and bone (high-pass kernel-reconstructed) images. Forty-four patients with nondisplaced skull fractures were included. Our improved multi-kernel images were generated so that pixels of >100 Hounsfield units (HU) in both brain and bone images were composed of CT values of bone images, and other pixels were composed of CT values of brain images. Three radiologists compared the improved multi-kernel images with bone images. The improved multi-kernel images and brain images were identically displayed on the brain window settings. All three radiologists agreed that the improved multi-kernel images on the bone window settings were sufficient for diagnosing skull fractures in all patients. This improved multi-kernel technique has a simple algorithm and is practical for clinical use. Thus, simplified head CT examinations and fewer images that need to be stored can be expected.
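    The pixel-combination rule described is simple enough to sketch directly: where both reconstructions exceed the threshold, take the bone-kernel CT value; elsewhere, take the brain-kernel value. The 100 HU threshold follows the text; the 2×2 sample values are hypothetical.

```python
import numpy as np

def combine(brain, bone, threshold=100.0):
    """Combined multi-kernel image: bone-kernel values where BOTH
    reconstructions exceed the HU threshold, brain-kernel values elsewhere."""
    mask = (brain > threshold) & (bone > threshold)
    return np.where(mask, bone, brain)

brain = np.array([[40.0, 35.0], [900.0, 30.0]])   # low-pass (soft-tissue) kernel
bone = np.array([[38.0, 37.0], [1200.0, 28.0]])   # high-pass (bone) kernel
combined = combine(brain, bone)                    # only the >100 HU pixel uses bone
```

    Requiring the threshold in *both* images, rather than in the bone image alone, is what suppresses the high-pass kernel's noisy soft-tissue pixels from leaking into the combined image.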

  18. Medical image computing and computer-assisted intervention - MICCAI 2006. Pt. 1. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, R. [Technical Univ. of Denmark, Lyngby (Denmark). Informatics and Mathematical Modelling; Nielsen, M. [IT Univ. of Copenhagen (Denmark); Sporring, J. (eds.) [Copenhagen Univ. (Denmark). Dept. of Computer Science

    2006-07-01

    The two-volume set LNCS 4190 and LNCS 4191 constitute the refereed proceedings of the 9th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2006, held in Copenhagen, Denmark in October 2006. The program committee carefully selected 39 revised full papers and 193 revised poster papers from 578 submissions for presentation in two volumes, based on rigorous peer review. The first volume includes 114 contributions related to bone shape analysis, robotics and tracking, segmentation, analysis of diffusion tensor MRI, shape analysis and morphometry, simulation and interaction, robotics and intervention, cardio-vascular applications, image analysis in oncology, brain atlases and segmentation, cardiac motion analysis, clinical applications, and registration. The second volume collects 118 papers related to segmentation, validation and quantitative image analysis, brain image processing, motion in image formation, image guided clinical applications, registration, as well as brain analysis and registration. (orig.)

  19. Medical image computing and computer-assisted intervention - MICCAI 2006. Pt. 2. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, R. [Technical Univ. of Denmark, Lyngby (Denmark). Informatics and Mathematical Modelling; Nielsen, M. [IT Univ. of Copenhagen (Denmark); Sporring, J. (eds.) [Copenhagen Univ. (Denmark). Dept. of Computer Science

    2006-07-01

    The two-volume set LNCS 4190 and LNCS 4191 constitute the refereed proceedings of the 9th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2006, held in Copenhagen, Denmark in October 2006. The program committee carefully selected 39 revised full papers and 193 revised poster papers from 578 submissions for presentation in two volumes, based on rigorous peer review. The first volume includes 114 contributions related to bone shape analysis, robotics and tracking, segmentation, analysis of diffusion tensor MRI, shape analysis and morphometry, simulation and interaction, robotics and intervention, cardio-vascular applications, image analysis in oncology, brain atlases and segmentation, cardiac motion analysis, clinical applications, and registration. The second volume collects 118 papers related to segmentation, validation and quantitative image analysis, brain image processing, motion in image formation, image guided clinical applications, registration, as well as brain analysis and registration. (orig.)

  20. Medical image computing and computer-assisted intervention - MICCAI 2006. Pt. 2. Proceedings

    International Nuclear Information System (INIS)

    Larsen, R.; Sporring, J.

    2006-01-01

    The two-volume set LNCS 4190 and LNCS 4191 constitute the refereed proceedings of the 9th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2006, held in Copenhagen, Denmark in October 2006. The program committee carefully selected 39 revised full papers and 193 revised poster papers from 578 submissions for presentation in two volumes, based on rigorous peer review. The first volume includes 114 contributions related to bone shape analysis, robotics and tracking, segmentation, analysis of diffusion tensor MRI, shape analysis and morphometry, simulation and interaction, robotics and intervention, cardio-vascular applications, image analysis in oncology, brain atlases and segmentation, cardiac motion analysis, clinical applications, and registration. The second volume collects 118 papers related to segmentation, validation and quantitative image analysis, brain image processing, motion in image formation, image guided clinical applications, registration, as well as brain analysis and registration. (orig.)

  1. Medical image computing and computer-assisted intervention - MICCAI 2006. Pt. 1. Proceedings

    International Nuclear Information System (INIS)

    Larsen, R.; Sporring, J.

    2006-01-01

    The two-volume set LNCS 4190 and LNCS 4191 constitute the refereed proceedings of the 9th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2006, held in Copenhagen, Denmark in October 2006. The program committee carefully selected 39 revised full papers and 193 revised poster papers from 578 submissions for presentation in two volumes, based on rigorous peer review. The first volume includes 114 contributions related to bone shape analysis, robotics and tracking, segmentation, analysis of diffusion tensor MRI, shape analysis and morphometry, simulation and interaction, robotics and intervention, cardio-vascular applications, image analysis in oncology, brain atlases and segmentation, cardiac motion analysis, clinical applications, and registration. The second volume collects 118 papers related to segmentation, validation and quantitative image analysis, brain image processing, motion in image formation, image guided clinical applications, registration, as well as brain analysis and registration. (orig.)

  2. Optical computed tomography for imaging the breast: first look

    Science.gov (United States)

    Grable, Richard J.; Ponder, Steven L.; Gkanatsios, Nikolaos A.; Dieckmann, William; Olivier, Patrick F.; Wake, Robert H.; Zeng, Yueping

    2000-07-01

    The purpose of the study is to compare computed tomography optical imaging with traditional breast imaging techniques. Images produced by the computed tomography laser mammography (CTLM) scanner are compared with images obtained from mammography and, in some cases, ultrasound and/or magnetic resonance imaging (MRI). During the CTLM procedure, a near-infrared laser irradiates the breast and an array of photodiode detectors records light scattered through the breast tissue. The laser and detectors rotate synchronously around the breast to acquire a series of slice data along the coronal plane. The procedure is performed without any breast compression or optical matching fluid. A reconstruction algorithm based on diffusion theory produces cross-sectional slices of the breast. Multiple slice images are combined to produce a three-dimensional volumetric array of the imaged breast. This array is used to derive axial and sagittal images of the breast corresponding to the cranio-caudal and medio-lateral images used in mammography. Over 200 women and 3 men have been scanned in clinical trials. The most obvious features seen in images produced by the optical tomography scanner are vascularization and significant lesions. Breast features caused by fibrocystic changes and cysts are less obvious. Breast density does not appear to be a significant factor in the quality of the image. We see correlation of the optical image structure with that seen with traditional breast imaging techniques. Further testing is being conducted to explore the sensitivity and specificity of optical tomography of the breast.

  3. Computer-aided diagnosis and artificial intelligence in clinical imaging.

    Science.gov (United States)

    Shiraishi, Junji; Li, Qiang; Appelbaum, Daniel; Doi, Kunio

    2011-11-01

    Computer-aided diagnosis (CAD) is rapidly entering the radiology mainstream. It has already become a part of the routine clinical work for the detection of breast cancer with mammograms. The computer output is used as a "second opinion" in assisting radiologists' image interpretations. The computer algorithm generally consists of several steps that may include image processing, image feature analysis, and data classification via the use of tools such as artificial neural networks (ANN). In this article, we will explore these and other current processes that have come to be referred to as "artificial intelligence." One element of CAD, temporal subtraction, has been applied for enhancing interval changes and for suppressing unchanged structures (e.g., normal structures) between 2 successive radiologic images. To reduce misregistration artifacts on the temporal subtraction images, a nonlinear image warping technique for matching the previous image to the current one has been developed. Development of the temporal subtraction method originated with chest radiographs, with the method subsequently being applied to chest computed tomography (CT) and nuclear medicine bone scans. The usefulness of the temporal subtraction method for bone scans was demonstrated by an observer study in which reading times and diagnostic accuracy improved significantly. An additional prospective clinical study verified that the temporal subtraction image could be used as a "second opinion" by radiologists with negligible detrimental effects. ANN was first used in 1990 for computerized differential diagnosis of interstitial lung diseases in CAD. Since then, ANN has been widely used in CAD schemes for the detection and diagnosis of various diseases in different imaging modalities, including the differential diagnosis of lung nodules and interstitial lung diseases in chest radiography, CT, and positron emission tomography/CT. It is likely that CAD will be integrated into picture archiving and
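The temporal-subtraction element described in this record reduces, at its core, to subtracting a registered previous image from the current one so that only interval changes remain. A minimal sketch (assuming the nonlinear warping step has already registered the previous image; the toy data and shapes are illustrative, not from the article):

```python
import numpy as np

def temporal_subtraction(current, previous_warped):
    """Subtract a registered previous image from the current one.

    Interval changes (e.g., a new lesion) survive the subtraction,
    while unchanged anatomy cancels out. The nonlinear warping that
    registers the previous image is assumed to have been done already.
    """
    return current.astype(float) - previous_warped.astype(float)

# Toy example: identical backgrounds, one new bright spot in the current image.
rng = np.random.default_rng(0)
background = rng.uniform(0, 1, (64, 64))
previous = background.copy()
current = background.copy()
current[30:34, 30:34] += 5.0           # simulated interval change

diff = temporal_subtraction(current, previous)
print(float(diff.max()))               # the new spot stands out; background cancels
```

In practice the warping step dominates the difficulty: any residual misregistration leaves the edge artifacts that the authors' nonlinear technique is designed to suppress.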

  4. Reconfigurable Optical Interconnections Via Dynamic Computer-Generated Holograms

    Science.gov (United States)

    Liu, Hua-Kuang (Inventor); Zhou, Shao-Min (Inventor)

    1996-01-01

    A system is presented for optically providing one-to-many irregular interconnections, as well as many-to-many irregular interconnections with adjustable strengths (weights) w_ij, using multiple laser beams which address multiple holograms and means for combining the beams modified by the holograms to form multiple interconnections, such as a cross-bar switching network. The optical means for interconnection is based on entering a series of complex computer-generated holograms on an electrically addressed spatial light modulator for real-time reconfiguration, thus providing the flexibility required for large-scale practical interconnection networks. By employing multiple sources and holograms, the number of achievable interconnection patterns is greatly increased.

  5. [A computer-aided image diagnosis and study system].

    Science.gov (United States)

    Li, Zhangyong; Xie, Zhengxiang

    2004-08-01

    The revolution in information processing, particularly the digitizing of medicine, has changed medical study, work and management. This paper reports a method for designing a computer-aided image diagnosis and study system. Combining ideas from graph-text systems and the picture archiving and communication system (PACS), the system was implemented and used for prescribing through the computer, managing images, and reading images on the computer to assist diagnosis. Typical examples were also collected in a database and used to teach beginners. The system was developed with visual development tools based on object-oriented programming (OOP) and runs on the Windows 9X platform. It possesses a friendly man-machine interface.

  6. Meteosat third generation imager: simulation of the flexible combined imager instrument chain

    Science.gov (United States)

    Just, Dieter; Gutiérrez, Rebeca; Roveda, Fausto; Steenbergen, Theo

    2014-10-01

    The Meteosat Third Generation (MTG) Programme is the next generation of European geostationary meteorological systems. The first MTG satellite, MTG-I1, which is scheduled for launch at the end of 2018, will host two imaging instruments: the Flexible Combined Imager (FCI) and the Lightning Imager. The FCI will provide continuation of the SEVIRI imager operations on the current Meteosat Second Generation (MSG) satellites, but with improved spatial, temporal and spectral resolution, not dissimilar to GOES-R (of NASA/NOAA). Unlike SEVIRI on the spinning MSG spacecraft, the FCI will be mounted on a 3-axis stabilised platform, and a 2-axis tapered scan will provide full coverage of the Earth in 10-minute repeat cycles. Alternatively, a rapid scanning mode can cover smaller areas with a better temporal resolution of up to 2.5 minutes. In order to assess some of the data acquisition and processing aspects that will apply to the FCI, a simplified end-to-end imaging chain prototype was set up. The simulation prototype consists of four functional blocks: - A function for the generation of FCI-like reference images - An image acquisition simulation function for the FCI line-of-sight calculation and swath generation - A processing function that reverses the swath generation process by rectifying the swath data - An evaluation function for assessing the quality of the processed data with respect to the reference images This paper presents an overview of the FCI instrument chain prototype, covering instrument characteristics, reference image generation, image acquisition simulation, and processing aspects. In particular, it provides a detailed description of the generation of reference images, highlighting innovative features but also limitations. This is followed by a description of the image acquisition simulation process and of the rectification and evaluation functions. The latter two are described in more detail in a separate paper. Finally, results

  7. Correction for polychromatic X-ray image distortion in computer tomography images

    International Nuclear Information System (INIS)

    1979-01-01

    A method and apparatus are described which correct the polychromatic distortion of CT images that is produced by the non-linear interaction of body constituents with a polychromatic X-ray beam. A CT image is processed to estimate the proportion of the attenuation coefficients of the constituents in each pixel element. A multiplicity of projections for each constituent are generated from the original image and are combined utilizing a multidimensional polynomial which approximates the non-linear interaction involved. An error image is then generated from the combined projections and is subtracted from the original image to correct for the polychromatic distortion. (Auth.)
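The correction loop in this record can be sketched as: estimate per-pixel constituent proportions, project each constituent, combine the projections with a polynomial approximating the non-linear interaction, and subtract the resulting error image. A minimal sketch under loudly simplifying assumptions (row sums stand in for the Radon transform, a uniform smear stands in for backprojection, and the coefficient values are purely illustrative):

```python
import numpy as np

def polychromatic_error_image(soft, bone, coeffs):
    """Combine per-constituent projections with a polynomial that
    approximates the nonlinear (beam-hardening) interaction, then
    back-distribute the result as an error image.

    `soft` and `bone` are per-pixel attenuation estimates for two
    constituents; `coeffs` holds the quadratic polynomial terms.
    Projection here is a simple row sum standing in for a full Radon
    transform, to keep the sketch short.
    """
    p_soft = soft.sum(axis=1)          # one parallel projection per row
    p_bone = bone.sum(axis=1)
    # Quadratic polynomial of the two projections (the nonlinear part only;
    # linear terms are already handled by the original reconstruction).
    err_proj = (coeffs[0] * p_soft**2 +
                coeffs[1] * p_soft * p_bone +
                coeffs[2] * p_bone**2)
    # Smear the projection error uniformly back along each row
    # (a crude stand-in for filtered backprojection of the error).
    return np.repeat(err_proj[:, None], soft.shape[1], axis=1) / soft.shape[1]

image = np.ones((4, 4))                # original CT image with distortion
soft = 0.8 * image                     # estimated soft-tissue proportion
bone = 0.2 * image                     # estimated bone proportion
error = polychromatic_error_image(soft, bone, coeffs=(1e-3, 2e-3, 1e-3))
corrected = image - error              # subtract the error image, as described
```

A real implementation would use a proper forward projector and backprojector and calibrate the polynomial against the beam spectrum; only the combine-then-subtract structure above is meant to mirror the record.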

  8. Jet-images: computer vision inspired techniques for jet tagging

    Energy Technology Data Exchange (ETDEWEB)

    Cogan, Josh; Kagan, Michael; Strauss, Emanuel; Schwarztman, Ariel [SLAC National Accelerator Laboratory,Menlo Park, CA 94028 (United States)

    2015-02-18

    We introduce a novel approach to jet tagging and classification through the use of techniques inspired by computer vision. Drawing parallels to the problem of facial recognition in images, we define a jet-image using calorimeter towers as the elements of the image and establish jet-image preprocessing methods. For the jet-image processing step, we develop a discriminant for classifying the jet-images derived using Fisher discriminant analysis. The effectiveness of the technique is shown within the context of identifying boosted hadronic W boson decays with respect to a background of quark- and gluon-initiated jets. Using Monte Carlo simulation, we demonstrate that the performance of this technique introduces additional discriminating power over other substructure approaches, and gives significant insight into the internal structure of jets.

  9. Jet-images: computer vision inspired techniques for jet tagging

    International Nuclear Information System (INIS)

    Cogan, Josh; Kagan, Michael; Strauss, Emanuel; Schwarztman, Ariel

    2015-01-01

    We introduce a novel approach to jet tagging and classification through the use of techniques inspired by computer vision. Drawing parallels to the problem of facial recognition in images, we define a jet-image using calorimeter towers as the elements of the image and establish jet-image preprocessing methods. For the jet-image processing step, we develop a discriminant for classifying the jet-images derived using Fisher discriminant analysis. The effectiveness of the technique is shown within the context of identifying boosted hadronic W boson decays with respect to a background of quark- and gluon-initiated jets. Using Monte Carlo simulation, we demonstrate that the performance of this technique introduces additional discriminating power over other substructure approaches, and gives significant insight into the internal structure of jets.
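The pipeline in these two records — pixelate calorimeter towers into a jet-image, then classify with a Fisher discriminant — can be sketched on toy data. The two-prong versus one-prong generator below is a stand-in for real simulated W and QCD jets; everything here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_jets(n, two_prong):
    """Toy 9x9 'calorimeter' jet-images: one energy core for QCD-like
    jets, two cores for boosted-W-like jets (a stand-in for real
    simulated events). Returned flattened into pixel vectors."""
    jets = rng.normal(0.0, 0.05, (n, 9, 9))
    jets[:, 4, 4] += 1.0               # leading subjet core
    if two_prong:
        jets[:, 4, 7] += 0.8           # second subjet from the W decay
    return jets.reshape(n, -1)

signal = make_jets(200, two_prong=True)      # boosted W -> two subjets
background = make_jets(200, two_prong=False) # quark/gluon-initiated jet

# Fisher discriminant on the pixel vectors: w = Sw^{-1} (mu_s - mu_b).
mu_s, mu_b = signal.mean(0), background.mean(0)
sw = np.cov(signal.T) + np.cov(background.T)        # within-class scatter
w = np.linalg.solve(sw + 1e-6 * np.eye(sw.shape[0]), mu_s - mu_b)

# Project onto the discriminant; the two classes separate along it.
proj_s, proj_b = signal @ w, background @ w
print(proj_s.mean() > proj_b.mean())
```

Because the discriminant weight vector lives in image space, it can itself be displayed as an image, which is part of what gives the method its insight into the internal structure of jets.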

  10. Identifying Computer-Generated Portraits: The Importance of Training and Incentives.

    Science.gov (United States)

    Mader, Brandon; Banks, Martin S; Farid, Hany

    2017-09-01

    The past two decades have seen remarkable advances in photo-realistic rendering of everything from inanimate objects to landscapes, animals, and humans. We previously showed that despite these tremendous advances, human observers remain fairly good at distinguishing computer-generated from photographic images. Building on these results, we describe a series of follow-up experiments that reveal how to improve observer performance. Of general interest to anyone performing psychophysical studies on Mechanical Turk or similar platforms, we find that observer performance can be significantly improved with the proper incentives.

  11. Full-color large-scaled computer-generated holograms using RGB color filters.

    Science.gov (United States)

    Tsuchiyama, Yasuhiro; Matsushima, Kyoji

    2017-02-06

    A technique using RGB color filters is proposed for creating high-quality full-color computer-generated holograms (CGHs). The fringe of these CGHs is composed of more than a billion pixels. The CGHs reconstruct full-parallax three-dimensional color images with a deep sensation of depth caused by natural motion parallax. The simulation technique as well as the principle and challenges of high-quality full-color reconstruction are presented to address the design of filter properties suitable for large-scaled CGHs. Optical reconstructions of actual fabricated full-color CGHs are demonstrated in order to verify the proposed techniques.

  12. Computer-assisted instruction; MR imaging of congenital heart disease

    International Nuclear Information System (INIS)

    Choi, Young Hi; Yu, Pil Mun; Lee, Sang Hoon; Choe, Yeon Hyeon; Kim, Yang Min

    1996-01-01

    To develop a software program for computer-assisted instruction on MR imaging of congenital heart disease for medical students and residents to achieve repetitive and effective self-learning. We used a film scanner (Scan Maker 35t) and an IBM-PC (486 DX-2, 60 MHz) for acquisition and storage of image data. The accessories attached to the main processor were a CD-ROM drive (Sony), sound card (Soundblaster-Pro), and speaker. We used Adobe Photoshop (v 3.0) and Paint Shop Pro (v 3.0) for preprocessing image data, and Paintbrush from Microsoft Windows 3.1 for labelling. The language used for programming was Visual Basic (v 3.0) from Microsoft Corporation. We developed a software program for computer-assisted instruction on MR imaging of congenital heart disease as an effective educational tool

  13. Computational model of lightness perception in high dynamic range imaging

    Science.gov (United States)

    Krawczyk, Grzegorz; Myszkowski, Karol; Seidel, Hans-Peter

    2006-02-01

    An anchoring theory of lightness perception by Gilchrist et al. [1999] explains many characteristics of the human visual system, such as lightness constancy and its spectacular failures, which are important in the perception of images. The principal concept of this theory is the perception of complex scenes in terms of groups of consistent areas (frameworks). Such areas, following the gestalt theorists, are defined by regions of common illumination. The key aspect of image perception is the estimation of lightness within each framework through anchoring to the luminance perceived as white, followed by the computation of the global lightness. In this paper we provide a computational model for automatic decomposition of HDR images into frameworks. We derive a tone mapping operator which predicts lightness perception of real-world scenes and aims at its accurate reproduction on low dynamic range displays. Furthermore, such a decomposition into frameworks opens new ground for local image analysis in view of human perception.
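The anchoring step at the heart of the model — estimate lightness within each framework by anchoring to the luminance perceived as white — can be sketched as follows. The decomposition into frameworks here is a crude log-luminance threshold, far simpler than the paper's automatic decomposition; only the anchoring computation is intended to be representative:

```python
import numpy as np

def anchored_lightness(luminance):
    """Split the image into two 'frameworks' by thresholding
    log-luminance (a crude stand-in for the paper's decomposition),
    then anchor lightness in each framework to its highest luminance,
    i.e. the luminance perceived as white."""
    log_l = np.log10(luminance)
    thresh = (log_l.min() + log_l.max()) / 2.0   # midpoint split
    labels = (log_l > thresh).astype(int)
    lightness = np.empty_like(log_l)
    for k in (0, 1):
        mask = labels == k
        anchor = log_l[mask].max()               # 'white' of this framework
        lightness[mask] = log_l[mask] - anchor   # 0 = white, negative = darker
    return lightness, labels

# A dim 'shadow' region and a bright 'sunlit' region.
lum = np.array([[0.01, 0.02], [10.0, 20.0]])
lightness, labels = anchored_lightness(lum)
print(lightness)    # the brightest pixel of each framework maps to 0
```

A tone mapping operator in the spirit of the paper would then merge these per-framework lightness estimates into a global lightness suitable for a low dynamic range display.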

  14. Image matrix processor for fast multi-dimensional computations

    Science.gov (United States)

    Roberson, George P.; Skeate, Michael F.

    1996-01-01

    An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.

  15. Image processing with massively parallel computer Quadrics Q1

    International Nuclear Information System (INIS)

    Della Rocca, A.B.; La Porta, L.; Ferriani, S.

    1995-05-01

    To evaluate the image processing capabilities of the massively parallel computer Quadrics Q1, this report describes a convolution algorithm that was implemented on it. First the mathematical definition of the discrete convolution is recalled, together with the main Q1 h/w and s/w features. Then the different codifications of the algorithm are described and the Q1 performances are compared with those obtained on other computers. Finally, the conclusions report the main results and suggestions
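The discrete convolution recalled in the report is, for a 2-D image and kernel, a sliding sum of products. A direct, unparallelised reference version (illustrative, not the Q1 codification):

```python
import numpy as np

def convolve2d(image, kernel):
    """Direct discrete 2-D convolution (valid region only), the
    operation that was parallelised on the Quadrics Q1."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    flipped = kernel[::-1, ::-1]           # convolution flips the kernel
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * flipped)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
box = np.ones((3, 3)) / 9.0               # 3x3 mean filter
result = convolve2d(image, box)           # 2x2 valid-region output
print(result)
```

On a machine like the Q1 the outer two loops are what gets distributed across processing elements, since each output pixel is independent of the others.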

  16. Intranasal dexmedetomidine for sedation for pediatric computed tomography imaging.

    Science.gov (United States)

    Mekitarian Filho, Eduardo; Robinson, Fay; de Carvalho, Werther Brunow; Gilio, Alfredo Elias; Mason, Keira P

    2015-05-01

    This prospective observational pilot study evaluated the aerosolized intranasal route for dexmedetomidine as a safe, effective, and efficient option for infant and pediatric sedation for computed tomography imaging. The mean time to sedation was 13.4 minutes, with excellent image quality and no failed sedations or significant adverse events. Registered with ClinicalTrials.gov: NCT01900405.

  17. Simulation of Profiles Data For Computed Tomography Using Object Images

    International Nuclear Information System (INIS)

    Srisatit, Somyot

    2007-08-01

    Full text: A scanning system is necessary to obtain the profile data for computed tomographic images. Good profile data give good contrast and resolution, but such a scanning system requires radiation equipment of high efficiency and high price. Simulated profile data that give CT image quality as good as the real data can therefore be used for demonstration

  18. Research of second harmonic generation images based on texture analysis

    Science.gov (United States)

    Liu, Yao; Li, Yan; Gong, Haiming; Zhu, Xiaoqin; Huang, Zufang; Chen, Guannan

    2014-09-01

    Texture analysis plays a crucial role in identifying objects or regions of interest in an image. It has been applied to a variety of medical image processing tasks, ranging from the detection of disease and the segmentation of specific anatomical structures to differentiation between healthy and pathological tissues. Second harmonic generation (SHG) microscopy, a potential noninvasive tool for imaging biological tissues with reduced phototoxicity and photobleaching, has been widely used in medicine. In this paper, we clarify the principles of texture analysis, including statistical, transform, structural and model-based methods, and give examples of its applications, reviewing studies of the technique. Moreover, we apply texture analysis to SHG images for the differentiation of human skin scar tissues. A texture analysis method based on local binary patterns (LBP) and the wavelet transform was used to extract texture features of SHG images of collagen in normal and abnormal scars, and the scar SHG images were then classified as normal or abnormal. Compared with other texture analysis methods with respect to receiver operating characteristic analysis, LBP combined with the wavelet transform achieved higher accuracy. It can provide a new way for clinical diagnosis of scar types. Finally, future developments of texture analysis of SHG images are discussed.
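The LBP stage of the method — encode each pixel by comparing it with its eight neighbours — can be sketched directly; the wavelet-transform stage and the classifier are omitted, and the tiny input image is purely illustrative:

```python
import numpy as np

def lbp_8(image):
    """Basic 8-neighbour local binary pattern: each interior pixel gets
    a byte whose bits record which neighbours are >= the centre."""
    h, w = image.shape
    codes = np.zeros((h - 2, w - 2), dtype=int)
    # Clockwise neighbour offsets starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = image[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = image[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(int) << bit
    return codes

# Texture feature vector = normalised histogram of the LBP codes.
img = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=float)
codes = lbp_8(img)
hist = np.bincount(codes.ravel(), minlength=256) / codes.size
print(int(codes[0, 0]))   # -> 120: only the four neighbours >= 5 set their bits
```

The histogram `hist` is what would be fed, together with wavelet coefficients, to a classifier distinguishing normal from abnormal scar collagen.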

  19. Speckle noise reduction for computer generated holograms of objects with diffuse surfaces

    Science.gov (United States)

    Symeonidou, Athanasia; Blinder, David; Ahar, Ayyoub; Schretter, Colas; Munteanu, Adrian; Schelkens, Peter

    2016-04-01

    Digital holography is mainly used today for metrology and microscopic imaging and is emerging as an important potential technology for future holographic television. To generate the holographic content, computer-generated holography (CGH) techniques convert geometric descriptions of 3D scene content. To model different surface types, an accurate model of light propagation has to be considered, including, for example, specular and diffuse reflection. In previous work, we proposed a fast CGH method for point cloud data using multiple wavefront recording planes, look-up tables (LUTs) and occlusion processing. This work extends our method to account for diffuse reflections, enabling rendering of deep 3D scenes in high resolution with wide viewing angle support. This is achieved by modifying the spectral response of the light propagation kernels contained in the look-up tables. However, holograms encoding diffuse reflective surfaces depict significant amounts of speckle noise, a problem inherent to holography. Hence, techniques to reduce speckle noise are evaluated in this paper. Moreover, we also propose a technique to suppress aperture diffraction during numerical, view-dependent rendering by apodizing the hologram. Results are compared visually and in terms of their respective computational efficiency. The experiments show that by modelling diffuse reflection in the LUTs, a more realistic yet computationally efficient framework for generating high-resolution CGH is achieved.
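The apodization mentioned at the end — softening the hologram's aperture edge so it no longer diffracts strongly during view-dependent rendering — can be sketched with a separable window. The Hanning window is an illustrative choice, not necessarily the one used in the paper:

```python
import numpy as np

def apodize(hologram):
    """Multiply the hologram aperture by a separable Hanning window so
    the abrupt aperture edge no longer produces strong diffraction
    ringing during numerical, view-dependent rendering."""
    wy = np.hanning(hologram.shape[0])
    wx = np.hanning(hologram.shape[1])
    return hologram * np.outer(wy, wx)

holo = np.ones((8, 8), dtype=complex)      # stand-in for a hologram fringe
soft = apodize(holo)
# The aperture edges are driven to zero while the centre is preserved.
print(abs(soft[0, 0]), abs(soft[4, 4]))
```

The trade-off is the usual windowing one: less edge diffraction at the cost of slightly reduced effective aperture, hence slightly lower resolution.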

  20. Imaging of the hip joint. Computed tomography versus magnetic resonance imaging

    Science.gov (United States)

    Lang, P.; Genant, H. K.; Jergesen, H. E.; Murray, W. R.

    1992-01-01

    The authors reviewed the applications and limitations of computed tomography (CT) and magnetic resonance (MR) imaging in the assessment of the most common hip disorders. Magnetic resonance imaging is the most sensitive technique in detecting osteonecrosis of the femoral head. Magnetic resonance reflects the histologic changes associated with osteonecrosis very well, which may ultimately help to improve staging. Computed tomography can more accurately identify subchondral fractures than MR imaging and thus remains important for staging. In congenital dysplasia of the hip, the position of the nonossified femoral head in children less than six months of age can only be inferred by indirect signs on CT. Magnetic resonance imaging demonstrates the cartilaginous femoral head directly without ionizing radiation. Computed tomography remains the imaging modality of choice for evaluating fractures of the hip joint. In some patients, MR imaging demonstrates the fracture even when it is not apparent on radiography. In neoplasm, CT provides better assessment of calcification, ossification, and periosteal reaction than MR imaging. Magnetic resonance imaging, however, represents the most accurate imaging modality for evaluating intramedullary and soft-tissue extent of the tumor and identifying involvement of neurovascular bundles. Magnetic resonance imaging can also be used to monitor response to chemotherapy. In osteoarthrosis and rheumatoid arthritis of the hip, both CT and MR provide more detailed assessment of the severity of disease than conventional radiography because of their tomographic nature. Magnetic resonance imaging is unique in evaluating cartilage degeneration and loss, and in demonstrating soft-tissue alterations such as inflammatory synovial proliferation.

  1. Image encryption with three cascaded random phase plates based on computer-generated holograms

    Institute of Scientific and Technical Information of China (English)

    席思星; 孙欣; 刘兵; 田巍; 云茂金; 孔伟金; 张文飞; 梁键

    2011-01-01

    A new method of image encryption with three cascaded random phase plates is proposed. The method fully exploits the ability of computer-generated holograms (CGHs) to record a complex-valued optical field in order to record the encrypted image. Building on the conventional double random phase encryption system, a third random phase plate is placed in the output plane of the 4f system to phase-modulate and encrypt the output CGH, introducing a new key and achieving an effective double-key system. Moreover, because CGH reconstruction is multi-frequency, decryption requires correct extraction of the unit spectrum, which further improves the security of image transmission.
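The double-random-phase system this record extends places one random phase in the input plane and one in the Fourier plane of a 4f setup; the proposed third plate additionally phase-modulates the output. A numerical sketch (the CGH encoding step itself is omitted, and the third plate is modelled as a plain phase multiplication; all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

def drpe_encrypt(img, phase1, phase2):
    """Classic 4f double random phase encoding: a random phase plate in
    the input plane, a second in the Fourier plane."""
    field = img * np.exp(2j * np.pi * phase1)
    spectrum = np.fft.fft2(field) * np.exp(2j * np.pi * phase2)
    return np.fft.ifft2(spectrum)

def drpe_decrypt(cipher, phase1, phase2):
    """Undo the two phase plates in reverse order."""
    spectrum = np.fft.fft2(cipher) * np.exp(-2j * np.pi * phase2)
    field = np.fft.ifft2(spectrum)
    return np.abs(field * np.exp(-2j * np.pi * phase1))

img = rng.uniform(0, 1, (16, 16))
p1, p2 = rng.uniform(size=(2, 16, 16))
cipher = drpe_encrypt(img, p1, p2)

# The record's third random phase plate modulates the output plane; here
# it is one more phase key that must also be undone before decryption.
p3 = rng.uniform(size=(16, 16))
cipher3 = cipher * np.exp(2j * np.pi * p3)
recovered = drpe_decrypt(cipher3 * np.exp(-2j * np.pi * p3), p1, p2)
print(np.allclose(recovered, img))
```

Without the correct `p3`, the decrypted field remains scrambled, which is the double-key effect the record describes.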

  2. NCI Workshop Report: Clinical and Computational Requirements for Correlating Imaging Phenotypes with Genomics Signatures

    Directory of Open Access Journals (Sweden)

    Rivka Colen

    2014-10-01

    The National Cancer Institute (NCI) Cancer Imaging Program organized two related workshops on June 26–27, 2013, entitled “Correlating Imaging Phenotypes with Genomics Signatures Research” and “Scalable Computational Resources as Required for Imaging-Genomics Decision Support Systems.” The first workshop focused on clinical and scientific requirements, exploring our knowledge of the phenotypic characteristics of cancer biological properties to determine whether the field is sufficiently advanced to correlate them with the imaging phenotypes that underpin genomics and clinical outcomes, and exploring new scientific methods to extract phenotypic features from medical images and relate them to genomics analyses. The second workshop focused on computational methods, exploring the informatics and computational requirements needed to extract phenotypic features from medical images, relate them to genomics analyses, and improve the accessibility and speed of dissemination of existing NIH resources. Together, the workshops linked the clinical and scientific requirements of currently known phenotypic and genotypic cancer biology characteristics with the imaging phenotypes that underpin genomics and clinical outcomes. The group generated a set of recommendations to NCI leadership and the research community that encourage and support development of the emerging radiogenomics research field to address short- and longer-term goals in cancer research.

  3. Correlative neuroanatomy of computed tomography and magnetic resonance imaging

    International Nuclear Information System (INIS)

    Groot, J.

    1984-01-01

    Since the development of computed tomography (CT) more than a decade ago, yet another form of imaging has become available that provides displays of normal and abnormal human structures. This book gives complete coverage to magnetic resonance imaging. It describes both CT and MR anatomy, explains basic principles, and reviews the current status of imaging the brain and spine. The author uses three-dimensional concepts to provide the reader with a simple means to compare the main structures of the brain, skull and spine. Combining normal gross neuroanatomic illustrations with CT and MR images of normal and abnormal conditions, the book provides diagnostic guidance. Drawings, photographs and radiologic images are used to help

  4. The fifth generation computer project state of the art report 111

    CERN Document Server

    Scarrott

    1983-01-01

    The Fifth Generation Computer Project is a two-part book consisting of invited papers and an analysis. The invited papers examine various aspects of the Fifth Generation Computer Project. The analysis assesses the project's major advances and provides a balanced, comprehensive view of the state of the art in fifth-generation computer technology. The bibliography compiles the most important published material on the subject.

  5. Application of FPGA's in Flexible Analogue Electronic Image Generator Design

    Directory of Open Access Journals (Sweden)

    Peter Kulla

    2006-01-01

    This paper focuses on the usage of Xilinx FPGAs (Field Programmable Gate Arrays) as part of our more complex work dedicated to the design of a flexible analogue electronic image generator for applications in TV measurement technique, TV service technique, and the education process. The FPGA serves here as the source of the component colour R, G, B, synchronization and blanking signals. These signals are then processed and amplified in the other parts of the generator, such as the NTSC/PAL source encoder and the RF modulator. The main aim of this paper is to show how, with suitable development software, FPGAs can be used in analogue TV technology.

  6. A Versatile Image Processor For Digital Diagnostic Imaging And Its Application In Computed Radiography

    Science.gov (United States)

    Blume, H.; Alexandru, R.; Applegate, R.; Giordano, T.; Kamiya, K.; Kresina, R.

    1986-06-01

    In a digital diagnostic imaging department, the majority of operations for handling and processing of images can be grouped into a small set of basic operations, such as image data buffering and storage, image processing and analysis, image display, image data transmission and image data compression. These operations occur in almost all nodes of the diagnostic imaging communications network of the department. An image processor architecture was developed in which each of these functions has been mapped into hardware and software modules. The modular approach has advantages in terms of economics, service, expandability and upgradeability. The architectural design is based on the principles of hierarchical functionality, distributed and parallel processing, and aims at real-time response. Parallel processing and real-time response are facilitated in part by a dual bus system: a VME control bus and a high-speed image data bus, consisting of 8 independent parallel 16-bit busses, capable of handling a combined rate of up to 144 MBytes/sec. The presented image processor is versatile enough to meet the video-rate processing needs of digital subtraction angiography, the large pixel matrix processing requirements of static projection radiography, or the broad range of manipulation and display needs of a multi-modality diagnostic workstation. Several hardware modules are described in detail. For illustrating the capabilities of the image processor, processed 2000 x 2000 pixel computed radiographs are shown and estimated computation times for executing the processing operations are presented.

  7. Computational assessment of visual search strategies in volumetric medical images.

    Science.gov (United States)

    Wen, Gezheng; Aizenman, Avigael; Drew, Trafton; Wolfe, Jeremy M; Haygood, Tamara Miner; Markey, Mia K

    2016-01-01

    When searching through volumetric images [e.g., computed tomography (CT)], radiologists appear to use two different search strategies: "drilling" (restrict eye movements to a small region of the image while quickly scrolling through slices), or "scanning" (search over large areas at a given depth before moving on to the next slice). To computationally identify the type of image information that is used in these two strategies, 23 naïve observers were instructed with either "drilling" or "scanning" when searching for target T's in 20 volumes of faux lung CTs. We computed saliency maps using both classical two-dimensional (2-D) saliency, and a three-dimensional (3-D) dynamic saliency that captures the characteristics of scrolling through slices. Comparing observers' gaze distributions with the saliency maps showed that search strategy alters the type of saliency that attracts fixations. Drillers' fixations aligned better with dynamic saliency and scanners with 2-D saliency. The computed saliency was greater for detected targets than for missed targets. Similar results were observed in data from 19 radiologists who searched five stacks of clinical chest CTs for lung nodules. Dynamic saliency may be superior to the 2-D saliency for detecting targets embedded in volumetric images, and thus "drilling" may be more efficient than "scanning."
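The core computation in this study — compare gaze distributions against a saliency map — can be sketched with a toy saliency measure. Gradient magnitude below is only a stand-in for the study's classical 2-D saliency, and the fixation lists are invented for illustration:

```python
import numpy as np

def contrast_saliency(image):
    """Very small stand-in for a 2-D saliency map: local gradient
    magnitude (the study used classical 2-D saliency and a 3-D dynamic
    variant; neither is reproduced here)."""
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gy, gx)

def mean_saliency_at(saliency, fixations):
    """Score how well a set of fixations aligns with a saliency map."""
    rows, cols = zip(*fixations)
    return saliency[list(rows), list(cols)].mean()

img = np.zeros((32, 32))
img[10:20, 10:20] = 1.0                     # a salient bright square
sal = contrast_saliency(img)

on_edge = [(10, 12), (19, 15), (14, 10)]    # fixations on the square's edge
off_edge = [(2, 2), (30, 5), (25, 28)]      # fixations on flat background
print(mean_saliency_at(sal, on_edge) > mean_saliency_at(sal, off_edge))
```

The study's finding amounts to this comparison run per strategy: drillers' fixations scored higher against the dynamic (3-D) saliency, scanners' against the 2-D saliency.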

  8. Image correction for computed tomography to remove crosstalk artifacts

    International Nuclear Information System (INIS)

    King, K.F.

    1990-01-01

    A correction method and apparatus for Computed Tomography (CT) which removes ring and streak artifacts from images by correcting for data contamination by crosstalk errors comprises subtracting from the output S_o of a detector a crosstalk factor derived from the outputs of adjacent detectors. The crosstalk factors are obtained by scanning an off-centre phantom. (author)
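    A minimal sketch of the kind of correction the patent abstract describes, subtracting a fraction of the neighbouring detector outputs from each channel. The single crosstalk coefficient alpha used here is a simplifying assumption (in practice it would be calibrated from the off-centre phantom scan), and the detector values are hypothetical.

```python
import numpy as np

def correct_crosstalk(raw, alpha):
    """Subtract from each detector output a fraction `alpha` of the
    outputs of its two immediate neighbours (edge detectors have one)."""
    corrected = raw.astype(float).copy()
    corrected[1:]  -= alpha * raw[:-1]   # contribution from left neighbour
    corrected[:-1] -= alpha * raw[1:]    # contribution from right neighbour
    return corrected

raw = np.array([100.0, 102.0, 250.0, 101.0, 99.0])  # one hot channel
clean = correct_crosstalk(raw, alpha=0.02)
```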

  9. Object recognition in images by human vision and computer vision

    NARCIS (Netherlands)

    Chen, Q.; Dijkstra, J.; Vries, de B.

    2010-01-01

    Object recognition plays a major role in human behaviour research in the built environment. Computer based object recognition techniques using images as input are challenging, but not an adequate representation of human vision. This paper reports on the differences in object shape recognition

  10. Computed radiography imaging plates and associated methods of manufacture

    Science.gov (United States)

    Henry, Nathaniel F.; Moses, Alex K.

    2015-08-18

    Computed radiography imaging plates incorporating an intensifying material that is coupled to or intermixed with the phosphor layer, allowing electrons and/or low energy x-rays to impart their energy on the phosphor layer, while decreasing internal scattering and increasing resolution. The radiation needed to perform radiography can also be reduced as a result.

  11. Computer processing of microscopic images of bacteria : morphometry and fluorimetry

    NARCIS (Netherlands)

    Wilkinson, Michael H.F.; Jansen, Gijsbert J.; Waaij, Dirk van der

    1994-01-01

    Several techniques that use computer analysis of microscopic images have been developed to study the complicated microbial flora in the human intestine, including measuring the shape and fluorescence intensity of bacteria. These techniques allow rapid assessment of changes in the intestinal flora

  12. RATIO_TOOL - SOFTWARE FOR COMPUTING IMAGE RATIOS

    Science.gov (United States)

    Yates, G. L.

    1994-01-01

    Geological studies analyze spectral data in order to gain information on surface materials. RATIO_TOOL is an interactive program for viewing and analyzing large multispectral image data sets that have been created by an imaging spectrometer. While the standard approach to classification of multispectral data is to match the spectrum for each input pixel against a library of known mineral spectra, RATIO_TOOL uses ratios of spectral bands in order to spot significant areas of interest within a multispectral image. Each image band can be viewed iteratively, or a selected image band of the data set can be requested and displayed. When the image ratios are computed, the result is displayed as a gray scale image. At this point a histogram option helps in viewing the distribution of values. A thresholding option can then be used to segment the ratio image result into two to four classes. The segmented image is then color coded to indicate threshold classes and displayed alongside the gray scale image. RATIO_TOOL is written in C language for Sun series computers running SunOS 4.0 and later. It requires the XView toolkit and the OpenWindows window manager (version 2.0 or 3.0). The XView toolkit is distributed with Open Windows. A color monitor is also required. The standard distribution medium for RATIO_TOOL is a .25 inch streaming magnetic tape cartridge in UNIX tar format. An electronic copy of the documentation is included on the program media. RATIO_TOOL was developed in 1992 and is a copyrighted work with all copyright vested in NASA. Sun, SunOS, and OpenWindows are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
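    The ratio-then-threshold workflow described above can be sketched in a few lines. This is a generic Python illustration, not RATIO_TOOL's actual C code; the band values and thresholds are made up.

```python
import numpy as np

def band_ratio(band_a, band_b, eps=1e-6):
    """Per-pixel ratio of two spectral bands, scaled to 8-bit grey levels."""
    ratio = band_a.astype(float) / (band_b.astype(float) + eps)
    lo, hi = ratio.min(), ratio.max()
    return (255 * (ratio - lo) / (hi - lo + eps)).astype(np.uint8)

def segment(grey, thresholds):
    """Split a grey-scale ratio image into len(thresholds)+1 classes."""
    return np.digitize(grey, bins=sorted(thresholds))

a = np.array([[10, 40], [80, 120]], dtype=float)   # hypothetical band 1
b = np.array([[20, 20], [20, 20]], dtype=float)    # hypothetical band 2
grey = band_ratio(a, b)
classes = segment(grey, thresholds=[64, 128, 192])  # up to four classes
```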

  13. System-level tools and reconfigurable computing for next-generation HWIL systems

    Science.gov (United States)

    Stark, Derek; McAulay, Derek; Cantle, Allan J.; Devlin, Malachy

    2001-08-01

    Previous work has been presented on the creation of computing architectures called DIME, which addressed the particular computing demands of hardware-in-the-loop systems. These demands include low latency, high data rates and interfacing. While it is essential to have a capable platform for handling and processing the data streams, the tools must also complement this so that a systems engineer is able to construct the final system. The paper will present work on the integration of system-level design tools, such as MATLAB and SIMULINK, with a reconfigurable computing platform. This will demonstrate how algorithms can be implemented and simulated in a familiar rapid application development environment before they are automatically transposed for downloading directly to the computing platform. This complements the established control tools, which handle the configuration and control of the processing systems, leading to a tool suite for system development and implementation. As the development tools have evolved, the core processing platform has also been enhanced. These improved platforms are based on dynamically reconfigurable computing, utilizing FPGA technologies, and parallel processing methods that more than double the performance and data bandwidth capabilities. This offers support for the processing of images in Infrared Scene Projectors with 1024 × 1024 resolution at 400 Hz frame rates. The processing elements will use the latest generation of FPGAs, which implies that the presented systems will be rated in terms of tera (10^12) operations per second.

  14. Image Guided Radiation Therapy Using Synthetic Computed Tomography Images in Brain Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Price, Ryan G. [Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan (United States); Department of Radiation Oncology, Wayne State University School of Medicine, Detroit, Michigan (United States); Kim, Joshua P.; Zheng, Weili [Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan (United States); Chetty, Indrin J. [Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan (United States); Department of Radiation Oncology, Wayne State University School of Medicine, Detroit, Michigan (United States); Glide-Hurst, Carri, E-mail: churst2@hfhs.org [Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan (United States); Department of Radiation Oncology, Wayne State University School of Medicine, Detroit, Michigan (United States)

    2016-07-15

    Purpose: The development of synthetic computed tomography (CT) (synCT) derived from magnetic resonance (MR) images supports MR-only treatment planning. We evaluated the accuracy of synCT and synCT-generated digitally reconstructed radiographs (DRRs) relative to CT and determined their performance for image guided radiation therapy (IGRT). Methods and Materials: Magnetic resonance simulation (MR-SIM) and CT simulation (CT-SIM) images were acquired of an anthropomorphic skull phantom and 12 patient brain cancer cases. SynCTs were generated using fluid attenuation inversion recovery, ultrashort echo time, and Dixon data sets through a voxel-based weighted summation of 5 tissue classifications. The DRRs were generated from the phantom synCT, and geometric fidelity was assessed relative to CT-generated DRRs through bounding box and landmark analysis. An offline retrospective analysis was conducted to register cone beam CTs (n=34) to synCTs and CTs using automated rigid registration in the treatment planning system. Planar MV and KV images (n=37) were rigidly registered to synCT and CT DRRs using an in-house script. Planar and volumetric registration reproducibility was assessed and margin differences were characterized by the van Herk formalism. Results: Bounding box and landmark analysis of phantom synCT DRRs were within 1 mm of CT DRRs. Absolute planar registration shift differences ranged from 0.0 to 0.7 mm for phantom DRRs on all treatment platforms and from 0.0 to 0.4 mm for volumetric registrations. For patient planar registrations, the mean shift differences were 0.4 ± 0.5 mm (range, −0.6 to 1.6 mm), 0.0 ± 0.5 mm (range, −0.9 to 1.2 mm), and 0.1 ± 0.3 mm (range, −0.7 to 0.6 mm) for the superior-inferior (S-I), left-right (L-R), and anterior-posterior (A-P) axes, respectively. The mean shift differences in volumetric registrations were 0.6 ± 0.4 mm (range, −0.2 to 1.6 mm), 0.2 ± 0.4 mm (range, −0.3 to 1.2 mm), and 0.2 ± 0
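    The van Herk formalism referenced above combines systematic (Σ) and random (σ) registration errors into a CTV-to-PTV margin, margin = 2.5Σ + 0.7σ. A minimal sketch with illustrative numbers only, not the paper's measured values:

```python
def van_herk_margin(sigma_systematic, sigma_random):
    """CTV-to-PTV margin (mm) per the van Herk formalism:
    margin = 2.5 * Sigma + 0.7 * sigma."""
    return 2.5 * sigma_systematic + 0.7 * sigma_random

# Illustrative numbers: 1 mm systematic error, 2 mm random error.
margin = van_herk_margin(1.0, 2.0)   # 2.5*1.0 + 0.7*2.0 = 3.9 mm
```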

  15. Image Guided Radiation Therapy Using Synthetic Computed Tomography Images in Brain Cancer

    International Nuclear Information System (INIS)

    Price, Ryan G.; Kim, Joshua P.; Zheng, Weili; Chetty, Indrin J.; Glide-Hurst, Carri

    2016-01-01

    Purpose: The development of synthetic computed tomography (CT) (synCT) derived from magnetic resonance (MR) images supports MR-only treatment planning. We evaluated the accuracy of synCT and synCT-generated digitally reconstructed radiographs (DRRs) relative to CT and determined their performance for image guided radiation therapy (IGRT). Methods and Materials: Magnetic resonance simulation (MR-SIM) and CT simulation (CT-SIM) images were acquired of an anthropomorphic skull phantom and 12 patient brain cancer cases. SynCTs were generated using fluid attenuation inversion recovery, ultrashort echo time, and Dixon data sets through a voxel-based weighted summation of 5 tissue classifications. The DRRs were generated from the phantom synCT, and geometric fidelity was assessed relative to CT-generated DRRs through bounding box and landmark analysis. An offline retrospective analysis was conducted to register cone beam CTs (n=34) to synCTs and CTs using automated rigid registration in the treatment planning system. Planar MV and KV images (n=37) were rigidly registered to synCT and CT DRRs using an in-house script. Planar and volumetric registration reproducibility was assessed and margin differences were characterized by the van Herk formalism. Results: Bounding box and landmark analysis of phantom synCT DRRs were within 1 mm of CT DRRs. Absolute planar registration shift differences ranged from 0.0 to 0.7 mm for phantom DRRs on all treatment platforms and from 0.0 to 0.4 mm for volumetric registrations. For patient planar registrations, the mean shift differences were 0.4 ± 0.5 mm (range, −0.6 to 1.6 mm), 0.0 ± 0.5 mm (range, −0.9 to 1.2 mm), and 0.1 ± 0.3 mm (range, −0.7 to 0.6 mm) for the superior-inferior (S-I), left-right (L-R), and anterior-posterior (A-P) axes, respectively. The mean shift differences in volumetric registrations were 0.6 ± 0.4 mm (range, −0.2 to 1.6 mm), 0.2 ± 0.4 mm (range, −0.3 to 1.2 mm), and 0.2 ± 0

  16. How Well Do Computer-Generated Faces Tap Face Expertise?

    Directory of Open Access Journals (Sweden)

    Kate Crookes

    Full Text Available The use of computer-generated (CG) stimuli in face processing research is proliferating due to the ease with which faces can be generated, standardised and manipulated. However, there has been surprisingly little research into whether CG faces are processed in the same way as photographs of real faces. The present study assessed how well CG faces tap face identity expertise by investigating whether two indicators of face expertise are reduced for CG faces when compared to face photographs. These indicators were accuracy for identification of own-race faces and the other-race effect (ORE), the well-established finding that own-race faces are recognised more accurately than other-race faces. In Experiment 1, Caucasian and Asian participants completed a recognition memory task for own- and other-race real and CG faces. Overall accuracy for own-race faces was dramatically reduced for CG compared to real faces, and the ORE was significantly and substantially attenuated for CG faces. Experiment 2 investigated perceptual discrimination for own- and other-race real and CG faces with Caucasian and Asian participants. Here again, accuracy for own-race faces was significantly reduced for CG compared to real faces. However, the ORE was not affected by format. Together these results signal that CG faces of the type tested here do not fully tap face expertise. Technological advancement may, in the future, produce CG faces that are equivalent to real photographs. Until then, caution is advised when interpreting results obtained using CG faces.

  17. Spiral Computed Tomographic Imaging Related to Computerized Ultrasonographic Images of Carotid Plaque Morphology and Histology

    DEFF Research Database (Denmark)

    Grønholdt, Marie-Louise M.; Wagner, Aase; Wiebe, Britt M.

    2001-01-01

    Echolucency of carotid atherosclerotic plaques, as evaluated by computerized B-mode ultrasonographic images, has been associated with an increased incidence of brain infarcts on cerebral computed tomographic scans. We tested the hypotheses that characterization of carotid plaques on spiral comput...

  18. Computer simulation of radiographic images sharpness in several system of image record

    International Nuclear Information System (INIS)

    Silva, Marcia Aparecida; Schiable, Homero; Frere, Annie France; Marques, Paulo M.A.; Oliveira, Henrique J.Q. de; Alves, Fatima F.R.; Medeiros, Regina B.

    1996-01-01

    A method to predict, by computer simulation, the influence of the recording system on radiographic image sharpness is studied. The method is intended to show in advance the image that would be obtained for each type of film or screen-film combination used during the exposure

  19. Spatial image modulation to improve performance of computed tomography imaging spectrometer

    Science.gov (United States)

    Bearman, Gregory H. (Inventor); Wilson, Daniel W. (Inventor); Johnson, William R. (Inventor)

    2010-01-01

    Computed tomography imaging spectrometers ("CTIS"s) having patterns for imposing spatial structure are provided. The pattern may be imposed either directly on the object scene being imaged or at the field stop aperture. The use of the pattern improves the accuracy of the captured spatial and spectral information.

  20. Thermal Infrared Imaging-Based Computational Psychophysiology for Psychometrics.

    Science.gov (United States)

    Cardone, Daniela; Pinti, Paola; Merla, Arcangelo

    2015-01-01

    Thermal infrared imaging has been proposed as a potential system for the computational assessment of human autonomic nervous activity and psychophysiological states in a contactless and noninvasive way. Through bioheat modeling of facial thermal imagery, several vital signs can be extracted, including localized blood perfusion, cardiac pulse, breath rate, and sudomotor response, since all these parameters impact the cutaneous temperature. The obtained physiological information can then be used to draw inferences about a variety of psychophysiological or affective states, as proved by the increasing number of psychophysiological studies using thermal infrared imaging. This paper therefore presents a review of the principal achievements of thermal infrared imaging in computational physiology with regard to its capability of monitoring psychophysiological activity.
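    One of the vital signs mentioned above, breath rate, can be illustrated by spectral analysis of a nostril-region temperature trace. The sketch below uses a synthetic signal rather than real thermal data; the sampling rate, temperature amplitudes and noise level are hypothetical.

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Dominant frequency (Hz) of a 1-D signal via the FFT, ignoring DC."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

# Synthetic nostril-area temperature trace: 0.25 Hz breathing (15 breaths/min)
np.random.seed(0)
fs = 10.0                                  # 10 thermal frames per second
t = np.arange(0, 60, 1 / fs)               # one minute of recording
temp = 34.0 + 0.2 * np.sin(2 * np.pi * 0.25 * t) \
            + 0.01 * np.random.randn(t.size)   # small sensor noise
breaths_per_min = dominant_frequency(temp, fs) * 60
```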

  1. Encoded diffractive optics for full-spectrum computational imaging

    KAUST Repository

    Heide, Felix

    2016-09-16

    Diffractive optical elements can be realized as ultra-thin plates that offer significantly reduced footprint and weight compared to refractive elements. However, such elements introduce severe chromatic aberrations and are not variable, unless used in combination with other elements in a larger, reconfigurable optical system. We introduce numerically optimized encoded phase masks in which different optical parameters such as focus or zoom can be accessed through changes in the mechanical alignment of an ultra-thin stack of two or more masks. Our encoded diffractive designs are combined with a new computational approach for self-calibrating imaging (blind deconvolution) that can restore high-quality images several orders of magnitude faster than the state of the art without pre-calibration of the optical system. This co-design of optics and computation enables tunable, full-spectrum imaging using thin diffractive optics.

  2. Encoded diffractive optics for full-spectrum computational imaging

    KAUST Repository

    Heide, Felix; Fu, Qiang; Peng, Yifan; Heidrich, Wolfgang

    2016-01-01

    Diffractive optical elements can be realized as ultra-thin plates that offer significantly reduced footprint and weight compared to refractive elements. However, such elements introduce severe chromatic aberrations and are not variable, unless used in combination with other elements in a larger, reconfigurable optical system. We introduce numerically optimized encoded phase masks in which different optical parameters such as focus or zoom can be accessed through changes in the mechanical alignment of an ultra-thin stack of two or more masks. Our encoded diffractive designs are combined with a new computational approach for self-calibrating imaging (blind deconvolution) that can restore high-quality images several orders of magnitude faster than the state of the art without pre-calibration of the optical system. This co-design of optics and computation enables tunable, full-spectrum imaging using thin diffractive optics.

  3. Image Relaxation Matching Based on Feature Points for DSM Generation

    Institute of Scientific and Technical Information of China (English)

    ZHENG Shunyi; ZHANG Zuxun; ZHANG Jianqing

    2004-01-01

    In photogrammetry and remote sensing, image matching is a basic and crucial process for automatic DEM generation. In this paper we present an image relaxation matching method based on feature points. This method can be considered an extension of regular grid-point-based matching. It avoids the shortcomings of grid-point-based matching: for example, it can avoid areas of low or even no texture, where errors frequently appear in cross-correlation matching. Meanwhile, it makes full use of mature techniques such as probability relaxation, image pyramids and the like, which have already been used successfully in grid-point matching. Application of the technique to DEM generation in different regions proved that it is more reasonable and reliable.

  4. A computer code to simulate X-ray imaging techniques

    International Nuclear Information System (INIS)

    Duvauchelle, Philippe; Freud, Nicolas; Kaftandjian, Valerie; Babot, Daniel

    2000-01-01

    A computer code was developed to simulate the operation of radiographic, radioscopic or tomographic devices. The simulation is based on ray-tracing techniques and on the X-ray attenuation law. The use of computer-aided drawing (CAD) models enables simulations to be carried out with complex three-dimensional (3D) objects and the geometry of every component of the imaging chain, from the source to the detector, can be defined. Geometric unsharpness, for example, can be easily taken into account, even in complex configurations. Automatic translations or rotations of the object can be performed to simulate radioscopic or tomographic image acquisition. Simulations can be carried out with monochromatic or polychromatic beam spectra. This feature enables, for example, the beam hardening phenomenon to be dealt with or dual energy imaging techniques to be studied. The simulation principle is completely deterministic and consequently the computed images present no photon noise. Nevertheless, the variance of the signal associated with each pixel of the detector can be determined, which enables contrast-to-noise ratio (CNR) maps to be computed, in order to predict quantitatively the detectability of defects in the inspected object. The CNR is a relevant indicator for optimizing the experimental parameters. This paper provides several examples of simulated images that illustrate some of the rich possibilities offered by our software. Depending on the simulation type, the computation time order of magnitude can vary from 0.1 s (simple radiographic projection) up to several hours (3D tomography) on a PC, with a 400 MHz microprocessor. Our simulation tool proves to be useful in developing new specific applications, in choosing the most suitable components when designing a new testing chain, and in saving time by reducing the number of experimental tests
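    The core of such a simulator is the X-ray attenuation law the abstract mentions: each ray's transmitted intensity follows the Beer-Lambert law, and a contrast-to-noise ratio can then be formed to judge defect detectability. A minimal sketch; the attenuation coefficients, thicknesses and noise level are illustrative, not values from the code described above.

```python
import numpy as np

def transmitted_intensity(i0, mus, ts):
    """Beer-Lambert attenuation along one ray through a stack of materials
    with attenuation coefficients mus (1/cm) and thicknesses ts (cm)."""
    return i0 * np.exp(-np.sum(np.asarray(mus) * np.asarray(ts)))

def cnr(signal, background, noise_std):
    """Contrast-to-noise ratio of a defect signal against background."""
    return abs(signal - background) / noise_std

# Ray through 2 cm of aluminium (mu ~ 0.6 /cm); a 0.1 cm void shortens
# the attenuating path, so more intensity reaches the detector there.
i_bg  = transmitted_intensity(1000.0, [0.6], [2.0])
i_def = transmitted_intensity(1000.0, [0.6], [1.9])
defect_cnr = cnr(i_def, i_bg, noise_std=2.0)
```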

  5. A computer code to simulate X-ray imaging techniques

    Energy Technology Data Exchange (ETDEWEB)

    Duvauchelle, Philippe E-mail: philippe.duvauchelle@insa-lyon.fr; Freud, Nicolas; Kaftandjian, Valerie; Babot, Daniel

    2000-09-01

    A computer code was developed to simulate the operation of radiographic, radioscopic or tomographic devices. The simulation is based on ray-tracing techniques and on the X-ray attenuation law. The use of computer-aided drawing (CAD) models enables simulations to be carried out with complex three-dimensional (3D) objects and the geometry of every component of the imaging chain, from the source to the detector, can be defined. Geometric unsharpness, for example, can be easily taken into account, even in complex configurations. Automatic translations or rotations of the object can be performed to simulate radioscopic or tomographic image acquisition. Simulations can be carried out with monochromatic or polychromatic beam spectra. This feature enables, for example, the beam hardening phenomenon to be dealt with or dual energy imaging techniques to be studied. The simulation principle is completely deterministic and consequently the computed images present no photon noise. Nevertheless, the variance of the signal associated with each pixel of the detector can be determined, which enables contrast-to-noise ratio (CNR) maps to be computed, in order to predict quantitatively the detectability of defects in the inspected object. The CNR is a relevant indicator for optimizing the experimental parameters. This paper provides several examples of simulated images that illustrate some of the rich possibilities offered by our software. Depending on the simulation type, the computation time order of magnitude can vary from 0.1 s (simple radiographic projection) up to several hours (3D tomography) on a PC, with a 400 MHz microprocessor. Our simulation tool proves to be useful in developing new specific applications, in choosing the most suitable components when designing a new testing chain, and in saving time by reducing the number of experimental tests.

  6. Generating color terrain images in an emergency response system

    International Nuclear Information System (INIS)

    Belles, R.D.

    1985-08-01

    The Atmospheric Release Advisory Capability (ARAC) provides real-time assessments of the consequences resulting from an atmospheric release of radioactive material. In support of this operation, a system has been created which integrates numerical models, data acquisition systems, data analysis techniques, and professional staff. Of particular importance is the rapid generation of graphical images of the terrain surface in the vicinity of the accident site. A terrain data base and an associated acquisition system have been developed that provide the required terrain data. This data is then used as input to a collection of graphics programs which create and display realistic color images of the terrain. The graphics system currently has the capability of generating color shaded relief images from both overhead and perspective viewpoints within minutes. These images serve to quickly familiarize ARAC assessors with the terrain near the release location, and thus permit them to make better informed decisions in modeling the behavior of the released material. 7 refs., 8 figs

  7. Computer-generated ovaries to assist follicle counting experiments.

    Directory of Open Access Journals (Sweden)

    Angelos Skodras

    Full Text Available Precise estimation of the number of follicles in ovaries is of key importance in the field of reproductive biology, both from a developmental point of view, where follicle numbers are determined at specific time points, as well as from a therapeutic perspective, determining the adverse effects of environmental toxins and cancer chemotherapeutics on the reproductive system. The two main factors affecting follicle number estimates are the sampling method and the variation in follicle numbers within animals of the same strain, due to biological variability. This study aims at assessing the effect of these two factors, when estimating ovarian follicle numbers of neonatal mice. We developed computer algorithms, which generate models of neonatal mouse ovaries (simulated ovaries, with characteristics derived from experimental measurements already available in the published literature. The simulated ovaries are used to reproduce in-silico counting experiments based on unbiased stereological techniques; the proposed approach provides the necessary number of ovaries and sampling frequency to be used in the experiments given a specific biological variability and a desirable degree of accuracy. The simulated ovary is a novel, versatile tool which can be used in the planning phase of experiments to estimate the expected number of animals and workload, ensuring appropriate statistical power of the resulting measurements. Moreover, the idea of the simulated ovary can be applied to other organs made up of large numbers of individual functional units.
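    The in-silico counting idea can be sketched as follows: generate a toy "ovary" of follicles spread over serial sections, then estimate the total by fractionator-style sampling of every k-th section. The model below is deliberately simplistic (uniform placement, no biological clustering) and is not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulated_ovary(n_follicles, n_sections=100):
    """Toy model: place follicles uniformly across serial sections."""
    sections = rng.integers(0, n_sections, size=n_follicles)
    return np.bincount(sections, minlength=n_sections)

def fractionator_estimate(counts_per_section, period, offset=0):
    """Count every `period`-th section and scale up (fractionator sampling)."""
    sampled = counts_per_section[offset::period]
    return period * int(np.sum(sampled))

ovary = simulated_ovary(n_follicles=2000)
estimate = fractionator_estimate(ovary, period=5)  # counts 20 of 100 sections
```

    Repeating the experiment over many simulated ovaries would give the spread of the estimator, which is the quantity the study uses to plan sample sizes.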

  8. A new generation drilling rig: hydraulically powered and computer controlled

    Energy Technology Data Exchange (ETDEWEB)

    Laurent, M.; Angman, P.; Oveson, D. [Tesco Corp., Calgary, AB, (Canada)

    1999-11-01

    Development, testing and operation of a new generation of hydraulically powered and computer controlled drilling rig that incorporates a number of features that enhance functionality and productivity is described. The rig features modular construction, a large heated common drilling machinery room, and permanently mounted drawworks which, along with the permanently installed top drive, significantly reduce rig-up/rig-down time. Also featured are closed and open hydraulic systems and a unique hydraulic distribution manifold. All functions are controlled through a programmable logic controller (PLC), providing almost unlimited interlocks and calculations to increase rig safety and efficiency. Simplified diagnostic routines, remote monitoring and troubleshooting are also part of the system. To date, two rigs are in operation. Performance of both rigs has been rated as 'very good'. Little or no operational problems have been experienced; downtime has averaged 0.61 per cent since August 1998, when the first of the two rigs went into operation. The most important future application for this rig is the casing drilling process, which eliminates the need for drill pipe and tripping. It also reduces the drilling time lost to unscheduled events such as reaming, fishing and taking kicks while tripping. 1 tab., 6 figs.

  9. A comparative study between xerographic, computer-assisted overlay generation and animated-superimposition methods in bite mark analyses.

    Science.gov (United States)

    Tai, Meng Wei; Chong, Zhen Feng; Asif, Muhammad Khan; Rahmat, Rabiah A; Nambiar, Phrabhakaran

    2016-09-01

    This study compared the suitability and precision of xerographic and computer-assisted methods for bite mark investigations. Eleven subjects were asked to bite on their forearm and the bite marks were photographically recorded. Alginate impressions of the subjects' dentition were taken and their casts were made using dental stone. The overlays generated by the xerographic method were obtained by photocopying the subjects' casts, and the incisal edge outlines were then transferred onto a transparent sheet. The bite mark images were imported into Adobe Photoshop® software and printed to life-size. The bite mark analyses using xerographically generated overlays were done by manually comparing an overlay to the corresponding printed bite mark images. In the computer-assisted method, the subjects' casts were scanned into Adobe Photoshop®. The bite mark analyses using computer-assisted overlay generation were done by matching an overlay and the corresponding bite mark images digitally using Adobe Photoshop®. Another comparison method was superimposing the cast images on the corresponding bite mark images employing Adobe Photoshop® CS6 and GIF-Animator©. During analysis, each precision-determining criterion was given a score in the range 0-3, with higher scores indicating better matching. The Kruskal-Wallis H test showed a significant difference between the three sets of data (H=18.761, p<0.05). In conclusion, bite mark analysis using the computer-assisted animated-superimposition method was the most accurate, followed by computer-assisted overlay generation and lastly the xerographic method. The superior precision contributed by the digital method is discernible despite the human skin being a poor recording medium for bite marks. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
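    The Kruskal-Wallis comparison reported above can be reproduced in outline with SciPy. The 0-3 precision scores below are fabricated for illustration only and are not the study's data.

```python
from scipy.stats import kruskal

# Hypothetical precision scores (0-3 scale, as in the study) for the three
# methods across 11 bite marks; larger scores indicate better matching.
xerographic        = [1, 1, 2, 1, 0, 1, 2, 1, 1, 0, 1]
overlay_generation = [2, 2, 1, 2, 3, 2, 2, 1, 2, 2, 2]
superimposition    = [3, 3, 2, 3, 3, 2, 3, 3, 2, 3, 3]

# Non-parametric test for a difference between the three independent groups.
h_stat, p_value = kruskal(xerographic, overlay_generation, superimposition)
```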

  10. Generating high gray-level resolution monochrome displays with conventional computer graphics cards and color monitors.

    Science.gov (United States)

    Li, Xiangrui; Lu, Zhong-Lin; Xu, Pengjing; Jin, Jianzhong; Zhou, Yifeng

    2003-11-30

    Display systems based on conventional computer graphics cards are capable of generating images with about 8-bit luminance resolution. However, most vision experiments require more than 12 bits of luminance resolution. Pelli and Zhang [Spatial Vis. 10 (1997) 443] described a video attenuator for generating high luminance resolution displays on a monochrome monitor, or for driving just the green gun of a color monitor. Here we show how to achieve a white display by adding video amplifiers to duplicate the monochrome signal to drive all three guns of any color monitor. Given the scarcity of high-quality monochrome monitors, our method provides an inexpensive way to achieve high-resolution monochromatic displays using conventional, easy-to-obtain equipment. We describe the design principles, test results, and a few additional functionalities.

  11. Single instruction computer architecture and its application in image processing

    Science.gov (United States)

    Laplante, Phillip A.

    1992-03-01

    A single processing computer system using only half-adder circuits is described. In addition, it is shown that only a single hard-wired instruction is needed in the control unit to obtain a complete instruction set for this general purpose computer. Such a system has several advantages. First, it is intrinsically a RISC machine, in fact the 'ultimate RISC' machine. Second, because only a single type of logic element is employed, the entire computer system can be easily realized on a single, highly integrated chip. Finally, due to the homogeneous nature of the computer's logic elements, the computer has possible implementations as an optical or chemical machine. This in turn suggests possible paradigms for neural computing and artificial intelligence. After showing how we can implement a full-adder, min, max and other operations using the half-adder, we use an array of such full-adders to implement the dilation operation for two black and white images. Next we implement the erosion operation of two black and white images using a relative complement function and the properties of erosion and dilation. This approach was inspired by papers by van der Poel, in which a single instruction is used to furnish a complete set of general purpose instructions, and by Böhm and Jacopini, where it is shown that any problem can be solved using a Turing machine with one entry and one exit.
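    Two of the constructions described above are easy to sketch: a full adder built from two half-adders, and binary dilation of an image. The dilation here uses a direct neighbourhood maximum with a cross-shaped structuring element for clarity, rather than the paper's adder-array formulation; the toy image is hypothetical.

```python
def half_adder(a, b):
    """Half-adder on single bits: returns (sum, carry)."""
    return a ^ b, a & b

def full_adder(a, b, cin):
    """Full adder built from two half-adders plus an OR of the carries."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, c1 | c2

def dilate(image, offsets=((0, 0), (0, 1), (0, -1), (1, 0), (-1, 0))):
    """Binary dilation of a 0/1 image with a cross-shaped structuring element."""
    rows, cols = len(image), len(image[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for dr, dc in offsets:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and image[rr][cc]:
                    out[r][c] = 1
    return out

img = [[0, 0, 0],
       [0, 1, 0],
       [0, 0, 0]]
dilated = dilate(img)   # the centre pixel grows into a 5-pixel cross
```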

  12. Multi-viewpoint Image Array Virtual Viewpoint Rapid Generation Algorithm Based on Image Layering

    Science.gov (United States)

    Jiang, Lu; Piao, Yan

    2018-04-01

    The use of a multi-view image array combined with virtual viewpoint generation technology to record 3D scene information in large scenes has become one of the key technologies for the development of integral imaging. This paper presents a virtual viewpoint rendering method based on an image layering algorithm. First, the depth information of the reference viewpoint image is quickly obtained, with SAD chosen as the similarity measure. The reference image is then layered and the parallax is computed from the depth information. Based on the relative distance between the virtual viewpoint and the reference viewpoint, the image layers are weighted and panned. Finally, the virtual viewpoint image is rendered layer by layer according to the distance between the image layers and the viewer. This method avoids the disadvantages of the DIBR algorithm, such as its high-precision requirements on the depth map and its complex mapping operations. Experiments show that this algorithm can synthesize virtual viewpoints at any position within a 2×2 viewpoint range, and its rendering speed is also very impressive. On average, the method achieves satisfactory image quality: relative to real viewpoint images, the SSIM value reaches 0.9525, the PSNR reaches 38.353 dB, and the image histogram similarity reaches 93.77%.
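    The SAD-based depth step can be illustrated with a toy winner-takes-all block matcher; the function, window size, and synthetic image pair below are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    def sad_disparity(left, right, max_disp=8, win=3):
        """For every pixel, pick the horizontal shift d that minimises the
        sum of absolute differences (SAD) over a (2*win+1)^2 window."""
        h, w = left.shape
        disp = np.zeros((h, w), dtype=int)
        L = np.pad(left.astype(float), win, mode='edge')
        R = np.pad(right.astype(float), win, mode='edge')
        for y in range(h):
            for x in range(w):
                patch_l = L[y:y + 2 * win + 1, x:x + 2 * win + 1]
                best, best_d = np.inf, 0
                for d in range(min(max_disp, x) + 1):   # stay inside the image
                    patch_r = R[y:y + 2 * win + 1, x - d:x - d + 2 * win + 1]
                    cost = np.abs(patch_l - patch_r).sum()
                    if cost < best:
                        best, best_d = cost, d
                disp[y, x] = best_d
        return disp

    # synthetic pair: the right view is the left view shifted by 2 pixels,
    # so the true disparity is 2 everywhere away from the borders
    rng = np.random.default_rng(0)
    left = rng.random((10, 20))
    right = np.roll(left, -2, axis=1)
    disp = sad_disparity(left, right)
    ```

    Away from the image borders the matcher recovers the constant disparity of 2 exactly, because the SAD cost is zero only at the true shift.
    
    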

  13. Image based Monte Carlo modeling for computational phantom

    International Nuclear Information System (INIS)

    Cheng, M.; Wang, W.; Zhao, K.; Fan, Y.; Long, P.; Wu, Y.

    2013-01-01

    Full text of the publication follows. Evaluating the effects of ionizing radiation and the risk of radiation exposure on the human body has become one of the most important issues in the radiation protection and radiotherapy fields, helping to avoid unnecessary exposure and reduce harm to the human body. Accurately evaluating the dose to the human body requires increasingly realistic computational phantoms. However, manual description and verification of models for Monte Carlo (MC) simulation are tedious, error-prone and time-consuming; it is also difficult to locate and fix geometry errors and to describe material information and assign it to cells. MCAM (CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport Simulation) was developed as an interface program to achieve both CAD- and image-based automatic modeling. The advanced version (Version 6) of MCAM can automatically convert CT or segmented sectioned images into computational phantoms such as MCNP models. This image-based automatic modeling program (MCAM 6.0) has been tested with several medical and sectioned image sets and applied in the construction of Rad-HUMAN. Following manual segmentation and 3D reconstruction, Rad-HUMAN, a whole-body computational phantom of a Chinese adult female, was created with MCAM 6.0 from sectioned images of a Chinese visible human dataset. Rad-HUMAN contains 46 organs/tissues and faithfully represents the average anatomical characteristics of the Chinese female. The dose conversion coefficients (Dt/Ka) from kerma free-in-air to absorbed dose were calculated for Rad-HUMAN. The phantom can be applied to predict and evaluate dose distributions in a Treatment Planning System (TPS), as well as radiation exposure of the human body in radiation protection. (authors)

  14. Automatic Solitary Lung Nodule Detection in Computed Tomography Images Slices

    Science.gov (United States)

    Sentana, I. W. B.; Jawas, N.; Asri, S. A.

    2018-01-01

    A lung nodule is an early indicator of some lung diseases, including lung cancer. In Computed Tomography (CT) images, a nodule appears as a shape brighter than the surrounding lung. This research aims to develop an application that automatically detects lung nodules in CT images. The algorithm comprises several steps: image acquisition and conversion, image binarization, lung segmentation, blob detection, and classification. In the acquisition step, images are taken slice by slice from the original *.dicom format and each slice is converted into the *.tif image format. Binarization, using the Otsu algorithm, then separates the background and foreground of each slice. After removing the background, the lung itself is segmented so that nodules can be localized more easily. The Otsu algorithm is applied once more to detect nodule blobs in the localized lung area. The final step uses a Support Vector Machine (SVM) to classify the nodules. The application succeeds in detecting nearly round nodules above a certain size threshold. The results show drawbacks in the thresholding of nodule size and shape that need to be addressed in the next stage of the research. The algorithm also cannot detect nodules attached to the lung wall or lung channels, since the search depends only on colour differences.
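    The Otsu binarization step used twice in this pipeline can be sketched as an exhaustive search over the 256 grey levels for the threshold that maximises between-class variance; the synthetic bimodal test image below is illustrative.

    ```python
    import numpy as np

    def otsu_threshold(img):
        """Return the grey level maximising between-class variance of the
        histogram (Otsu's criterion)."""
        hist = np.bincount(img.ravel(), minlength=256).astype(float)
        prob = hist / hist.sum()
        best_t, best_var = 0, -1.0
        for t in range(1, 256):
            w0, w1 = prob[:t].sum(), prob[t:].sum()   # class weights
            if w0 == 0 or w1 == 0:
                continue
            mu0 = (np.arange(t) * prob[:t]).sum() / w0
            mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
            var_between = w0 * w1 * (mu0 - mu1) ** 2
            if var_between > best_var:
                best_var, best_t = var_between, t
        return best_t

    # bimodal test image: dark background around 50, bright blob around 200
    rng = np.random.default_rng(1)
    img = rng.normal(50, 10, (64, 64))
    img[20:40, 20:40] = rng.normal(200, 10, (20, 20))
    img = np.clip(img, 0, 255).astype(np.uint8)
    t = otsu_threshold(img)
    binary = img >= t          # foreground mask, as in the binarization step
    ```

    On a clearly bimodal histogram like this one, the threshold lands in the valley between the two modes and cleanly separates the bright blob from the background.
    
    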

  15. Robust generative asymmetric GMM for brain MR image segmentation.

    Science.gov (United States)

    Ji, Zexuan; Xia, Yong; Zheng, Yuhui

    2017-11-01

    Accurate segmentation of brain tissues from magnetic resonance (MR) images based on unsupervised statistical models such as the Gaussian mixture model (GMM) has been widely studied during the last few decades. However, most GMM-based segmentation methods suffer from limited accuracy due to the influence of noise and intensity inhomogeneity in brain MR images. To further improve the accuracy of brain MR image segmentation, this paper presents a Robust Generative Asymmetric GMM (RGAGMM) for simultaneous brain MR image segmentation and intensity inhomogeneity correction. First, we develop an asymmetric distribution to fit the data shapes, and thus construct a spatially constrained asymmetric model. Then, we incorporate two pseudo-likelihood quantities and bias field estimation into the model's log-likelihood, aiming to exploit the within-cluster and between-cluster neighboring priors and to alleviate the impact of intensity inhomogeneity, respectively. Finally, an expectation-maximization algorithm is derived to iteratively maximize the approximation of the data log-likelihood function, overcoming the intensity inhomogeneity in the image and segmenting the brain MR images simultaneously. To demonstrate the performance of the proposed algorithm, we first applied it to a synthetic brain MR image to show the intermediate illustrations and the estimated distribution. The next group of experiments was carried out on clinical 3T brain MR images containing quite serious intensity inhomogeneity and noise. We then quantitatively compared our algorithm to state-of-the-art segmentation approaches using the Dice coefficient (DC) on benchmark images obtained from IBSR and BrainWeb with different levels of noise and intensity inhomogeneity. The comparison results on various brain MR images demonstrate the superior performance of the proposed algorithm in dealing with noise and intensity inhomogeneity.
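    The expectation-maximization core that such models extend can be sketched for a plain symmetric 1-D GMM; the paper's asymmetric distribution, spatial priors, and bias-field term are deliberately omitted, and the cluster parameters below are illustrative.

    ```python
    import numpy as np

    def gmm_em(x, k=2, iters=50):
        """Plain EM for a 1-D Gaussian mixture: the symmetric core that
        RGAGMM-style models build on."""
        mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # spread initial means
        var = np.full(k, x.var())
        pi = np.full(k, 1.0 / k)
        for _ in range(iters):
            # E-step: responsibility of each component for each sample
            dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
                   / np.sqrt(2 * np.pi * var)
            resp = dens / dens.sum(axis=1, keepdims=True)
            # M-step: re-estimate weights, means, and variances
            nk = resp.sum(axis=0)
            pi = nk / len(x)
            mu = (resp * x[:, None]).sum(axis=0) / nk
            var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        return pi, mu, var

    # two well-separated intensity clusters, like two tissue classes
    rng = np.random.default_rng(2)
    x = np.concatenate([rng.normal(40.0, 5.0, 500), rng.normal(120.0, 5.0, 500)])
    pi, mu, var = gmm_em(x)
    ```

    Voxel labels then follow from the maximum responsibility; the paper's contribution is precisely the extra terms that keep this assignment stable under noise and bias fields.
    
    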

  16. Batch Image Encryption Using Generated Deep Features Based on Stacked Autoencoder Network

    Directory of Open Access Journals (Sweden)

    Fei Hu

    2017-01-01

    Full Text Available Chaos-based algorithms have been widely adopted to encrypt images, but previous chaos-based encryption schemes are not secure enough for batch image encryption, because the images are usually encrypted with a single sequence: once one encrypted image is cracked, all the others become vulnerable. In this paper, we propose a batch image encryption scheme into which a stacked autoencoder (SAE) network is introduced to generate two chaotic matrices; one is used to produce a total shuffling matrix that shuffles the pixel positions of each plain image, and the other produces a series of independent sequences, each of which is used to confuse the relationship between the permutated image and the encrypted image. The scheme is efficient because of the parallel-computing advantages of the SAE, which lead to a significant reduction in run-time complexity; in addition, the hybrid application of shuffling and confusing enhances the encryption effect. To evaluate the efficiency of our scheme, we compared it with the prevalent logistic map and achieved better running times. The experimental results and analysis show that our scheme has a good encryption effect and is able to resist brute-force, statistical, and differential attacks.
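    A minimal sketch of the shuffle-then-confuse idea, with a seeded NumPy generator standing in for the SAE-generated chaotic matrices; the function names and key handling are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def encrypt(img, key):
        """Stage 1 shuffles pixel positions with a key-derived permutation;
        stage 2 confuses values by XOR with an independent byte stream."""
        rng = np.random.default_rng(key)
        perm = rng.permutation(img.size)                         # shuffling matrix
        stream = rng.integers(0, 256, img.size, dtype=np.uint8)  # confusion sequence
        shuffled = img.ravel()[perm]
        return (shuffled ^ stream).reshape(img.shape)

    def decrypt(cipher, key):
        rng = np.random.default_rng(key)          # regenerate the same key material
        perm = rng.permutation(cipher.size)
        stream = rng.integers(0, 256, cipher.size, dtype=np.uint8)
        shuffled = cipher.ravel() ^ stream        # undo the confusion
        plain = np.empty_like(shuffled)
        plain[perm] = shuffled                    # invert the permutation
        return plain.reshape(cipher.shape)

    rng = np.random.default_rng(3)
    img = rng.integers(0, 256, (8, 8), dtype=np.uint8)
    cipher = encrypt(img, key=42)
    restored = decrypt(cipher, key=42)
    ```

    Because both stages are driven by the same key, decryption simply regenerates the permutation and byte stream and applies them in reverse order.
    
    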

  17. Empowering enterprises through next-generation enterprise computing

    NARCIS (Netherlands)

    van Sinderen, Marten J.; Andrade Almeida, João

    Enterprise computing is concerned with exploiting interconnected computers to improve the efficiency and effectiveness of larger companies. Such companies form business organizations that manage various sorts of information, used by disparate groups of people, who are situated at different

  18. Supporting hypothesis generation by learners exploring an interactive computer simulation

    NARCIS (Netherlands)

    van Joolingen, Wouter R.; de Jong, Ton

    1992-01-01

    Computer simulations provide environments enabling exploratory learning. Research has shown that these types of learning environments are promising applications of computer assisted learning but also that they introduce complex learning settings, involving a large number of learning processes. This

  19. Digital image processing and analysis human and computer vision applications with CVIPtools

    CERN Document Server

    Umbaugh, Scott E

    2010-01-01

    Section I Introduction to Digital Image Processing and AnalysisDigital Image Processing and AnalysisOverviewImage Analysis and Computer VisionImage Processing and Human VisionKey PointsExercisesReferencesFurther ReadingComputer Imaging SystemsImaging Systems OverviewImage Formation and SensingCVIPtools SoftwareImage RepresentationKey PointsExercisesSupplementary ExercisesReferencesFurther ReadingSection II Digital Image Analysis and Computer VisionIntroduction to Digital Image AnalysisIntroductionPreprocessingBinary Image AnalysisKey PointsExercisesSupplementary ExercisesReferencesFurther Read

  20. A Stochastic Approach for Blurred Image Restoration and Optical Flow Computation on Field Image Sequence

    Institute of Scientific and Technical Information of China (English)

    高文; 陈熙霖

    1997-01-01

    The blur in target images caused by camera vibration, due to robot motion or hand shaking, and by objects moving in the background scene is difficult to deal with in a computer vision system. In this paper, the authors study the relation model between motion and blur in the case of object motion in a video image sequence, and develop a practical computation algorithm for both motion analysis and blurred image restoration. Combining general optical flow with a stochastic process, the paper presents an approach by which the motion velocity can be calculated from blurred images; conversely, the blurred image can be restored using the obtained motion information. To overcome the small-motion limitation of general optical flow computation, a multiresolution optical flow algorithm based on MAP estimation is proposed. The blurred image is restored using an iterative algorithm and the obtained motion velocity. Experiments show that the proposed approach works well for both motion velocity computation and blurred image restoration.

  1. Fast precalculated triangular mesh algorithm for 3D binary computer-generated holograms.

    Science.gov (United States)

    Yang, Fan; Kaczorowski, Andrzej; Wilkinson, Tim D

    2014-12-10

    A new method for constructing computer-generated holograms using a precalculated triangular mesh is presented. The speed of calculation can be increased dramatically by exploiting both the precalculated base triangle and GPU parallel computing. Unlike algorithms using point-based sources, this method can reconstruct a more vivid 3D object instead of a "hollow image." In addition, there is no need to do a fast Fourier transform for each 3D element every time. A ferroelectric liquid crystal spatial light modulator is used to display the binary hologram within our experiment and the hologram of a base right triangle is produced by utilizing just a one-step Fourier transform in the 2D case, which can be expanded to the 3D case by multiplying by a suitable Fresnel phase plane. All 3D holograms generated in this paper are based on Fresnel propagation; thus, the Fresnel plane is treated as a vital element in producing the hologram. A GeForce GTX 770 graphics card with 2 GB memory is used to achieve parallel computing.
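    A minimal sketch of the one-step Fourier/Fresnel idea (not the authors' precalculated-triangle algorithm): Fourier-transform a diffused target, multiply by a Fresnel phase plane for the chosen depth, and binarize the phase as a ferroelectric SLM requires. The wavelength, propagation distance, and pixel pitch below are illustrative assumptions.

    ```python
    import numpy as np

    def binary_fresnel_hologram(target, wavelength=532e-9, z=0.2, pitch=8e-6):
        """One-step binary phase hologram: FFT of a randomly diffused target,
        times a Fresnel lens phase for depth z, thresholded at phase zero."""
        n = target.shape[0]
        rng = np.random.default_rng(0)
        diffuser = np.exp(2j * np.pi * rng.random(target.shape))   # random phase
        field = np.fft.fftshift(np.fft.fft2(target * diffuser))
        # Fresnel phase plane shifting the reconstruction to depth z
        x = (np.arange(n) - n / 2) * pitch
        X, Y = np.meshgrid(x, x)
        fresnel = np.exp(1j * np.pi * (X ** 2 + Y ** 2) / (wavelength * z))
        phase = np.angle(field * fresnel)
        return (phase > 0).astype(np.uint8)     # binary phase for the SLM

    target = np.zeros((64, 64))
    target[24:40, 24:40] = 1.0                  # simple square target
    holo = binary_fresnel_hologram(target)
    ```

    Stacking such planes at different z values is what extends the 2D case to 3D, which is why the abstract treats the Fresnel plane as the vital element.
    
    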

  2. New generation of 3D desktop computer interfaces

    Science.gov (United States)

    Skerjanc, Robert; Pastoor, Siegmund

    1997-05-01

    Today's computer interfaces use 2-D displays showing windows, icons and menus and support mouse interactions for handling programs and data files. The interface metaphor is that of a writing desk with (partly) overlapping sheets of documents placed on its top. Recent advances in the development of 3-D display technology give the opportunity to take the interface concept a radical stage further by breaking the design limits of the desktop metaphor. The major advantage of the envisioned 'application space' is that it offers an additional, immediately perceptible dimension to clearly and constantly visualize the structure and current state of interrelations between documents, videos, application programs and networked systems. In this context, we describe the development of a visual operating system (VOS). Under VOS, applications appear as objects in 3-D space. Users can graphically connect selected objects to enable communication between the respective applications. VOS includes a general concept of visual and object-oriented programming for tasks ranging from low-level programming up to high-level application configuration. In order to enable practical operation in an office or at home for many hours, the system should be very comfortable to use. Since typical 3-D equipment used in virtual-reality applications (head-mounted displays, data gloves) is rather cumbersome and straining, we suggest the use of off-head displays and contact-free interaction techniques. In this article, we introduce an autostereoscopic 3-D display and connected video-based interaction techniques which allow viewpoint-dependent imaging (by head tracking) and visually controlled modification of data objects and links (by gaze tracking, e.g., to pick 3-D objects just by looking at them).

  3. Generative Computer Assisted Instruction: An Application of Artificial Intelligence to CAI.

    Science.gov (United States)

    Koffman, Elliot B.

    Frame-oriented computer-assisted instruction (CAI) systems dominate the field, but these mechanized programed texts utilize the computational power of the computer to a minimal degree and are difficult to modify. Newer, generative CAI systems which are supplied with a knowledge of subject matter can generate their own problems and solutions, can…

  4. Computational imaging using lightweight diffractive-refractive optics

    KAUST Repository

    Peng, Yifan

    2015-11-23

    Diffractive optical elements (DOE) show great promise for imaging optics that are thinner and more lightweight than conventional refractive lenses while preserving their light efficiency. Unfortunately, severe spectral dispersion currently limits the use of DOEs in consumer-level lens design. In this article, we jointly design lightweight diffractive-refractive optics and post-processing algorithms to enable imaging under white light illumination. Using the Fresnel lens as a general platform, we show three phase-plate designs, including a super-thin stacked plate design, a diffractive-refractive-hybrid lens, and a phase coded-aperture lens. Combined with cross-channel deconvolution algorithm, both spherical and chromatic aberrations are corrected. Experimental results indicate that using our computational imaging approach, diffractive-refractive optics is an alternative candidate to build light efficient and thin optics for white light imaging.

  5. Computational imaging using lightweight diffractive-refractive optics

    KAUST Repository

    Peng, Yifan; Fu, Qiang; Amata, Hadi; Su, Shuochen; Heide, Felix; Heidrich, Wolfgang

    2015-01-01

    Diffractive optical elements (DOE) show great promise for imaging optics that are thinner and more lightweight than conventional refractive lenses while preserving their light efficiency. Unfortunately, severe spectral dispersion currently limits the use of DOEs in consumer-level lens design. In this article, we jointly design lightweight diffractive-refractive optics and post-processing algorithms to enable imaging under white light illumination. Using the Fresnel lens as a general platform, we show three phase-plate designs, including a super-thin stacked plate design, a diffractive-refractive-hybrid lens, and a phase coded-aperture lens. Combined with cross-channel deconvolution algorithm, both spherical and chromatic aberrations are corrected. Experimental results indicate that using our computational imaging approach, diffractive-refractive optics is an alternative candidate to build light efficient and thin optics for white light imaging.

  6. Cone beam computed tomography: A boon for maxillofacial imaging

    Directory of Open Access Journals (Sweden)

    Sreenivas Rao Ghali

    2017-01-01

    Full Text Available In day-to-day practice, the radiographic techniques used individually or in combination suffer from the inherent limits of all planar two-dimensional (2D) projections, such as magnification, distortion, superimposition, and misrepresentation of anatomic structures. The introduction of cone-beam computed tomography (CBCT), specifically dedicated to imaging the maxillofacial region, heralds a major shift from a 2D to a three-dimensional (3D) approach. It provides a complete 3D view of the maxilla, mandible, teeth, and supporting structures with relatively high resolution, allowing more accurate diagnosis, treatment planning and monitoring, and analysis of outcomes than conventional 2D images, along with low radiation exposure to the patient. CBCT has opened up new vistas for the use of 3D imaging as a diagnostic and treatment planning tool in dentistry. This paper provides an overview of the imaging principles, underlying technology, and dental applications, and in particular focuses on the emerging role of CBCT in dentistry.

  7. Low-dose computed tomographic imaging in orbital trauma

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, A.; Whitehouse, R.W. (Manchester Univ. (United Kingdom). Dept. of Diagnostic Radiology)

    1993-08-01

    The authors review findings in 75 computed tomographic (CT) examinations of 66 patients with orbital trauma who were imaged using a low-radiation-dose CT technique. Imaging was performed using a dynamic scan mode and exposure factors of 120 kVp and 80 mAs resulting in a skin dose of 11 mGy with an effective dose-equivalent of 0.22 mSv. Image quality was diagnostic in all cases and excellent in 73 examinations. Soft-tissue abnormalities within the orbit including muscle adhesions were well demonstrated both on primary axial and reconstructed multiplanar images. The benefits of multiplanar reconstructions are stressed and the contribution of soft-tissue injuries to symptomatic diplopia examined. (author).

  8. ImageParser: a tool for finite element generation from three-dimensional medical images

    Directory of Open Access Journals (Sweden)

    Yamada T

    2004-10-01

    Full Text Available Abstract Background The finite element method (FEM) is a powerful mathematical tool to simulate and visualize the mechanical deformation of tissues and organs during medical examinations or interventions. It is still a challenge to build an FEM mesh directly from a volumetric image, partly because the regions (or structures) of interest (ROIs) may be irregular and fuzzy. Methods A software package, ImageParser, is developed to generate an FEM mesh from 3-D tomographic medical images. This software uses a semi-automatic method to detect ROIs from the context of the image, including neighboring tissues and organs, completes segmentation of different tissues, and meshes the organ into elements. Results The ImageParser is shown to build up an FEM model for simulating the mechanical responses of the breast based on 3-D CT images. The breast is compressed by two plate paddles under an overall displacement as large as 20% of the initial distance between the paddles. The strain and tangential Young's modulus distributions are specified for the biomechanical analysis of breast tissues. Conclusion The ImageParser can successfully extract the geometry of ROIs from a complex medical image and generate an FEM mesh with user-defined segmentation information.

  9. Generation of CR mammographic image for evaluation quality parameters

    International Nuclear Information System (INIS)

    Flores, Mabel B.; Mourao, Arnaldo P.; Centro Federal de Educacao Tecnologica de Minas Gerais

    2017-01-01

    Currently, among the diseases most feared by women, breast cancer ranks first in the world, with an incidence of more than 1.6 million cases and a mortality of more than 521.9 thousand deaths per year; this makes it the type of cancer with the highest incidence and mortality among those that mainly affect women, not considering non-melanoma skin cancer. In Brazil, more than 14.4 thousand deaths were registered in 2013 and more than 57 thousand new cases were estimated for 2016. Computed radiography (CR) is widely used in Brazil to generate digital mammographic images for breast cancer screening. The aim of this investigation is to study the variation of CR plate response to exposure to X-ray beams in a mammography unit. Two CR plates from different manufacturers and a compressed breast phantom containing calcium carbonate structures of different sizes, simulating calcifications, were used. An X-ray beam generated at 30 kV was selected for successive exposures of each plate, with exposure times varying from 0.5 to 3.5 s, to obtain the raw images. The acquired images were evaluated with the ImageJ software to determine the saturation time of the plates when exposed to X-ray beams and the qualitative resolution of each plate. The plates were found to saturate at different times when exposed under the same conditions. By means of the images acquired with the breast phantom, it was possible to observe only calcium carbonate structures larger than 177 μm. (author)

  10. Ortho Image and DTM Generation with Intelligent Methods

    Science.gov (United States)

    Bagheri, H.; Sadeghian, S.

    2013-10-01

    Nowadays, artificial intelligence algorithms are considered in GIS and remote sensing. Genetic algorithms and artificial neural networks are two intelligent methods used for optimizing image processing programs such as edge extraction; these algorithms are very useful for solving complex problems. In this paper, the ability and application of genetic algorithms and artificial neural networks in geospatial production processes, such as geometric modelling of satellite images for orthophoto generation and height interpolation in raster Digital Terrain Model production, are discussed. First, the geometric potential of Ikonos-2 and Worldview-2 was tested with rational functions and 2D and 3D polynomials. Comprehensive experiments were also carried out to evaluate the viability of the genetic algorithm for optimization of rational functions and 2D and 3D polynomials. Considering the quality of the Ground Control Points, the accuracy (RMSE) with the genetic algorithm and the 3D polynomial method for the Ikonos-2 Geo image was 0.508 pixels, and the accuracy (RMSE) with the genetic algorithm and the rational function method for the Worldview-2 image was 0.930 pixels. As a further intelligent optimization method, neural networks were used. Using a perceptron network on the Worldview-2 image, a result of 0.84 pixels was obtained with 4 neurons in the middle layer. The conclusion is that artificial intelligence algorithms make it possible to optimize the existing models and obtain better results than the usual ones. Finally, artificial intelligence methods, genetic algorithms as well as neural networks, were examined on sample data for optimizing interpolation and generating Digital Terrain Models. The results were then compared with existing conventional methods, and it appeared that these methods have a high capacity for height interpolation and that using these networks for interpolating and optimizing the weighting methods based on inverse

  11. Generation of CR mammographic image for evaluation quality parameters

    Energy Technology Data Exchange (ETDEWEB)

    Flores, Mabel B.; Mourao, Arnaldo P., E-mail: mbustos@ufmg.br, E-mail: apratabhz@gmail.com [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear; Centro Federal de Educacao Tecnologica de Minas Gerais (CEFET-MG), Belo Horizonte, MG (Brazil). Centro de Engenharia Politecnica

    2017-11-01

    Currently, among the diseases most feared by women, breast cancer ranks first in the world, with an incidence of more than 1.6 million cases and a mortality of more than 521.9 thousand deaths per year; this makes it the type of cancer with the highest incidence and mortality among those that mainly affect women, not considering non-melanoma skin cancer. In Brazil, more than 14.4 thousand deaths were registered in 2013 and more than 57 thousand new cases were estimated for 2016. Computed radiography (CR) is widely used in Brazil to generate digital mammographic images for breast cancer screening. The aim of this investigation is to study the variation of CR plate response to exposure to X-ray beams in a mammography unit. Two CR plates from different manufacturers and a compressed breast phantom containing calcium carbonate structures of different sizes, simulating calcifications, were used. An X-ray beam generated at 30 kV was selected for successive exposures of each plate, with exposure times varying from 0.5 to 3.5 s, to obtain the raw images. The acquired images were evaluated with the ImageJ software to determine the saturation time of the plates when exposed to X-ray beams and the qualitative resolution of each plate. The plates were found to saturate at different times when exposed under the same conditions. By means of the images acquired with the breast phantom, it was possible to observe only calcium carbonate structures larger than 177 μm. (author)

  12. Accurate measurement of surface areas of anatomical structures by computer-assisted triangulation of computed tomography images

    Energy Technology Data Exchange (ETDEWEB)

    Allardice, J.T.; Jacomb-Hood, J.; Abulafi, A.M.; Williams, N.S. (Royal London Hospital (United Kingdom)); Cookson, J.; Dykes, E.; Holman, J. (London Hospital Medical College (United Kingdom))

    1993-05-01

    There is a need for accurate surface area measurement of internal anatomical structures in order to define light dosimetry in adjunctive intraoperative photodynamic therapy (AIOPDT). The authors investigated whether computer-assisted triangulation of serial sections generated by computed tomography (CT) scanning can give an accurate assessment of the surface area of the walls of the true pelvis after anterior resection and before colorectal anastomosis. They show that the technique of paper density tessellation is an acceptable method of measuring the surface areas of phantom objects, with a maximum error of 0.5%, and is used as the gold standard. Computer-assisted triangulation of CT images of standard geometric objects and accurately-constructed pelvic phantoms gives a surface area assessment with a maximum error of 2.5% compared with the gold standard. The CT images of 20 patients' pelves have been analysed by computer-assisted triangulation and this shows the surface area of the walls varies from 143 cm² to 392 cm². (Author).
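    Once the CT sections are triangulated, the surface area measurement reduces to summing triangle areas from edge cross products. A minimal sketch, with illustrative vertex data rather than a real pelvic mesh:

    ```python
    import numpy as np

    def mesh_area(vertices, triangles):
        """Surface area of a triangulated surface: half the norm of the cross
        product of two edge vectors, summed over all triangles."""
        v = np.asarray(vertices, dtype=float)
        t = np.asarray(triangles)
        a, b, c = v[t[:, 0]], v[t[:, 1]], v[t[:, 2]]
        return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum()

    # a unit square in 3-D split into two triangles: total area 1.0
    verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
    tris = [(0, 1, 2), (0, 2, 3)]
    area = mesh_area(verts, tris)
    ```

    The same sum applies unchanged to the thousands of triangles produced from serial CT contours.
    
    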

  13. Metal Artifact Suppression in Dental Cone Beam Computed Tomography Images Using Image Processing Techniques.

    Science.gov (United States)

    Johari, Masoumeh; Abdollahzadeh, Milad; Esmaeili, Farzad; Sakhamanesh, Vahideh

    2018-01-01

    Dental cone beam computed tomography (CBCT) images suffer from severe metal artifacts. These artifacts degrade the quality of the acquired image and in some cases make it unsuitable for use. Streaking artifacts and cavities around teeth are the main causes of degradation. In this article, we propose a new artifact reduction algorithm with three parallel components. The first component extracts teeth based on modeling the image histogram with a Gaussian mixture model. The streaking artifact reduction component reduces artifacts by converting the image into the polar domain and applying morphological filtering. The third component fills cavities through a simple but effective morphological filtering operation. Finally, the results of these three components are combined in a fusion step to create a visually good image that is more compatible with the human visual system. Results show that the proposed algorithm reduces the artifacts of dental CBCT images and produces clean images.

  14. Myths versus reality in computed radiography image quality

    International Nuclear Information System (INIS)

    Mango, Steve; Castro, Luiz

    2009-01-01

    As NDE operations - particularly radiographic testing - transition from analog to digital technologies such as computed radiography (CR), users are learning that there is more to digital image quality than meets the eye. In fact, multiple factors determine the final perceived image quality of a computed radiograph. Many of these factors are misunderstood, and some are touted as the ''key parameter'' or ''magic bullet'' for producing optimum image quality. In reality, such claims are oversimplified and are more marketing hype than reality. The truth? Perceived image quality results from the cascaded effects of many factors, such as sharpness, system noise, spot size and pixel size, subject contrast, bit depth, radiographic technique, and so on. Many of these factors are within the control of radiographers or designers of equipment and media. This paper explains some of these key factors, dispels some of the myths surrounding them, and shows that qualities such as bigger, smaller, more, or less are not always better when it comes to CR image quality. (authors)

  15. Some computer applications and digital image processing in nuclear medicine

    International Nuclear Information System (INIS)

    Lowinger, T.

    1981-01-01

    Methods of digital image processing are applied to problems in nuclear medicine imaging. The symmetry properties of central nervous system lesions are exploited in an attempt to determine the three-dimensional radioisotope density distribution within the lesions. An algorithm developed by astronomers at the end of the 19th century to determine the distribution of matter in globular clusters is applied to tumors. This algorithm permits the emission-computed-tomographic reconstruction of spherical lesions from a single view. The three-dimensional radioisotope distribution derived by the application of the algorithm can be used to characterize the lesions. The applicability to nuclear medicine images of ten edge detection methods in general usage in digital image processing was evaluated. A general model of image formation by scintillation cameras is developed. The model assumes that objects to be imaged are composed of a finite set of points. The validity of the model has been verified by its ability to duplicate experimental results. Practical applications of this work involve quantitative assessment of the distribution of radiopharmaceuticals under clinical situations and the study of image processing algorithms

  16. Automated breast segmentation in ultrasound computer tomography SAFT images

    Science.gov (United States)

    Hopp, T.; You, W.; Zapf, M.; Tan, W. Y.; Gemmeke, H.; Ruiter, N. V.

    2017-03-01

    Ultrasound Computer Tomography (USCT) is a promising new imaging system for breast cancer diagnosis. An essential step before further processing is to remove the water background from the reconstructed images. In this paper we present a fully-automated image segmentation method based on three-dimensional active contours. The active contour method is extended by applying gradient vector flow and encoding the USCT aperture characteristics as additional weighting terms. A surface detection algorithm based on a ray model is developed to initialize the active contour, which is iteratively deformed to capture the breast outline in USCT reflection images. The evaluation with synthetic data showed that the method is able to cope with noisy images, and is not influenced by the position of the breast and the presence of scattering objects within the breast. The proposed method was applied to 14 in-vivo images resulting in an average surface deviation from a manual segmentation of 2.7 mm. We conclude that automated segmentation of USCT reflection images is feasible and produces results comparable to a manual segmentation. By applying the proposed method, reproducible segmentation results can be obtained without manual interaction by an expert.

  17. Secure Image Encryption Based On a Chua Chaotic Noise Generator

    Directory of Open Access Journals (Sweden)

    A. S. Andreatos

    2013-10-01

    Full Text Available This paper presents a secure image cryptography telecom system based on a Chua's circuit chaotic noise generator. A chaotic system based on synchronised Master–Slave Chua's circuits has been used as a chaotic true random number generator (CTRNG. Chaotic systems present unpredictable and complex behaviour. This characteristic, together with the dependence on the initial conditions as well as the tolerance of the circuit components, makes CTRNGs ideal for cryptography. In the proposed system, the transmitter mixes an input image with chaotic noise produced by a CTRNG. Using thresholding techniques, the chaotic signal is converted to a true random bit sequence. The receiver must be able to reproduce exactly the same chaotic noise in order to subtract it from the received signal. This becomes possible with synchronisation between the two Chua's circuits: through the use of specific techniques, the trajectory of the Slave chaotic system can be bound to that of the Master circuit producing (almost identical behaviour. Additional blocks have been used in order to make the system highly parameterisable and robust against common attacks. The whole system is simulated in Matlab. Simulation results demonstrate satisfactory performance as well as robustness against cryptanalysis. The system works with both greyscale and colour JPG images.
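    The core idea of the record above - threshold a chaotic signal into a bit stream and mix it with the image, with the receiver regenerating the same stream - can be sketched as follows. This is a toy illustration, not the authors' system: a logistic map stands in for the Chua circuit, and all parameter values are hypothetical.

```python
# Sketch of a chaotic-keystream image cipher (logistic map as a stand-in
# for the Chua circuit; x0 and r are hypothetical key parameters).
# Thresholding the orbit at 0.5 yields keystream bits, XORed with the image.

def chaotic_bytes(x0, r, n):
    """Generate n keystream bytes by thresholding a logistic-map orbit."""
    x, out = x0, []
    for _ in range(n):
        bits = 0
        for _ in range(8):                  # 8 threshold decisions -> 1 byte
            x = r * x * (1.0 - x)
            bits = (bits << 1) | (1 if x > 0.5 else 0)
        out.append(bits)
    return out

def xor_stream(data, key_bytes):
    return bytes(d ^ k for d, k in zip(data, key_bytes))

image = bytes(range(16))                    # toy "image" data
key = chaotic_bytes(x0=0.3141, r=3.9999, n=len(image))
cipher = xor_stream(image, key)

# The receiver, holding the same key parameters, regenerates the stream
# (the analogue of Master-Slave synchronisation) and subtracts it.
plain = xor_stream(cipher, chaotic_bytes(0.3141, 3.9999, len(image)))
assert plain == image
```

    As in the record, the security rests on the sensitivity of the chaotic orbit to its initial conditions: a receiver without the exact parameters produces an unrelated keystream.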

  18. Real-time generation of images with pixel-by-pixel spectra for a coded aperture imager with high spectral resolution

    International Nuclear Information System (INIS)

    Ziock, K.P.; Burks, M.T.; Craig, W.; Fabris, L.; Hull, E.L.; Madden, N.W.

    2003-01-01

    The capabilities of a coded aperture imager are significantly enhanced when a detector with excellent energy resolution is used. We are constructing such an imager with a 1.1 cm thick, crossed-strip, planar detector which has 38 strips of 2 mm pitch in each dimension followed by a large coaxial detector. Full value from this system is obtained only when the images are 'fully deconvolved' meaning that the energy spectrum is available from each pixel in the image. The large number of energy bins associated with the spectral resolution of the detector, and the fixed pixel size, present significant computational challenges in generating an image in a timely manner at the conclusion of a data acquisition. The long computation times currently preclude the generation of intermediate images during the acquisition itself. We have solved this problem by building the images on-line as each event comes in using pre-imaged arrays of the system response. The generation of these arrays and the use of fractional mask-to-detector pixel sampling is discussed
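    The on-line, event-by-event build described in the record above can be sketched as follows (toy dimensions and response values are hypothetical): for each event, the pre-imaged system-response row for its detector pixel is added into the image slice of its energy bin, so a fully deconvolved image cube exists at any instant of the acquisition.

```python
# Event-by-event accumulation of pre-imaged response arrays (toy sketch).
N_DET, N_IMG, N_E = 4, 4, 3                 # detector pixels, image pixels, energy bins

# Pre-imaged response: contribution of detector pixel d to each image pixel
# (in a real coded-aperture system this row encodes the mask cross-correlation).
response = [[1.0 if (d + p) % N_IMG == 0 else 0.25 for p in range(N_IMG)]
            for d in range(N_DET)]

image_cube = [[0.0] * N_IMG for _ in range(N_E)]  # one image per energy bin

def process_event(det_pixel, e_bin):
    """Fold one list-mode event into the running image cube."""
    row = response[det_pixel]
    slice_ = image_cube[e_bin]
    for p in range(N_IMG):
        slice_[p] += row[p]

for det, e in [(0, 0), (1, 0), (0, 2)]:     # toy list-mode event stream
    process_event(det, e)
```

    Because each event costs only one vector addition, intermediate images are available during the acquisition rather than after a costly post-hoc deconvolution.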

  19. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    Science.gov (United States)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTIS"s) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3.RTM. digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  20. Comparative study of ultrasound imaging, computed tomography and magnetic resonance imaging in gynecology

    International Nuclear Information System (INIS)

    Ishii, Kenji; Kobayashi, Hisaaki; Hoshihara, Takayuki; Kobayashi, Mitsunao; Suda, Yoshio; Takenaka, Eiichi; Sasa, Hidenori.

    1989-01-01

    We studied 18 patients who were operated at the National Defense Medical College Hospital and confirmed by pathological diagnosis. We compared ultrasound imaging, computed tomography (CT) and magnetic resonance imaging (MRI) of the patients. MRI was useful to diagnose enlargement of the uterine cavity and a small amount of ascites and to understand orientation of the pelvic organs. Ultrasound imaging is the most useful examination to diagnose gynecological diseases. But when it is difficult to diagnose by ultrasound imaging alone, we should employ either CT or MRI, or preferably both. (author)

  1. Multi-scale analysis of lung computed tomography images

    CERN Document Server

    Gori, I; Fantacci, M E; Preite Martinez, A; Retico, A; De Mitri, I; Donadio, S; Fulcheri, C

    2007-01-01

    A computer-aided detection (CAD) system for the identification of lung internal nodules in low-dose multi-detector helical Computed Tomography (CT) images was developed in the framework of the MAGIC-5 project. The three modules of our lung CAD system, a segmentation algorithm for lung internal region identification, a multi-scale dot-enhancement filter for nodule candidate selection and a multi-scale neural technique for false positive finding reduction, are described. The results obtained on a dataset of low-dose and thin-slice CT scans are shown in terms of free response receiver operating characteristic (FROC) curves and discussed.

  2. Computer-assisted detection of epileptiform focuses on SPECT images

    Science.gov (United States)

    Grzegorczyk, Dawid; Dunin-Wąsowicz, Dorota; Mulawka, Jan J.

    2010-09-01

    Epilepsy is a common nervous system disease, often related to consciousness disturbances and muscular spasm, which affects about 1% of the human population. Despite major technological advances in medicine in recent years, there has been insufficient progress towards overcoming it. Application of advanced statistical methods and computer image analysis offers the hope for accurate detection and later removal of epileptiform focuses, which are the cause of some types of epilepsy. The aim of this work was to create a computer system that would help to find and diagnose disorders of blood circulation in the brain. This may be helpful for diagnosing the onset of epileptic seizures.

  3. Computational information geometry for image and signal processing

    CERN Document Server

    Critchley, Frank; Dodson, Christopher

    2017-01-01

    This book focuses on the application and development of information geometric methods in the analysis, classification and retrieval of images and signals. It provides introductory chapters to help those new to information geometry and applies the theory to several applications. This area has developed rapidly over recent years, propelled by the major theoretical developments in information geometry, efficient data and image acquisition and the desire to process and interpret large databases of digital information. The book addresses both the transfer of methodology to practitioners involved in database analysis and in its efficient computational implementation.

  4. High spatial resolution CT image reconstruction using parallel computing

    International Nuclear Information System (INIS)

    Yin Yin; Liu Li; Sun Gongxing

    2003-01-01

    Using the PC cluster system with 16 dual CPU nodes, we accelerate the FBP and OR-OSEM reconstruction of high spatial resolution image (2048 x 2048). Based on the number of projections, we rewrite the reconstruction algorithms into parallel format and dispatch the tasks to each CPU. By parallel computing, the speedup factor is roughly equal to the number of CPUs, which can be up to about 25 times when 25 CPUs used. This technique is very suitable for real-time high spatial resolution CT image reconstruction. (authors)
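    The projection-wise split described in the record above can be sketched as follows: each worker backprojects a subset of the projection angles and the partial images are summed. A thread pool stands in for the 16-node PC cluster, and the 1D "image" and trivial unfiltered backprojection are toy stand-ins for illustration only.

```python
# Parallel backprojection by partitioning projections across workers (sketch).
from concurrent.futures import ThreadPoolExecutor

N_PIX = 8
projections = [[a + 1] * N_PIX for a in range(12)]   # 12 toy angle profiles

def backproject(subset):
    """Accumulate one partial image from a subset of projections."""
    partial = [0] * N_PIX
    for proj in subset:
        for i in range(N_PIX):
            partial[i] += proj[i]
    return partial

def chunks(seq, n):
    """Split seq into n roughly equal contiguous chunks."""
    k = (len(seq) + n - 1) // n
    return [seq[i:i + k] for i in range(0, len(seq), k)]

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(backproject, chunks(projections, 4)))

image = [sum(col) for col in zip(*partials)]         # merge partial images

assert image == backproject(projections)             # matches the serial result
```

    Because backprojection is additive over projections, the merge step is an exact sum, which is why the speedup factor scales roughly with the number of CPUs as reported.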

  5. High performance optical encryption based on computational ghost imaging with QR code and compressive sensing technique

    Science.gov (United States)

    Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan

    2015-10-01

    In this paper, we propose a high performance optical encryption (OE) scheme based on computational ghost imaging (GI) with QR code and compressive sensing (CS) technique, named QR-CGI-OE scheme. N random phase screens, generated by Alice, is a secret key and be shared with its authorized user, Bob. The information is first encoded by Alice with QR code, and the QR-coded image is then encrypted with the aid of computational ghost imaging optical system. Here, measurement results from the GI optical system's bucket detector are the encrypted information and be transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image with GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerical simulated results show that the authorized users can recover completely the original image, whereas the eavesdroppers can not acquire any information about the image even the eavesdropping ratio (ER) is up to 60% at the given measurement times. For the proposed scheme, the number of bits sent from Alice to Bob are reduced considerably and the robustness is enhanced significantly. Meantime, the measurement times in GI system is reduced and the quality of the reconstructed QR-coded image is improved.
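    The ghost-imaging reconstruction step underlying schemes like the one above can be sketched as follows (toy 1D object, with the pattern count, seed, and detection threshold all chosen for illustration): the object is recovered by correlating the bucket-detector values with the random illumination patterns.

```python
# Computational ghost imaging by pattern-bucket correlation (toy sketch).
import random

random.seed(3)                               # reproducible pattern draw
N_PIX, N_MEAS = 6, 6000
obj = [0, 1, 0, 1, 1, 0]                     # toy binary "QR-coded image"

# Random illumination patterns (the role of the N phase screens) and the
# scalar bucket value each produces after interacting with the object.
patterns = [[random.random() for _ in range(N_PIX)] for _ in range(N_MEAS)]
buckets = [sum(p * o for p, o in zip(pat, obj)) for pat in patterns]

mean_b = sum(buckets) / N_MEAS
ghost = []
for x in range(N_PIX):
    mean_i = sum(pat[x] for pat in patterns) / N_MEAS
    corr = sum(pat[x] * b for pat, b in zip(patterns, buckets)) / N_MEAS
    ghost.append(corr - mean_b * mean_i)     # per-pixel covariance estimate

recovered = [1 if g > 0.04 else 0 for g in ghost]   # assumed threshold
```

    The CS step in the actual scheme replaces this plain correlation with a sparse reconstruction, which is what allows the number of measurements (and hence bits sent to Bob) to be reduced.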

  6. A Technique for Generating Volumetric Cine-Magnetic Resonance Imaging

    International Nuclear Information System (INIS)

    Harris, Wendy; Ren, Lei; Cai, Jing; Zhang, You; Chang, Zheng; Yin, Fang-Fang

    2016-01-01

    Purpose: The purpose of this study was to develop a technique to generate on-board volumetric cine-magnetic resonance imaging (VC-MRI) using patient prior images, motion modeling, and on-board 2-dimensional cine MRI. Methods and Materials: One phase of a 4-dimensional MRI acquired during patient simulation is used as patient prior images. Three major respiratory deformation patterns of the patient are extracted from 4-dimensional MRI based on principal-component analysis. The on-board VC-MRI at any instant is considered as a deformation of the prior MRI. The deformation field is represented as a linear combination of the 3 major deformation patterns. The coefficients of the deformation patterns are solved by the data fidelity constraint using the acquired on-board single 2-dimensional cine MRI. The method was evaluated using both digital extended-cardiac torso (XCAT) simulation of lung cancer patients and MRI data from 4 real liver cancer patients. The accuracy of the estimated VC-MRI was quantitatively evaluated using volume-percent-difference (VPD), center-of-mass-shift (COMS), and target tracking errors. Effects of acquisition orientation, region-of-interest (ROI) selection, patient breathing pattern change, and noise on the estimation accuracy were also evaluated. Results: Image subtraction of ground-truth with estimated on-board VC-MRI shows fewer differences than image subtraction of ground-truth with prior image. Agreement between normalized profiles in the estimated and ground-truth VC-MRI was achieved with less than 6% error for both XCAT and patient data. Among all XCAT scenarios, the VPD between ground-truth and estimated lesion volumes was, on average, 8.43 ± 1.52% and the COMS was, on average, 0.93 ± 0.58 mm across all time steps for estimation based on the ROI region in the sagittal cine images. Matching to ROI in the sagittal view achieved better accuracy when there was substantial breathing pattern change. The technique was robust against

  7. A Technique for Generating Volumetric Cine-Magnetic Resonance Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Harris, Wendy [Medical Physics Graduate Program, Duke University, Durham, North Carolina (United States); Ren, Lei, E-mail: lei.ren@duke.edu [Medical Physics Graduate Program, Duke University, Durham, North Carolina (United States); Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina (United States); Cai, Jing [Medical Physics Graduate Program, Duke University, Durham, North Carolina (United States); Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina (United States); Zhang, You [Medical Physics Graduate Program, Duke University, Durham, North Carolina (United States); Chang, Zheng; Yin, Fang-Fang [Medical Physics Graduate Program, Duke University, Durham, North Carolina (United States); Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina (United States)

    2016-06-01

    Purpose: The purpose of this study was to develop a technique to generate on-board volumetric cine-magnetic resonance imaging (VC-MRI) using patient prior images, motion modeling, and on-board 2-dimensional cine MRI. Methods and Materials: One phase of a 4-dimensional MRI acquired during patient simulation is used as patient prior images. Three major respiratory deformation patterns of the patient are extracted from 4-dimensional MRI based on principal-component analysis. The on-board VC-MRI at any instant is considered as a deformation of the prior MRI. The deformation field is represented as a linear combination of the 3 major deformation patterns. The coefficients of the deformation patterns are solved by the data fidelity constraint using the acquired on-board single 2-dimensional cine MRI. The method was evaluated using both digital extended-cardiac torso (XCAT) simulation of lung cancer patients and MRI data from 4 real liver cancer patients. The accuracy of the estimated VC-MRI was quantitatively evaluated using volume-percent-difference (VPD), center-of-mass-shift (COMS), and target tracking errors. Effects of acquisition orientation, region-of-interest (ROI) selection, patient breathing pattern change, and noise on the estimation accuracy were also evaluated. Results: Image subtraction of ground-truth with estimated on-board VC-MRI shows fewer differences than image subtraction of ground-truth with prior image. Agreement between normalized profiles in the estimated and ground-truth VC-MRI was achieved with less than 6% error for both XCAT and patient data. Among all XCAT scenarios, the VPD between ground-truth and estimated lesion volumes was, on average, 8.43 ± 1.52% and the COMS was, on average, 0.93 ± 0.58 mm across all time steps for estimation based on the ROI region in the sagittal cine images. Matching to ROI in the sagittal view achieved better accuracy when there was substantial breathing pattern change. The technique was robust against
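    The central solve in the VC-MRI records above - the on-board deformation is a linear combination of the principal deformation patterns, with coefficients fixed by data fidelity against a single 2D cine slice - reduces to a small least-squares problem. A toy sketch with two patterns and entirely hypothetical values:

```python
# Least-squares fit of deformation-pattern coefficients (toy 2-pattern sketch).
B1 = [1.0, 0.0, 2.0, 1.0]          # principal patterns sampled on the cine slice
B2 = [0.0, 1.0, 1.0, 3.0]
c_true = (0.5, -0.25)              # "ground-truth" coefficients for the demo
measured = [c_true[0] * a + c_true[1] * b for a, b in zip(B1, B2)]

# Normal equations for min ||measured - c1*B1 - c2*B2||^2.
a11 = sum(x * x for x in B1)
a22 = sum(x * x for x in B2)
a12 = sum(x * y for x, y in zip(B1, B2))
r1 = sum(x * m for x, m in zip(B1, measured))
r2 = sum(y * m for y, m in zip(B2, measured))
det = a11 * a22 - a12 * a12
c1 = (r1 * a22 - r2 * a12) / det
c2 = (r2 * a11 - r1 * a12) / det

assert abs(c1 - 0.5) < 1e-9 and abs(c2 + 0.25) < 1e-9
```

    In the actual method the recovered coefficients are then applied to the full 3D deformation patterns to warp the prior MRI into the volumetric on-board image.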

  8. A sampler of useful computational tools for applied geometry, computer graphics, and image processing foundations for computer graphics, vision, and image processing

    CERN Document Server

    Cohen-Or, Daniel; Ju, Tao; Mitra, Niloy J; Shamir, Ariel; Sorkine-Hornung, Olga; Zhang, Hao (Richard)

    2015-01-01

    A Sampler of Useful Computational Tools for Applied Geometry, Computer Graphics, and Image Processing shows how to use a collection of mathematical techniques to solve important problems in applied mathematics and computer science areas. The book discusses fundamental tools in analytical geometry and linear algebra. It covers a wide range of topics, from matrix decomposition to curvature analysis and principal component analysis to dimensionality reduction.Written by a team of highly respected professors, the book can be used in a one-semester, intermediate-level course in computer science. It

  9. Computer-aided assessment of diagnostic images for epidemiological research

    Directory of Open Access Journals (Sweden)

    Gange Stephen J

    2009-11-01

    Full Text Available Abstract Background Diagnostic images are often assessed for clinical outcomes using subjective methods, which are limited by the skill of the reviewer. Computer-aided diagnosis (CAD algorithms that assist reviewers in their decisions concerning outcomes have been developed to increase sensitivity and specificity in the clinical setting. However, these systems have not been well utilized in research settings to improve the measurement of clinical endpoints. Reductions in bias through their use could have important implications for etiologic research. Methods Using the example of cortical cataract detection, we developed an algorithm for assisting a reviewer in evaluating digital images for the presence and severity of lesions. Available image processing and statistical methods that were easily implementable were used as the basis for the CAD algorithm. The performance of the system was compared to the subjective assessment of five reviewers using 60 simulated images. Cortical cataract severity scores from 0 to 16 were assigned to the images by the reviewers and the CAD system, with each image assessed twice to obtain a measure of variability. Image characteristics that affected reviewer bias were also assessed by systematically varying the appearance of the simulated images. Results The algorithm yielded severity scores with smaller bias on images where cataract severity was mild to moderate (approximately ≤ 6/16ths. On high severity images, the bias of the CAD system exceeded that of the reviewers. The variability of the CAD system was zero on repeated images but ranged from 0.48 to 1.22 for the reviewers. The direction and magnitude of the bias exhibited by the reviewers was a function of the number of cataract opacities, the shape and the contrast of the lesions in the simulated images. 
Conclusion CAD systems are feasible to implement with available software and can be valuable when medical images contain exposure or outcome information for

  10. Three-dimensional pseudo-random number generator for implementing in hybrid computer systems

    International Nuclear Information System (INIS)

    Ivanov, M.A.; Vasil'ev, N.P.; Voronin, A.V.; Kravtsov, M.Yu.; Maksutov, A.A.; Spiridonov, A.A.; Khudyakova, V.I.; Chugunkov, I.V.

    2012-01-01

    An algorithm for generating pseudo-random numbers oriented towards implementation on hybrid computer systems is considered. The proposed solution is characterized by a high degree of computational parallelism

  11. Next Generation Computer Resources: Reference Model for Project Support Environments (Version 2.0)

    National Research Council Canada - National Science Library

    Brown, Alan

    1993-01-01

    The objective of the Next Generation Computer Resources (NGCR) program is to restructure the Navy's approach to acquisition of standard computing resources to take better advantage of commercial advances and investments...

  12. Bioassay Phantoms Using Medical Images and Computer Aided Manufacturing

    International Nuclear Information System (INIS)

    Xu, X. Geroge

    2011-01-01

    A radiation bioassay program relies on a set of standard human phantoms to calibrate and assess radioactivity levels inside a human body for radiation protection and nuclear medicine imaging purposes. However, the methodologies in the development and application of anthropomorphic phantoms, both physical and computational, had mostly remained the same for the past 40 years. We herein propose a 3-year research project to develop medical image-based physical and computational phantoms specifically for radiation bioassay applications involving internally deposited radionuclides. The broad, long-term objective of this research was to set the foundation for a systematic paradigm shift away from the anatomically crude phantoms in existence today to realistic and ultimately individual-specific bioassay methodologies. This long-term objective is expected to impact all areas of radiation bioassay involving nuclear power plants, U.S. DOE laboratories, and nuclear medicine clinics.

  13. Monitoring of facial stress during space flight: Optical computer recognition combining discriminative and generative methods

    Science.gov (United States)

    Dinges, David F.; Venkataraman, Sundara; McGlinchey, Eleanor L.; Metaxas, Dimitris N.

    2007-02-01

    Astronauts are required to perform mission-critical tasks at a high level of functional capability throughout spaceflight. Stressors can compromise their ability to do so, making early objective detection of neurobehavioral problems in spaceflight a priority. Computer optical approaches offer a completely unobtrusive way to detect distress during critical operations in space flight. A methodology was developed and a study completed to determine whether optical computer recognition algorithms could be used to discriminate facial expressions during stress induced by performance demands. Stress recognition from a facial image sequence is a subject that has not received much attention, although it is an important problem for many applications beyond space flight (security, human-computer interaction, etc.). This paper proposes a comprehensive method to detect stress from facial image sequences by using a model-based tracker. The image sequences were captured as subjects underwent a battery of psychological tests under high- and low-stress conditions. A cue integration-based tracking system accurately captured the rigid and non-rigid parameters of different parts of the face (eyebrows, lips). The labeled sequences were used to train the recognition system, which consisted of generative (hidden Markov model) and discriminative (support vector machine) parts that yield results superior to using either approach individually. The current optical algorithm methods performed at a 68% accuracy rate in an experimental study of 60 healthy adults undergoing periods of high-stress versus low-stress performance demands. Accuracy and practical feasibility of the technique are being improved further with automatic multi-resolution selection for the discretization of the mask, and automated face detection and mask initialization algorithms.

  14. A New Chaos-Based Color Image Encryption Scheme with an Efficient Substitution Keystream Generation Strategy

    Directory of Open Access Journals (Sweden)

    Chong Fu

    2018-01-01

    Full Text Available This paper suggests a new chaos-based color image cipher with an efficient substitution keystream generation strategy. The hyperchaotic Lü system and logistic map are employed to generate the permutation and substitution keystream sequences for image data scrambling and mixing. In the permutation stage, the positions of colored subpixels in the input image are scrambled using a pixel-swapping mechanism, which avoids two main problems encountered when using the discretized version of area-preserving chaotic maps. In the substitution stage, we introduce an efficient keystream generation method that can extract three keystream elements from the current state of the iterative logistic map. Compared with the conventional method, the total number of iterations is reduced by a factor of 3. To ensure the robustness of the proposed scheme against chosen-plaintext attack, the current state of the logistic map is perturbed during each iteration and the disturbance value is determined by plain-pixel values. The mechanism of associating the keystream sequence with the plain image also helps accelerate the diffusion process and increase the degree of randomness of the keystream sequence. Experimental results demonstrate that the proposed scheme has a satisfactory level of security and outperforms the conventional schemes in terms of computational efficiency.
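    The "three keystream elements per iteration" idea in the record above can be sketched as follows. The extraction rule, map parameters, and perturbation scale here are hypothetical stand-ins, not the authors' exact construction: each logistic-map state yields three substitution bytes (one per R, G, B subpixel), and the state is perturbed by the plain pixel to resist chosen-plaintext attack.

```python
# Three substitution bytes per chaotic iteration, with plaintext-dependent
# perturbation (toy sketch; extraction rule and constants are assumptions).

def keystream_rgb(x, r, pixels):
    """Yield one (kR, kG, kB) byte triple per iteration, one per plain pixel."""
    out = []
    for p in pixels:
        x = r * x * (1.0 - x)               # one logistic-map iteration
        n = int(x * 2**24)                  # 24 bits of state -> 3 bytes
        out.append(((n >> 16) & 0xFF, (n >> 8) & 0xFF, n & 0xFF))
        # Perturb the state with the plain pixel (chosen-plaintext resistance).
        x = (x + sum(p) / (3 * 255.0) * 1e-3) % 1.0
        x = min(max(x, 1e-6), 1 - 1e-6)     # keep the orbit inside (0, 1)
    return out

pixels = [(10, 20, 30), (200, 100, 50)]     # toy RGB plain pixels
ks = keystream_rgb(0.271828, 3.99, pixels)
cipher = [tuple(c ^ k for c, k in zip(p, t)) for p, t in zip(pixels, ks)]
```

    Extracting three bytes from one state is what cuts the iteration count by a factor of 3 relative to iterating the map once per subpixel.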

  15. Novel method to calculate pulmonary compliance images in rodents from computed tomography acquired at constant pressures

    International Nuclear Information System (INIS)

    Guerrero, Thomas; Castillo, Richard; Sanders, Kevin; Price, Roger; Komaki, Ritsuko; Cody, Dianna

    2006-01-01

    Our goal was to develop a method for generating high-resolution three-dimensional pulmonary compliance images in rodents from computed tomography (CT) images acquired at a series of constant pressures in ventilated animals. One rat and one mouse were used to demonstrate this technique. A pre-clinical GE flat panel CT scanner (maximum 31 line-pairs cm⁻¹ resolution) was utilized for image acquisition. The thorax of each animal was imaged with breath-holds at 2, 6, 10, 14 and 18 cm H₂O pressure in triplicate. A deformable image registration algorithm was applied to each pair of CT images to map corresponding tissue elements. Pulmonary compliance was calculated on a voxel-by-voxel basis using adjacent pairs of CT images. Triplicate imaging was used to estimate the measurement error of this technique. The 3D pulmonary compliance images revealed regional heterogeneity of compliance. The maximum total lung compliance measured 0.080 (±0.007) ml air per cm H₂O per ml of lung and 0.039 (±0.004) ml air per cm H₂O per ml of lung for the rat and mouse, respectively. In this study, we have demonstrated a unique method of quantifying regional lung compliance from 4 to 16 cm H₂O pressure with sub-millimetre spatial resolution in rodents
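    The voxel-by-voxel calculation in the record above can be sketched as follows (hypothetical air-content values): once deformable registration has mapped each voxel between adjacent pressure steps, compliance is simply the change in voxel air content per unit pressure change.

```python
# Voxel-wise pulmonary compliance between two registered pressure steps
# (toy sketch; three voxels with assumed air contents).
P1, P2 = 6.0, 10.0                          # airway pressures, cm H2O
air_p1 = [0.20, 0.35, 0.50]                 # ml air per voxel at P1
air_p2 = [0.28, 0.43, 0.50]                 # same (registered) voxels at P2

compliance = [(a2 - a1) / (P2 - P1) for a1, a2 in zip(air_p1, air_p2)]
# e.g. voxel 0: (0.28 - 0.20) ml / 4 cm H2O = 0.02 ml air per cm H2O
```

    The third voxel illustrates the regional heterogeneity the authors report: a voxel whose air content does not change between pressure steps has zero local compliance.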

  16. Ratsnake: A Versatile Image Annotation Tool with Application to Computer-Aided Diagnosis

    Directory of Open Access Journals (Sweden)

    D. K. Iakovidis

    2014-01-01

    Full Text Available Image segmentation and annotation are key components of image-based medical computer-aided diagnosis (CAD systems. In this paper we present Ratsnake, a publicly available generic image annotation tool providing annotation efficiency, semantic awareness, versatility, and extensibility, features that can be exploited to transform it into an effective CAD system. In order to demonstrate this unique capability, we present its novel application for the evaluation and quantification of salient objects and structures of interest in kidney biopsy images. Accurate annotation identifying and quantifying such structures in microscopy images can provide an estimation of pathogenesis in obstructive nephropathy, which is a rather common disease with severe implications in children and infants. However, a tool for detecting and quantifying the disease is not yet available. A machine learning-based approach, which utilizes prior domain knowledge and textural image features, is considered for the generation of an image force field customizing the presented tool for automatic evaluation of kidney biopsy images. The experimental evaluation of the proposed application of Ratsnake demonstrates its efficiency and effectiveness and promises its wide applicability across a variety of medical imaging domains.

  17. Computer-based quantitative computed tomography image analysis in idiopathic pulmonary fibrosis: A mini review.

    Science.gov (United States)

    Ohkubo, Hirotsugu; Nakagawa, Hiroaki; Niimi, Akio

    2018-01-01

    Idiopathic pulmonary fibrosis (IPF) is the most common type of progressive idiopathic interstitial pneumonia in adults. Many computer-based image analysis methods of chest computed tomography (CT) used in patients with IPF include the mean CT value of the whole lungs, density histogram analysis, density mask technique, and texture classification methods. Most of these methods offer good assessment of pulmonary functions, disease progression, and mortality. Each method has merits that can be used in clinical practice. One of the texture classification methods is reported to be superior to visual CT scoring by radiologist for correlation with pulmonary function and prediction of mortality. In this mini review, we summarize the current literature on computer-based CT image analysis of IPF and discuss its limitations and several future directions. Copyright © 2017 The Japanese Respiratory Society. Published by Elsevier B.V. All rights reserved.

  18. Cardiac Computed Tomography as an Imaging Modality in Coronary Anomalies.

    Science.gov (United States)

    Karliova, Irem; Fries, Peter; Schmidt, Jörg; Schneider, Ulrich; Shalabi, Ahmad; Schäfers, Hans-Joachim

    2018-01-01

    Coronary artery fistulae and coronary aneurysms are rare anomalies. When they become symptomatic, they require precise anatomic information to allow for planning of the therapeutic procedure. We report a case in which both fistulae and aneurysm were present. The required information could only be obtained by electrocardiogram-gated computed tomography with reformation. This imaging modality should be considered in every case of fistula or coronary aneurysm. Copyright © 2018 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  19. Computed tomography imaging for superior semicircular canal dehiscence syndrome

    International Nuclear Information System (INIS)

    Dobeli, Karen

    2006-01-01

    Superior semicircular canal dehiscence is a newly described syndrome of sound and/or pressure induced vertigo. Computed tomography (CT) imaging plays an important role in confirmation of a defect in the bone overlying the canal. A high resolution CT technique utilising 0.5 mm or thinner slices and multi-planar reconstructions parallel to the superior semicircular canal is required. Placement of a histogram over a suspected defect can assist CT diagnosis

  20. Comparison of image features calculated in different dimensions for computer-aided diagnosis of lung nodules

    Science.gov (United States)

    Xu, Ye; Lee, Michael C.; Boroczky, Lilla; Cann, Aaron D.; Borczuk, Alain C.; Kawut, Steven M.; Powell, Charles A.

    2009-02-01

    Features calculated from different dimensions of images capture quantitative information of the lung nodules through one or multiple image slices. Previously published computer-aided diagnosis (CADx) systems have used either two-dimensional (2D) or three-dimensional (3D) features, though there has been little systematic analysis of the relevance of the different dimensions and of the impact of combining different dimensions. The aim of this study is to determine the importance of combining features calculated in different dimensions. We have performed CADx experiments on 125 pulmonary nodules imaged using multi-detector row CT (MDCT). The CADx system computed 192 2D, 2.5D, and 3D image features of the lesions. Leave-one-out experiments were performed using five different combinations of features from different dimensions: 2D, 3D, 2.5D, 2D+3D, and 2D+3D+2.5D. The experiments were performed ten times for each group. Accuracy, sensitivity and specificity were used to evaluate the performance. Wilcoxon signed-rank tests were applied to compare the classification results from these five different combinations of features. Our results showed that 3D image features generate the best result compared with other combinations of features. This suggests one approach to potentially reducing the dimensionality of the CADx data space and the computational complexity of the system while maintaining diagnostic accuracy.

  1. Efficient scatter model for simulation of ultrasound images from computed tomography data

    Science.gov (United States)

    D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.

    2015-12-01

    Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Given the high value of specialized, low-cost training for healthcare professionals, there is growing interest in this technology and in the development of high-fidelity systems that simulate the acquisition of echographic images. The objective is to create an efficient and reproducible simulator that can run on either notebooks or desktops using low-cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. The simulator is based on ray-casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSF) tailored for this purpose. Results: The computation of scattering maps was revised for improved performance, allowing a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe quality and performance metrics to validate these results, with performance of up to 55 fps achieved. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state of the art, showing negligible differences in its distribution.
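The scatter model described above (multiplicative noise convolved with a PSF) can be sketched as follows; the Rayleigh noise distribution, Gaussian PSF shape and all parameter values are illustrative assumptions, not the paper's published model:

```python
import numpy as np
from scipy.signal import fftconvolve

def simulate_scatter(echo, psf_sigma=1.5, seed=0):
    """Approximate ultrasound speckle: multiplicative noise convolved with a PSF.

    `psf_sigma` and the Rayleigh noise model are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    # Multiplicative speckle field over the echogenicity map.
    noise = rng.rayleigh(scale=1.0, size=echo.shape)
    # A separable Gaussian stands in for the transducer point spread function.
    ax = np.arange(-4, 5)
    g = np.exp(-ax**2 / (2 * psf_sigma**2))
    psf = np.outer(g, g)
    psf /= psf.sum()
    return fftconvolve(echo * noise, psf, mode="same")

clean = np.ones((64, 64))          # uniform tissue reflectivity
speckled = simulate_scatter(clean)
print(speckled.shape)              # (64, 64)
```

FFT-based convolution keeps the per-frame cost low, which matters for the real-time constraint the authors emphasise.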

  2. Advances in computed radiography systems and their physical imaging characteristics

    International Nuclear Information System (INIS)

    Cowen, A.R.; Davies, A.G.; Kengyelics, S.M.

    2007-01-01

    Radiological imaging is progressing towards an all-digital future across the spectrum of medical imaging techniques. Computed radiography (CR) has provided a ready pathway from screen-film to digital radiography and a convenient entry point to PACS. This review briefly revisits the principles of modern CR systems and their physical imaging characteristics. Wide dynamic range and digital image enhancement are well-established benefits of CR, which lend themselves to improved image presentation and reduced rates of repeat exposures. However, in its original form CR offered limited scope for reducing the radiation dose per radiographic exposure compared with screen-film. Recent innovations in CR, including dual-sided image readout and channelled storage phosphors, have eased these concerns. For example, introduction of these technologies has improved detective quantum efficiency (DQE) by approximately 50 and 100%, respectively, compared with standard CR. As a result, CR currently affords greater scope for reducing patient dose, and provides a more substantive challenge to the new solid-state, flat-panel, digital radiography detectors.

  3. Computed tomography in the imaging of colonic diverticulitis

    International Nuclear Information System (INIS)

    Buckley, O.; Geoghegan, T.; O'Riordain, D.S.; Lyburn, I.D.; Torreggiani, W.C.

    2004-01-01

    Colonic diverticulitis occurs when diverticula within the colon become infected or inflamed. It is becoming an increasingly common cause of hospital admission, particularly in western society, where it is linked to a low-fibre diet. Symptoms of diverticulitis include abdominal pain, diarrhoea and pyrexia; however, symptoms are often non-specific and the clinical diagnosis may be difficult. In addition, elderly patients and those taking corticosteroids may have limited findings on physical examination, even in the presence of severe diverticulitis. A high index of suspicion is required in such patients in order to avoid a significant delay in arriving at the correct diagnosis. Imaging plays an important role in establishing an early and correct diagnosis. In the past, contrast enema studies were the principal imaging test used to make the diagnosis. However, such studies lack sensitivity and have limited success in identifying abscesses that may require drainage. Conversely, computed tomography (CT) is both sensitive and specific in making a diagnosis of diverticulitis. In addition, it is the imaging technique of choice in depicting complications such as perforation, abscess formation and fistulae. CT-guided drainage of diverticular abscesses helps to reduce sepsis and to permit a one-stage, rather than two-stage, surgical operation. The purpose of this review article is to discuss the role of CT in the imaging of diverticulitis, describe the CT imaging features and complications of this disease, and review the impact and rationale of CT imaging and intervention in the overall management of patients with diverticulitis.

  4. Computed tomography perfusion imaging denoising using Gaussian process regression

    International Nuclear Information System (INIS)

    Zhu Fan; Gonzalez, David Rodriguez; Atkinson, Malcolm; Carpenter, Trevor; Wardlaw, Joanna

    2012-01-01

    Brain perfusion-weighted images acquired using dynamic contrast studies have an important clinical role in acute stroke diagnosis and treatment decisions. However, computed tomography (CT) images suffer from low contrast-to-noise ratios (CNR) as a consequence of limiting the patient's radiation exposure, so methods for improving the CNR are valuable. The majority of existing approaches for denoising CT images are optimized for 3D (spatial) information, including spatial decimation (spatially weighted mean filters) and techniques based on wavelet and curvelet transforms. However, perfusion imaging data are 4D, as they also contain temporal information. Our approach uses Gaussian process regression (GPR), which takes advantage of the temporal information, to reduce the noise level. Over the entire image, GPR gains a 99% CNR improvement over the raw images and also improves the quality of haemodynamic maps, allowing better identification of edges and detailed information. At the level of individual voxels, GPR provides a stable baseline, helps us to identify key parameters from tissue time-concentration curves and reduces the oscillations in the curve. GPR is superior to the comparable techniques used in this study. (note)
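A minimal sketch of GPR-based temporal denoising of a single voxel's time-concentration curve, using scikit-learn; the synthetic bolus curve, kernel choice and noise level are assumptions for illustration, not the paper's configuration:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical tissue time-concentration curve: a smooth bolus shape plus noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 60, 120)[:, None]                 # seconds
signal = (t.ravel() / 10) ** 2 * np.exp(-t.ravel() / 10)
noisy = signal + rng.normal(0.0, 0.2, signal.shape)  # simulated CT noise

# The RBF kernel models smooth temporal dynamics; WhiteKernel absorbs the noise.
kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=0.04)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, noisy)
denoised = gpr.predict(t)
```

With the fitted kernel, the residual of the denoised curve against the clean signal is substantially smaller than that of the noisy input, giving the stable baseline the abstract describes.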

  5. The Next Generation of the Montage Image Mosaic Engine

    Science.gov (United States)

    Berriman, G. Bruce; Good, John; Rusholme, Ben; Robitaille, Thomas

    2016-01-01

    We have released a major upgrade of the Montage image mosaic engine (http://montage.ipac.caltech.edu), as part of a program to develop the next generation of the engine in response to rapid changes in the data-processing landscape in astronomy, which is generating ever larger data sets in ever more complex formats. The new release (version 4) contains modules dedicated to creating and managing mosaics of data stored as multi-dimensional arrays ("data cubes"). The new release inherits the architectural benefits of portability and scalability of the original design. The code is publicly available on GitHub and the Montage web page. The release includes a command-line tool that supports visualization of large images, and the beta release of a Python interface to the visualization tool. We will provide examples of how to use these features. We are generating a mosaic of the Galactic Arecibo L-band Feed Array HI (GALFA-HI) Survey maps of neutral hydrogen in and around our Milky Way Galaxy, to assess performance at scale and to develop tools and methodologies that will enable scientists inexpert in cloud processing to exploit cloud platforms for data processing and product generation at scale. Future releases will include support for an R-tree based mechanism for fast discovery of and access to large data sets, and on-demand access to calibrated SDSS DR9 data that exploits it; support for the Hierarchical Equal Area isoLatitude Pixelization (HEALPix) scheme, now standard for projects investigating the cosmic background radiation (Gorski et al. 2005); support for the Tessellated Octahedral Adaptive Subdivision Transform (TOAST), the sky-partitioning scheme used by the WorldWide Telescope (WWT); and a public applications programming interface (API) in C that can be called from other languages, especially Python.

  6. Imaging of Anal Fistulas: Comparison of Computed Tomographic Fistulography and Magnetic Resonance Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Changhu [Shandong Medical Imaging Research Institute, Shandong University, Jinan 250021 (China); Lu, Yongchao [Traditional Chinese Medicine Department, Provincial Hospital Affiliated to Shandong University, Jinan 250021 (China); Zhao, Bin [Shandong Medical Imaging Research Institute, Shandong University, Jinan 250021 (China); Du, Yinglin [Shandong Provincial Center for Disease Control and Prevention, Public Health Institute, Jinan 250014 (China); Wang, Cuiyan [Shandong Medical Imaging Research Institute, Shandong University, Jinan 250021 (China); Jiang, Wanli [Department of Radiology, Taishan Medical University, Taian 271000 (China)

    2014-07-01

    The primary importance of magnetic resonance (MR) imaging in evaluating anal fistulas lies in its ability to demonstrate hidden areas of sepsis and secondary extensions in patients with fistula in ano. MR imaging is relatively expensive, and in many healthcare systems worldwide access to MR imaging remains restricted. Until recently, computed tomography (CT) has played a limited role in imaging fistula in ano, largely owing to its poor soft-tissue resolution. In this article, the imaging features of CT and MRI are compared to demonstrate the relative accuracy of CT fistulography for the preoperative assessment of fistula in ano. CT fistulography and MR imaging have their own advantages for preoperative evaluation of perianal fistula, and can be applied to complement one another when necessary.

  7. Automated parasite faecal egg counting using fluorescence labelling, smartphone image capture and computational image analysis.

    Science.gov (United States)

    Slusarewicz, Paul; Pagano, Stefanie; Mills, Christopher; Popa, Gabriel; Chow, K Martin; Mendenhall, Michael; Rodgers, David W; Nielsen, Martin K

    2016-07-01

    Intestinal parasites are a concern in veterinary medicine worldwide and for human health in the developing world. Infections are identified by microscopic visualisation of parasite eggs in faeces, which is time-consuming, requires technical expertise and is impractical for use on-site. For these reasons, recommendations for parasite surveillance are not widely adopted and parasite control is based on administration of rote prophylactic treatments with anthelmintic drugs. This approach is known to promote anthelmintic resistance, so there is a pronounced need for a convenient egg counting assay to promote good clinical practice. Using a fluorescent chitin-binding protein, we show that this structural carbohydrate is present and accessible in shells of ova of strongyle, ascarid, trichurid and coccidian parasites. Furthermore, we show that a cellular smartphone can be used as an inexpensive device to image fluorescent eggs and, by harnessing the computational power of the phone, to perform image analysis to count the eggs. Strongyle egg counts generated by the smartphone system had a significant linear correlation with manual McMaster counts (R(2)=0.98), but with a significantly lower coefficient of variation (P=0.0177). Furthermore, the system was capable of differentiating equine strongyle and ascarid eggs similar to the McMaster method, but with significantly lower coefficients of variation (P<0.0001). This demonstrates the feasibility of a simple, automated on-site test to detect and/or enumerate parasite eggs in mammalian faeces without the need for a laboratory microscope, and highlights the potential of smartphones as relatively sophisticated, inexpensive and portable medical diagnostic devices. Copyright © 2016 Australian Society for Parasitology. Published by Elsevier Ltd. All rights reserved.
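The counting step of such a pipeline can be approximated by thresholding and connected-component labelling; the threshold, minimum blob size and synthetic frame below are illustrative stand-ins, not the published smartphone algorithm:

```python
import numpy as np
from scipy import ndimage

def count_eggs(image, threshold, min_area=20):
    """Count bright fluorescent blobs above `threshold`.

    A simplified sketch of an egg-counting step; the threshold and minimum
    blob area are illustrative, not the paper's values.
    """
    mask = image > threshold
    labels, n = ndimage.label(mask)                 # connected components
    # Reject specks smaller than a plausible egg cross-section.
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return int(np.count_nonzero(areas >= min_area))

# Synthetic frame: two 6x6 "eggs" and one single-pixel speck of debris.
frame = np.zeros((50, 50))
frame[5:11, 5:11] = frame[30:36, 20:26] = 1.0
frame[40, 40] = 1.0
print(count_eggs(frame, threshold=0.5))  # 2
```

The area filter is what lets such a counter ignore fluorescent debris while still enumerating true ova.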

  8. Pictorial review: Electron beam computed tomography and multislice spiral computed tomography for cardiac imaging

    International Nuclear Information System (INIS)

    Lembcke, Alexander; Hein, Patrick A.; Dohmen, Pascal M.; Klessen, Christian; Wiese, Till H.; Hoffmann, Udo; Hamm, Bernd; Enzweiler, Christian N.H.

    2006-01-01

    Electron beam computed tomography (EBCT) revolutionized cardiac imaging by combining a constant high temporal resolution with prospective ECG triggering. For years, EBCT was the primary technique for some non-invasive diagnostic cardiac procedures such as calcium scoring and non-invasive angiography of the coronary arteries. Multislice spiral computed tomography (MSCT) on the other hand significantly advanced cardiac imaging through high volume coverage, improved spatial resolution and retrospective ECG gating. This pictorial review will illustrate the basic differences between both modalities with special emphasis to their image quality. Several experimental and clinical examples demonstrate the strengths and limitations of both imaging modalities in an intraindividual comparison for a broad range of diagnostic applications such as coronary artery calcium scoring, coronary angiography including stent visualization as well as functional assessment of the cardiac ventricles and valves. In general, our examples indicate that EBCT suffers from a number of shortcomings such as limited spatial resolution and a low contrast-to-noise ratio. Thus, EBCT should now only be used in selected cases where a constant high temporal resolution is a crucial issue, such as dynamic (cine) imaging. Due to isotropic submillimeter spatial resolution and retrospective data selection MSCT seems to be the non-invasive method of choice for cardiac imaging in general, and for assessment of the coronary arteries in particular. However, technical developments are still needed to further improve the temporal resolution in MSCT and to reduce the substantial radiation exposure

  9. Computational proximity excursions in the topology of digital images

    CERN Document Server

    Peters, James F

    2016-01-01

    This book introduces computational proximity (CP) as an algorithmic approach to finding nonempty sets of points that are either close to each other or far apart. Typically in computational proximity, the book starts with some form of proximity space (a topological space equipped with a proximity relation) that has an inherent geometry. In CP, two types of near sets are considered, namely, spatially near sets and descriptively near sets. It is shown that connectedness, boundedness, mesh nerves, convexity, shapes and shape theory are principal topics in the study of nearness and separation of physical as well as abstract sets. CP has a hefty visual content. Applications of CP in computer vision, multimedia, brain activity, biology, social networks, and cosmology are included. The book has been derived from the lectures of the author in a graduate course on the topology of digital images taught over the past several years. Many of the students have provided important insights and valuable suggestions. The topics in ...

  10. Computational image analysis of Suspension Plasma Sprayed YSZ coatings

    Directory of Open Access Journals (Sweden)

    Michalak Monika

    2017-01-01

    Full Text Available The paper presents computational studies of microstructure- and topography-related features of suspension plasma sprayed (SPS) coatings of yttria-stabilized zirconia (YSZ). The study mainly covers porosity assessment, provided by ImageJ software analysis. The influence of boundary conditions, defined by (i) circularity and (ii) size limits, on the computed values of porosity is also investigated. Additionally, a digital topography evaluation is performed: a confocal laser scanning microscope (CLSM) and a scanning electron microscope (SEM) operating in Shape from Shading (SFS) mode measure the surface roughness of the deposited coatings. Computed values of porosity and roughness are related to the variables of the spraying process, which influence the morphology of the coatings and determine the possible fields of their applications.
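A rough sketch of an ImageJ-style porosity assessment with size and circularity cut-offs; the circularity formula (4πA/P²), the crude perimeter estimate and the limit values are illustrative simplifications, not the paper's settings:

```python
import numpy as np
from scipy import ndimage

def porosity(mask, min_size=5, min_circularity=0.0):
    """Area fraction of pores in a binary mask, after ImageJ-style filtering.

    Pores smaller than `min_size` pixels or less circular than
    `min_circularity` (4*pi*A/P^2) are excluded, mimicking the boundary
    conditions discussed above. All cut-off values here are illustrative.
    """
    labels, n = ndimage.label(mask)
    kept = 0
    for i in range(1, n + 1):
        blob = labels == i
        area = int(blob.sum())
        if area < min_size:
            continue
        # Crude perimeter estimate: pore pixels bordering the background.
        eroded = ndimage.binary_erosion(blob)
        perimeter = int((blob & ~eroded).sum())
        circ = 4 * np.pi * area / perimeter**2 if perimeter else 1.0
        if circ >= min_circularity:
            kept += area
    return kept / mask.size

pores = np.zeros((40, 40), dtype=bool)
pores[10:20, 10:20] = True        # one 100-pixel square pore
print(porosity(pores))            # 0.0625
```

Tightening either cut-off lowers the reported porosity, which is exactly the sensitivity to boundary conditions the study investigates.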

  11. Magnetic resonance imaging and computed radiography in Achilles tendon rupture

    International Nuclear Information System (INIS)

    Korenaga, Tateo; Hachiya, Junichi; Miyasaka, Yasuo

    1988-01-01

    Magnetic resonance imaging (MRI) and computed radiography (CR) were performed in 15 patients with complete Achilles tendon rupture who were treated conservatively without surgery. MRI was obtained using a Toshiba MRT 50A superconductive machine operating at 0.5 Tesla. CR was performed with a CR-101, Fuji Medical System. In fresh cases, ruptured tendons showed intermediate signal intensity on T1-weighted images and high intensity on T2-weighted images. Thickening of the tendon was observed in all cases except in the very acute stage. The configuration of thickened tendons tended to be dumbbell-shaped in the subacute stage and fusiform in the chronic stage of more than six months after the initial trauma. In cases which showed high signal intensity at the ruptured area on both T1- and T2-weighted images, migration of fat into the spaces between the ruptured tendons was considered to be the major source of the increased signal intensity. Computed radiography showed thickening of the tendon, blurring of the anterior margin of the tendon, and decreased translucency of the pre-Achilles fat pad. However, MRI demonstrated the details of ruptured tendons better than CR, and is thought to be a useful way of following up the healing process of a ruptured tendon, facilitating more reasonable judgement of the time for removing plaster casts and starting exercise. (author)

  12. [Computational medical imaging (radiomics) and potential for immuno-oncology].

    Science.gov (United States)

    Sun, R; Limkin, E J; Dercle, L; Reuzé, S; Zacharaki, E I; Chargari, C; Schernberg, A; Dirand, A S; Alexis, A; Paragios, N; Deutsch, É; Ferté, C; Robert, C

    2017-10-01

    The arrival of immunotherapy has profoundly changed the management of multiple cancers, obtaining unexpected tumour responses. However, until now, the majority of patients do not respond to these new treatments. The identification of biomarkers that distinguish responding patients early is a major challenge. Computational medical imaging (also known as radiomics) is a promising and rapidly growing discipline. This new approach consists of the analysis of high-dimensional data extracted from medical imaging to further describe tumour phenotypes. The approach has the advantages of being non-invasive, capable of evaluating the tumour and its microenvironment in their entirety (thus characterising spatial heterogeneity), and easily repeatable over time. The end goal of radiomics is to establish imaging biomarkers as decision-support tools for clinical practice and to facilitate a better understanding of cancer biology, allowing assessment of changes throughout the evolution of the disease and the therapeutic sequence. This review presents the process of computational imaging analysis and its potential in immuno-oncology. Copyright © 2017 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.

  13. Personal Computer (PC) based image processing applied to fluid mechanics

    Science.gov (United States)

    Cho, Y.-C.; Mclachlan, B. G.

    1987-01-01

    A PC based image processing system was employed to determine the instantaneous velocity field of a two-dimensional unsteady flow. The flow was visualized using a suspension of seeding particles in water, and a laser sheet for illumination. With a finite time exposure, the particle motion was captured on a photograph as a pattern of streaks. The streak pattern was digitized and processed using various imaging operations, including contrast manipulation, noise cleaning, filtering, statistical differencing, and thresholding. Information concerning the velocity was extracted from the enhanced image by measuring the length and orientation of the individual streaks. The fluid velocities deduced from the randomly distributed particle streaks were interpolated to obtain velocities at uniform grid points. For the interpolation a simple convolution technique with an adaptive Gaussian window was used. The results are compared with a numerical prediction by a Navier-Stokes computation.
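The final interpolation step (a convolution with an adaptive Gaussian window) might look like the following; the window-widening rule and all parameter values are guesses for illustration, not the authors' published scheme:

```python
import numpy as np

def gaussian_interp(x, y, v, grid_x, grid_y, sigma0=0.1, k_min=4):
    """Interpolate scattered velocities onto a uniform grid with a Gaussian window.

    The 'adaptive' part here widens sigma until at least `k_min` samples carry
    non-negligible weight -- an assumed scheme, not the paper's algorithm.
    """
    out = np.zeros((len(grid_y), len(grid_x)))
    for j, gy in enumerate(grid_y):
        for i, gx in enumerate(grid_x):
            sigma = sigma0
            while True:
                w = np.exp(-((x - gx) ** 2 + (y - gy) ** 2) / (2 * sigma**2))
                if np.count_nonzero(w > 1e-3) >= min(k_min, len(x)):
                    break
                sigma *= 1.5  # adapt: enlarge the window in sparse regions
            out[j, i] = np.sum(w * v) / np.sum(w)  # weighted mean of nearby streaks
    return out

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1, 200), rng.uniform(0, 1, 200)  # random streak centres
v = np.sin(2 * np.pi * x)                              # synthetic velocity component
grid = np.linspace(0, 1, 5)
field = gaussian_interp(x, y, v, grid, grid)
print(field.shape)  # (5, 5)
```

Because each grid value is a weighted mean of nearby samples, the output stays within the range of the measured velocities, a desirable property for flow fields derived from noisy streak data.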

  14. Diagnosing acute pulmonary embolism with computed tomography: imaging update.

    Science.gov (United States)

    Devaraj, Anand; Sayer, Charlie; Sheard, Sarah; Grubnic, Sisa; Nair, Arjun; Vlahos, Ioannis

    2015-05-01

    Acute pulmonary embolism is recognized as a difficult diagnosis to make. It is potentially fatal if undiagnosed, yet increasing referral rates for imaging and falling diagnostic yields have attracted much attention. For patients in the emergency department with suspected pulmonary embolism, computed tomography pulmonary angiography (CTPA) is the test of choice for most physicians, and hence radiology has a key role to play in the patient pathway. This review outlines key aspects of the recent literature regarding the following issues: patient selection for imaging, the optimization of CTPA image quality and dose, preferred pathways for pregnant patients and other subgroups, and the role of CTPA beyond diagnosis. The roles of newer techniques such as dual-energy CT and single-photon emission CT will also be discussed.

  15. Analysis of Craniofacial Images using Computational Atlases and Deformation Fields

    DEFF Research Database (Denmark)

    Ólafsdóttir, Hildur

    2008-01-01

    purposes. The basis for most of the applications is non-rigid image registration. This approach brings one image into the coordinate system of another resulting in a deformation field describing the anatomical correspondence between the two images. A computational atlas representing the average anatomy...... of asymmetry. The analyses are applied to the study of three different craniofacial anomalies. The craniofacial applications include studies of Crouzon syndrome (in mice), unicoronal synostosis plagiocephaly and deformational plagiocephaly. Using the proposed methods, the thesis reveals novel findings about...... the craniofacial morphology and asymmetry of Crouzon mice. Moreover, a method to plan and evaluate treatment of children with deformational plagiocephaly, based on asymmetry assessment, is established. Finally, asymmetry in children with unicoronal synostosis is automatically assessed, confirming previous results...

  16. Computer skills for the next generation of healthcare executives.

    Science.gov (United States)

    Côté, Murray J; Van Enyde, Donald F; DelliFraine, Jami L; Tucker, Stephen L

    2005-01-01

    Students beginning a career in healthcare administration must possess an array of professional and management skills in addition to a strong fundamental understanding of the field of healthcare administration. Proficient computer skills are a prime example of an essential management tool for healthcare administrators. However, it is unclear which computer skills are absolutely necessary for healthcare administrators and the extent of congruency between the computer skills possessed by new graduates and the needs of senior healthcare professionals. Our objectives in this research are to assess which computer skills are the most important to senior healthcare executives and recent healthcare administration graduates and examine the level of agreement between the two groups. Based on a survey of senior healthcare executives and graduate healthcare administration students, we identify a comprehensive and pragmatic array of computer skills and categorize them into four groups, according to their importance, for making recent health administration graduates valuable in the healthcare administration workplace. Traditional parametric hypothesis tests are used to assess congruency between responses of senior executives and of recent healthcare administration graduates. For each skill, responses of the two groups are averaged to create an overall ranking of the computer skills. Not surprisingly, both groups agreed on the importance of computer skills for recent healthcare administration graduates. In particular, computer skills such as word processing, graphics and presentation, using operating systems, creating and editing databases, spreadsheet analysis, using imported data, e-mail, using electronic bulletin boards, and downloading information were among the highest ranked computer skills necessary for recent graduates. However, there were statistically significant differences in perceptions between senior executives and healthcare administration students as to the extent

  17. The Generation and Maintenance of Visual Mental Images: Evidence from Image Type and Aging

    Science.gov (United States)

    De Beni, Rossana; Pazzaglia, Francesca; Gardini, Simona

    2007-01-01

    Imagery is a multi-componential process involving different mental operations. This paper addresses whether separate processes underlie the generation, maintenance and transformation of mental images or whether these cognitive processes rely on the same mental functions. We also examine the influence of age on these mental operations for…

  18. A human-assisted computer generated LA-grammar for simple ...

    African Journals Online (AJOL)

    Southern African Linguistics and Applied Language Studies ... of computer programs to generate Left Associative Grammars (LAGs) for natural languages is described. The generation proceeds from examples of correct sentences and needs ...

  19. Three dimensional reconstruction of computed tomographic images by computer graphics method

    International Nuclear Information System (INIS)

    Kashiwagi, Toru; Kimura, Kazufumi.

    1986-01-01

    A three-dimensional computer reconstruction system for CT images has been developed in a commonly used radionuclide data processing system using computer graphics techniques. The three-dimensional model was constructed from organ surface information in CT images (slice thickness: 5 or 10 mm). Surface contours of the organs were extracted manually from a set of parallel transverse CT slices in serial order and stored in computer memory. Interpolation was made between the extracted contours using cubic spline functions, and three-dimensional models were then reconstructed. The three-dimensional images were displayed as wire-frame and/or solid models on a color CRT. Solid model images were obtained as follows. The organ surface constructed from the contours was divided into many triangular patches. The intensity of light on each patch was calculated from the direction of the incident light, the eye position and the normal to the triangular patch. Firstly, the system was applied to a liver phantom; the reconstructed images coincided with the actual object. The system has also been applied to various human organs such as the brain, lung and liver. The anatomical organ surface could be viewed realistically from any direction. The images make the location and configuration of organs in vivo easier to understand than the original CT images. Furthermore, the spatial relationships among organs and/or lesions are clearly shown by superimposing wire-frame and/or differently colored solid models. It is therefore expected that this system will be clinically useful for evaluating patho-morphological changes in a broad perspective. (author)
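The per-patch shading step described above amounts to a diffuse (Lambertian) model: brightness follows the angle between the patch normal and the incident light. The sketch below assumes a simple clamped dot product, since the abstract does not give the system's exact formula:

```python
import numpy as np

def patch_intensity(v0, v1, v2, light_dir):
    """Diffuse (Lambertian) intensity for one triangular surface patch.

    An assumed simple model: intensity = max(0, n . l) for unit normal n
    and unit light direction l; the original system may also weight by
    eye position, which is omitted here.
    """
    n = np.cross(v1 - v0, v2 - v0)            # patch normal from two edge vectors
    n = n / np.linalg.norm(n)
    l = light_dir / np.linalg.norm(light_dir)
    return max(0.0, float(np.dot(n, l)))      # clamp back-facing patches to zero

# A triangle in the z = 0 plane, lit from directly above.
tri = [np.array([0.0, 0, 0]), np.array([1.0, 0, 0]), np.array([0.0, 1, 0])]
print(patch_intensity(*tri, light_dir=np.array([0.0, 0, 1])))  # 1.0
```

Repeating this over every triangular patch of the interpolated organ surface yields the solid-model rendering the authors describe.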

  20. High resolution 3D imaging of synchrotron generated microbeams

    Energy Technology Data Exchange (ETDEWEB)

    Gagliardi, Frank M., E-mail: frank.gagliardi@wbrc.org.au [Alfred Health Radiation Oncology, The Alfred, Melbourne, Victoria 3004, Australia and School of Medical Sciences, RMIT University, Bundoora, Victoria 3083 (Australia); Cornelius, Iwan [Imaging and Medical Beamline, Australian Synchrotron, Clayton, Victoria 3168, Australia and Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales 2500 (Australia); Blencowe, Anton [Division of Health Sciences, School of Pharmacy and Medical Sciences, The University of South Australia, Adelaide, South Australia 5000, Australia and Division of Information Technology, Engineering and the Environment, Mawson Institute, University of South Australia, Mawson Lakes, South Australia 5095 (Australia); Franich, Rick D. [School of Applied Sciences and Health Innovations Research Institute, RMIT University, Melbourne, Victoria 3000 (Australia); Geso, Moshi [School of Medical Sciences, RMIT University, Bundoora, Victoria 3083 (Australia)

    2015-12-15

    Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT. The aim of MRT researchers currently is to progress to human clinical trials in the near future. The purpose of this study was to demonstrate the high resolution and 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprised of microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed using a NIKON A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved with the full width at half maximum of microbeams measured on images with resolutions of as low as 0.09 μm/pixel. Profiles obtained demonstrated the change of the peak-to-valley dose ratio for interspersed MRT microbeam arrays and subtle variations in the sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for the independent spatial and geometrical verification of MRT beam delivery.
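Measuring the full width at half maximum of a microbeam profile, as described above, can be sketched as follows; the Gaussian test profile and the linear-interpolation scheme are illustrative, not the study's measurement code:

```python
import numpy as np

def fwhm(positions, profile):
    """Full width at half maximum of a single peak, via linear interpolation.

    Illustrative of the microbeam-profile measurement; real MRT profiles
    would first need background subtraction.
    """
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    i0, i1 = above[0], above[-1]
    # Linearly interpolate the half-maximum crossing on each flank.
    left = np.interp(half, [profile[i0 - 1], profile[i0]],
                     [positions[i0 - 1], positions[i0]])
    right = np.interp(half, [profile[i1 + 1], profile[i1]],
                      [positions[i1 + 1], positions[i1]])
    return right - left

x = np.linspace(-100, 100, 2001)        # micrometres, 0.1 um steps
beam = np.exp(-x**2 / (2 * 10.0**2))    # synthetic Gaussian "microbeam", sigma = 10 um
print(round(fwhm(x, beam), 1))          # 23.5
```

For a Gaussian the analytic FWHM is 2*sqrt(2*ln 2)*sigma (about 23.55 um here), so the interpolated estimate serves as a quick self-check of the measurement routine.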

  1. 3D MODEL GENERATION USING OBLIQUE IMAGES ACQUIRED BY UAV

    Directory of Open Access Journals (Sweden)

    A. Lingua

    2017-07-01

    Full Text Available In recent years, many studies have revealed the advantages of using airborne oblique images for obtaining improved 3D city models (including façades and building footprints). Here, the acquisition and use of oblique images from a low-cost, open-source Unmanned Aerial Vehicle (UAV) for the high-level-of-detail 3D reconstruction of historical architecture is evaluated. The critical issues of such acquisitions (flight planning strategies, ground control point distribution, etc.) are described. Several problems must be considered in flight planning: the best approach to cover the whole object with the minimum flight time; visibility of vertical structures; occlusions due to the context; acquisition of all parts of the object (the closest and the farthest) at similar resolution; suitable camera inclination; and so on. In this paper a solution is proposed that acquires the oblique images in a single flight. Data processing used a Structure-from-Motion-based approach for point cloud generation, with dense image-matching algorithms implemented in open-source software. The achieved results are analysed against check points and reference LiDAR data. The system was tested by surveying a historical architectural complex, the “Sacro Monte di Varallo Sesia” in north-west Italy. This study demonstrates that the use of oblique images acquired from a low-cost UAV system and processed with open-source software is an effective methodology for surveying cultural heritage characterized by limited accessibility, the need for detail, rapid acquisition, and often reduced budgets.

  2. High resolution 3D imaging of synchrotron generated microbeams

    International Nuclear Information System (INIS)

    Gagliardi, Frank M.; Cornelius, Iwan; Blencowe, Anton; Franich, Rick D.; Geso, Moshi

    2015-01-01

    Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT. The aim of MRT researchers currently is to progress to human clinical trials in the near future. The purpose of this study was to demonstrate the high resolution and 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprising microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed using a NIKON A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved with the full width at half maximum of microbeams measured on images with resolutions of as low as 0.09 μm/pixel. Profiles obtained demonstrated the change of the peak-to-valley dose ratio for interspersed MRT microbeam arrays and subtle variations in the sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for the independent spatial and geometrical verification of MRT beam delivery.

  3. A computer-generated image of the LHCb detector

    CERN Multimedia

    Richard Jacobsson

    2004-01-01

    Unlike most of the detectors on the LHC, which use barrel geometries, the LHCb detector will use walls of sub-detectors to study the particles produced in the 14 TeV proton-proton collisions. This arrangement is used because the bottom and anti-bottom quark pairs produced in the collision, whose decays will be studied, travel close to the path of the colliding beams. LHCb will investigate Nature's preference for matter over antimatter through a process known as CP violation.

  4. Grid Computing Application for Brain Magnetic Resonance Image Processing

    International Nuclear Information System (INIS)

    Valdivia, F; Crépeault, B; Duchesne, S

    2012-01-01

    This work emphasizes the use of grid computing and web technology for automatic post-processing of brain magnetic resonance images (MRI) in the context of neuropsychiatric (Alzheimer's disease) research. Post-acquisition image processing is achieved through the interconnection of several individual processes into pipelines. Each process has input and output data ports, options and execution parameters, and performs single tasks such as: a) extracting individual image attributes (e.g. dimensions, orientation, center of mass), b) performing image transformations (e.g. scaling, rotation, skewing, intensity standardization, linear and non-linear registration), c) performing image statistical analyses, and d) producing the necessary quality control images and/or files for user review. The pipelines are built to perform specific sequences of tasks on the alphanumeric data and MRIs contained in our database. The web application is coded in PHP and allows the creation of scripts to create, store and execute pipelines and their instances either on our local cluster or on high-performance computing platforms. To run an instance on an external cluster, the web application opens a communication tunnel through which it copies the necessary files, submits the execution commands and collects the results. We present results from system tests for the processing of a set of 821 brain MRIs from the Alzheimer's Disease Neuroimaging Initiative study via a nonlinear registration pipeline composed of 10 processes. Our results show successful execution on both local and external clusters, and a 4-fold increase in performance when using the external cluster. However, the latter's performance does not scale linearly as queue waiting times and execution overhead increase with the number of tasks to be executed.

  5. Parameters related to the image quality in computed tomography -CT

    International Nuclear Information System (INIS)

    Alonso, T.C.; Silva, T.A.; Mourão, A.P.; Silva, T.A.

    2015-01-01

    Quality control programs in computed tomography (CT) should be continuously reviewed to always ensure the best image quality with the lowest possible patient dose in the diagnostic process. Quality control in CT aims to design and implement a set of procedures that allows verification of the operating conditions of the scanner within the requirements specified for its use. In Brazil, the Ministry of Health technical rules (Resolution NE 1016, 'Medical Radiology - Equipment and Safety Performance') establish a reference for the analysis of tests on CT. A large number of factors, such as image noise, slice thickness (resolution along the Z axis), low contrast resolution, high contrast resolution, and the radiation dose, can be affected by the selection of technical parameters in exams. The purpose of this study was to investigate how changes in image acquisition protocols modify image quality, and to determine the trade-offs between the different aspects of image quality, especially the reduction of patient radiation dose. As a preliminary procedure to check the operating conditions of the CT scanner, measurements were performed on a 64-MDCT scanner (GE Healthcare, BrightSpeed) at the Molecular Imaging Center (Cimol) of the Federal University of Minas Gerais (UFMG). The image quality tests were performed with a Catphan-600 phantom; this device has five modules, and in each a series of tests can be performed. Different medical imaging practices have different requirements for acceptable image quality. The results of the quality control tests showed that the analyzed equipment is in accordance with the requirements established by current regulations.

  6. Natural language computing an English generative grammar in Prolog

    CERN Document Server

    Dougherty, Ray C

    2013-01-01

    This book's main goal is to show readers how to use the linguistic theory of Noam Chomsky, called Universal Grammar, to represent English, French, and German on a computer using the Prolog computer language. In so doing, it presents a follow-the-dots approach to natural language processing, linguistic theory, artificial intelligence, and expert systems. The basic idea is to introduce meaningful answers to significant problems involved in representing human language data on a computer. The book offers a hands-on approach to anyone who wishes to gain a perspective on natural language

  7. Computer-aided classification of lung nodules on computed tomography images via deep learning technique

    Directory of Open Access Journals (Sweden)

    Hua KL

    2015-08-01

    Full Text Available Kai-Lung Hua,1 Che-Hao Hsu,1 Shintami Chusnul Hidayati,1 Wen-Huang Cheng,2 Yu-Jen Chen3 1Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, 2Research Center for Information Technology Innovation, Academia Sinica, 3Department of Radiation Oncology, MacKay Memorial Hospital, Taipei, Taiwan Abstract: Lung cancer has a poor prognosis when not diagnosed early and unresectable lesions are present. The management of small lung nodules noted on computed tomography scan is controversial due to uncertain tumor characteristics. A conventional computer-aided diagnosis (CAD) scheme requires several image processing and pattern recognition steps to accomplish a quantitative tumor differentiation result. In such an ad hoc image analysis pipeline, every step depends heavily on the performance of the previous step. Accordingly, tuning of classification performance in a conventional CAD scheme is very complicated and arduous. Deep learning techniques, on the other hand, have the intrinsic advantage of automatic feature exploitation and performance tuning in a seamless fashion. In this study, we attempted to simplify the image analysis pipeline of conventional CAD with deep learning techniques. Specifically, we introduced models of a deep belief network and a convolutional neural network in the context of nodule classification in computed tomography images. Two baseline methods with feature computing steps were implemented for comparison. The experimental results suggest that deep learning methods could achieve better discriminative results and hold promise in the CAD application domain. Keywords: nodule classification, deep learning, deep belief network, convolutional neural network

  8. Pathomorphism of spiral tibial fractures in computed tomography imaging.

    Science.gov (United States)

    Guzik, Grzegorz

    2011-01-01

    Spiral fractures of the tibia are virtually homogeneous with regard to their pathomorphism. The differences that are seen concern the level of fracture of the fibula, and, to a lesser extent, the level of fracture of the tibia, the length of fracture cleft, and limb shortening following the trauma. While conventional radiographs provide sufficient information about the pathomorphism of fractures, computed tomography can be useful in demonstrating the spatial arrangement of bone fragments and topography of soft tissues surrounding the fracture site. Multiple cross-sectional computed tomography views of spiral fractures of the tibia show the details of the alignment of bone chips at the fracture site, axis of the tibial fracture cleft, and topography of soft tissues that are not visible on standard radiographs. A model of a spiral tibial fracture reveals periosteal stretching with increasing spiral and longitudinal displacement. The cleft in tibial fractures has a spiral shape and its line is invariable. Every spiral fracture of both crural bones results in extensive damage to the periosteum and may damage bellies of the long flexor muscle of toes, flexor hallucis longus as well as the posterior tibial muscle. Computed tomography images of spiral fractures of the tibia show details of damage that are otherwise invisible on standard radiographs. Moreover, CT images provide useful information about the spatial location of the bone chips as well as possible threats to soft tissues that surround the fracture site. Every spiral fracture of the tibia is associated with disruption of the periosteum. 1. Computed tomography images of spiral fractures of the tibia show details of damage otherwise invisible on standard radiographs, 2. The sharp end of the distal tibial chip can damage the tibialis posterior muscle, long flexor muscles of the toes and the flexor hallucis longus, 3. Every spiral fracture of the tibia is associated with disruption of the periosteum.

  9. Image storage, cataloguing and retrieval using a personal computer database software application

    International Nuclear Information System (INIS)

    Lewis, G.; Howman-Giles, R.

    1999-01-01

    Full text: Interesting images and cases are collected and collated by most nuclear medicine practitioners throughout the world. Changing imaging technology has altered the way in which images may be presented and are reported, with less reliance on 'hard copy' for both reporting and archiving purposes. Digital image generation and storage is rapidly replacing film in both radiological and nuclear medicine practice. A personal computer database based interesting case filing system is described and demonstrated. The digital image storage format allows instant access to both case information (e.g. history and examination, scan report or teaching point) and the relevant images. The database design allows rapid selection of cases and images appropriate to a particular diagnosis, scan type, age or other search criteria. Correlative X-ray, CT, MRI and ultrasound images can also be stored and accessed. The application is in use at The New Children's Hospital as an aid to postgraduate medical education, with new cases being regularly added to the database

  10. Noise and contrast detection in computed tomography images

    International Nuclear Information System (INIS)

    Faulkner, K.; Moores, B.M.

    1984-01-01

    A discrete representation of the reconstruction process is used in an analysis of noise in computed tomography (CT) images. This model is consistent with the method of data collection in actual machines. An expression is derived which predicts the variance on the measured linear attenuation coefficient of a single pixel in an image. The dependence of the variance on various CT scanner design parameters such as pixel size, slice width, scan time, number of detectors, etc., is then described. The variation of noise with sampling area is theoretically explained. These predictions are in good agreement with a set of experimental measurements made on a range of CT scanners. The equivalent sampling aperture of the CT process is determined and the effect of the reconstruction filter on the variance of the linear attenuation coefficient is also noted, in particular the consequences of filter choice for reconstructed images and noise behaviour. The theory has been extended to include contrast detail behaviour, and these predictions compare favourably with experimental measurements. The theory predicts that image smoothing will have little effect on the contrast-detail detectability behaviour of reconstructed images. (author)
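
The variation of noise with sampling area described in this abstract can be illustrated with a small simulation: for uncorrelated pixel noise, the standard deviation of an ROI mean falls roughly as 1/sqrt(area), whereas in real CT images the reconstruction filter correlates neighbouring pixels and slows this fall-off. All image values and ROI sizes below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated uniform "attenuation map" with uncorrelated Gaussian noise.
image = rng.normal(loc=0.02, scale=0.001, size=(512, 512))

def roi_mean_std(img, size, n=200):
    """Std of the mean over n randomly placed size x size ROIs."""
    means = []
    for _ in range(n):
        i = rng.integers(0, img.shape[0] - size)
        j = rng.integers(0, img.shape[1] - size)
        means.append(img[i:i+size, j:j+size].mean())
    return float(np.std(means))

noise_2 = roi_mean_std(image, 2)   # small sampling area
noise_8 = roi_mean_std(image, 8)   # 16x the area -> roughly 4x lower noise
```

For correlated CT noise the ratio `noise_2 / noise_8` would be smaller than the ~4 seen here, which is one way the equivalent sampling aperture of the reconstruction shows up in measurements.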

  11. Gallium tomoscintigraphic imaging of esophageal cancer using emission computed tomography

    International Nuclear Information System (INIS)

    Hattori, Takao; Nakagawa, Tsuyoshi; Takeda, Kan; Maeda, Hisato; Taguchi, Mitsuo

    1983-01-01

    Emission computed tomography (ECT) was clinically evaluated in 67Ga imaging of esophageal cancer. The ECT system used in this study is equipped with opposed dual large-field-of-view cameras (GCA 70A-S, Toshiba Co.). Data were acquired by rotating the two cameras 180° about the longitudinal axis of the patient. Total acquisition time was about 12 minutes. Multiple slices of transaxial, sagittal and coronal sections were reconstructed in a 64 x 64 matrix form using convolution algorithms. In three out of six cases studied the tumor uptake was not detected on conventional images, because the lesion was small, concentration of activity was poor or the lesion activity was overlapped with the neighbouring activities distributed to normal organs such as sternum, vertebra, liver and hilus. On ECT images, by contrast, abnormal uptake of the tumors was definitively detected in all the six cases. ECT imaging was also useful in estimating the effect of treatment by the decrease in 67Ga concentration. We have devised a special technique to repeat ECT scan with a thin tube filled with 67Ga solution inserted through the esophagus. By this technique, comparing paired images with and without the tube activity, exact location of the uptake against the esophagus and extraesophageal extension of the disease could be accurately evaluated in a three-dimensional field of view. ECT in gallium scanning is expected to be of great clinical value to elevate the confidence level of diagnosis in detecting, localizing and following up the diseases. (author)

  12. StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks

    OpenAIRE

    Zhang, Han; Xu, Tao; Li, Hongsheng; Zhang, Shaoting; Wang, Xiaogang; Huang, Xiaolei; Metaxas, Dimitris

    2017-01-01

    Although Generative Adversarial Networks (GANs) have shown remarkable success in various tasks, they still face challenges in generating high quality images. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) aiming at generating high-resolution photo-realistic images. First, we propose a two-stage generative adversarial network architecture, StackGAN-v1, for text-to-image synthesis. The Stage-I GAN sketches the primitive shape and colors of the object based on given...

  13. Data driven model generation based on computational intelligence

    Science.gov (United States)

    Gemmar, Peter; Gronz, Oliver; Faust, Christophe; Casper, Markus

    2010-05-01

    The simulation of discharges at a local gauge or the modeling of large scale river catchments is effectively involved in estimation and decision tasks of hydrological research and practical applications like flood prediction or water resource management. However, modeling such processes using analytical or conceptual approaches is made difficult by both the complexity of process relations and the heterogeneity of processes. It has been shown many times that unknown or assumed process relations can in principle be described by computational methods, and that system models can automatically be derived from observed behavior or measured process data. This study describes the development of hydrological process models using computational methods, including Fuzzy logic and artificial neural networks (ANN), in a comprehensive and automated manner. Methods: We consider a closed concept for data driven development of hydrological models based on measured (experimental) data. The concept is centered on a Fuzzy system using rules of Takagi-Sugeno-Kang type, which formulate the input-output relation in a generic structure like Ri: IF q(t) = low AND ... THEN q(t+Δt) = ai0 + ai1 q(t) + ai2 p(t−Δti1) + ai3 p(t+Δti2) + .... The rule's premise part (IF) describes process states involving available process information, e.g. the actual outlet q(t) is low, where low is one of several Fuzzy sets defined over the variable q(t). The rule's conclusion (THEN) estimates the expected outlet q(t+Δt) by a linear function over selected system variables, e.g. the actual outlet q(t) and previous and/or forecasted precipitation p(t ± Δtik). In the case of river catchment modeling we use head gauges, tributary and upriver gauges in the conclusion part as well. In addition, we consider temperature and temporal (season) information in the premise part. By creating a set of rules R = {Ri | i = 1,...,N} the space of process states can be covered as concisely as necessary. Model adaptation is achieved by finding an optimal set A = (aij) of conclusion
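
The Takagi-Sugeno-Kang inference this abstract sketches can be written down in a few lines. This is a minimal illustrative sketch, not the authors' model: the membership functions, rule count and conclusion coefficients are all invented for the example.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Fuzzy sets "low", "medium", "high" over the current outlet q(t).
SETS = {
    "low":    lambda q: trimf(q, -1.0, 0.0, 5.0),
    "medium": lambda q: trimf(q, 2.0, 5.0, 8.0),
    "high":   lambda q: trimf(q, 5.0, 10.0, 16.0),
}

# One rule per fuzzy set; (a0, a1, a2) are the conclusion coefficients of
# q(t+dt) = a0 + a1*q(t) + a2*p(t), chosen arbitrarily for this sketch.
RULES = [
    ("low",    (0.1, 0.90, 0.30)),
    ("medium", (0.5, 0.95, 0.40)),
    ("high",   (1.0, 1.00, 0.50)),
]

def tsk_forecast(q_t, p_t):
    """Standard TSK inference: firing-strength-weighted average of the
    linear rule conclusions."""
    weights, outputs = [], []
    for set_name, (a0, a1, a2) in RULES:
        w = SETS[set_name](q_t)                   # premise (IF part)
        weights.append(w)
        outputs.append(a0 + a1 * q_t + a2 * p_t)  # conclusion (THEN part)
    weights = np.array(weights)
    if weights.sum() == 0.0:
        return 0.0   # no rule fires outside the covered state space
    return float(np.dot(weights, outputs) / weights.sum())
```

Model adaptation, as in the abstract, would amount to fitting the conclusion coefficients (here the tuples in `RULES`) against measured discharge data.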

  14. Efficient 2-D DCT Computation from an Image Representation Point of View

    OpenAIRE

    Papakostas, G.A.; Koulouriotis, D.E.; Karakasis, E.G.

    2009-01-01

    A novel methodology that ensures the computation of 2-D DCT coefficients in gray-scale images as well as in binary ones, with high computation rates, was presented in the previous sections. Through a new image representation scheme, called ISR (Image Slice Representation) the 2-D DCT coefficients can be computed in significantly reduced time, with the same accuracy.
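
The excerpt does not spell out the ISR scheme itself; as a point of reference, the transform whose coefficients ISR accelerates, the 2-D DCT-II, can be computed naively via row/column separability. The orthonormal basis construction below is standard, not taken from the paper.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]          # frequency index (rows)
    x = np.arange(n)[None, :]          # sample index (columns)
    C = np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    C[0, :] *= 1.0 / np.sqrt(2.0)      # DC row scaling for orthonormality
    return C * np.sqrt(2.0 / n)

def dct2(img):
    """Separable 2-D DCT-II: transform rows, then columns."""
    C = dct_matrix(img.shape[0])
    R = dct_matrix(img.shape[1])
    return C @ img @ R.T
```

Because the matrices are orthonormal, the transform preserves energy, and for a constant 8x8 image of ones every coefficient except the DC term is zero.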

  15. Generating descriptive visual words and visual phrases for large-scale image applications.

    Science.gov (United States)

    Zhang, Shiliang; Tian, Qi; Hua, Gang; Huang, Qingming; Gao, Wen

    2011-09-01

    Bag-of-visual Words (BoWs) representation has been applied for various problems in the fields of multimedia and computer vision. The basic idea is to represent images as visual documents composed of repeatable and distinctive visual elements, which are comparable to the text words. Notwithstanding its great success and wide adoption, visual vocabulary created from single-image local descriptors is often shown to be not as effective as desired. In this paper, descriptive visual words (DVWs) and descriptive visual phrases (DVPs) are proposed as the visual correspondences to text words and phrases, where visual phrases refer to the frequently co-occurring visual word pairs. Since images are the carriers of visual objects and scenes, a descriptive visual element set can be composed by the visual words and their combinations which are effective in representing certain visual objects or scenes. Based on this idea, a general framework is proposed for generating DVWs and DVPs for image applications. In a large-scale image database containing 1506 object and scene categories, the visual words and visual word pairs descriptive to certain objects or scenes are identified and collected as the DVWs and DVPs. Experiments show that the DVWs and DVPs are informative and descriptive and, thus, are more comparable with the text words than the classic visual words. We apply the identified DVWs and DVPs in several applications including large-scale near-duplicated image retrieval, image search re-ranking, and object recognition. The combination of DVW and DVP performs better than the state of the art in large-scale near-duplicated image retrieval in terms of accuracy, efficiency and memory consumption. The proposed image search re-ranking algorithm, DWPRank, outperforms the state-of-the-art algorithm by 12.4% in mean average precision and is about 11 times faster.
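
The classic bag-of-visual-words representation that DVWs/DVPs refine can be sketched minimally: local descriptors are vector-quantized against a visual vocabulary and the image becomes a word histogram. Here random centroids stand in for a learned k-means vocabulary, and the DVW/DVP selection step itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = rng.normal(size=(5, 8))         # 5 visual words, 8-D centroids
descriptors = rng.normal(size=(40, 8))  # local descriptors of one image

# Assign each descriptor to its nearest visual word (vector quantization).
dists = np.linalg.norm(descriptors[:, None, :] - vocab[None, :, :], axis=2)
words = dists.argmin(axis=1)

# Bag-of-words histogram of the image over the vocabulary.
bow = np.bincount(words, minlength=len(vocab))
```

Visual phrases, in the paper's sense, would then be pairs of words from `words` that co-occur frequently across many images.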

  16. An investigation of automatic exposure control calibration for chest imaging with a computed radiography system

    International Nuclear Information System (INIS)

    Moore, C S; Wood, T J; Beavis, A W; Saunderson, J R; Avery, G; Balcam, S; Needler, L

    2014-01-01

    The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter being the preferred calibration metric given that it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that resulting clinical image quality will be adequate for the required clinical task. However, this must be done with close cooperation of expert image evaluators, to ensure appropriate levels of detector air kerma. (paper)

  17. An investigation of automatic exposure control calibration for chest imaging with a computed radiography system.

    Science.gov (United States)

    Moore, C S; Wood, T J; Avery, G; Balcam, S; Needler, L; Beavis, A W; Saunderson, J R

    2014-05-07

    The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter being the preferred calibration metric given that it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that resulting clinical image quality will be adequate for the required clinical task. However, this must be done with close cooperation of expert image evaluators, to ensure appropriate levels of detector air kerma.
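
The two simpler metrics from this study, SNR and CNR, are straightforward ROI measurements on a uniform phantom image. The sketch below uses a simulated image with an added contrast patch; the image values and ROI placements are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated detector image: uniform background plus a contrast patch.
image = rng.normal(loc=100.0, scale=5.0, size=(256, 256))
image[96:160, 96:160] += 20.0           # contrast region

background = image[0:64, 0:64]          # ROI in the uniform background
contrast   = image[112:144, 112:144]    # ROI inside the contrast patch

# SNR: mean signal over noise in the uniform region.
snr = background.mean() / background.std()

# CNR: signal difference between the two ROIs over background noise.
cnr = (contrast.mean() - background.mean()) / background.std()
```

In an AEC calibration, such measurements would be repeated across the tube voltage range and the detector air kerma adjusted until the chosen metric stays constant.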

  18. Interactive Light Stimulus Generation with High Performance Real-Time Image Processing and Simple Scripting

    Directory of Open Access Journals (Sweden)

    László Szécsi

    2017-12-01

    Full Text Available Light stimulation with precise and complex spatial and temporal modulation is demanded by a series of research fields like visual neuroscience, optogenetics, ophthalmology, and visual psychophysics. We developed a user-friendly and flexible stimulus generating framework (GEARS GPU-based Eye And Retina Stimulation Software, which offers access to GPU computing power, and allows interactive modification of stimulus parameters during experiments. Furthermore, it has built-in support for driving external equipment, as well as for synchronization tasks, via USB ports. The use of GEARS does not require elaborate programming skills. The necessary scripting is visually aided by an intuitive interface, while the details of the underlying software and hardware components remain hidden. Internally, the software is a C++/Python hybrid using OpenGL graphics. Computations are performed on the GPU, and are defined in the GLSL shading language. However, all GPU settings, including the GPU shader programs, are automatically generated by GEARS. This is configured through a method encountered in game programming, which allows high flexibility: stimuli are straightforwardly composed using a broad library of basic components. Stimulus rendering is implemented solely in C++, therefore intermediary libraries for interfacing could be omitted. This enables the program to perform computationally demanding tasks like en-masse random number generation or real-time image processing by local and global operations.

  19. Interactive Light Stimulus Generation with High Performance Real-Time Image Processing and Simple Scripting.

    Science.gov (United States)

    Szécsi, László; Kacsó, Ágota; Zeck, Günther; Hantz, Péter

    2017-01-01

    Light stimulation with precise and complex spatial and temporal modulation is demanded by a series of research fields like visual neuroscience, optogenetics, ophthalmology, and visual psychophysics. We developed a user-friendly and flexible stimulus generating framework (GEARS GPU-based Eye And Retina Stimulation Software), which offers access to GPU computing power, and allows interactive modification of stimulus parameters during experiments. Furthermore, it has built-in support for driving external equipment, as well as for synchronization tasks, via USB ports. The use of GEARS does not require elaborate programming skills. The necessary scripting is visually aided by an intuitive interface, while the details of the underlying software and hardware components remain hidden. Internally, the software is a C++/Python hybrid using OpenGL graphics. Computations are performed on the GPU, and are defined in the GLSL shading language. However, all GPU settings, including the GPU shader programs, are automatically generated by GEARS. This is configured through a method encountered in game programming, which allows high flexibility: stimuli are straightforwardly composed using a broad library of basic components. Stimulus rendering is implemented solely in C++, therefore intermediary libraries for interfacing could be omitted. This enables the program to perform computationally demanding tasks like en-masse random number generation or real-time image processing by local and global operations.

  20. A Frequency Matching Method for Generation of a Priori Sample Models from Training Images

    DEFF Research Database (Denmark)

    Lange, Katrine; Cordua, Knud Skou; Frydendall, Jan

    2011-01-01

    This paper presents a Frequency Matching Method (FMM) for generation of a priori sample models based on training images and illustrates its use by an example. In geostatistics, training images are used to represent a priori knowledge or expectations of models, and the FMM can be used to generate new images that share the same multi-point statistics as a given training image. The FMM proceeds by iteratively updating voxel values of an image until the frequency of patterns in the image matches the frequency of patterns in the training image, making the resulting image statistically indistinguishable from the training image.
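
A toy sketch of the frequency-matching idea on binary 2-D images with 2x2 patterns: voxel flips are kept whenever they bring the image's pattern histogram closer to that of the training image. The actual FMM's pattern geometry, distance measure and update schedule may differ from this greedy illustration.

```python
import numpy as np
from collections import Counter

def pattern_counts(img):
    """Histogram of all overlapping 2x2 patterns in a binary image."""
    c = Counter()
    for i in range(img.shape[0] - 1):
        for j in range(img.shape[1] - 1):
            c[tuple(img[i:i+2, j:j+2].ravel())] += 1
    return c

def distance(c1, c2):
    """Squared difference between two pattern histograms."""
    keys = set(c1) | set(c2)
    return sum((c1[k] - c2[k]) ** 2 for k in keys)

def frequency_match(training, image, iters=500, seed=0):
    """Greedily flip voxels while the pattern histogram moves toward
    that of the training image."""
    rng = np.random.default_rng(seed)
    target = pattern_counts(training)
    img = image.copy()
    d = distance(pattern_counts(img), target)
    for _ in range(iters):
        i = rng.integers(img.shape[0])
        j = rng.integers(img.shape[1])
        img[i, j] ^= 1                     # propose a voxel flip
        d_new = distance(pattern_counts(img), target)
        if d_new <= d:
            d = d_new                      # keep improving flips
        else:
            img[i, j] ^= 1                 # revert worsening flips
    return img
```

Run on a checkerboard training image, the matched result's pattern histogram is at least as close to the target as the starting image's.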

  1. Development of computational small animal models and their applications in preclinical imaging and therapy research

    NARCIS (Netherlands)

    Xie, Tianwu; Zaidi, Habib

    The development of multimodality preclinical imaging techniques and the rapid growth of realistic computer simulation tools have promoted the construction and application of computational laboratory animal models in preclinical research. Since the early 1990s, over 120 realistic computational animal

  2. Computer image analysis of etched tracks from ionizing radiation

    Science.gov (United States)

    Blanford, George E.

    1994-01-01

    I proposed to continue a cooperative research project with Dr. David S. McKay concerning image analysis of tracks. Last summer we showed that we could measure track densities using the Oxford Instruments eXL computer and software that is attached to an ISI scanning electron microscope (SEM) located in building 31 at JSC. To reduce the dependence on JSC equipment, we proposed to transfer the SEM images to UHCL for analysis. Last summer we developed techniques to use digitized scanning electron micrographs and computer image analysis programs to measure track densities in lunar soil grains. Tracks were formed by highly ionizing solar energetic particles and cosmic rays during near surface exposure on the Moon. The track densities are related to the exposure conditions (depth and time). Distributions of the number of grains as a function of their track densities can reveal the modality of soil maturation. As part of a consortium effort to better understand the maturation of lunar soil and its relation to its infrared reflectance properties, we worked on lunar samples 67701,205 and 61221,134. These samples were etched for a shorter time (6 hours) than last summer's sample and this difference has presented problems for establishing the correct analysis conditions. We used computer counting and measurement of area to obtain preliminary track densities and a track density distribution that we could interpret for sample 67701,205. This sample is a submature soil consisting of approximately 85 percent mature soil mixed with approximately 15 percent immature, but not pristine, soil.

  3. Computed Tomography Imaging of the Topographical Anatomy of Canine Prostate

    International Nuclear Information System (INIS)

    Dimtrox, R.; Yonkova, P.; Vladova, D.; Kostov, D.

    2010-01-01

AIM: To investigate the topographical anatomy of canine prostate gland by computed tomography (CT) for diagnostic imaging purposes. MATERIAL AND METHODS: Seven clinically healthy mongrel male dogs at the age of 3−4 years and body weight of 10−15 kg were submitted to transverse computerized axial tomography (CAT) with cross section thickness of 5 mm. RESULTS: The CT image of canine prostate is visualized throughout the scans of the pelvis in the planes through the first sacral vertebra (S1) dorsally; the bodies of iliac bones laterally and cranially to the pelvic brim (ventrally). The body of the prostate appears as an oval homogeneous, relatively hypodense finding with soft tissue density. The gland is well differentiated from the adjacent soft tissues. CONCLUSION: By means of CT, the cranial part of prostate gland in adult dogs aged 3−4 years exhibited an abdominal localization. (author)

  4. The image of a brain stroke in a computed tomograph

    International Nuclear Information System (INIS)

    Just, E.G.

    1982-01-01

On the basis of findings from 100 patients who had suffered brain strokes, and using 1500 confirmed stroke images, it was tested whether the stroke-predilection typology outlined by Zuelch rests on a coincidental summation of individual cases. X-ray computed tomography, which permits the evaluation of non-lethal cases, proved to be a suitable method for confirming or refuting this stroke theory. By consistently matching the frontal and horizontal sectional images to the typology it could be shown, with the exception of a few rather rare types, that the basic and predilection types of brain stroke recur in their patterns. In individual cases a further specification into subtypes could also be undertaken. (orig./TRV) [de

  5. Krypton for computed tomography lung ventilation imaging: preliminary animal data.

    Science.gov (United States)

    Mahnken, Andreas H; Jost, Gregor; Pietsch, Hubertus

    2015-05-01

    The objective of this study was to assess the feasibility and safety of krypton ventilation imaging with intraindividual comparison to xenon ventilation computed tomography (CT). In a first step, attenuation of different concentrations of xenon and krypton was analyzed in a phantom setting. Thereafter, 7 male New Zealand white rabbits (4.4-6.0 kg) were included in an animal study. After orotracheal intubation, an unenhanced CT scan was obtained in end-inspiratory breath-hold. Thereafter, xenon- (30%) and krypton-enhanced (70%) ventilation CT was performed in random order. After a 2-minute wash-in of gas A, CT imaging was performed. After a 45-minute wash-out period and another 2-minute wash-in of gas B, another CT scan was performed using the same scan protocol. Heart rate and oxygen saturation were measured. Unenhanced and krypton or xenon data were registered and subtracted using a nonrigid image registration tool. Enhancement was quantified and statistically analyzed. One animal had to be excluded from data analysis owing to problems during intubation. The CT scans in the remaining 6 animals were completed without complications. There were no relevant differences in oxygen saturation or heart rate between the scans. Xenon resulted in a mean increase of enhancement of 35.3 ± 5.5 HU, whereas krypton achieved a mean increase of 21.9 ± 1.8 HU in enhancement (P = 0.0055). The use of krypton for lung ventilation imaging appears to be feasible and safe. Despite the use of a markedly higher concentration of krypton, enhancement is significantly worse when compared with xenon CT ventilation imaging, but sufficiently high for CT ventilation imaging studies.

  6. Information Security Scheme Based on Computational Temporal Ghost Imaging.

    Science.gov (United States)

    Jiang, Shan; Wang, Yurong; Long, Tao; Meng, Xiangfeng; Yang, Xiulun; Shu, Rong; Sun, Baoqing

    2017-08-09

An information security scheme based on computational temporal ghost imaging is proposed. A sequence of independent 2D random binary patterns is used as the encryption key and multiplied with the 1D data stream. The cipher text is obtained by summing the weighted encryption key. The decryption process is realized by a correlation measurement between the encrypted information and the encryption key. Owing to the intrinsic high-level randomness of the key, the security of the method is strongly guaranteed. The feasibility of this method and its robustness against both occlusion and additive noise attacks are demonstrated by simulation.
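The encrypt-by-weighted-sum and decrypt-by-correlation steps above can be sketched as a toy model in Python. Pattern size, signal values, and function names are illustrative assumptions, not taken from the paper; the patterns are flattened to 1D lists for simplicity.

```python
import random

def encrypt(signal, n_pixels, rng):
    """Computational temporal ghost imaging: each 1-D sample weights one
    random binary key pattern; the ciphertext is their pixelwise sum."""
    keys = [[rng.randint(0, 1) for _ in range(n_pixels)] for _ in signal]
    cipher = [sum(x * k[p] for x, k in zip(signal, keys))
              for p in range(n_pixels)]
    return cipher, keys

def decrypt(cipher, keys):
    """Recover each sample as the covariance (correlation measurement)
    between the ciphertext and that sample's key pattern."""
    n = len(cipher)
    c_mean = sum(cipher) / n
    out = []
    for k in keys:
        k_mean = sum(k) / n
        cov = sum((c - c_mean) * (kp - k_mean)
                  for c, kp in zip(cipher, k)) / n
        out.append(cov)
    return out
```

For 0/1 patterns the recovered values come out proportional to the original samples (about x/4), up to statistical noise that shrinks as the patterns grow; robustness to occlusion follows because the correlation averages over all pixels.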

  7. Principles of image reconstruction in X-ray computer tomography

    International Nuclear Information System (INIS)

    Schwierz, G.; Haerer, W.; Ruehrnschopf, E.P.

    1978-01-01

The geometrical interpretation presented here elucidates the convergence behavior of the classical iteration technique in X-ray computed tomography. The filter techniques now used in preference are derived from a linear-system-theory concept notable for its particular clarity. The one-dimensional form of the filtering is of decisive importance for immediate image reconstruction, as realized in both Siemens systems, the SIRETOM 2000 head scanner and the SOMATOM whole-body machine (to date unique among whole-body machines). The equivalence of discrete and continuous filtering when dealing with frequency-band-limited projections is proved. (orig.) [de
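The one-dimensional filtering of projections can be illustrated with the discrete ramp (Ram-Lak) kernel from textbook filtered back-projection. This is a generic sketch under that standard formulation, not the Siemens implementation; kernel size and function names are illustrative.

```python
import math

def ramlak_kernel(half_width):
    """Discrete Ram-Lak (ramp) kernel: 1/4 at the center, zero at even
    offsets, -1/(pi^2 n^2) at odd offsets."""
    h = {}
    for n in range(-half_width, half_width + 1):
        if n == 0:
            h[n] = 0.25
        elif n % 2 == 0:
            h[n] = 0.0
        else:
            h[n] = -1.0 / (math.pi ** 2 * n ** 2)
    return h

def filter_projection(proj, kernel):
    """1-D convolution of one projection with the ramp kernel
    (zero padding at the edges)."""
    out = []
    for i in range(len(proj)):
        acc = 0.0
        for n, hv in kernel.items():
            j = i - n
            if 0 <= j < len(proj):
                acc += hv * proj[j]
        out.append(acc)
    return out
```

The kernel values sum to zero in the limit, so flat regions of a projection are suppressed while edges are sharpened before back-projection.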

  8. High performance computing environment for multidimensional image analysis.

    Science.gov (United States)

    Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo

    2007-07-10

The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact on microscopy applications. We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors, and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478x speedup. Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large-scale experiments with massive datasets.
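The decomposition idea, slabs with a one-voxel halo so each piece needs data only from its nearest neighbors, can be sketched serially in Python. This is a toy 3x3x3 median filter on nested lists, standing in for the per-node work that the paper distributes across Blue Gene/L processors; function names and the z-axis slab split are illustrative assumptions.

```python
import statistics

def median3d(vol):
    """3x3x3 median filter; edge voxels use the shrunken neighborhood."""
    Z, Y, X = len(vol), len(vol[0]), len(vol[0][0])
    out = [[[0] * X for _ in range(Y)] for _ in range(Z)]
    for z in range(Z):
        for y in range(Y):
            for x in range(X):
                nb = [vol[zz][yy][xx]
                      for zz in range(max(0, z - 1), min(Z, z + 2))
                      for yy in range(max(0, y - 1), min(Y, y + 2))
                      for xx in range(max(0, x - 1), min(X, x + 2))]
                out[z][y][x] = statistics.median(nb)
    return out

def median3d_slabs(vol, n_slabs):
    """Split along z into slabs with a one-slice halo (the data a node
    would fetch from its nearest neighbors), filter each independently,
    and stitch the slab interiors back together."""
    Z = len(vol)
    step = -(-Z // n_slabs)          # ceiling division
    out = []
    for z0 in range(0, Z, step):
        z1 = min(Z, z0 + step)
        lo = max(0, z0 - 1)          # one-slice halo below
        block = median3d(vol[lo:min(Z, z1 + 1)])
        out.extend(block[z0 - lo: z0 - lo + (z1 - z0)])
    return out
```

Because each slab carries the halo slices its boundary voxels need, the stitched result is identical to filtering the whole volume at once, which is what makes the decomposition safe to run in parallel.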

  9. Multislice spiral computed tomography imaging in congenital inner ear malformations.

    Science.gov (United States)

    Ma, Hui; Han, Ping; Liang, Bo; Tian, Zhi-liang; Lei, Zi-qiao; Kong, Wei-jia; Feng, Gan-sheng

    2008-01-01

The purpose of this study is to evaluate the usefulness of multislice spiral computed tomography (CT) in the diagnosis of congenital inner ear malformations. Forty-four patients with sensorineural hearing loss were examined on a Somatom Sensation 16 (Siemens) CT scanner. The 3-dimensional reconstructions and multiplanar reformation (MPR) were performed using the volume-rendering technique (VRT) on the workstation. Of the 44 patients examined for this study, 25 patients were found to be normal and 19 patients (36 ears) were diagnosed with congenital inner ear malformations. Of the malformations, the axial, MPR, and VRT images can all display the site and degree in 33 of the ears. Volume-rendering technique images were superior to the axial images in displaying the malformations in 3 ears with small lateral semicircular canal malformations. The common malformations were Michel deformity (1 ear), common cavity deformity (3 ears), incomplete partition I (3 ears), incomplete partition II (Mondini deformity) (5 ears), vestibular and semicircular canal malformations (14 ears), enlarged vestibular aqueduct (16 ears, 6 of which had other malformations), and internal auditory canal malformation (8 ears, all accompanied by other malformations). Multislice spiral CT allows a comprehensive assessment of various congenital inner ear malformations through high-quality MPR and VRT reconstructions. Volume-rendering technique images can display the site and degree of the malformation 3-dimensionally and intuitively. This is very useful for cochlear implantation.

  10. Photoelectronic radiology 1983; X-ray imaging with the computer-assisted technologies

    International Nuclear Information System (INIS)

    Chalaoui, J.; Sylvestre, J.; Robillard, P.; Dussault, R.

    1984-01-01

The development of the discipline of radiology has continued to progress from initial images depicting the structure of organs, to the exploration of dynamic and physiologic phenomena, with improvements in the power of X-ray generators and the refinement of non-toxic contrast media. Until the early 1970s, radiology consisted of extrapolation from a two-dimensional image of a three-dimensional organ, and advances in diagnostic quality related chiefly to improvements in the spatial resolution of the flat image. With the advent of cross-sectional imaging using computer reconstruction, the emphasis has shifted to contrast resolution, to the acquisition of ''pure'' images in the XY plane, and to an area-related approach in diagnosis rather than the traditional organ-oriented method. This new trend has only been made possible by recent developments in the digital and electronics industry. The history of diagnostic radiology up to 1972 is reviewed, followed by a discussion of the major areas of interaction between X-rays and the computer, as represented by the leading-edge technologies that have already received broad acceptance by the health care profession. (author)

  11. A Novel Image Encryption Algorithm Based on a Fractional-Order Hyperchaotic System and DNA Computing

    Directory of Open Access Journals (Sweden)

    Taiyong Li

    2017-01-01

Full Text Available In the era of the Internet, image encryption plays an important role in information security. Chaotic systems and DNA operations have been proven to be powerful for image encryption. To further enhance the security of the image, in this paper, we propose a novel algorithm that combines the fractional-order hyperchaotic Lorenz system and DNA computing (FOHCLDNA) for image encryption. Specifically, the algorithm consists of four parts: firstly, we use a fractional-order hyperchaotic Lorenz system to generate a pseudorandom sequence that is utilized during the whole encryption process; secondly, a simple but effective diffusion scheme is performed to spread a small change in one pixel to all the other pixels; thirdly, the plain image is encoded by DNA rules and corresponding DNA operations are performed; finally, global permutation and 2D and 3D permutation are performed on pixels, bits, and bases. The extensive experimental results on eight publicly available testing images demonstrate that the encryption algorithm achieves state-of-the-art performance in terms of security and robustness when compared with some existing methods, showing that FOHCLDNA is promising for image encryption.
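The DNA encoding and DNA-operation steps can be sketched for a single byte using one of the eight standard 2-bit-per-base encoding rules and a base-wise XOR. This is only the encoding layer; the paper's full scheme additionally drives the key and the permutations from the fractional-order hyperchaotic sequence.

```python
BASES = "ACGT"  # encoding rule: 00->A, 01->C, 10->G, 11->T

def byte_to_dna(b):
    """Encode one byte as four bases, two bits per base (MSB first)."""
    return [BASES[(b >> s) & 0b11] for s in (6, 4, 2, 0)]

def dna_to_byte(seq):
    """Decode four bases back into one byte."""
    v = 0
    for base in seq:
        v = (v << 2) | BASES.index(base)
    return v

def dna_xor(seq_a, seq_b):
    """Base-wise DNA XOR, carried out on the underlying 2-bit codes."""
    return [BASES[BASES.index(a) ^ BASES.index(b)]
            for a, b in zip(seq_a, seq_b)]
```

Because XOR is an involution, applying `dna_xor` with the same key sequence twice returns the original bases, which is what makes decryption the mirror image of encryption.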

  12. Visualization of biomedical image data and irradiation planning using a parallel computing system

    International Nuclear Information System (INIS)

    Lehrig, R.

    1991-01-01

    The contribution explains the development of a novel, low-cost workstation for the processing of biomedical tomographic data sequences. The workstation was to allow both graphical display of the data and implementation of modelling software for irradiation planning, especially for calculation of dose distributions on the basis of the measured tomogram data. The system developed according to these criteria is a parallel computing system which performs secondary, two-dimensional image reconstructions irrespective of the imaging direction of the original tomographic scans. Three-dimensional image reconstructions can be generated from any direction of view, with random selection of sections of the scanned object. (orig./MM) With 69 figs., 2 tabs [de

  13. GPU accelerated generation of digitally reconstructed radiographs for 2-D/3-D image registration.

    Science.gov (United States)

    Dorgham, Osama M; Laycock, Stephen D; Fisher, Mark H

    2012-09-01

    Recent advances in programming languages for graphics processing units (GPUs) provide developers with a convenient way of implementing applications which can be executed on the CPU and GPU interchangeably. GPUs are becoming relatively cheap, powerful, and widely available hardware components, which can be used to perform intensive calculations. The last decade of hardware performance developments shows that GPU-based computation is progressing significantly faster than CPU-based computation, particularly if one considers the execution of highly parallelisable algorithms. Future predictions illustrate that this trend is likely to continue. In this paper, we introduce a way of accelerating 2-D/3-D image registration by developing a hybrid system which executes on the CPU and utilizes the GPU for parallelizing the generation of digitally reconstructed radiographs (DRRs). Based on the advancements of the GPU over the CPU, it is timely to exploit the benefits of many-core GPU technology by developing algorithms for DRR generation. Although some previous work has investigated the rendering of DRRs using the GPU, this paper investigates approximations which reduce the computational overhead while still maintaining a quality consistent with that needed for 2-D/3-D registration with sufficient accuracy to be clinically acceptable in certain applications of radiation oncology. Furthermore, by comparing implementations of 2-D/3-D registration on the CPU and GPU, we investigate current performance and propose an optimal framework for PC implementations addressing the rigid registration problem. Using this framework, we are able to render DRR images from a 256×256×133 CT volume in ~24 ms using an NVidia GeForce 8800 GTX and in ~2 ms using NVidia GeForce GTX 580. In addition to applications requiring fast automatic patient setup, these levels of performance suggest image-guided radiation therapy at video frame rates is technically feasible using relatively low cost PC
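At its core a DRR is a set of Beer-Lambert line integrals of attenuation through the CT volume, one per detector pixel. A serial, parallel-ray sketch of that step is below; the paper's implementation instead casts perspective rays on the GPU, and the function name and voxel-step parameter here are illustrative assumptions.

```python
import math

def drr_parallel(volume, i0=1.0, step=1.0):
    """Parallel-ray DRR: integrate attenuation along the z axis for each
    detector pixel and apply Beer-Lambert attenuation I = I0*exp(-sum)."""
    Z, Y, X = len(volume), len(volume[0]), len(volume[0][0])
    drr = [[0.0] * X for _ in range(Y)]
    for y in range(Y):
        for x in range(X):
            path = sum(volume[z][y][x] for z in range(Z)) * step
            drr[y][x] = i0 * math.exp(-path)
    return drr
```

Each detector pixel is independent of the others, which is exactly why DRR generation maps so well onto many-core GPU hardware.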

  14. Photodeposited diffractive optical elements of computer generated masks

    International Nuclear Information System (INIS)

    Mirchin, N.; Peled, A.; Baal-Zedaka, I.; Margolin, R.; Zagon, M.; Lapsker, I.; Verdyan, A.; Azoulay, J.

    2005-01-01

Diffractive optical elements (DOE) were synthesized on plastic substrates using the photodeposition (PD) technique by depositing amorphous selenium (a-Se) films with argon lasers and UV light. The thin films were deposited typically onto polymethylmethacrylate (PMMA) substrates at room temperature. Scanned beam and contact mask modes were employed using computer-designed DOE lenses. Optical and electron micrographs characterize the surface details. The films were typically 200 nm thick.

  15. Computer program for automatic generation of BWR control rod patterns

    International Nuclear Information System (INIS)

    Taner, M.S.; Levine, S.H.; Hsia, M.Y.

    1990-01-01

    A computer program named OCTOPUS has been developed to automatically determine a control rod pattern that approximates some desired target power distribution as closely as possible without violating any thermal safety or reactor criticality constraints. The program OCTOPUS performs a semi-optimization task based on the method of approximation programming (MAP) to develop control rod patterns. The SIMULATE-E code is used to determine the nucleonic characteristics of the reactor core state

  16. A Computer Program for the Generation of ARIMA Data

    Science.gov (United States)

    Green, Samuel B.; Noles, Keith O.

    1977-01-01

The autoregressive integrated moving average (ARIMA) model has been applied to time series data in psychological and educational research. A program is described that generates ARIMA data of a known order. The program enables researchers to explore statistical properties of ARIMA data and simulate systems producing time-dependent observations.…
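A minimal generator of ARIMA data of known order might look like the following sketch for ARIMA(1, d, 1): an ARMA(1,1) recursion driven by Gaussian noise, cumulatively summed d times. This is an illustrative stand-in, not the published program; parameter names and defaults are assumptions.

```python
import random

def arima(n, phi=0.5, theta=0.3, d=1, sigma=1.0, seed=None):
    """Simulate an ARIMA(1, d, 1) series of length n:
    x_t = phi*x_{t-1} + e_t + theta*e_{t-1}, then integrate d times."""
    rng = random.Random(seed)
    x_prev, e_prev = 0.0, 0.0
    series = []
    for _ in range(n):
        e = rng.gauss(0.0, sigma)
        x = phi * x_prev + e + theta * e_prev
        series.append(x)
        x_prev, e_prev = x, e
    for _ in range(d):                 # the "integrated" part of ARIMA
        acc, out = 0.0, []
        for v in series:
            acc += v
            out.append(acc)
        series = out
    return series
```

Fixing the seed makes runs reproducible, which is what lets a researcher study the statistical properties of a series whose true order is known.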

  17. Computer-aided breast MR image feature analysis for prediction of tumor response to chemotherapy

    International Nuclear Information System (INIS)

    Aghaei, Faranak; Tan, Maxine; Liu, Hong; Zheng, Bin; Hollingsworth, Alan B.; Qian, Wei

    2015-01-01

    Purpose: To identify a new clinical marker based on quantitative kinetic image features analysis and assess its feasibility to predict tumor response to neoadjuvant chemotherapy. Methods: The authors assembled a dataset involving breast MR images acquired from 68 cancer patients before undergoing neoadjuvant chemotherapy. Among them, 25 patients had complete response (CR) and 43 had partial and nonresponse (NR) to chemotherapy based on the response evaluation criteria in solid tumors. The authors developed a computer-aided detection scheme to segment breast areas and tumors depicted on the breast MR images and computed a total of 39 kinetic image features from both tumor and background parenchymal enhancement regions. The authors then applied and tested two approaches to classify between CR and NR cases. The first one analyzed each individual feature and applied a simple feature fusion method that combines classification results from multiple features. The second approach tested an attribute selected classifier that integrates an artificial neural network (ANN) with a wrapper subset evaluator, which was optimized using a leave-one-case-out validation method. Results: In the pool of 39 features, 10 yielded relatively higher classification performance with the areas under receiver operating characteristic curves (AUCs) ranging from 0.61 to 0.78 to classify between CR and NR cases. Using a feature fusion method, the maximum AUC = 0.85 ± 0.05. Using the ANN-based classifier, AUC value significantly increased to 0.96 ± 0.03 (p < 0.01). Conclusions: This study demonstrated that quantitative analysis of kinetic image features computed from breast MR images acquired prechemotherapy has potential to generate a useful clinical marker in predicting tumor response to chemotherapy
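The leave-one-case-out evaluation and the AUC computation used above can be sketched with a deliberately simple nearest-mean score on a single feature. This is illustrative only: the paper trains an ANN with a wrapper feature selector over 39 kinetic features, and the function names and scoring rule here are assumptions.

```python
def loo_scores(features, labels):
    """Leave-one-case-out: score each held-out case by how much closer it
    lies to the positive-class (CR) mean than to the negative-class (NR)
    mean, with both means computed WITHOUT the held-out case."""
    scores = []
    for i, f in enumerate(features):
        pos = [features[j] for j in range(len(features))
               if j != i and labels[j] == 1]
        neg = [features[j] for j in range(len(features))
               if j != i and labels[j] == 0]
        mu_pos = sum(pos) / len(pos)
        mu_neg = sum(neg) / len(neg)
        scores.append(abs(f - mu_neg) - abs(f - mu_pos))
    return scores

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a positive case outscores a negative one."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Holding each case out of its own training means keeps the reported AUC honest on a dataset as small as 68 patients.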

  18. Computer-aided breast MR image feature analysis for prediction of tumor response to chemotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Aghaei, Faranak; Tan, Maxine; Liu, Hong; Zheng, Bin, E-mail: Bin.Zheng-1@ou.edu [School of Electrical and Computer Engineering, University of Oklahoma, Norman, Oklahoma 73019 (United States); Hollingsworth, Alan B. [Mercy Women’s Center, Mercy Health Center, Oklahoma City, Oklahoma 73120 (United States); Qian, Wei [Department of Electrical and Computer Engineering, University of Texas, El Paso, Texas 79968 (United States)

    2015-11-15

    Purpose: To identify a new clinical marker based on quantitative kinetic image features analysis and assess its feasibility to predict tumor response to neoadjuvant chemotherapy. Methods: The authors assembled a dataset involving breast MR images acquired from 68 cancer patients before undergoing neoadjuvant chemotherapy. Among them, 25 patients had complete response (CR) and 43 had partial and nonresponse (NR) to chemotherapy based on the response evaluation criteria in solid tumors. The authors developed a computer-aided detection scheme to segment breast areas and tumors depicted on the breast MR images and computed a total of 39 kinetic image features from both tumor and background parenchymal enhancement regions. The authors then applied and tested two approaches to classify between CR and NR cases. The first one analyzed each individual feature and applied a simple feature fusion method that combines classification results from multiple features. The second approach tested an attribute selected classifier that integrates an artificial neural network (ANN) with a wrapper subset evaluator, which was optimized using a leave-one-case-out validation method. Results: In the pool of 39 features, 10 yielded relatively higher classification performance with the areas under receiver operating characteristic curves (AUCs) ranging from 0.61 to 0.78 to classify between CR and NR cases. Using a feature fusion method, the maximum AUC = 0.85 ± 0.05. Using the ANN-based classifier, AUC value significantly increased to 0.96 ± 0.03 (p < 0.01). Conclusions: This study demonstrated that quantitative analysis of kinetic image features computed from breast MR images acquired prechemotherapy has potential to generate a useful clinical marker in predicting tumor response to chemotherapy.

  19. Improved computer-assisted nuclear imaging in renovascular hypertension

    International Nuclear Information System (INIS)

    Gross, M.L.; Nally, J.V.; Potvini, W.J.; Clarke, H.S. Jr.; Higgins, J.T.; Windham, J.P.

    1985-01-01

A computer-assisted program with digital background subtraction has been developed to analyze the initial 90 second Tc-99m DTPA renal flow scans in an attempt to quantitate the early isotope delivery to and uptake by the kidney. This study was designed to compare the computer-assisted 90 second DTPA scan with the conventional 30 minute I-131 Hippuran scan. Six patients with angiographically-proven unilateral renal artery stenosis were studied. The time-activity curves for both studies were derived from regions of interest selected from the computer-acquired dynamic images. The following parameters were used to assess renal blood flow: differential maximum activity, minimum/maximum activity ratio, and peak width. The computer-assisted DTPA study accurately predicted (6/6) the stenosed side documented angiographically, whereas the conventional Hippuran scan was clearly predictive in only 2/6. In selected cases successfully corrected surgically, the DTPA study proved superior in assessing the degree of patency of the graft. The best discriminatory factors when compared to a template synthesized from curves obtained from normal subjects were differential maximum activity and peak width. The authors conclude that: 1) the computer-assisted 90 second DTPA renal blood flow scan was superior to the conventional I-131 Hippuran scan in demonstrating unilateral renovascular disease; 2) the DTPA study was highly predictive of the angiographic findings; and 3) this non-invasive study should prove useful in the diagnosis and serial evaluation following surgery and/or angioplasty for renal artery stenosis

  20. Applying a computer-aided scheme to detect a new radiographic image marker for prediction of chemotherapy outcome

    International Nuclear Information System (INIS)

    Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; Moore, Kathleen; Liu, Hong; Zheng, Bin

    2016-01-01

To investigate the feasibility of automated segmentation of visceral and subcutaneous fat areas from computed tomography (CT) images of ovarian cancer patients and of applying the computed adiposity-related image features to predict chemotherapy outcome. A computerized image processing scheme was developed to segment visceral and subcutaneous fat areas, and compute adiposity-related image features. Then, logistic regression models were applied to analyze the association between the scheme-generated assessment scores and progression-free survival (PFS) of patients using a leave-one-case-out cross-validation method and a dataset involving 32 patients. The correlation coefficients between automated and radiologist’s manual segmentation of visceral and subcutaneous fat areas were 0.76 and 0.89, respectively. The scheme-generated prediction scores using adiposity-related radiographic image features were significantly associated with patients’ PFS (p < 0.01). Using a computerized scheme makes it possible to segment visceral and subcutaneous fat areas more efficiently and robustly. The computed adiposity-related image features also have potential to improve accuracy in predicting chemotherapy outcome

  1. Computer simulation of orthognathic surgery with video imaging

    Science.gov (United States)

    Sader, Robert; Zeilhofer, Hans-Florian U.; Horch, Hans-Henning

    1994-04-01

Patients with extreme jaw imbalance must often undergo operative corrections. The goal of therapy is to harmonize the stomatognathic system and to correct the facial profile aesthetically. A new procedure is presented which supports the maxillo-facial surgeon in planning the operation and which also shows the patient the expected result of the treatment as video images. Once an x-ray has been digitized it is possible to produce individualized cephalometric analyses. Using a ceph on screen, all current orthognathic operations can be simulated, whereby the bony segments are moved according to given parameters, and a new soft tissue profile can be calculated. The profile of the patient is fed into the computer by way of a video system and correlated to the ceph. Using the simulated operation, the computer calculates a new video image of the patient which presents the expected postoperative appearance. In studies of patients treated between 1987 and 1991, 76 out of 121 patients could be evaluated. The deviation in profile change varied between 0.0 and 1.6 mm. A side effect of the practical application was an increase in patient compliance.

  2. Computed tomography and magnetic resonance imaging in vascular surgical emergencies

    International Nuclear Information System (INIS)

    Vogelzang, R.L.; Fisher, M.R.

    1987-01-01

Computed tomography (CT) scanning is now universally accepted as an extremely useful tool in the investigation of disease throughout the body. CT has revolutionized the practice of medicine in virtually every specialty. In vascular surgery the routine use of CT in a variety of problems has changed the way diagnoses are made. It allows prompt recognition of conditions that were difficult if not impossible to diagnose using older techniques. Nowhere is this concept better epitomized than in the realm of vascular surgical emergencies. In these cases, life- or limb-threatening conditions such as hemorrhage, prosthetic graft infection, or vascular occlusion exist as the result of aneurysm, trauma, dissection, tumor, or previous arterial surgery. Prompt and appropriate diagnosis of the immediate problem and its cause is afforded by the use of contrast-enhanced CT. This frequently obviates the need for angiography and eliminates less accurate tests such as plain films, barium studies, nuclear medicine scans, and/or ultrasound. In the past several years magnetic resonance imaging (MRI) of the body has become a practical reality. The technique offers promise in the imaging of many disease processes. In the neural axis it has become a preferred modality due to inherently higher contrast resolution and freedom from artifacts. Progress in body imaging has been slower due to problems with motion artifact, but early results in cardiovascular imaging demonstrate that MRI offers theoretical advantages over CT that may make it the imaging test of choice in vascular disease. This paper identifies those vascular surgical emergencies in which CT and MRI are most useful and clarifies and illustrates the diagnostic features of the various conditions encountered

  3. Computer-aided pulmonary image analysis in small animal models

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Ziyue; Mansoor, Awais; Mollura, Daniel J. [Center for Infectious Disease Imaging (CIDI), Radiology and Imaging Sciences, National Institutes of Health (NIH), Bethesda, Maryland 32892 (United States); Bagci, Ulas, E-mail: ulasbagci@gmail.com [Center for Research in Computer Vision (CRCV), University of Central Florida (UCF), Orlando, Florida 32816 (United States); Kramer-Marek, Gabriela [The Institute of Cancer Research, London SW7 3RP (United Kingdom); Luna, Brian [Microfluidic Laboratory Automation, University of California-Irvine, Irvine, California 92697-2715 (United States); Kubler, Andre [Department of Medicine, Imperial College London, London SW7 2AZ (United Kingdom); Dey, Bappaditya; Jain, Sanjay [Center for Tuberculosis Research, Johns Hopkins University School of Medicine, Baltimore, Maryland 21231 (United States); Foster, Brent [Department of Biomedical Engineering, University of California-Davis, Davis, California 95817 (United States); Papadakis, Georgios Z. [Radiology and Imaging Sciences, National Institutes of Health (NIH), Bethesda, Maryland 32892 (United States); Camp, Jeremy V. [Department of Microbiology and Immunology, University of Louisville, Louisville, Kentucky 40202 (United States); Jonsson, Colleen B. [National Institute for Mathematical and Biological Synthesis, University of Tennessee, Knoxville, Tennessee 37996 (United States); Bishai, William R. [Howard Hughes Medical Institute, Chevy Chase, Maryland 20815 and Center for Tuberculosis Research, Johns Hopkins University School of Medicine, Baltimore, Maryland 21231 (United States); Udupa, Jayaram K. [Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States)

    2015-07-15

    Purpose: To develop an automated pulmonary image analysis framework for infectious lung diseases in small animal models. Methods: The authors describe a novel pathological lung and airway segmentation method for small animals. The proposed framework includes identification of abnormal imaging patterns pertaining to infectious lung diseases. First, the authors’ system estimates an expected lung volume by utilizing a regression function between total lung capacity and approximated rib cage volume. A significant difference between the expected lung volume and the initial lung segmentation indicates the presence of severe pathology, and invokes a machine learning based abnormal imaging pattern detection system next. The final stage of the proposed framework is the automatic extraction of airway tree for which new affinity relationships within the fuzzy connectedness image segmentation framework are proposed by combining Hessian and gray-scale morphological reconstruction filters. Results: 133 CT scans were collected from four different studies encompassing a wide spectrum of pulmonary abnormalities pertaining to two commonly used small animal models (ferret and rabbit). Sensitivity and specificity were greater than 90% for pathological lung segmentation (average dice similarity coefficient > 0.9). While qualitative visual assessments of airway tree extraction were performed by the participating expert radiologists, for quantitative evaluation the authors validated the proposed airway extraction method by using publicly available EXACT’09 data set. Conclusions: The authors developed a comprehensive computer-aided pulmonary image analysis framework for preclinical research applications. The proposed framework consists of automatic pathological lung segmentation and accurate airway tree extraction. The framework has high sensitivity and specificity; therefore, it can contribute advances in preclinical research in pulmonary diseases.
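The expected-lung-volume check described above can be sketched as an ordinary least-squares regression of lung volume on approximated rib cage volume, flagging cases that fall well below the prediction. The function names and the 25% tolerance are illustrative assumptions; the paper's regression function and threshold are not specified here.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def pathology_suspected(rib_cage_volume, lung_volume, a, b, tol=0.25):
    """Flag severe pathology when the segmented lung volume falls well
    below the volume expected from the rib cage regression."""
    expected = a * rib_cage_volume + b
    return lung_volume < (1.0 - tol) * expected
```

In the framework this test acts as a gatekeeper: only scans whose initial lung segmentation is suspiciously small invoke the more expensive machine-learning abnormality detector.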

  4. Cone beam computed tomography image guidance system for a dedicated intracranial radiosurgery treatment unit.

    Science.gov (United States)

    Ruschin, Mark; Komljenovic, Philip T; Ansell, Steve; Ménard, Cynthia; Bootsma, Gregory; Cho, Young-Bin; Chung, Caroline; Jaffray, David

    2013-01-01

Image guidance has improved the precision of fractionated radiation treatment delivery on linear accelerators. Precise radiation delivery is particularly critical when high doses are delivered to complex shapes with steep dose gradients near critical structures, as is the case for intracranial radiosurgery. To reduce potential geometric uncertainties, a cone beam computed tomography (CT) image guidance system was developed in-house to generate high-resolution images of the head at the time of treatment, using a dedicated radiosurgery unit. The performance and initial clinical use of this imaging system are described. A kilovoltage cone beam CT system was integrated with a Leksell Gamma Knife Perfexion radiosurgery unit. The X-ray tube and flat-panel detector are mounted on a translational arm, which is parked above the treatment unit when not in use. Upon descent, a rotational axis provides 210° of rotation for cone beam CT scans. Mechanical integrity of the system was evaluated over a 6-month period. Subsequent clinical commissioning included end-to-end testing of targeting performance and subjective image quality performance in phantoms. The system has been used to image 2 patients, 1 of whom received single-fraction radiosurgery and 1 who received 3 fractions, using a relocatable head frame. Images of phantoms demonstrated soft tissue contrast visibility and submillimeter spatial resolution. A contrast difference of 35 HU was easily detected at a calibration dose of 1.2 cGy (center of head phantom). The shape of the mechanical flex vs scan angle was highly reproducible. The cone beam CT image guidance system was successfully adapted to a radiosurgery unit. The system is capable of producing high-resolution images of bone and soft tissue. The system is in clinical use and provides excellent image guidance without invasive frames. Copyright © 2013 Elsevier Inc. All rights reserved.

  5. Performance characterization of megavoltage computed tomography imaging on a helical tomotherapy unit

    International Nuclear Information System (INIS)

    Meeks, Sanford L.; Harmon, Joseph F. Jr.; Langen, Katja M.; Willoughby, Twyla R.; Wagner, Thomas H.; Kupelian, Patrick A.

    2005-01-01

Helical tomotherapy is an innovative means of delivering IGRT and IMRT using a device that combines features of a linear accelerator and a helical computed tomography (CT) scanner. The HI-ART II can generate CT images from the same megavoltage x-ray beam it uses for treatment. These megavoltage CT (MVCT) images offer verification of the patient position prior to and potentially during radiation therapy. Since the unit uses the actual treatment beam as the x-ray source for image acquisition, no surrogate telemetry systems are required to register image space to treatment space. The disadvantage of using the treatment beam for imaging, however, is that the physics of radiation interactions in the megavoltage energy range may force compromises between the dose delivered and the image quality in comparison to diagnostic CT scanners. The performance of the system is therefore characterized in terms of objective measures of noise, uniformity, contrast, and spatial resolution as a function of the dose delivered by the MVCT beam. The uniformity and spatial resolution of MVCT images generated by the HI-ART II are comparable to those of diagnostic CT images. Furthermore, the MVCT scan contrast is linear with respect to the electron density of the material imaged. MVCT images do not have the same performance characteristics as state-of-the-art diagnostic CT scanners when one objectively examines noise and low-contrast resolution. These inferior results may be explained, at least partially, by the low doses delivered by our unit; the dose is 1.1 cGy in a 20 cm diameter cylindrical phantom. In spite of the poorer low-contrast resolution, these relatively low-dose MVCT scans provide sufficient contrast to delineate many soft-tissue structures. Hence, these images are useful not only for verifying the patient's position at the time of therapy, but are also sufficient for delineating many anatomic structures. In conjunction with the ability to recalculate radiotherapy doses on

  6. STUDY OF IMAGE SEGMENTATION TECHNIQUES ON RETINAL IMAGES FOR HEALTH CARE MANAGEMENT WITH FAST COMPUTING

    Directory of Open Access Journals (Sweden)

    Srikanth Prabhu

    2012-02-01

Full Text Available The role of segmentation in image processing is to separate the foreground from the background. In this process, features become clearly visible when appropriate filters are applied to the image. In this paper, emphasis has been laid on the segmentation of biometric retinal images to filter out the vessels explicitly, for evaluating bifurcation points and features for diabetic retinopathy. Segmentation of images is performed by calculating ridges or by morphology. Ridges are those areas in an image where there is sharp contrast in features. Morphology targets features using structuring elements. Structuring elements have different shapes, such as disks and lines, and are used for extracting features of those shapes. When segmentation was performed on retinal images, problems were encountered during the image pre-processing stage. Edge detection techniques were also deployed to find the contours of the retinal images. After segmentation had been performed, it was seen that artifacts in the retinal images were minimal when the ridge-based segmentation technique was deployed. In the field of Health Care Management, image segmentation has an important role to play, as it helps determine whether a person is normal or has a disease, especially diabetes. During the process of segmentation, features are classified as diseased ones or artifacts. The problem comes when artifacts are classified as diseased ones. This results in misclassification, which is discussed in the analysis section. We have achieved fast computing with better performance, in terms of speed, for non-repeating features when compared to repeating features.
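The morphological extraction of elongated features with shaped structuring elements, as described above, can be sketched in a few lines. This is a generic illustration rather than the paper's actual pipeline; the image, element length, and function names are invented for the example. A white top-hat (image minus its grey opening) with a line-shaped footprint keeps bright structures thinner than the element, which is the basic idea behind line-element vessel filters.

```python
import numpy as np
from scipy import ndimage

def line_structuring_element(length, horizontal=True):
    """Flat line-shaped structuring element, used here to target
    elongated features such as vessels."""
    se = np.ones((1, length), dtype=bool)
    return se if horizontal else se.T

def tophat_vessel_response(image, length=7):
    """White top-hat: image minus its grey opening. Bright structures
    narrower than the element survive; broad background is removed."""
    se = line_structuring_element(length)
    opened = ndimage.grey_opening(image, footprint=se)
    return image - opened

# Toy image: a bright 1-pixel-wide vertical "vessel" on a dark background.
img = np.zeros((16, 16))
img[:, 8] = 1.0
response = tophat_vessel_response(img, length=7)
```

With a horizontal line element of length 7, the one-pixel-wide vertical stripe is entirely removed by the opening, so the top-hat response reproduces the vessel while leaving the background at zero.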

  7. Many-core computing for space-based stereoscopic imaging

    Science.gov (United States)

    McCall, Paul; Torres, Gildo; LeGrand, Keith; Adjouadi, Malek; Liu, Chen; Darling, Jacob; Pernicka, Henry

The potential benefits of using parallel computing in real-time visual-based satellite proximity operations missions are investigated. Improvements in performance and relative navigation solutions over single-thread systems can be achieved through multi- and many-core computing. Stochastic relative orbit determination methods benefit from the higher measurement frequencies, allowing them to more accurately determine the associated statistical properties of the relative orbital elements. More accurate orbit determination can lead to reduced fuel consumption and extended mission capabilities and duration. Inherent to the process of stereoscopic image processing is the difficulty of loading, managing, parsing, and evaluating large amounts of data efficiently, which may result in delays or highly time-consuming processes for single (or few) processor systems or platforms. In this research we utilize the Single-Chip Cloud Computer (SCC), a fully programmable 48-core experimental processor created by Intel Labs as a platform for many-core software research, provided with a high-speed on-chip network for sharing information, advanced power management technologies, and support for message-passing. The results from utilizing the SCC platform for the stereoscopic image processing application are presented in the form of Performance, Power, Energy, and Energy-Delay-Product (EDP) metrics. Also, a comparison between the SCC results and those obtained from executing the same application on a commercial PC is presented, showing the potential benefits of utilizing the SCC in particular, and many-core platforms in general, for real-time processing of visual-based satellite proximity operations missions.
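The Energy-Delay-Product metric mentioned above combines the standard quantities in a simple way: energy is power times runtime, and EDP is energy times runtime again, penalizing platforms that are either slow or power-hungry. The numbers below are hypothetical, not the SCC measurements from the paper.

```python
def energy_delay_product(power_watts, runtime_s):
    """Return (energy in joules, EDP in joule-seconds).
    E = P * t ; EDP = E * t. Lower EDP is better."""
    energy = power_watts * runtime_s
    return energy, energy * runtime_s

# Hypothetical comparison: a many-core run vs. a single-core run.
e_many, edp_many = energy_delay_product(power_watts=60.0, runtime_s=2.0)
e_one,  edp_one  = energy_delay_product(power_watts=25.0, runtime_s=12.0)
```

Here the many-core run draws more power but finishes so much sooner that both its energy and its EDP are lower, which is the trade-off the metric is designed to expose.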

  8. Dosimetry in abdominal imaging by 6-slice computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Rodrigues, Sonia Isabel [Hospital de Faro, EPE (Portugal); Abrantes, Antonio Fernando; Ribeiro, Luis Pedro; Almeida, Rui Pedro Pereira [University of Algarve (Portugal). School of Health. Dept. of Radiology

    2012-11-15

    Objective: To determine the effective dose in abdominal computed tomography imaging and to study the influence of patients' characteristics on the received dose. Materials and Methods: Dose values measurements were performed with an ionization chamber on phantoms to check the agreement between dose values and those presented by the computed tomography apparatus, besides their compliance with the recommended reference dose levels. Later, values of dose received by physically able patients submitted to abdominal computed tomography (n = 100) were measured and correlated with their anthropometric characteristics. Finally, the dose to organs was simulated with the Monte Carlo method using the CT-Expo V 1.5 software, and the effect of automatic exposure control on such examinations. Results: The main characteristics directly influencing the dose include the patients' body mass, abdominal perimeter and body mass index, whose correlation is linear and positive. Conclusion: The radiation dose received from abdominal CT scans depends on some patient's characteristics, and it is important to adjust the acquisition parameters to their dimensions (author)

  9. Computer processing of the scintigraphic image using digital filtering techniques

    International Nuclear Information System (INIS)

    Matsuo, Michimasa

    1976-01-01

The theory of digital filtering was studied as a method for the computer processing of scintigraphic images. The characteristics and design techniques of finite impulse response (FIR) digital filters with linear phases were examined using the z-transform. The conventional data processing method, smoothing, could be recognized as one kind of linear-phase FIR low-pass digital filtering. Ten representative FIR low-pass digital filters with various cut-off frequencies were scrutinized in the frequency domain in one and two dimensions. These filters were applied to phantom studies with cold targets, using a Scinticamera-Minicomputer on-line System. These studies revealed that the resultant images had a direct connection with the magnitude response of the filter, that is, they could be estimated fairly well from the frequency response of the digital filter used. The filter, which was estimated from phantom studies as optimal for liver scintigrams using ¹⁹⁸Au-colloid, was successfully applied in clinical use for detecting true cold lesions and, at the same time, for eliminating spurious images. (J.P.N.)
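The abstract's observation that smoothing is a linear-phase FIR low-pass filter can be made concrete with a minimal sketch (not the paper's specific filter designs): a uniform 3x3 kernel is symmetric and of finite support, hence linear-phase FIR, and applying it is a direct 2-D convolution.

```python
import numpy as np
from scipy.signal import convolve2d

# A 3x3 uniform "smoothing" kernel is itself a linear-phase FIR
# low-pass filter (symmetric coefficients, finite support).
kernel = np.full((3, 3), 1.0 / 9.0)

def fir_lowpass(image, h):
    """Apply a 2-D FIR filter by direct convolution, keeping the
    input size and mirroring the boundary to limit edge effects."""
    return convolve2d(image, h, mode="same", boundary="symm")

# Toy scintigraphic frame: one hot pixel (a count spike).
frame = np.zeros((5, 5))
frame[2, 2] = 9.0
smoothed = fir_lowpass(frame, kernel)
```

The spike is spread over its 3x3 neighborhood while the total counts are preserved, which is exactly the low-pass behavior the abstract attributes to smoothing.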

  10. Optimization of dendrimer structure for sentinel lymph node imaging: Effects of generation and terminal group.

    Science.gov (United States)

    Niki, Yuichiro; Ogawa, Mikako; Makiura, Rie; Magata, Yasuhiro; Kojima, Chie

    2015-11-01

The detection of the sentinel lymph node (SLN), the first lymph node draining tumor cells, is important in cancer diagnosis and therapy. Dendrimers are synthetic macromolecules with highly controllable structures, and are potent multifunctional imaging agents. In this study, 12 types of dendrimer of different generations (G2, G4, G6, and G8) and different terminal groups (amino, carboxyl, and acetyl) were prepared to determine the optimal dendrimer structure for SLN imaging. Radiolabeled dendrimers were intradermally administered to the right footpads of rats. All G2 dendrimers were predominantly accumulated in the kidney. Amino-terminal, acetyl-terminal, and carboxyl-terminal dendrimers of greater than G4 were mostly located at the injection site, in the blood, and in the SLN, respectively. The carboxyl-terminal dendrimers were largely unrecognized by macrophages and T-cells in the SLN. Finally, SLN detection was successfully performed by single photon emission computed tomography imaging using carboxyl-terminal dendrimers of greater than G4. The early detection of tumor cells in the sentinel draining lymph nodes (SLN) is of utmost importance in terms of determining cancer prognosis and devising treatment. In this article, the authors investigated various formulations of dendrimers to determine the optimal one for tumor detection. The data generated from this study would help clinicians fight the cancer battle in the near future. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Computer-generated video fly-through: an aid to visual impact assessment for windfarms

    International Nuclear Information System (INIS)

    Neilson, G.; Leeming, T.; Hall, S.

    1998-01-01

Computer-generated video fly-through provides a new method of assessing the visual impact of wind farms. With a PC, software, and a digital terrain model of the wind farm, it is possible to produce videos ranging from wireframe to realistically shaded models. Using computer-generated video fly-through, visually sensitive corridors can be explored fully, wind turbine rotors can be seen in motion, critical viewpoints can be identified for photomontages, and the context of the wind farm can be better appreciated. This paper describes the techniques of computer-generated video fly-through and examines its various applications in the visual impact assessment of wind farms. (Author)

  12. Colour computer-generated holography for point clouds utilizing the Phong illumination model.

    Science.gov (United States)

    Symeonidou, Athanasia; Blinder, David; Schelkens, Peter

    2018-04-16

    A technique integrating the bidirectional reflectance distribution function (BRDF) is proposed to generate realistic high-quality colour computer-generated holograms (CGHs). We build on prior work, namely a fast computer-generated holography method for point clouds that handles occlusions. We extend the method by integrating the Phong illumination model so that the properties of the objects' surfaces are taken into account to achieve natural light phenomena such as reflections and shadows. Our experiments show that rendering holograms with the proposed algorithm provides realistic looking objects without any noteworthy increase to the computational cost.
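The Phong illumination model that the method above integrates is a standard sum of ambient, diffuse, and specular terms. The following sketch evaluates it at a single surface point; the coefficients and vectors are illustrative, not values from the paper, and the hologram-specific occlusion handling is not reproduced here.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong(normal, light_dir, view_dir, ka=0.1, kd=0.7, ks=0.2, shininess=16):
    """Classic Phong illumination: ambient + diffuse + specular.
    All direction vectors point away from the surface point."""
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    diffuse = max(np.dot(n, l), 0.0)
    r = 2.0 * np.dot(n, l) * n - l          # reflect light about the normal
    specular = max(np.dot(r, v), 0.0) ** shininess if diffuse > 0 else 0.0
    return ka + kd * diffuse + ks * specular

# Light and viewer both along the surface normal: full intensity.
i_head_on = phong(np.array([0.0, 0.0, 1.0]),
                  np.array([0.0, 0.0, 1.0]),
                  np.array([0.0, 0.0, 1.0]))

# Light grazing the surface: only the ambient term remains.
i_grazing = phong(np.array([0.0, 0.0, 1.0]),
                  np.array([1.0, 0.0, 0.0]),
                  np.array([0.0, 0.0, 1.0]))
```

In a point-cloud CGH pipeline this shading would be evaluated per point before the point's contribution is propagated to the hologram plane.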

  13. Applying a CAD-generated imaging marker to assess short-term breast cancer risk

    Science.gov (United States)

    Mirniaharikandehei, Seyedehnafiseh; Zarafshani, Ali; Heidari, Morteza; Wang, Yunzhi; Aghaei, Faranak; Zheng, Bin

    2018-02-01

Although whether using computer-aided detection (CAD) helps improve radiologists' performance in reading and interpreting mammograms remains controversial due to higher false-positive detection rates, the objective of this study is to investigate and test a new hypothesis that CAD-generated false-positives, in particular the bilateral summation of false-positives, constitute a potential imaging marker associated with short-term breast cancer risk. An image dataset involving negative screening mammograms acquired from 1,044 women was retrospectively assembled. Each case involves 4 images: the craniocaudal (CC) and mediolateral oblique (MLO) views of the left and right breasts. In the next subsequent mammography screening, 402 cases were positive for cancer and 642 remained negative. A CAD scheme was applied to process all "prior" negative mammograms. Several features were extracted from the CAD scheme, including detection seeds, the total number of false-positive regions, the average of the detection scores, and the sum of the detection scores in the CC and MLO view images. The features computed from the two bilateral images of the left and right breasts, from either the CC or MLO view, were then combined. In order to predict the likelihood of each testing case being positive in the next subsequent screening, two logistic regression models were trained and tested using a leave-one-case-out cross-validation method. Data analysis demonstrated a maximum prediction accuracy with an area under the ROC curve of AUC=0.65+/-0.017 and a maximum adjusted odds ratio of 4.49 with a 95% confidence interval of [2.95, 6.83]. The results also illustrated an increasing trend in the adjusted odds ratio and risk prediction scores, supporting the hypothesis that CAD-generated false-positives are associated with short-term breast cancer risk.
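The leave-one-case-out protocol described above (train on all cases but one, score the held-out case, repeat) can be sketched with a plain gradient-descent logistic regression. Everything here is a toy: the single "summed false-positive score" feature, the eight cases, and the helper names are invented for illustration.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=500):
    """Plain gradient-descent logistic regression, bias folded in."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def loo_predictions(X, y):
    """Leave-one-case-out: train on all cases but one, score the
    held-out case, and repeat for every case."""
    scores = np.empty(len(y))
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        w = fit_logistic(X[mask], y[mask])
        xb = np.append(X[i], 1.0)
        scores[i] = 1.0 / (1.0 + np.exp(-xb @ w))
    return scores

# Hypothetical 1-D feature (e.g. a bilateral sum of CAD detection
# scores): 4 low-risk cases near 0, 4 high-risk cases near 3.
X = np.array([[0.1], [0.3], [-0.2], [0.2], [2.8], [3.1], [3.3], [2.9]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
scores = loo_predictions(X, y)
```

Because each case is scored by a model that never saw it, these predictions can be compared against outcomes (e.g. via an ROC curve) without the optimistic bias of resubstitution.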

  14. [Diagnostic value of high-resolution computed tomography imaging in congenital inner ear malformations].

    Science.gov (United States)

    Sun, Xiaowei; Ding, Yuanping; Zhang, Jianji; Chen, Ying; Xu, Anting; Dou, Fenfen; Zhang, Zihe

    2007-02-01

To observe the inner ear structure with volume rendering (VR) reconstruction and to evaluate the role of high-resolution computed tomography (HRCT) in congenital inner ear malformations. HRCT scanning was performed in 10 patients (20 ears) without ear disease (control group) and 7 patients (11 ears) with inner ear malformations (IEM group), and the original data were processed with VR reconstruction. The inner ear osseous labyrinth structure in the images generated by these techniques was observed in the normal ears and the malformed ears, respectively. The inner ear osseous labyrinth structure and its relationships were displayed clearly in VR imaging in the control group; meanwhile, the character and degree of the malformed structures were also displayed clearly in the IEM group. Of the seven patients (11 ears) with congenital inner ear malformations, the axial, MPR and VR images could display the site and degree of malformation in 9 ears. VR images were superior to the axial images in displaying the malformations in 2 ears with small lateral semicircular canal malformations. The malformations included Mondini deformity (7 ears), vestibular and semicircular canal malformations (3 ears), vestibular aqueduct dilation (7 ears, of which 6 were accompanied by other malformations), and internal auditory canal malformation (2 ears, all accompanied by other malformations). HRCT can display the normal structure of the bony inner ear through high-quality VR reconstructions. VR images can also display the site and degree of the malformations three-dimensionally and intuitively. HRCT is valuable in diagnosing inner ear malformations.

  15. Promises and challenges for the implementation of computational medical imaging (radiomics) in oncology.

    Science.gov (United States)

    Limkin, E J; Sun, R; Dercle, L; Zacharaki, E I; Robert, C; Reuzé, S; Schernberg, A; Paragios, N; Deutsch, E; Ferté, C

    2017-06-01

Medical image processing and analysis (also known as Radiomics) is a rapidly growing discipline that maps digital medical images into quantitative data, with the end goal of generating imaging biomarkers as decision support tools for clinical practice. The use of imaging data from routine clinical work-up has tremendous potential in improving cancer care by heightening understanding of tumor biology and aiding in the implementation of precision medicine. As a noninvasive method of assessing the tumor and its microenvironment in their entirety, radiomics allows the evaluation and monitoring of tumor characteristics such as temporal and spatial heterogeneity. One can observe a rapid increase in the number of computational medical imaging publications, milestones that have highlighted the utility of imaging biomarkers in oncology. Nevertheless, the use of radiomics as clinical biomarkers still necessitates amelioration and standardization in order to achieve routine clinical adoption. This Review addresses the critical issues to ensure the proper development of radiomics as a biomarker and facilitate its implementation in clinical practice. © The Author 2017. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  16. A kind of video image digitizing circuit based on computer parallel port

    International Nuclear Information System (INIS)

    Wang Yi; Tang Le; Cheng Jianping; Li Yuanjing; Zhang Binquan

    2003-01-01

A video image digitizing circuit based on the parallel port was developed to digitize the flash X-ray images in our Multi-Channel Digital Flash X-ray Imaging System. The circuit can digitize video images and store them in static memory. The digital images can be transferred to a computer through the parallel port and can be displayed, processed, and stored. (authors)

  17. Analysis of an industrial process simulator column using third-generation computed tomography

    International Nuclear Information System (INIS)

    Kirita, Rodrigo; Carvalho, Diego Vergacas de Sousa; Mesquita, Carlos Henrique de; Vasquez, Pablo Antonio S.; Hamada, Margarida Mizue

    2011-01-01

The CT methodology must be tested using a simulator column in the laboratory before applying it in industrial plants. In this work, using the third-generation industrial computed tomography scanner developed at IPEN, a gas absorption column, used as a simulator column for industrial processes, was evaluated. It is a glass cylindrical tube of 90 mm diameter and 1400 mm height, comprising a random packed column, a liquid circuit (water), and a gas circuit. Gamma-ray tomography experiments were carried out using this simulator column empty and filled with water. The scanner was set for 90 views and 19 projections for each detector, totalling 11970 projections. The resulting images describe the presence of the liquid or gas phases and make it possible to evaluate the linear attenuation coefficients inside the column. In this case, the linear attenuation coefficient for water was 0.0813 cm⁻¹. It was established that the newly developed third-generation fan-beam gamma scanner unit has a spatial resolution acceptable for the size of the column used in this study. (author)
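The linear attenuation coefficient reported above follows from the Beer-Lambert law, I = I0·exp(-μx). The sketch below inverts that relation for a single ray path; the beam intensity and 9 cm path length are invented for illustration, while the μ value matches the 0.0813 cm⁻¹ reported in the abstract.

```python
import numpy as np

def linear_attenuation(I0, I, thickness_cm):
    """Beer-Lambert law: I = I0 * exp(-mu * x)  =>  mu = ln(I0/I) / x."""
    return np.log(I0 / I) / thickness_cm

# Hypothetical transmission measurement through 9 cm of water,
# using mu_water = 0.0813 cm^-1 from the study.
mu = 0.0813
I0 = 1000.0
I = I0 * np.exp(-mu * 9.0)       # simulated detector reading
mu_est = linear_attenuation(I0, I, 9.0)
```

A tomographic scanner effectively solves this relation along many ray paths at once, reconstructing a map of μ over the column cross-section.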

  18. Independent component analysis of dynamic contrast-enhanced computed tomography images

    Energy Technology Data Exchange (ETDEWEB)

    Koh, T S [School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Ave, Singapore 639798 (Singapore); Yang, X [School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Ave, Singapore 639798 (Singapore); Bisdas, S [Department of Diagnostic and Interventional Radiology, Johann Wolfgang Goethe University Hospital, Theodor-Stern-Kai 7, D-60590 Frankfurt (Germany); Lim, C C T [Department of Neuroradiology, National Neuroscience Institute, 11 Jalan Tan Tock Seng, Singapore 308433 (Singapore)

    2006-10-07

    Independent component analysis (ICA) was applied on dynamic contrast-enhanced computed tomography images of cerebral tumours to extract spatial component maps of the underlying vascular structures, which correspond to different haemodynamic phases as depicted by the passage of the contrast medium. The locations of arteries, veins and tumours can be separately identified on these spatial component maps. As the contrast enhancement behaviour of the cerebral tumour differs from the normal tissues, ICA yields a tumour component map that reveals the location and extent of the tumour. Tumour outlines can be generated using the tumour component maps, with relatively simple segmentation methods. (note)
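The blind separation step described above can be illustrated on synthetic data. This is a generic ICA sketch, not the paper's processing chain: two invented "time-attenuation" sources stand in for arterial and tumour enhancement curves, a known mixing matrix stands in for pixels that see both, and scikit-learn's FastICA recovers the sources up to sign, order, and scale.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two synthetic sources and a known 2x2 mixing matrix.
t = np.linspace(0, 1, 500)
s1 = np.sin(8 * np.pi * t)                 # fast oscillation
s2 = np.sign(np.sin(3 * np.pi * t))        # slow square wave
S = np.c_[s1, s2]
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])                 # each "pixel" sees both sources
X = S @ A.T                                # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)               # recovered components, up to
                                           # sign, permutation, and scale
```

In the dynamic contrast-enhanced CT setting, each recovered component comes with a spatial map of mixing weights, and it is those maps that localize arteries, veins, and tumour.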

  19. Automatic generation of computer programs servicing TFTR console displays

    International Nuclear Information System (INIS)

    Eisenberg, H.

    1983-01-01

A number of alternatives were considered in providing programs to support the several hundred displays required for control and monitoring of TFTR equipment. Since similar functions were performed, an automated method of creating programs was suggested. The complexity of a single program servicing as many as thirty consoles militated against that approach. Similarly, creation of a syntactic language, while elegant, was deemed to be too time consuming and had the disadvantage of requiring a working knowledge of the language at a programming level. It was elected to pursue a method of generating an individual program to service a particular display. A feasibility study was conducted and the Control and Monitor Display Generator system (CMDG) was developed. A Control and Monitor Display Service Program (CMDS) provides a means of performing monitor and control functions for devices associated with TFTR subsystems, as well as other user functions, via TFTR Control Consoles. This paper discusses the specific capabilities provided by CMDS in a usage context, as well as the mechanics of implementation

  20. A method for generating high resolution satellite image time series

    Science.gov (United States)

    Guo, Tao

    2014-10-01

There is an increasing demand for satellite remote sensing data with both high spatial and high temporal resolution in many applications. But it is still a challenge to simultaneously improve spatial resolution and temporal frequency due to the technical limits of current satellite observation systems. To this end, much R&D effort has been ongoing for years and has led to some successes, roughly in two respects: on the one hand, super-resolution, pan-sharpening, and similar methods can effectively enhance the spatial resolution and generate good visual effects, but they hardly preserve spectral signatures and thus offer inadequate analytical value; on the other hand, time interpolation is a straightforward method to increase temporal frequency, but in fact it adds little informative content. In this paper we present a novel method to simulate high resolution time series data by combining low resolution time series data with only a very small number of high resolution data. Our method starts with a pair of high and low resolution data sets, and then a spatial registration is done by introducing the LDA model to map high and low resolution pixels correspondingly. Afterwards, temporal change information is captured through a comparison of low resolution time series data, then projected onto the high resolution data plane and assigned to each high resolution pixel according to the predefined temporal change patterns of each type of ground object. Finally, the simulated high resolution data is generated. A preliminary experiment shows that our method can simulate high resolution data with reasonable accuracy. The contribution of our method is to enable timely monitoring of temporal changes through analysis of a time sequence of low resolution images only, so that usage of costly high resolution data can be reduced as much as possible; it presents a highly effective way to build up an economically operational monitoring solution for agriculture, forest, and land use investigation

  1. Classification of bacterial contamination using image processing and distributed computing.

    Science.gov (United States)

    Ahmed, W M; Bayraktar, B; Bhunia, A; Hirleman, E D; Robinson, J P; Rajwa, B

    2013-01-01

Disease outbreaks due to contaminated food are a major concern not only for the food-processing industry but also for the public at large. Techniques for automated detection and classification of microorganisms can be a great help in preventing outbreaks and maintaining the safety of the nation's food supply. Identification and classification of foodborne pathogens using colony scatter patterns is a promising new label-free technique that utilizes image-analysis and machine-learning tools. However, the feature-extraction tools employed for this approach are computationally complex, and choosing the right combination of scatter-related features requires extensive testing with different feature combinations. In the presented work we used computer clusters to speed up the feature-extraction process, which enabled us to analyze the contribution of different scatter-based features to the overall classification accuracy. A set of 1000 scatter patterns representing ten different bacterial strains was used. Zernike and Chebyshev moments as well as Haralick texture features were computed from the available light-scatter patterns. The most promising features were first selected using Fisher's discriminant analysis, and subsequently a support-vector-machine (SVM) classifier with a linear kernel was used. With extensive testing we were able to identify a small subset of features that produced the desired results in terms of classification accuracy and execution speed. The use of distributed computing for scatter-pattern analysis, feature extraction, and selection provides a feasible mechanism for large-scale deployment of a light scatter-based approach to bacterial classification.
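The Fisher-discriminant feature-selection step mentioned above amounts to ranking each feature by how far apart the class means are relative to the within-class spread. The sketch below scores two invented features for a two-class problem; the data, seed, and function name are illustrative, not the paper's Zernike/Chebyshev/Haralick features.

```python
import numpy as np

def fisher_score(x, y):
    """Fisher's discriminant ratio for one feature over two classes:
    (m0 - m1)^2 / (v0 + v1). Larger means more class-separating."""
    x0, x1 = x[y == 0], x[y == 1]
    return (x0.mean() - x1.mean()) ** 2 / (x0.var() + x1.var())

# Hypothetical scatter-pattern features for two bacterial strains:
# feature 0 separates the classes, feature 1 is pure noise.
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 50)
f_good = np.where(y == 0, 0.0, 4.0) + rng.normal(0, 1, 100)
f_noise = rng.normal(0, 1, 100)
scores = [fisher_score(f, y) for f in (f_good, f_noise)]
```

Ranking features by this score and keeping only the top few is what makes the subsequent linear-kernel SVM both fast and accurate.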

  2. Validation of a low dose simulation technique for computed tomography images.

    Directory of Open Access Journals (Sweden)

    Daniela Muenzel

Full Text Available PURPOSE: Evaluation of a new software tool for generation of simulated low-dose computed tomography (CT) images from an original higher dose scan. MATERIALS AND METHODS: Original CT scan data (100 mAs, 80 mAs, 60 mAs, 40 mAs, 20 mAs, 10 mAs; 100 kV) of a swine were acquired (approved by the regional governmental commission for animal protection). Simulations of CT acquisition with a lower dose (simulated 10-80 mAs) were calculated using a low-dose simulation algorithm. The simulations were compared to the originals of the same dose level with regard to density values and image noise. Four radiologists assessed the realistic visual appearance of the simulated images. RESULTS: Image characteristics of simulated low dose scans were similar to the originals. Mean overall discrepancy of image noise and CT values was -1.2% (range -9% to 3.2% and -0.2% (range -8.2% to 3.2%, respectively, p>0.05. Confidence intervals of discrepancies ranged between 0.9-10.2 HU (noise and 1.9-13.4 HU (CT values, without significant differences (p>0.05. Subjective observer evaluation of image appearance showed no visually detectable difference. CONCLUSION: Simulated low dose images showed excellent agreement with the originals concerning image noise, CT density values, and subjective assessment of the visual appearance of the simulated images. An authentic low-dose simulation opens up opportunities with regard to staff education, protocol optimization and introduction of new techniques.
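A common simplification behind such low-dose simulators is that quantum noise scales roughly as 1/sqrt(mAs), so a lower dose can be mimicked by injecting the missing noise variance. The sketch below is that generic model only, not the validated algorithm evaluated in the paper; the image, noise level, and mAs values are invented.

```python
import numpy as np

def simulate_lower_dose(image_hu, mas_orig, mas_target, sigma_orig):
    """Add zero-mean Gaussian noise so the total noise matches the
    target mAs. With sigma ~ 1/sqrt(mAs), the variance to add is
    sigma_orig^2 * (mas_orig / mas_target - 1)."""
    extra_var = sigma_orig ** 2 * (mas_orig / mas_target - 1.0)
    rng = np.random.default_rng(42)
    return image_hu + rng.normal(0.0, np.sqrt(extra_var), image_hu.shape)

# 100 mAs scan with 10 HU noise, simulated down to 25 mAs:
# total noise should become sqrt(100 + 300) = 20 HU.
flat = np.zeros((256, 256))               # idealized noise-free region
low = simulate_lower_dose(flat, mas_orig=100, mas_target=25, sigma_orig=10.0)
```

Real simulators work on the raw projection data and model the scanner's noise characteristics more carefully, which is what the validation in the abstract is testing.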

  3. Edge artifact correction for industrial computed tomography images

    International Nuclear Information System (INIS)

    Cai Yufang; Li Dan; Wang Jue

    2013-01-01

To eliminate the edge artifacts of industrial CT images, and to improve the identification ability of the image and the precision of dimensional measurement, a coefficient-adjusting method for reducing crosstalk noise is proposed. It is concluded from theoretical analysis that crosstalk generated in adjacent detectors by Compton scattering is the major cause of the edge artifacts. According to the mathematical model of detector crosstalk, we designed a special detector system configuration and a stair-step phantom for estimating the quantity of crosstalk noise. The relationship between the crosstalk ratio and the intensity of the incident X-ray was acquired by regressing the experimental data with the least-squares method. The experimental results show that the first-order crosstalk ratio between detectors is about 9.0%, and the second-order crosstalk ratio is about 1.2%. Thus the first-order crosstalk is the main factor causing edge artifacts. The proposed method can reduce the edge artifacts significantly, while maintaining the detail and edges of CT images. (authors)
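The least-squares regression step described above can be sketched as a one-parameter fit. The calibration numbers below are invented for illustration (the true ratio is set to 0.09 to echo the ~9.0% first-order value the paper reports); the experiment's actual detector geometry and noise are not modeled.

```python
import numpy as np

# Hypothetical calibration readings: a stair-step phantom produces
# several primary-beam intensities; the adjacent detector reads a
# fraction r of each, plus small measurement noise.
primary = np.array([200.0, 400.0, 600.0, 800.0, 1000.0])
adjacent = 0.09 * primary + np.array([1.2, -0.8, 0.5, -1.1, 0.9])

# Least-squares estimate of the first-order crosstalk ratio r
# for the single-parameter model: adjacent = r * primary.
r, *_ = np.linalg.lstsq(primary[:, None], adjacent, rcond=None)
```

Once r is known, the measured projections can be corrected by subtracting the estimated crosstalk contribution of each detector's neighbors before reconstruction.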

  4. A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D.; Schiess, Adrian B.; Howell, Jamie; Baca, Michael J.; Partridge, L. Donald; Finnegan, Patrick Sean; Wolfley, Steven L.; Dagel, Daryl James; Spahn, Olga Blum; Harper, Jason C.; Pohl, Kenneth Roy; Mickel, Patrick R.; Lohn, Andrew; Marinella, Matthew

    2013-10-01

The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing > 10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing, where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency-filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage in hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.

  5. Computing a Non-trivial Lower Bound on the Joint Entropy between Two Images

    Energy Technology Data Exchange (ETDEWEB)

    Perumalla, Kalyan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-03-01

In this report, a non-trivial lower bound on the joint entropy of two non-identical images is developed, which is greater than the individual entropies of the images. The lower bound is the least joint entropy possible among all pairs of images that have the same histograms as those of the given images. New algorithms are presented to compute the joint entropy lower bound with a computation time proportional to S log S, where S is the number of histogram bins of the images. This is faster than the traditional methods of computing the exact joint entropy, whose computation time is quadratic in S.
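For context, the quantities involved can be computed directly from histograms. The sketch below evaluates the exact joint entropy (the quadratic-in-S computation the report improves on) and checks the standard bounds max(H(A), H(B)) <= H(A,B) <= H(A) + H(B); the report's tighter, histogram-constrained lower bound algorithm is not reproduced here. The toy images are invented for the example.

```python
import numpy as np

def entropy_from_hist(hist):
    """Shannon entropy (bits) of a histogram of counts."""
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def joint_entropy(img_a, img_b, bins):
    """Exact joint entropy from the 2-D histogram (quadratic in bins)."""
    h2, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    return entropy_from_hist(h2.ravel())

# Toy 4-level images: b is a deterministic remap of a, so the joint
# entropy collapses to the marginal entropy of a.
a = np.random.default_rng(3).integers(0, 4, (32, 32))
b = (a + 1) % 4
ha = entropy_from_hist(np.bincount(a.ravel(), minlength=4).astype(float))
hb = entropy_from_hist(np.bincount(b.ravel(), minlength=4).astype(float))
hab = joint_entropy(a, b, bins=4)
```

Because b is a function of a, H(A,B) = H(A) here, i.e. the trivial lower bound max(H(A), H(B)) is attained; the report's contribution is a tighter bound for the general case, computed in S log S time.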

  6. Generation of realistic virtual nodules based on three-dimensional spatial resolution in lung computed tomography: A pilot phantom study.

    Science.gov (United States)

    Narita, Akihiro; Ohkubo, Masaki; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi

    2017-10-01

    The aim of this feasibility study using phantoms was to propose a novel method for obtaining computer-generated realistic virtual nodules in lung computed tomography (CT). In the proposed methodology, pulmonary nodule images obtained with a CT scanner are deconvolved with the point spread function (PSF) in the scan plane and the slice sensitivity profile (SSP) measured for the scanner; the resultant images are referred to as nodule-like object functions. Next, by convolving the nodule-like object function with the PSF and SSP of another (target) scanner, a virtual nodule can be generated that has the spatial-resolution characteristics of the target scanner. To validate the methodology, the authors used physical nodules of 5-, 7-, and 10-mm diameter (uniform spheres) included in a commercial CT test phantom. The nodule-like object functions were calculated from the sphere images obtained with two scanners (Scanner A and Scanner B); these functions were referred to as nodule-like object functions A and B, respectively. From these, virtual nodules were generated based on the spatial resolution of another scanner (Scanner C). By investigating the agreement of the virtual nodules generated from the nodule-like object functions A and B, the equivalence of the nodule-like object functions obtained from different scanners could be assessed. In addition, these virtual nodules were compared with the real (true) sphere images obtained with Scanner C. As a practical validation, five types of laboratory-made physical nodules with various complicated shapes and heterogeneous densities, similar to real lesions, were used. The nodule-like object functions were calculated from the images of these laboratory-made nodules obtained with Scanner A. From them, virtual nodules were generated based on the spatial resolution of Scanner C and compared with the real images of the laboratory-made nodules obtained with Scanner C. Good agreement of the virtual nodules generated from
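    The resolution-transfer idea described above (deconvolve the source scanner's PSF, then convolve the target scanner's) can be sketched with Fourier-domain operations. The Gaussian PSFs and the Wiener-style regularization constant below are illustrative stand-ins; the study measures the actual PSF and SSP of each scanner:

    ```python
    import numpy as np

    def gaussian_psf(size, fwhm):
        """Isotropic 2-D Gaussian of given FWHM (pixels), normalized to
        sum 1 -- a stand-in for a measured in-plane CT PSF."""
        sigma = fwhm / 2.355                     # FWHM = 2.355 * sigma
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
        return psf / psf.sum()

    def transfer_resolution(image, psf_src, psf_tgt, eps=1e-3):
        """Replace the spatial-resolution characteristics of psf_src with
        those of psf_tgt via Fourier-domain deconvolution/convolution.
        All arrays must share one shape; eps is a Wiener-style regularizer."""
        F_img = np.fft.fft2(image)
        F_src = np.fft.fft2(np.fft.ifftshift(psf_src))
        F_tgt = np.fft.fft2(np.fft.ifftshift(psf_tgt))
        # "nodule-like object function" (deconvolution of the source PSF)
        F_obj = F_img * np.conj(F_src) / (np.abs(F_src) ** 2 + eps)
        # virtual nodule: re-blur with the target scanner's PSF
        return np.real(np.fft.ifft2(F_obj * F_tgt))
    ```

    In a noiseless test, transferring a source-blurred image to a wider target PSF closely matches direct blurring with the target PSF, which mirrors the validation strategy the study uses with real scanners.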

  7. Cloud Computing Infusion for Generating ESDRs of Visible Spectra Radiances

    Science.gov (United States)

    Golpayegani, N.; Halem, M.; Nguyen, P.

    2008-12-01

    The AIRS and AVHRR instruments have been collecting radiances of the Earth in the visible spectrum for over 25 years. These measurements have been used to develop such useful products as NDVI, snow cover and depth, outgoing longwave radiation, and other products. Yet, no long-term data record of the level 1b visible spectra is available in grid form to researchers for various climate studies. We present here an Earth System Data Record observed in the visible spectrum as gridded radiance fields of 8 km x 10 km grid resolution, for six years in the case of AIRS and from 1981 to the present for AVHRR. The AIRS data has four visible channels from 0.41 μm to 0.94 μm with an IFOV of 1 km, and AVHRR has two visible channels in the 0.58 μm to 1.00 μm range, also at 1 km. In order to process such large amounts of data on demand, two components need to be implemented: (i) a processing system capable of gridding TBs of data in a reasonable amount of time, and (ii) a download mechanism to access and deliver the data to the processing system. We implemented a cloud computing approach to be able to process such large amounts of data. We use Hadoop, a distributed computation system developed by the Apache Software Foundation. With Hadoop, we are able to store the data in a distributed fashion, taking advantage of Hadoop's distributed file system (DFS). We also take advantage of Hadoop's MapReduce functionality to perform as much computation as possible on the available nodes of the UMBC bluegrit Cell cluster system that contain the data. We make use of the SOAR system developed under the ACCESS program to acquire and process the AIRS and AVHRR observations. Comparisons of the AIRS data with selected periods of MODIS visible spectral channels on the same satellite indicate the two instruments have maintained calibration consistency and continuity of their measurements over the six-year period. Our download mechanism transfers the data from these instruments into Hadoop's DFS. Our
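    Production Hadoop jobs are typically written in Java; purely to illustrate the gridding MapReduce pattern described above, a toy Python version might look like this (the cell size and record layout are assumptions, not the actual 8 km x 10 km ESDR grid logic):

    ```python
    from collections import defaultdict

    def map_phase(observations):
        """Map: emit (grid_cell, radiance) keyed by the cell containing each
        1-km footprint. The 0.1-degree boxes here are illustrative only."""
        for lat, lon, radiance in observations:
            key = (int(lat / 0.1), int(lon / 0.1))
            yield key, radiance

    def reduce_phase(pairs):
        """Reduce: average all radiances that fall in the same grid cell."""
        sums = defaultdict(lambda: [0.0, 0])
        for key, value in pairs:
            sums[key][0] += value
            sums[key][1] += 1
        return {key: s / n for key, (s, n) in sums.items()}

    obs = [(10.01, 20.02, 1.0), (10.03, 20.04, 3.0), (35.5, -80.2, 7.0)]
    grid = reduce_phase(map_phase(obs))
    # cell (100, 200) averages the first two observations -> 2.0
    ```

    In Hadoop, the shuffle between the two phases groups keys across nodes automatically; the pure-Python pipe above collapses that into a single dictionary.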

  8. "Simulated molecular evolution" or computer-generated artifacts?

    Science.gov (United States)

    Darius, F; Rojas, R

    1994-11-01

    1. The authors define a function with value 1 for the positive examples and 0 for the negative ones. They fit a continuous function but do not deal at all with the error margin of the fit, which is almost as large as the function values they compute. 2. The term "quality" for the value of the fitted function gives the impression that some biological significance is associated with values of the fitted function strictly between 0 and 1, but there is no justification for this kind of interpretation, and finding the point where the fit achieves its maximum does not make sense. 3. By neglecting the error margin, the authors try to optimize the fitted function using differences in the second, third, fourth, and even fifth decimal place, which have no statistical significance. 4. Even if such a fit could profit from more data points, the authors should first prove that the region of interest has some kind of smoothness, that is, that a continuous fit makes any sense at all. 5. "Simulated molecular evolution" is a misnomer. We are dealing here with random search. Since the margin of error is so large, the fitted function does not provide statistically significant information about the points in search space where strings with cleavage sites could be found. This implies that the method is a highly unreliable stochastic search in the space of strings, even if the neural network is capable of learning some simple correlations. 6. For these kinds of problems with so few data points, classical statistical methods are clearly superior to the neural networks used as a "black box" by the authors, which, in the way they are structured, provide a model with an error margin as large as the numbers being computed. 7. And finally, even if someone provided us with a function that perfectly separates strings with cleavage sites from strings without them, so-called simulated molecular evolution would be no better than random selection, since a perfect fit would only produce exactly ones or

  9. Assessing Human Judgment of Computationally Generated Swarming Behavior

    Directory of Open Access Journals (Sweden)

    John Harvey

    2018-02-01

    Full Text Available Computer-based swarm systems, aiming to replicate the flocking behavior of birds, were first introduced by Reynolds in 1987. In his initial work, Reynolds noted that while it was difficult to quantify the dynamics of the behavior from the model, observers of his model immediately recognized it as a representation of a natural flock. Considerable analysis has been conducted since then on quantifying the dynamics of flocking/swarming behavior. However, no systematic analysis has been conducted on human identification of swarming. In this paper, we assess subjects' perception of the behavior of a simplified version of Reynolds' model. Factors that affect the identification of swarming are discussed, and future applications of the resulting models are proposed. Differences in decision times for swarming-related questions asked during the study indicate that different brain mechanisms may be involved in different elements of the behavior assessment task. The relatively simple but finely tunable model used in this study provides a useful methodology for assessing individual human judgment of swarming behavior.
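    A simplified Reynolds-style model of the kind described, with the three classic steering rules (cohesion, alignment, separation), can be sketched as follows; the weights, neighborhood radius, and the crude mean-based separation term are illustrative choices, not the parameters used in the study:

    ```python
    import numpy as np

    def boids_step(pos, vel, dt=0.1, radius=2.0,
                   w_coh=0.01, w_ali=0.05, w_sep=0.1):
        """One update of a simplified Reynolds (1987) flocking model.
        pos, vel: (n, 2) arrays of agent positions and velocities."""
        new_vel = vel.copy()
        for i in range(len(pos)):
            dist = np.linalg.norm(pos - pos[i], axis=1)
            nbr = (dist > 0) & (dist < radius)      # neighbors within radius
            if nbr.any():
                center = pos[nbr].mean(axis=0)
                new_vel[i] += w_coh * (center - pos[i])                  # cohesion
                new_vel[i] += w_ali * (vel[nbr].mean(axis=0) - vel[i])   # alignment
                new_vel[i] += w_sep * (pos[i] - center)                  # separation
        return pos + dt * new_vel, new_vel
    ```

    Tuning the three weights against each other is what makes such a model "finely tunable": the same code produces anything from a tight flock to near-random motion.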

  10. Thermoelectric cooling of microelectronic circuits and waste heat electrical power generation in a desktop personal computer

    International Nuclear Information System (INIS)

    Gould, C.A.; Shammas, N.Y.A.; Grainger, S.; Taylor, I.

    2011-01-01

    Thermoelectric cooling and micro-power generation from waste heat within a standard desktop computer has been demonstrated. A thermoelectric test system has been designed and constructed, with typical test results presented for thermoelectric cooling and micro-power generation when the computer is executing a number of different applications. A thermoelectric module, operating as a heat pump, can lower the operating temperature of the computer's microprocessor and graphics processor to temperatures below ambient conditions. A small amount of electrical power, typically in the micro-watt or milli-watt range, can be generated by a thermoelectric module attached to the outside of the computer's standard heat sink assembly, when a secondary heat sink is attached to the other side of the thermoelectric module. Maximum electrical power can be generated by the thermoelectric module when a water cooled heat sink is used as the secondary heat sink, as this produces the greatest temperature difference between both sides of the module.

  11. Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology (Final Report)

    Science.gov (United States)

    EPA announced the release of the final report, Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology. This report describes new approaches that are faster, less resource intensive, and more robust, and that can help ...

  12. Quantum Computation-Based Image Representation, Processing Operations and Their Applications

    Directory of Open Access Journals (Sweden)

    Fei Yan

    2014-10-01

    Full Text Available A flexible representation of quantum images (FRQI) was proposed to facilitate the extension of classical-like (non-quantum) image processing applications to the quantum computing domain. The representation encodes a quantum image in the form of a normalized state, which captures information about the colors and their corresponding positions in the images. Since its conception, a handful of processing transformations have been formulated, among which are the geometric transformations on quantum images (GTQI) and the CTQI, which focus on the color information of the images. In addition, extensions and applications of the FRQI representation have also been suggested, such as the multi-channel representation for quantum images (MCQI), quantum image data searching, watermarking strategies for quantum images, a framework to produce movies on quantum computers, and a blueprint for quantum video encryption and decryption. These proposals extend classical-like image and video processing applications to the quantum computing domain and offer a significant speed-up with low computational resources in comparison to performing the same tasks on traditional computing devices. Each of the algorithms and the mathematical foundations for their execution were simulated using classical computing resources, and their results were analyzed alongside other classical computing equivalents. The work presented in this review is intended to serve as the epitome of advances made in FRQI quantum image processing over the past five years and to stimulate further interest geared towards the realization of some secure and efficient image and video processing applications on quantum computers.
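    For reference, the normalized state mentioned above takes the following form in the original FRQI proposal (for a 2^n x 2^n image whose pixel colors are encoded in the angles θ_i); reproduced here as it is commonly stated in the FRQI literature:

    ```latex
    \left|I(\theta)\right\rangle
      = \frac{1}{2^{n}} \sum_{i=0}^{2^{2n}-1}
        \left( \cos\theta_i \left|0\right\rangle
             + \sin\theta_i \left|1\right\rangle \right)
        \otimes \left|i\right\rangle,
    \qquad \theta_i \in \left[0, \tfrac{\pi}{2}\right],
    ```

    where the color qubit carries the gray value through θ_i and the register |i⟩ carries the pixel position.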

  13. The comparative effect of individually-generated vs. collaboratively-generated computer-based concept mapping on science concept learning

    Science.gov (United States)

    Kwon, So Young

    Using a quasi-experimental design, the researcher investigated the comparative effects of individually-generated and collaboratively-generated computer-based concept mapping on middle school science concept learning. Qualitative data were analyzed to explain quantitative findings. One hundred sixty-one students (74 boys and 87 girls) in eight seventh-grade science classes at a middle school in Southeast Texas completed the entire study. Using prior science performance scores to assure equivalence of student achievement across groups, the researcher assigned the teacher's classes to one of the three experimental groups. The independent variable, group, consisted of three levels: 40 students in a control group, 59 students trained to individually generate concept maps on computers, and 62 students trained to collaboratively generate concept maps on computers. The dependent variables were science concept learning as demonstrated by comprehension test scores, and quality of concept maps created by students in the experimental groups as demonstrated by rubric scores. Students in the experimental groups received concept mapping training and used their newly acquired concept mapping skills to individually or collaboratively construct computer-based concept maps during study time. The control group, the individually-generated concept mapping group, and the collaboratively-generated concept mapping group had equivalent learning experiences for 50 minutes during five days, except that students in the control group worked independently without concept mapping activities, students in the individual group worked individually to construct concept maps, and students in the collaborative group worked collaboratively to construct concept maps during their study time. Both collaboratively and individually generated computer-based concept mapping had a positive effect on seventh grade middle school science concept learning, but neither strategy was more effective than the other.
However

  14. SU-E-I-66: Radiomics and Image Registration Updates for the Computational Environment for Radiotherapy Research (CERR)

    Energy Technology Data Exchange (ETDEWEB)

    Apte, A; Wang, Y; Deasy, J [Memorial Sloan Kettering Cancer Center, NY, NY (United States)

    2014-06-01

    Purpose: To present new tools in CERR for Radiomics and image registration, along with other software updates and additions. Methods: Radiomics: CERR supports generating 3-D texture metrics based on gray-scale co-occurrence. Two new ways to calculate texture features were added: (1) Local Texture Averaging: Local texture is calculated around a voxel within a user-defined bounding box. The final texture metrics are the average of the local textures for all the voxels. This is useful to detect any local texture patterns within an image. (2) Image Smoothing: A convolution ball of user-defined radius is rolled over an image to smooth out artifacts. The texture metrics are then computed on the smoothed image. Image Registration: (1) Support was added to import deformation vector fields as well as non-deformable transformation matrices generated by vendor software and stored in standard DICOM format. (2) Support was added to use image masks while computing image deformations. CT-to-MR registration is supported. This registration uses morphological edge information within the images to guide the deformation process. In addition to these features, other noteworthy additions to CERR include: (1) Irregularly shaped ROIs: These are created by taking the intersection of infinitely extended irregular polygons drawn on any two of the views. Such an ROI is more conformal and useful in avoiding unwanted parts of images that cannot be avoided with the conventional cubic box. The ROI is useful to generate Radiomics metrics. (2) Ability to insert RTDOSE in DICOM format into existing CERR plans. (3) Ability to import multi-frame PET-CT and SPECT-CT while maintaining spatial registration between the two modalities. (4) Ability to compile CERR on Unix-like systems. Results: The new features and updates are available via https://www.github.com/adityaapte/cerr . Conclusion: Features added to CERR increase its utility in Radiomics, image registration, and outcomes modeling.
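    Gray-scale co-occurrence texture metrics of the kind CERR computes can be illustrated in two dimensions with a single pixel offset. This is not CERR code (CERR works in 3-D and adds the local-averaging and smoothing variants described above); it is a minimal sketch of the underlying idea:

    ```python
    import numpy as np

    def glcm_right(img, levels=8):
        """Normalized gray-level co-occurrence matrix for the (0, 1)
        horizontal offset, after quantizing the image to `levels` bins."""
        q = np.floor(img.astype(float) * levels / (img.max() + 1)).astype(int)
        a, b = q[:, :-1].ravel(), q[:, 1:].ravel()
        m = np.zeros((levels, levels))
        np.add.at(m, (a, b), 1)          # count co-occurring level pairs
        return m / m.sum()

    def contrast(p):
        """Haralick contrast: (i - j)^2 weighted by co-occurrence."""
        i, j = np.indices(p.shape)
        return float(np.sum((i - j) ** 2 * p))

    def energy(p):
        """Haralick energy (angular second moment)."""
        return float(np.sum(p ** 2))
    ```

    A flat image yields zero contrast and maximal energy, while any gradient or texture shifts mass off the diagonal of the matrix and raises contrast.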

  15. SU-E-I-66: Radiomics and Image Registration Updates for the Computational Environment for Radiotherapy Research (CERR)

    International Nuclear Information System (INIS)

    Apte, A; Wang, Y; Deasy, J

    2014-01-01

    Purpose: To present new tools in CERR for Radiomics and image registration, along with other software updates and additions. Methods: Radiomics: CERR supports generating 3-D texture metrics based on gray-scale co-occurrence. Two new ways to calculate texture features were added: (1) Local Texture Averaging: Local texture is calculated around a voxel within a user-defined bounding box. The final texture metrics are the average of the local textures for all the voxels. This is useful to detect any local texture patterns within an image. (2) Image Smoothing: A convolution ball of user-defined radius is rolled over an image to smooth out artifacts. The texture metrics are then computed on the smoothed image. Image Registration: (1) Support was added to import deformation vector fields as well as non-deformable transformation matrices generated by vendor software and stored in standard DICOM format. (2) Support was added to use image masks while computing image deformations. CT-to-MR registration is supported. This registration uses morphological edge information within the images to guide the deformation process. In addition to these features, other noteworthy additions to CERR include: (1) Irregularly shaped ROIs: These are created by taking the intersection of infinitely extended irregular polygons drawn on any two of the views. Such an ROI is more conformal and useful in avoiding unwanted parts of images that cannot be avoided with the conventional cubic box. The ROI is useful to generate Radiomics metrics. (2) Ability to insert RTDOSE in DICOM format into existing CERR plans. (3) Ability to import multi-frame PET-CT and SPECT-CT while maintaining spatial registration between the two modalities. (4) Ability to compile CERR on Unix-like systems. Results: The new features and updates are available via https://www.github.com/adityaapte/cerr . Conclusion: Features added to CERR increase its utility in Radiomics, image registration, and outcomes modeling.

  16. Extracting morphologies from third harmonic generation images of structurally normal human brain tissue

    NARCIS (Netherlands)

    Zhang, Zhiqing; Kuzmin, Nikolay V.; Groot, Marie Louise; de Munck, Jan C.

    2017-01-01

    Motivation: The morphologies contained in 3D third harmonic generation (THG) images of human brain tissue can report on the pathological state of the tissue. However, the complexity of THG brain images makes the usage of modern image processing tools, especially those of image filtering,

  17. A pseudo-discrete algebraic reconstruction technique (PDART) prior image-based suppression of high density artifacts in computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Pua, Rizza; Park, Miran; Wi, Sunhee; Cho, Seungryong, E-mail: scho@kaist.ac.kr

    2016-12-21

    We propose a hybrid metal artifact reduction (MAR) approach for computed tomography (CT) that is computationally more efficient than a fully iterative reconstruction method, but at the same time achieves superior image quality to the interpolation-based in-painting techniques. Our proposed MAR method, an image-based artifact subtraction approach, utilizes an intermediate prior image reconstructed via PDART to recover the background information underlying the high density objects. For comparison, prior images generated by total-variation minimization (TVM) algorithm, as a realization of fully iterative approach, were also utilized as intermediate images. From the simulation and real experimental results, it has been shown that PDART drastically accelerates the reconstruction to an acceptable quality of prior images. Incorporating PDART-reconstructed prior images in the proposed MAR scheme achieved higher quality images than those by a conventional in-painting method. Furthermore, the results were comparable to the fully iterative MAR that uses high-quality TVM prior images. - Highlights: • An accelerated reconstruction method, PDART, is proposed for exterior problems. • With a few iterations, soft prior image was reconstructed from the exterior data. • PDART framework has enabled an efficient hybrid metal artifact reduction in CT.

  18. Human Detection Based on the Generation of a Background Image by Using a Far-Infrared Light Camera

    Directory of Open Access Journals (Sweden)

    Eun Som Jeon

    2015-03-01

    Full Text Available The need for computer vision-based human detection has increased in fields such as security, intelligent surveillance, and monitoring systems. However, performance enhancement of human detection based on visible light cameras is limited because of factors such as nonuniform illumination, shadows, and low external light in the evening and at night. Consequently, human detection based on thermal (far-infrared) light cameras has been considered as an alternative. However, its performance is influenced by factors such as the low image resolution, low contrast, and large amount of noise of thermal images. It is also affected by the high temperature of backgrounds during the day. To solve these problems, we propose a new method for detecting human areas in thermal camera images. Compared to previous works, the proposed research is novel in the following four aspects. First, one background image is generated by median and average filtering. Additional filtering procedures based on maximum gray level, size filtering, and region erasing are applied to remove the human areas from the background image. Second, candidate human regions in the input image are located by combining the pixel and edge difference images between the input and background images. The thresholds for the difference images are adaptively determined based on the brightness of the generated background image. Noise components are removed by component labeling, a morphological operation, and size filtering. Third, detected areas that may have more than two human regions are merged or separated based on the information in the horizontal and vertical histograms of the detected area. This procedure is adaptively operated based on the brightness of the generated background image. Fourth, a further procedure for the separation and removal of the candidate human regions is performed based on the size and the ratio of height to width of the candidate regions, considering the camera viewing direction
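    The first two steps (background generation by median and average filtering, then adaptively thresholded pixel differencing) can be sketched as follows. The threshold formula is an illustrative stand-in, since the paper's exact adaptive rule is not given in this record, and the human-area erasing, edge-difference, and labeling stages are omitted:

    ```python
    import numpy as np

    def background_image(frames, ksize=3):
        """Background from a frame stack: temporal median, then a small
        spatial average filter (a simplified stand-in for the paper's
        fuller pipeline, which also erases residual human areas)."""
        med = np.median(frames, axis=0)
        pad = ksize // 2
        padded = np.pad(med, pad, mode="edge")
        out = np.zeros_like(med)
        for dy in range(ksize):            # box average via shifted sums
            for dx in range(ksize):
                out += padded[dy:dy + med.shape[0], dx:dx + med.shape[1]]
        return out / ksize ** 2

    def candidate_regions(frame, background, k=1.5):
        """Pixel-difference mask with a threshold that scales with the
        background brightness; this exact formula is purely illustrative."""
        diff = np.abs(frame.astype(float) - background)
        thresh = k * diff.mean() + 0.05 * background.mean()
        return diff > thresh
    ```

    On a synthetic stack with one hot region, the mask recovers exactly that region, which is the behavior the subsequent labeling and size-filtering stages would then refine.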

  19. Multi Detector Computed Tomography Fistulography In Patients of Fistula-in-Ano: An Imaging Collage.

    Science.gov (United States)

    Bhatt, Shuchi; Jain, Bhupendra Kumar; Singh, Vikas Kumar

    2017-01-01

    Fistula-in-ano, or perianal fistula, is a challenging clinical condition for both diagnosis and treatment. Imaging modalities such as fistulography, anal endosonography, perineal sonography, magnetic resonance imaging (MRI), and computed tomography (CT) are available for its evaluation. MRI is considered the modality of choice for an accurate delineation of the tract in relation to the sphincter complex and for the detection of associated complications. However, its availability and affordability are always an issue. Moreover, the requirement to obtain multiple sequences to depict the fistula in detail is cumbersome and confusing for the clinicians to interpret. The inability to show the fistula in relation to normal anatomical structures in a single image is also a limitation. Multi detector computed tomography fistulography (MDCTF) is an underutilized technique for defining perianal fistulas. Acquisition of iso-volumetric data sets with instillation of contrast into the fistula delineates the tract and its components. Post-processing with thin sections allows for the generation of good-quality images for presentation in various planes (multi-planar reconstructions) and formats (volume rendered technique, maximum intensity projection). MDCTF demonstrates the type of fistula and its extent, whether it is simple or complex, and shows the site of the internal opening and associated complications, all in easy-to-understand images that can be used by the surgeons. Its capability to represent the entire pathology in relation to normal anatomical structures in a few images is a definite advantage. MDCTF can be utilized when MRI is contraindicated or not feasible. This pictorial review shares our initial experience with MDCT fistulography in evaluating fistula-in-ano, demonstrates various components of fistulas, and discusses the types of fistulas according to the standard Parks classification.

  20. Positron emission tomography/computed tomography imaging and rheumatoid arthritis.

    Science.gov (United States)

    Wang, Shi-Cun; Xie, Qiang; Lv, Wei-Fu

    2014-03-01

    Rheumatoid arthritis (RA) is a phenotypically heterogeneous, chronic, destructive inflammatory disease of the synovial joints. A number of imaging tools are currently available for the evaluation of inflammatory conditions. By targeting the upregulated glucose uptake of infiltrating granulocytes and tissue macrophages, positron emission tomography/computed tomography with fluorine-18 fluorodeoxyglucose ((18)F-FDG PET/CT) is able to delineate inflammation with high sensitivity. Recently, several studies have indicated that FDG uptake in affected joints reflects the disease activity of RA. In addition, the use of FDG PET for the sensitive detection and monitoring of the response to treatment has been reported. Combined FDG PET/CT enables the detailed assessment of disease in large joints throughout the whole body. These unique capabilities of FDG PET/CT imaging also make it possible to detect RA-complicated diseases. Therefore, PET/CT has become an excellent ancillary tool to assess disease activity and prognosis in RA. © 2014 Asia Pacific League of Associations for Rheumatology and Wiley Publishing Asia Pty Ltd.