WorldWideScience

Sample records for video images acquired

  1. Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization

    National Research Council Canada - National Science Library

    Kedzierski, Michal; Delis, Paulina

    2016-01-01

    The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery, in which the external orientation for the first image was typical for aerial...

  2. Acquiring a dataset of labeled video images showing discomfort in demented elderly.

    Science.gov (United States)

    Bonroy, Bert; Schiepers, Pieter; Leysens, Greet; Miljkovic, Dragana; Wils, Maartje; De Maesschalck, Lieven; Quanten, Stijn; Triau, Eric; Exadaktylos, Vasileios; Berckmans, Daniel; Vanrumste, Bart

    2009-05-01

    One of the effects of late-stage dementia is the loss of the ability to communicate verbally. Patients become unable to call for help if they feel uncomfortable. The first objective of this article was to record facial expressions of bedridden demented elderly. For this purpose, we developed a video acquisition system (ViAS) that records synchronized video coming from two cameras. Each camera delivers uncompressed color images of 1,024 x 768 pixels, up to 30 frames per second. It is the first time that such a system has been placed in a patient's room. The second objective was to simultaneously label these video recordings with respect to discomfort expressions of the patients. Therefore, we developed a Digital Discomfort Labeling Tool (DDLT). This tool provides an easy-to-use software representation on a tablet PC of validated "paper" discomfort scales. With ViAS and DDLT, 80 different datasets were obtained of about 15 minutes of recordings. Approximately 80% of the recorded datasets delivered the labeled video recordings. The remainder were not usable due to under- or overexposed images and due to the patients being out of view as the system was not properly replaced after care. In one of 6 observed patients, nurses recognized a higher discomfort level that would not have been observed without the DDLT.

  3. Sensor Fusion of a Mobile Device to Control and Acquire Videos or Images of Coffee Branches and for Georeferencing Trees.

    Science.gov (United States)

    Giraldo, Paula Jimena Ramos; Aguirre, Álvaro Guerrero; Muñoz, Carlos Mario; Prieto, Flavio Augusto; Oliveros, Carlos Eugenio

    2017-04-06

    Smartphones show potential for controlling and monitoring variables in agriculture. Their processing capacity, instrumentation, connectivity, low cost, and accessibility allow farmers (among other users in rural areas) to operate them easily with applications adjusted to their specific needs. In this investigation, the integration of inertial sensors, a GPS, and a camera is presented for the monitoring of a coffee crop. An Android-based application was developed with two operating modes: (i) Navigation: for georeferencing trees, which can be as close as 0.5 m to each other; and (ii) Acquisition: control of video acquisition, based on the movement of the mobile device over a branch, and measurement of image quality, using clarity indexes to select the most appropriate frames for use in future processing. The integration of inertial sensors in navigation mode shows a mean relative error of ±0.15 m and a total error of ±5.15 m. In acquisition mode, the system correctly identifies the beginning and end of mobile phone movement in 99% of cases, and image quality is determined by means of a sharpness factor that measures blurriness. With the developed system, it will be possible to obtain georeferenced information about coffee trees, such as their production, nutritional state, and presence of pests or diseases.
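    The "clarity index" used to rank frames is not specified in this abstract; a common stand-in for such a blur measure is the variance of a discrete Laplacian, sketched below in plain Python. The function names and the choice of metric are illustrative assumptions, not the paper's method:

    ```python
    # Hypothetical sketch of a frame-sharpness score like the "clarity index"
    # mentioned above; blurrier frames score lower under this metric.

    def laplacian_variance(img):
        """img: 2D list of grayscale values. Returns the variance of the
        4-neighbour Laplacian over interior pixels (higher = sharper)."""
        h, w = len(img), len(img[0])
        vals = []
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                lap = (img[y - 1][x] + img[y + 1][x]
                       + img[y][x - 1] + img[y][x + 1]
                       - 4 * img[y][x])
                vals.append(lap)
        mean = sum(vals) / len(vals)
        return sum((v - mean) ** 2 for v in vals) / len(vals)

    def pick_sharpest(frames):
        """Return the frame with the highest sharpness score."""
        return max(frames, key=laplacian_variance)
    ```

    A frame set captured while the phone sweeps along a branch could then be filtered by keeping only frames whose score exceeds a threshold.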

  4. Sensor Fusion of a Mobile Device to Control and Acquire Videos or Images of Coffee Branches and for Georeferencing Trees

    Directory of Open Access Journals (Sweden)

    Paula Jimena Ramos Giraldo

    2017-04-01

    Full Text Available Smartphones show potential for controlling and monitoring variables in agriculture. Their processing capacity, instrumentation, connectivity, low cost, and accessibility allow farmers (among other users in rural areas) to operate them easily with applications adjusted to their specific needs. In this investigation, the integration of inertial sensors, a GPS, and a camera is presented for the monitoring of a coffee crop. An Android-based application was developed with two operating modes: (i) Navigation: for georeferencing trees, which can be as close as 0.5 m to each other; and (ii) Acquisition: control of video acquisition, based on the movement of the mobile device over a branch, and measurement of image quality, using clarity indexes to select the most appropriate frames for use in future processing. The integration of inertial sensors in navigation mode shows a mean relative error of ±0.15 m and a total error of ±5.15 m. In acquisition mode, the system correctly identifies the beginning and end of mobile phone movement in 99% of cases, and image quality is determined by means of a sharpness factor that measures blurriness. With the developed system, it will be possible to obtain georeferenced information about coffee trees, such as their production, nutritional state, and presence of pests or diseases.

  5. Enhanced Video Surveillance (EVS) with speckle imaging

    Energy Technology Data Exchange (ETDEWEB)

    Carrano, C J

    2004-01-13

    Enhanced Video Surveillance (EVS) with Speckle Imaging is a high-resolution imaging system that substantially improves resolution and contrast in images acquired over long distances. This technology will increase image resolution up to an order of magnitude or greater for video surveillance systems. The system's hardware components are all commercially available and consist of a telescope or large-aperture lens assembly, a high-performance digital camera, and a personal computer. The system's software, developed at LLNL, extends standard speckle-image-processing methods (used in the astronomical community) to solve the atmospheric blurring problem associated with imaging over medium to long distances (hundreds of meters to tens of kilometers) through horizontal or slant-path turbulence. This novel imaging technology will not only enhance national security but also will benefit law enforcement, security contractors, and any private or public entity that uses video surveillance to protect their assets.

  6. Color image and video enhancement

    CERN Document Server

    Lecca, Michela; Smolka, Bogdan

    2015-01-01

    This text covers state-of-the-art color image and video enhancement techniques. The book examines the multivariate nature of color image/video data as it pertains to contrast enhancement, color correction (equalization, harmonization, normalization, balancing, constancy, etc.), noise removal, and smoothing. It also discusses color and contrast enhancement in vision sensors and applications of image and video enhancement. The book focuses on enhancement of color images/video, addresses algorithms for enhancing color images and video, and presents coverage of super-resolution, restoration, inpainting, and colorization.

  7. Video library for video imaging detection at intersection stop lines.

    Science.gov (United States)

    2010-04-01

    The objective of this activity was to record video that could be used for controlled evaluation of video image vehicle detection system (VIVDS) products and software upgrades to existing products based on a list of conditions that might be diffic...

  8. Detectors for scanning video imagers

    Science.gov (United States)

    Webb, Robert H.; Hughes, George W.

    1993-11-01

    In scanning video imagers, a single detector sees each pixel for only 100 ns, so the bandwidth of the detector needs to be about 10 MHz. How this fact influences the choice of detectors for scanning systems is described here. Some important parametric quantities obtained from manufacturer specifications are related, and it is shown how to compare detectors when the specified quantities differ.

  9. Temporal compressive imaging for video

    Science.gov (United States)

    Zhou, Qun; Zhang, Linxia; Ke, Jun

    2018-01-01

    In many situations, imagers are required to have higher imaging speed, for example in gunpowder-blasting analysis or when observing high-speed biological phenomena. However, measuring high-speed video is a challenge to camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames are obtained from a single compressive frame using a reconstruction algorithm; equivalently, the video frame rate is increased by a factor of 8. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on reconstruction is discussed, and the reconstruction qualities using TwIST and GMM are compared.
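    The measurement model behind this abstract can be sketched as follows. This is a minimal illustration of the TCI forward model (one coded snapshot summing T mask-modulated frames) and of the patch splitting; it does not reproduce the TwIST or GMM reconstruction, and all function names are assumptions:

    ```python
    # Sketch of the TCI forward model: y = sum_t m_t * x_t, where m_t are
    # per-frame binary masks and x_t the T high-speed frames.

    def tci_measure(frames, masks):
        """Collapse T high-speed frames into one coded measurement.
        frames, masks: lists of T same-sized 2D lists (masks are 0/1)."""
        T, h, w = len(frames), len(frames[0]), len(frames[0][0])
        y = [[0.0] * w for _ in range(h)]
        for t in range(T):
            for i in range(h):
                for j in range(w):
                    y[i][j] += masks[t][i][j] * frames[t][i][j]
        return y

    def split_patches(img, p):
        """Divide an image into non-overlapping p x p patches, as the paper
        does with 8 x 8 patches of a 256 x 256 frame to cut memory use."""
        h, w = len(img), len(img[0])
        return [[row[j:j + p] for row in img[i:i + p]]
                for i in range(0, h, p) for j in range(0, w, p)]
    ```

    A reconstruction algorithm would then invert `tci_measure` patch by patch, using the known masks as the sensing operator.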

  10. Multimedia image and video processing

    CERN Document Server

    Guan, Ling

    2012-01-01

    As multimedia applications have become part of contemporary daily life, numerous paradigm-shifting technologies in multimedia processing have emerged over the last decade. Substantially updated with 21 new chapters, Multimedia Image and Video Processing, Second Edition explores the most recent advances in multimedia research and applications. This edition presents a comprehensive treatment of multimedia information mining, security, systems, coding, search, hardware, and communications as well as multimodal information fusion and interaction. Clearly divided into seven parts, the book begins w

  11. Acquired portosystemic collaterals: anatomy and imaging

    Energy Technology Data Exchange (ETDEWEB)

    Leite, Andrea Farias de Melo; Mota Junior, Americo, E-mail: andreafariasm@gmail.com [Instituto de Medicina Integral Professor Fernando Figueira de Pernambuco (IMIP), Recife, PE (Brazil); Chagas-Neto, Francisco Abaete [Universidade de Fortaleza (UNIFOR), Fortaleza, CE (Brazil); Teixeira, Sara Reis; Elias Junior, Jorge; Muglia, Valdair Francisco [Universidade de Sao Paulo (FMRP/USP), Ribeirao Preto, SP (Brazil). Faculdade de Medicina

    2016-07-15

    Portosystemic shunts are enlarged vessels that form collateral pathological pathways between the splanchnic circulation and the systemic circulation. Although their causes are multifactorial, portosystemic shunts all have one mechanism in common - increased portal venous pressure, which diverts the blood flow from the gastrointestinal tract to the systemic circulation. Congenital and acquired collateral pathways have both been described in the literature. The aim of this pictorial essay was to discuss the distinct anatomic and imaging features of portosystemic shunts, as well as to provide a robust method of differentiating between acquired portosystemic shunts and similar pathologies, through the use of illustrations and schematic drawings. Imaging of portosystemic shunts provides subclinical markers of increased portal venous pressure. Therefore, radiologists play a crucial role in the identification of portosystemic shunts. Early detection of portosystemic shunts can allow ample time to perform endovascular shunt operations, which can relieve portal hypertension and prevent acute or chronic complications in at-risk patient populations. (author)

  12. Still image and video compression with MATLAB

    CERN Document Server

    Thyagarajan, K

    2010-01-01

    This book describes the principles of image and video compression techniques and introduces current and popular compression standards, such as the MPEG series. Derivations of relevant compression algorithms are developed in an easy-to-follow fashion. Numerous examples are provided in each chapter to illustrate the concepts. The book includes complementary software written in MATLAB SIMULINK to give readers hands-on experience in using and applying various video compression methods. Readers can enhance the software by including their own algorithms.

  13. High-sensitivity hyperspectral imager for biomedical video diagnostic applications

    Science.gov (United States)

    Leitner, Raimund; Arnold, Thomas; De Biasio, Martin

    2010-04-01

    Video endoscopy allows physicians to visually inspect inner regions of the human body using a camera and only minimally invasive optical instruments. It has become an everyday routine in clinics all over the world. Recently, a technological shift was made to increase the resolution from PAL/NTSC to HDTV. But despite a vast literature on in-vivo and in-vitro experiments with multi-spectral point and imaging instruments, which suggests that a wealth of information for diagnostic overlays is available in the visible spectrum, the technological evolution from colour to hyper-spectral video endoscopy is overdue. Two approaches (NBI, OBI) have tried to increase contrast for better visualisation by using more than three wavelengths, but controversial discussions about the real benefit of contrast enhancement alone motivated a more comprehensive approach using the entire spectrum and pattern recognition algorithms. Until recently, hyper-spectral equipment was too slow to acquire a multi-spectral image stack at reasonable video rates, rendering video endoscopy applications impossible. The availability of fast and versatile tunable filters with switching times below 50 microseconds has now made instrumentation for hyper-spectral video endoscopes feasible. This paper describes a demonstrator for hyper-spectral video endoscopy and the results of clinical measurements using this demonstrator after otolaryngoscopic investigations and thorax surgeries. The application investigated here is the detection of dysplastic tissue, although hyper-spectral video endoscopy is not limited to cancer detection; other applications include the detection of dysplastic tissue or polyps in the colon or the gastrointestinal tract.

  14. Feature Extraction in IR Images Via Synchronous Video Detection

    Science.gov (United States)

    Shepard, Steven M.; Sass, David T.

    1989-03-01

    IR video images acquired by scanning imaging radiometers are subject to several problems which make measurement of small temperature differences difficult. Among these problems are 1) aliasing, which occurs when events at frequencies higher than the video frame rate are observed, 2) limited temperature resolution imposed by the 3-bit digitization available in existing commercial systems, and 3) susceptibility to noise and background clutter. Bandwidth-narrowing devices (e.g., lock-in amplifiers or boxcar averagers) are routinely used to achieve a high degree of signal-to-noise improvement for time-varying one-dimensional signals. We describe techniques which allow similar S/N improvement for two-dimensional imagery acquired with an off-the-shelf scanning imaging radiometer system. These techniques are implemented in near real time, utilizing a microcomputer and specially developed hardware and software. We also discuss the application of the system to feature extraction in cluttered images, and to acquisition of events which vary faster than the frame rate.
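    The lock-in idea the authors carry over to 2-D imagery can be sketched per pixel as follows. This is a hedged illustration of in-phase synchronous averaging, not the paper's hardware implementation; the frame rate, reference frequency, and normalization are assumptions:

    ```python
    # Synchronous (lock-in) detection applied frame-by-frame: multiply each
    # frame by a reference waveform locked to the stimulus and average, so
    # signal at the reference frequency survives while uncorrelated noise
    # averages toward zero.

    import math

    def lockin_average(frames, ref_freq, frame_rate):
        """frames: list of 2D lists sampled at frame_rate (Hz).
        Returns the in-phase lock-in image at ref_freq (Hz)."""
        n, h, w = len(frames), len(frames[0]), len(frames[0][0])
        out = [[0.0] * w for _ in range(h)]
        for k, frame in enumerate(frames):
            ref = math.cos(2 * math.pi * ref_freq * k / frame_rate)
            for i in range(h):
                for j in range(w):
                    out[i][j] += frame[i][j] * ref
        # Factor 2/n recovers the amplitude of an in-phase cosine signal.
        return [[2 * v / n for v in row] for row in out]
    ```

    Averaging over an integer number of reference periods is what narrows the effective bandwidth, analogous to a hardware lock-in amplifier.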

  15. Content-based image and video compression

    Science.gov (United States)

    Du, Xun; Li, Honglin; Ahalt, Stanley C.

    2002-08-01

    The term Content-Based appears often in applications for which MPEG-7 is expected to play a significant role. MPEG-7 standardizes descriptors of multimedia content, and while compression is not the primary focus of MPEG-7, the descriptors defined by MPEG-7 can be used to reconstruct a rough representation of an original multimedia source. In contrast, current image and video compression standards such as JPEG and MPEG are not designed to encode at the very low bit-rates that could be accomplished with MPEG-7 using descriptors. In this paper we show that content-based mechanisms can be introduced into compression algorithms to improve the scalability and functionality of current compression methods such as JPEG and MPEG. This is the fundamental idea behind Content-Based Compression (CBC). Our definition of CBC is a compression method that effectively encodes a sufficient description of the content of an image or a video in order to ensure that the recipient is able to reconstruct the image or video to some degree of accuracy. The degree of accuracy can be, for example, the classification error rate of the encoded objects, since in MPEG-7 the classification error rate measures the performance of the content descriptors. We argue that the major difference between a content-based compression algorithm and conventional block-based or object-based compression algorithms is that content-based compression replaces the quantizer with a more sophisticated classifier, or with a quantizer which minimizes classification error. Compared to conventional image and video compression methods such as JPEG and MPEG, our results show that content-based compression is able to achieve more efficient image and video coding by suppressing the background while leaving the objects of interest nearly intact.
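    The core idea above, replacing a uniform quantizer with one driven by content classification, can be illustrated with a toy mask-based quantizer. The step sizes and the pre-computed object mask are hypothetical, not from the paper:

    ```python
    # Toy illustration of content-based compression (CBC): keep objects of
    # interest nearly intact (fine quantization) while suppressing the
    # background (coarse quantization).

    def quantize(v, step):
        """Uniform scalar quantization with the given step size."""
        return round(v / step) * step

    def cbc_encode(img, mask, fg_step=4, bg_step=64):
        """Quantize foreground pixels (mask == 1) finely and background
        pixels coarsely; img and mask are same-sized 2D lists."""
        return [[quantize(v, fg_step if m else bg_step)
                 for v, m in zip(row, mrow)]
                for row, mrow in zip(img, mask)]
    ```

    In a real CBC codec the mask would come from a classifier (e.g. one trained on MPEG-7-style descriptors), and the quantizer could instead be designed to minimize classification error directly.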

  16. Structural image and video understanding

    NARCIS (Netherlands)

    Lou, Z.

    2016-01-01

    In this thesis, we have discussed how to exploit the structures in several computer vision topics. The five chapters addressed five computer vision topics using the image structures. In chapter 2, we proposed a structural model to jointly predict the age, expression and gender of a face. By modeling

  17. Image and Video for Hearing Impaired People

    Directory of Open Access Journals (Sweden)

    Thomas Burger

    2008-04-01

    Full Text Available We present a global overview of image- and video-processing-based methods to help the communication of hearing impaired people. Two directions of communication have to be considered: from a hearing person to a hearing impaired person and vice versa. In this paper, firstly, we describe sign language (SL and the cued speech (CS language which are two different languages used by the deaf community. Secondly, we present existing tools which employ SL and CS video processing and recognition for the automatic communication between deaf people and hearing people. Thirdly, we present the existing tools for reverse communication, from hearing people to deaf people that involve SL and CS video synthesis.

  18. Image and Video for Hearing Impaired People

    Directory of Open Access Journals (Sweden)

    Aran Oya

    2007-01-01

    Full Text Available We present a global overview of image- and video-processing-based methods to help the communication of hearing impaired people. Two directions of communication have to be considered: from a hearing person to a hearing impaired person and vice versa. In this paper, firstly, we describe sign language (SL and the cued speech (CS language which are two different languages used by the deaf community. Secondly, we present existing tools which employ SL and CS video processing and recognition for the automatic communication between deaf people and hearing people. Thirdly, we present the existing tools for reverse communication, from hearing people to deaf people that involve SL and CS video synthesis.

  19. GPM GROUND VALIDATION PRECIPITATION VIDEO IMAGER (PVI) GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Precipitation Video Imager (PVI) GCPEx dataset collected precipitation particle images and drop size distribution data from November 2011...

  20. Practical image and video processing using MATLAB

    CERN Document Server

    Marques, Oge

    2011-01-01

    "The book provides a practical introduction to the most important topics in image and video processing using MATLAB (and its Image Processing Toolbox) as a tool to demonstrate the most important techniques and algorithms. The contents are presented in a clear, technically accurate, objective way, with just enough mathematical detail. Most of the chapters are supported by figures, examples, illustrative problems, MATLAB scripts, suggestions for further reading, bibliographical references, useful Web sites, and exercises and computer projects to extend the understanding of their contents"--

  1. Multimodal location estimation of videos and images

    CERN Document Server

    Friedland, Gerald

    2015-01-01

    This book presents an overview of the field of multimodal location estimation, i.e., using acoustic, visual, and/or textual cues to estimate the shown location of a video recording. The authors present research results in this field in a unified way, integrating work on this topic that focuses on different modalities, viewpoints, and applications. The book describes fundamental methods of acoustic, visual, textual, social-graph, and metadata processing, as well as multimodal integration methods used for location estimation. In addition, the text covers benchmark metrics and explores the limits of the technology based on a human baseline. It discusses localization of multimedia data; examines fundamental methods of establishing location metadata for images and videos (other than GPS tagging); and covers data-driven as well as semantic location estimation.

  2. Hardware implementation of machine vision systems: image and video processing

    Science.gov (United States)

    Botella, Guillermo; García, Carlos; Meyer-Bäse, Uwe

    2013-12-01

    This contribution focuses on the different topics covered by the special issue 'Hardware Implementation of Machine Vision Systems', including FPGAs, GPUs, embedded systems, and multicore implementations for image analysis such as edge detection, segmentation, pattern recognition and object recognition/interpretation, image enhancement/restoration, image/video compression, image similarity and retrieval, satellite image processing, medical image processing, motion estimation, neuromorphic and bio-inspired vision systems, video processing, image formation and physics-based vision, 3D processing/coding, scene understanding, and multimedia.

  3. Dynamic Image Stitching for Panoramic Video

    Directory of Open Access Journals (Sweden)

    Jen-Yu Shieh

    2014-10-01

    Full Text Available The design in this paper is based on dynamic image stitching for panoramic video. Using the OpenCV vision function library and the SIFT algorithm as its basis, the article puts forward a second-differenced Gaussian MoG, derived from the DoG (Difference of Gaussians) map, to reduce the order of dynamic image synthesis and simplify the Gaussian pyramid algorithm. MSIFT is combined with an overlapping segmentation method to narrow the scope of feature extraction and thereby increase speed. Through this method, traditional image synthesis can be improved without requiring long computation times or being limited by space and angle. This research uses four ordinary webcams and two IP cameras coupled with several wide-angle lenses: the wide-angle lenses monitor a wide area, and image stitching then produces the panoramic effect. For the overall image application and control interface, Microsoft Visual Studio C# is adopted to construct the software interface. On a personal computer with a 2.4-GHz CPU and 2 GB of RAM, with the cameras attached, the execution speed is three images per second, which reduces the calculation time of the traditional algorithm.

  4. Does Instructor's Image Size in Video Lectures Affect Learning Outcomes?

    Science.gov (United States)

    Pi, Z.; Hong, J.; Yang, J.

    2017-01-01

    One of the most commonly used forms of video lectures is a combination of an instructor's image and accompanying lecture slides as a picture-in-picture. As the image size of the instructor varies significantly across video lectures, and so do the learning outcomes associated with this technology, the influence of the instructor's image size should…

  5. Multimodal interaction in image and video applications

    CERN Document Server

    Sappa, Angel D

    2013-01-01

    Traditional Pattern Recognition (PR) and Computer Vision (CV) technologies have mainly focused on full automation, even though full automation often proves elusive or unnatural in many applications, where the technology is expected to assist rather than replace the human agents. Not all problems can be solved automatically; in such applications, human interaction is the only way forward. Recently, multimodal human interaction has become an important field of increasing interest in the research community. Advanced man-machine interfaces with high cognitive capabilities are a hot research topic that aims at solving challenging problems in image and video applications. In fact, the idea of interactive computer systems was already proposed in the early stages of computer science. Nowadays, the ubiquity of image sensors together with ever-increasing computing performance has opened new and challenging opportunities for research in multimodal human interaction. This book aims to show how existi...

  6. Video Vortex reader II: moving images beyond YouTube

    NARCIS (Netherlands)

    Lovink, G.; Somers Miles, R.

    2011-01-01

    Video Vortex Reader II is the Institute of Network Cultures' second collection of texts that critically explore the rapidly changing landscape of online video and its use. With the success of YouTube ('2 billion views per day') and the rise of other online video sharing platforms, the moving image

  7. Image and video compression fundamentals, techniques, and applications

    CERN Document Server

    Joshi, Madhuri A; Dandawate, Yogesh H; Joshi, Kalyani R; Metkar, Shilpa P

    2014-01-01

    Image and video signals require large transmission bandwidth and storage, leading to high costs. The data must be compressed without a loss or with a small loss of quality. Thus, efficient image and video compression algorithms play a significant role in the storage and transmission of data. Image and Video Compression: Fundamentals, Techniques, and Applications explains the major techniques for image and video compression and demonstrates their practical implementation using MATLAB® programs. Designed for students, researchers, and practicing engineers, the book presents both basic principles

  8. Despeckle filtering for ultrasound imaging and video II selected applications

    CERN Document Server

    Loizou, Christos P

    2015-01-01

    In ultrasound imaging and video, visual perception is hindered by multiplicative speckle noise that degrades quality. Noise reduction is therefore essential for improving visual observation quality, or as a pre-processing step for further automated analysis such as image/video segmentation, texture analysis, and encoding in ultrasound imaging and video. The goal of the first book (book 1 of 2) was to introduce the problem of speckle in ultrasound image and video, as well as the theoretical background, algorithmic steps, and MATLAB™ code for the following group of despeckle filters:

  9. Moving object detection in top-view aerial videos improved by image stacking

    Science.gov (United States)

    Teutsch, Michael; Krüger, Wolfgang; Beyerer, Jürgen

    2017-08-01

    Image stacking is a well-known method that is used to improve the quality of images in video data. A set of consecutive images is aligned by applying image registration and warping. In the resulting image stack, each pixel has redundant information about its intensity value. This redundant information can be used to suppress image noise, resharpen blurry images, or even enhance the spatial image resolution as done in super-resolution. Small moving objects in the videos usually get blurred or distorted by image stacking and thus need to be handled explicitly. We use image stacking in an innovative way: image registration is applied to small moving objects only, and image warping blurs the stationary background that surrounds the moving objects. Our video data come from a small fixed-wing unmanned aerial vehicle (UAV) that acquires top-view gray-value images of urban scenes. Moving objects are mainly cars but also other vehicles such as motorcycles. The resulting images, after applying our proposed image stacking approach, are used to improve baseline algorithms for vehicle detection and segmentation. We improve precision and recall by up to 0.011, which corresponds to a reduction of the number of false positive and false negative detections by more than 3 per second. Furthermore, we show how our proposed image stacking approach can be implemented efficiently.
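    After registration, the stacking step itself reduces to per-pixel aggregation. A minimal sketch, assuming the frames are already aligned (the alignment is the hard part the paper addresses, and is omitted here):

    ```python
    # Per-pixel averaging over an aligned image stack: redundant observations
    # of each pixel suppress zero-mean noise.

    def stack_mean(frames):
        """Per-pixel mean over a list of aligned 2D frames."""
        n, h, w = len(frames), len(frames[0]), len(frames[0][0])
        return [[sum(f[i][j] for f in frames) / n for j in range(w)]
                for i in range(h)]
    ```

    A median instead of a mean is a common variant that is more robust to outlier pixels, e.g. a transient occlusion in one frame.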

  10. Semi-Supervised Image-to-Video Adaptation for Video Action Recognition.

    Science.gov (United States)

    Zhang, Jianguang; Han, Yahong; Tang, Jinhui; Hu, Qinghua; Jiang, Jianmin

    2017-04-01

    Human action recognition has been well explored in applications of computer vision. Many successful action recognition methods have shown that action knowledge can be effectively learned from motion videos or still images. For the same action, the appropriate action knowledge learned from different types of media, e.g., videos or images, may be related. However, less effort has been made to improve the performance of action recognition in videos by adapting the action knowledge conveyed from images to videos. Most of the existing video action recognition methods suffer from the problem of lacking sufficient labeled training videos. In such cases, over-fitting is a potential problem and the performance of action recognition is constrained. In this paper, we propose an adaptation method to enhance action recognition in videos by adapting knowledge from images. The adapted knowledge is utilized to learn the correlated action semantics by exploring the common components of both labeled videos and images. Meanwhile, we extend the adaptation method to a semi-supervised framework which can leverage both labeled and unlabeled videos. Thus, the over-fitting can be alleviated and the performance of action recognition is improved. Experiments on public benchmark datasets and real-world datasets show that our method outperforms several other state-of-the-art action recognition methods.

  11. 4DCAPTURE: a general purpose software package for capturing and analyzing two- and three-dimensional motion data acquired from video sequences

    Science.gov (United States)

    Walton, James S.; Hodgson, Peter; Hallamasek, Karen; Palmer, Jake

    2003-07-01

    4DVideo is creating a general-purpose capability for capturing and analyzing kinematic data from video sequences in near real time. The core element of this capability is a software package designed for the PC platform. The software ("4DCapture") is designed to capture and manipulate customized AVI files that can contain a variety of synchronized data streams, including audio, video, centroid locations, and signals acquired from more traditional sources (such as accelerometers and strain gauges). The code includes simultaneous capture or playback of multiple video streams, and linear editing of the images (together with the ancillary data embedded in the files). Corresponding landmarks seen from two or more views are matched automatically, and photogrammetric algorithms permit multiple landmarks to be tracked in two and three dimensions, with or without lens calibrations. Trajectory data can be processed within the main application, or they can be exported to a spreadsheet where they can be processed or passed along to a more sophisticated stand-alone data analysis application. Previous attempts to develop such applications for high-speed imaging have been limited in their scope, or by the complexity of the application itself. 4DVideo has devised a friendly ("FlowStack") user interface that assists the end user in capturing and treating image sequences in a natural progression. 4DCapture employs the AVI 2.0 standard and DirectX technology, which effectively eliminates the file-size limitations found in older applications. In early tests, 4DVideo has streamed three RS-170 video sources to disk for more than an hour without loss of data. At this time, the software can acquire video sequences in three ways: (1) directly, from up to three hard-wired cameras supplying RS-170 (monochrome) signals; (2) directly, from a single camera or video recorder supplying an NTSC (color) signal; and (3) by importing existing video streams in the AVI 1.0 or AVI 2.0 formats. The

  12. GPM GROUND VALIDATION PRECIPITATION VIDEO IMAGER (PVI) GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The Precipitation Video Imager (PVI) collected precipitation particle images and drop size distribution data during November 2011 through March 2012 as part of the...

  13. Influence of violent video gaming on determinants of the acquired capability for suicide.

    Science.gov (United States)

    Teismann, Tobias; Förtsch, Eva-Maria A D; Baumgart, Patrick; Het, Serkan; Michalak, Johannes

    2014-01-30

    The interpersonal theory of suicidal behavior proposes that fearlessness of death and physical pain insensitivity are necessary requisites for self-inflicted lethal self-harm. Repeated experiences with painful and provocative events are supposed to cause an incremental increase in acquired capability. The present study examined whether playing a first-person shooter game, in contrast to a first-person racing game, increases pain tolerance, a dimension of the acquired capability construct, and risk-taking behavior, a risk factor for developing acquired capability. N=81 male participants were randomly assigned to either play an action shooter or a racing game before engaging in a game on risk-taking behavior and performing a cold pressor task (CPT). Participants exhibited higher pain tolerance after playing an action shooter game than after playing a racing game. Furthermore, playing an action shooter was generally associated with heightened risk-taking behavior. Group differences were not attributable to the effects of the different types of games on self-reported mood and arousal. Overall, these results indicate that action-shooter gaming alters pain tolerance and risk-taking behavior. Therefore, it may well be that long-term consumption of violent video games increases a person's capability to enact lethal self-harm. © 2013 Published by Elsevier Ireland Ltd.

  14. Tracking of multiple points using color video image analyzer

    Science.gov (United States)

    Nennerfelt, Leif

    1990-08-01

    The Videomex-X is a new product intended for use in biomechanical measurement. It tracks up to six points at 60 frames per second using colored markers placed on the subject. The system can be used for applications such as gait analysis, studying facial movements, or tracking the pattern of movements of individuals in a group. The Videomex-X is comprised of a high-speed color image analyzer, an RGB color video camera, an IBM AT compatible computer and motion analysis software. The markers are made from brightly colored plastic disks and each marker is a different color. Since the markers are unique, the problem of misidentification of markers does not occur. The Videomex-X performs real-time analysis so that the researcher can get immediate feedback on the subject's performance. High-speed operation is possible because the system uses distributed processing. The image analyzer is a hardwired parallel image processor which identifies the markers within the video picture and computes their x-y locations. The image analyzer sends the x-y coordinates to the AT computer, which performs additional analysis and presents the results. The x-y coordinate data acquired during the experiment may be streamed to the computer's hard disk. This allows the data to be re-analyzed repeatedly using different analysis criteria. The original Videomex-X tracked in two dimensions. However, a 3-D system has recently been completed. The algorithm used by the system to derive performance results from the x-y coordinates is contained in a separate ASCII file. These files can be modified by the operator to produce the required type of data reduction.
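    The marker-location step that the hardware image analyzer performs can be sketched in software as a per-color centroid search. The frame, reference colors and tolerance below are invented for illustration; the Videomex-X's parallel hardware matcher is not documented here.

```python
def marker_centroids(frame, colors, tol=30):
    """Locate uniquely colored markers: average the (x, y) coordinates of
    all pixels within `tol` of each reference RGB color."""
    found = {}
    for name, ref in colors.items():
        xs, ys = [], []
        for y, row in enumerate(frame):
            for x, px in enumerate(row):
                if all(abs(c - r) <= tol for c, r in zip(px, ref)):
                    xs.append(x)
                    ys.append(y)
        if xs:  # centroid = mean of the matching pixel coordinates
            found[name] = (sum(xs) / len(xs), sum(ys) / len(ys))
    return found

# Tiny synthetic frame: one red-ish and one blue-ish marker on black.
black = (0, 0, 0)
frame = [[black, (250, 10, 10), black],
         [black, black, (10, 10, 250)],
         [black, black, black]]
markers = marker_centroids(frame, {"red": (255, 0, 0), "blue": (0, 0, 255)})
```

    Because each marker is a unique color, a single pass per color suffices and no correspondence ambiguity arises.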

  15. Image and video search engine for the World Wide Web

    Science.gov (United States)

    Smith, John R.; Chang, Shih-Fu

    1997-01-01

    We describe a visual information system prototype for searching for images and videos on the World-Wide Web. New visual information in the form of images, graphics, animations and videos is being published on the Web at an incredible rate. However, cataloging this visual data is beyond the capabilities of current text-based Web search engines. In this paper, we describe a complete system by which visual information on the Web is (1) collected by automated agents, (2) processed in both text and visual feature domains, (3) catalogued and (4) indexed for fast search and retrieval. We introduce an image and video search engine which utilizes both text-based navigation and content-based technology for searching visually through the catalogued images and videos. Finally, we provide an initial evaluation based upon the cataloging of over one half million images and videos collected from the Web.

  16. Acquired lesions of the corpus callosum: MR imaging

    Energy Technology Data Exchange (ETDEWEB)

    Uchino, A.; Takase, Y.; Nomiyama, K.; Egashira, R.; Kudo, S. [Saga Medical School, Department of Radiology, Saga (Japan)

    2006-04-15

    In this pictorial review, we illustrate acquired diseases or conditions of the corpus callosum that may be found by magnetic resonance (MR) imaging of the brain, including infarction, bleeding, diffuse axonal injury, multiple sclerosis, acute disseminated encephalomyelitis, Marchiafava-Bignami disease, glioblastoma, gliomatosis cerebri, lymphoma, metastasis, germinoma, infections, metabolic diseases, transient splenial lesion, dilated Virchow-Robin spaces, wallerian degeneration after hemispheric damage and focal splenial gliosis. MR imaging is useful for the detection and differential diagnosis of corpus callosal lesions. Due to the anatomical shape and location of the corpus callosum, both coronal and sagittal fluid-attenuated inversion recovery images are most useful for visualizing lesions of this structure. (orig.)

  17. Eye-Movement Tracking Using Compressed Video Images

    Science.gov (United States)

    Mulligan, Jeffrey B.; Beutter, Brent R.; Hull, Cynthia H. (Technical Monitor)

    1994-01-01

    Infrared video cameras offer a simple noninvasive way to measure the position of the eyes using relatively inexpensive equipment. Several commercial systems are available which use special hardware to localize features in the image in real time, but the constraint of real-time performance limits the complexity of the applicable algorithms. In order to get better resolution and accuracy, we have used off-line processing to apply more sophisticated algorithms to the images. In this case, a major technical challenge is the real-time acquisition and storage of the video images. This has been solved using a strictly digital approach, exploiting the burgeoning field of hardware video compression. In this paper we describe the algorithms we have developed for tracking the movements of the eyes in video images, and present experimental results showing how the accuracy is affected by the degree of video compression.

  18. VICAR - VIDEO IMAGE COMMUNICATION AND RETRIEVAL

    Science.gov (United States)

    Wall, R. J.

    1994-01-01

    VICAR (Video Image Communication and Retrieval) is a general purpose image processing software system that has been under continuous development since the late 1960's. Originally intended for data from the NASA Jet Propulsion Laboratory's unmanned planetary spacecraft, VICAR is now used for a variety of other applications including biomedical image processing, cartography, earth resources, and geological exploration. The development of this newest version of VICAR emphasized a standardized, easily-understood user interface, a shield between the user and the host operating system, and a comprehensive array of image processing capabilities. Structurally, VICAR can be divided into roughly two parts: a suite of applications programs and an executive which serves as the interface between the applications, the operating system, and the user. There are several hundred applications programs ranging in function from interactive image editing, data compression/decompression, and map projection, to blemish, noise, and artifact removal, mosaic generation, and pattern recognition and location. An information management system designed specifically for handling image related data can merge image data with other types of data files. The user accesses these programs through the VICAR executive, which consists of a supervisor and a run-time library. From the viewpoint of the user and the applications programs, the executive is an environment that is independent of the operating system. VICAR does not replace the host computer's operating system; instead, it overlays the host resources. The core of the executive is the VICAR Supervisor, which is based on NASA Goddard Space Flight Center's Transportable Applications Executive (TAE). Various modifications and extensions have been made to optimize TAE for image processing applications, resulting in a user-friendly environment.
The rest of the executive consists of the VICAR Run-Time Library, which provides a set of subroutines (image

  19. Spatio-temporal image inpainting for video applications

    Directory of Open Access Journals (Sweden)

    Voronin Viacheslav

    2017-01-01

    Full Text Available Video inpainting, or completion, is a vital video enhancement technique used to repair or edit digital videos. This paper describes a framework for temporally consistent video completion. The proposed method removes dynamic objects or restores missing or tainted regions in a video sequence by utilizing spatial and temporal information from neighboring scenes. A masking algorithm is used to detect scratches or damaged portions in video frames. The algorithm iteratively performs the following operations: acquire a frame; update the scene model; update the positions of moving objects; replace the parts of the frame occupied by objects marked for removal using a background model. In this paper, we extend an image inpainting algorithm based on texture and structure reconstruction by incorporating an improved strategy for video. Our algorithm is able to deal with a variety of challenging situations which naturally arise in video inpainting, such as the correct reconstruction of dynamic textures, multiple moving objects and a moving background. Experimental comparisons to state-of-the-art video completion methods demonstrate the effectiveness of the proposed approach. It is shown that the proposed spatio-temporal image inpainting method can restore missing blocks and remove text from scenes in videos.
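    The replace-from-background step of the iteration above can be sketched as follows. Scalar-intensity frames and the learning rate are illustrative assumptions; the paper's texture/structure reconstruction is considerably richer.

```python
def inpaint_frame(frame, mask, background, alpha=0.05):
    """Fill masked (damaged or removed-object) pixels from a running-average
    background model, updating the model from the unmasked pixels."""
    out = [row[:] for row in frame]
    for y, row in enumerate(frame):
        for x, v in enumerate(row):
            if mask[y][x]:
                out[y][x] = background[y][x]  # replace from the background model
            else:
                # Exponential running average keeps the model current.
                background[y][x] = (1 - alpha) * background[y][x] + alpha * v
    return out

frame = [[10, 10, 99], [10, 10, 10]]  # 99 simulates a scratch
mask = [[0, 0, 1], [0, 0, 0]]         # scratch flagged by the masking step
background = [[10.0, 10.0, 10.0], [10.0, 10.0, 10.0]]
restored = inpaint_frame(frame, mask, background)
```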

  20. Study of recognizing multiple persons' complicated hand gestures from the video sequence acquired by a moving camera

    Science.gov (United States)

    Dan, Luo; Ohya, Jun

    2010-02-01

    Recognizing hand gestures from the video sequence acquired by a dynamic camera could be a useful interface between humans and mobile robots. We develop a state-based approach to extract and recognize hand gestures from moving camera images. We improved the Human-Following Local Coordinate (HFLC) System, a very simple and stable method for extracting hand motion trajectories, which is obtained from the located human face, body part and hand blob changing factor. A Condensation algorithm and a PCA-based algorithm were applied to recognize the extracted hand trajectories. In previous research, the Condensation-algorithm-based method was applied only to one person's hand gestures. In this paper, we propose a principal component analysis (PCA) based approach to improve the recognition accuracy. For further improvement, temporal changes in the observed hand area changing factor are utilized as new image features to be stored in the database after being analyzed by PCA. Every hand gesture trajectory in the database is classified into one-hand gesture categories, two-hand gesture categories, or temporal changes in hand blob features. We demonstrate the effectiveness of the proposed method by conducting experiments on 45 kinds of Japanese and American Sign Language gestures obtained from 5 people. Our experimental results show that better recognition performance is obtained by the PCA-based approach than by the Condensation-algorithm-based method.
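    The PCA step applied to the trajectory features can be illustrated with a minimal power-iteration sketch on 2-D points; the data and iteration count are illustrative assumptions (the paper's features are higher-dimensional).

```python
import math

def principal_component(points, iters=50):
    """Dominant principal direction of 2-D points via power iteration
    on the 2x2 covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    vx, vy = 1.0, 0.0  # arbitrary starting vector for power iteration
    for _ in range(iters):
        nx, ny = cxx * vx + cxy * vy, cxy * vx + cyy * vy
        norm = math.hypot(nx, ny) or 1.0
        vx, vy = nx / norm, ny / norm
    return vx, vy

# Trajectory points lying along the diagonal y = x.
direction = principal_component([(float(i), float(i)) for i in range(10)])
```

    Projecting each trajectory onto the leading components yields the compact feature vectors stored in the gesture database.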

  1. Research on defogging technology of video image based on FPGA

    Science.gov (United States)

    Liu, Shuo; Piao, Yan

    2015-03-01

    Owing to scattering by atmospheric particles, video images captured by outdoor surveillance systems have low contrast and brightness, which directly affects the application value of such systems. Traditional defogging technology has mostly been implemented in software, using defogging algorithms for single-frame images; these algorithms involve heavy computation and high time complexity. Defogging of video images based on a Digital Signal Processor (DSP) suffers from complex peripheral circuitry, cannot achieve real-time processing, and is hard to debug and upgrade. In this paper, using an improved dark channel prior algorithm, we propose a defogging technology for video images based on a Field Programmable Gate Array (FPGA). Compared to traditional defogging methods, high-resolution video images can be processed in real time. Furthermore, the function modules of the system have been designed in a hardware description language. The results show that the FPGA-based defogging system can process video with a minimum resolution of 640×480 in real time. After defogging, the brightness and contrast of the video images are improved effectively. Therefore, the defogging technology proposed in this paper has a wide variety of applications, including aviation, forest fire prevention, national security and other important surveillance tasks.
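    A hedged software sketch of the dark channel prior idea the FPGA design builds on (window size, omega, t0 and the tiny 2x2 test image are illustrative assumptions; the paper's improved algorithm and its hardware mapping are not reproduced):

```python
def dark_channel(img, patch=1):
    """Per-pixel minimum over RGB, then a min-filter over a local window."""
    h, w = len(img), len(img[0])
    mins = [[min(img[y][x]) for x in range(w)] for y in range(h)]
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            vals = [mins[j][i]
                    for j in range(max(0, y - patch), min(h, y + patch + 1))
                    for i in range(max(0, x - patch), min(w, x + patch + 1))]
            row.append(min(vals))
        out.append(row)
    return out

def defog(img, omega=0.95, t0=0.1):
    """Recover scene radiance J = (I - A) / t + A with t from the dark channel."""
    dark = dark_channel(img)
    # Atmospheric light A: pixel with the largest dark-channel value (simplified).
    flat = [(dark[y][x], y, x) for y in range(len(img)) for x in range(len(img[0]))]
    _, ay, ax = max(flat)
    A = [max(c, 1e-6) for c in img[ay][ax]]
    out = []
    for y, row in enumerate(img):
        orow = []
        for x, px in enumerate(row):
            t = max(1.0 - omega * dark[y][x] / max(A), t0)  # transmission estimate
            orow.append(tuple(min(1.0, max(0.0, (c - a) / t + a))
                              for c, a in zip(px, A)))
        out.append(orow)
    return out

foggy = [[(0.8, 0.8, 0.8), (0.9, 0.85, 0.8)],
         [(0.7, 0.75, 0.8), (0.6, 0.6, 0.6)]]
clear = defog(foggy)
```

    On the FPGA, the min-filter and per-pixel recovery map naturally onto pipelined line buffers, which is what enables real-time throughput.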

  2. PIZZARO: Forensic analysis and restoration of image and video data.

    Science.gov (United States)

    Kamenicky, Jan; Bartos, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozamsky, Adam; Saic, Stanislav; Sroubek, Filip; Sorel, Michal; Zita, Ales; Zitova, Barbara; Sima, Zdenek; Svarc, Petr; Horinek, Jan

    2016-07-01

    This paper introduces a set of methods for image and video forensic analysis. They were designed to help assess image and video credibility and origin and to restore and increase image quality by diminishing unwanted blur, noise, and other possible artifacts. The motivation came from the best practices used in criminal investigations utilizing images and/or videos. The determination of the image source, the verification of the image content, and image restoration were identified as the most important issues whose automation can facilitate criminalists' work. Novel theoretical results complemented with existing approaches (LCD re-capture detection and denoising) were implemented in the PIZZARO software tool, which consists of the image processing functionality as well as reporting and archiving functions to ensure the repeatability of image analysis procedures, and thus fulfills the formal aspects of image/video analysis work. A comparison of the newly proposed methods with state-of-the-art approaches is shown. Real use cases are presented, which illustrate the functionality of the developed methods and demonstrate their applicability in different situations. The use cases as well as the method design were developed in tight cooperation between scientists from the Institute of Criminalistics, the National Drug Headquarters of the Criminal Police and Investigation Service of the Police of the Czech Republic, and image processing experts from the Czech Academy of Sciences. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  3. Method and apparatus for reading meters from a video image

    Science.gov (United States)

    Lewis, Trevor J.; Ferguson, Jeffrey J.

    1997-01-01

    A method and system to enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower.
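    The calibrate-then-read flow described above can be sketched for an analog dial: estimate the needle's angle from its pixels, then interpolate a value between two calibration points. The dial geometry and calibration values are invented for illustration, not taken from the patented system.

```python
import math

def needle_angle(needle_pixels, center):
    """Angle (degrees) of a meter needle: mean direction of its pixels
    relative to the dial center (image y grows downward)."""
    sx = sum(x - center[0] for x, y in needle_pixels)
    sy = sum(y - center[1] for x, y in needle_pixels)
    return math.degrees(math.atan2(sy, sx))

def calibrated_reading(angle, calibration):
    """Linearly interpolate a meter value from two (angle, value) points."""
    (a0, v0), (a1, v1) = calibration
    return v0 + (angle - a0) * (v1 - v0) / (a1 - a0)

# Needle pointing straight up from a dial centered at (50, 50).
needle = [(50, 40), (50, 30), (50, 20)]
angle = needle_angle(needle, (50, 50))
value = calibrated_reading(angle, ((-180.0, 0.0), (0.0, 100.0)))
```

    Calibrating each meter region once makes later readings a pure lookup, which is what keeps the approach non-intrusive.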

  4. Method to acquire regions of fruit, branch and leaf from image of red apple in orchard

    Science.gov (United States)

    Lv, Jidong; Xu, Liming

    2017-07-01

    This work proposes a method to acquire the regions of fruit, branch and leaf from images of red apples in an orchard. To acquire the fruit image, the R-G image is extracted from the RGB image for erosion, hole filling, small-region removal, dilation and opening operations, in that order; the fruit image is then obtained by threshold segmentation. To acquire the leaf image, the fruit image is subtracted from the RGB image before extracting the 2G-R-B image; the leaf image is then obtained by small-region removal and threshold segmentation. To acquire the branch image, dynamic threshold segmentation is conducted on the R-G image; the segmented image is added to the fruit image, and the result, together with the leaf image, is subtracted from the RGB image. Finally, the branch image is obtained by an opening operation, small-region removal and threshold segmentation after extracting the R-G image from the subtraction result. Compared with previous methods, more complete images of fruit, leaf and branch can be acquired from red apple images with this method.
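    The R-G and 2G-R-B color indices at the heart of the pipeline can be sketched as simple channel thresholds. The threshold values and test pixels below are invented; the paper derives its thresholds per image and adds morphological cleanup.

```python
def segment(img, index, thresh):
    """Binary mask from a color-index channel: `index` maps an (R, G, B)
    pixel to a scalar, which is thresholded at `thresh`."""
    return [[1 if index(px) >= thresh else 0 for px in row] for row in img]

r_minus_g = lambda p: p[0] - p[1]                # R-G: highlights red fruit
excess_green = lambda p: 2 * p[1] - p[0] - p[2]  # 2G-R-B: highlights leaves

img = [[(200, 40, 30), (60, 150, 50)],   # apple pixel, leaf pixel
       [(90, 85, 80), (70, 160, 60)]]    # branch pixel, leaf pixel
fruit_mask = segment(img, r_minus_g, 100)
leaf_mask = segment(img, excess_green, 100)
```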

  5. 3D MODEL GENERATION USING OBLIQUE IMAGES ACQUIRED BY UAV

    Directory of Open Access Journals (Sweden)

    A. Lingua

    2017-07-01

    Full Text Available In recent years, many studies revealed the advantages of using airborne oblique images for obtaining improved 3D city models (including façades and building footprints). Here the acquisition and use of oblique images from a low-cost and open source Unmanned Aerial Vehicle (UAV) for the 3D high-level-of-detail reconstruction of historical architecture is evaluated. The critical issues of such acquisitions (flight planning strategies, ground control point distribution, etc.) are described. Several problems should be considered in the flight planning: the best approach to cover the whole object with the minimum time of flight; visibility of vertical structures; occlusions due to the context; acquisition of all the parts of the object (the closest and the farthest) with similar resolution; suitable camera inclination, and so on. In this paper a solution is proposed in order to acquire oblique images with only one flight. The data processing was realized using a Structure-from-Motion-based approach for point cloud generation using dense image-matching algorithms implemented in an open source software package. The achieved results are analysed by considering some check points and some reference LiDAR data. The system was tested by surveying a historical architectural complex: the "Sacro Monte di Varallo Sesia" in the north-west of Italy. This study demonstrates that the use of oblique images acquired from a low-cost UAV system and processed through open source software is an effective methodology for surveying cultural heritage, characterized by limited accessibility, the need for detail, rapidity of the acquisition phase, and often reduced budgets.

  6. Communicating pictures a course in image and video coding

    CERN Document Server

    Bull, David R

    2014-01-01

    Communicating Pictures starts with a unique historical perspective of the role of images in communications and then builds on this to explain the applications and requirements of a modern video coding system. It draws on the author's extensive academic and professional experience of signal processing and video coding to deliver a text that is algorithmically rigorous, yet accessible, relevant to modern standards, and practical. It offers a thorough grounding in visual perception, and demonstrates how modern image and video compression methods can be designed in order to meet the rate-quality performance levels demanded by today's applications, networks and users. With this book you will learn: Practical issues when implementing a codec, such as picture boundary extension and complexity reduction, with particular emphasis on efficient algorithms for transforms, motion estimators and error resilience Conflicts between conventional video compression, based on variable length coding and spatiotemporal prediction,...

  7. Radionuclide brain imaging in acquired immunodeficiency syndrome (AIDS)

    Energy Technology Data Exchange (ETDEWEB)

    Costa, D.C.; Gacinovic, S.; Miller, R.F. [London University College Medical School, Middlesex Hospital, London (United Kingdom)

    1995-09-01

    Infection with the Human Immunodeficiency Virus type 1 (HIV-1) may produce a variety of central nervous system (CNS) symptoms and signs. CNS involvement in patients with the Acquired Immunodeficiency Syndrome (AIDS) includes AIDS dementia complex or HIV-1 associated cognitive/motor complex (widely known as HIV encephalopathy), progressive multifocal leucoencephalopathy (PML), opportunistic infections such as Toxoplasma gondii, TB and Cryptococcus, and infiltration by non-Hodgkin's B cell lymphoma. High-resolution structural imaging investigations, either X-ray Computed Tomography (CT scan) or Magnetic Resonance Imaging (MRI), have contributed to the understanding and definition of cerebral damage caused by HIV encephalopathy. Atrophy and mainly high-signal scattered white matter abnormalities are commonly seen with MRI. PML produces focal white matter high-signal abnormalities due to multiple foci of demyelination. However, using structural imaging techniques there are no reliable parameters to distinguish focal lesions due to opportunistic infection (Toxoplasma gondii abscess) from neoplasm (lymphoma infiltration). We review the use of radionuclide brain imaging techniques in the investigation of HIV-infected patients. Brain perfusion Single Photon Emission Tomography (SPET), neuroreceptor and Positron Emission Tomography (PET) studies are reviewed. Greater emphasis is put on the potential of some radiopharmaceuticals, considered to be brain tumour markers, to distinguish intracerebral lymphoma infiltration from Toxoplasma infection. SPET with {sup 201}Tl using quantification (tumour to non-tumour radioactivity ratios) appears to be a very promising technique to identify intracerebral lymphoma.

  8. Compression of mixed video and graphics images for TV systems

    Science.gov (United States)

    van der Schaar-Mitrea, Mihaela; de With, Peter H. N.

    1998-01-01

    The diversity in TV images has increased with the growing application of computer graphics. In this paper we study a coding system that supports both the lossless coding of such graphics data and regular lossy video compression. The lossless coding techniques are based on run-length and arithmetic coding. For video compression, we introduce a simple block predictive coding technique featuring individual pixel access, so that it enables a gradual shift from lossless coding of graphics to lossy coding of video. An overall bit rate control completes the system. Computer simulations show a very high quality with a compression factor between 2 and 3.
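    The run-length pass suited to flat graphics regions can be sketched as follows; the scanline values are illustrative, and the paper pairs run-length with arithmetic coding for the full lossless path.

```python
def rle_encode(pixels):
    """Run-length encode a scanline as (value, run length) pairs."""
    runs = []
    for v in pixels:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1      # extend the current run
        else:
            runs.append([v, 1])   # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Expand (value, run length) pairs back into the original scanline."""
    return [v for v, n in runs for _ in range(n)]

line = [7, 7, 7, 7, 0, 0, 255, 255, 255]  # flat graphics-like scanline
runs = rle_encode(line)
```

    Long constant runs, common in rendered graphics but rare in camera video, are exactly where this pass pays off.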

  9. New Directions for Academic Video Game Collections: Strategies for Acquiring, Supporting, and Managing Online Materials

    Science.gov (United States)

    Robson, Diane; Durkee, Patrick

    2012-01-01

    The work of collection development in academic video game collections is at a crucial point of transformation--gaming librarians are ready to expand beyond console games collected in disc and cartridge format to the world of Internet games. At the same time, forms and genres of video games such as serious and independent games are increasingly…

  10. Applying deep learning to classify pornographic images and videos

    OpenAIRE

    Moustafa, Mohamed

    2015-01-01

    It is no secret that pornographic material is now one click away from everyone, including children and minors. General social media networks are striving to isolate adult images and videos from normal ones. Intelligent image analysis methods can help to automatically detect and isolate questionable images in media. Unfortunately, these methods require vast experience to design the classifier including one or more of the popular computer vision feature descriptors. We propose to build a clas...

  11. Self-acquired patient images: the promises and the pitfalls.

    Science.gov (United States)

    Damanpour, Shadi; Srivastava, Divya; Nijhawan, Rajiv I

    2016-03-01

    Self-acquired patient images, also known as selfies, are increasingly utilized in the practice of dermatology; however, research on their utility is somewhat limited. While the implementation of selfies has yet to be universally accepted, their role in triage appears to be especially useful. The potential for reducing office wait times, expediting referrals, and providing dermatologic services to patients with limited access to care is promising. In addition, as technology advances, the number of smartphone applications related to dermatology that are available to the general public has risen exponentially. With appropriate standardization, regulation, and confidentiality measures, these tools can be feasible adjuncts in clinical practice, dermatologic surgery, and teledermatology. Selfies likely will have a large role in dermatologic practice and delivery in the future. ©2015 Frontline Medical Communications.

  12. Mathematics from Still and Video Images.

    Science.gov (United States)

    Oldknow, Adrian

    2003-01-01

    Discusses simple tools for digitizing objects of interest from image files for treatment in other software such as graph plotters, data-handling software, or graphic calculators. Explores methods using MS Paint, Excel, DigitiseImage and TI Interactive (TII). (Author/NB)

  13. Recent advances in intelligent image search and video retrieval

    CERN Document Server

    2017-01-01

    This book initially reviews the major feature representation and extraction methods and effective learning and recognition approaches, which have broad applications in the context of intelligent image search and video retrieval. It subsequently presents novel methods, such as improved soft assignment coding, Inheritable Color Space (InCS) and the Generalized InCS framework, the sparse kernel manifold learner method, the efficient Support Vector Machine (eSVM), and the Scale-Invariant Feature Transform (SIFT) features in multiple color spaces. Lastly, the book presents clothing analysis for subject identification and retrieval, and performance evaluation methods of video analytics for traffic monitoring. Digital images and videos are proliferating at an amazing speed in the fields of science, engineering and technology, media and entertainment. With the huge accumulation of such data, keyword searches and manual annotation schemes may no longer be able to meet the practical demand for retrieving relevant conte...

  14. The advantages of using photographs and video images in ...

    African Journals Online (AJOL)

    Background: The purpose of this study was to evaluate the advantages of a telephone consultation with a specialist in paediatric surgery after taking photographs and video images by a general practitioner for the diagnosis of some diseases. Materials and Methods: This was a prospective study of the reliability of paediatric ...

  15. Low-noise video amplifiers for imaging CCD's

    Science.gov (United States)

    Scinicariello, F.

    1976-01-01

    Various techniques were developed which enable the CCD (charge coupled device) imaging array user to obtain optimum performance from the device. A CCD video channel was described, and detector-preamplifier interface requirements were examined. A noise model for the system was discussed at length and laboratory data presented and compared to predicted results.

  16. Image-guided transorbital procedures with endoscopic video augmentation.

    Science.gov (United States)

    DeLisi, Michael P; Mawn, Louise A; Galloway, Robert L

    2014-09-01

    Surgical interventions to the orbital space behind the eyeball are limited to highly invasive procedures due to the confined nature of the region along with the presence of several intricate soft tissue structures. A minimally invasive approach to orbital surgery would enable several therapeutic options, particularly new treatment protocols for optic neuropathies such as glaucoma. The authors have developed an image-guided system for the purpose of navigating a thin flexible endoscope to a specified target region behind the eyeball. Navigation within the orbit is particularly challenging despite its small volume, as the presence of fat tissue occludes the endoscopic visual field while the surgeon must constantly be aware of optic nerve position. This research investigates the impact of endoscopic video augmentation to targeted image-guided navigation in a series of anthropomorphic phantom experiments. A group of 16 surgeons performed a target identification task within the orbits of four skull phantoms. The task consisted of identifying the correct target, indicated by the augmented video and the preoperative imaging frames, out of four possibilities. For each skull, one orbital intervention was performed with video augmentation, while the other was done with the standard image guidance technique, in random order. The authors measured a target identification accuracy of 95.3% and 85.9% for the augmented and standard cases, respectively, with statistically significant improvement in procedure time (Z=-2.044, p=0.041) and intraoperator mean procedure time (Z=2.456, p=0.014) when augmentation was used. Improvements in both target identification accuracy and interventional procedure time suggest that endoscopic video augmentation provides valuable additional orientation and trajectory information in an image-guided procedure. Utilization of video augmentation in transorbital interventions could further minimize complication risk and enhance surgeon comfort and

  17. Object Tracking in Frame-Skipping Video Acquired Using Wireless Consumer Cameras

    Directory of Open Access Journals (Sweden)

    Anlong Ming

    2012-10-01

    Full Text Available Object tracking is an important and fundamental task in computer vision and its high-level applications, e.g., intelligent surveillance, motion-based recognition, video indexing, traffic monitoring and vehicle navigation. However, the recent widespread use of wireless consumer cameras often produces low-quality videos with frame-skipping, and this makes object tracking difficult. Previous tracking methods generally depend heavily on object appearance or motion continuity and cannot be directly applied to frame-skipping videos. In this paper, we propose an improved particle filter for object tracking to overcome the frame-skipping difficulties. The novelty of our particle filter lies in using the detection result of erratic motion to ameliorate the transition model for a better trial distribution. Experimental results show that the proposed approach improves the tracking accuracy in comparison with the state-of-the-art methods, even when both the object and the camera are in motion.
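    A minimal 1-D version of the predict-reweight-resample cycle behind such a particle filter is sketched below. The Gaussian likelihood, particle count and motion noise are illustrative assumptions; the paper's erratic-motion-aware transition model is not reproduced.

```python
import math
import random

random.seed(0)  # deterministic demo

def particle_filter_step(particles, weights, likelihood, motion_std=1.0):
    """One predict-reweight-resample cycle of a 1-D particle filter.
    `likelihood(x)` scores how well position x explains the observation."""
    # Predict: diffuse particles with a simple random-walk transition model.
    particles = [p + random.gauss(0.0, motion_std) for p in particles]
    # Update: reweight by the observation likelihood and normalize.
    weights = [w * likelihood(p) for p, w in zip(particles, weights)]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample (multinomial) to avoid weight degeneracy.
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)

# Track a stationary target near x = 5 with a Gaussian observation model.
likelihood = lambda x: math.exp(-0.5 * (x - 5.0) ** 2)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
weights = [1.0 / 500] * 500
for _ in range(10):
    particles, weights = particle_filter_step(particles, weights, likelihood)
estimate = sum(particles) / len(particles)
```

    Widening `motion_std` when frames are dropped is one simple way to keep the trial distribution covering the target's possible jump.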

  18. Recognition of Bullet Holes Based on Video Image Analysis

    Science.gov (United States)

    Ruolin, Zhu; Jianbo, Liu; Yuan, Zhang; Xiaoyu, Wu

    2017-10-01

    Computer vision technology is used in military shooting training. To overcome the over-detection and missed-detection limitations of bullet hole recognition based on video image analysis, this paper adopts a support vector machine algorithm and a convolutional neural network to extract and recognize bullet holes in digital video, and compares their performance. HOG features of the bullet holes are extracted and an SVM classifier is trained quickly, even though the target is in an outdoor environment. Experiments show that the support vector machine algorithm used in this paper realizes fast and efficient extraction and recognition of bullet holes, improving the efficiency of shooting training.
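    A toy version of the HOG feature extraction feeding the SVM (single cell, unsigned orientations; the bin count and test patch are illustrative, and a real HOG uses many cells with block normalization):

```python
import math

def hog_feature(patch, bins=9):
    """Toy HOG-style descriptor: one L2-normalized histogram of unsigned
    gradient orientations over a grayscale patch."""
    h, w = len(patch), len(patch[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]   # central differences
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned orientation
            hist[int(ang / 180.0 * bins) % bins] += mag
    norm = math.sqrt(sum(v * v for v in hist)) or 1.0
    return [v / norm for v in hist]

# A horizontal edge: all gradient energy falls into the 90-degree bin.
patch = [[0, 0, 0, 0],
         [0, 0, 0, 0],
         [9, 9, 9, 9],
         [9, 9, 9, 9]]
feature = hog_feature(patch)
```

    Because the descriptor encodes edge orientation rather than raw intensity, it stays usable under the outdoor lighting variation mentioned above.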

  19. Guided filtering for solar image/video processing

    Science.gov (United States)

    Xu, Long; Yan, Yihua; Cheng, Jun

    2017-06-01

A new image enhancement algorithm employing guided filtering is proposed in this work for the enhancement of solar images and videos, so that users can easily identify important fine structures embedded in the recorded images/movies of solar observations. The proposed algorithm can efficiently remove image noise, including Gaussian and impulse noise. Meanwhile, it further highlights fibrous structures on and beyond the solar disk. These fibrous structures can clearly demonstrate the progress of solar flares, prominences, coronal mass ejections, magnetic fields, and so on. The experimental results show that the proposed algorithm significantly enhances the visual quality of solar images relative to the original input and to several classical image enhancement algorithms, thus facilitating easier determination of interesting solar burst activities in recorded images/movies.
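Guided filtering fits a local linear model between a guidance image and the input. A minimal grayscale implementation of that standard filter is sketched below under common assumptions (box-filtered statistics; the radius `r` and regularization `eps` are illustrative, and the paper's full enhancement pipeline adds steps beyond this).

```python
import numpy as np

def box_mean(a, r):
    """Mean over a (2r+1)x(2r+1) window via an integral image, reflect-padded edges."""
    k = 2 * r + 1
    p = np.pad(a, r, mode='reflect')
    c = np.pad(np.cumsum(np.cumsum(p, 0), 1), ((1, 0), (1, 0)))
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def guided_filter(I, p, r=4, eps=1e-3):
    """Grayscale guided filter: local linear model p ~ a*I + b within each window."""
    mI, mp = box_mean(I, r), box_mean(p, r)
    varI = box_mean(I * I, r) - mI * mI
    covIp = box_mean(I * p, r) - mI * mp
    a = covIp / (varI + eps)          # eps controls how strongly edges are kept
    b = mp - a * mI
    return box_mean(a, r) * I + box_mean(b, r)

rng = np.random.default_rng(3)
base = np.zeros((32, 32)); base[:, 16:] = 1.0          # a clean step edge
noisy = base + 0.1 * rng.standard_normal((32, 32))
out = guided_filter(noisy, noisy, r=3, eps=0.01)       # self-guided smoothing
const_out = guided_filter(np.full((16, 16), 2.0), np.full((16, 16), 2.0))
```

Self-guided filtering suppresses the noise while preserving the step, which is the edge-preserving property the enhancement relies on.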

  20. Venus in motion: An animated video catalog of Pioneer Venus Orbiter Cloud Photopolarimeter images

    Science.gov (United States)

    Limaye, Sanjay S.

    1992-01-01

Images of Venus acquired by the Pioneer Venus Orbiter Cloud Photopolarimeter (OCPP) during the 1982 opportunity have been utilized to create a short video summary of the data. The raw roll-by-roll images were first navigated using the spacecraft attitude and orbit information along with the CPP instrument pointing information. The limb darkening introduced by the variation of solar illumination geometry and viewing angle was then modelled and removed. The images were then projected to simulate a view from a fixed perspective, with the observer 10 Venus radii away, above a Venus latitude of 30 degrees south and a longitude of 60 degrees west. A total of 156 images from the 1982 opportunity have been animated at different dwell rates.

  1. Thermal imagers: from ancient analog video output to state-of-the-art video streaming

    Science.gov (United States)

    Haan, Hubertus; Feuchter, Timo; Münzberg, Mario; Fritze, Jörg; Schlemmer, Harry

    2013-06-01

The video output of thermal imagers (TIs) stayed constant over almost two decades. When the famous Common Modules were employed, the thermal image was at first presented to the observer only in the eyepiece. In the early 1990s TV cameras were attached, and the standard output was CCIR. In the civil camera market, output standards changed to digital formats a decade ago, with digital video streaming being nowadays state-of-the-art. The reasons why the output technique in the thermal world stayed unchanged for so long are the very conservative view of the military community, the long planning and turn-around times of programs, and the slower growth in pixel count of TIs in comparison to consumer cameras. With megapixel detectors, the CCIR output format is no longer sufficient. The paper discusses state-of-the-art compression and streaming solutions for TIs.

  2. Registration and recognition in images and videos

    CERN Document Server

    Battiato, Sebastiano; Farinella, Giovanni

    2014-01-01

Computer vision is the science and technology of making machines that see. It is concerned with the theory, design and implementation of algorithms that can automatically process visual data to recognize objects, track and recover their shape and spatial layout. The International Computer Vision Summer School - ICVSS was established in 2007 to provide both an objective and clear overview and an in-depth analysis of the state-of-the-art research in Computer Vision. The courses are delivered by world-renowned experts in the field, from both academia and industry, and cover both theoretical and practical aspects of real Computer Vision problems. The school is organized every year by the University of Cambridge (Computer Vision and Robotics Group) and the University of Catania (Image Processing Lab). Different topics are covered each year. This edited volume contains a selection of articles covering some of the talks and tutorials held during the last editions of the school. The chapters provide an in-depth overview o...

  3. Video-rate optical coherence tomography imaging with smart pixels

    Science.gov (United States)

    Beer, Stephan; Waldis, Severin; Seitz, Peter

    2003-10-01

    A novel concept for video-rate parallel acquisition of optical coherence tomography imaging is presented based on in-pixel demodulation. The main restrictions for parallel detection such as data rate, power consumption, circuit size and poor sensitivity are overcome with a smart pixel architecture incorporating an offset compensation circuit, a synchronous sampling stage, programmable time averaging and random pixel accessing, allowing envelope and phase detection in large 1D and 2D arrays.
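The in-pixel envelope and phase detection described above is, at its core, synchronous I/Q demodulation of the interference carrier. A small numerical sketch of that principle follows; it models only the signal processing, not the pixel circuit, and the carrier frequency, amplitude and phase values are illustrative.

```python
import numpy as np

def demodulate(signal, f, fs):
    """Synchronous I/Q detection: recover envelope and phase of a carrier at f.
    Exact when the record spans an integer number of carrier periods."""
    t = np.arange(len(signal)) / fs
    i = np.mean(signal * np.cos(2 * np.pi * f * t))   # in-phase component
    q = np.mean(signal * np.sin(2 * np.pi * f * t))   # quadrature component
    return 2.0 * np.hypot(i, q), np.arctan2(-q, i)

fs, f, n = 10000.0, 100.0, 1000          # 1000 samples = 10 full carrier periods
t = np.arange(n) / fs
# a carrier with amplitude 1.7 and phase 0.5 rad, as would reach one pixel
env, ph = demodulate(1.7 * np.cos(2 * np.pi * f * t + 0.5), f, fs)
```

The mixing-and-averaging step is what the offset-compensated, synchronously sampled pixel performs in analog hardware.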

  4. ΤND: a thyroid nodule detection system for analysis of ultrasound images and videos.

    Science.gov (United States)

    Keramidas, Eystratios G; Maroulis, Dimitris; Iakovidis, Dimitris K

    2012-06-01

    In this paper, we present a computer-aided-diagnosis (CAD) system prototype, named TND (Thyroid Nodule Detector), for the detection of nodular tissue in ultrasound (US) thyroid images and videos acquired during thyroid US examinations. The proposed system incorporates an original methodology that involves a novel algorithm for automatic definition of the boundaries of the thyroid gland, and a novel approach for the extraction of noise resilient image features effectively representing the textural and the echogenic properties of the thyroid tissue. Through extensive experimental evaluation on real thyroid US data, its accuracy in thyroid nodule detection has been estimated to exceed 95%. These results attest to the feasibility of the clinical application of TND, for the provision of a second more objective opinion to the radiologists by exploiting image evidences.

  5. Survey on attacks in image and video watermarking

    Science.gov (United States)

    Vassaux, Boris; Nguyen, Philippe; Baudry, Severine; Bas, Patrick; Chassery, Jean-Marc

    2002-11-01

Watermarking techniques have been improved considerably over the past years, aiming to be ever more resistant to attacks. Although the original goal of watermarking was to secure digital data (audio, image and video), numerous attacks can still cast doubt on the owner's authenticity. Three groups of attacks can be distinguished: those that remove the watermark, those that impair the data sufficiently to falsify the detection, and those that alter the detection process so that another person becomes the apparent owner of the data. Considering the growing development of ever more efficient attacks, this paper first presents a recent and exhaustive review of attacks in image and video watermarking. In the second part, the consequences of still-image watermarking attacks on video sequences are outlined, with particular attention given to the recently created benchmarks: Stirmark, the benchmark proposed by the University of Geneva Vision Group, the one proposed by the Department of Informatics of the University of Thessaloniki, and the current work of the European project Certimark. We present a comparison of these benchmarks and show how difficult it is to develop a self-sufficient benchmark, especially because of the complexity of intentional attacks.

  6. The research on binocular stereo video imaging and display system based on low-light CMOS

    Science.gov (United States)

    Xie, Ruobing; Li, Li; Jin, Weiqi; Guo, Hong

    2015-10-01

It is common for low-light night-vision helmets to equip the binocular viewer with image intensifiers. Such equipment not only provides night-vision capability, but also a sense of stereo vision, allowing better perception and understanding of the visual field. However, since the image intensifier is designed for direct observation, it is difficult to apply modern image processing technology to it. As a result, developing digital video technology for night vision is of great significance. In this paper, we design a low-light night-vision helmet with a digital imaging device. It consists of three parts: a set of two low-illumination CMOS cameras, a binocular OLED micro display and an image processing PCB. Stereopsis is achieved through the binocular OLED micro display. We choose the Speeded-Up Robust Features (SURF) algorithm for image registration. Based on the image matching information and the cameras' calibration parameters, disparity can be calculated in real time. We then derive in detail the constraints of binocular stereo display. The sense of stereo vision is obtained by dynamically adjusting the content of the binocular OLED micro display. There is sufficient space for function extensions in our system. The performance of this low-light night-vision helmet can be further enhanced in combination with HDR technology, image fusion technology, etc.
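Independently of the SURF-based registration the authors use, the disparity computation on a rectified stereo pair can be illustrated with a naive sum-of-absolute-differences (SAD) block-matching sketch. The window size, search range, and synthetic test pair below are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def block_match_disparity(left, right, block=9, max_disp=8):
    """Per-pixel disparity by SAD block matching along the scanline (rectified pair)."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            # try every candidate shift d and keep the best-matching one
            costs = [np.abs(patch - right[y - r:y + r + 1, x - d - r:x - d + r + 1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

rng = np.random.default_rng(1)
left = rng.random((40, 60))
right = np.roll(left, -4, axis=1)   # right view: same scene shifted 4 px (disparity = 4)
d = block_match_disparity(left, right)
```

With the calibration parameters, such a disparity map converts directly to depth, which is what drives the stereo display adjustment.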

  7. Classification of images acquired with colposcopy using artificial neural networks.

    Science.gov (United States)

    Simões, Priscyla W; Izumi, Narjara B; Casagrande, Ramon S; Venson, Ramon; Veronezi, Carlos D; Moretti, Gustavo P; da Rocha, Edroaldo L; Cechinel, Cristian; Ceretta, Luciane B; Comunello, Eros; Martins, Paulo J; Casagrande, Rogério A; Snoeyer, Maria L; Manenti, Sandra A

    2014-01-01

To explore the advantages of using artificial neural networks (ANNs) to recognize patterns in colposcopy and classify colposcopy images. Transversal, descriptive, and analytical study of a quantitative approach with an emphasis on diagnosis. The training, test and validation sets were composed of images collected from patients who underwent colposcopy. These images were provided by a gynecology clinic located in the city of Criciúma (Brazil). The image database (n = 170) was divided as follows: 48 images were used for training, 58 images for testing, and 64 images for validation. A hybrid neural network based on Kohonen self-organizing maps and multilayer perceptron (MLP) networks was used. After 126 cycles, the validation was performed. The best results reached an accuracy of 72.15%, a sensitivity of 69.78%, and a specificity of 68%. Although the preliminary results still exhibit only average efficiency, the present approach is an innovative and promising technique that should be explored further in the context of the present study.
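The reported accuracy, sensitivity and specificity all follow directly from a 2x2 confusion matrix; for reference, a minimal computation (the counts below are made-up illustration values, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity and specificity from a 2x2 confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    return accuracy, sensitivity, specificity

# hypothetical counts: 8 true positives, 1 false positive, 9 true negatives, 2 false negatives
acc, sens, spec = diagnostic_metrics(tp=8, fp=1, tn=9, fn=2)
```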

  8. Methods for identification of images acquired with digital cameras

    Science.gov (United States)

    Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki

    2001-02-01

The court asked us whether it is possible to determine if an image has been made with a specific digital camera. This question has to be answered in child pornography cases, where evidence is needed that a certain picture has been made with a specific camera. We have looked into different methods of examining cameras to determine whether a specific image was made with a given camera: defects in CCDs, the file formats used, noise introduced by the pixel arrays, and watermarking in images applied by the camera manufacturer.
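One of the listed cues, noise introduced by the pixel arrays, is commonly exploited by correlating noise residuals of images: two images from the same sensor share a fixed noise pattern, while images from different sensors do not. The toy simulation below illustrates only that idea; the simulated "fingerprints", the crude mean-filter denoiser, and all sizes are stand-in assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean3(img):
    """3x3 mean filter via reflect padding (a crude stand-in for a proper denoiser)."""
    p = np.pad(img, 1, mode='reflect')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def noise_residual(img):
    return img - mean3(img)

def ncc(a, b):
    """Normalized cross-correlation between two residuals."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# simulated fixed-pattern noise acting as each camera's "fingerprint"
h = w = 64
fp_a = 0.05 * rng.standard_normal((h, w))
fp_b = 0.05 * rng.standard_normal((h, w))
y, x = np.mgrid[0:h, 0:w]
img_a1 = np.sin(x / 9.0) + fp_a          # two smooth scenes shot with camera A
img_a2 = np.cos(y / 7.0) + fp_a
img_b = (x + y) / 120.0 + fp_b           # one scene shot with camera B

same = ncc(noise_residual(img_a1), noise_residual(img_a2))
cross = ncc(noise_residual(img_a1), noise_residual(img_b))
```

The same-camera residual correlation comes out far higher than the cross-camera one, which is the statistical basis of this identification cue.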

  9. CONTEXT-BASED URBAN TERRAIN RECONSTRUCTION FROM IMAGES AND VIDEOS

    Directory of Open Access Journals (Sweden)

    D. Bulatov

    2012-07-01

Full Text Available Detection of buildings and vegetation, and even more so reconstruction of urban terrain from sequences of aerial images and videos, is known to be a challenging task. It has been established that methods that take a high-quality Digital Surface Model (DSM) as input are more straightforward and produce more robust and reliable results than image-based methods that require matching line segments or even whole regions. This motivated us to develop a new dense matching technique for DSM generation that is capable of simultaneously integrating multiple images in the reconstruction process. The DSMs generated by this new multi-image matching technique can be used for urban object extraction. In the first contribution of this paper, two examples of external sources of information added to the reconstruction pipeline are shown: GIS layers are used for recognizing streets and suppressing false alarms in the depth maps caused by moving vehicles, while the near-infrared channel is applied to separate vegetation from buildings. Three examples of data sets are presented, including both UAV-borne video sequences with a relatively high number of frames and high-resolution (10 cm ground sample distance) data sets consisting of a few spatially and temporally diverse images from large-format aerial frame cameras. An extensive quantitative evaluation of the Vaihingen block from the ISPRS benchmark on urban object detection makes clear that our procedure allows a straightforward, efficient, and reliable instantiation of 3D city models.

  10. The Relationship Between Video Game Play and the Acquired Capability for Suicide: An Examination of Differences by Category of Video Game and Gender.

    Science.gov (United States)

    Mitchell, Sean M; Jahn, Danielle R; Guidry, Evan T; Cukrowicz, Kelly C

    2015-12-01

    This study examined the relationship between video game (VG) play and the acquired capability for suicide (ACS), as well as the moderating effects of VG category and gender on this relationship. Participants were 228 college students who played VGs on a weekly basis and who completed self-report assessments of VG play, painful and provocative events, and the ACS. Results indicated that there was a significant positive association between hours of VG play and the ACS. The action category of VGs was a significant moderator of the relationship between hours of VG play and the ACS after adjusting for previous painful and provocative events. Gender did not significantly moderate the relationship between hours of VG play and the ACS, and there was no significant three-way interaction between hours of VG play, playing action category VGs, and gender. This suggests that individuals who play many hours of action VGs may be more capable of lethal self-harm if they experience suicide ideation, although this association does not exist for individuals who play other categories of VGs.

  11. Series of aerial images over Bear River Migratory Bird Refuge, acquired in 1937

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This dataset includes 40 georeferenced images, acquired on September 25th, October 12th-13th, November 10th and December 1st, 1937 over portions of Bear River...

  12. Series of aerial images over Marais des Cygnes National Wildlife Refuge, acquired in 1957

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This dataset includes 8 georeferenced images, acquired on May 5th, 6th and 26th, 1957 over portions of Marais des Cygnes National Refuge in eastern Kansas. This data...

  13. Series of aerial images over Marais des Cygnes National Wildlife Refuge, acquired in 1950

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This dataset includes 10 georeferenced images, acquired on July 13, 1950 over portions of Marais des Cygnes National Refuge in eastern Kansas. This data set is a...

  14. Using underwater video imaging as an assessment tool for coastal condition

    Science.gov (United States)

    As part of an effort to monitor ecological conditions in nearshore habitats, from 2009-2012 underwater videos were captured at over 400 locations throughout the Laurentian Great Lakes. This study focuses on developing a video rating system and assessing video images. This ratin...

  15. Quantification of video-taped images in microcirculation research using inexpensive imaging software (Adobe Photoshop).

    Science.gov (United States)

    Brunner, J; Krummenauer, F; Lehr, H A

    2000-04-01

    Study end-points in microcirculation research are usually video-taped images rather than numeric computer print-outs. Analysis of these video-taped images for the quantification of microcirculatory parameters usually requires computer-based image analysis systems. Most software programs for image analysis are custom-made, expensive, and limited in their applicability to selected parameters and study end-points. We demonstrate herein that an inexpensive, commercially available computer software (Adobe Photoshop), run on a Macintosh G3 computer with inbuilt graphic capture board provides versatile, easy to use tools for the quantification of digitized video images. Using images obtained by intravital fluorescence microscopy from the pre- and postischemic muscle microcirculation in the skinfold chamber model in hamsters, Photoshop allows simple and rapid quantification (i) of microvessel diameters, (ii) of the functional capillary density and (iii) of postischemic leakage of FITC-labeled high molecular weight dextran from postcapillary venules. We present evidence of the technical accuracy of the software tools and of a high degree of interobserver reliability. Inexpensive commercially available imaging programs (i.e., Adobe Photoshop) provide versatile tools for image analysis with a wide range of potential applications in microcirculation research.

  16. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    Science.gov (United States)

    Liang, Yu-Li

Multimedia data is increasingly important in scientific discovery and in people's daily lives. The content of massive multimedia collections is often diverse and noisy, and motion between frames is sometimes crucial in analyzing the data. Among all formats, still images and videos are the most commonly used. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are sets of continuous images with a low frame rate, stand out because they are smaller than videos and still maintain motion information. This thesis investigates features in different types of noisy sequential images, and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes above the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes above ice suffers from diverse image qualities and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, even in cloudy images. The proposed system fully automates the procedure and tracks lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes, which led to new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of videos captured by home web cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, which is the first solution that achieves satisfactory

  17. A video precipitation sensor for imaging and velocimetry of hydrometeors

    Science.gov (United States)

    Liu, X. C.; Gao, T. C.; Liu, L.

    2014-07-01

A new method to determine the shape and fall velocity of hydrometeors using a single CCD camera is proposed in this paper, and a prototype of a video precipitation sensor (VPS) is developed. The instrument consists of an optical unit (collimated light source with multi-mode fibre cluster), an imaging unit (planar array CCD sensor), an acquisition and control unit, and a data processing unit. The cylindrical space between the optical unit and imaging unit is the sampling volume (300 mm × 40 mm × 30 mm). As precipitation particles fall through the sampling volume, the CCD camera exposes twice in a single frame, so that double-exposure images of the particles are obtained. The size and shape can be obtained from the particle images; the fall velocity can be calculated from the particle displacement in the double-exposure image and the interval time; and the drop size distribution, velocity distribution, precipitation intensity, and accumulated precipitation amount can be calculated by time integration. The innovation of the VPS is that the shape, size, and velocity of precipitation particles can be measured with only one planar array CCD sensor, which addresses the disadvantages of linear-scan CCD disdrometers and impact disdrometers. Field measurements of rainfall demonstrate the VPS's capability to measure the micro-physical properties of single particles and the integral parameters of precipitation.
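Once the image scale is known, the fall-velocity step described above is a simple displacement-over-time calculation; for reference (the scale, displacement, and exposure interval below are illustrative values, not the VPS's actual parameters):

```python
def fall_velocity(disp_px, scale_mm_per_px, dt_s):
    """Fall speed (m/s) from the pixel displacement between the two exposures."""
    return disp_px * scale_mm_per_px / 1000.0 / dt_s

# illustrative numbers: a drop moving 24 px between exposures 0.5 ms apart,
# with an image scale of 0.1 mm per pixel -> 2.4 mm in 0.0005 s = 4.8 m/s
v = fall_velocity(24, 0.1, 0.0005)
```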

  18. A Master-Slave Surveillance System to Acquire Panoramic and Multiscale Videos

    Directory of Open Access Journals (Sweden)

    Yu Liu

    2014-01-01

Full Text Available This paper describes a master-slave visual surveillance system that uses stationary-dynamic camera assemblies to achieve a wide field of view and selective focus of interest. In this system, the fish-eye panoramic camera is capable of monitoring a large area, while the PTZ dome camera has high mobility and zoom ability. In order to achieve precise interaction, a preprocessing spatial calibration between the two cameras is required. This paper introduces a novel calibration approach that automatically calculates a transformation matrix model between the two coordinate systems by matching feature points. In addition, a distortion correction method based on the Midpoint Circle Algorithm is proposed to handle the obvious horizontal distortion in the captured panoramic image. Experimental results using realistic scenes have demonstrated the efficiency and applicability of the system for real-time surveillance.
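A transformation matrix between two camera coordinate systems can, under a planar-mapping assumption, be estimated from matched feature points with the standard direct linear transform (DLT) for a homography. The sketch below shows that generic estimator; the homography model itself is an assumption here, as the paper may use a different parameterization for the fish-eye-to-PTZ mapping.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: 3x3 homography from >= 4 point correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)              # null vector = flattened homography
    return H / H[2, 2]

def apply_h(H, pt):
    """Map a 2-D point through H in homogeneous coordinates."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# recover a known ground-truth homography from six correspondences
H_true = np.array([[1.2, 0.1, 5.0], [0.05, 0.9, -3.0], [0.001, 0.002, 1.0]])
src = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0), (5.0, 3.0), (7.0, 8.0)]
dst = [tuple(apply_h(H_true, s)) for s in src]
H = fit_homography(src, dst)
pt, ref = apply_h(H, (4.0, 6.0)), apply_h(H_true, (4.0, 6.0))
```

In a real calibration, `src`/`dst` would be the matched feature points between the panoramic and PTZ views, typically inside a RANSAC loop to reject bad matches.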

  19. Video dosimetry: evaluation of X-radiation dose by video fluoroscopic image; Videodosimetria: avaliacao da dose da radiacao X atraves da imagem videofluroscopica

    Energy Technology Data Exchange (ETDEWEB)

    Nova, Joao Luiz Leocadio da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Centro de Ciencias da Saude. Nucleo de Tecnologia Educacional para a Saude; Lopes, Ricardo Tadeu [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Lab. de Instrumentacao Nuclear

    1996-12-31

A new methodology to evaluate the entrance surface dose on patients under radiodiagnosis is presented. A phantom is used in video fluoroscopic procedures in an online video signal system. The images are obtained from a Siemens Polymat 50 and are digitized. The results show that the entrance surface dose can be obtained in real time from video imaging. 3 refs., 2 figs., 2 tabs.

  20. Hounsfield unit recovery in clinical cone beam CT images of the thorax acquired for image guided radiation therapy

    DEFF Research Database (Denmark)

    Thing, Rune Slot; Bernchou, Uffe; Mainegra-Hing, Ernesto

    2016-01-01

    A comprehensive artefact correction method for clinical cone beam CT (CBCT) images acquired for image guided radiation therapy (IGRT) on a commercial system is presented. The method is demonstrated to reduce artefacts and recover CT-like Hounsfield units (HU) in reconstructed CBCT images of five...

  1. RF Device for Acquiring Images of the Human Body

    Science.gov (United States)

    Gaier, Todd C.; McGrath, William R.

    2010-01-01

A safe, non-invasive method for forming images through clothing of large groups of people, in order to search for concealed weapons whether metallic or not, has been developed. A millimeter-wavelength scanner designed in a unique, ring-shaped configuration can obtain a full 360° image of the body with a resolution of less than a millimeter in only a few seconds. Millimeter waves readily penetrate normal clothing, but are highly reflected by the human body and concealed objects. Millimeter-wave signals are nonionizing and are harmless to human tissues when used at low power levels. The imager (see figure) consists of a thin base that supports a small-diameter vertical post about 7 ft (≈2.13 m) tall. Attached to the post is a square-shaped ring 2 in. (≈5 cm) wide and 3 ft (≈91 cm) on a side. The ring is oriented horizontally, and is supported halfway along one side by a connection to a linear bearing on the vertical post. A planar RF circuit board is mounted to the inside of each side of the ring. Each circuit board contains an array of 30 receivers, one transmitter, and digitization electronics. Each array element has a printed-circuit patch antenna coupled to a pair of mixers by a 90° coupler. The mixers receive a reference local oscillator signal at a subharmonic of the transmitter frequency. A single local oscillator line feeds all 30 receivers on the board. The resulting IF signals, in the MHz range, are amplified and carried to the edge of the board, where they are demodulated and digitized. The transmitted signal is derived from the local oscillator at a frequency offset determined by a crystal oscillator. One antenna centrally located on each side of the square ring provides the source illumination power. The total transmitted power is less than 100 mW, resulting in an exposure level that is completely safe for humans. The output signals from all four circuit boards are fed via serial connection to a data processing computer. The computer processes the approximately 1-MB

  2. IT Infrastructure to support the secondary use of routinely acquired clinical imaging data for research.

    Science.gov (United States)

    Leung, Kai Yan Eugene; van der Lijn, Fedde; Vrooman, Henri A; Sturkenboom, Miriam C J M; Niessen, Wiro J

    2015-01-01

    We propose an infrastructure for the automated anonymization, extraction and processing of image data stored in clinical data repositories to make routinely acquired imaging data available for research purposes. The automated system, which was tested in the context of analyzing routinely acquired MR brain imaging data, consists of four modules: subject selection using PACS query, anonymization of privacy sensitive information and removal of facial features, quality assurance on DICOM header and image information, and quantitative imaging biomarker extraction. In total, 1,616 examinations were selected based on the following MRI scanning protocols: dementia protocol (246), multiple sclerosis protocol (446) and open question protocol (924). We evaluated the effectiveness of the infrastructure in accessing and successfully extracting biomarkers from routinely acquired clinical imaging data. To examine the validity, we compared brain volumes between patient groups with positive and negative diagnosis, according to the patient reports. Overall, success rates of image data retrieval and automatic processing were 82.5 %, 82.3 % and 66.2 % for the three protocol groups respectively, indicating that a large percentage of routinely acquired clinical imaging data can be used for brain volumetry research, despite image heterogeneity. In line with the literature, brain volumes were found to be significantly smaller (p-value <0.001) in patients with a positive diagnosis of dementia (915 ml) compared to patients with a negative diagnosis (939 ml). This study demonstrates that quantitative image biomarkers such as intracranial and brain volume can be extracted from routinely acquired clinical imaging data. This enables secondary use of clinical images for research into quantitative biomarkers at a hitherto unprecedented scale.

  3. Automatic flame tracking technique for atrium fire from video images

    Science.gov (United States)

    Li, Jin; Lu, Puyi; Fong, Naikong; Chow, Wanki; Wong, Lingtim; Xu, Dianguo

    2005-02-01

Smoke control is one of the important aspects of atrium fire safety. For an efficient smoke control strategy, it is very important to identify the smoke and fire source in a very short period of time. However, traditional methods such as point-type detectors are not effective for smoke and fire detection in large spaces such as atria. Therefore, video smoke and fire detection systems have been proposed. For the development of such a system, automatic extraction and tracking of flames are two important problems to be solved. Based on entropy theory, region growing and the Otsu method, a new automatic integrated algorithm for tracking flames in video images is proposed in this paper. It can successfully identify flames in different environments, against different backgrounds and in different forms. The experimental results show that this integrated algorithm has strong robustness and wide adaptability. In addition, because of its low computational demand, the algorithm could also be used as part of a robust, real-time smoke and fire detection system.
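The Otsu method named above selects the gray-level threshold that maximizes the between-class variance of the histogram, separating bright flame pixels from the background before region growing. A standard stand-alone implementation for reference (the bimodal test image is synthetic):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: the threshold maximizing between-class variance of the histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    w0 = np.cumsum(p)                         # P(class 0) for thresholds 0..255
    mu = np.cumsum(p * np.arange(256))        # cumulative first moment
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros(256)
    # between-class variance: (mu_T*w0 - mu)^2 / (w0*w1)
    sigma_b[valid] = (mu[-1] * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return int(np.argmax(sigma_b))

rng = np.random.default_rng(4)
img = np.concatenate([rng.integers(40, 60, 500),      # dark background mode
                      rng.integers(190, 210, 500)])   # bright flame-like mode
t = otsu_threshold(img.astype(np.uint8))
```

The returned threshold falls between the two modes, so pixels above `t` can be treated as flame candidates for the subsequent region-growing step.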

  4. Tracking cells in Life Cell Imaging videos using topological alignments

    Directory of Open Access Journals (Sweden)

    Ersoy Ilker

    2009-07-01

    Full Text Available Abstract Background With the increasing availability of live cell imaging technology, tracking cells and other moving objects in live cell videos has become a major challenge for bioimage informatics. An inherent problem for most cell tracking algorithms is over- or under-segmentation of cells – many algorithms tend to recognize one cell as several cells or vice versa. Results We propose to approach this problem through so-called topological alignments, which we apply to address the problem of linking segmentations of two consecutive frames in the video sequence. Starting from the output of a conventional segmentation procedure, we align pairs of consecutive frames through assigning sets of segments in one frame to sets of segments in the next frame. We achieve this through finding maximum weighted solutions to a generalized "bipartite matching" between two hierarchies of segments, where we derive weights from relative overlap scores of convex hulls of sets of segments. For solving the matching task, we rely on an integer linear program. Conclusion Practical experiments demonstrate that the matching task can be solved efficiently in practice, and that our method is both effective and useful for tracking cells in data sets derived from a so-called Large Scale Digital Cell Analysis System (LSDCAS. Availability The source code of the implementation is available for download from http://www.picb.ac.cn/patterns/Software/topaln.
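For very small numbers of segments, the maximum-weight bipartite matching at the heart of the alignment can even be solved by brute force rather than an integer linear program. A toy sketch of that formulation follows; the overlap weights below are made up, and real instances (sets of segments, hierarchies) need the ILP the authors describe.

```python
from itertools import permutations

def best_matching(weights):
    """Maximum-weight one-to-one assignment, brute force (fine only for small n)."""
    n = len(weights)
    best_score, best_perm = float('-inf'), None
    for perm in permutations(range(n)):
        s = sum(weights[i][perm[i]] for i in range(n))
        if s > best_score:
            best_score, best_perm = s, perm
    return best_score, best_perm

# rows: segments in frame t; columns: segments in frame t+1; entries: overlap scores
w = [[0.9, 0.1, 0.0],
     [0.2, 0.8, 0.1],
     [0.0, 0.3, 0.7]]
best_score, best_perm = best_matching(w)
```

Here the matching correctly links each segment to its high-overlap successor; the ILP generalizes this to sets of segments so that over- and under-segmentation can be repaired.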

  5. The Video Mesh: A Data Structure for Image-based Three-dimensional Video Editing

    OpenAIRE

    Chen, Jiawen; Paris, Sylvain; Wang, Jue; Matusik, Wojciech; Cohen, Michael; Durand, Fredo

    2011-01-01

    This paper introduces the video mesh, a data structure for representing video as 2.5D “paper cutouts.” The video mesh allows interactive editing of moving objects and modeling of depth, which enables 3D effects and post-exposure camera control. The video mesh sparsely encodes optical flow as well as depth, and handles occlusion using local layering and alpha mattes. Motion is described by a sparse set of points tracked over time. Each point also stores a depth value. The video mesh is a trian...

  6. Imaging of community-acquired pneumonia: Roles of imaging examinations, imaging diagnosis of specific pathogens and discrimination from noninfectious diseases

    Science.gov (United States)

    Nambu, Atsushi; Ozawa, Katsura; Kobayashi, Noriko; Tago, Masao

    2014-01-01

    This article reviews the roles of imaging examinations in the management of community-acquired pneumonia (CAP), the imaging diagnosis of specific CAPs, and the discrimination between CAP and noninfectious diseases. Chest radiography is usually sufficient to confirm the diagnosis of CAP, whereas computed tomography is required to suggest specific pathogens and to discriminate CAP from noninfectious diseases. Mycoplasma pneumoniae pneumonia, tuberculosis, Pneumocystis jirovecii pneumonia and some cases of viral pneumonia sometimes show specific imaging findings. Peribronchial nodules, especially a tree-in-bud appearance, are fairly specific for infection. Evidence of organization, such as concavity of the opacities, traction bronchiectasis, visualization of air bronchograms over the entire length of the bronchi, or mild parenchymal distortion, is suggestive of organizing pneumonia. We will introduce tips for making effective use of imaging examinations in the management of CAP. PMID:25349662

  7. Cryptanalysis of a spatiotemporal chaotic image/video cryptosystem

    Energy Technology Data Exchange (ETDEWEB)

    Rhouma, Rhouma [6' com laboratory, Ecole Nationale d'Ingenieurs de Tunis (ENIT) (Tunisia)], E-mail: rhoouma@yahoo.fr; Belghith, Safya [6' com laboratory, Ecole Nationale d'Ingenieurs de Tunis (ENIT) (Tunisia)

    2008-09-01

    This Letter proposes two different attacks on a recently proposed chaotic cryptosystem for images and videos in [S. Lian, Chaos Solitons Fractals (2007), (doi: 10.1016/j.chaos.2007.10.054)]. The cryptosystem under study displays a weakness in the generation of its keystream. The encryption is performed by generating a keystream that is mixed with blocks derived from the plaintext and the ciphertext in a CBC-mode design. The keystream thus obtained remains unchanged for every encryption procedure, so guessing the keystream amounts to guessing the key. Two attacks exploiting this drawback in the keystream generation are then able to break the whole cryptosystem. We also propose changing the design of the cryptosystem to a PCBC mode to make it robust against the described attacks.
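The weakness described here, a keystream that is identical for every encryption, is the classic keystream-reuse flaw, and the known-plaintext attack against it can be sketched generically. The toy XOR cipher and keystream bytes below are invented for illustration and stand in for the actual scheme:

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# The flaw: the keystream does not depend on the message, so it is reused.
keystream = bytes([0x5A, 0x13, 0xC7, 0x21, 0x9E, 0x44, 0x07, 0xB2] * 4)

def encrypt(plaintext):
    return xor_bytes(plaintext, keystream)

# Known-plaintext attack: one plaintext/ciphertext pair leaks the keystream...
known_plain = b"attack at dawn!!"
recovered_ks = xor_bytes(encrypt(known_plain), known_plain)

# ...which then decrypts any other message of (at most) the same length.
target = encrypt(b"secret image row")
print(xor_bytes(target, recovered_ks))  # b'secret image row'
```

PCBC-style feedback, as proposed in the Letter, makes the keystream depend on previous plaintext and ciphertext blocks, so a single known pair no longer breaks every message.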

  8. IT Infrastructure to Support the Secondary Use of Routinely Acquired Clinical Imaging Data for Research

    NARCIS (Netherlands)

    K.Y.E. Leung (Esther); F. van der Lijn (Fedde); H.A. Vrooman (Henri); M.C.J.M. Sturkenboom (Miriam); W.J. Niessen (Wiro)

    2014-01-01

    textabstractWe propose an infrastructure for the automated anonymization, extraction and processing of image data stored in clinical data repositories to make routinely acquired imaging data available for research purposes. The automated system, which was tested in the context of analyzing routinely

  9. High-speed three-frame image recording system using colored flash units and low-cost video equipment

    Science.gov (United States)

    Racca, Roberto G.; Scotten, Larry N.

    1995-05-01

    This article describes a method that allows the digital recording of sequences of three black-and-white images at rates of several thousand frames per second, using a system consisting of an ordinary CCD camcorder, three flash units with color filters, a PC-based frame grabber board and some additional electronics. The maximum framing rate is determined by the duration of the flashtube emission; for common photographic flash units lasting about 20 µs it can exceed 10,000 frames per second in actual use. The subject under study is strobe-illuminated using a red, a green and a blue flash unit controlled by a special sequencer, and the three images are captured by a color CCD camera on a single video field. Color is used as the distinguishing parameter that allows the overlaid exposures to be resolved. The video output for that particular field will contain three individual scenes, one for each primary color component, which potentially can be resolved with no crosstalk between them. The output is electronically decoded into the primary color channels, frame-grabbed and stored into digital memory, yielding three time-resolved images of the subject. A synchronization pulse provided by the flash sequencer triggers the frame grabbing so that the correct video field is acquired. A scheme involving the use of videotape as intermediate storage allows the frame grabbing to be performed using a monochrome video digitizer. Ideally each flash-illuminated scene would be confined to one color channel, but in practice various factors, both optical and electronic, affect color separation. Correction equations have been derived that counteract these effects in the digitized images and minimize 'ghosting' between frames. Once the appropriate coefficients have been established through a calibration procedure that needs to be performed only once for a given configuration of the equipment, the correction process is carried out transparently in software every time a
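The correction equations themselves are not given in the abstract, but the general idea, inverting a calibrated mixing matrix to separate the three flash exposures, can be sketched as follows. The 3×3 crosstalk matrix here is hypothetical and stands in for measured calibration data:

```python
import numpy as np

# Hypothetical crosstalk matrix (calibrated once per equipment configuration):
# measured[channel] = sum over sources of M[channel, source] * true[source]
M = np.array([[1.00, 0.08, 0.02],   # red channel picks up a little green/blue
              [0.06, 1.00, 0.05],
              [0.01, 0.07, 1.00]])
M_inv = np.linalg.inv(M)            # the correction coefficients

def decrosstalk(measured_rgb):
    """Recover the three flash-separated exposures from one RGB measurement."""
    return M_inv @ np.asarray(measured_rgb, dtype=float)

true = np.array([200.0, 50.0, 10.0])      # the three time-resolved exposures
measured = M @ true                        # what the color CCD records
print(np.round(decrosstalk(measured), 3)) # recovers [200. 50. 10.]
```

Applied per pixel, this removes the "ghosting" of one flash-lit frame into the other two color channels.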

  10. A reduced-reference perceptual image and video quality metric based on edge preservation

    Science.gov (United States)

    Martini, Maria G.; Villarini, Barbara; Fiorucci, Federico

    2012-12-01

    In image and video compression and transmission, it is important to rely on an objective image/video quality metric which accurately represents the subjective quality of processed images and video sequences. In some scenarios, it is also important to evaluate the quality of the received video sequence with minimal reference to the transmitted one. For instance, for quality improvement of video transmission through closed-loop optimisation, the video quality measure can be evaluated at the receiver and provided as feedback information to the system controller. The original image/video sequence (prior to compression and transmission) is not usually available at the receiver side, so the receiver must rely on an objective video quality metric that requires no reference, or only minimal reference, to the original video sequence. The observation that the human eye is very sensitive to edge and contour information in an image underpins our proposed reduced-reference (RR) quality metric, which compares edge information between the distorted and the original image. Results highlight that the metric correlates well with subjective observations, also in comparison with commonly used full-reference metrics and with a state-of-the-art RR metric.
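As a rough illustration of an edge-comparison quality score: the actual RR metric transmits only a compact edge descriptor as side information, so the full edge-map comparison below is a simplified, full-data stand-in:

```python
import numpy as np

def sobel_mag(img):
    """Gradient magnitude via 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

def edge_psnr(reference, distorted):
    """Quality score comparing the edge maps of reference and distorted images."""
    e_ref, e_dist = sobel_mag(reference), sobel_mag(distorted)
    mse = np.mean((e_ref - e_dist) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(e_ref.max() ** 2 / mse)
```

Identical images score infinity; the score drops as edge structure is degraded by compression or channel errors.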

  11. Digital Path Approach Despeckle Filter for Ultrasound Imaging and Video

    Directory of Open Access Journals (Sweden)

    Marek Szczepański

    2017-01-01

    We propose a novel filtering technique capable of reducing the multiplicative noise in ultrasound images that is an extension of denoising algorithms based on the concept of digital paths. In this approach, the filter weights are calculated taking into account the similarity between pixel intensities that belong to the local neighborhood of the processed pixel, which is called a path. The output of the filter is estimated as the weighted average of pixels connected by the paths. The way paths are created is pivotal and determines the effectiveness and computational complexity of the proposed filtering design. Such a procedure can be effective for different types of noise, but fails in the presence of multiplicative noise. To increase the filtering efficiency for this type of disturbance, we introduce some improvements of the basic concept and new classes of similarity functions, and finally extend our techniques to the spatiotemporal domain. The experimental results prove that the proposed algorithm provides results comparable with state-of-the-art techniques for multiplicative noise removal in ultrasound images, and that it can be applied for real-time image enhancement of video streams.

  12. Real-time UAV trajectory generation using feature points matching between video image sequences

    Science.gov (United States)

    Byun, Younggi; Song, Jeongheon; Han, Dongyeob

    2017-09-01

    Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance and surveillance missions. In this paper, we present a systematic approach to the generation of UAV trajectories using a video image matching system based on SURF (Speeded-Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find matching points is one of the most important steps for the accurate generation of a UAV trajectory (a sequence of poses in 3D space). We used the SURF algorithm to find the matching points between video image sequences, and removed mismatches by using Preemptive RANSAC, which divides all matching points into inliers and outliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results from simulated video image sequences showed that our approach has good potential to be applied to the automatic geo-localization of UAV systems.
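The inlier/outlier split that RANSAC performs can be illustrated on a toy problem. The sketch below estimates a simple 2D translation between matched points rather than the full relative pose from epipolar geometry that the paper computes; the point coordinates are made up for the demonstration:

```python
import random
import numpy as np

def ransac_translation(src, dst, iters=200, tol=1.0, seed=0):
    """Estimate a 2D translation from point matches, rejecting outliers."""
    rng = random.Random(seed)
    best_inliers = np.array([], dtype=int)
    for _ in range(iters):
        i = rng.randrange(len(src))
        t = dst[i] - src[i]                        # minimal sample: one match
        residuals = np.linalg.norm(src + t - dst, axis=1)
        inliers = np.nonzero(residuals < tol)[0]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)  # refine on inliers
    return t, best_inliers

src = np.array([[0., 0.], [1., 2.], [3., 1.], [4., 4.],
                [2., 5.], [6., 0.], [5., 3.], [7., 2.]])
dst = src + np.array([5.0, 3.0])   # true motion between the two frames
dst[0] += 40.0                     # two gross mismatches (outliers)
dst[3] -= 25.0
t, inliers = ransac_translation(src, dst)
print(t, len(inliers))             # [5. 3.] 6
```

The same consensus logic, applied to an essential-matrix model instead of a translation, yields the relative rotation and translation between frames.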

  13. Composing with Images: A Study of High School Video Producers.

    Science.gov (United States)

    Reilly, Brian

    At Bell High School (Los Angeles, California), students have been using video cameras, computers and editing machines to create videos in a variety of forms and on a variety of topics; in this setting, video is the textual medium of expression. A study was conducted using participant-observation and interviewing over the course of one school year…

  14. Getting the bigger picture: Using precision Remotely Operated Vehicle (ROV) videography to acquire high-definition mosaic images of newly discovered hydrothermal vents in the Southern Ocean

    Science.gov (United States)

    Marsh, Leigh; Copley, Jonathan T.; Huvenne, Veerle A. I.; Tyler, Paul A.; Isis ROV Facility

    2013-08-01

    Direct visual observations from submersible vehicles at hydrothermal vents typically only reveal a fraction of the vent environment at any one time. We describe the use of precision Remotely Operated Vehicle (ROV) videography to produce extensive mosaic images of hydrothermal vent chimneys and surrounding seafloor areas (c. 250 m2), with sufficient resolution to determine distributions of macro- and megafauna. Doppler velocity log navigation (DVLNAV) was used to follow overlapping vertical survey lines in a fixed plane facing a vent chimney, while acquiring high-definition video imagery using a forward-looking camera. The DVLNAV also enabled the vehicle to follow overlapping horizontal survey lines while acquiring seafloor imagery from a downward-looking video camera and mapping variations in seawater temperature. Digital stills images extracted from video were used to compile high-resolution composite views of the surveyed areas. Applying these image acquisition techniques at vent fields on the East Scotia Ridge, Southern Ocean, revealed consistent patterns of faunal zonation around vent sources, variations in proportions of faunal assemblage types on different faces of a vent chimney, and differences in proportions of faunal assemblages between two different vent fields. The technique can therefore be used to determine the composition and spatial distribution of fauna across complex areas of topography, such as vent fields, where mosaic images of vertical structures cannot currently be acquired using other platforms such as autonomous underwater vehicles (AUVs). These image acquisition techniques, demonstrated here in the first ROV dives at newly discovered vent fields, may offer an appropriate technology for rapid baseline studies required by the potential mining of seafloor massive sulfides (SMS).

  15. Energy efficient image/video data transmission on commercial multi-core processors.

    Science.gov (United States)

    Lee, Sungju; Kim, Heegon; Chung, Yongwha; Park, Daihee

    2012-11-01

    In transmitting image/video data over Video Sensor Networks (VSNs), energy consumption must be minimized while maintaining high image/video quality. Although image/video compression is well known for its efficiency and usefulness in VSNs, the excessive costs associated with encoding computation and complexity still hinder its adoption for practical use. However, it is anticipated that high-performance handheld multi-core devices will be used as VSN processing nodes in the near future. In this paper, we propose a way to improve the energy efficiency of image and video compression with multi-core processors while maintaining the image/video quality. We improve the compression efficiency at the algorithmic level or derive the optimal parameters for the combination of a machine and compression based on the tradeoff between the energy consumption and the image/video quality. Based on experimental results, we confirm that the proposed approach can improve the energy efficiency of the straightforward approach by a factor of 2~5 without compromising image/video quality.

  16. Technology and Technique Standards for Camera-Acquired Digital Dermatologic Images: A Systematic Review.

    Science.gov (United States)

    Quigley, Elizabeth A; Tokay, Barbara A; Jewell, Sarah T; Marchetti, Michael A; Halpern, Allan C

    2015-08-01

    Photographs are invaluable dermatologic diagnostic, management, research, teaching, and documentation tools. Digital Imaging and Communications in Medicine (DICOM) standards exist for many types of digital medical images, but there are no DICOM standards for camera-acquired dermatologic images to date. To identify and describe existing or proposed technology and technique standards for camera-acquired dermatologic images in the scientific literature. Systematic searches of the PubMed, EMBASE, and Cochrane databases were performed in January 2013 using photography and digital imaging, standardization, and medical specialty and medical illustration search terms and augmented by a gray literature search of 14 websites using Google. Two reviewers independently screened titles of 7371 unique publications, followed by 3 sequential full-text reviews, leading to the selection of 49 publications with the most recent (1985-2013) or detailed description of technology or technique standards related to the acquisition or use of images of skin disease (or related conditions). No universally accepted existing technology or technique standards for camera-based digital images in dermatology were identified. Recommendations are summarized for technology imaging standards, including spatial resolution, color resolution, reproduction (magnification) ratios, postacquisition image processing, color calibration, compression, output, archiving and storage, and security during storage and transmission. Recommendations are also summarized for technique imaging standards, including environmental conditions (lighting, background, and camera position), patient pose and standard view sets, and patient consent, privacy, and confidentiality. Proposed standards for specific-use cases in total body photography, teledermatology, and dermoscopy are described. 
The literature is replete with descriptions of obtaining photographs of skin disease, but universal imaging standards have not been developed

  17. Digital video image processing from dental operating microscope in endodontic treatment.

    Science.gov (United States)

    Suehara, Masataka; Nakagawa, Kan-Ichi; Aida, Natsuko; Ushikubo, Toshihiro; Morinaga, Kazuki

    2012-01-01

    Recently, optical microscopes have been used in endodontic treatment, as they offer advantages in terms of magnification, illumination, and documentation. Documentation is particularly important in presenting images to patients, and can take the form of both still images and motion video. Although high-quality still images can be obtained using a 35-mm film or CCD camera, the quality of still images produced by a video camera is significantly lower. The purpose of this study was to determine the potential of RegiStax in obtaining high-quality still images from a continuous video stream from an optical microscope. Video was captured continuously and sections with the highest luminosity chosen for frame alignment and stacking using the RegiStax program. The resulting stacked images were subjected to wavelet transformation. The results indicate that high-quality images with a large depth of field could be obtained using this method.
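The select-and-stack idea behind this workflow can be sketched as follows. RegiStax additionally aligns frames and applies wavelet sharpening; this minimal version only scores frames with a Laplacian-variance focus measure and averages the best ones:

```python
import numpy as np

def sharpness(frame):
    """Focus measure: variance of a discrete Laplacian response."""
    lap = (-4 * frame[1:-1, 1:-1] + frame[:-2, 1:-1] + frame[2:, 1:-1]
           + frame[1:-1, :-2] + frame[1:-1, 2:])
    return lap.var()

def stack_best(frames, keep=0.5):
    """Average the sharpest fraction of the frames (alignment step omitted)."""
    ranked = sorted(frames, key=sharpness, reverse=True)
    n = max(1, int(len(ranked) * keep))
    return np.mean(ranked[:n], axis=0)
```

Averaging N well-focused frames reduces uncorrelated sensor noise by roughly a factor of sqrt(N), which is why the stacked still can exceed single-frame video quality.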

  18. Frequency Identification of Vibration Signals Using Video Camera Image Data

    Directory of Open Access Journals (Sweden)

    Chia-Hung Wu

    2012-10-01

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture the most dominant modes of a vibration signal, but may introduce non-physical modes induced by insufficient frame rates. Using a simple model, the frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera (for instance, 256 levels) was enhanced by summing the gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line to the surface of the vibrating system in operation. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has a critical frequency for inducing false modes of 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were shown to partially suppress the non-physical modes, but they cannot eliminate them completely. Two examples, whose prominent vibration modes lie below the associated critical frequencies, are examined to demonstrate the performance of the proposed systems. In general, the experimental data show that these non-contact image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system.
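The abstract does not spell out the "simple model" that predicts the non-physical modes, but frequency folding (aliasing) at the camera frame rate is the standard explanation; a minimal sketch under that assumption:

```python
def aliased_frequency(f_signal, f_frame):
    """Apparent frequency of a vibration sampled at the camera frame rate."""
    f = f_signal % f_frame
    return min(f, f_frame - f)   # folded into the band [0, f_frame / 2]

# A 60 Hz vibration filmed at 30 fps appears stationary (false 0 Hz mode):
print(aliased_frequency(60.0, 30.0))   # 0.0
# A 25 Hz vibration at 30 fps shows up as a spurious 5 Hz mode:
print(aliased_frequency(25.0, 30.0))   # 5.0
```

Modes predicted this way can be excluded from the measured spectrum, which is how the false modes reported for the two cameras would be identified.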

  19. An infrared high rate video imager for various space applications

    Science.gov (United States)

    Svedhem, Håkan; Koschny, Detlef

    2010-05-01

    Modern spacecraft with high data transmission capabilities have opened up the possibility of flying video-rate imagers in space. Several fields concerned with observations of transient phenomena can benefit significantly from imaging at video frame rates. Some applications are observations and characterization of bolides/meteors, sprites, lightning, volcanic eruptions, and impacts on airless bodies. Applications can be found both on low and high Earth orbiting spacecraft as well as on planetary and lunar orbiters. The optimum wavelength range varies depending on the application, but we focus here on the near infrared, partly because it allows exploration of a new field and partly because it, in many cases, allows operation both during day and night. Such an instrument has, to our knowledge, never flown in space so far. The only sensors of a similar kind fly on US defense satellites for monitoring launches of ballistic missiles; the data from these sensors, however, is largely inaccessible to scientists. We have developed a bread-board version of such an instrument, the SPOSH-IR. The instrument is based on an earlier technology development, SPOSH (a Smart Panoramic Optical Sensor Head for operation in the visible range), but with the sensor replaced by a cooled IR detector and new optics. The instrument uses a Sofradir 320x256 pixel HgCdTe detector array with 30 µm pixel size, mounted directly on top of a four-stage thermoelectric Peltier cooler. The detector-cooler combination is integrated into an evacuated closed package with a glass window on its front side. The detector has a sensitive range between 0.8 and 2.5 µm. The optics is a seven-lens design with a focal length of 6 mm and a FOV of 90° by 72°, optimized for use in the SWIR. The detector operates at 200 K while the optics operates at ambient temperature. The optics and electronics for the bread-board have been designed and built by Jena-Optronik, Jena, Germany.
This talk will present the design and the

  20. The effects of frame-rate and image quality on perceived video quality in videoconferencing

    OpenAIRE

    Thakur, Aruna; Gao, Chaunsi; Larsson, Andreas; Parnes, Peter

    2001-01-01

    This report discusses the effect of frame rate and image quality on the perceived video quality in a specific videoconferencing application (MarratechPro). Subjects with varying videoconferencing experience took part in four experiments in which they gave their opinions on the quality of the video as the frame rate and image quality were varied. The results of the experiments showed that the subjects preferred a high frame rate over high image quality under the condition of limited bandwidth. ...

  1. Quality Assessment of Adaptive Bitrate Videos using Image Metrics and Machine Learning

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Brunnström, Kjell

    2015-01-01

    Adaptive bitrate (ABR) streaming is widely used for distribution of videos over the internet. In this work, we investigate how well we can predict the quality of such videos using well-known image metrics, information about the bitrate levels, and a relatively simple machine learning method. Quality assessment of ABR videos is a hard problem, but our initial results are promising. We obtain a Spearman rank order correlation of 0.88 using content-independent cross-validation.

  2. Gimbal Influence on the Stability of Exterior Orientation Parameters of UAV Acquired Images

    OpenAIRE

    Mateo Gašparović; Luka Jurjević

    2017-01-01

    In this paper, results from the analysis of the gimbal impact on the determination of the camera exterior orientation parameters of an Unmanned Aerial Vehicle (UAV) are presented and interpreted. Additionally, a new approach and methodology for testing the influence of gimbals on the exterior orientation parameters of UAV acquired images is presented. The main motive of this study is to examine the possibility of obtaining better geometry and favorable spatial bundles of rays of images in UAV...

  3. High-Performance Motion Estimation for Image Sensors with Video Compression

    OpenAIRE

    Weizhi Xu; Shouyi Yin; Leibo Liu; Zhiyong Liu; Shaojun Wei

    2015-01-01

    It is important to reduce the time cost of video compression for image sensors in video sensor networks. Motion estimation (ME) is the most time-consuming part of video compression. Previous work on ME exploited intra-frame data reuse in a reference frame to improve time efficiency, but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed...

  4. What do we do with all this video? Better understanding public engagement for image and video annotation

    Science.gov (United States)

    Wiener, C.; Miller, A.; Zykov, V.

    2016-12-01

    Advanced robotic vehicles are increasingly being used by oceanographic research vessels to enable more efficient and widespread exploration of the ocean, particularly the deep ocean. With cutting-edge capabilities mounted onto robotic vehicles, data at high resolutions is being generated more than ever before, enabling enhanced data collection and the potential for broader participation. For example, high resolution camera technology not only improves visualization of the ocean environment, but also expands the capacity to engage participants remotely through increased use of telepresence and virtual reality techniques. Schmidt Ocean Institute is a private, non-profit operating foundation established to advance the understanding of the world's oceans through technological advancement, intelligent observation and analysis, and open sharing of information. Telepresence-enabled research is an important component of Schmidt Ocean Institute's science research cruises, which this presentation will highlight. Schmidt Ocean Institute is one of the only research programs that makes its entire underwater vehicle dive series available online, creating a collection of video that enables anyone to follow deep sea research in real time. We encourage students, educators and the general public to take advantage of the freely available dive videos. Additionally, other SOI-supported internet platforms have engaged the public in image and video annotation activities. Examples of these new online platforms, which utilize citizen scientists to annotate scientific image and video data, will be provided. This presentation will include an introduction to SOI-supported video and image tagging citizen science projects, real-time robot tracking, live ship-to-shore communications, and an array of outreach activities that enable scientists to interact with the public and explore the ocean in fascinating detail.

  5. 17 CFR 232.304 - Graphic, image, audio and video material.

    Science.gov (United States)

    2010-04-01

    ... delivered to investors and others is deemed part of the electronic filing and subject to the civil liability..., image, audio or video material, they are not subject to the civil liability and anti-fraud provisions of...

  6. Do Stereotypic Images in Video Games Affect Attitudes and Behavior? Adolescents' Perspectives.

    Science.gov (United States)

    Henning, Alexandra; Brenick, Alaina; Killen, Melanie; O'Connor, Alexander; Collins, Michael J

    This study examined adolescents' attitudes about video games along with their self-reported play frequency. Ninth and eleventh grade students (N = 361), approximately evenly divided by grade and gender, were surveyed about whether video games have stereotypic images, involve harmful consequences or affect one's attitudes, whether game playing should be regulated by parents or the government, and whether game playing is a personal choice. Adolescents who played video games frequently showed decreased concern about the effects that games with negatively stereotyped images may have on the players' attitudes compared to adolescents who played games infrequently or not at all. With age, adolescents were more likely to view images as negative, but were also less likely to recognize stereotypic images of females as harmful and more likely to judge video-game playing as a personal choice. The paper discusses other findings in relation to research on adolescents' social cognitive judgments.

  8. Research and implementation of video image acquisition and processing based on Java and JMF

    Science.gov (United States)

    Qin, Jinlei; Li, Zheng; Niu, Yuguang

    2012-01-01

    This article presents a method for video image acquisition and processing, and a system based on the Java Media Framework (JMF) implemented with it. Taking advantage of the strengths of the Java language, the method can be deployed in both B/S and C/S modes. Key issues such as locating the video data source, playing video, and video image acquisition and processing are discussed in detail. The operation of the system shows that the method is fully compatible with common video capture devices, and that the system offers advantages such as low cost, rich functionality, ease of development, and cross-platform operation. Finally, the application prospects of the method based on Java and JMF are pointed out.

  9. Ultra-high resolution color images of the surface of comet 67P acquired by ROLIS

    Science.gov (United States)

    Schröder, Stefan; Mottola, Stefano; Arnold, Gabriele; Grothues, Hans-Georg; Hamm, Maximilian; Jaumann, Ralf; Michaelis, Harald; Pelivan, Ivanka; Proffe, Gerrit; Bibring, Jean-Pierre

    2015-04-01

    On Nov 12, 2014, the Rosetta Philae lander descended towards comet 67P/Churyumov-Gerasimenko. The onboard ROLIS camera successfully acquired high resolution images of the surface looking down from its vantage point on the instrument platform. ROLIS is a compact CCD imager with a 1024×1024 pixel sensor and a 57° field of view (Mottola et al., 2007, SSR 128, 241). It is equipped with an infinity lens (IFL), without which the camera focus is 30 cm. At Philae's final landing site, ROLIS removed the IFL and initiated an imaging sequence that shows the surface at the highest resolution ever obtained for a cometary surface (~0.5 mm per pixel). Illumination of the scene was provided by an onboard array of LEDs in four different colors: red, green, blue, and near-IR. ROLIS acquired one image for each color and a single dark exposure. The images show a unique, almost fractal morphology for the surface below the landing site that defies easy interpretation. However, there are similarities with some structures seen by the CIVA camera. Color and albedo variations over the surface are minor, and individual grains cannot be distinguished. The images are out-of-focus, indicating the surface was further away than the nominal 30 cm. The location of the illumination spot and the change of focus over the image are consistent with an inclined surface, indicating that Philae's final resting position is strongly tilted. In fact, it was inclined so much that we see the local horizon, even though ROLIS is downward-looking. Remarkably, the scene beyond the horizon is illuminated by the Sun, and out-of-focus particles can be seen to travel in the sky. The images suggest the environment of the lander is laden with fine dust, but a final assessment requires careful consideration of possible sources of stray light. Just before Philae went to sleep, ROLIS acquired an additional exposure with the IFL and the red LED. The resulting image is fully in focus. Because Philae had rotated and lifted

  10. An Experimental Video Disc for Map and Image Display,

    Science.gov (United States)

    1984-01-01

    a member of the American Society of Photogrammetry. ABSTRACT: A cooperative effort between four government agencies recently resulted in...video tapes, to movie film, to transparencies, to paper photographic prints, to paper maps, charts, and documents. Each of these media has its own space...perspective terrain views, engineering drawings, harbor charts, ground photographs, slides, movies, video tapes, documents, and organizational logos

  11. Improving the quality of radiographic images acquired with conical radiation beams through divergence correction and filtering

    Science.gov (United States)

    Silvani, M. I.; Almeida, G. L.; Latini, R. M.; Bellido, A. V. B.; Souza, E. S.; Lopes, R. T.

    2015-07-01

    Earlier works have shown the feasibility of correcting the deformation of the attenuation map in radiographs acquired with conical radiation beams, provided that the inspected object can be expressed in analytical geometry terms. This correction reduces the contribution of the main object in the radiograph, thus allowing the visualization of its otherwise concealed heterogeneities. However, the non-punctual character of the source demanded a cumbersome trial-and-error approach to determine the proper correction parameters for the algorithm. Within this frame, this work addresses the improvement of radiographs of specially tailored test objects acquired with a conical beam, by correcting the beam divergence using the information contained in the image itself. The corrected images afterwards underwent filtering in the frequency domain, using a 2D Fourier transform, aimed at reducing statistical fluctuation and noise. All radiographs were acquired using 165Dy and 198Au gamma-ray sources produced at the Argonauta research reactor of the Instituto de Engenharia Nuclear - CNEN, with an X-ray-sensitive imaging plate as detector. The processed images exhibit features invisible in the originals. Processing the originals by conventional histogram equalization, carried out for comparison purposes, failed to reveal those features.
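
    The abstract does not disclose the actual filter parameters used at the Argonauta facility; purely as an illustration of frequency-domain noise reduction with a 2D Fourier transform, the following pure-Python sketch applies a hypothetical hard low-pass cutoff to a toy 4×4 "radiograph" patch containing one noisy pixel:

```python
import cmath

def dft2(img):
    """Naive 2D DFT (fine for tiny demo images)."""
    n, m = len(img), len(img[0])
    return [[sum(img[x][y] * cmath.exp(-2j * cmath.pi * (u * x / n + v * y / m))
                 for x in range(n) for y in range(m))
             for v in range(m)] for u in range(n)]

def idft2(spec):
    """Inverse 2D DFT, returning the real part."""
    n, m = len(spec), len(spec[0])
    return [[sum(spec[u][v] * cmath.exp(2j * cmath.pi * (u * x / n + v * y / m))
                 for u in range(n) for v in range(m)).real / (n * m)
             for y in range(m)] for x in range(n)]

def lowpass(img, keep=1):
    """Zero out frequencies above `keep` (wrapped distance): a hard low-pass."""
    n, m = len(img), len(img[0])
    spec = dft2(img)
    for u in range(n):
        for v in range(m):
            du = min(u, n - u)   # wrapped frequency distance
            dv = min(v, m - v)
            if max(du, dv) > keep:
                spec[u][v] = 0
    return idft2(spec)

# A flat radiograph patch with one noisy pixel:
img = [[10.0] * 4 for _ in range(4)]
img[1][2] = 30.0
smooth = lowpass(img, keep=1)
```

    A real implementation would of course use an FFT; the point here is only that zeroing high-frequency coefficients spreads out and damps the statistical spike while preserving the mean (DC) level.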

  12. Video-rate computational super-resolution and integral imaging at longwave-infrared wavelengths

    OpenAIRE

    Preciado, Miguel A.; Carles, Guillem; Harvey, Andrew R.

    2017-01-01

    We report the first computational super-resolved, multi-camera integral imaging at long-wave infrared (LWIR) wavelengths. A synchronized array of FLIR Lepton cameras was assembled, and computational super-resolution and integral-imaging reconstruction employed to generate video with light-field imaging capabilities, such as 3D imaging and recognition of partially obscured objects, while also providing a four-fold increase in effective pixel count. This approach to high-resolution imaging enab...

  13. Innovative Video Diagnostic Equipment for Material Science

    Science.gov (United States)

    Capuano, G.; Titomanlio, D.; Soellner, W.; Seidel, A.

    2012-01-01

    Materials science experiments under microgravity increasingly rely on advanced optical systems to determine the physical properties of the samples under investigation. This includes video systems with high spatial and temporal resolution. The acquisition, handling, storage and transmission to ground of the resulting video data are very challenging. Since the available downlink data rate is limited, the capability to compress the video data significantly without compromising the data quality is essential. We report on the development of a Digital Video System (DVS) for EML (Electro Magnetic Levitator) which provides real-time video acquisition, high compression using advanced wavelet algorithms, storage and transmission of a continuous flow of video with different characteristics in terms of image dimensions and frame rates. The DVS is able to operate with the latest generation of high-performance cameras, acquiring high-resolution video images up to 4 Mpixel at 60 fps or high-frame-rate video images up to about 1000 fps at 512×512 pixels.
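
    The DVS compression scheme itself is not described beyond "advanced wavelet algorithms"; purely as an illustration of the wavelet principle such codecs build on, this sketch applies one level of a 2D Haar transform and zeroes small detail coefficients (the threshold and the tiny test tile are hypothetical, not EML parameters):

```python
def haar_compress(img, thresh):
    """One-level 2D Haar transform on an even-sized image; zero small
    detail coefficients (the lossy 'compression' step), then reconstruct."""
    n, m = len(img), len(img[0])
    out = [[0.0] * m for _ in range(n)]
    zeroed = 0
    for i in range(0, n, 2):
        for j in range(0, m, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            avg = (a + b + c + d) / 4.0   # approximation coefficient
            dh = (a - b + c - d) / 4.0    # horizontal detail
            dv = (a + b - c - d) / 4.0    # vertical detail
            dd = (a - b - c + d) / 4.0    # diagonal detail
            zeroed += sum(1 for x in (dh, dv, dd) if abs(x) < thresh)
            dh, dv, dd = (x if abs(x) >= thresh else 0.0 for x in (dh, dv, dd))
            # Inverse Haar of the (possibly truncated) coefficients:
            out[i][j] = avg + dh + dv + dd
            out[i][j + 1] = avg - dh + dv - dd
            out[i + 1][j] = avg + dh - dv - dd
            out[i + 1][j + 1] = avg - dh - dv + dd
    return out, zeroed

# Nearly flat 2x2 tile: all three detail coefficients fall below the threshold.
tile = [[100.0, 102.0], [101.0, 103.0]]
recon, zeroed = haar_compress(tile, thresh=5.0)
```

    The count of zeroed coefficients stands in for the bit savings; real codecs entropy-code the surviving coefficients across several decomposition levels.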

  14. Uncompressed video image transmission of laparoscopic or endoscopic surgery for telemedicine.

    Science.gov (United States)

    Huang, Ke-Jian; Qiu, Zheng-Jun; Fu, Chun-Yu; Shimizu, Shuji; Okamura, Koji

    2008-06-01

    Traditional narrowband telemedicine cannot provide quality dynamic images. We conducted videoconferences of laparoscopic and endoscopic operations via an uncompressed video transmission technique. A superfast broadband Internet link was set up between Shanghai in the People's Republic of China and Fukuoka in Japan. Uncompressed dynamic video images of laparoscopic and endoscopic operations were transmitted by a digital video transfer system (DVTS). Seven teleconferences, comprising live surgical demonstrations and recorded video teleconsultations, were conducted between June 2005 and June 2007. The smoothness of the motion picture, the sharpness of the images, and the clarity of the sound benefited from this form of telemedicine based upon DVTS. Telemedicine based upon DVTS is a superior choice for laparoscopic and endoscopic skill training across borders.

  15. Adaptive sensing and optimal power allocation for wireless video sensors with sigma-delta imager.

    Science.gov (United States)

    Marijan, Malisa; Demirkol, Ilker; Maricić, Danijel; Sharma, Gaurav; Ignjatović, Zeljko

    2010-10-01

    We consider optimal power allocation for wireless video sensors (WVSs), including the image sensor subsystem in the system analysis. By assigning a power-rate-distortion (P-R-D) characteristic for the image sensor, we build a comprehensive P-R-D optimization framework for WVSs. For a WVS node operating under a power budget, we propose power allocation among the image sensor, compression, and transmission modules, in order to minimize the distortion of the video reconstructed at the receiver. To demonstrate the proposed optimization method, we establish a P-R-D model for an image sensor based upon a pixel level sigma-delta (Σ∆) image sensor design that allows investigation of the tradeoff between the bit depth of the captured images and spatio-temporal characteristics of the video sequence under the power constraint. The optimization results obtained in this setting confirm that including the image sensor in the system optimization procedure can improve the overall video quality under power constraint and prolong the lifetime of the WVSs. In particular, when the available power budget for a WVS node falls below a threshold, adaptive sensing becomes necessary to ensure that the node communicates useful information about the video content while meeting its power budget.
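
    The paper's measured P-R-D curves are not reproduced in the abstract; the sketch below only illustrates the optimization pattern: a grid search splitting a power budget among the sensing, compression and transmission modules to minimize a distortion surrogate. The 1/p distortion terms and all constants are hypothetical stand-ins, not the paper's model:

```python
def allocate_power(budget, step=0.01):
    """Toy P-R-D trade-off: each module's distortion contribution falls
    with more power (diminishing returns); find the split with minimum
    total distortion under the budget."""
    best = None
    steps = int(round(budget / step))
    for i in range(1, steps - 1):
        for j in range(1, steps - i):
            p_sense = i * step
            p_comp = j * step
            p_tx = budget - p_sense - p_comp
            if p_tx <= 0:
                continue
            # Illustrative distortion surrogate (not the paper's P-R-D model):
            d = 1.0 / p_sense + 0.5 / p_comp + 2.0 / p_tx
            if best is None or d < best[0]:
                best = (d, p_sense, p_comp, p_tx)
    return best

# Split a unit power budget among sensing, compression, transmission:
distortion, p_sense, p_comp, p_tx = allocate_power(1.0)
```

    With these toy weights the transmission module, having the steepest distortion curve, correctly receives the largest share; the paper solves the analogous problem with empirically fitted module characteristics.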

  16. From computer images to video presentation: Enhancing technology transfer

    Science.gov (United States)

    Beam, Sherilee F.

    1994-01-01

    With NASA placing increased emphasis on transferring technology to outside industry, NASA researchers need to evaluate many aspects of their efforts in this regard. Often it may seem like too much self-promotion to many researchers. However, industry's use of video presentations in sales, advertising, public relations and training should be considered. Today, the most typical presentation at NASA is through the use of vu-graphs (overhead transparencies), which can be effective for text or static presentations. For full-blown color and sound presentations, however, the best method is videotape. In fact, it is frequently more convenient due to its portability and the availability of viewing equipment. This talk describes techniques for creating a video presentation through the use of a combined researcher and video-professional team.

  17. Logarithmic Type Image Processing Framework for Enhancing Photographs Acquired in Extreme Lighting

    Directory of Open Access Journals (Sweden)

    FLOREA, C.

    2013-05-01

    Full Text Available The Logarithmic Type Image Processing (LTIP) tools are mathematical models constructed for the representation and processing of gray-tone images. By careful redefinition of the fundamental operations, namely addition and scalar multiplication, a set of useful mathematical properties is achieved. Here we propose the extension of LTIP models by a novel parameterization rule that preserves the required cone space structure. To prove the usability of the proposed extension we present an application to low-light enhancement of images acquired with a digital still camera. The closure property of the proposed model makes it resemble the human visual system and the digital camera processing pipeline, leading to superior behavior compared with state-of-the-art methods.

  18. [A new laser scan system for video ophthalmoscopy. Initial clinical experiences also in relation to digital image processing].

    Science.gov (United States)

    Fabian, E; Mertz, M; Hofmann, H; Wertheimer, R; Foos, C

    1990-06-01

    The clinical advantages of a scanning laser ophthalmoscope (SLO) and video imaging of fundus pictures are described. Image quality (contrast, depth of field) and imaging possibilities (confocal stop) are assessed. Imaging with different lasers (argon, He-Ne) and changes in imaging rendered possible by confocal alignment of the imaging optics are discussed. Hard copies from video images are still of inferior quality compared to fundus photographs. Methods of direct processing and retrieval of digitally stored SLO video fundus images are illustrated by examples. Modifications for a definitive laser scanning system - in regard to the field of view and the quality of hard copies - are proposed.

  19. Exemplar-Based Image and Video Stylization Using Fully Convolutional Semantic Features.

    Science.gov (United States)

    Zhu, Feida; Yan, Zhicheng; Bu, Jiajun; Yu, Yizhou

    2017-05-10

    Color and tone stylization in images and videos strives to enhance unique themes with artistic color and tone adjustments. It has a broad range of applications from professional image postprocessing to photo sharing over social networks. Mainstream photo enhancement software, such as Adobe Lightroom and Instagram, provides users with predefined styles, which are often hand-crafted through a trial-and-error process. Such photo adjustment tools lack a semantic understanding of image contents, and the resulting global color transform limits the range of artistic styles they can represent. Stylistic enhancement, on the other hand, needs to apply distinct adjustments to various semantic regions; such an ability enables a broader range of visual styles. In this paper, we first propose a novel deep learning architecture for exemplar-based image stylization, which learns local enhancement styles from image pairs. Our deep learning architecture consists of fully convolutional networks (FCNs) for automatic semantics-aware feature extraction and fully connected neural layers for adjustment prediction. Image stylization can be efficiently accomplished with a single forward pass through our deep network. To extend our deep network from image stylization to video stylization, we exploit temporal superpixels (TSPs) to facilitate the transfer of artistic styles from image exemplars to videos. Experiments on a number of datasets for image stylization as well as a diverse set of video clips demonstrate the effectiveness of our deep learning architecture.

  20. Facial attractiveness ratings from video-clips and static images tell the same story.

    Science.gov (United States)

    Rhodes, Gillian; Lie, Hanne C; Thevaraja, Nishta; Taylor, Libby; Iredell, Natasha; Curran, Christine; Tan, Shi Qin Claire; Carnemolla, Pia; Simmons, Leigh W

    2011-01-01

    Most of what we know about what makes a face attractive and why we have the preferences we do is based on attractiveness ratings of static images of faces, usually photographs. However, several reports that such ratings fail to correlate significantly with ratings made to dynamic video clips, which provide richer samples of appearance, challenge the validity of this literature. Here, we tested the validity of attractiveness ratings made to static images, using a substantial sample of male faces. We found that these ratings agreed very strongly with ratings made to videos of these men, despite the presence of much more information in the videos (multiple views, neutral and smiling expressions and speech-related movements). Not surprisingly, given this high agreement, the components of video-attractiveness were also very similar to those reported previously for static-attractiveness. Specifically, averageness, symmetry and masculinity were all significant components of attractiveness rated from videos. Finally, regression analyses yielded very similar effects of attractiveness on success in obtaining sexual partners, whether attractiveness was rated from videos or static images. These results validate the widespread use of attractiveness ratings made to static images in evolutionary and social psychological research. We speculate that this validity may stem from our tendency to make rapid and robust judgements of attractiveness.
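
    The agreement the authors report between static-image and video ratings is naturally expressed as a correlation; a minimal sketch, on hypothetical rating data, of the Pearson coefficient such a comparison would use:

```python
def pearson_r(xs, ys):
    """Pearson correlation between two paired rating lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical mean attractiveness ratings for five faces,
# once from static photos and once from video clips:
static_ratings = [3.1, 4.2, 2.5, 5.0, 3.8]
video_ratings = [3.3, 4.0, 2.8, 4.7, 3.9]
r = pearson_r(static_ratings, video_ratings)
```

    A coefficient near 1 is what "agreed very strongly" amounts to; the study's actual values and sample are of course far larger.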

  1. Automated segmentation and enhancement of optical coherence tomography-acquired images of rodent brain.

    Science.gov (United States)

    Baran, Utku; Zhu, Wenbin; Choi, Woo June; Omori, Michael; Zhang, Wenri; Alkayed, Nabil J; Wang, Ruikang K

    2016-09-01

    Optical coherence tomography (OCT) is a non-invasive optical imaging method that has proven useful in various fields such as ophthalmology, dermatology and neuroscience. In ophthalmology, significant progress has been made in retinal layer segmentation and enhancement of OCT images. There are also segmentation algorithms to separate epidermal and dermal layers in OCT-acquired images of human skin. We describe simple image processing methods that allow automatic segmentation and enhancement of OCT images of rodent brain. We demonstrate the effectiveness of the proposed methods for OCT-based microangiography (OMAG) and tissue injury mapping (TIM) of mouse cerebral cortex. The results show significant improvement in image contrast and delineation of tissue injury, allowing visualization of different layers of capillary beds. Previously reported methods for other applications are yet to be used in neuroscience due to the complexity of tissue anatomy, unique physiology and technical challenges. OCT is a promising tool that provides high resolution in vivo microvascular and structural images of rodent brain. By automatically segmenting and enhancing OCT images, structural and microvascular changes in mouse cerebral cortex after stroke can be monitored in vivo with high contrast. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Thinking Images: Doing Philosophy in Film and Video

    Science.gov (United States)

    Parkes, Graham

    2009-01-01

    Over the past several decades film and video have been steadily infiltrating the philosophy curriculum at colleges and universities. Traditionally, teachers of philosophy have not made much use of "audiovisual aids" in the classroom beyond the chalk board or overhead projector, with only the more adventurous playing audiotapes, for example, or…

  3. Video surveillance of epilepsy patients using color image processing

    DEFF Research Database (Denmark)

    Bager, Gitte; Vilic, Kenan; Vilic, Adnan

    2014-01-01

    This paper introduces a method for tracking patients under video surveillance based on a color marker system. The patients are not restricted in their movements, which requires a tracking system that can overcome non-ideal scenes e.g. occlusions, very fast movements, lighting issues and other mov...

  4. Nearshore subtidal bathymetry from time-exposure video images

    NARCIS (Netherlands)

    Aarninkhof, S.G.J.; Ruessink, B.G.; Roelvink, J.A.

    2005-01-01

    Time-averaged (over many wave periods) nearshore video observations show the process of wave breaking as one or more white alongshore bands of high intensity. Across a known depth profile, similar bands of dissipation can be predicted with a model describing the time-averaged cross-shore evolution

  5. Video Surveillance of Epilepsy Patients using Color Image Processing

    DEFF Research Database (Denmark)

    Bager, Gitte; Vilic, Kenan; Alving, Jørgen

    2007-01-01

    This report introduces a method for tracking of patients under video surveillance based on a marker system. The patients are not restricted in their movements, which requires a tracking system that can overcome non-ideal scenes e.g. occlusions, very fast movements, lighting issues and other moving...

  6. Measuring coupled oscillations using an automated video analysis technique based on image recognition

    Energy Technology Data Exchange (ETDEWEB)

    Monsoriu, Juan A; Gimenez, Marcos H; Riera, Jaime; Vidaurre, Ana [Departamento de Fisica Aplicada, Universidad Politecnica de Valencia, E-46022 Valencia (Spain)

    2005-11-01

    The applications of the digital video image to the investigation of physical phenomena have increased enormously in recent years. The advances in computer technology and image recognition techniques allow the analysis of more complex problems. In this work, we study the movement of a damped coupled oscillation system. The motion is considered as a linear combination of two normal modes, i.e. the symmetric and antisymmetric modes. The image of the experiment is recorded with a video camera and analysed by means of software developed in our laboratory. The results show a very good agreement with the theory.
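
    The decomposition the analysis above relies on, namely motion as a linear combination of the symmetric and antisymmetric normal modes, can be sketched as follows (amplitudes and frequencies are illustrative, not the authors' measured values, and damping is omitted):

```python
import math

def coupled_positions(t, a_s, w_s, a_a, w_a):
    """Undamped model: each oscillator's displacement is a linear
    combination of the symmetric and antisymmetric normal modes."""
    q_s = a_s * math.cos(w_s * t)   # symmetric (in-phase) mode
    q_a = a_a * math.cos(w_a * t)   # antisymmetric (out-of-phase) mode
    return q_s + q_a, q_s - q_a     # x1(t), x2(t)

def mode_coordinates(x1, x2):
    """Invert the combination: recover the two normal-mode coordinates
    from the positions the video tracking would deliver."""
    return (x1 + x2) / 2.0, (x1 - x2) / 2.0

# Sample one instant of a trajectory and recover the modes:
x1, x2 = coupled_positions(0.3, a_s=1.0, w_s=2.0, a_a=0.5, w_a=3.0)
q_s, q_a = mode_coordinates(x1, x2)
```

    Applying `mode_coordinates` frame by frame to the tracked positions turns the coupled motion into two independent single-frequency signals, which is what makes the comparison with theory straightforward.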

  7. VIPER: a general-purpose digital image-processing system applied to video microscopy.

    Science.gov (United States)

    Brunner, M; Ittner, W

    1988-01-01

    This paper describes VIPER, the video image-processing system Erlangen. It consists of a general-purpose microcomputer, commercially available image-processing hardware modules connected directly to the computer, video input/output modules such as a TV camera, video recorders and monitors, and a software package. The modular structure and the capabilities of this system are explained. The software is user-friendly, menu-driven and performs image acquisition, transfers, greyscale processing, arithmetic, logical operations, filtering, display, colour assignment, graphics, and a couple of management functions. More than 100 image-processing functions are implemented. They are available either by typing a key or through a simple call to the function-subroutine library in application programs. Examples are supplied from the area of biomedical research, e.g. in vivo microscopy.

  8. Spectral optical coherence tomography in video-rate and 3D imaging of contact lens wear.

    Science.gov (United States)

    Kaluzny, Bartlomiej J; Fojt, Wojciech; Szkulmowska, Anna; Bajraszewski, Tomasz; Wojtkowski, Maciej; Kowalczyk, Andrzej

    2007-12-01

    To present the applicability of spectral optical coherence tomography (SOCT) for video-rate and three-dimensional imaging of a contact lens on the eye surface. The SOCT prototype instrument constructed at Nicolaus Copernicus University (Torun, Poland) is based on Fourier domain detection, which enables high sensitivity (96 dB) and increases the speed of imaging 60 times compared with conventional optical coherence tomography techniques. Consequently, video-rate imaging and three-dimensional reconstructions can be achieved, preserving the high quality of the image. The instrument operates under clinical conditions in the Ophthalmology Department (Collegium Medicum Nicolaus Copernicus University, Bydgoszcz, Poland). A total of three eyes fitted with different contact lenses were examined with the aid of the instrument. Before SOCT measurements, slit lamp examinations were performed. Data, which are representative for each imaging mode, are presented. The instrument provided high-resolution (4 μm axial × 10 μm transverse) tomograms with an acquisition time of 40 μs per A-scan. Video-rate imaging allowed the simultaneous quantitative evaluation of the movement of the contact lens and assessment of the fitting relationship between the lens and the ocular surface. Three-dimensional scanning protocols further improved lens visualization and fit evaluation. SOCT allows video-rate and three-dimensional cross-sectional imaging of the eye fitted with a contact lens. The analysis of both imaging modes suggests the future applicability of this technology to the contact lens field.

  9. Achieving real-time capsule endoscopy (CE) video visualization through panoramic imaging

    Science.gov (United States)

    Yi, Steven; Xie, Jean; Mui, Peter; Leighton, Jonathan A.

    2013-02-01

    In this paper, we present a novel, real-time capsule endoscopy (CE) video visualization concept based on panoramic imaging. Typical CE videos run about 8 hours and are manually reviewed by physicians to locate diseases such as bleeding and polyps. To date, there is no commercially available tool capable of providing stabilized and processed CE video that is easy to analyze in real time, so the burden on physicians' disease-finding efforts is substantial. In fact, since the CE camera sensor has a limited forward-looking view and a low frame rate (typically 2 frames per second), and captures very close-range imagery of the GI tract surface, it is no surprise that traditional visualization methods based on tracking and registration often fail. This paper presents a novel concept for real-time CE video stabilization and display. Instead of working directly on traditional forward-looking FOV (field of view) images, we work on panoramic images to bypass many problems facing traditional imaging modalities. Methods for panoramic image generation based on optical lens principles, leading to real-time data visualization, are presented, and non-rigid panoramic image registration methods are discussed.

  10. A new method to acquire 3-D images of a dental cast

    Science.gov (United States)

    Li, Zhongke; Yi, Yaxing; Zhu, Zhen; Li, Hua; Qin, Yongyuan

    2006-01-01

    This paper introduces our newly developed method to acquire three-dimensional images of a dental cast. A rotatable table, a laser knife, a mirror, a CCD camera and a personal computer make up the three-dimensional data-acquisition system. The dental cast is placed on the table and the mirror is installed beside it; a linear laser is projected onto the dental cast. The CCD camera is mounted above the dental cast and images both the cast and its reflection in the mirror. While the table rotates, the camera records the shape of the laser streak projected on the dental cast and transmits the data to the computer. After the table completes one revolution, the computer processes the data and calculates the three-dimensional coordinates of the dental cast's surface. In the data-processing procedure, artificial neural networks are employed to calibrate the lens distortion and map coordinates from the screen coordinate system to the world coordinate system. From the three-dimensional coordinates, the computer reconstructs the stereo image of the dental cast, which is essential for computer-aided diagnosis and treatment planning in orthodontics. In comparison with other systems in service, for example laser-beam three-dimensional scanning systems, this three-dimensional data-acquisition system is characterized by: a. speed, as it takes only 1 minute to scan a dental cast; b. compactness, as the machinery is simple and compact; c. no blind zone, as a mirror is cleverly introduced to reduce the blind zone.

  11. Music video shot segmentation using independent component analysis and keyframe extraction based on image complexity

    Science.gov (United States)

    Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun

    2012-04-01

    In recent years, music video data has been increasing at an astonishing speed. Shot segmentation and keyframe extraction constitute a fundamental unit in organizing, indexing and retrieving video content. In this paper a unified framework is proposed to detect shot boundaries and extract the keyframe of a shot. The music video is first segmented into shots using an illumination-invariant chromaticity histogram in independent component (IC) analysis feature space. Then we present a new metric, image complexity, computed from the ICs, to extract the keyframe of a shot. Experimental results show the framework is effective and has good performance.
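
    The paper's segmentation works on illumination-invariant chromaticity histograms in IC feature space; as a much-simplified stand-in for the histogram-comparison step, the following sketch detects an abrupt cut from plain grayscale histograms with a chi-square-style distance (the bin count, threshold and toy frames are hypothetical):

```python
def histogram(frame, bins=4, max_val=256):
    """Grayscale histogram of a frame (list of pixel rows)."""
    h = [0] * bins
    for row in frame:
        for px in row:
            h[px * bins // max_val] += 1
    return h

def hist_distance(h1, h2):
    """Chi-square-style distance between two frame histograms."""
    return sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b > 0)

def shot_boundaries(frames, thresh):
    """Indices where the consecutive-frame histogram distance jumps."""
    hists = [histogram(f) for f in frames]
    return [i for i in range(1, len(hists))
            if hist_distance(hists[i - 1], hists[i]) > thresh]

# Three dark frames, then an abrupt cut to bright frames:
dark = [[10] * 4 for _ in range(4)]
bright = [[200] * 4 for _ in range(4)]
cuts = shot_boundaries([dark, dark, dark, bright, bright], thresh=8.0)
```

    Working in an illumination-invariant space, as the paper does, makes the same comparison robust to the lighting flashes that plague music videos.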

  12. Image denoising method based on FPGA in digital video transmission

    Science.gov (United States)

    Xiahou, Yaotao; Wang, Wanping; Huang, Tao

    2017-11-01

    In the image acquisition and transmission chain, the acquisition equipment and methods subject the image to varying degrees of interference, and this interference reduces image quality and affects subsequent processing. Image filtering and enhancement are therefore particularly important. Traditional image denoising algorithms smooth the image while removing the noise, so that image details are lost. In order to improve image quality while preserving image detail, this paper proposes an improved filtering algorithm based on edge detection, a Gaussian filter and a median filter. This method not only reduces the noise effectively but also preserves image details relatively well; an FPGA implementation scheme of the filter algorithm is also given in this paper.
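
    The exact edge-detection/Gaussian/median combination is not specified in the abstract; a minimal sketch of the underlying idea, classifying each pixel by local gradient and median-filtering only flat regions so that edges survive, assuming a simple hypothetical gradient threshold:

```python
def median3x3(vals):
    """Median of a 9-element window."""
    return sorted(vals)[len(vals) // 2]

def hybrid_denoise(img, edge_thresh):
    """Toy edge-guided filter: pixels whose local gradient exceeds
    edge_thresh are kept unchanged (edge detail preserved); flat-region
    pixels get a 3x3 median (impulse-noise suppression). This only
    sketches the 'filter by region type' idea, not the paper's design."""
    n, m = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            gx = img[i][j + 1] - img[i][j - 1]   # crude horizontal gradient
            gy = img[i + 1][j] - img[i - 1][j]   # crude vertical gradient
            if abs(gx) + abs(gy) <= edge_thresh:  # flat region: denoise
                window = [img[a][b] for a in (i - 1, i, i + 1)
                                    for b in (j - 1, j, j + 1)]
                out[i][j] = median3x3(window)
    return out

# Flat patch with one impulse ("salt") pixel:
img = [[10] * 5 for _ in range(5)]
img[2][2] = 255
clean = hybrid_denoise(img, edge_thresh=60)
```

    The FPGA appeal of this structure is that the gradient test and the 3×3 window both need only a small line buffer and operate in a single pass.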

  13. The Impact of Video Compression on Remote Cardiac Pulse Measurement Using Imaging Photoplethysmography

    Science.gov (United States)

    2017-05-30

    quality is human subjective perception assessed by a Mean Opinion Score (MOS). Alternatively, video quality may be assessed using one of numerous ... cameras. Synchronization of the image capture from the array was achieved using a PCIe-6323 data acquisition card (National Instruments, Austin ... large reductions of either video resolution or frame rate did not strongly impact iPPG pulse rate measurements [9]. A balanced approach may yield

  14. Computer Vision Tools for Finding Images and Video Sequences.

    Science.gov (United States)

    Forsyth, D. A.

    1999-01-01

    Computer vision offers a variety of techniques for searching for pictures in large collections of images. Appearance methods compare images based on the overall content of the image using certain criteria. Finding methods concentrate on matching subparts of images, defined in a variety of ways, in hope of finding particular objects. These ideas…

  15. Enrichment of words by visual images: books, slides, and videos.

    Science.gov (United States)

    Brozek, J M

    1999-08-01

    This article reviews additions to 3 ways of visually enriching verbal accounts of the history of psychology: illustrated books, slides, and videos. Although each approach has its limitations and its merits, taken together they constitute a significant addition to the printed word. As such, they broaden the toolkits of both the learners and the teachers of the history of psychology. Reference is also made to 3 earlier publications.

  16. Assessing the Content of YouTube Videos in Educating Patients Regarding Common Imaging Examinations.

    Science.gov (United States)

    Rosenkrantz, Andrew B; Won, Eugene; Doshi, Ankur M

    2016-12-01

    To assess the content of currently available YouTube videos seeking to educate patients regarding commonly performed imaging examinations. After initial testing of possible search terms, the first two pages of YouTube search results for "CT scan," "MRI," "ultrasound patient," "PET scan," and "mammogram" were reviewed to identify educational patient videos created by health organizations. Sixty-three included videos were viewed and assessed for a range of features. Average views per video were highest for MRI (293,362) and mammography (151,664). Twenty-seven percent of videos used a nontraditional format (eg, animation, song, humor). All videos (100.0%) depicted a patient undergoing the examination, 84.1% a technologist, and 20.6% a radiologist; 69.8% mentioned examination lengths, 65.1% potential pain/discomfort, 41.3% potential radiation, 36.5% a radiology report/results, 27.0% the radiologist's role in interpretation, and 13.3% laboratory work. For CT, 68.8% mentioned intravenous contrast and 37.5% mentioned contrast safety. For MRI, 93.8% mentioned claustrophobia, 87.5% noise, 75.0% need to sit still, 68.8% metal safety, 50.0% intravenous contrast, and 0.0% contrast safety. For ultrasound, 85.7% mentioned use of gel. For PET, 92.3% mentioned radiotracer injection, 61.5% fasting, and 46.2% diabetic precautions. For mammography, unrobing, avoiding deodorant, and possible additional images were all mentioned by 63.6%; dense breasts were mentioned by 0.0%. Educational patient videos on YouTube regarding common imaging examinations received high public interest and may provide a valuable patient resource. Videos most consistently provided information detailing the examination experience and less consistently provided safety information or described the presence and role of the radiologist. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  17. VQone MATLAB toolbox: A graphical experiment builder for image and video quality evaluations: VQone MATLAB toolbox.

    Science.gov (United States)

    Nuutinen, Mikko; Virtanen, Toni; Rummukainen, Olli; Häkkinen, Jukka

    2016-03-01

    This article presents VQone, a graphical experiment builder, written as a MATLAB toolbox, developed for image and video quality ratings. VQone contains the main elements needed for the subjective image and video quality rating process. This includes building and conducting experiments and data analysis. All functions can be controlled through graphical user interfaces. The experiment builder includes many standardized image and video quality rating methods. Moreover, it enables the creation of new methods or modified versions from standard methods. VQone is distributed free of charge under the terms of the GNU general public license and allows code modifications to be made so that the program's functions can be adjusted according to a user's requirements. VQone is available for download from the project page (http://www.helsinki.fi/psychology/groups/visualcognition/).

  18. Automatic content-based analysis of georeferenced image data: Detection of Beggiatoa mats in seafloor video mosaics from the Håkon Mosby Mud Volcano

    Science.gov (United States)

    Jerosch, K.; Lüdtke, A.; Schlüter, M.; Ioannidis, G. T.

    2007-02-01

    The combination of new underwater technology such as remotely operated vehicles (ROVs), high-resolution video imagery, and software to compute georeferenced mosaics of the seafloor provides new opportunities for marine geological or biological studies and applications in the offshore industry. Even during single surveys by ROVs or towed systems, large amounts of images are compiled. While these underwater techniques are now well-engineered, there is still a lack of methods for the automatic analysis of the acquired image data. During ROV dives more than 4200 georeferenced video mosaics were compiled for the Håkon Mosby Mud Volcano (HMMV). Mud volcanoes such as HMMV are considered significant source locations for methane and are characterised by unique chemoautotrophic communities such as Beggiatoa mats. For the detection and quantification of the spatial distribution of Beggiatoa mats, an automated image-analysis technique was developed, which applies watershed transformation and relaxation-based labelling of pre-segmented regions. Comparison of the data derived by visual inspection of 2840 video images with the automated image analysis revealed agreement with a precision better than 90%. We consider this a step towards a time-efficient and accurate analysis of seafloor images for the computation of geochemical budgets and the identification of habitats at the seafloor.
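
    The published pipeline uses watershed transformation and relaxation-based labelling of pre-segmented regions; as a drastically simplified stand-in, the sketch below estimates mat coverage in a mosaic tile by brightness thresholding alone (the threshold and the toy tile are hypothetical, since Beggiatoa mats appear as bright white patches against dark sediment):

```python
def mat_coverage(mosaic, brightness_thresh):
    """Fraction of pixels brighter than brightness_thresh: a crude
    stand-in for the paper's watershed + relaxation labelling of
    white bacterial mats in a georeferenced mosaic."""
    total = 0
    bright = 0
    for row in mosaic:
        for px in row:
            total += 1
            if px >= brightness_thresh:
                bright += 1
    return bright / total

# Toy 4x4 mosaic tile: a 2x2 bright mat patch on dark sediment.
tile = [
    [20, 20, 20, 20],
    [20, 230, 240, 20],
    [20, 235, 228, 20],
    [20, 20, 20, 20],
]
coverage = mat_coverage(tile, brightness_thresh=200)
```

    Summing such per-tile fractions over the georeferenced mosaics is what turns image analysis into the spatial coverage maps the geochemical budgets need; the watershed step exists precisely because a global threshold fails under uneven ROV lighting.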

  19. Video image processing to create a speed sensor

    Science.gov (United States)

    1999-11-01

    Image processing has been applied to traffic analysis in recent years, with different goals. In the report, a new approach is presented for extracting vehicular speed information, given a sequence of real-time traffic images. We extract moving edges ...

  20. PIZZARO: Forensic analysis and restoration of image and video data

    Czech Academy of Sciences Publication Activity Database

    Kamenický, Jan; Bartoš, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozámský, Adam; Saic, Stanislav; Šroubek, Filip; Šorel, Michal; Zita, Aleš; Zitová, Barbara; Šíma, Z.; Švarc, P.; Hořínek, J.

    2016-01-01

    Roč. 264, č. 1 (2016), s. 153-166 ISSN 0379-0738 R&D Projects: GA MV VG20102013064; GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords : Image forensic analysis * Image restoration * Image tampering detection * Image source identification Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.989, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/kamenicky-0459504.pdf

  1. Image processing and classification procedures for analysis of sub-decimeter imagery acquired with an unmanned aircraft over arid rangelands

    Science.gov (United States)

    Using five centimeter resolution images acquired with an unmanned aircraft system (UAS), we developed and evaluated an image processing workflow that included the integration of resolution-appropriate field sampling, feature selection, object-based image analysis, and processing approaches for UAS i...

  2. Learning Computational Models of Video Memorability from fMRI Brain Imaging.

    Science.gov (United States)

    Han, Junwei; Chen, Changyuan; Shao, Ling; Hu, Xintao; Han, Jungong; Liu, Tianming

    2015-08-01

    Generally, various visual media are unequally memorable by the human brain. This paper looks into a new direction of modeling the memorability of video clips and automatically predicting how memorable they are by learning from brain functional magnetic resonance imaging (fMRI). We propose a novel computational framework by integrating the power of low-level audiovisual features and brain activity decoding via fMRI. Initially, a user study experiment is performed to create a ground truth database for measuring video memorability and a set of effective low-level audiovisual features is examined in this database. Then, human subjects' brain fMRI data are obtained when they are watching the video clips. The fMRI-derived features that convey the brain activity of memorizing videos are extracted using a universal brain reference system. Finally, due to the fact that fMRI scanning is expensive and time-consuming, a computational model is learned on our benchmark dataset with the objective of maximizing the correlation between the low-level audiovisual features and the fMRI-derived features using joint subspace learning. The learned model can then automatically predict the memorability of videos without fMRI scans. Evaluations on publicly available image and video databases demonstrate the effectiveness of the proposed framework.

  3. A professional and cost effective digital video editing and image storage system for the operating room.

    Science.gov (United States)

    Scollato, A; Perrini, P; Benedetto, N; Di Lorenzo, N

    2007-06-01

    We propose an easy-to-construct digital video editing system that is ideal for producing video documentation and still images. A digital video editing system applicable to many video sources in the operating room is described in detail. The proposed system has proved easy to use and permits one to obtain videography quickly and easily. By mixing different streams of video input from all the devices in use in the operating room, the application of filters and effects produces a final, professional end-product. Recording on a DVD provides an inexpensive, portable and easy-to-use medium on which to store, re-edit or copy the material at a later time. From stored videography it is easy to extract high-quality still images useful for teaching, presentations and publications. In conclusion, digital videography and still photography can easily be recorded by the proposed system, producing high-quality video recordings. The use of FireWire ports provides good compatibility with next-generation hardware and software. The high standard of quality makes the proposed system one of the lowest-priced products available today.

  4. Word2VisualVec: Image and Video to Sentence Matching by Visual Feature Prediction

    OpenAIRE

    Dong, Jianfeng; Li, Xirong; Snoek, Cees G. M.

    2016-01-01

    This paper strives to find the sentence best describing the content of an image or video. Different from existing works, which rely on a joint subspace for image / video to sentence matching, we propose to do so in a visual space only. We contribute Word2VisualVec, a deep neural network architecture that learns to predict a deep visual encoding of textual input based on sentence vectorization and a multi-layer perceptron. We thoroughly analyze its architectural design, by varying the sentence...

  5. On-board processing of video image sequences

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl; Chanrion, Olivier Arnaud; Forchhammer, Søren

    2008-01-01

    The "Atmosphere-Space Interactions Monitor" (ASIM) is a payload to be mounted on one of the external platforms of the Columbus module of the International Space Station (ISS). The instruments include six video cameras, six photometers and one X-ray detector. The main scientific objective of the mission is to study transient luminous events (TLE) above severe thunderstorms: the sprites, jets and elves. Other atmospheric phenomena are also studied, including aurora, gravity waves and meteors. As part of the ASIM Phase B study, on-board processing of data from the cameras is being developed ...

  6. Magnetic resonance imaging of acquired disorders of the pediatric female pelvis other than neoplasm.

    Science.gov (United States)

    Cox, Mougnyan; Gould, Sharon W; Podberesky, Daniel J; Epelman, Monica

    2016-05-01

    Transabdominal US remains the primary screening imaging modality of the pediatric female pelvis. However, MRI has become an invaluable adjunct to US in recent years. MRI offers superb soft-tissue contrast resolution that allows for detailed evaluation, particularly of the ovaries and their associated pathology. MRI can yield diagnostic information that is similar to or even better than that of US, especially in nonsexually active girls in whom transvaginal US would be contraindicated. MRI is generally a second-line examination and is preferred over CT because it does not involve the use of ionizing radiation. MRI might be underutilized in this population, particularly in differentiating surgical from nonsurgical conditions. This article reviews the relevant anatomy and discusses imaging of acquired conditions that involve the pediatric female genital tract, illustrating associated pathology with case examples.

  7. Image processing for identification and quantification of filamentous bacteria in in situ acquired images.

    Science.gov (United States)

    Dias, Philipe A; Dunkel, Thiemo; Fajado, Diego A S; Gallegos, Erika de León; Denecke, Martin; Wiedemann, Philipp; Schneider, Fabio K; Suhr, Hajo

    2016-06-11

    In the activated sludge process, problems of filamentous bulking and foaming can occur due to overgrowth of certain filamentous bacteria. Nowadays, these microorganisms are typically monitored by means of light microscopy, commonly combined with staining techniques. As drawbacks, these methods are susceptible to human error and subjectivity, and are limited by the use of discontinuous microscopy. The in situ microscope appears to be a suitable tool for continuous monitoring of filamentous bacteria, providing real-time examination and automated analysis while eliminating the sampling, preparation and transport of samples. In this context, an image processing algorithm is proposed for the automated recognition and measurement of filamentous objects. This work introduces a method for the real-time evaluation of images without any staining, phase-contrast or dilution techniques, unlike previous studies in the literature. Moreover, we introduce an algorithm which estimates the total extended filament length based on geodesic distance calculation. For a period of twelve months, samples from an industrial activated sludge plant were collected weekly and imaged without any prior conditioning, replicating real environment conditions. Trends in the filament growth rate, the most important parameter for decision making, are correctly identified. For reference images whose filaments were marked by specialists, the algorithm correctly recognized 72% of the filament pixels, with a false positive rate of at most 14%. An average execution time of 0.7 s per image was achieved. Experiments have shown that the designed algorithm provides a suitable quantification of filaments when compared with human perception and standard methods. The algorithm's average execution time proves its suitability for being mapped onto a computational architecture to provide real-time monitoring.
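The geodesic-distance idea behind the filament-length estimate can be sketched as follows. This is an illustrative approximation, not the authors' algorithm: it runs a breadth-first search inside a binary filament mask and takes the largest hop count from a chosen endpoint as the extended length in pixels. Branch handling, skeletonization, and unit calibration are omitted, and all names are hypothetical.

```python
from collections import deque

def geodesic_length(mask, start):
    """Largest geodesic distance (in pixel hops, 8-connectivity) reachable
    from `start` while staying inside the binary mask.

    For a thin, curled filament this approximates its extended length,
    which a straight-line (Euclidean) measurement would underestimate.
    """
    rows, cols = len(mask), len(mask[0])
    dist = {start: 0}
    queue = deque([start])
    while queue:
        y, x = queue.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (0 <= ny < rows and 0 <= nx < cols
                        and mask[ny][nx] and (ny, nx) not in dist):
                    dist[(ny, nx)] = dist[(y, x)] + 1
                    queue.append((ny, nx))
    return max(dist.values())
```

For a straight horizontal run of five filament pixels the result is 4 hops, i.e. the chain length rather than the bounding-box diagonal.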

  8. Assessing the consistency of UAV-derived point clouds and images acquired at different altitudes

    Science.gov (United States)

    Ozcan, O.

    2016-12-01

    Unmanned Aerial Vehicles (UAVs) offer several advantages in terms of cost and image resolution compared to terrestrial photogrammetry and satellite remote sensing systems. Nowadays, UAVs, which bridge the gap between satellite-scale and field-scale applications, are being used in various application areas to acquire hyperspatial and high temporal resolution imagery, owing to their working capacity and short acquisition times compared with conventional photogrammetry methods. UAVs have been used in various fields, such as the creation of 3-D earth models, production of high-resolution orthophotos, network planning, field monitoring and monitoring of agricultural lands. Thus, the geometric accuracy of orthophotos and the volumetric accuracy of point clouds are of critical importance for land surveying applications. Correspondingly, Structure from Motion (SfM) photogrammetry, which is frequently used in conjunction with UAVs, has recently appeared in the environmental sciences as an impressive tool allowing the creation of 3-D models from unstructured imagery. In this study, we aimed to reveal the spatial accuracy of the images acquired from an integrated digital camera and the volumetric accuracy of Digital Surface Models (DSMs) derived from UAV flight plans at different altitudes using the SfM methodology. Low-altitude multispectral overlapping aerial photography was collected at altitudes of 30 to 100 meters and georeferenced with RTK-GPS ground control points. These altitudes allow hyperspatial imagery with resolutions of 1-5 cm, depending on the sensor being used. Preliminary results revealed that the vertical comparison of UAV-derived point clouds with GPS measurements indicated average distances at the cm level. Larger values are found in areas where instantaneous changes in the surface are present.

  9. Video and image retrieval beyond the cognitive level: the needs and possibilities

    Science.gov (United States)

    Hanjalic, Alan

    2001-01-01

    The worldwide research efforts in the area of image and video retrieval have so far concentrated on increasing the efficiency and reliability of extracting the elements of image and video semantics, and thus on improving search and retrieval performance at the cognitive level of content abstraction. At this abstraction level, the user is searching for 'factual' or 'objective' content, such as an image showing a panorama of San Francisco, an outdoor or an indoor image, a broadcast news report on a defined topic, a movie dialog between the actors A and B, or the parts of a basketball game showing fast breaks, steals and scores. These efforts, however, do not address retrieval applications at the so-called affective level of content abstraction, where the 'ground truth' is not strictly defined. Such applications are, for instance, those where the subjectivity of the user plays the major role, e.g. the task of retrieving all images that the user 'likes most', and those that are based on 'recognizing emotions' in audiovisual data. Typical examples are searching for all images that 'radiate happiness', identifying all 'sad' movie fragments and looking for 'romantic landscapes', 'sentimental' movie segments, 'movie highlights' or the 'most exciting' moments of a sport event. This paper discusses the needs and possibilities for widening the current scope of research in the area of image and video search and retrieval in order to enable applications at the affective level of content abstraction.

  10. Fusion of intraoperative cone-beam CT and endoscopic video for image-guided procedures

    Science.gov (United States)

    Daly, M. J.; Chan, H.; Prisman, E.; Vescan, A.; Nithiananthan, S.; Qiu, J.; Weersink, R.; Irish, J. C.; Siewerdsen, J. H.

    2010-02-01

    Methods for accurate registration and fusion of intraoperative cone-beam CT (CBCT) with endoscopic video have been developed and integrated into a system for surgical guidance that accounts for intraoperative anatomical deformation and tissue excision. The system is based on a prototype mobile C-Arm for intraoperative CBCT that provides low-dose 3D image updates on demand with sub-mm spatial resolution and soft-tissue visibility, and also incorporates subsystems for real-time tracking and navigation, video endoscopy, deformable image registration of preoperative images and surgical plans, and 3D visualization software. The position and pose of the endoscope are geometrically registered to 3D CBCT images by way of real-time optical tracking (NDI Polaris) for rigid endoscopes (e.g., head and neck surgery), and electromagnetic tracking (NDI Aurora) for flexible endoscopes (e.g., bronchoscopes, colonoscopes). The intrinsic (focal length, principal point, non-linear distortion) and extrinsic (translation, rotation) parameters of the endoscopic camera are calibrated from images of a planar calibration checkerboard (2.5×2.5 mm2 squares) obtained at different perspectives. Video-CBCT registration enables a variety of 3D visualization options (e.g., oblique CBCT slices at the endoscope tip, augmentation of video with CBCT images and planning data, virtual reality representations of CBCT [surface renderings]), which can reveal anatomical structures not directly visible in the endoscopic view - e.g., critical structures obscured by blood or behind the visible anatomical surface. Video-CBCT fusion is evaluated in pre-clinical sinus and skull base surgical experiments, and is currently being incorporated into an ongoing prospective clinical trial in CBCT-guided head and neck surgery.
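As a rough illustration of the intrinsic parameters mentioned above (focal length, principal point, non-linear distortion), a minimal pinhole projection with a single radial distortion term might look like the sketch below. The function and parameter names are assumptions for illustration; the paper's calibration procedure estimates such values from checkerboard images rather than assuming them.

```python
def project(point_cam, fx, fy, cx, cy, k1=0.0):
    """Project a 3-D point in camera coordinates to pixel coordinates.

    Simple pinhole model: (fx, fy) focal lengths in pixels, (cx, cy)
    principal point, k1 a single radial distortion coefficient applied
    to the normalized image coordinates.
    """
    X, Y, Z = point_cam
    x, y = X / Z, Y / Z              # normalized image coordinates
    r2 = x * x + y * y
    d = 1.0 + k1 * r2                # radial distortion factor
    return (fx * x * d + cx, fy * y * d + cy)
```

Once these intrinsics (plus the tracked extrinsics) are known, CBCT voxels can be projected into the endoscopic image plane for the overlay visualizations described above.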

  11. The ImageNet Shuffle: Reorganized Pre-training for Video Event Detection

    NARCIS (Netherlands)

    Mettes, P.; Koelma, D.C.; Snoek, C.G.M.

    2016-01-01

    This paper strives for video event detection using a representation learned from deep convolutional neural networks. Different from the leading approaches, who all learn from the 1,000 classes defined in the ImageNet Large Scale Visual Recognition Challenge, we investigate how to leverage the

  12. Video-rate two-photon excited fluorescence lifetime imaging system with interleaved digitization.

    Science.gov (United States)

    Dow, Ximeng Y; Sullivan, Shane Z; Muir, Ryan D; Simpson, Garth J

    2015-07-15

    A fast (up to video rate) two-photon excited fluorescence lifetime imaging system based on interleaved digitization is demonstrated. The system is compatible with existing beam-scanning microscopes with minor electronics and software modification. Proof-of-concept demonstrations were performed using laser dyes and biological tissue.

  13. Geometric Distortion in Image and Video Watermarking. Robustness and Perceptual Quality Impact

    NARCIS (Netherlands)

    Setyawan, I.

    2004-01-01

    The main focus of this thesis is the problem of geometric distortion in image and video watermarking. In this thesis we discuss the two aspects of the geometric distortion problem, namely the watermark desynchronization aspect and the perceptual quality assessment aspect. Furthermore, this thesis

  14. Class Energy Image Analysis for Video Sensor-Based Gait Recognition: A Review

    Directory of Open Access Journals (Sweden)

    Zhuowen Lv

    2015-01-01

    Full Text Available Gait is a unique biometric feature perceptible at larger distances, and the gait representation approach plays a key role in a video sensor-based gait recognition system. The Class Energy Image is one of the most important appearance-based gait representation methods and has received much attention. In this paper, we reviewed the expressions and meanings of various Class Energy Image approaches, and analyzed the information in the Class Energy Images. Furthermore, the effectiveness and robustness of these approaches were compared on the benchmark gait databases. We outlined the research challenges and provided promising future directions for the field. To the best of our knowledge, this is the first review that focuses on the Class Energy Image. It can provide a useful reference in the literature on video sensor-based gait representation approaches.

  15. Class Energy Image Analysis for Video Sensor-Based Gait Recognition: A Review

    Science.gov (United States)

    Lv, Zhuowen; Xing, Xianglei; Wang, Kejun; Guan, Donghai

    2015-01-01

    Gait is a unique biometric feature perceptible at larger distances, and the gait representation approach plays a key role in a video sensor-based gait recognition system. The Class Energy Image is one of the most important appearance-based gait representation methods and has received much attention. In this paper, we reviewed the expressions and meanings of various Class Energy Image approaches, and analyzed the information in the Class Energy Images. Furthermore, the effectiveness and robustness of these approaches were compared on the benchmark gait databases. We outlined the research challenges and provided promising future directions for the field. To the best of our knowledge, this is the first review that focuses on the Class Energy Image. It can provide a useful reference in the literature on video sensor-based gait representation approaches. PMID:25574935

  16. System and method for image registration of multiple video streams

    Energy Technology Data Exchange (ETDEWEB)

    Dillavou, Marcus W.; Shum, Phillip Corey; Guthrie, Baron L.; Shenai, Mahesh B.; Deaton, Drew Steven; May, Matthew Benton

    2018-02-06

    Provided herein are methods and systems for image registration from multiple sources. A method for image registration includes rendering a common field of interest that reflects a presence of a plurality of elements, wherein at least one of the elements is a remote element located remotely from another of the elements and updating the common field of interest such that the presence of the at least one of the elements is registered relative to another of the elements.

  17. Laser Imaging Video Camera Sees Through Fire, Fog, Smoke

    Science.gov (United States)

    2015-01-01

    Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

  18. Real-Depth imaging: a new (no glasses) 3D imaging technology with video/data projection applications

    Science.gov (United States)

    Dolgoff, Eugene

    1997-05-01

    Floating Images, Inc. has developed the software and hardware for a new, patent-pending, 'floating 3D, off-the-screen-experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3D graphics' technologies are actually flat on screen. Floating Images technology actually produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. In addition, unlike stereoscopic 3D imaging, no glasses, headgear, or other viewing aids are used. And, unlike current autostereoscopic imaging technologies, there is virtually no restriction on where viewers can sit to view the images, with no 'bad' or 'dead' zones, flipping, or pseudoscopy. In addition to providing traditional depth cues such as perspective and background image occlusion, the new technology also provides both horizontal and vertical binocular parallax and accommodation which coincides with convergence. Since accommodation coincides with convergence, viewing these images doesn't produce headaches, fatigue, or eye strain, regardless of how long they are viewed. The imagery must either be formatted for the Floating Images platform when written, or existing software can be reformatted without much difficulty. The optical hardware system can be made to accommodate virtually any projection system to produce Floating Images for the boardroom, video arcade, stage shows, or the classroom.

  19. Operational prediction of rip currents using numerical model and nearshore bathymetry from video images

    Science.gov (United States)

    Sembiring, L.; Van Ormondt, M.; Van Dongeren, A. R.; Roelvink, J. A.

    2017-07-01

    Rip currents are one of the most dangerous coastal hazards for swimmers. In order to minimize the risk, an operational, process-based coastal model system can be utilized to provide forecasts of nearshore waves and currents that may endanger beachgoers. In this paper, an operational model for rip current prediction utilizing nearshore bathymetry obtained from a video imaging technique is demonstrated. For the nearshore-scale model, XBeach is used, with which tidal currents and wave-induced currents (including the effect of wave groups) can be simulated simultaneously. Up-to-date bathymetry is obtained using the video imaging technique cBathy. The system is tested for the Egmond aan Zee beach, located in the northern part of the Dutch coastline. This paper tests the applicability of bathymetry obtained from the video technique as input for the numerical modelling system by comparing simulation results using surveyed bathymetry with model results using video bathymetry. Results show that the video technique is able to produce bathymetry converging towards the ground truth observations. This bathymetry validation is followed by an example of an operational forecasting type of simulation for predicting rip currents. Rip current flow fields simulated over measured and modeled bathymetries are compared in order to assess the performance of the proposed forecast system.

  20. Accuracy Assessment of a Complex Building 3d Model Reconstructed from Images Acquired with a Low-Cost Uas

    Science.gov (United States)

    Oniga, E.; Chirilă, C.; Stătescu, F.

    2017-02-01

    Nowadays, Unmanned Aerial Systems (UASs) are a widely used acquisition technique for creating 3D models of buildings, providing a high number of images at very high resolution, or video sequences, in a very short time. Since low-cost UASs are preferred, the accuracy of a building 3D model created using these platforms must be evaluated. As a test case, the dean's office building of the Faculty of "Hydrotechnical Engineering, Geodesy and Environmental Engineering" of Iasi, Romania, was chosen; it is a complex-shaped building with a roof formed of two hyperbolic paraboloids. Seven points were placed on the ground around the building, three of them being used as GCPs and the remaining four as check points (CPs) for accuracy assessment. Additionally, the coordinates of 10 natural CPs representing characteristic points of the building were measured with a Leica TCR 405 total station. The building 3D model was created as a point cloud automatically generated from the digital images acquired with the low-cost UAS, using image matching algorithms and different software packages such as 3DF Zephyr, Visual SfM, PhotoModeler Scanner and Drone2Map for ArcGIS. Except for the PhotoModeler Scanner software, the interior and exterior orientation parameters were determined simultaneously by solving a self-calibrating bundle adjustment. Based on the UAS point clouds automatically generated using the above-mentioned software, and on GNSS data respectively, the parameters of the east-side hyperbolic paraboloid were calculated using the least squares method and statistical blunder detection. Then, in order to assess the accuracy of the building 3D model, several comparisons were made for the facades and the roof against reference data considered to have minimal errors: a TLS mesh for the facades and a GNSS mesh for the roof. Finally, the front facade of the building was created in 3D based on its characteristic points using the PhotoModeler Scanner

  1. Signal template generation from acquired mammographic images for the non-prewhitening model observer with eye-filter

    Science.gov (United States)

    Balta, Christiana; Bouwman, Ramona W.; Sechopoulos, Ioannis; Broeders, Mireille J. M.; Karssemeijer, Nico; van Engen, Ruben E.; Veldkamp, Wouter J. H.

    2017-03-01

    Model observers (MOs) are being investigated for image quality assessment in full-field digital mammography (FFDM). Signal templates for the non-prewhitening MO with eye filter (NPWE) were formed using acquired FFDM images. A signal template was generated from acquired images by averaging multiple exposures, resulting in a low-noise signal template. Noise elimination while preserving the signal was investigated, and a methodology which results in a noise-free template is proposed. In order to deal with signal location uncertainty, template shifting was implemented. The procedure to generate the template was evaluated on images of an anthropomorphic breast phantom containing microcalcification-related signals. Optimal reduction of the background noise was achieved without changing the signal. Based on a validation study in simulated images, the difference (bias) in MO performance from the ground truth signal was calculated. Within an image quality assessment framework, the proposed method to construct templates from acquired images facilitates the use of the NPWE MO in acquired images.
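The core of the template-generation step, averaging multiple exposures of the same signal so that independent noise falls off roughly as 1/sqrt(N), can be sketched in a few lines. This is a toy illustration only; the paper's additional noise elimination and sub-pixel template shifting are not reproduced, and all names are hypothetical.

```python
import random

def average_template(exposures):
    """Average repeated exposures pixel-wise to form a low-noise template."""
    n = len(exposures)
    return [sum(e[i] for e in exposures) / n for i in range(len(exposures[0]))]

def residual_std(image, signal):
    """Root-mean-square deviation of an image from the known signal."""
    m = len(image)
    return (sum((image[i] - signal[i]) ** 2 for i in range(m)) / m) ** 0.5
```

With, say, 100 exposures of per-pixel noise sigma 1.0, the residual noise in the averaged template drops to roughly 0.1, an order of magnitude below any single exposure.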

  2. Low-complexity video encoding method for wireless image transmission in capsule endoscope.

    Science.gov (United States)

    Takizawa, Kenichi; Hamaguchi, Kiyoshi

    2010-01-01

    This paper presents a low-complexity video encoding method applicable to wireless image transmission in capsule endoscopes. The encoding method is based on Wyner-Ziv theory, in which correlated information available at the receiver is exploited as side information for decoding. Complex processes in video encoding, such as estimation of motion vectors, are therefore moved to the receiver side, which has a larger-capacity battery. As a result, the encoding process reduces to decimating the channel-coded original data. We provide a performance evaluation of a low-density parity-check (LDPC) coding method in the AWGN channel.
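The "decimate coded data" idea can be illustrated with a toy syndrome-coding example. The paper uses LDPC codes; the sketch below substitutes a tiny Hamming(7,4) code to show the same Wyner-Ziv binning principle: the encoder transmits only a 3-bit syndrome instead of the 7 source bits, and the decoder recovers the source from receiver-side information that differs from it in at most one bit position. All function names are illustrative.

```python
def syndrome(bits):
    """3-bit Hamming(7,4) syndrome of a 7-bit word (positions 1..7).

    Bit j of the syndrome XORs together all positions whose index has
    bit j set, so a single-bit error at position p yields syndrome p.
    """
    s = [0, 0, 0]
    for i, b in enumerate(bits, start=1):
        for j in range(3):
            if (i >> j) & 1:
                s[j] ^= b
    return s

def encode(source):
    # Transmitter: send only the 3 syndrome bits, not the 7 source bits.
    return syndrome(source)

def decode(side_info, s):
    # Receiver: side information differs from the source in at most one
    # position; the syndrome difference identifies that position.
    diff = [a ^ b for a, b in zip(syndrome(side_info), s)]
    pos = diff[0] | (diff[1] << 1) | (diff[2] << 2)
    decoded = list(side_info)
    if pos:
        decoded[pos - 1] ^= 1
    return decoded
```

The asymmetry is the point: the transmitter's work is a handful of XORs, while all correlation exploitation happens at the battery-rich receiver.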

  3. Using smart phone video to supplement communication of radiology imaging in a neurosurgical unit: technical note.

    Science.gov (United States)

    Shivapathasundram, Ganeshwaran; Heckelmann, Michael; Sheridan, Mark

    2012-04-01

    The use of smart phones within medicine continues to grow as mobile phone technology continues to evolve. One use of smart phones within medicine is the transmission of radiological images to consultant neurosurgeons who are off-site in an emergency setting. In our unit, this has allowed quick, efficient, and safe communication between consultant neurosurgeons and trainees, aiding rapid patient assessment and management in emergency situations. OBJECTIVE: To describe a new use of smart phone technology in the neurosurgical setting, in which the video application of smart phones allows transfer of a whole series of patient neuroimaging via multimedia messaging service to off-site consultant neurosurgeons. METHOD/TECHNIQUE: Using the video application of smart phones, a 30-second video of an entire series of patient neuroimaging was transmitted to consultant neurosurgeons. With this information, combined with a clinical history, accurate management decisions were made. This technique has been used in a number of emergency situations in our unit to date. Thus far, the imaging received by consultants has been a very useful adjunct to the clinical information provided by the on-site trainee, and has helped expedite the management of patients. While the aim should always be for the specialist neurosurgeon to review the imaging in person, in emergency settings this is not always possible, and we feel that this smart phone video technique is a very useful means of rapid communication with neurosurgeons.

  4. Determination of exterior parameters for video image sequences from helicopter by block adjustment with combined vertical and oblique images

    Science.gov (United States)

    Zhang, Jianqing; Zhang, Yong; Zhang, Zuxun

    2003-09-01

    Determination of image exterior parameters is a key aspect of realizing automatic texture mapping of buildings in the reconstruction of real 3D city models. This paper reports on an application of automatic aerial triangulation to a block with three video image sequences: one vertical image sequence viewing the buildings' roofs and two oblique image sequences viewing the buildings' walls. A new processing procedure is developed in order to automatically match homologous points between oblique and vertical images. Two strategies are tested. One treats the three strips as independent blocks and executes strip block adjustment for each; the other creates a single block from the three strips, uses the new image matching procedure to extract a large number of tie points, and executes block adjustment. The block adjustment results of these two strategies are also compared.

  5. Comparison of Image Transform-Based Features for Visual Speech Recognition in Clean and Corrupted Videos

    Directory of Open Access Journals (Sweden)

    Ji Ming

    2008-03-01

    Full Text Available We present results of a study into the performance of a variety of image transform-based feature types for speaker-independent visual speech recognition of isolated digits. This includes the first reported use of features extracted using a discrete curvelet transform. The study compares several methods for selecting features of each feature type and shows the relative benefits of both static and dynamic visual features. The performance of the features is tested on both clean video data and video data corrupted in a variety of ways, to assess each feature type's robustness to potential real-world conditions. One of the test conditions involves a novel form of video corruption we call jitter, which simulates camera and/or head movement during recording.

  6. Video outside versus video inside the web: do media setting and image size have an impact on the emotion-evoking potential of video?

    NARCIS (Netherlands)

    Verleur, R.; Verhagen, Pleunes Willem; Crawford, Margaret; Simonson, Michael; Lamboy, Carmen

    2001-01-01

    To explore the educational potential of video-evoked affective responses in a Web-based environment, the question was raised whether video in a Web-based environment is experienced differently from video in a traditional context. An experiment was conducted that studied the affect-evoking power of

  7. Bit Plane Coding based Steganography Technique for JPEG2000 Images and Videos

    Directory of Open Access Journals (Sweden)

    Geeta Kasana

    2016-02-01

    Full Text Available In this paper, a Bit Plane Coding (BPC) based steganography technique for JPEG2000 images and Motion JPEG2000 video is proposed. Embedding in this technique is performed in the lowest significant bit planes of the wavelet coefficients of a cover image. In the JPEG2000 standard, the number of bit planes of wavelet coefficients used in encoding depends on the compression rate and is determined in the Tier-2 process of JPEG2000. In the proposed technique, the Tier-1 and Tier-2 processes of JPEG2000 and Motion JPEG2000 are executed twice on the encoder side to collect information about the lowest bit planes of all code blocks of a cover image, which is utilized in embedding and transmitted to the decoder. After embedding the secret data, an Optimal Pixel Adjustment Process (OPAP) is applied to the stego images to enhance their visual quality. Experimental results show that the proposed technique provides a larger embedding capacity and better visual quality of stego images than existing steganography techniques for JPEG2000 compressed images and videos. The extracted secret image is similar to the original secret image.
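
    The core embedding idea, LSB replacement followed by OPAP, can be sketched on a single integer coefficient. This is the generic LSB+OPAP step, not the paper's full JPEG2000 Tier-1/Tier-2 pipeline; k=2 bit planes and the 0..255 range are illustrative assumptions:

```python
def embed_opap(value, secret_bits, k=2, vmax=255):
    """Embed k secret bits into the k LSBs of a coefficient, then apply the
    Optimal Pixel Adjustment Process (OPAP): shift by +/-2^k when that
    reduces the embedding error without touching the embedded LSBs."""
    stego = (value & ~((1 << k) - 1)) | secret_bits
    err = stego - value
    if err > (1 << (k - 1)) and stego - (1 << k) >= 0:
        stego -= 1 << k
    elif err < -(1 << (k - 1)) and stego + (1 << k) <= vmax:
        stego += 1 << k
    return stego

def extract(stego, k=2):
    """Recover the k embedded bits from the stego coefficient."""
    return stego & ((1 << k) - 1)
```

    Because the adjustment is a multiple of 2^k, extraction is unaffected while the distortion per coefficient drops from at most 2^k - 1 to (away from range boundaries) at most 2^(k-1).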

  8. ROAD SIGNS DETECTION AND RECOGNITION UTILIZING IMAGES AND 3D POINT CLOUD ACQUIRED BY MOBILE MAPPING SYSTEM

    OpenAIRE

    Li, Y H; Shinohara, T.; Satoh, T.; Tachibana, K

    2016-01-01

    High-definition and highly accurate road maps are necessary for the realization of automated driving, and road signs are among the most important elements in the road map. Therefore, a technique is necessary which can acquire information about all kinds of road signs automatically and efficiently. Due to the continuous technical advancement of Mobile Mapping Systems (MMS), it has become possible to acquire a large number of images and 3D point clouds efficiently with highly precise position inform...

  9. Gimbal Influence on the Stability of Exterior Orientation Parameters of UAV Acquired Images

    Science.gov (United States)

    Gašparović, Mateo; Jurjević, Luka

    2017-01-01

    In this paper, results from the analysis of the gimbal impact on the determination of the camera exterior orientation parameters of an Unmanned Aerial Vehicle (UAV) are presented and interpreted. Additionally, a new approach and methodology for testing the influence of gimbals on the exterior orientation parameters of UAV acquired images is presented. The main motive of this study is to examine the possibility of obtaining better geometry and favorable spatial bundles of rays of images in UAV photogrammetric surveying. The subject is a 3-axis brushless gimbal based on a controller board (Storm32). Only two gimbal axes are taken into consideration: roll and pitch axes. Testing was done in a flight simulation, and in indoor and outdoor flight mode, to analyze the Inertial Measurement Unit (IMU) and photogrammetric data. Within these tests the change of the exterior orientation parameters without the use of a gimbal is determined, as well as the potential accuracy of the stabilization with the use of a gimbal. The results show that using a gimbal has huge potential. Significantly, smaller discrepancies between data are noticed when a gimbal is used in flight simulation mode, even four times smaller than in other test modes. In this test the potential accuracy of a low budget gimbal for application in real conditions is determined. PMID:28218699

  12. Effective spatial database support for acquiring spatial information from remote sensing images

    Science.gov (United States)

    Jin, Peiquan; Wan, Shouhong; Yue, Lihua

    2009-12-01

    In this paper, a new approach to maintaining spatial information acquired from remote-sensing images is presented, based on an Object-Relational DBMS (ORDBMS). Under this approach, target detection and recognition results are stored in an ORDBMS-based spatial database system where they can be further accessed, and users query the spatial information through the standard SQL interface. This approach differs from the traditional ArcSDE-based method because the spatial information management module is fully integrated into the DBMS and becomes one of its core modules. We focus on three issues: the general framework of the ORDBMS-based spatial database system, the definitions of the add-in spatial data types and operators, and the process of developing a spatial DataBlade on Informix. The results show that ORDBMS-based spatial database support for image-based target detection and recognition is easy and practical to implement.
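
    The idea of storing detection results in a database and querying them through plain SQL can be illustrated with Python's built-in sqlite3. The paper extends a real ORDBMS (an Informix DataBlade with true spatial types); the table and column names below are hypothetical, and plain bounding-box columns stand in for the add-in spatial types:

```python
import sqlite3

# Hypothetical schema: one row per detected target, with its bounding box.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE target (
    id INTEGER PRIMARY KEY, label TEXT,
    xmin REAL, ymin REAL, xmax REAL, ymax REAL)""")
con.executemany(
    "INSERT INTO target VALUES (?, ?, ?, ?, ?, ?)",
    [(1, "airfield", 10, 10, 40, 30),
     (2, "harbor",   55,  5, 80, 25)])

# Window query: all targets whose box intersects x in [0,50], y in [0,50].
rows = con.execute(
    "SELECT label FROM target WHERE xmin <= 50 AND xmax >= 0 "
    "AND ymin <= 50 AND ymax >= 0").fetchall()
```

    In a true ORDBMS the intersection test would be a spatial operator on a geometry column (typically backed by an R-tree index) rather than four coordinate comparisons.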

  13. Ultrafast video imaging of cell division from zebrafish egg using multimodal microscopic system

    Science.gov (United States)

    Lee, Sung-Ho; Jang, Bumjoon; Kim, Dong Hee; Park, Chang Hyun; Bae, Gyuri; Park, Seung Woo; Park, Seung-Han

    2017-07-01

    Unlike ordinary laser scanning microscopies, nonlinear optical laser scanning microscopy (SHG and THG microscopy) applies ultrafast laser technology, which delivers high peak powers at relatively inexpensive, low average power. Its short-pulse nature reduces ionization damage in organic molecules and enables bright label-free imaging. In this study, we measured cell division of a zebrafish egg with ultrafast video imaging using a multimodal nonlinear optical microscope. The results show in-vivo label-free imaging of cell division with sub-cellular resolution.

  14. Modeling the Color Image and Video Quality on Liquid Crystal Displays with Backlight Dimming

    DEFF Research Database (Denmark)

    Korhonen, Jari; Mantel, Claire; Burini, Nino

    2013-01-01

    Objective image and video quality metrics focus mostly on the digital representation of the signal. However, the display characteristics are also essential for the overall Quality of Experience (QoE). In this paper, we use a model of a backlight dimming system for Liquid Crystal Displays (LCD) and show how the modeled image can be used as an input to quality assessment algorithms. For quality assessment, we propose an image quality metric based on Peak Signal-to-Noise Ratio (PSNR) computation in the CIE L*a*b* color space. The metric takes luminance reduction, color distortion and loss of uniformity in the resulting image into consideration. Subjective evaluations of images generated using different backlight dimming algorithms and clipping strategies show that the proposed metric estimates the perceived image quality more accurately than conventional PSNR.
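
    PSNR computed over CIE L*a*b* values rather than raw RGB can be sketched as follows. The sRGB-to-Lab conversion uses the standard D65 constants; taking peak = 100 (the L* range) is an assumption, since the abstract does not state the paper's peak convention:

```python
import math

def srgb_to_lab(r, g, b):
    """Convert one 8-bit sRGB pixel to CIE L*a*b* (D65 white point)."""
    def lin(c):  # inverse sRGB gamma
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # sRGB -> XYZ, normalized by the D65 white point
    x = (0.4124 * rl + 0.3576 * gl + 0.1805 * bl) * 100 / 95.047
    y = (0.2126 * rl + 0.7152 * gl + 0.0722 * bl) * 100 / 100.0
    z = (0.0193 * rl + 0.1192 * gl + 0.9505 * bl) * 100 / 108.883
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x), f(y), f(z)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def psnr_lab(img_a, img_b, peak=100.0):
    """PSNR over the L*a*b* values of two same-sized RGB images
    (flat lists of (r, g, b) tuples)."""
    se, n = 0.0, 0
    for pa, pb in zip(img_a, img_b):
        for ca, cb in zip(srgb_to_lab(*pa), srgb_to_lab(*pb)):
            se += (ca - cb) ** 2
            n += 1
    mse = se / n
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)
```

    Working in L*a*b* makes the squared error roughly perceptually uniform, which is why distortions such as backlight-induced luminance reduction are weighted more sensibly than in RGB PSNR.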

  15. The effects of video compression on acceptability of images for monitoring life sciences experiments

    Science.gov (United States)

    Haines, Richard F.; Chuang, Sherry L.

    1992-01-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Expert Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters

  16. Viral video: Live imaging of virus-host encounters

    Science.gov (United States)

    Son, Kwangmin; Guasto, Jeffrey S.; Cubillos-Ruiz, Andres; Chisholm, Sallie W.; Sullivan, Matthew B.; Stocker, Roman

    2014-11-01

    Viruses are non-motile infectious agents that rely on Brownian motion to encounter and subsequently adsorb to their hosts. Paradoxically, the viral adsorption rate is often reported to be larger than the theoretical limit imposed by the virus-host encounter rate, highlighting a major gap in the experimental quantification of virus-host interactions. Here we present the first direct quantification of the viral adsorption rate, obtained using live imaging of individual host cells and viruses for thousands of encounter events. The host-virus pair consisted of Prochlorococcus MED4, an 800 nm non-motile bacterium that dominates photosynthesis in the oceans, and its virus PHM-2, a myovirus that has an 80 nm icosahedral capsid and a 200 nm long rigid tail. We simultaneously imaged hosts and viruses moving by Brownian motion using two-channel epifluorescence microscopy in a microfluidic device. This detailed quantification of viral transport yielded a 20-fold smaller adsorption efficiency than previously reported, indicating the need for a major revision in infection models for marine and likely other ecosystems.
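
    The "theoretical limit imposed by the virus-host encounter rate" is the Smoluchowski diffusion-limited rate, k = 4 pi (D_host + D_virus)(r_host + r_virus), with diffusion coefficients from the Stokes-Einstein relation. A sketch using the sizes quoted above; the temperature, water viscosity, and the simplification of treating the tailed phage as a 40 nm sphere are all assumptions:

```python
import math

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 293.0           # assumed temperature, K
eta = 1.0e-3        # assumed viscosity of seawater ~ water, Pa*s

def stokes_einstein(radius_m):
    """Diffusion coefficient of a sphere (Stokes-Einstein relation)."""
    return kB * T / (6 * math.pi * eta * radius_m)

# Radii are the abstract's diameters halved; ignoring the 200 nm tail
# and treating the capsid as a 40 nm sphere is a simplifying assumption.
r_host, r_virus = 400e-9, 40e-9
D = stokes_einstein(r_host) + stokes_einstein(r_virus)
k_enc = 4 * math.pi * D * (r_host + r_virus)  # m^3/s per virus-host pair
```

    Multiplying k_enc by the host concentration gives the maximum possible adsorption rate per virus; any measured rate above this signals an inconsistency of the kind the abstract describes.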

  17. Automatic Polyp Detection in Pillcam Colon 2 Capsule Images and Videos: Preliminary Feasibility Report

    Directory of Open Access Journals (Sweden)

    Pedro N. Figueiredo

    2011-01-01

    Full Text Available Background. The aim of this work is to present an automatic colorectal polyp detection scheme for capsule endoscopy. Methods. PillCam COLON2 capsule-based images and videos were used in our study. The database consists of full exam videos from five patients. The algorithm is based on the assumption that the polyps show up as a protrusion in the captured images and is expressed by means of a P-value, defined by geometrical features. Results. Seventeen PillCam COLON2 capsule videos are included, containing frames with polyps, flat lesions, diverticula, bubbles, and trash liquids. Polyps larger than 1 cm express a P-value higher than 2000, and 80% of the polyps show a P-value higher than 500. Diverticula, bubbles, trash liquids, and flat lesions were correctly interpreted by the algorithm as nonprotruding images. Conclusions. These preliminary results suggest that the proposed geometry-based polyp detection scheme works well, not only by allowing the detection of polyps but also by differentiating them from nonprotruding images found in the films.

  18. Mission planning optimization of video satellite for ground multi-object staring imaging

    Science.gov (United States)

    Cui, Kaikai; Xiang, Junhua; Zhang, Yulin

    2018-03-01

    This study investigates the emergency scheduling problem of ground multi-object staring imaging for a single video satellite. In the proposed mission scenario, the ground objects require a specified duration of staring imaging by the video satellite. The planning horizon is not long, i.e., it is usually shorter than one orbit period. A binary decision variable and the imaging order are used as the design variables, and the total observation revenue combined with the influence of the total attitude maneuvering time is regarded as the optimization objective. Based on the constraints of the observation time windows, satellite attitude adjustment time, and satellite maneuverability, a constraint satisfaction mission planning model is established for ground object staring imaging by a single video satellite. Further, a modified ant colony optimization algorithm with tabu lists (Tabu-ACO) is designed to solve this problem. The proposed algorithm can fully exploit the intelligence and local search ability of ACO. Based on full consideration of the mission characteristics, the design of the tabu lists can reduce the search range of ACO and improve the algorithm efficiency significantly. The simulation results show that the proposed algorithm outperforms the conventional algorithm in terms of optimization performance, and it can obtain satisfactory scheduling results for the mission planning problem.
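
    The Tabu-ACO idea, ants building an imaging order probabilistically from pheromone while a tabu list excludes already-scheduled targets, can be sketched for a toy version of the problem. This is a generic sketch: the pairwise "distance" matrix stands in for attitude maneuvering time, and the paper's time windows and maneuverability constraints are omitted:

```python
import random

def aco_order(dist, n_ants=8, n_iter=30, alpha=1.0, beta=2.0, rho=0.5, seed=1):
    """Tiny ant colony optimization for an imaging-order problem: find a
    visiting order of targets minimizing total maneuver 'distance'.
    Each ant keeps a tabu list of already-scheduled targets."""
    n = len(dist)
    rng = random.Random(seed)
    tau = [[1.0] * n for _ in range(n)]          # pheromone trails
    best, best_cost = None, float("inf")
    for _ in range(n_iter):
        for _ant in range(n_ants):
            start = rng.randrange(n)
            tour, tabu = [start], {start}
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tabu]
                # pheromone^alpha * (1/distance)^beta roulette selection
                w = [tau[i][j] ** alpha * (1.0 / (dist[i][j] + 1e-9)) ** beta
                     for j in cand]
                r, acc, pick = rng.random() * sum(w), 0.0, cand[-1]
                for j, wj in zip(cand, w):
                    acc += wj
                    if acc >= r:
                        pick = j
                        break
                tour.append(pick)
                tabu.add(pick)
            cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
            if cost < best_cost:
                best, best_cost = tour, cost
            for a, b in zip(tour, tour[1:]):     # pheromone deposit
                tau[a][b] += 1.0 / (cost + 1e-9)
        tau = [[(1 - rho) * t for t in row] for row in tau]  # evaporation
    return best, best_cost

order, cost = aco_order([[0, 1, 2, 3],
                         [1, 0, 1, 2],
                         [2, 1, 0, 1],
                         [3, 2, 1, 0]])
```

    In the paper's setting, the tabu lists additionally encode mission-specific exclusions (e.g. targets whose observation windows have passed), which is what shrinks the search space relative to plain ACO.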

  19. Video Object Tracking in Neural Axons with Fluorescence Microscopy Images

    Directory of Open Access Journals (Sweden)

    Liang Yuan

    2014-01-01

    In this paper, we describe two automated tracking methods for analyzing neurofilament movement, based on two different techniques: constrained particle filtering and tracking-by-detection. First, we introduce the constrained particle filtering approach. In this approach, the orientation and position of a particle are constrained by the axon's shape, so that fewer particles are necessary for tracking neurofilament movement than in object tracking based on generic particle filtering. Secondly, a tracking-by-detection approach to neurofilament tracking is presented. For this approach, the axon is decomposed into blocks, and the blocks encompassing the moving neurofilaments are detected by graph labeling using a Markov random field. Finally, we compare the two tracking methods by performing tracking experiments on real time-lapse image sequences of neurofilament movement. The experimental results show that both methods perform well in comparison with existing approaches, with the tracking-by-detection approach being slightly more accurate of the two.

  20. Body movement analysis during sleep for children with ADHD using video image processing.

    Science.gov (United States)

    Nakatani, Masahiro; Okada, Shima; Shimizu, Sachiko; Mohri, Ikuko; Ohno, Yuko; Taniike, Masako; Makikawa, Masaaki

    2013-01-01

    In recent years, the number of children with sleep disorders that cause arousal during sleep or light sleep has been increasing. Attention-deficit hyperactivity disorder (ADHD) is one cause of such sleep disorders; children with ADHD show frequent body movement during sleep. Therefore, we investigated the body movement during sleep of children with and without ADHD using video imaging. We analysed large gross body movements (GM) and obtained the GM rate and the rest duration. There were differences between the body movements of children with ADHD and normally developed children. The children with ADHD moved frequently, so their rest duration was shorter than that of the normally developed children. Additionally, the rate of gross body movement showed a significant difference in REM sleep, as detected by video image processing.
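
    Detecting gross body movement from video is commonly done by frame differencing: a transition counts as movement when enough pixels change between consecutive frames. A minimal sketch; both thresholds are illustrative assumptions, not the study's calibrated values:

```python
def movement_rate(frames, pix_thresh=10, frac_thresh=0.05):
    """Label each frame transition as movement when the fraction of pixels
    changing by more than pix_thresh exceeds frac_thresh, then return the
    fraction of transitions that contain movement."""
    moving = 0
    for prev, cur in zip(frames, frames[1:]):
        changed = sum(1
                      for rp, rc in zip(prev, cur)
                      for p, c in zip(rp, rc)
                      if abs(p - c) > pix_thresh)
        total = len(prev) * len(prev[0])
        if changed / total > frac_thresh:
            moving += 1
    return moving / (len(frames) - 1)

still = [[0, 0], [0, 0]]
moved = [[100, 100], [0, 0]]
rate = movement_rate([still, still, moved, moved])  # 1 moving transition of 3
```

    Runs of transitions without movement correspond to the "rest duration" analysed in the study.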

  1. The importance of video editing in automated image analysis in studies of the cerebral cortex.

    Science.gov (United States)

    Terry, R D; Deteresa, R

    1982-03-01

    Editing of the video image in computerized image analysis is readily accomplished with the appropriate apparatus, but slows the assay very significantly. In dealing with the cerebral cortex, however, video editing is of considerable importance, in that cells are very often contiguous to one another or partially superimposed, which gives erroneous measurements unless the cells are artificially separated. Also important is the elimination of vascular cells from consideration by the automated counting apparatus. A third available mode of editing allows filling in the cytoplasm of cell bodies that are not stained with sufficient intensity to be wholly detected. This study, which utilizes 23 samples, demonstrates that, in a given area of a histologic section of cerebral cortex, the number of small cells counted is greater and the number of large neurons smaller with editing than without. Because not all cases follow this general pattern, inadequate editing may introduce significant errors in individual specimens as well as in the calculated mean. Video editing is therefore an essential part of the morphometric study of cerebral cortex by means of automated image analysis.

  2. Measurement of thigmomorphogenesis and gravitropism by non-intrusive computerized video image processing

    Science.gov (United States)

    Jaffe, M. J.

    1984-01-01

    A video image processing instrument, DARWIN (Digital Analyser of Resolvable Whole-pictures by Image Numeration), was developed. It was programmed to measure stem or root growth and bending, and coupled to a specially mounted video camera so as to automatically generate growth and bending curves during gravitropism. The growth of the plant is recorded on a video cassette recorder with a specially modified time-lapse function. At the end of the experiment, DARWIN analyses the growth or movement and prints out bending and growth curves. This system was used to measure thigmomorphogenesis in light-grown corn plants. If the plant is rubbed with an applied force load of 0.38 N, it grows faster than the unrubbed control, whereas 1.14 N retards its growth. Image analysis shows that most of the change in the rate of growth occurs in the first hour after rubbing. When DARWIN was used to measure gravitropism in dark-grown oat seedlings, it was found that the top side of the shoot contracts during the first hour of gravitational stimulus, whereas the bottom side begins to elongate after 10 to 15 minutes.

  3. Visual Recognition in RGB Images and Videos by Learning from RGB-D Data.

    Science.gov (United States)

    Li, Wen; Chen, Lin; Xu, Dong; Van Gool, Luc

    2017-08-02

    In this work, we propose a new framework for recognizing RGB images or videos by leveraging a set of labeled RGB-D data, in which the depth features can be additionally extracted from the depth images or videos. We formulate this task as a new unsupervised domain adaptation (UDA) problem, in which we aim to take advantage of the additional depth features in the source domain and also cope with the data distribution mismatch between the source and target domains. To handle the domain distribution mismatch, we propose to learn an optimal projection matrix to map the samples from both domains into a common subspace such that the domain distribution mismatch can be reduced. Moreover, we also propose different strategies to effectively utilize the additional depth features. To simultaneously cope with the above two issues, we formulate a unified learning framework called domain adaptation from multi-view to single-view (DAM2S). By defining various forms of regularizers in our DAM2S framework, different strategies can be readily incorporated to learn robust SVM classifiers for classifying the target samples. We conduct comprehensive experiments, which demonstrate the effectiveness of our proposed methods for recognizing RGB images and videos by learning from RGB-D data.

  4. The effect of music video clips on adolescent boys' body image, mood, and schema activation.

    Science.gov (United States)

    Mulgrew, Kate E; Volcevski-Kostas, Diana; Rendell, Peter G

    2014-01-01

    There is limited research that has examined experimentally the effects of muscular images on adolescent boys' body image, with no research specifically examining the effects of music television. The aim of the current study was to examine the effects of viewing muscular and attractive singers in music video clips on early, mid, and late adolescent boys' body image, mood, and schema activation. Participants were 180 boys in grade 7 (mean age = 12.73 years), grade 9 (mean age = 14.40 years) or grade 11 (mean age = 16.15 years) who completed pre- and post-test measures of mood and body satisfaction after viewing music videos containing male singers of muscular or average appearance. They also completed measures of schema activation and social comparison after viewing the clips. The results showed that the boys who viewed the muscular clips reported poorer upper body satisfaction, lower appearance satisfaction, lower happiness, and more depressive feelings compared to boys who viewed the clips depicting singers of average appearance. There was no evidence of increased appearance schema activation but the boys who viewed the muscular clips did report higher levels of social comparison to the singers. The results suggest that music video clips are a powerful form of media in conveying information about the male ideal body shape and that negative effects are found in boys as young as 12 years.

  5. Estimating Plant Traits of Grasslands from UAV-Acquired Hyperspectral Images: A Comparison of Statistical Approaches

    Directory of Open Access Journals (Sweden)

    Alessandra Capolupo

    2015-12-01

    Full Text Available Grassland ecosystems cover around 40% of the entire Earth’s surface. Therefore, it is necessary to guarantee good grassland management at field scale in order to improve its conservation and to achieve optimal growth. This study identified the most appropriate statistical strategy, between partial least squares regression (PLSR) and narrow vegetation indices, for estimating the structural and biochemical grassland traits from UAV-acquired hyperspectral images. Moreover, the influence of fertilizers on plant traits for grasslands was analyzed. Hyperspectral data were collected from an experimental field at the farm Haus Riswick, near Kleve in Germany, for two different flight campaigns in May and October. The collected image blocks were geometrically and radiometrically corrected for surface reflectance. Spectral signatures extracted for the plots were adopted to derive grassland traits by computing PLSR and the following narrow vegetation indices: the MERIS Terrestrial Chlorophyll Index (MTCI), the ratio of the Modified Chlorophyll Absorption in Reflectance and Optimized Soil-Adjusted Vegetation Index (MCARI/OSAVI) modified by Wu, the Red-edge Chlorophyll Index (CIred-edge), and the Normalized Difference Red Edge (NDRE). PLSR showed promising results for estimating grassland structural traits and gave less satisfying outcomes for the selected chemical traits (crude ash, crude fiber, crude protein, Na, K, metabolic energy). Established relations are not influenced by the type and the amount of fertilization, while they are affected by the grassland health status. PLSR is found to be the best strategy, among the approaches analyzed in this paper, for exploring structural and biochemical features of grasslands. Using UAV-based hyperspectral sensing allows for the highly detailed assessment of grassland experimental plots.
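
    Three of the narrow vegetation indices named above have simple closed forms over band reflectances. The formulas below are the standard definitions; which sensor bands feed them (e.g. MERIS bands 754/709/681 nm for MTCI) depends on the instrument and is not fixed here:

```python
def ndre(nir, red_edge):
    """Normalized Difference Red Edge index."""
    return (nir - red_edge) / (nir + red_edge)

def ci_red_edge(nir, red_edge):
    """Red-edge Chlorophyll Index."""
    return nir / red_edge - 1.0

def mtci(r_nir, r_red_edge, r_red):
    """MERIS Terrestrial Chlorophyll Index, classically computed from
    reflectances near 754, 709 and 681 nm."""
    return (r_nir - r_red_edge) / (r_red_edge - r_red)
```

    In the study's comparison, such per-plot index values are the single-feature baseline that PLSR, which regresses on the full spectral signature, is measured against.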

  6. Using image processing technology combined with decision tree algorithm in laryngeal video stroboscope automatic identification of common vocal fold diseases.

    Science.gov (United States)

    Jeffrey Kuo, Chung-Feng; Wang, Po-Chun; Chu, Yueng-Hsiang; Wang, Hsing-Won; Lai, Chun-Yu

    2013-10-01

    This study used actual laryngeal video stroboscope recordings taken by physicians in clinical practice as the samples for experimental analysis. The samples were dynamic vocal fold videos. Image processing technology was used to automatically capture the image with the largest glottal area from each video to obtain the physiological data of the vocal folds. In this study, an automatic vocal fold disease identification system was designed, which can obtain the physiological parameters for normal vocal folds, vocal paralysis and vocal nodules from image processing according to the pathological features. A decision tree algorithm was used as the classifier of the vocal fold diseases. The identification rate was 92.6%, and with an image recognition improvement procedure applied after classification it can be raised to 98.7%. Hence, the proposed system has value in clinical practice. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
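
    The first step, picking the frame with the largest glottal area out of a video, can be sketched by counting dark pixels per frame (the open glottis images as a dark region). The intensity threshold is an illustrative assumption, not the paper's segmentation method:

```python
def glottal_area(frame, thresh=50):
    """Approximate glottal area as the count of dark pixels; the fixed
    intensity threshold is an illustrative assumption."""
    return sum(1 for row in frame for p in row if p < thresh)

def largest_glottal_frame(frames, thresh=50):
    """Index of the frame with the largest glottal area, mirroring the
    step of capturing the maximal-glottal-area image from the video."""
    return max(range(len(frames)),
               key=lambda i: glottal_area(frames[i], thresh))

frames = [[[200, 200], [200, 200]],   # glottis closed
          [[10, 200], [200, 200]],    # partly open
          [[10, 10], [200, 200]]]     # widest opening
peak = largest_glottal_frame(frames)
```

    Geometric features measured on that peak frame would then feed the decision tree classifier.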

  7. Video transect images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP): data from 2000 (NODC Accession 0000728)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (TIF files) from CRAMP surveys taken in 2000 at 23 sites, some of which had multiple depths. Estimates of substrate...

  8. Video transect images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP): data from 2002 (NODC Accession 0000961)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (TIF files) from CRAMP surveys taken in 2002 at 23 sites, some of which had multiple depths. Estimates of substrate...

  9. Video transect images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP): data from year 1999 (NODC Accession 0000671)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (JPEG files) from CRAMP surveys taken in 1999 at 26 sites, some of which had multiple depths. Estimates of substrate...

  11. Video Transect Images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP): Data from 2003 (NODC Accession 0001732)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (TIF files) from CRAMP surveys taken in 2003 at 15 sites, some of which had multiple depths. Estimates of substrate...

  15. Artifact reduction of compressed images and video combining adaptive fuzzy filtering and directional anisotropic diffusion

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Forchhammer, Søren; Korhonen, Jari

    2011-01-01

    Fuzzy filtering is one of the recently developed methods for reducing distortion in compressed images and video. In this paper, we combine the powerful anisotropic diffusion equations with fuzzy filtering in order to reduce the impact of artifacts. Based on the directional nature of the blocking and ringing artifacts, we have applied directional anisotropic diffusion. Besides that, the selection of the adaptive threshold parameter for the diffusion coefficient has also improved the performance of the algorithm. Experimental results are reported on JPEG compressed images as well as MJPEG and H.264 compressed video.
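
    The diffusion part of the method builds on the classic Perona-Malik scheme: smooth within regions while the conductance g = exp(-(|grad|/kappa)^2) blocks diffusion across strong edges. A generic sketch of that step in plain Python; the paper's directional variant, adaptive threshold selection and fuzzy-filter combination are not reproduced here:

```python
import math

def anisotropic_diffusion(img, n_iter=5, kappa=30.0, lam=0.2):
    """Classic Perona-Malik anisotropic diffusion on a 2D image
    (list of rows): each pixel moves toward its 4-neighbours, weighted
    by an edge-stopping conductance g = exp(-(grad/kappa)^2)."""
    h, w = len(img), len(img[0])
    u = [row[:] for row in img]
    for _ in range(n_iter):
        nxt = [row[:] for row in u]
        for y in range(h):
            for x in range(w):
                total = 0.0
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx_ = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx_ < w:
                        grad = u[ny][nx_] - u[y][x]
                        total += math.exp(-(grad / kappa) ** 2) * grad
                nxt[y][x] = u[y][x] + lam * total
        u = nxt
    return u
```

    With a small kappa, blocking edges (large gradients) are preserved while low-amplitude ringing is smoothed, which is exactly the behaviour the adaptive threshold parameter tunes.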

  16. Comparison of ultrasound imaging and video otoscopy with cross-sectional imaging for the diagnosis of canine otitis media.

    Science.gov (United States)

    Classen, J; Bruehschwein, A; Meyer-Lindenberg, A; Mueller, R S

    2016-11-01

    Ultrasound imaging (US) of the tympanic bulla (TB) for diagnosis of canine otitis media (OM) is less expensive and less invasive than cross-sectional imaging techniques such as computed tomography (CT) and magnetic resonance imaging (MRI). Video otoscopy (VO) is used to clean inflamed ears. The objective of this study was to investigate the diagnostic value of US and VO in OM using cross-sectional imaging as the reference standard. Client-owned dogs with clinical signs of otitis externa (OE) and/or OM were recruited for the study. Physical, neurological, otoscopic and otic cytological examinations were performed on each dog, and both TB were evaluated using US with an 8 MHz micro-convex probe, cross-sectional imaging (CT or MRI) and VO. Of 32 dogs enrolled, 24 had chronic OE (five also had clinical signs of OM), four had acute OE without clinical signs of OM, and four had OM without OE. Ultrasound imaging was positive in three of the 14 ears in which OM was identified on cross-sectional imaging. One US result was a false positive. Sensitivity, specificity, positive and negative predictive values and accuracy of US were 21%, 98%, 75%, 81% and 81%, respectively. The corresponding values for VO were 91%, 98%, 91%, 98% and 97%, respectively. Video otoscopy could not identify OM in one case, while in another case the tympanum was ruptured although the CT was negative. Ultrasound imaging should not replace cross-sectional imaging for the diagnosis of canine OM, but it can be helpful, and VO was much more reliable than US. Copyright © 2016 Elsevier Ltd. All rights reserved.
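The reported US figures can be cross-checked from the underlying 2×2 confusion table. A minimal sketch: TP = 3, FP = 1 and TP + FN = 14 come from the abstract, while TN = 49 is an inferred assumption (32 dogs × 2 tympanic bullae = 64 ears total; this value is not stated directly).

```python
# Hypothetical recomputation of the reported US diagnostic metrics.
# TN = 49 is an assumption inferred from 64 total ears, not a stated value.

def diagnostic_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV, NPV and accuracy as fractions."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, ppv, npv, accuracy

sens, spec, ppv, npv, acc = diagnostic_metrics(tp=3, fp=1, fn=11, tn=49)
print(f"Sens {sens:.0%}  Spec {spec:.0%}  PPV {ppv:.0%}  NPV {npv:.0%}  Acc {acc:.0%}")
```

Rounded to whole percentages these reproduce the reported 21%, 98%, 75% and 81% figures (NPV computes to 49/60 ≈ 81.7% under the assumed TN).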

  17. High-Performance Motion Estimation for Image Sensors with Video Compression

    Directory of Open Access Journals (Sweden)

    Weizhi Xu

    2015-08-01

    It is important to reduce the time cost of video compression for image sensors in a video sensor network. Motion estimation (ME) is the most time-consuming part of video compression. Previous work on ME exploited intra-frame data reuse in a reference frame to improve the time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed frames are kept on-chip until they are used by the next current frame to avoid off-chip memory access. On-chip buffers with smart schedules of data access are designed to perform the new data reuse scheme. Three levels of the proposed inter-frame data reuse scheme are presented and analyzed. They give different choices with tradeoff between off-chip bandwidth requirement and on-chip memory size. All three levels have better data reuse efficiency than their intra-frame counterparts, so off-chip memory traffic is reduced effectively. Comparing the new inter-frame data reuse scheme with the traditional intra-frame data reuse scheme, the memory traffic can be reduced by 50% for VC-ME.

  18. High-Performance Motion Estimation for Image Sensors with Video Compression.

    Science.gov (United States)

    Xu, Weizhi; Yin, Shouyi; Liu, Leibo; Liu, Zhiyong; Wei, Shaojun

    2015-08-21

    It is important to reduce the time cost of video compression for image sensors in a video sensor network. Motion estimation (ME) is the most time-consuming part of video compression. Previous work on ME exploited intra-frame data reuse in a reference frame to improve the time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed frames are kept on-chip until they are used by the next current frame to avoid off-chip memory access. On-chip buffers with smart schedules of data access are designed to perform the new data reuse scheme. Three levels of the proposed inter-frame data reuse scheme are presented and analyzed. They give different choices with tradeoff between off-chip bandwidth requirement and on-chip memory size. All three levels have better data reuse efficiency than their intra-frame counterparts, so off-chip memory traffic is reduced effectively. Comparing the new inter-frame data reuse scheme with the traditional intra-frame data reuse scheme, the memory traffic can be reduced by 50% for VC-ME.
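As an illustration of why ME dominates the cost of video compression, here is a minimal full-search block-matching sketch with a sum-of-absolute-differences (SAD) criterion. The frame contents, block size and search radius are arbitrary illustrative values, not the paper's configuration.

```python
import numpy as np

def full_search_me(cur, ref, bx, by, block=8, radius=4):
    """Return the motion vector (dy, dx) minimising SAD for one block."""
    target = cur[by:by+block, bx:bx+block].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            sad = np.abs(target - ref[y:y+block, x:x+block].astype(np.int32)).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
cur = np.roll(ref, shift=(1, 2), axis=(0, 1))      # frame shifted by (1, 2)
print(full_search_me(cur, ref, bx=8, by=8))        # → (-1, -2)
```

Every candidate offset touches an overlapping window of the reference frame, which is exactly the heavy, repeated memory traffic that the paper's intra- and inter-frame data-reuse buffers are designed to avoid.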

  19. A real-time remote video streaming platform for ultrasound imaging.

    Science.gov (United States)

    Ahmadi, Mehdi; Gross, Warren J; Kadoury, Samuel

    2016-08-01

    Ultrasound is a viable imaging technology in remote and resource-limited areas. Ultrasonography is a user-dependent skill that requires a high degree of training and hands-on experience, yet only a limited number of skillful sonographers are located in remote areas. In this work, we aim to develop a real-time video streaming platform which allows specialist physicians to remotely monitor ultrasound exams. To this end, an ultrasound stream is captured and transmitted through a wireless network to remote computers, smartphones and tablets. In addition, the system is equipped with a camera to track the position of the ultrasound probe. The main advantage of our work is the use of an open-source platform for video streaming, which gives us more control over streaming parameters than the available commercial products. The transmission delays of the system are evaluated for several ultrasound video resolutions, and the results show that ultrasound videos close to high-definition (HD) resolution can be received and displayed on an Android tablet with a delay of 0.5 seconds, which is acceptable for accurate real-time diagnosis.

  20. [Superimpose of images by appending two simple video amplifier circuits to color television (author's transl)].

    Science.gov (United States)

    Kojima, K; Hiraki, T; Koshida, K; Maekawa, R; Hisada, K

    1979-09-15

    Images are very useful for obtaining diagnostic information in medical fields. By superimposing two or three images obtained from the same patient, information such as the degree of overlap and anatomical landmarks, which cannot be found in a single image, can often be revealed. In this paper, the characteristics of our trial color television system for superimposing X-ray images and/or radionuclide images are described. This color television system, which superimposes two images in different colors, consists of two monochromatic vidicon cameras and a 20-inch conventional color television to which only two simple video amplifier circuits have been added. Signals from the vidicon cameras are amplified by about 40 dB and applied directly to the cathode terminals of the color CRT in the television. The system is a very simple and economical color display, and it enhances the visibility of overlap and displacement between images. As a typical clinical application, pancreas images were superimposed in color by this method; as a result, the size and position of the pancreas were enhanced. X-ray and radionuclide images were also superimposed to locate the exact position of tumors. Furthermore, this system was very useful for the color display of multinuclide scintigraphy.

  1. Road Signs Detection and Recognition Utilizing Images and 3d Point Cloud Acquired by Mobile Mapping System

    Science.gov (United States)

    Li, Y. H.; Shinohara, T.; Satoh, T.; Tachibana, K.

    2016-06-01

    High-definition and highly accurate road maps are necessary for the realization of automated driving, and road signs are among the most important elements in the road map. Therefore, a technique is necessary which can acquire information about all kinds of road signs automatically and efficiently. Due to the continuous technical advancement of the Mobile Mapping System (MMS), it has become possible to acquire a large number of images and 3D point clouds efficiently, with highly precise position information. In this paper, we present an automatic road sign detection and recognition approach utilizing both images and 3D point clouds acquired by MMS. The proposed approach consists of three stages: 1) detection of road signs from images based on their color and shape features using an object-based image analysis method, 2) filtering out of over-detected candidates utilizing size and position information estimated from the 3D point cloud, candidate regions and camera information, and 3) road sign recognition using a template matching method after shape normalization. The effectiveness of the proposed approach was evaluated on a test dataset acquired from more than 180 km of different types of roads in Japan. The results show a very high success rate in the detection and recognition of road signs, even under challenging conditions such as discoloration, deformation and partial occlusion.

  2. ROAD SIGNS DETECTION AND RECOGNITION UTILIZING IMAGES AND 3D POINT CLOUD ACQUIRED BY MOBILE MAPPING SYSTEM

    Directory of Open Access Journals (Sweden)

    Y. H. Li

    2016-06-01

    High-definition and highly accurate road maps are necessary for the realization of automated driving, and road signs are among the most important elements in the road map. Therefore, a technique is necessary which can acquire information about all kinds of road signs automatically and efficiently. Due to the continuous technical advancement of the Mobile Mapping System (MMS), it has become possible to acquire a large number of images and 3D point clouds efficiently, with highly precise position information. In this paper, we present an automatic road sign detection and recognition approach utilizing both images and 3D point clouds acquired by MMS. The proposed approach consists of three stages: 1) detection of road signs from images based on their color and shape features using an object-based image analysis method, 2) filtering out of over-detected candidates utilizing size and position information estimated from the 3D point cloud, candidate regions and camera information, and 3) road sign recognition using a template matching method after shape normalization. The effectiveness of the proposed approach was evaluated on a test dataset acquired from more than 180 km of different types of roads in Japan. The results show a very high success rate in the detection and recognition of road signs, even under challenging conditions such as discoloration, deformation and partial occlusion.
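Stage 3 of the pipeline (template matching after shape normalization) can be sketched with normalised cross-correlation (NCC) as the similarity score. NCC is an assumed choice here (the abstract does not name the score), and the 3×3 "templates" below are toy arrays, not actual road-sign data.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two equal-size image patches."""
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recognise(candidate, templates):
    """Return the template label with the highest NCC score."""
    return max(templates, key=lambda name: ncc(candidate, templates[name]))

stop = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
yield_ = np.array([[1, 1, 1], [0, 1, 0], [0, 1, 0]])
templates = {"stop": stop, "yield": yield_}
noisy = stop + 0.1  # brightness-shifted copy; NCC is invariant to such offsets
print(recognise(noisy, templates))  # → stop
```

Mean subtraction and normalisation make the score robust to the brightness and contrast changes (e.g. discoloration) mentioned in the abstract, which is the usual motivation for NCC over raw correlation.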

  3. Integration of Video Images and CAD Wireframes for 3d Object Localization

    Science.gov (United States)

    Persad, R. A.; Armenakis, C.; Sohn, G.

    2012-07-01

    The tracking of moving objects from single images has received widespread attention in photogrammetric computer vision and is considered to be at a state of maturity. This paper presents a model-driven solution for localizing moving objects detected from monocular, rotating and zooming video images in a 3D reference frame. To realize such a system, the recovery of 2D to 3D projection parameters is essential. Automatic estimation of these parameters is critical, particularly for pan-tilt-zoom (PTZ) surveillance cameras where parameters change spontaneously upon camera motion. In this work, an algorithm for automated parameter retrieval is proposed. This is achieved by matching linear features between incoming images from video sequences and simple geometric 3D CAD wireframe models of man-made structures. The feature matching schema uses a hypothesis-verify optimization framework referred to as LR-RANSAC. This novel method improves the computational efficiency of the matching process in comparison to the standard RANSAC robust estimator. To demonstrate the applicability and performance of the method, experiments have been performed on indoor and outdoor image sequences under varying conditions with lighting changes and occlusions. Reliability of the matching algorithm has been analyzed by comparing the automatically determined camera parameters with ground truth (GT). Dependability of the retrieved parameters for 3D localization has also been assessed by comparing the difference between 3D positions of moving image objects estimated using the LR-RANSAC-derived parameters and those computed using GT parameters.

  4. INTEGRATION OF VIDEO IMAGES AND CAD WIREFRAMES FOR 3D OBJECT LOCALIZATION

    Directory of Open Access Journals (Sweden)

    R. A. Persad

    2012-07-01

    The tracking of moving objects from single images has received widespread attention in photogrammetric computer vision and is considered to be at a state of maturity. This paper presents a model-driven solution for localizing moving objects detected from monocular, rotating and zooming video images in a 3D reference frame. To realize such a system, the recovery of 2D to 3D projection parameters is essential. Automatic estimation of these parameters is critical, particularly for pan-tilt-zoom (PTZ) surveillance cameras where parameters change spontaneously upon camera motion. In this work, an algorithm for automated parameter retrieval is proposed. This is achieved by matching linear features between incoming images from video sequences and simple geometric 3D CAD wireframe models of man-made structures. The feature matching schema uses a hypothesis-verify optimization framework referred to as LR-RANSAC. This novel method improves the computational efficiency of the matching process in comparison to the standard RANSAC robust estimator. To demonstrate the applicability and performance of the method, experiments have been performed on indoor and outdoor image sequences under varying conditions with lighting changes and occlusions. Reliability of the matching algorithm has been analyzed by comparing the automatically determined camera parameters with ground truth (GT). Dependability of the retrieved parameters for 3D localization has also been assessed by comparing the difference between 3D positions of moving image objects estimated using the LR-RANSAC-derived parameters and those computed using GT parameters.
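The hypothesis-verify framework underlying LR-RANSAC can be illustrated with plain RANSAC on a toy problem. This is generic robust line fitting on synthetic 2D points, not the paper's line-feature-to-wireframe matching; all data and thresholds are invented.

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Fit y = a*x + b robustly; return the (a, b) with the most inliers."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)   # hypothesis: 2 points
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = sum(abs(y - (a * x + b)) < tol for x, y in points)  # verify
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best

pts = [(x, 2.0 * x + 1.0) for x in range(10)] + [(3, 40.0), (7, -5.0)]
a, b = ransac_line(pts)
print(round(a, 3), round(b, 3))  # recovers a ≈ 2, b ≈ 1 despite the outliers
```

LR-RANSAC's stated contribution is making this hypothesis-verify loop cheaper than standard RANSAC; the loop structure itself is what the sketch shows.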

  5. Availability and performance of image/video-based vital signs monitoring methods: a systematic review protocol.

    Science.gov (United States)

    Harford, Mirae; Catherall, Jacqueline; Gerry, Stephen; Young, Duncan; Watkinson, Peter

    2017-10-25

    For many vital signs, monitoring methods require contact with the patient and/or are invasive in nature. There is increasing interest in developing still and video image-guided monitoring methods that are non-contact and non-invasive. We will undertake a systematic review of still and video image-based monitoring methods. We will perform searches in multiple databases which include MEDLINE, Embase, CINAHL, Cochrane library, IEEE Xplore and ACM Digital Library. We will use OpenGrey and Google searches to access unpublished or commercial data. We will not use language or publication date restrictions. The primary goal is to summarise current image-based vital signs monitoring methods, limited to heart rate, respiratory rate, oxygen saturations and blood pressure. Of particular interest will be the effectiveness of image-based methods compared to reference devices. Other outcomes of interest include the quality of the method comparison studies with respect to published reporting guidelines, any limitations of non-contact non-invasive technology and application in different populations. To the best of our knowledge, this is the first systematic review of image-based non-contact methods of vital signs monitoring. Synthesis of currently available technology will facilitate future research in this highly topical area. PROSPERO CRD42016029167.

  6. Availability and performance of image/video-based vital signs monitoring methods: a systematic review protocol

    Directory of Open Access Journals (Sweden)

    Mirae Harford

    2017-10-01

    Background: For many vital signs, monitoring methods require contact with the patient and/or are invasive in nature. There is increasing interest in developing still and video image-guided monitoring methods that are non-contact and non-invasive. We will undertake a systematic review of still and video image-based monitoring methods. Methods: We will perform searches in multiple databases which include MEDLINE, Embase, CINAHL, Cochrane library, IEEE Xplore and ACM Digital Library. We will use OpenGrey and Google searches to access unpublished or commercial data. We will not use language or publication date restrictions. The primary goal is to summarise current image-based vital signs monitoring methods, limited to heart rate, respiratory rate, oxygen saturations and blood pressure. Of particular interest will be the effectiveness of image-based methods compared to reference devices. Other outcomes of interest include the quality of the method comparison studies with respect to published reporting guidelines, any limitations of non-contact non-invasive technology and application in different populations. Discussion: To the best of our knowledge, this is the first systematic review of image-based non-contact methods of vital signs monitoring. Synthesis of currently available technology will facilitate future research in this highly topical area. Systematic review registration: PROSPERO CRD42016029167.

  7. A unified framework for capturing facial images in video surveillance systems using cooperative camera system

    Science.gov (United States)

    Chan, Fai; Moon, Yiu-Sang; Chen, Jiansheng; Ma, Yiu-Kwan; Tsang, Wai-Hung; Fu, Kah-Kuen

    2008-04-01

    Low-resolution and unsharp facial images are always captured from surveillance videos because of long human-camera distances and human movement. Previous works addressed this problem by using an active camera to capture close-up facial images, without considering human movements and the mechanical delays of the active camera. In this paper, we propose a unified framework to capture facial images in video surveillance systems by using one static and one active camera in a cooperative manner. Human faces are first located by a skin-color-based real-time face detection algorithm. A stereo camera model is employed to approximate the human face location and velocity with respect to the active camera. Given the mechanical delays of the active camera, the position of a target face after a given delay can be estimated using a Human-Camera Synchronization Model. By controlling the active camera with the corresponding amount of pan, tilt and zoom, a clear close-up facial image of a moving human can then be captured. We built the proposed system in an 8.4-meter indoor corridor. Results show that the proposed stereo camera configuration can locate faces with an average error of 3%. In addition, it is capable of clearly capturing facial images of a walking human at the first attempt in 90% of the test cases.
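The core idea behind the Human-Camera Synchronization Model, aiming the active camera at the predicted rather than the current face position, can be sketched as follows. Constant-velocity motion, and all names and numbers, are illustrative assumptions, not the paper's model.

```python
# Aim the active camera where the face will be once the pan/tilt/zoom
# mechanics have settled, not where it is now. Velocity comes from the
# stereo model in the paper; here it is a made-up walking pace.

def predict_position(pos, vel, delay):
    """Predict target position after `delay` seconds, constant velocity."""
    return tuple(p + v * delay for p, v in zip(pos, vel))

face_pos = (4.0, 1.5)       # metres, in the corridor frame (illustrative)
face_vel = (1.0, 0.0)       # walking at 1.0 m/s along the corridor
mech_delay = 0.5            # assumed pan/tilt/zoom settling time in seconds
print(predict_position(face_pos, face_vel, mech_delay))  # → (4.5, 1.5)
```

Without this lead compensation, the close-up would be centred half a metre behind a walking subject by the time the camera finishes moving, which is the failure mode of the earlier active-camera approaches the abstract criticises.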

  8. Stigma models: Testing hypotheses of how images of Nevada are acquired and values are attached to them

    Energy Technology Data Exchange (ETDEWEB)

    Jenkins-Smith, H.C. [New Mexico Univ., Albuquerque, NM (United States)

    1994-12-01

    This report analyzes data from surveys on the effects that images associated with nuclear power and waste (i.e., nuclear images) have on people's preference to vacation in Nevada. The analysis was stimulated by a model of imagery and stigma which assumes that information about a potentially hazardous facility generates signals that elicit negative images about the place in which it is located. Individuals give these images negative values (valences) that lessen their desire to vacation, relocate, or retire in that place. The model has been used to argue that the proposed Yucca Mountain high-level nuclear waste repository could elicit images of nuclear waste that would stigmatize Nevada and thus impose substantial economic losses there. This report proposes a revised model that assumes that the acquisition and valuation of images depend on individuals' ideological and cultural predispositions and that the ways in which new images will affect their preferences and behavior partly depend on these predispositions. The report tests these hypotheses: (1) individuals with distinct cultural and ideological predispositions have different propensities for acquiring nuclear images, (2) these people attach different valences to these images, (3) the variations in these valences are important, and (4) the valences of the different categories of images within an individual's image sets for a place correlate very well. The analysis largely confirms these hypotheses, indicating that the stigma model should be revised to (1) consider the relevant ideological and cultural predispositions of the people who will potentially acquire and attach value to the image, (2) specify the kinds of images that previously attracted people to the host state, and (3) consider interactions between the old and potential new images of the place. 37 refs., 18 figs., 17 tabs.

  9. Acquiring and preprocessing leaf images for automated plant identification: understanding the tradeoff between effort and information gain

    Directory of Open Access Journals (Sweden)

    Michael Rzanny

    2017-11-01

    Background: Automated species identification is a long-term research subject. Contrary to flowers and fruits, leaves are available throughout most of the year. Offering margin and texture to characterize a species, they are the most studied organ for automated identification. Substantially matured machine learning techniques generate the need for more training data (aka leaf images). Researchers as well as enthusiasts miss guidance on how to acquire suitable training images in an efficient way. Methods: In this paper, we systematically study nine image types and three preprocessing strategies. Image types vary in terms of in-situ image recording conditions: perspective, illumination, and background, while the preprocessing strategies compare non-preprocessed, cropped, and segmented images to each other. Per image type-preprocessing combination, we also quantify the manual effort required for their implementation. We extract image features using a convolutional neural network, classify species using the resulting feature vectors and discuss classification accuracy in relation to the required effort per combination. Results: The most effective, non-destructive way to record herbaceous leaves is to take an image of the leaf's top side. We yield the highest classification accuracy using destructive back light images, i.e., holding the plucked leaf against the sky for image acquisition. Cropping the image to the leaf's boundary substantially improves accuracy, while precise segmentation yields similar accuracy at a substantially higher effort. The permanent use or disuse of a flash light has negligible effects. Imaging the typically stronger textured backside of a leaf does not result in higher accuracy, but notably increases the acquisition cost. Conclusions: The way in which leaf images are acquired and preprocessed does have a substantial effect on the accuracy of the classifier trained on them. For the first time, this

  10. A Video Rate Confocal Laser Beam Scanning Light Microscope Using An Image Dissector

    Science.gov (United States)

    Goldstein, Seth R.; Hubin, Thomas; Rosenthal, Scott; Washburn, Clayton

    1989-12-01

    A video rate confocal reflected light microscope with no moving parts has been developed. Return light from an acousto-optically raster scanned laser beam is imaged from the microscope stage onto the photocathode of an Image Dissector Tube (IDT). Confocal operation is achieved by appropriately raster scanning with the IDT x and y deflection coils so as to continuously "sample" that portion of the photocathode that is being instantaneously illuminated by the return image of the scanning laser spot. Optimum IDT scan parameters and geometric distortion correction parameters are determined under computer control within seconds and are then continuously applied to insure system alignment. The system is operational and reflected light images from a variety of objects have been obtained. The operating principle can be extended to fluorescence and transmission microscopy.

  11. Short Project-Based Learning with MATLAB Applications to Support the Learning of Video-Image Processing

    Science.gov (United States)

    Gil, Pablo

    2017-10-01

    University courses concerning Computer Vision and Image Processing are generally taught using a traditional methodology that is focused on the teacher rather than on the students. This approach is consequently not effective when teachers seek to attain cognitive objectives involving their students' critical thinking. This manuscript covers the development, implementation and assessment of a short project-based engineering course with MATLAB applications, taken by Bachelor's degree students in Multimedia Engineering. The principal goal of all course lectures and hands-on laboratory activities was for the students to not only acquire image-specific technical skills but also a general knowledge of data analysis so as to locate phenomena in pixel regions of images and video frames. This would hopefully enable the students to develop skills regarding the implementation of the filters, operators, methods and techniques used for image processing and computer vision software libraries. Our teaching-learning process thus permits the accomplishment of knowledge assimilation, student motivation and skill development through the use of a continuous evaluation strategy to solve practical and real problems by means of short projects designed using MATLAB applications. Project-based learning is not new; this approach has been used in STEM learning in recent decades, but there are many types of projects. The aim of the current study is to analyse the efficacy of short projects as a learning tool when compared to long projects, during which the students work with more independence. This work additionally presents the impact of different types of activities, and not only short projects, on students' overall results in this subject. Moreover, a statistical study has allowed the author to suggest a link between the students' success ratio and the type of content covered and activities completed on the course. The results described in this paper show that those students who took part

  12. Towards Realising Secure and Efficient Image and Video Processing Applications on Quantum Computers

    Directory of Open Access Journals (Sweden)

    Abdullah M. Iliyasu

    2013-07-01

    Exploiting the promise of security and efficiency that quantum computing offers, the basic foundations leading to commercial applications for quantum image processing are proposed. Two mathematical frameworks and algorithms to accomplish the watermarking of quantum images, authentication of ownership of already watermarked images and recovery of their unmarked versions on quantum computers are proposed. Encoding the images as 2n-sized normalised Flexible Representation of Quantum Images (FRQI) states, with n qubits and 1 qubit dedicated to capturing the respective information about the colour and position of every pixel in the image, the proposed algorithms utilise the flexibility inherent to the FRQI representation in order to confine the transformations on an image to any predetermined chromatic or spatial (or a combination of both) content of the image, as dictated by the watermark embedding, authentication or recovery circuits. Furthermore, by adopting an apt generalisation of the criteria required to realise physical quantum computing hardware, three standalone components that make up the framework to prepare, manipulate and recover the various contents required to represent and produce movies on quantum computers are also proposed. Each of the algorithms and the mathematical foundations for their execution were simulated using classical (i.e., conventional or non-quantum) computing resources, and their results were analysed alongside other longstanding classical computing equivalents. The work presented here, combined together with the extensions suggested, provides the basic foundations towards effectuating secure and efficient classical-like image and video processing applications on the quantum-computing framework.

  13. Analysis of Decorrelation Transform Gain for Uncoded Wireless Image and Video Communication.

    Science.gov (United States)

    Ruiqin Xiong; Feng Wu; Jizheng Xu; Xiaopeng Fan; Chong Luo; Wen Gao

    2016-04-01

    An uncoded transmission scheme called SoftCast has recently shown great potential for wireless video transmission. Unlike conventional approaches, SoftCast processes input images only by a series of transformations and modulates the coefficients directly onto a dense constellation for transmission. The transmission is uncoded and lossy in nature, with its noise level commensurate with the channel condition. This paper presents a theoretical analysis of uncoded visual communication, focusing on developing a quantitative measure of the efficiency of the decorrelation transform in a generalized uncoded transmission framework. Our analysis reveals that the energy distribution among signal elements is critical for the efficiency of uncoded transmission. A decorrelation transform can potentially bring a significant performance gain by boosting the energy diversity in the signal representation. Numerical results on Markov random processes and real image and video signals are reported to evaluate the performance gain of using different transforms in uncoded transmission. The analysis presented in this paper is verified by simulated SoftCast transmissions. This provides guidelines for designing efficient uncoded video transmission schemes.
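The claimed benefit of boosting energy diversity can be checked numerically: for a first-order Markov (AR(1)) source, an orthonormal DCT concentrates variance into a few coefficients, raising the classical transform coding gain (arithmetic over geometric mean of coefficient variances) well above the pixel-domain value of 1. The block size 16 and correlation 0.95 below are assumed illustrative parameters, not values from the paper.

```python
import numpy as np

def coding_gain(variances):
    """Transform coding gain: arithmetic over geometric mean of variances."""
    v = np.asarray(variances, dtype=float)
    return v.mean() / np.exp(np.log(v).mean())

N, rho = 16, 0.95                        # block size and AR(1) correlation
R = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))  # covariance
n = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)
C[0] /= np.sqrt(2.0)                     # orthonormal DCT-II matrix
coef_var = np.diag(C @ R @ C.T)          # coefficient variances after DCT
print(coding_gain(np.ones(N)))           # pixel domain: gain is exactly 1.0
print(round(float(coding_gain(coef_var)), 2))  # DCT domain: well above 1
```

Because the DCT is orthonormal, the total energy (arithmetic mean of variances) is unchanged; only the spread (geometric mean) shrinks, which is precisely the "energy diversity" the paper identifies as the source of the uncoded-transmission gain.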

  14. JF-cut: a parallel graph cut approach for large-scale image and video.

    Science.gov (United States)

    Peng, Yi; Chen, Li; Ou-Yang, Fang-Xin; Chen, Wei; Yong, Jun-Hai

    2015-02-01

    Graph cut has proven to be an effective scheme for solving a wide variety of segmentation problems in the vision and graphics community. The main limitation of conventional graph-cut implementations is that they can hardly handle large images or videos because of their high computational complexity. Even though there are some parallelization solutions, they commonly suffer from low parallelism (on CPU) or low convergence speed (on GPU). In this paper, we present a novel graph-cut algorithm that leverages a parallelized jump flooding technique and a heuristic push-relabel scheme to enhance the graph-cut process, namely, back-and-forth relabel, convergence detection, and block-wise push-relabel. The entire process is parallelizable on GPU, and outperforms existing GPU-based implementations in terms of global convergence, information propagation, and performance. We design an intuitive user interface for specifying regions of interest in cases of occlusion when handling video sequences. Experiments on a variety of data sets, including images (up to 15 K × 10 K), videos (up to 2.5 K × 1.5 K × 50), and volumetric data, achieve high-quality results and a maximum 40-fold (139-fold) speedup over conventional GPU- (CPU-) based approaches.

  15. Fast single-photon imager acquires 1024 pixels at 100 kframe/s

    Science.gov (United States)

    Guerrieri, Fabrizio; Tisa, Simone; Zappa, Franco

    2009-02-01

    We present the design and discuss in depth the operating conditions of a two-dimensional (2-D) imaging array of single-photon detectors that provides a total of 1024 pixels, laid out in a 32-row by 32-column array, integrated within a monolithic silicon chip with dimensions of 3.5 mm x 3.5 mm. We employed a standard high-voltage 0.35 μm CMOS fabrication technology, with no need for any custom processing. Each pixel consists of one Single-Photon Avalanche Diode (SPAD) and compact front-end analog electronics followed by digital processing circuitry. The in-pixel front-end electronics senses the ignition of the avalanche, quenches the detector, provides a pulse and restores the detector for detecting a subsequent photon. The processing circuitry counts events (both photon and unwanted "noise" ignitions) within user-selectable integration time-slots and stores the count in an in-pixel memory cell, which is read out in 10 ns/pixel. This two-level pipeline architecture allows the current frame to be acquired while the previous one is simultaneously read out, thus achieving a very high free-running frame rate with negligible inter-frame dead time. Each pixel is therefore a completely independent photon counter. The measured Photon Detection Efficiency (PDE) tops 43% at 5 V excess bias, while the Dark Count Rate (DCR) is below 4 kcps (counts per second) at room temperature. The maximum frame rate depends on the system clock; with a convenient 100 MHz system clock we achieved a free-running speed of 100 kframe/s from all 1024 pixels.
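The reported frame rate is consistent with the readout figures. A back-of-the-envelope check (assumptions drawn only from the numbers in the abstract, not from the chip's datasheet):

```python
# With the two-level pipeline overlapping acquisition and readout, the
# per-frame readout time bounds the free-running frame rate.
pixels = 32 * 32                 # 1024-pixel array
readout_per_pixel = 10e-9        # 10 ns/pixel, from the abstract
frame_readout = pixels * readout_per_pixel
max_frame_rate = 1.0 / frame_readout
print(f"{frame_readout * 1e6:.2f} us -> {max_frame_rate / 1e3:.1f} kframe/s")
# prints "10.24 us -> 97.7 kframe/s", consistent with the ~100 kframe/s reported
```

Without the pipeline, readout and integration would have to happen sequentially and the achievable rate would drop accordingly, which is why the dual-buffer design matters.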

  16. Terrain changes from images acquired on opportunistic flights by SfM photogrammetry

    Science.gov (United States)

    Girod, Luc; Nuth, Christopher; Kääb, Andreas; Etzelmüller, Bernd; Kohler, Jack

    2017-03-01

    Acquiring data to analyse change in topography is often a costly endeavour, requiring either extensive, potentially risky fieldwork and/or expensive equipment or commercial data. Bringing the cost down while keeping the precision and accuracy has been a focus in geoscience in recent years. Structure from motion (SfM) photogrammetric techniques are emerging as powerful tools for surveying, with modern algorithms and large computing power allowing for the production of accurate and detailed data from low-cost, informal surveys. The high spatial and temporal resolution permits the monitoring of geomorphological features undergoing relatively rapid change, such as glaciers, moraines, or landslides. We present a method that takes advantage of light transport flights conducted for other missions to opportunistically collect imagery for geomorphological analysis. We test and validate an approach in which we attach a consumer-grade camera and a simple code-based Global Navigation Satellite System (GNSS) receiver to a helicopter to collect data when the flight path covers an area of interest. Our method builds upon Welty et al. (2013), showing the ability to link GNSS data to images without a complex physical or electronic link, even with imprecise camera clocks and irregular time lapses. As a proof of concept, we conducted two test surveys, in September 2014 and 2015, over the glacier Midtre Lovénbreen and its forefield, in northwestern Svalbard. We were able to derive elevation change estimates comparable to in situ mass balance stake measurements. The accuracy and precision of our DEMs allow detection and analysis of a number of processes in the proglacial area, including the presence of thermokarst and the evolution of water channels.

  17. Validation of a pediatric vocal fold nodule rating scale based on digital video images.

    Science.gov (United States)

    Nuss, Roger C; Ward, Jessica; Recko, Thomas; Huang, Lin; Woodnorth, Geralyn Harvey

    2012-01-01

    We sought to create a validated scale of vocal fold nodules in children, based on digital video clips obtained during diagnostic fiberoptic laryngoscopy. We developed a 4-point grading scale of vocal fold nodules in children, based upon short digital video clips. A tutorial for use of the scale, including schematic drawings of nodules, static images, and 10-second video clips, was presented to 36 clinicians with various levels of experience. The clinicians then reviewed 40 short digital video samples from pediatric patients evaluated in a voice clinic and rated the nodule size. Statistical analysis of the ratings provided inter-rater reliability scores. Thirty-six clinicians with various levels of experience rated a total of 40 short video clips. The ratings of experienced raters (14 pediatric otolaryngology attending physicians and pediatric otolaryngology fellows) were compared with those of inexperienced raters (22 nurses, medical students, otolaryngology residents, physician assistants, and pediatric speech-language pathologists). The overall intraclass correlation coefficient for the ratings of nodule size was quite good (0.62; 95% confidence interval, 0.52 to 0.74). The p value for experienced raters versus inexperienced raters was 0.1345, indicating no statistically significant difference in the ratings by these two groups. The intraclass correlation coefficient for intra-rater reliability was very high (0.89). The use of a dynamic scale of pediatric vocal fold nodule size most realistically represents the clinical assessment of nodules during an office visit. The results of this study show a high level of agreement between experienced and inexperienced raters. This scale can be used with a high level of reliability by clinicians with various levels of experience. A validated grading scale will help to assess long-term outcomes of pediatric patients with vocal fold nodules.

  18. Anatomic vs. acquired image frame discordance in spectral domain optical coherence tomography minimum rim measurements.

    Directory of Open Access Journals (Sweden)

    Lin He

    Full Text Available PURPOSE: To quantify the effects of using the fovea to Bruch's membrane opening (FoBMO) axis as the nasal-temporal midline for 30° sectoral (clock-hour) spectral domain optical coherence tomography (SDOCT) optic nerve head (ONH) minimum rim width (MRW) and area (MRA) calculations. METHODS: The internal limiting membrane and BMO were delineated within 24 radial ONH B-scans in 222 eyes of 222 participants with ocular hypertension and glaucoma. For each eye the fovea was marked within the infrared reflectance image, the FoBMO angle (θ) relative to the acquired image frame (AIF) horizontal was calculated, the ONH was divided into 30° sectors using a FoBMO or AIF nasal/temporal axis, and SDOCT MRW and MRA were quantified within each FoBMO vs. AIF sector. For each sector, focal rim loss was calculated as the MRW and MRA gradients (i.e. the difference between the value for that sector and the one clockwise to it, divided by 30°). Sectoral FoBMO vs. AIF discordance was calculated as the difference between the FoBMO and AIF values for each sector. Generalized estimating equations were used to predict the eyes and sectors of maximum FoBMO vs. AIF discordance. RESULTS: The mean FoBMO angle was -6.6±4.2° (range: -17° to +7°). FoBMO vs. AIF discordance in sectoral mean MRW and MRA was significant for 7 of 12 and 6 of 12 sectors, respectively (p<0.05, Wilcoxon test, Bonferroni correction). Eye-specific FoBMO vs. AIF sectoral discordance was predicted by sectoral rim gradient (p<0.001) and FoBMO angle (p<0.001) and achieved maximum values of 83% for MRW and 101% for MRA. CONCLUSIONS: Using the FoBMO axis as the nasal-temporal axis to regionalize the ONH, rather than a line parallel to the AIF horizontal axis, significantly influences clock-hour SDOCT rim values. This effect is greatest in eyes with large FoBMO angles and sectors with focal rim loss.
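
    The sector bookkeeping behind the FoBMO vs. AIF comparison can be sketched with a small helper, assuming angles are measured in degrees from the AIF horizontal; the function is a hypothetical illustration, not the study's delineation software.

```python
def sector_index(angle_deg, fobmo_angle_deg=0.0):
    """Map a rim point at angle_deg (in the acquired image frame, AIF) to
    one of twelve 30-degree clock-hour sectors. With fobmo_angle_deg=0
    this is the AIF regionalization; a non-zero FoBMO angle rotates the
    nasal-temporal midline onto the fovea-BMO axis."""
    return int(((angle_deg - fobmo_angle_deg) % 360) // 30)

aif_sector = sector_index(25)            # sector 0 relative to the AIF horizontal
fobmo_sector = sector_index(25, -15.0)   # the same rim point moves to sector 1
```

    The discordance the paper quantifies is exactly this effect: a fixed rim point changes clock-hour sector when the midline rotates by the eye-specific FoBMO angle.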

  19. A flexible software architecture for scalable real-time image and video processing applications

    Science.gov (United States)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2012-06-01

    Real-time image and video processing applications require skilled architects, and recent trends in hardware platforms make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility because they are normally oriented towards particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty of reuse, and inefficient execution on multicore processors. This paper presents a novel software architecture for real-time image and video processing applications which addresses these issues. The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the application layer. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic publish/subscribe pattern. Topic-based filtering, in which messages are published to topics, is used to route messages from publishers to the subscribers interested in a particular type of message. The application layer provides a repository of reusable application modules designed for real-time image and video processing applications. These modules, which include acquisition, visualization, communication, user interface and data processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed architecture.
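
    The topic-based routing described for the messaging layer can be sketched in a few lines; the class and topic names are invented, and a production implementation would add threading, queues, and backpressure.

```python
from collections import defaultdict

class MessageBus:
    """Minimal topic-based publish/subscribe broker."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # route the message only to subscribers of this topic
        for callback in self._subscribers[topic]:
            callback(message)

frames, stats = [], []
bus = MessageBus()
bus.subscribe("frames/raw", frames.append)   # e.g. a visualization module
bus.subscribe("stats", stats.append)         # e.g. a logging module
bus.publish("frames/raw", "frame-0")
bus.publish("stats", {"fps": 30})
bus.publish("frames/raw", "frame-1")
# frames == ["frame-0", "frame-1"]; stats == [{"fps": 30}]
```

    Because publishers and subscribers only share topic names, acquisition, processing, and visualization modules stay decoupled, which is the flexibility argument the architecture makes.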

  20. Interaction between High-Level and Low-Level Image Analysis for Semantic Video Object Extraction

    Directory of Open Access Journals (Sweden)

    Ebrahimi Touradj

    2004-01-01

    Full Text Available The task of extracting a semantic video object is split into two subproblems, namely, object segmentation and region segmentation. Object segmentation relies on a priori assumptions, whereas region segmentation is data-driven and can be solved in an automatic manner. These two subproblems are not mutually independent, and they can benefit from interactions with each other. In this paper, a framework for such interaction is formulated. This representation scheme, based on region segmentation and semantic segmentation, is compatible with the view that image analysis and scene understanding problems can be decomposed into low-level and high-level tasks. Low-level tasks pertain to region-oriented processing, whereas high-level tasks are closely related to object-level processing. This approach emulates the human visual system: what one “sees” in a scene depends on the scene itself (region segmentation) as well as on the cognitive task (semantic segmentation) at hand. The higher-level segmentation results in a partition corresponding to semantic video objects. Semantic video objects do not usually have invariant physical properties, and their definition depends on the application. Hence, the definition incorporates complex domain-specific knowledge and is not easy to generalize. For the specific implementation used in this paper, motion is used as a clue to semantic information. In this framework, an automatic algorithm is presented for computing the semantic partition based on color change detection. The change detection strategy is designed to be immune to sensor noise and local illumination variations. The lower-level segmentation identifies the partition corresponding to perceptually uniform regions. These regions are derived by clustering in an n-dimensional feature space, composed of static as well as dynamic image attributes. We propose an interaction mechanism between the semantic and the region partitions which makes it possible to cope with multiple

  1. A New Colorimetrically-Calibrated Automated Video-Imaging Protocol for Day-Night Fish Counting at the OBSEA Coastal Cabled Observatory

    Directory of Open Access Journals (Sweden)

    Joaquín del Río

    2013-10-01

    Full Text Available Field measurements of the swimming activity rhythms of fishes are scant due to the difficulty of counting individuals at a high frequency over a long period of time. Cabled observatory video monitoring allows such sampling at a high frequency over unlimited periods of time. Unfortunately, automation of the extraction of biological information (i.e., animals' visual counts per unit of time) is still a major bottleneck. In this study, we describe a new automated video-imaging protocol for the 24-h continuous counting of fishes in colorimetrically calibrated time-lapse photographic outputs, taken by a shallow-water (20 m depth) cabled video-platform, the OBSEA. The spectral reflectance value for each patch was measured between 400 and 700 nm and then converted into standard RGB, used as a reference for all subsequent calibrations. All the images were acquired within a standardized Region Of Interest (ROI), represented by a 2 × 2 m methacrylate panel endowed with a 9-colour calibration chart, and were calibrated using the recently implemented "3D Thin-Plate Spline" warping approach in order to numerically define color by its coordinates in n-dimensional space. That operation was repeated on a subset of 500 images used as a training set, manually selected because they were acquired under optimum visibility conditions. All images plus those of the training set were ordered together through Principal Component Analysis, allowing the selection of 614 images (67.6%) out of a total of 908, corresponding to 18 days (at 30 min frequency). The Roberts operator (used in image processing and computer vision for edge detection) was applied to highlight regions of high spatial colour gradient corresponding to fishes' bodies. Time series of manual and automated counts were compared for efficiency evaluation. Periodogram and waveform analysis outputs provided very similar results, although quantified parameters in relation to the strength of respective rhythms were
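
    The Roberts operator used here is simple to state: two 2 × 2 diagonal-difference kernels whose responses are combined into a gradient magnitude. A minimal NumPy sketch, run on a synthetic step-edge image rather than the OBSEA data:

```python
import numpy as np

def roberts_edges(img):
    """Roberts cross operator: gradient magnitude from 2x2 diagonal differences."""
    img = img.astype(float)
    gx = img[:-1, :-1] - img[1:, 1:]   # kernel [[1, 0], [0, -1]]
    gy = img[:-1, 1:] - img[1:, :-1]   # kernel [[0, 1], [-1, 0]]
    return np.hypot(gx, gy)

# A vertical step edge: the response is high only along the transition
img = np.array([[0, 0, 9, 9],
                [0, 0, 9, 9],
                [0, 0, 9, 9]])
edges = roberts_edges(img)   # strong response in the middle column only
```

    In the protocol above, thresholding this gradient map separates the dark fish silhouettes from the calibrated panel background.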

  2. Image deblurring in video stream based on two-level image model

    Science.gov (United States)

    Mukovozov, Arseniy; Nikolaev, Dmitry; Limonova, Elena

    2017-03-01

    An iterative algorithm is proposed for blind multi-image deblurring of binary images. Binarity is the only prior restriction imposed on the image. The image formation model assumes convolution with an arbitrary kernel and the addition of a constant value. A penalty functional is composed using the binarity constraint for regularization. The algorithm estimates the original image and the distortion parameters by alternately reducing two parts of this functional. Experimental results for natural (non-synthetic) data are presented.

  3. Video and thermal imaging system for monitoring interiors of high temperature reaction vessels

    Science.gov (United States)

    Saveliev, Alexei V [Chicago, IL]; Zelepouga, Serguei A [Hoffman Estates, IL]; Rue, David M [Chicago, IL]

    2012-01-10

    A system and method for real-time monitoring of the interior of a combustor or gasifier wherein light emitted by the interior surface of a refractory wall of the combustor or gasifier is collected using an imaging fiber optic bundle having a light receiving end and a light output end. Color information in the light is captured with primary color (RGB) filters or complementary color (GMCY) filters placed over individual pixels of color sensors disposed within a digital color camera in a Bayer mosaic layout, producing RGB signal outputs or GMCY signal outputs. The signal outputs are processed using intensity ratios of the primary color filters or the complementary color filters, producing video images and/or thermal images of the interior of the combustor or gasifier.

  4. Understanding Discrete Facial Expressions in Video Using an Emotion Avatar Image.

    Science.gov (United States)

    Songfan Yang; Bhanu, B

    2012-08-01

    Existing video-based facial expression recognition techniques analyze the geometry-based and appearance-based information in every frame as well as explore the temporal relation among frames. In contrast, we present a new image-based representation and an associated reference image, called the emotion avatar image (EAI) and the avatar reference, respectively. This representation leverages out-of-plane head rotation. It is not only robust to outliers but also provides a method to aggregate dynamic information from expressions of various lengths. The approach to facial expression analysis consists of the following steps: 1) face detection; 2) face registration of video frames with the avatar reference to form the EAI representation; 3) computation of features from EAIs using both local binary patterns and local phase quantization; and 4) classification of the features as one of the emotion types using a linear support vector machine classifier. Our system is tested on the Facial Expression Recognition and Analysis Challenge (FERA2011) data, i.e., the Geneva Multimodal Emotion Portrayal-Facial Expression Recognition and Analysis Challenge (GEMEP-FERA) data set. The experimental results demonstrate that the information captured in an EAI for a facial expression is a very strong cue for emotion inference. Moreover, our method suppresses person-specific information for emotion and performs well on unseen data.

  5. Error protection and interleaving for wireless transmission of JPEG 2000 images and video.

    Science.gov (United States)

    Baruffa, Giuseppe; Micanti, Paolo; Frescura, Fabrizio

    2009-02-01

    The transmission of JPEG 2000 images or video over wireless channels has to cope with the high probability and burstiness of errors introduced by Gaussian noise, linear distortions, and fading. At the receiver side, there is distortion due to the compression performed at the sender side and to the errors introduced in the data stream by the channel. Progressive source coding can be successfully exploited to protect different portions of the data stream with different channel code rates, based upon the relative importance that each portion has for the reconstructed image. Unequal Error Protection (UEP) schemes are generally adopted, which offer a close-to-optimal solution. In this paper, we present a dichotomic technique for searching for the optimal UEP strategy, which borrows ideas from existing algorithms, for the transmission of JPEG 2000 images and video over a wireless channel. Moreover, we also adopt a method of virtual interleaving for the transmission of high-bit-rate streams over packet loss channels, guaranteeing a large PSNR advantage over a plain transmission scheme. These two protection strategies can also be combined to maximize the error correction capabilities.
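
    The interleaving idea, spreading a burst of channel errors across distant stream positions, can be illustrated with a plain block interleaver; the row count and packet stream below are invented, and the paper's "virtual" variant differs in its details.

```python
def interleave(data, rows):
    """Block interleaver: write row-wise into a rows x cols matrix,
    read out column-wise (len(data) must be a multiple of rows)."""
    cols = len(data) // rows
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(data, rows):
    # the inverse operation is interleaving with the transposed geometry
    return interleave(data, len(data) // rows)

packets = list(range(12))
sent = interleave(packets, rows=3)
# A burst hitting positions 0-2 of `sent` corrupts packets 0, 4, and 8:
# after deinterleaving, the burst is spread into isolated errors that a
# channel code can correct.
recovered = deinterleave(sent, rows=3)   # round-trips back to `packets`
```

    This is why interleaving combines well with UEP: the channel code only ever sees scattered errors instead of an uncorrectable burst.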

  6. Assessment of the Potential of UAV Video Image Analysis for Planning Irrigation Needs of Golf Courses

    Directory of Open Access Journals (Sweden)

    Alberto-Jesús Perea-Moreno

    2016-12-01

    Full Text Available Golf courses can be considered a form of precision agriculture: being playing surfaces, their appearance is of vital importance. Areas with good weather tend to have low rainfall, so the water management of golf courses in these climates is a crucial issue due to the high water demand of turfgrass. Golf courses are rapidly transitioning to reuse water: municipalities in the USA, for example, provide price incentives or mandate the use of reuse water for irrigation purposes, while in Europe this is mandatory. Knowing the turfgrass surface of a large area can therefore help plan the treated sewage effluent needs. Recycled water is usually of poor quality, so it is crucial to check the real turfgrass surface in order to plan the global irrigation needs using this type of water. In this way, the irrigation of golf courses does not detract from the natural water resources of the area. The aim of this paper is to propose a new methodology for analysing geometric patterns in video data acquired from Unmanned Aerial Vehicles (UAVs) using a new Hierarchical Temporal Memory (HTM) algorithm. A case study concerning maintained turfgrass, especially for golf courses, has been developed, showing very good results, better than 98% in the confusion matrix. The results obtained in this study represent a first step toward video imagery classification. In summary, technical progress in computing power and software has shown that video imagery is one of the most promising environmental data acquisition techniques available today. This rapid classification of turfgrass can play an important role in planning water management.

  7. Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video

    Science.gov (United States)

    Li, Honggui

    2017-09-01

    This paper proposes a unified one-dimensional (1-D) coding framework of image and video, which depends on deep learning neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain the compact inputs of deep artificial neural network. Second, for the purpose of best reconstructing original image patches, deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve the 1-D representation of image blocks. Under the circumstances of 1-D representation, DLA is capable of attaining zero reconstruction error, which is impossible for the classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating different categories of videos into the inputs of patch clustering algorithm. Finally, it is shown in the results of simulation experiments that the proposed methods can simultaneously gain higher compression ratio and peak signal-to-noise ratio than those of the state-of-the-art methods in the situation of low bitrate transmission.
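
    The abstract's central claim about the deep linear autoencoder, that it can reach zero reconstruction error when the 1-D code dimension matches the intrinsic rank of the patches, can be checked with a closed-form linear autoencoder (equivalent to PCA via the SVD). The toy patch data below are synthetic, and the paper's clustering front-end and multi-layer structure are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image patches": 100 vectors of dimension 16 with intrinsic rank 3
basis = rng.normal(size=(3, 16))
patches = rng.normal(size=(100, 3)) @ basis

# Linear autoencoder solved in closed form via SVD (PCA):
# encoder = top principal directions W, decoder = W.T
mean = patches.mean(axis=0)
centered = patches - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
W = vt[:3]                          # 3 x 16 encoder
codes = centered @ W.T              # 1-D code (length 3) per patch
reconstructed = codes @ W + mean

# Reconstruction error is zero up to float precision, because the code
# dimension matches the intrinsic rank of the patch data
err = np.abs(reconstructed - patches).max()
```

    A nonlinear autoencoder trained by gradient descent generally cannot guarantee this exact-recovery property, which is the distinction the paper draws.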

  8. Individual differences in the processing of smoking-cessation video messages: An imaging genetics study.

    Science.gov (United States)

    Shi, Zhenhao; Wang, An-Li; Aronowitz, Catherine A; Romer, Daniel; Langleben, Daniel D

    2017-09-01

    Studies testing the benefits of enriching smoking-cessation video ads with attention-grabbing sensory features have yielded variable results. The dopamine transporter gene (DAT1) has been implicated in attention deficits, and we hypothesized that DAT1 polymorphism is partially responsible for this variability. Using functional magnetic resonance imaging, we examined brain responses to videos high or low in attention-grabbing features, indexed by "message sensation value" (MSV), in 53 smokers genotyped for DAT1. Compared to other smokers, 10/10 homozygotes showed greater neural response to High- vs. Low-MSV smoking-cessation videos in two a priori regions of interest: the right temporoparietal junction and the right ventrolateral prefrontal cortex. These regions are known to underlie stimulus-driven attentional processing. Exploratory analysis showed that the right temporoparietal response positively predicted follow-up smoking behavior indexed by urine cotinine. Our findings suggest that responses to attention-grabbing features in smoking-cessation messages are affected by the DAT1 genotype. Copyright © 2017. Published by Elsevier B.V.

  9. In situ calibration of an infrared imaging video bolometer in the Large Helical Device

    Energy Technology Data Exchange (ETDEWEB)

    Mukai, K., E-mail: mukai.kiyofumi@LHD.nifs.ac.jp; Peterson, B. J. [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki 509-5292 (Japan)]; Pandya, S. N.; Sano, R. [The Graduate University for Advanced Studies, 322-6 Oroshi-cho, Toki 509-5292 (Japan)]

    2014-11-15

    The InfraRed imaging Video Bolometer (IRVB) is a powerful diagnostic to measure multi-dimensional radiation profiles in plasma fusion devices. In the Large Helical Device (LHD), four IRVBs have been installed with different fields of view to reconstruct three-dimensional profiles using a tomography technique. For the application of the measurement to plasma experiments using deuterium gas in LHD in the near future, the long-term effect of the neutron irradiation on the heat characteristics of an IRVB foil should be taken into account by regular in situ calibration measurements. Therefore, in this study, an in situ calibration system was designed.

  10. Positive effect on patient experience of video information given prior to cardiovascular magnetic resonance imaging: A clinical trial.

    Science.gov (United States)

    Ahlander, Britt-Marie; Engvall, Jan; Maret, Eva; Ericsson, Elisabeth

    2017-11-17

    To evaluate the effect of video information given before cardiovascular magnetic resonance imaging on patient anxiety, and to compare patient experiences of cardiovascular magnetic resonance imaging versus myocardial perfusion scintigraphy. A further aim was to evaluate whether the additional information has an impact on motion artefacts. Cardiovascular magnetic resonance imaging and myocardial perfusion scintigraphy are technically advanced methods for the evaluation of heart diseases. Although cardiovascular magnetic resonance imaging is considered painless, patients may experience anxiety due to the closed environment. A prospective randomised intervention study, not registered. The sample (n = 148) consisted of 97 patients referred for cardiovascular magnetic resonance imaging, randomised to receive either video information in addition to standard text information (CMR-video/n = 49) or standard text information alone (CMR-standard/n = 48). A third group undergoing myocardial perfusion scintigraphy (n = 51) was compared with the CMR-standard group. Anxiety was evaluated before, immediately after the procedure and 1 week later. Five questionnaires were used: Cardiac Anxiety Questionnaire, State-Trait Anxiety Inventory, Hospital Anxiety and Depression scale, MRI Fear Survey Schedule and the MRI-Anxiety Questionnaire. Motion artefacts were evaluated by three observers, blinded to the information given. Data were collected between April 2015 and April 2016. The study followed the CONSORT guidelines. The CMR-video group scored lower (better) than the CMR-standard group in the factor Relaxation (p = .039) but not in the factor Anxiety. Anxiety levels were lower during scintigraphic examinations compared to the CMR-standard group (p magnetic resonance imaging increased by adding video information prior to the exam, which is important in relation to perceived quality in nursing. No effect was seen on motion

  11. Influence of image compression on the quality of UNB pan-sharpened imagery: a case study with security video image frames

    Science.gov (United States)

    Adhamkhiabani, Sina Adham; Zhang, Yun; Fathollahi, Fatemeh

    2014-05-01

    UNB Pan-sharp, also named FuzeGo, is an image fusion technique that produces high-resolution color satellite images by fusing a high-resolution panchromatic (monochrome) image and a low-resolution multispectral (color) image. This is an effective solution that modern satellites have been using to capture high-resolution color images at an ultra-high speed. Initial research on security camera systems shows that the UNB Pan-sharp technique can also be utilized to produce high-resolution, high-sensitivity color video images for various imaging and monitoring applications. Based on the UNB Pan-sharp technique, a video camera prototype system, called the UNB Super-camera system, was developed that captures high-resolution panchromatic images and low-resolution color images simultaneously and produces high-resolution color video images in real time on the fly. In a separate study, it was shown that the UNB Super-camera outperforms conventional 1-chip and 3-chip color cameras in image quality, especially when illumination is low, such as in room lighting. In this research, the influence of image compression on the quality of UNB Pan-sharped high-resolution color images is evaluated, since image compression is widely used in still and video cameras to reduce data volume and speed up data transfer. The results demonstrate that UNB Pan-sharp consistently produces high-resolution color images that have the same detail as the input high-resolution panchromatic image and the same color as the input low-resolution color image, regardless of the compression ratio and lighting condition. In addition, the high-resolution color images produced by UNB Pan-sharp have higher sensitivity (signal-to-noise ratio) and better edge sharpness and color rendering than those of a same-generation 1-chip color camera, regardless of the compression ratio and lighting condition.
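
    UNB Pan-sharp itself is proprietary, but the general pan-sharpening idea, modulating an upsampled low-resolution color image by a high-resolution panchromatic band, can be illustrated with the classical Brovey transform, which is a different and much simpler fusion rule than the one evaluated here. The image values are synthetic.

```python
import numpy as np

def brovey_pansharpen(pan, ms):
    """Brovey-transform pan-sharpening (illustrative stand-in only).
    pan: (H, W) high-resolution panchromatic band
    ms:  (H, W, 3) multispectral image already upsampled to pan resolution.
    Each band is scaled by pan / mean(ms), preserving band ratios (color)
    while injecting the panchromatic spatial detail."""
    intensity = ms.sum(axis=2, keepdims=True)
    intensity = np.where(intensity == 0, 1, intensity)  # avoid division by zero
    return ms * (pan[:, :, None] / intensity) * 3.0

pan = np.full((4, 4), 90.0)          # bright, detailed panchromatic band
ms = np.full((4, 4, 3), 30.0)        # dim, upsampled color image
fused = brovey_pansharpen(pan, ms)   # gray input stays gray, at pan brightness
```

    The quality comparison in the abstract (detail from the pan band, color from the multispectral band) is exactly what any such fusion rule is judged on.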

  12. Automated analysis of images acquired with electronic portal imaging device during delivery of quality assurance plans for inversely optimized arc therapy

    DEFF Research Database (Denmark)

    Fredh, Anna; Korreman, Stine; Rosenschöld, Per Munck af

    2010-01-01

    This work presents an automated method for comprehensively analyzing EPID images acquired for quality assurance of RapidArc treatment delivery. In-house-developed software has been used for the analysis, and long-term results from measurements on three linacs are presented.

  13. Security SVGA image sensor with on-chip video data authentication and cryptographic circuit

    Science.gov (United States)

    Stifter, P.; Eberhardt, K.; Erni, A.; Hofmann, K.

    2005-10-01

    Security applications of sensors in a networking environment place strong demands on sensor authentication and secure data transmission, due to the possibility of man-in-the-middle and address-spoofing attacks. A secure sensor system should therefore fulfil the three standard requirements of cryptography, namely data integrity, authentication and non-repudiation. This paper presents the unique sensor development by AIM, the so-called SecVGA, which is a high-performance, monochrome (B/W) CMOS active-pixel image sensor. The device is capable of capturing still and motion images with a resolution of 800x600 active pixels and converting the image into a digital data stream. The distinguishing feature of this development in comparison to standard imaging sensors is the on-chip cryptographic engine, which provides sensor authentication based on a one-way challenge/response protocol. The implemented protocol results in the exchange of a session key which secures the subsequent video data transmission. This is achieved by calculating a cryptographic checksum derived from a stateful hash value of the complete image frame. Every sensor contains an EEPROM memory cell for the non-volatile storage of a unique identifier. The imager is programmable via a two-wire I2C-compatible interface which controls the integration time, the active window size of the pixel array, the frame rate and various operating modes, including the authentication procedure.
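
    The protocol pattern described, a one-way challenge/response, a derived session key, and a stateful per-frame checksum, can be sketched with standard HMAC primitives. The key value, derivation label, and chaining scheme below are illustrative assumptions, not the SecVGA's actual on-chip protocol.

```python
import hashlib
import hmac
import os

SHARED_SECRET = b"per-sensor-key"   # hypothetical key provisioned at manufacture

# One-way challenge/response: the host verifies the sensor knows the secret
challenge = os.urandom(16)                                               # host -> sensor
response = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()   # sensor -> host
expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()   # host side
authenticated = hmac.compare_digest(response, expected)

# Both ends can now derive the same session key from the exchanged challenge
session_key = hmac.new(SHARED_SECRET, b"session|" + challenge,
                       hashlib.sha256).digest()

# Stateful hash over the frame stream: each frame's checksum depends on
# every frame transmitted so far, so replay and reordering are detectable
state = hashlib.sha256(session_key)
tags = []
for frame in (b"frame-0-pixels", b"frame-1-pixels"):
    state.update(frame)
    tags.append(hmac.new(session_key, state.digest(), hashlib.sha256).hexdigest())
```

    A fresh random challenge per session prevents replay of an earlier response, which is what defeats the address-spoofing attack mentioned above.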

  14. Wave Height Estimation from Shadowing Based on the Acquired X-Band Marine Radar Images in Coastal Area

    Directory of Open Access Journals (Sweden)

    Yanbo Wei

    2017-08-01

    Full Text Available In this paper, the retrieval of significant wave height from X-band marine radar images based on shadow statistics is investigated, since the retrieval accuracy is not seriously affected by environmental factors and the method has the advantage of requiring no external reference for calibration. However, the accuracy of the significant wave height estimated from radar images acquired in near-shore areas is not ideal. To solve this problem, the effect of water depth is considered in the theoretical derivation of the estimated wave height based on the sea surface slope, and an improved retrieval algorithm suitable for both deep and shallow water areas is developed. In addition, the radar data are sparsely processed in advance in order to obtain the high-quality edge image required by the shadow statistics algorithm, since high-resolution radar images lead to angular blurring in edge detection and are time-consuming in the estimation of the sea surface slope. Data acquired at the Pingtan Test Base in Fujian Province were used to verify the effectiveness of the proposed algorithm. The experimental results demonstrate that the improved method, which takes the water depth into account, is more effective and performs better at retrieving significant wave height in shallow water areas, compared to the in situ buoy data used as ground truth and to the existing shadow statistics method.

  15. On the impact of neutron beam divergence and scattering on the quality of transmission acquired tomographic images

    Science.gov (United States)

    Silvani, Maria Ines; Lopes, Ricardo T.; de Almeida, Gevaldo L.; Gonçalves, Marcelo José; Furieri, Rosanne C. A. A.

    2007-10-01

    The impact of the divergence of a thermal neutron beam and of scattered neutrons on the quality of tomographic images acquired by transmission has been evaluated using a third-generation tomographic system incorporating neutron collimators in several different arrangements. The system, equipped with a gaseous position-sensitive detector, was placed at the main channel outlet of the Argonauta Research Reactor at the Instituto de Engenharia Nuclear (CNEN-Brazil), which furnishes a thermal neutron flux of 2.3 × 10⁵ n·cm⁻²·s⁻¹. Experiments were then conducted using test objects with well-known inner structure and composition to assess the influence of the collimator arrangement on the quality of the acquired images. Both beam divergence and scattering, expected to spoil image quality, were reduced by using properly positioned collimators between the neutron source and the object, and in the gap between the object and the detector, respectively. The shadow cast by this last collimator on the projections used to reconstruct the tomographic images was eliminated by software specifically written for this purpose. Improvement of the tomographic images has been observed, demonstrating the effectiveness of properly positioned collimators for improving image quality.

  16. Detection of recurrent and primary acquired cholesteatoma with echo-planar diffusion-weighted magnetic resonance imaging.

    Science.gov (United States)

    Evlice, A; Tarkan, Ö; Kiroğlu, M; Biçakci, K; Özdemir, S; Tuncer, Ü; Çekiç, E

    2012-07-01

To evaluate the diagnostic value of echo-planar diffusion-weighted magnetic resonance imaging in pre-operative detection of suspected primary acquired, residual and/or recurrent cholesteatoma. Fifty-eight chronic otitis media patients with suspected cholesteatoma were thus evaluated two weeks pre-operatively, and divided into group one (41 patients, no previous surgery, suspected primary acquired cholesteatoma) and group two (17 patients, previous surgery, scheduled 'second-look' or revision surgery for suspected residual or recurrent cholesteatoma). Patients' operative, histopathology and radiological findings were compared. Cholesteatoma was found in 63 per cent of group one patients and 58 per cent of group two patients at surgery. Histopathological examination of surgical specimens indicated that imaging accurately predicted the presence or absence of cholesteatoma in 90 per cent of group one (37/41; 23 true positives, 14 true negatives) and 76 per cent of group two (13/17; seven true positives, six true negatives). Three patients in each group were false negatives, and one patient in each group was a false positive. The sensitivity, specificity, and positive and negative predictive values of echo-planar diffusion-weighted magnetic resonance imaging of cholesteatoma were, respectively, 88, 93, 95 and 82 per cent in group one and 70, 85, 87 and 66 per cent in group two. Echo-planar diffusion-weighted magnetic resonance imaging is a valuable technique with high sensitivity and specificity for cholesteatoma imaging.

  17. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera

    Directory of Open Access Journals (Sweden)

    Yufu Qu

    2018-01-01

In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle's (UAV) camera and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable.

  18. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera.

    Science.gov (United States)

    Qu, Yufu; Huang, Jianyu; Zhang, Xuan

    2018-01-14

In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle's (UAV) camera and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable.
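The key-image selection step above reduces each image's feature points to three principal component points before comparing images. The abstract does not specify the exact construction; a plausible minimal sketch (our assumption: the centroid plus a one-standard-deviation step along each principal axis, using the closed-form eigendecomposition of the 2×2 covariance of the 2D feature coordinates) is:

```python
import math

def principal_component_points(points):
    """Summarize 2D feature points as 3 points: the centroid plus a
    one-standard-deviation step along each principal axis."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # Entries of the 2x2 covariance matrix [[sxx, sxy], [sxy, syy]]
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Closed-form eigenvalues of a symmetric 2x2 matrix
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = tr / 2 + disc, tr / 2 - disc
    # Eigenvector for l1 (handle the axis-aligned case separately)
    if abs(sxy) > 1e-12:
        v1 = (l1 - syy, sxy)
    else:
        v1 = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(v1[0], v1[1])
    v1 = (v1[0] / norm, v1[1] / norm)
    v2 = (-v1[1], v1[0])  # orthogonal second axis
    s1, s2 = math.sqrt(max(l1, 0.0)), math.sqrt(max(l2, 0.0))
    return [(mx, my),
            (mx + s1 * v1[0], my + s1 * v1[1]),
            (mx + s2 * v2[0], my + s2 * v2[1])]
```

Comparing these three-point summaries between candidate frames then gives a cheap proxy for image overlap when deciding which frames enter the queue.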

  19. Processing and fusion of passively acquired, millimeter and terahertz images of the human body

    Science.gov (United States)

    Tian, Li; Shen, Yanchun; Jin, Weiqi; Zhao, Guozhong; Cai, Yi

    2017-04-01

    A passive, millimeter wave (MMW) and terahertz (THz) dual-band imaging system composed of 94 and 250 GHz single-element detectors was used to investigate preprocessing and fusion algorithms for dual-band images. Subsequently, an MMW and THz image preprocessing and fusion integrated algorithm (MMW-THz IPFIA) was developed. In the algorithm, a block-matching and three-dimensional filtering denoising algorithm is employed to filter noise, an adaptive histogram equalization algorithm to enhance images, an intensity-based registration algorithm to register images, and a wavelet-based image fusion algorithm to fuse the preprocessed images. The performance of the algorithm was analyzed by calculating the SNR and information entropy of the actual images. This algorithm effectively reduces the image noise and improves the level of detail in the images. Since the algorithm improves the performance of the investigated imaging system, it should support practical technological applications. Because the system responds to blackbody radiation, its improvement is quantified herein using the static performance parameter commonly employed for thermal imaging systems, namely, the minimum detectable temperature difference (MDTD). An experiment was conducted in which the system's MDTD was measured before and after applying the MMW-THz IPFIA, verifying the improved performance that can be realized through its application.

  20. High-quality and small-capacity e-learning video featuring lecturer-superimposing PC screen images

    Science.gov (United States)

    Nomura, Yoshihiko; Murakami, Michinobu; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    2006-10-01

Information processing and communication technology are progressing quickly and spreading throughout various technological fields, and such technology should be applied to improving the quality of e-learning systems. The authors propose a new video-image compression system that ingeniously exploits the characteristic features of the lecturing scene. While the dynamic lecturing scene is shot by a digital video camera, screen images are stored electronically by PC screen-capture software at relatively long intervals during an actual class. A lecturer and a pointing stick are then extracted from the digital video images by pattern-recognition techniques, and the extracted images are superimposed on the appropriate PC screen images by off-line processing. In this way, we have succeeded in creating high-quality, small-capacity (HQ/SC) video-on-demand educational content featuring sharp images, small electronic file size, and realistic lecturer motion.

  1. Direct ultrasound to video registration using photoacoustic markers from a single image pose

    Science.gov (United States)

    Cheng, Alexis; Guo, Xiaoyu; Kang, Hyun Jae; Choti, Michael A.; Kang, Jin U.; Taylor, Russell H.; Boctor, Emad M.

    2015-03-01

Fusion of video and other imaging modalities is common in modern surgical scenarios to provide surgeons with additional information. Doing so requires interventional guidance equipment and surgical navigation systems to register the tools and devices used in surgery with each other. In this work, we focus explicitly on registering ultrasound with a stereocamera system using photoacoustic markers. Previous work has shown that photoacoustic markers can be used to register three-dimensional ultrasound with video, resulting in target registration errors lower than those of currently available systems. Photoacoustic markers are non-collinear laser spots projected onto a surface. They can be visualized simultaneously by a stereocamera system and in an ultrasound volume because of the photoacoustic effect. This work replaces the three-dimensional ultrasound volume with images from a single ultrasound image pose. While an ultrasound volume provides more information than an ultrasound image, it has disadvantages such as higher cost and a slower acquisition rate. However, it is generally difficult to register two-dimensional with three-dimensional spatial data. We propose the use of photoacoustic markers viewed by a convex-array ultrasound transducer. Each photoacoustic marker's wavefront provides information on its elevational position, resulting in three-dimensional spatial data. This development enhances the method's practicality, as convex-array transducers are more common in surgical practice than three-dimensional transducers. The approach is demonstrated on a synthetic phantom. The resulting target registration error for this experiment was 2.47 mm with a standard deviation of 1.29 mm, which is comparable to currently available systems.

  2. Realization of a video-rate distributed aperture millimeter-wave imaging system using optical upconversion

    Science.gov (United States)

    Schuetz, Christopher; Martin, Richard; Dillon, Thomas; Yao, Peng; Mackrides, Daniel; Harrity, Charles; Zablocki, Alicia; Shreve, Kevin; Bonnett, James; Curt, Petersen; Prather, Dennis

    2013-05-01

Passive imaging using millimeter waves (mmWs) has many advantages and applications in the defense and security markets. All terrestrial bodies emit mmW radiation, and these wavelengths are able to penetrate smoke, fog, clouds, marine layers, and even clothing. One primary obstacle to imaging in this spectrum is that longer wavelengths require larger apertures to achieve the resolutions desired for many applications. Accordingly, lens-based focal-plane systems and scanning systems tend to require large-aperture optics, which drive the size and weight of such systems beyond what many applications can support. To overcome this limitation, a distributed-aperture detection scheme is used in which the effective aperture size can be increased without the associated volumetric increase in imager size. This distributed-aperture system is realized by converting the received mmW energy into sidebands on an optical carrier. This conversion serves, in essence, to map the mmW sparse-aperture array signals onto a complementary optical array. The sidebands are subsequently stripped from the optical carrier and recombined to provide a real-time snapshot of the mmW signal. Using this technique, we have constructed a real-time, video-rate imager operating at 75 GHz. A distributed aperture consisting of 220 upconversion channels is used to realize 2.5k pixels with passive sensitivity. Details of the construction and operation of this imager, as well as field-testing results, will be presented herein.

  3. Radiometric Normalization of Large Airborne Image Data Sets Acquired by Different Sensor Types

    Science.gov (United States)

    Gehrke, S.; Beshah, B. T.

    2016-06-01

Generating seamless mosaics of aerial images is a particularly challenging task when the mosaic comprises a large number of images, collected over longer periods of time and with different sensors under varying imaging conditions. Such large mosaics typically consist of very heterogeneous image data, both spatially (different terrain types and atmosphere) and temporally (unstable atmospheric properties and even changes in land coverage). We present a new radiometric normalization or, respectively, radiometric aerial triangulation approach that takes advantage of our knowledge about each sensor's properties. The current implementation supports medium- and large-format airborne imaging sensors of the Leica Geosystems family, namely the ADS line-scanner as well as DMC and RCD frame sensors. A hierarchical modelling, with parameters for the overall mosaic, the sensor type, different flight sessions, strips, and individual images, allows for adaptation to each sensor's geometric and radiometric properties. Additional parameters at different hierarchy levels can compensate for radiometric differences of various origins, making up for shortcomings of the preceding radiometric sensor calibration as well as BRDF and atmospheric corrections. The final, relative normalization is based on radiometric tie points in overlapping images, absolute radiometric control points, and image statistics. It is computed in a global least-squares adjustment for the entire mosaic by altering each image's histogram using a location-dependent mathematical model. This model involves contrast and brightness corrections at radiometric fix points, with bilinear interpolation for corrections in between. The distribution of the radiometric fix points is adaptive to each image and generally increases with image size, hence enabling optimal local adaptation even for very long image strips as typically captured by a line-scanner sensor. The normalization approach is implemented in HxMap software. It has been
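The location-dependent correction model described above, contrast (gain) and brightness (offset) at radiometric fix points with bilinear interpolation in between, can be sketched minimally as follows. The function names and the reduction to a single 2×2 grid of corner fix points are illustrative assumptions; the actual HxMap model is not detailed in the abstract.

```python
def bilerp(q00, q10, q01, q11, fx, fy):
    """Bilinear interpolation of four corner values; fx, fy in [0, 1]."""
    top = q00 * (1 - fx) + q10 * fx
    bot = q01 * (1 - fx) + q11 * fx
    return top * (1 - fy) + bot * fy

def correct_pixel(dn, x, y, width, height, gains, offsets):
    """Apply DN' = gain(x, y) * DN + offset(x, y), where gains/offsets are
    2x2 grids of values at the image corners (a minimal fix-point grid):
    grid[0][0]=top-left, grid[0][1]=top-right,
    grid[1][0]=bottom-left, grid[1][1]=bottom-right."""
    fx, fy = x / (width - 1), y / (height - 1)
    g = bilerp(gains[0][0], gains[0][1], gains[1][0], gains[1][1], fx, fy)
    o = bilerp(offsets[0][0], offsets[0][1], offsets[1][0], offsets[1][1], fx, fy)
    return g * dn + o
```

A denser, image-adaptive grid of fix points (as the abstract describes for long line-scanner strips) generalizes this by interpolating within whichever grid cell contains the pixel.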

  4. VIDEO-BASED POINT CLOUD GENERATION USING MULTIPLE ACTION CAMERAS

    Directory of Open Access Journals (Sweden)

    T. Teo

    2015-05-01

Due to the development of action cameras, the use of video technology for collecting geo-spatial data has become an important trend. The objective of this study is to compare the image mode and video mode of multiple action cameras for 3D point cloud generation. Frame images are acquired from discrete camera stations, while videos are taken along continuous trajectories. The proposed method includes five major parts: (1) camera calibration, (2) video conversion and alignment, (3) orientation modelling, (4) dense matching, and (5) evaluation. As action cameras usually have a large FOV in wide viewing mode, camera calibration plays an important role in correcting the effect of lens distortion before image matching. Once the cameras had been calibrated, the authors used them to take video in an indoor environment. The videos were then converted into multiple frame images based on the frame rates. In order to overcome time-synchronization issues between videos from different viewpoints, an additional timer app is used to determine the time-shift factor between cameras for time alignment. A structure-from-motion (SfM) technique is utilized to obtain the image orientations. Then, the semi-global matching (SGM) algorithm is adopted to obtain dense 3D point clouds. The preliminary results indicate that the 3D points from 4K video are similar to those from 12 MP images, but the data acquisition performance of 4K video is more efficient than that of 12 MP digital images.
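The time-alignment step above maps each camera's video onto a common timeline using the per-camera shift measured with the timer app. A small sketch of that bookkeeping (variable names are our own; the study's exact formulation is not given):

```python
def frame_for_time(t_common, time_shift, fps):
    """Index of the frame in one camera's video corresponding to a
    common-timeline instant t_common (seconds), given that camera's
    clock offset time_shift (seconds) and frame rate fps."""
    return round((t_common - time_shift) * fps)

def aligned_frames(t_common, cameras):
    """cameras: list of (time_shift, fps) tuples, one per viewpoint.
    Returns one synchronized frame index per camera."""
    return [frame_for_time(t_common, shift, fps) for shift, fps in cameras]
```

The synchronized frame sets produced this way can then be fed, like discrete photo stations, into the SfM orientation step.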

  5. Survey lines of the video and photos from the mini-SEABOSS sampling system acquired in Boston Harbor and approaches (surveylines_vid)

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — These data are the trackline from the seafloor photograph and video survey conducted September 2004 using the mini-SeaBOSS sampling system on the R/V Rafael in...

  6. Video-rate bioluminescence imaging of matrix metalloproteinase-2 secreted from a migrating cell.

    Directory of Open Access Journals (Sweden)

    Takahiro Suzuki

BACKGROUND: Matrix metalloproteinase-2 (MMP-2) plays an important role in cancer progression and metastasis. MMP-2 is secreted as a pro-enzyme, which is activated by membrane-bound proteins, and the polarized distribution of secretory and membrane-associated MMP-2 has been investigated. However, real-time visualization of both MMP-2 secretion from the leading edge of a migrating cell and its distribution on the cell surface has not been reported. METHODOLOGY/PRINCIPAL FINDINGS: Video-rate bioluminescence imaging was applied to visualize exocytosis of MMP-2 from a living cell using Gaussia luciferase (GLase) as a reporter. The luminescence signals of GLase were detected by a high-speed electron-multiplying charge-coupled device (EM-CCD) camera with a time resolution within 500 ms per image. A fusion protein of MMP-2 and GLase was expressed in a HeLa cell, and exocytosis of MMP-2 was detected within a few seconds along the leading edge of a migrating HeLa cell. Membrane-associated MMP-2 was observed at specific sites on the bottom side of the cells, suggesting that the sites of MMP-2 secretion are different from those of MMP-2 binding. CONCLUSIONS: We successfully demonstrated, for the first time, the secretory dynamics of MMP-2 and the specific sites of polarized distribution of MMP-2 on the cell surface. Video-rate bioluminescence imaging using GLase is a useful method for investigating the distribution and dynamics of secreted proteins over the whole surface of polarized cells in real time.

  7. Traffic Video Image Segmentation Model Based on Bayesian and Spatio-Temporal Markov Random Field

    Science.gov (United States)

    Zhou, Jun; Bao, Xu; Li, Dawei; Yin, Yongwen

    2017-10-01

Traffic video is a kind of dynamic image whose background and foreground change over time, which results in occlusion; in this case, general methods have difficulty achieving accurate image segmentation. A segmentation algorithm based on Bayesian inference and a spatio-temporal Markov random field (ST-MRF) is put forward. It builds energy-function models of the observation field and the label field for a motion image sequence with the Markov property. Then, according to Bayes' rule, the interaction of the label field and the observation field, that is, the relationship between the label field's prior probability and the observation field's likelihood, yields the maximum a posteriori estimate of the label field's parameters; the ICM model is used to extract the moving object, and the segmentation is thereby completed. Finally, plain ST-MRF segmentation and Bayesian segmentation combined with ST-MRF were compared. Experimental results: the segmentation time of the Bayesian algorithm combined with ST-MRF is shorter than that of ST-MRF alone and its computational workload is small; even in heavy-traffic dynamic scenes, the method achieves a better segmentation effect.
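The ICM (iterated conditional modes) step mentioned above greedily updates each pixel's label to minimize a local energy combining a data term and a smoothness prior. A minimal, generic sketch for a binary label field with a Potts-style prior (this is an illustration of the MAP estimation loop, not the authors' exact energy model):

```python
def icm_segment(obs, lam=1.0, beta=1.0, iters=5):
    """Iterated Conditional Modes for a binary label field.
    Per-pixel energy: lam * [obs != label] (data term)
                    + beta * (# disagreeing 4-neighbors) (smoothness prior).
    obs: 2D list of 0/1 observations; returns the estimated label field."""
    h, w = len(obs), len(obs[0])
    labels = [row[:] for row in obs]  # initialize labels from the observation
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                nbrs = [labels[y + dy][x + dx]
                        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= y + dy < h and 0 <= x + dx < w]
                best, best_e = labels[y][x], float("inf")
                for lab in (0, 1):
                    e = lam * (obs[y][x] != lab) + beta * sum(n != lab for n in nbrs)
                    if e < best_e:
                        best, best_e = lab, e
                labels[y][x] = best  # greedy local minimization
    return labels
```

With the smoothness weight beta comparable to the data weight lam, isolated noisy pixels are absorbed into the surrounding region, which is the behavior the segmentation relies on.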

  8. Cryptanalysis of a spatiotemporal chaotic image/video cryptosystem and its improved version

    Energy Technology Data Exchange (ETDEWEB)

    Ge Xin, E-mail: gexiner@gmail.co [Zhengzhou Information Science and Technology Institute, Zhengzhou 450002, Henan (China); Liu Fenlin; Lu Bin; Wang Wei [Zhengzhou Information Science and Technology Institute, Zhengzhou 450002, Henan (China)

    2011-01-31

Recently, a spatiotemporal chaotic image/video cryptosystem was proposed by Lian. Shortly after its publication, Rhouma et al. proposed two attacks on the cryptosystem; they also introduced an improved cryptosystem that is more secure against these attacks (R. Rhouma, S. Belghith, Phys. Lett. A 372 (2008) 5790). This Letter re-examines the security of Lian's cryptosystem and its improved version, showing that not all details of the ciphered image of Lian's cryptosystem can be recovered by Rhouma et al.'s attacks, owing to the incorrectly recovered part of the sign-bits of the AC coefficients when the image is chosen inappropriately. As a result, modifications of Rhouma et al.'s attacks are proposed that recover the ciphered image of Lian's cryptosystem completely; based on these modifications, two new attacks are then proposed to break the improved version of Lian's cryptosystem. Finally, experimental results illustrate the validity of our analysis.

  9. Approximate Circuits in Low-Power Image and Video Processing: The Approximate Median Filter

    Directory of Open Access Journals (Sweden)

    L. Sekanina

    2017-09-01

Low-power image and video processing circuits are crucial in many applications of computer vision. Traditional techniques used to reduce power consumption in these applications have recently been accompanied by circuit approximation methods, which exploit the fact that these applications are highly error resilient and, hence, that the quality of image processing can be traded for power consumption. On the basis of a literature survey, we identified the components whose implementations are the most frequently approximated and the methods used for obtaining these approximations. One of these components is the median image filter. We propose, evaluate, and compare two approximation strategies based on Cartesian genetic programming applied to approximate various common implementations of the median filter. For filters developed using these approximation strategies, trade-offs between filtering quality and power consumption are investigated. Under the conditions of our experiments, we conclude that better trade-offs are achieved when the image filter is evolved from scratch rather than when a conventional filter is approximated.
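To make the trade-off concrete: an exact 3×3 median needs a full sort (or sorting network) over nine pixels, while a common hand-crafted approximation replaces it with a cheaper pseudo-median. The sketch below contrasts the two; it is a generic illustration of the approximation idea, not the evolved circuits from the paper.

```python
def median9(window):
    """Exact median of a 3x3 window given as a flat list of 9 values."""
    return sorted(window)[4]

def approx_median9(window):
    """Cheaper pseudo-median: median of the three row medians.
    Uses far fewer comparisons than a full 9-input sort; the result can
    deviate from the exact median, which error-resilient image
    processing tolerates in exchange for lower power."""
    med3 = lambda a, b, c: sorted((a, b, c))[1]
    r0 = med3(window[0], window[1], window[2])
    r1 = med3(window[3], window[4], window[5])
    r2 = med3(window[6], window[7], window[8])
    return med3(r0, r1, r2)
```

Measuring the output-quality drop of such an approximation against its comparison count mirrors, in miniature, the quality-versus-power trade-off the survey investigates.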

  10. Reliability of Calf Bioelectrical Impedance Spectroscopy and Magnetic-Resonance-Imaging-Acquired Skeletal Muscle Hydration Measures in Healthy People

    Directory of Open Access Journals (Sweden)

    Anuradha Sawant

    2013-01-01

Purpose. The purpose of this study was to investigate the test-retest reliability, relative variability, and agreement between calf bioelectrical impedance spectroscopy (cBIS)-acquired extracellular fluid (ECF), intracellular fluid (ICF), total water, and the ECF : ICF ratio, and magnetic resonance imaging (MRI)-acquired transverse relaxation times (T2) and apparent diffusion coefficients (ADC) of the calf muscles of the same segment in healthy individuals. Methods. Muscle hydration measures were collected from 32 healthy individuals on two occasions and analyzed by a single rater. On both occasions, MRI measures were collected from the tibialis anterior (TA), medial (MG) and lateral gastrocnemius (LG), and soleus muscles, following the cBIS data acquired using a XiTRON Hydra 4200 BIS device. The intraclass correlation coefficients (ICC(2,1)), coefficients of variation (CV), and agreement between the MRI and cBIS data were calculated. Results. ICC(2,1) values for cBIS, T2, and ADC ranged from 0.56 to 0.92, 0.96 to 0.99, and 0.05 to 0.56, respectively. Relative variability between measures (CV) ranged from 14.6 to 25.6% for the cBIS data and from 4.2 to 10.0% for the MRI-acquired data. The ECF : ICF ratio significantly predicted the T2 of the TA and soleus muscles. Conclusion. MRI-acquired T2 had the highest test-retest reliability among the muscle hydration measures, with the least error and variation on repeated testing. Hence, the T2 of a muscle is the most reliable and stable outcome measure for evaluating individual muscle hydration.
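Test-retest reliability here is quantified with ICC(2,1), the two-way random-effects, absolute-agreement, single-measurement intraclass correlation. A minimal implementation of the standard Shrout-Fleiss formula (written from the textbook definition, not from the authors' analysis code):

```python
def icc_2_1(data):
    """ICC(2,1) for a table data[subject][session/rater].
    Two-way random effects, absolute agreement, single measurement:
      ICC = (MSR - MSE) / (MSR + (k-1)*MSE + k*(MSC - MSE)/n)
    where MSR/MSC/MSE are the row (subject), column (session) and
    residual mean squares from a two-way ANOVA without replication."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

With two sessions per subject (k = 2), values near 1 indicate the near-perfect repeatability reported above for the MRI T2 measures, while lower values reflect the larger session-to-session variation of the cBIS and ADC measures.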

  11. Higher-order singular value decomposition-based discrete fractional random transform for simultaneous compression and encryption of video images

    Science.gov (United States)

    Wang, Qingzhu; Chen, Xiaoming; Zhu, Yihai

    2017-09-01

    Existing image compression and encryption methods have several shortcomings: they have low reconstruction accuracy and are unsuitable for three-dimensional (3D) images. To overcome these limitations, this paper proposes a tensor-based approach adopting tensor compressive sensing and tensor discrete fractional random transform (TDFRT). The source video images are measured by three key-controlled sensing matrices. Subsequently, the resulting tensor image is further encrypted using 3D cat map and the proposed TDFRT, which is based on higher-order singular value decomposition. A multiway projection algorithm is designed to reconstruct the video images. The proposed algorithm can greatly reduce the data volume and improve the efficiency of the data transmission and key distribution. The simulation results validate the good compression performance, efficiency, and security of the proposed algorithm.

  12. Comparison Of Processing Time Of Different Size Of Images And Video Resolutions For Object Detection Using Fuzzy Inference System

    Directory of Open Access Journals (Sweden)

    Yogesh Yadav

    2017-01-01

Object detection with small computational cost and processing time is a necessity in diverse domains such as traffic analysis, security cameras, and video surveillance. With current advances in technology and decreasing prices of image sensors and video cameras, the resolution of captured images exceeds 1 MP, with higher frame rates. This implies a considerable data volume that must be processed in a very short time when real-time operation and data processing are needed. High-performance real-time video processing can be achieved with GPU technology. The aim of this study is to evaluate the influence of different image and video resolutions on the processing time, the number of object detections, and the accuracy of the detected objects. The MOG2 algorithm is used for processing video input data with the GPU module. A fuzzy inference system is used to evaluate the accuracy of the number of detected objects and to show the difference between CPU and GPU computing methods.
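The MOG2 detector used above models each background pixel with a mixture of Gaussians and flags large deviations as foreground. As a much-simplified, dependency-free stand-in for that idea, a single exponential running-average background with a fixed threshold can be sketched (this is not MOG2 itself, only the underlying background-subtraction principle):

```python
def update_background(bg, frame, alpha=0.05):
    """Exponential running-average background model:
    bg' = (1 - alpha) * bg + alpha * frame, per pixel.
    A much-simplified stand-in for the per-pixel Gaussian mixture of MOG2."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=25):
    """Pixels deviating from the background by more than thresh are foreground."""
    return [[1 if abs(f - b) > thresh else 0 for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]
```

Because both functions are per-pixel and independent, they parallelize trivially, which is why background subtraction benefits so strongly from the GPU execution evaluated in the study.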

  13. Video-mosaicking of in vivo reflectance confocal microscopy images for noninvasive examination of skin lesion (Conference Presentation)

    Science.gov (United States)

    Kose, Kivanc; Gou, Mengran; Yelamos, Oriol; Cordova, Miguel A.; Rossi, Anthony; Nehal, Kishwer S.; Camps, Octavia I.; Dy, Jennifer G.; Brooks, Dana H.; Rajadhyaksha, Milind

    2017-02-01

In this report we describe a computer-vision-based pipeline to convert in vivo reflectance confocal microscopy (RCM) videos collected with a handheld system into large field of view (FOV) mosaics. For many applications, such as imaging of hard-to-access lesions, intraoperative assessment of Mohs margins, or delineation of lesion margins beyond clinical borders, raster-scan-based mosaicing techniques have clinically significant limitations. In such cases, clinicians often capture RCM videos by freely moving a handheld microscope over the area of interest, but the resulting videos lose large-scale spatial relationships. Videomosaicking is a standard computational imaging technique to register and stitch together consecutive frames of videos into large-FOV, high-resolution mosaics. However, mosaicing RCM videos collected in vivo has unique challenges: (i) tissue may deform or warp due to physical contact with the microscope objective lens, (ii) discontinuities or "jumps" between consecutive images and motion blur artifacts may occur due to manual operation of the microscope, and (iii) optical sectioning and resolution may vary between consecutive images due to scattering and aberrations induced by changes in imaging depth and tissue morphology. We addressed these challenges by adapting or developing new algorithmic methods for videomosaicking, specifically by modeling non-rigid deformations, followed by automatically detecting discontinuities (cut locations) and, finally, applying a data-driven image stitching approach that fully preserves resolution and tissue morphologic detail without imposing arbitrary pre-defined boundaries. We will present example mosaics obtained by clinical imaging of both melanoma and non-melanoma skin cancers. The ability to combine freehand mosaicing for handheld microscopes with preserved cellular resolution will have high-impact applications in diverse clinical settings, including low-resource healthcare systems.

  14. Series of aerial images over Quivira National Wildlife Refuge, acquired September, 1950

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This data set is composite of original black and white series images obtained from Earth Explorer (USGS) on September 23rd, 1950. The original photos were...

  15. Multi-Parameter Ensemble Learning for Automated Vertebral Body Segmentation in Heterogeneously Acquired Clinical MR Images.

    Science.gov (United States)

    Gaonkar, Bilwaj; Xia, Yihao; Villaroman, Diane S; Ko, Allison; Attiah, Mark; Beckett, Joel S; Macyszyn, Luke

    2017-01-01

The development of quantitative imaging biomarkers in medicine requires automatic delineation of relevant anatomical structures using available imaging data. However, this task is complicated in clinical medicine due to the variation in scanning parameters and protocols, even within a single medical center. Existing literature on automatic image segmentation using MR data is based on the analysis of highly homogeneous images obtained using a fixed set of pulse sequence parameters (TR/TE). Unfortunately, algorithms that operate on fixed scanning parameters do not lend themselves to real-world daily clinical use due to the existing variation in scanning parameters and protocols. Thus, it is necessary to develop algorithmic techniques that can address the challenge of MR image segmentation using real clinical data. Toward this goal, we developed a multi-parametric ensemble learning technique to automatically detect and segment lumbar vertebral bodies using MR images of the spine. We use spine imaging data to illustrate our techniques since low back pain is an extremely common condition and a typical spine clinic evaluates patients that have been referred with a wide range of scanning parameters. This method was designed with special emphasis on robustness so that it can perform well despite the inherent variation in scanning protocols. Specifically, we show how a single multi-parameter ensemble model trained with manually labeled T2 scans can autonomously segment vertebral bodies on scans with echo times varying between 24 and 147 ms and relaxation times varying between 1500 and 7810 ms. Furthermore, even though the model was trained using T2-MR imaging data, it can accurately segment vertebral bodies on T1-MR and CT, further demonstrating the robustness and versatility of our methodology. We believe that robust segmentation techniques, such as the one presented here, are necessary for translating computer assisted diagnosis into everyday clinical practice.

  16. Request to Release CEV Orion TSP Images Acquired at AEDC Tunnel 9

    Science.gov (United States)

    Norris, Joe; Austin, Jason

    2010-01-01

This document reviews the images that are requested for release from the Temperature Sensitive Paint tests of the Crew Exploration Vehicle (CEV) Orion that were conducted at the Arnold Engineering Development Center wind tunnel. Included are a description of the data, sample images, and graphs showing (1) the thermocouple temperature history under the paint layer at the location where I/Iref is provided, and (2) the surface I/Iref history over the thermocouple.

  17. Image and video based remote target localization and tracking on smartphones

    Science.gov (United States)

    Wang, Qia; Lobzhanidze, Alex; Jang, Hyun; Zeng, Wenjun; Shang, Yi; Yang, Jingyu

    2012-06-01

    Smartphones are becoming popular nowadays not only because of their communication functionality but also, more importantly, their powerful sensing and computing capabilities. In this paper, we describe a novel and accurate image- and video-based remote target localization and tracking system using Android smartphones, leveraging their built-in sensors such as the camera, digital compass, and GPS. Even though many other distance estimation or localization devices are available, our all-in-one, easy-to-use localization and tracking system on low-cost, commodity smartphones is the first of its kind. Furthermore, our system takes advantage of the smartphone's user-friendly interface to combine low complexity with high accuracy. Our experimental results show that our system works accurately and efficiently.
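
    The core geometric step such a system needs, deriving a target's coordinates from the phone's GPS fix, compass bearing, and an image-based distance estimate, can be sketched as follows. This is a minimal flat-earth approximation valid for short ranges; the function name and the fixed Earth radius are our own assumptions, not details from the paper.

```python
import math

def locate_target(lat, lon, bearing_deg, distance_m):
    """Estimate target lat/lon from an observer's GPS fix, a compass
    bearing (degrees clockwise from north), and an estimated distance.
    Uses a small-distance flat-earth approximation."""
    R = 6371000.0  # mean Earth radius in metres (assumed constant)
    brg = math.radians(bearing_deg)
    dlat = distance_m * math.cos(brg) / R
    dlon = distance_m * math.sin(brg) / (R * math.cos(math.radians(lat)))
    return lat + math.degrees(dlat), lon + math.degrees(dlon)
```

    For example, a target sighted due east (bearing 90°) at about 111.2 km from the equator lands roughly one degree of longitude away.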

  18. [Sexuality and the human body: the subject's view through video images].

    Science.gov (United States)

    Vargas, E; Siqueira, V H

    1999-11-01

    This study analyzes images of the body linked to sexual and reproductive behavior found in the communication processes mediated by so-called educational videos. In the relationship between subject and technology, the paper is intended to characterize the discourses and the view or perspective currently shaping health education practices. Focusing on the potential in the relationship between the enunciator and subjects represented in the text and the interaction between health professionals and messages, the study attempts to characterize the discourses and questions providing the basis for a given view of the body and sexuality. The study was conducted in the years 1996-1997 and focused on health professionals from the public health system. The results show a concept of sexuality that tends to generalize the meaning ascribed to sexual experience, ignoring the various ways by which different culturally defined groups attribute meaning to the body.

  19. Telesonography: virtual 3D image processing of remotely acquired abdominal, vascular, and fetal sonograms.

    Science.gov (United States)

    Arbeille, Ph; Fornage, B; Boucher, A; Ruiz, J; Georgescu, M; Blouin, J; Cristea, J; Carles, G; Farin, F; Vincent, N

    2014-02-01

    The aim was to design and test a new telesonography technique in which volumes are acquired remotely by untrained operators in locations without access to trained sonographers, with postprocessing and interpretation done at expert centers. The technique was tested with 84 sonograms of organs acquired in pregnant women (n = 8) and patients with various abdominal pathologic conditions (n = 11) located in French Guyana (France), Ceuta (Spain), and Murighiol (Romania). An operator inexperienced in sonography (US) placed the transducer over the predetermined acoustic window for each organ, then swept it from a -45° to a +45° position to scan the targeted organ. The acquired volume dataset was sent to an expert center via the Internet and reconstructed using proprietary software, which allowed a trained sonographer to navigate through the appropriately reconstructed sonograms. After three-dimensional processing at the expert center, the organs scanned in the obstetrical cases were adequately visualized by the expert in seven of eight (88%) examinations of the fetal head, femur, and umbilical cord and eight of eight (100%) examinations of the fetal abdomen and placenta, whereas in the general abdominal cases, the liver, gallbladder, portal vein, and right kidney were correctly visualized in 10 of 11 (91%) examinations. Telesonography allowed untrained operators to scan and transfer US volume datasets over the Internet to an expert center, where an expert sonographer could navigate through the reconstructed US volume and visualize sonograms of diagnostic quality. Copyright © 2013 Wiley Periodicals, Inc.

  20. An investigation of gantry angle data accuracy for cine-mode EPID images acquired during arc IMRT.

    Science.gov (United States)

    McCowan, Peter M; Rickey, Daniel W; Rowshanfarzad, Pejman; Greer, Peter B; Ansbacher, William; McCurdy, Boyd M

    2014-01-06

    EPID images acquired in cine mode during arc therapy have inaccurate gantry angles recorded in their image headers. In this work, methods were developed to assess the accuracy of the gantry potentiometer for linear accelerators, and the accuracy of other, more accessible sources of gantry angle information (treatment log files and EPID image headers) was also assessed. The methods used in this study are generally applicable to any linear accelerator and have been demonstrated here with Clinac/Trilogy systems. Gantry angle data were simultaneously acquired using three methods: i) a direct gantry potentiometer measurement, ii) an incremental rotary encoder, and iii) a custom-made radiographic gantry-angle phantom which produced unique wire intersections as a function of gantry angle. All methods were compared to gantry angle data from the EPID image header and the linac MLC DynaLog file. The encoder and gantry-angle phantom were used to validate the accuracy of the linac's potentiometer, and the EPID image header and DynaLog file gantry angles were compared to the potentiometer. The mean angle differences of the encoder and gantry-angle phantom with the potentiometer were 0.13° ± 0.14° and 0.10° ± 0.30°, respectively. The EPID image header angles analyzed in this study were within ±1° of the potentiometer angles only 35% of the time; in some cases, they disagreed by as much as 3°. A time delay in frame acquisition was determined using the continuous acquisition mode of the EPID. After correcting for this time delay, 75% of the header angles, on average, were within ±1° of the true gantry angle, compared with an average of only 35% without the correction. Applying a boxcar smoothing filter to the corrected gantry angles further improved the accuracy of the header-derived angles to within ±1° for almost all images (99.4%). An angle accuracy of 0.11°
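
    The delay-then-smooth correction described in the abstract can be illustrated as follows. This is a minimal NumPy sketch, not the authors' implementation: the function name, the linear-interpolation resampling, and the default window size are our assumptions.

```python
import numpy as np

def correct_header_angles(header_angles, frame_times, delay_s, window=5):
    """Shift EPID header gantry angles to compensate for a fixed
    frame-acquisition time delay, then smooth with a boxcar
    (moving-average) filter. Angles in degrees."""
    # Resample each angle back to the time the frame was actually acquired.
    corrected = np.interp(frame_times - delay_s, frame_times, header_angles)
    # Boxcar smoothing with edge padding so the output keeps its length.
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(corrected, pad, mode='edge')
    return np.convolve(padded, kernel, mode='valid')
```

    On a constant-speed arc (a linear angle-vs-time ramp), the correction simply subtracts delay times gantry speed, and boxcar smoothing leaves the interior values unchanged.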

  1. Three-dimensional tomographic imaging for dynamic radiation behavior study using infrared imaging video bolometers in large helical device plasma

    Energy Technology Data Exchange (ETDEWEB)

    Sano, Ryuichi; Iwama, Naofumi [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki, Gifu 509-5292 (Japan); Peterson, Byron J.; Kobayashi, Masahiro; Mukai, Kiyofumi [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki, Gifu 509-5292 (Japan); SOKENDAI (The Graduate University for Advanced Studies), Hayama, Kanagawa 240-0193 (Japan); Teranishi, Masaru [Hiroshima Institute of Technology, 2-1-1, Miyake, Saeki-ku, Hiroshima 731-5193 (Japan); Pandya, Shwetang N. [Institute of Plasma Research, Near Indira Bridge, Bhat Village, Gandhinagar, Gujarat 382428 (India)

    2016-05-15

    A three-dimensional (3D) tomography system using four InfraRed imaging Video Bolometers (IRVBs) has been designed with a helical periodicity assumption for the purpose of plasma radiation measurement in the large helical device. For the spatial inversion of large sized arrays, the system has been numerically and experimentally examined using the Tikhonov regularization with the criterion of minimum generalized cross validation, which is the standard solver of inverse problems. The 3D transport code EMC3-EIRENE for impurity behavior and related radiation has been used to produce phantoms for numerical tests, and the relative calibration of the IRVB images has been carried out with a simple function model of the decaying plasma in a radiation collapse. The tomography system can respond to temporal changes in the plasma profile and identify the 3D dynamic behavior of radiation, such as the radiation enhancement that starts from the inboard side of the torus, during the radiation collapse. The reconstruction results are also consistent with the output signals of a resistive bolometer. These results indicate that the designed 3D tomography system is available for the 3D imaging of radiation. The first 3D direct tomographic measurement of a magnetically confined plasma has been achieved.

  2. Capturing and displaying microscopic images used in medical diagnostics and forensic science using 4K video resolution – an application in higher education

    NARCIS (Netherlands)

    Jan Kuijten; Ajda Ortac; Hans Maier; Gert de Heer

    2015-01-01

    To analyze, interpret and evaluate microscopic images, used in medical diagnostics and forensic science, video images for educational purposes were made with a very high resolution of 4096 × 2160 pixels (4K), which is four times as many pixels as High-Definition Video (1920 × 1080 pixels).

  3. Optimal angular dose distribution to acquire 3D and extra 2D images for digital breast tomosynthesis (DBT)

    Science.gov (United States)

    Park, Hye-Suk; Kim, Ye-Seul; Lee, Haeng-Hwa; Gang, Won-Suk; Kim, Hee-Joung; Choi, Young-Wook; Choi, JaeGu

    2015-08-01

    The purpose of this study is to determine the optimal non-uniform angular dose distribution to improve the quality of the 3D reconstructed images and to acquire extra 2D projection images. In this analysis, 7 acquisition sets were generated by using four different values for the number of projections (11, 15, 21, and 29) and total angular range (±14°, ±17.5°, ±21°, and ±24.5°). For all acquisition sets, the zero-degree projection was used as the 2D image, which was close to that of standard conventional mammography (CM). Exposures of 50, 100, 150, and 200 mR were used for the zero-degree projection, and the remaining dose was distributed over the remaining projection angles. To quantitatively evaluate image quality, we computed the CNR (contrast-to-noise ratio) and the ASF (artifact spread function) for the same radiation dose. The results indicate that, for microcalcifications, acquisition sets with approximately 4 times higher exposure on the zero-degree projection than the average exposure for the remaining projection angles yielded CNR values 3% higher than the uniform distribution. However, very high dose concentrations toward the zero-degree projection may reduce the quality of the reconstructed images due to increasing noise in the peripheral views. The zero-degree projection of the non-uniform dose distribution offers a 2D image similar to that of standard CM, but with a significantly lower radiation dose. Therefore, the diagnostic potential of the extra 2D projection image should be evaluated when diagnosing breast cancer using 3D images with non-uniform angular dose distributions.
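
    The CNR figure of merit used in such comparisons can be computed as below. This is a generic definition (mean signal minus mean background over background noise); the exact form used by the authors may differ, e.g. in the noise term.

```python
import numpy as np

def cnr(image, signal_mask, background_mask):
    """Contrast-to-noise ratio:
    |mean(signal) - mean(background)| / std(background)."""
    sig = image[signal_mask].mean()
    bg = image[background_mask]
    return abs(sig - bg.mean()) / bg.std()
```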

  4. Review of passive-blind detection in digital video forgery based on sensing and imaging techniques

    Science.gov (United States)

    Tao, Junjie; Jia, Lili; You, Ying

    2016-01-01

    Advances in digital video compression and IP communication technologies have raised new issues and challenges concerning the integrity and authenticity of surveillance videos. It is therefore important that the system ensure that, once recorded, a video cannot be altered, so that the audit trail remains intact for evidential purposes. This paper gives an overview of passive techniques of digital video forensics, which are based on intrinsic fingerprints inherent in digital surveillance videos. We performed a thorough survey of the literature on video manipulation detection methods that accomplish blind authentication without referring to any auxiliary information. We present a review of the various existing methods; much work remains to be done in this field of video forensics based on video data analysis and observation of surveillance systems.

  5. OPTIMISATION OF OCCUPATIONAL RADIATION PROTECTION IN IMAGE-GUIDED INTERVENTIONS: EXPLORING VIDEO RECORDINGS AS A TOOL IN THE PROCESS.

    Science.gov (United States)

    Almén, Anja; Sandblom, Viktor; Rystedt, Hans; von Wrangel, Alexa; Ivarsson, Jonas; Båth, Magnus; Lundh, Charlotta

    2016-06-01

    The overall purpose of this work was to explore how video recordings can contribute to the process of optimising occupational radiation protection in image-guided interventions. Video-recorded material from two image-guided interventions was produced and used to investigate to what extent it is possible to observe and assess dose-affecting actions in video recordings. Using the recorded material, it was to some extent possible to connect the choice of imaging techniques to the medical events during the procedure and, to a lesser extent, to connect these technical and medical issues to the occupational exposure. It was possible to identify a relationship between the occupational exposure level to staff and their positioning and use of shielding. However, detailed values of the dose rates were not possible to observe in the recordings, and the change in occupational exposure level from adjustments of exposure settings was not possible to identify. In conclusion, the use of video recordings is a promising tool to identify dose-affecting instances, allowing for a deeper knowledge of the interdependency between the management of the medical procedure, the applied imaging technology and the occupational exposure level. However, for full information about the dose-affecting actions, the equipment used and the recording settings have to be thoroughly planned. © The Author 2016. Published by Oxford University Press. All rights reserved.

  6. Robust spectral analysis of videocapsule images acquired from celiac disease patients

    Directory of Open Access Journals (Sweden)

    Bhagat Govind

    2011-09-01

    Background: Dominant frequency (DF) analysis of videocapsule endoscopy images is a new method to detect small intestinal periodicities that may result from mechanical rhythms such as peristalsis. Longer periodicity is related to greater image texture at areas of villous atrophy in celiac disease. However, extraneous features and spatiotemporal phase shift may mask DF rhythms. Methods: The robustness of Fourier and ensemble-averaging spectral analysis to compute the DF was tested. Videocapsule images from the distal duodenum of 11 celiac patients (frame rate 2/s, pixel resolution 576 × 576) were analyzed. For patients 1, 2, ..., 11, respectively, a total of 10, 11, ..., 20 sequential images were extracted from a randomly selected time epoch. Each image sequence was artificially repeated to 200 frames, simulating periodicities of 0.2, 0.18, ..., 0.1 Hz, respectively. Random white noise at four different levels, spatiotemporal phase shift, and frames with air bubbles were added. Power spectra were constructed pixel-wise over 200 frames, and an average spectrum was computed from the 576 × 576 individual spectra. The largest spectral peak in the average spectrum was the estimated DF. Error was defined as the absolute difference between the actual and estimated DF. Results: For Fourier analysis, the mean absolute error between estimated and actual DF was 0.032 ± 0.052 Hz. Error increased with greater degrees of imposed random noise. In contrast, all ensemble-average estimates precisely predicted the simulated DF. Conclusions: The ensemble-average DF estimate of videocapsule images with simulated periodicity is robust to noise and spatiotemporal phase shift as compared with Fourier analysis. Accurate estimation of DF eliminates the need to impose complex masking, extraction, and/or corrective preprocessing measures.
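
    The ensemble-average estimator the authors favor, averaging the pixel-wise power spectra before peak-picking, can be sketched as follows. This is a minimal NumPy illustration; the array layout and the DC-bin handling are our assumptions, not the paper's code.

```python
import numpy as np

def ensemble_dominant_frequency(frames, fs=2.0):
    """Estimate the dominant frequency of an image sequence by
    computing a power spectrum per pixel over time, averaging the
    spectra (ensemble average), and then picking the largest peak.
    frames: array of shape (T, H, W); fs: frame rate in Hz."""
    t = frames - frames.mean(axis=0)            # remove per-pixel DC
    spectra = np.abs(np.fft.rfft(t, axis=0)) ** 2
    avg = spectra.reshape(spectra.shape[0], -1).mean(axis=1)
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fs)
    return freqs[np.argmax(avg[1:]) + 1]        # skip the DC bin
```

    Because power spectra discard phase, a per-pixel phase shift in the rhythm does not spread the ensemble-averaged peak, which is the robustness property the abstract reports.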

  7. Image Segmentation and Feature Extraction for Recognizing Strokes in Tennis Game Videos

    NARCIS (Netherlands)

    Zivkovic, Z.; van der Heijden, Ferdinand; Petkovic, M.; Jonker, Willem; Langendijk, R.L.; Heijnsdijk, J.W.J.; Pimentel, A.D.; Wilkinson, M.H.F.

    This paper addresses the problem of recognizing human actions from video. Particularly, the case of recognizing events in tennis game videos is analyzed. Driven by our domain knowledge, a robust player segmentation algorithm is developed for real video data. Further, we introduce a number of novel

  8. Series of aerial images over Quivira National Wildlife Refuge, acquired October, 1938

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This data set is composite of original black and white series images obtained from Earth Explorer (USGS) on October 1st, 5th and 12th, 1938. The original photos were...

  9. Video-rate in vivo fluorescence imaging with a line-scanned dual-axis confocal microscope

    Science.gov (United States)

    Chen, Ye; Wang, Danni; Khan, Altaz; Wang, Yu; Borwege, Sabine; Sanai, Nader; Liu, Jonathan T. C.

    2015-10-01

    Video-rate optical-sectioning microscopy of living organisms would allow for the investigation of dynamic biological processes and would also reduce motion artifacts, especially for in vivo imaging applications. Previous feasibility studies, with a slow stage-scanned line-scanned dual-axis confocal (LS-DAC) microscope, have demonstrated that LS-DAC microscopy is capable of imaging tissues with subcellular resolution and high contrast at moderate depths of up to several hundred microns. However, the sensitivity and performance of a video-rate LS-DAC imaging system, with low-numerical aperture optics, have yet to be demonstrated. Here, we report on the construction and validation of a video-rate LS-DAC system that possesses sufficient sensitivity to visualize fluorescent contrast agents that are topically applied or systemically delivered in animal and human tissues. We present images of murine oral mucosa that are topically stained with methylene blue, and images of protoporphyrin IX-expressing brain tumor from glioma patients that have been administered 5-aminolevulinic acid prior to surgery. In addition, we demonstrate in vivo fluorescence imaging of red blood cells trafficking within the capillaries of a mouse ear, at frame rates of up to 30 fps. These results can serve as a benchmark for miniature in vivo microscopy devices under development.

  10. Surgical tool detection in cataract surgery videos through multi-image fusion inside a convolutional neural network.

    Science.gov (United States)

    Al Hajj, Hassan; Lamard, Mathieu; Charriere, Katia; Cochener, Beatrice; Quellec, Gwenole

    2017-07-01

    The automatic detection of surgical tools in surgery videos is a promising solution for surgical workflow analysis. It paves the way to various applications, including surgical workflow optimization, surgical skill evaluation, and real-time warning generation. A solution based on convolutional neural networks (CNNs) is proposed in this paper. Unlike existing solutions, the proposed CNN does not analyze images independently; it analyzes sequences of consecutive images. Features extracted from each image by the CNN are fused inside the network using the optical flow. For improved performance, this multi-image fusion strategy is also applied while training the CNN. The proposed framework was evaluated on a dataset of 30 cataract surgery videos (6 hours of video). Ten tool categories were defined by surgeons. The proposed system was able to detect each of these categories with a high area under the ROC curve (0.953 ≤ Az ≤ 0.987). The proposed detector, based on multi-image fusion, was significantly more sensitive and specific than a similar system analyzing images independently (p = 2.98 × 10⁻⁶ and p = 2.07 × 10⁻³, respectively).

  11. Diffusion tensor imaging of the auditory nerve in patients with acquired single-sided deafness

    DEFF Research Database (Denmark)

    Vos, Sjoerd; Haakma, Wieke; Versnel, Huib

    2015-01-01

    …following cochlear hair cell loss, and the amount of degeneration may considerably differ between the two ears, also in patients with bilateral deafness. A measure that reflects the nerve's condition would help to assess the best of both nerves and decide accordingly which ear should be implanted for optimal benefit from a CI. Diffusion tensor MRI (DTI) may provide such a measure, by allowing noninvasive investigation of the nerve's microstructure. In this pilot study, we show the first use of DTI to image the auditory nerve in five normal-hearing subjects and five patients with long-term profound single-sided sensorineural hearing loss. A specialized acquisition protocol was designed for a 3 T MRI scanner to image the small nerve bundle. The nerve was reconstructed using fiber tractography, and DTI metrics, which reflect the nerve's microstructural properties, were computed per tract. Comparing…

  12. Electronic spreadsheet to acquire the reflectance from the TM and ETM+ Landsat images

    Directory of Open Access Journals (Sweden)

    Antonio R. Formaggio

    2005-08-01

    The reflectance of agricultural crops and other terrestrial surface "targets" is an intrinsic parameter of these targets, so in many situations it must be used instead of the "gray level" values found in satellite images. In order to obtain reflectance values, it is necessary to eliminate the atmospheric interference and to make a set of calculations that uses sensor parameters and information regarding the original image. Automating this procedure has the advantage of speeding up the process and reducing the possibility of errors during the calculations. The objective of this paper is to present an electronic spreadsheet that simplifies and automates the transformation of the digital numbers of TM/Landsat-5 and ETM+/Landsat-7 images into reflectance. The method employed for atmospheric correction was dark object subtraction (DOS). The electronic spreadsheet described here is freely available to users and can be downloaded at the following website: http://www.dsr.inpe.br/Calculo_Reflectancia.xls.
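
    The DN-to-reflectance conversion such a spreadsheet automates can be sketched as follows: the standard top-of-atmosphere reflectance formula combined with a simple DOS haze removal. The gain/offset and ESUN values are band-specific calibration constants from the Landsat handbooks; the function name and the simplified haze term (full subtraction of the dark object's radiance) are our assumptions, not necessarily the paper's exact formulation.

```python
import math

def toa_reflectance_dos(dn, gain, offset, esun, sun_elev_deg, d_au, dark_dn=0):
    """Convert a Landsat TM/ETM+ digital number to reflectance with a
    dark-object subtraction (DOS) atmospheric correction.
    Radiance L = gain*DN + offset; the radiance of the darkest object
    in the scene (dark_dn) is treated as path radiance and removed."""
    L = gain * dn + offset
    L_haze = gain * dark_dn + offset
    theta = math.radians(90.0 - sun_elev_deg)   # solar zenith angle
    return math.pi * (L - L_haze) * d_au**2 / (esun * math.cos(theta))
```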

  13. A Review on Video/Image Authentication and Tamper Detection Techniques

    Science.gov (United States)

    Parmar, Zarna; Upadhyay, Saurabh

    2013-02-01

    With the innovation and development of sophisticated video editing technology and the wide spread of video information and services in our society, it is becoming increasingly important to assure the trustworthiness of video information. Therefore, in surveillance, medical, and various other fields, video contents must be protected against attempts to manipulate them, since such malicious alterations could affect decisions based on these videos. Many techniques that assure the authenticity of video information, each in its own way, have been proposed in the literature. In this paper we present a brief survey of video authentication techniques with their classification. These techniques are generally classified into the following categories: digital signature based techniques, watermark based techniques, and other authentication techniques.

  14. CREATION OF 3D MODELS FROM LARGE UNSTRUCTURED IMAGE AND VIDEO DATASETS

    Directory of Open Access Journals (Sweden)

    J. Hollick

    2013-05-01

    Exploration of various places using low-cost camera solutions over decades, without a photogrammetric application in mind, has resulted in large collections of images and videos that may have significant cultural value. The purpose of collecting this data is often to provide a log of events, and therefore the data is often unstructured and of varying quality. Depending on the equipment used, approximate location data may be available for the images, but its accuracy may also vary. In this paper we present an approach that can deal with these conditions and process datasets of this type to produce 3D models. Results from processing the dataset collected during the discovery and subsequent exploration of the HMAS Sydney and HSK Kormoran wreck sites show the potential of our approach. The results are promising and show that there is potential to retrieve significantly more information from many of these datasets than previously thought possible.

  15. Jointly optimized spatial prediction and block transform for video and image coding.

    Science.gov (United States)

    Han, Jingning; Saxena, Ankur; Melkote, Vinay; Rose, Kenneth

    2012-04-01

    This paper proposes a novel approach to jointly optimize spatial prediction and the choice of the subsequent transform in video and image compression. Under the assumption of a separable first-order Gauss-Markov model for the image signal, it is shown that the optimal Karhunen-Loeve Transform, given available partial boundary information, is well approximated by a close relative of the discrete sine transform (DST), with basis vectors that tend to vanish at the known boundary and maximize energy at the unknown boundary. The overall intraframe coding scheme thus switches between this variant of the DST named asymmetric DST (ADST), and traditional discrete cosine transform (DCT), depending on prediction direction and boundary information. The ADST is first compared with DCT in terms of coding gain under ideal model conditions and is demonstrated to provide significantly improved compression efficiency. The proposed adaptive prediction and transform scheme is then implemented within the H.264/AVC intra-mode framework and is experimentally shown to significantly outperform the standard intra coding mode. As an added benefit, it achieves substantial reduction in blocking artifacts due to the fact that the transform now adapts to the statistics of block edges. An integer version of this ADST is also proposed.
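
    The "close relative of the DST" described here is commonly identified with the DST-VII later adopted in HEVC and VP9 intra coding. A sketch of the two orthonormal bases follows; the construction is our own illustration, and the boundary behavior in the comment matches the abstract's description.

```python
import numpy as np

def dct2_basis(n):
    """Orthonormal DCT-II basis; rows are basis vectors."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    b = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    b[0] *= 1.0 / np.sqrt(2.0)
    return b * np.sqrt(2.0 / n)

def dst7_basis(n):
    """Orthonormal DST-VII basis: vectors tend to vanish at the
    predicted (known) boundary and peak at the far (unknown) one."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    b = np.sin(np.pi * (2 * k + 1) * (i + 1) / (2 * n + 1))
    return b * 2.0 / np.sqrt(2 * n + 1)
```

    The intra coder then picks between the two per block edge: DST-VII along directions where a prediction boundary is available, DCT-II otherwise.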

  16. Significance of telemedicine for video image transmission of endoscopic retrograde cholangiopancreatography and endoscopic ultrasonography procedures.

    Science.gov (United States)

    Shimizu, Shuji; Itaba, Soichi; Yada, Shinichiro; Takahata, Shunichi; Nakashima, Naoki; Okamura, Koji; Rerknimitr, Rungsun; Akaraviputh, Thawatchai; Lu, Xinghua; Tanaka, Masao

    2011-05-01

    With the rapid and marked progress in gastrointestinal endoscopy, the education of doctors in many new diagnostic and therapeutic procedures is of increasing importance. Telecommunications (telemedicine) is very useful and cost-effective for doctors' continuing exposure to advanced skills, including those needed for hepato-pancreato-biliary diseases. Nevertheless, telemedicine in endoscopy has not yet gained much popularity. We have successfully established a new system which solves the problems of conventional ones, namely poor streaming images and the need for special expensive teleconferencing equipment. The digital video transport system, free software that transforms digital video signals directly into Internet Protocol without any analog conversion, was installed on a personal computer using a network with as much as 30 Mbps per channel, thereby providing more than 200 times greater information volume than the conventional system. Kyushu University Hospital in Japan was linked internationally to worldwide academic networks, using security software to protect patients' privacy. Of the 188 telecommunications link-ups involving 108 institutions in 23 countries performed between February 2003 and August 2009, 55 events were endoscopy-related, 19 were live demonstrations, and 36 were gastrointestinal teleconferences with interactive discussions. The frame rate of the transmitted pictures was 30/s, thus preserving smooth high-quality streaming. This paper documents the first time that an advanced tele-endoscopy system has been established over such a wide area using academic high-volume networks, funded by the various governments, and which is now available all over the world. The benefits of a network dedicated to research and education have barely been recognized in the medical community. We believe our cutting-edge system will be a milestone in endoscopy and will improve the quality of gastrointestinal education, especially with respect to endoscopic retrograde

  17. Strategically acquired gradient Echo (STAGE) imaging, part I: Creating enhanced T1 contrast and standardized susceptibility weighted imaging and quantitative susceptibility mapping.

    Science.gov (United States)

    Chen, Yongsheng; Liu, Saifeng; Wang, Yu; Kang, Yan; Mark Haacke, E

    2017-10-19

    To provide whole brain grey matter (GM) to white matter (WM) contrast enhanced T1W (T1WE) images, multi-echo quantitative susceptibility mapping (QSM), proton density (PD) weighted images, T1 maps, PD maps, susceptibility weighted imaging (SWI), and R2* maps with minimal misregistration in a short scanning time. Two gradient echo scans with different flip angles (6° and 24°, at 3T) were used for both T1 mapping with radio frequency (RF) transmit field correction and for creating enhanced GM/WM contrast (the T1WE). The proposed T1WE image was created from a combination of the proton density weighted (6°, PDW) and T1W (24°) images and corrected for RF transmit field variations. Prior to the QSM calculation, a multi-echo phase unwrapping strategy was implemented using the unwrapped short echo to unwrap the longer echo to speed up computation. R2* maps were used to mask deep grey matter and veins during the iterative QSM calculation. A weighted-average sum of susceptibility maps was generated to increase the signal-to-noise ratio (SNR) and the contrast-to-noise ratio (CNR). The proposed T1WE image has a significantly improved CNR both for WM to deep GM and WM to cortical GM compared to the acquired T1W image (the first echo of the 24° scan) and the T1MPRAGE image. The weighted-average susceptibility maps have 80±26%, 55±22%, 108±33% SNR increases for the ten datasets compared to the single echo result of 17.5 ms, and 80±36%, 59±29% and 108±37% CNR increases for the putamen, caudate nucleus, and globus pallidus, respectively. STAGE imaging offers the potential to create a standardized brain imaging protocol providing four pieces of quantitative tissue property information and multiple types of qualitative information in just 5 min. Copyright © 2017. Published by Elsevier Inc.
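
    The use of two flip angles (6° and 24°) for T1 mapping follows the standard variable-flip-angle (DESPOT1) approach for spoiled gradient echo. A two-point sketch is given below; it is our own illustration under ideal spoiling, ignoring the RF transmit field correction the authors apply.

```python
import numpy as np

def vfa_t1(s_low, s_high, alpha_low_deg, alpha_high_deg, tr_ms):
    """Two-point variable-flip-angle T1 estimate from spoiled gradient
    echo signals. Linearizes the SPGR equation as
    S/sin(a) = E1 * S/tan(a) + M0*(1 - E1), solves for E1 from the
    slope between the two points, then T1 = -TR / ln(E1)."""
    a1, a2 = np.radians(alpha_low_deg), np.radians(alpha_high_deg)
    y1, y2 = s_low / np.sin(a1), s_high / np.sin(a2)
    x1, x2 = s_low / np.tan(a1), s_high / np.tan(a2)
    e1 = (y2 - y1) / (x2 - x1)
    return -tr_ms / np.log(e1)
```

    For noiseless signals simulated from the SPGR equation, the two-point estimate recovers T1 exactly.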

  18. Endoscopic trimodal imaging detects colonic neoplasia as well as standard video endoscopy.

    Science.gov (United States)

    Kuiper, Teaco; van den Broek, Frank J C; Naber, Anton H; van Soest, Ellert J; Scholten, Pieter; Mallant-Hent, Rosalie Ch; van den Brande, Jan; Jansen, Jeroen M; van Oijen, Arnoud H A M; Marsman, Willem A; Bergman, Jacques J G H M; Fockens, Paul; Dekker, Evelien

    2011-06-01

    Endoscopic trimodal imaging (ETMI) is a novel endoscopic technique that combines high-resolution endoscopy (HRE), autofluorescence imaging (AFI), and narrow-band imaging (NBI) that has only been studied in academic settings. We performed a randomized, controlled trial in a nonacademic setting to compare ETMI with standard video endoscopy (SVE) in the detection and differentiation of colorectal lesions. The study included 234 patients scheduled to receive colonoscopy who were randomly assigned to undergo a colonoscopy in tandem with either ETMI or SVE. In the ETMI group (n=118), first examination was performed using HRE, followed by AFI. In the other group, both examinations were performed using SVE (n=116). In the ETMI group, detected lesions were differentiated using AFI and NBI. In the ETMI group, 87 adenomas were detected in the first examination (with HRE), and then 34 adenomas were detected during second inspection (with AFI). In the SVE group, 79 adenomas were detected during the first inspection, and then 33 adenomas were detected during the second inspection. Adenoma detection rates did not differ significantly between the 2 groups (ETMI: 1.03 vs SVE: 0.97, P=.360). The adenoma miss-rate was 29% for HRE and 28% for SVE. The sensitivity, specificity, and accuracy of NBI in differentiating adenomas from nonadenomatous lesions were 87%, 63%, and 75%, respectively; corresponding values for AFI were 90%, 37%, and 62%, respectively. In a nonacademic setting, ETMI did not improve the detection rate for adenomas compared with SVE. NBI and AFI each differentiated colonic lesions with high levels of sensitivity but low levels of specificity. Copyright © 2011 AGA Institute. Published by Elsevier Inc. All rights reserved.

  19. Real-time video imaging of gas plumes using a DMD-enabled full-frame programmable spectral filter

    Science.gov (United States)

    Graff, David L.; Love, Steven P.

    2016-02-01

    Programmable spectral filters based on digital micromirror devices (DMDs) are typically restricted to imaging a 1D line across a scene, analogous to conventional "push-broom scanning" hyperspectral imagers. In previous work, however, we demonstrated that, by placing the diffraction grating at a telecentric image plane rather than at the more conventional location in collimated space, a spectral plane can be created at which light from the entire 2D scene focuses to a unique location for each wavelength. A DMD placed at this spectral plane can then spectrally manipulate an entire 2D image at once, enabling programmable matched filters to be applied to real-time video imaging. We have adapted this concept to imaging rapidly evolving gas plumes. We have constructed a high spectral resolution programmable spectral imager operating in the shortwave infrared region, capable of resolving the rotational-vibrational line structure of several gases at sub-nm spectral resolution. This ability to resolve the detailed gas-phase line structure enables implementation of highly selective filters that unambiguously separate the gas spectrum from background spectral clutter. On-line and between-line multi-band spectral filters, with bands individually weighted using the DMD's duty-cycle-based grayscale capability, are alternately uploaded to the DMD, the resulting images differenced, and the result displayed in real time at rates of several frames per second to produce real-time video of the turbulent motion of the gas plume.
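The differencing of on-line and between-line filter images described above can be viewed as applying a zero-mean spectral weight vector to each pixel, so that spectrally flat background clutter cancels. A minimal numpy sketch under that reading (the function name and the (rows, cols, bands) cube layout are our assumptions, not details from the paper):

```python
import numpy as np

def matched_filter_image(cube, weights):
    """Apply a per-band matched filter to a hyperspectral cube.

    cube    : array of shape (rows, cols, bands)
    weights : per-band filter weights, positive on absorption lines,
              negative between lines (as with the on-line/between-line
              DMD filters alternated in the paper)
    """
    w = np.asarray(weights, dtype=float)
    w = w - w.mean()  # zero net response: a spectrally flat background cancels
    return np.tensordot(cube, w, axes=([2], [0]))
```

A pixel whose spectrum is flat maps to zero, while a pixel with extra signal in the on-line bands maps to a positive value, which is what lets the differenced movie isolate the gas plume.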

  20. A clinical pilot study of a modular video-CT augmentation system for image-guided skull base surgery

    Science.gov (United States)

    Liu, Wen P.; Mirota, Daniel J.; Uneri, Ali; Otake, Yoshito; Hager, Gregory; Reh, Douglas D.; Ishii, Masaru; Gallia, Gary L.; Siewerdsen, Jeffrey H.

    2012-02-01

    Augmentation of endoscopic video with preoperative or intraoperative image data [e.g., planning data and/or anatomical segmentations defined in computed tomography (CT) and magnetic resonance (MR) images] can improve navigation, spatial orientation, confidence, and tissue resection in skull base surgery, especially with respect to critical neurovascular structures that may be difficult to visualize in the video scene. This paper presents the engineering and evaluation of a video augmentation system for endoscopic skull base surgery translated to use in a clinical study. Extension of previous research yielded a practical system with a modular design that can be applied to other endoscopic surgeries, including orthopedic, abdominal, and thoracic procedures. A clinical pilot study is underway to assess feasibility and benefit to surgical performance by overlaying CT or MR planning data in real-time, high-definition endoscopic video. Preoperative planning included segmentation of the carotid arteries, optic nerves, and surgical target volume (e.g., tumor). An automated camera calibration process was developed that demonstrates mean re-projection accuracy of 0.7 ± 0.3 pixels and mean target registration error of 2.3 ± 1.5 mm. An IRB-approved clinical study involving fifteen patients undergoing skull base tumor surgery is underway in which each surgery includes the experimental video-CT system deployed in parallel to the standard-of-care (unaugmented) video display. Questionnaires distributed to one neurosurgeon and two otolaryngologists are used to assess primary outcome measures regarding the benefit to surgical confidence in localizing critical structures and targets by means of video overlay during surgical approach, resection, and reconstruction.
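Mean re-projection accuracy of the kind quoted above is computed by projecting known 3D points through the calibrated camera and averaging the pixel distance to their observed image locations. A minimal sketch assuming a pinhole model with a 3×4 projection matrix (names and data layout are illustrative, not the system's API):

```python
import numpy as np

def mean_reprojection_error(P, world_pts, image_pts):
    """Mean pixel distance between projected and observed points.

    P         : 3x4 camera projection matrix
    world_pts : (n, 3) 3D points in world coordinates
    image_pts : (n, 2) observed pixel coordinates
    """
    wh = np.c_[world_pts, np.ones(len(world_pts))]  # homogeneous coordinates
    proj = wh @ P.T
    proj = proj[:, :2] / proj[:, 2:3]               # perspective divide
    return float(np.mean(np.linalg.norm(proj - image_pts, axis=1)))
```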

  1. Pulmonary cryptococcosis in rheumatoid arthritis (RA) patients: Comparison of imaging characteristics among RA, acquired immunodeficiency syndrome, and immunocompetent patients

    Energy Technology Data Exchange (ETDEWEB)

    Yanagawa, Noriyo, E-mail: noriyo_yana@ybb.ne.jp [Departments of Radiology, Tokyo Metropolitan Cancer and Infectious Diseases Center Komagome Hospital, 3-8-22, Honkomagome, Bunkyo-ku, Tokyo 113-8677 (Japan); Sakai, Fumikazu [Department of Diagnostic Radiology, Saitama Medical University International Medical Center, 1397-1 Yamane, Hidaka-shi, Saitama 350-1298 (Japan); Takemura, Tamiko [Department of Pathology, Japanese Red Cross Medical Center, 4-1-22 Hiroo, Shibuya-ku, Tokyo 150-8935 (Japan); Ishikawa, Satoru [Department of Respiratory Medicine, National Hospital Organization Chiba-East-Hospital, 673 Nitona-cho, Chuo-ku, Chiba-shi, Chiba 260-8712 (Japan); Takaki, Yasunobu [Departments of Radiology, Tokyo Metropolitan Cancer and Infectious Diseases Center Komagome Hospital, 3-8-22, Honkomagome, Bunkyo-ku, Tokyo 113-8677 (Japan); Hishima, Tsunekazu [Department of Pathology, Tokyo Metropolitan Cancer and Infectious Diseases Center Komagome Hospital, 3-8-22, Honkomagome, Bunkyo-ku, Tokyo 113-8677 (Japan); Kamata, Noriko [Departments of Radiology, Tokyo Metropolitan Cancer and Infectious Diseases Center Komagome Hospital, 3-8-22, Honkomagome, Bunkyo-ku, Tokyo 113-8677 (Japan)

    2013-11-01

    Purpose: The imaging characteristics of cryptococcosis in rheumatoid arthritis (RA) patients were analyzed by comparing them with those of acquired immunodeficiency syndrome (AIDS) and immunocompetent patients, and the imaging findings were correlated with pathological findings. Methods: Two radiologists retrospectively compared the computed tomographic (CT) findings of 35 episodes of pulmonary cryptococcosis in 31 patients with 3 kinds of underlying states (10 RA, 12 AIDS, 13 immunocompetent), focusing on the nature, number, and distribution of lesions. The pathological findings of 18 patients (8 RA, 2 AIDS, 8 immunocompetent) were analyzed by two pathologists, and then correlated with imaging findings. Results: The frequencies of consolidation and ground glass attenuation (GGA) were significantly higher, and the frequency of peripheral distribution was significantly lower in the RA group than in the immunocompetent group. Peripheral distribution was less common and generalized distribution was more frequent in the RA group than in the AIDS group. The pathological findings of the AIDS and immunocompetent groups reflected their immune status: There was lack of a granuloma reaction in the AIDS group, and a complete granuloma reaction in the immunocompetent group, while the findings of the RA group varied, including a complete granuloma reaction, a loose granuloma reaction and a hyper-immune reaction. Cases with the last two pathologic findings were symptomatic and showed generalized or central distribution on CT. Conclusion: Cryptococcosis in the RA group showed characteristic radiological and pathological findings compared with the other 2 groups.

  2. Color atomic force microscopy: A method to acquire three independent potential parameters to generate a color image

    Science.gov (United States)

    Allain, P. E.; Damiron, D.; Miyazaki, Y.; Kaminishi, K.; Pop, F. V.; Kobayashi, D.; Sasaki, N.; Kawakatsu, H.

    2017-09-01

    Atomic force microscopy has enabled imaging at the sub-molecular level and 3D mapping of the tip-surface potential field. However, fast identification of the surface still remains a challenging topic if the microscope is to enjoy widespread use as a tool with chemical contrast. In this paper, as a step towards implementing such a function, we introduce a control scheme and a mathematical treatment of the acquired data that enable retrieval of essential information characterizing this potential field, leading to fast acquisition of images with chemical contrast. The control scheme is based on tip-sample distance modulation at an angular frequency ω, and null-control of the ω component of the measured self-excitation frequency of the oscillator. It is demonstrated that this control is robust, and that effective Morse parameters giving a satisfactory curve fit to the measured frequency shift can be calculated at rates comparable to the scan. Atomic features with similar topography were distinguished by differences in these parameters. The decay length parameter was resolved with a resolution of 10 pm. The method was demonstrated on quenched silicon at a scan rate comparable to conventional imaging.
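The effective Morse parameters mentioned above (depth D, decay constant a, equilibrium distance r0) characterize the tip-sample potential. As an illustration of what a curve fit to such a potential involves, a numpy-only sketch that fits the three parameters by brute-force least squares over a grid (the paper's fast per-pixel fitting procedure is not described in this abstract; the grid search and names are our stand-in):

```python
import numpy as np

def morse(r, D, a, r0):
    # Morse potential V(r) = D * (1 - exp(-a (r - r0)))**2 - D
    return D * (1.0 - np.exp(-a * (r - r0))) ** 2 - D

def fit_morse(r, v, D_grid, a_grid, r0_grid):
    # brute-force least-squares fit over a small parameter grid
    best, best_err = None, np.inf
    for D in D_grid:
        for a in a_grid:
            for r0 in r0_grid:
                err = float(np.sum((morse(r, D, a, r0) - v) ** 2))
                if err < best_err:
                    best, best_err = (D, a, r0), err
    return best
```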

  3. Use of Variogram Parameters in Analysis of Hyperspectral Imaging Data Acquired from Dual-Stressed Crop Leaves

    Directory of Open Access Journals (Sweden)

    Christian Nansen

    2012-01-01

    Full Text Available A detailed introduction to variogram analysis of reflectance data is provided, and variogram parameters (nugget, sill, and range values) were examined as possible indicators of abiotic (irrigation regime) and biotic (spider mite infestation) stressors. Reflectance data were acquired from 2 maize hybrids (Zea mays L.) at multiple time points in 2 data sets (229 hyperspectral images), and data from 160 individual spectral bands in the spectrum from 405 to 907 nm were analyzed. Based on 480 analyses of variance (160 spectral bands × 3 variogram parameters), it was seen that most of the combinations of spectral bands and variogram parameters were unsuitable as stress indicators, mainly because of significant difference between the 2 data sets. However, several combinations of spectral bands and variogram parameters (especially nugget values) could be considered unique indicators of either abiotic or biotic stress. Furthermore, nugget values at 683 and 775 nm responded significantly to abiotic stress, and nugget values at 731 nm and range values at 715 nm responded significantly to biotic stress. Based on qualitative characterization of actual hyperspectral images, it was seen that even subtle changes in spatial patterns of reflectance values can elicit several-fold changes in variogram parameters despite non-significant changes in average and median reflectance values and in width of 95% confidence limits. Such scattered stress expression is in accordance with documented within-leaf variation in both mineral content and chlorophyll concentration and therefore supports the need for reflectance-based stress detection at a high spatial resolution (many hyperspectral reflectance profiles acquired from a single leaf) and may be used to explain or characterize within-leaf foraging patterns of herbivorous arthropods.
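The nugget, sill, and range discussed above are read off the empirical semivariogram γ(h) = ½·E[(z(x) − z(x+h))²]. A minimal sketch computing γ(h) along image rows (a 1D simplification of the full 2D variogram analysis used in the study; function name ours):

```python
import numpy as np

def empirical_variogram(img, max_lag):
    """Empirical semivariance gamma(h) for row-wise pixel lags h = 1..max_lag."""
    gammas = []
    for h in range(1, max_lag + 1):
        d = img[:, h:] - img[:, :-h]           # all row-wise pixel pairs at lag h
        gammas.append(0.5 * float(np.mean(d ** 2)))
    return np.array(gammas)
```

The nugget is the extrapolated value of this curve at h → 0, the sill is the plateau it levels off at, and the range is the lag at which the plateau is reached.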

  4. A Merging Approach for Urban Boundary Correction Acquired By Remote Sensing Images

    Science.gov (United States)

    Zhang, P. L.; Shi, W. Z.; Wu, X. Y.

    2014-11-01

    Since China's reform and opening up, rapid economic growth and urbanization have driven the expansion of urban land. Grasping information about changes in urban spatial form, the state of expansion, and its regularities is necessary to provide a scientific basis for urban management and planning. Traditional methods of delineating the urban area, such as the cumulative land-supply method and remote sensing alone, have shortcomings: their results often do not accord with reality and fail to reflect the actual extent of the urban area. We therefore propose a new method that makes combined use of remote sensing imagery, population data, road data, and other socio-economic statistics. Because an urban boundary expresses not only a geographical concept but also a social-economic system, it is inaccurate to describe the urban area by geographic extent alone. We first use remote sensing images, demographic data, road data, and other data to produce candidate urban boundaries separately. We then assign a weight to each boundary and, according to a defined model, obtain the final boundary through a series of calculations over the candidate boundaries. To verify the validity of the method, we designed a set of experiments and obtained preliminary results. The results show that the method extracts the urban area well and conforms to both its broad and narrow senses. Compared with traditional methods, it is more timely and objective.
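The merging step described above, producing a candidate boundary from each data source, weighting them, and deriving a final boundary, can be sketched with rasterized boundary masks. The weighted-vote model below is our illustrative assumption; the paper's exact model is not given in the abstract:

```python
import numpy as np

def merge_boundaries(masks, weights, threshold=0.6):
    """Merge binary urban-area masks by weighted vote.

    masks     : list of equal-shape 0/1 arrays (e.g. from remote sensing,
                population density, road density)
    weights   : relative importance of each source
    threshold : fraction of total weight a pixel needs to count as 'urban'
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # normalize the weights
    score = sum(wi * m.astype(float) for wi, m in zip(w, masks))
    return score >= threshold
```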

  5. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    Directory of Open Access Journals (Sweden)

    Nakamura Satoshi

    2004-01-01

    Full Text Available We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.

  6. NEI You Tube Videos: Amblyopia

    Medline Plus


  7. Multisensor fusion in gastroenterology domain through video and echo endoscopic image combination: a challenge

    Science.gov (United States)

    Debon, Renaud; Le Guillou, Clara; Cauvin, Jean-Michel; Solaiman, Basel; Roux, Christian

    2001-08-01

    The medical domain makes intensive use of information fusion. In particular, gastroenterology is a discipline in which physicians can choose between several imaging modalities that offer complementary advantages. Among existing systems, videoendoscopy (based on a CCD sensor) and echoendoscopy (based on an ultrasound sensor) are the most efficient. Each system corresponds to a given step in the physician's elaboration of a diagnosis. Several current works aim to achieve automatic interpretation of videoendoscopic sequences. These systems can quantify the color and superficial texture of the digestive tube; unfortunately, relief information, which is important for diagnosis, is very difficult to retrieve. On the other hand, studies have shown that 3D information can easily be quantified from echoendoscopic image sequences. The idea of combining this information, acquired from two very different points of view, can therefore be considered a real challenge for medical image fusion. In this paper, after a review of current work on the numerical exploitation of videoendoscopy and echoendoscopy, we discuss the following question: how can the complementary aspects of the different systems ease the automatic exploitation of videoendoscopy? We then evaluate the feasibility of a realistic 3D reconstruction based on information given by both echoendoscopy (relief) and videoendoscopy (texture). An enumeration of potential applications of such a fusion system follows. Further discussions and perspectives conclude this first study.

  8. Prediction of foal carcass composition and wholesale cut yields by using video image analysis.

    Science.gov (United States)

    Lorenzo, J M; Guedes, C M; Agregán, R; Sarriés, M V; Franco, D; Silva, S R

    2018-01-01

    This work represents the first application of video image analysis (VIA) technology to predicting lean meat and fat composition in the equine species. Images of left sides of the carcass (n=42) were captured from the dorsal, lateral and medial views using a high-resolution digital camera. A total of 41 measurements (angles, lengths, widths and areas) were obtained by VIA. The variation in the percentage of lean meat obtained from the forequarter (FQ) and hindquarter (HQ) carcass ranged between 5.86% and 7.83%. However, the percentage of fat (FAT) obtained from the FQ and HQ carcass presented a higher variation (CV between 41.34% and 44.58%). By combining different measurements and using prediction models with cold carcass weight (CCW) and VIA measurements, the coefficients of determination (k-fold R²) were 0.458 and 0.532 for FQ and HQ, respectively. On the other hand, employing the most comprehensive model (CCW plus all VIA measurements), the k-fold R² increased from 0.494 to 0.887 and from 0.513 to 0.878 with respect to the simplest model (only CCW), while precision increased with the reduction in the root mean square error (from 2.958 to 0.947 and from 1.841 to 0.787) for the hindquarter fat and lean percentage, respectively. With CCW plus VIA measurements it is possible to explain the variation in wholesale value cut yields (k-fold R² between 0.533 and 0.889). Overall, the VIA technology performed in the present study could be considered an accurate method to assess horse carcass composition, which could have a role in breeding programmes and research studies to assist in the development of a value-based marketing system for horse carcasses.
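The k-fold R² reported above scores a model on held-out folds rather than on the training data. A numpy-only sketch for an ordinary least-squares model (the fold count, seed, and names are ours; the study's exact modelling pipeline is not specified in the abstract):

```python
import numpy as np

def kfold_r2(X, y, k=5, seed=0):
    """k-fold cross-validated R^2 for an ordinary least-squares fit."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    preds = np.empty_like(y, dtype=float)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # fit intercept + slope(s) on the training folds only
        A = np.c_[np.ones(len(train)), X[train]]
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        preds[test] = np.c_[np.ones(len(test)), X[test]] @ coef
    ss_res = np.sum((y - preds) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

Because each prediction comes from a model that never saw that observation, this statistic penalizes overfitting in a way the in-sample R² does not.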

  9. Enhancing Perceived Quality of Compressed Images and Video with Anisotropic Diffusion and Fuzzy Filtering

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Korhonen, Jari; Forchhammer, Søren

    2013-01-01

    video sequences. For the video sequences, different filters are applied to luminance (Y) and chrominance (U,V) components. The performance of the proposed method has been compared against several other methods by using different objective quality metrics and a subjective comparison study. Both objective...

  10. VideoSAR collections to image underground chemical explosion surface phenomena

    Science.gov (United States)

    Yocky, David A.; Calloway, Terry M.; Wahl, Daniel E.

    2017-05-01

    Fully-polarimetric X-band (9.6 GHz center frequency) VideoSAR with 0.125-meter ground resolution flew collections before, during, and after the fifth Source Physics Experiment (SPE-5) underground chemical explosion. We generate and exploit synthetic aperture radar (SAR) and VideoSAR products to characterize surface effects caused by the underground explosion. To our knowledge, this has never been done. Exploited VideoSAR products are "movies" of coherence maps, phase-difference maps, and magnitude imagery. These movies show two-dimensional, time-varying surface movement. However, objects located on the SPE pad created unwanted, vibrating signatures during the event, which made registration and coherent processing more difficult. Nevertheless, there is evidence that dynamic changes are captured by VideoSAR during the event. VideoSAR provides a unique, coherent, time-varying measure of the surface expression of an underground chemical explosion.

  11. Magnetic resonance imaging depiction of acquired Dyke-Davidoff-Masson syndrome with crossed cerebro-cerebellar diaschisis: Report of two cases.

    Science.gov (United States)

    Gupta, Ranjana; Joshi, Sandeep; Mittal, Amit; Luthra, Ishita; Mittal, Puneet; Verma, Vibha

    2015-01-01

    Acquired Dyke-Davidoff-Masson syndrome, also known as hemispheric atrophy, is characterized by loss of volume of one cerebral hemisphere from an insult in early life. Crossed cerebellar diaschisis refers to dysfunction/atrophy of cerebellar hemisphere which is secondary to contralateral supratentorial insult. We describe magnetic resonance imaging findings in two cases of acquired Dyke-Davidoff-Masson syndrome with crossed cerebro-cerebellar diaschisis.

  12. Magnetic resonance imaging depiction of acquired Dyke–Davidoff–Masson syndrome with crossed cerebro-cerebellar diaschisis: Report of two cases

    Science.gov (United States)

    Gupta, Ranjana; Joshi, Sandeep; Mittal, Amit; Luthra, Ishita; Mittal, Puneet; Verma, Vibha

    2015-01-01

    Acquired Dyke–Davidoff–Masson syndrome, also known as hemispheric atrophy, is characterized by loss of volume of one cerebral hemisphere from an insult in early life. Crossed cerebellar diaschisis refers to dysfunction/atrophy of cerebellar hemisphere which is secondary to contralateral supratentorial insult. We describe magnetic resonance imaging findings in two cases of acquired Dyke–Davidoff–Masson syndrome with crossed cerebro-cerebellar diaschisis. PMID:26557182

  13. Integration of Point Clouds and Images Acquired from a Low-Cost NIR Camera Sensor for Cultural Heritage Purposes

    Science.gov (United States)

    Kedzierski, M.; Walczykowski, P.; Wojtkowska, M.; Fryskowska, A.

    2017-08-01

    Terrestrial Laser Scanning is currently one of the most common techniques for modelling and documenting structures of cultural heritage. However, geometric information on its own, without the addition of imagery data, is insufficient for formulating a precise statement about the status of the studied structure, for feature extraction, or for indicating the sites to be restored. Therefore, the Authors propose the integration of spatial data from terrestrial laser scanning with imaging data from low-cost cameras. The use of images from low-cost cameras makes it possible to limit the costs needed to complete such a study, thus increasing the possibility of intensifying the frequency of photographing and monitoring of the given structure. As a result, the analysed cultural heritage structures can be monitored more closely and in more detail, meaning that the technical documentation concerning the structure is also more precise. To supplement the laser scanning information, the Authors propose using images taken both in the near-infrared range and in the visible range. This choice is motivated by the fact that not all important features of historical structures are visible in RGB, but they can be identified in NIR imagery, which, when additionally merged with a three-dimensional point cloud, gives full spatial information about the cultural heritage structure in question. The Authors proposed an algorithm that automates the process of integrating NIR images with a point cloud using parameters which had been calculated during the transformation of RGB images. A number of conditions affecting the accuracy of the texturing had been studied, in particular, the impact of the geometry of the distribution of adjustment points and their amount on the accuracy of the integration process, the correlation between the intensity value and the error on specific points using images in different ranges of the electromagnetic spectrum, and the selection of the optimal

  14. INTEGRATION OF POINT CLOUDS AND IMAGES ACQUIRED FROM A LOW-COST NIR CAMERA SENSOR FOR CULTURAL HERITAGE PURPOSES

    Directory of Open Access Journals (Sweden)

    M. Kedzierski

    2017-08-01

    Full Text Available Terrestrial Laser Scanning is currently one of the most common techniques for modelling and documenting structures of cultural heritage. However, geometric information on its own, without the addition of imagery data, is insufficient for formulating a precise statement about the status of the studied structure, for feature extraction, or for indicating the sites to be restored. Therefore, the Authors propose the integration of spatial data from terrestrial laser scanning with imaging data from low-cost cameras. The use of images from low-cost cameras makes it possible to limit the costs needed to complete such a study, thus increasing the possibility of intensifying the frequency of photographing and monitoring of the given structure. As a result, the analysed cultural heritage structures can be monitored more closely and in more detail, meaning that the technical documentation concerning the structure is also more precise. To supplement the laser scanning information, the Authors propose using images taken both in the near-infrared range and in the visible range. This choice is motivated by the fact that not all important features of historical structures are visible in RGB, but they can be identified in NIR imagery, which, when additionally merged with a three-dimensional point cloud, gives full spatial information about the cultural heritage structure in question. The Authors proposed an algorithm that automates the process of integrating NIR images with a point cloud using parameters which had been calculated during the transformation of RGB images.
A number of conditions affecting the accuracy of the texturing had been studied, in particular, the impact of the geometry of the distribution of adjustment points and their amount on the accuracy of the integration process, the correlation between the intensity value and the error on specific points using images in different ranges of the electromagnetic spectrum, and the selection

  15. Short term exposure to attractive and muscular singers in music video clips negatively affects men's body image and mood.

    Science.gov (United States)

    Mulgrew, K E; Volcevski-Kostas, D

    2012-09-01

    Viewing idealized images has been shown to reduce men's body satisfaction; however no research has examined the impact of music video clips. This was the first study to examine the effects of exposure to muscular images in music clips on men's body image, mood and cognitions. Ninety men viewed 5 min of clips containing scenery, muscular or average-looking singers, and completed pre- and posttest measures of mood and body image. Appearance schema activation was also measured. Men exposed to the muscular clips showed poorer posttest levels of anger, body and muscle tone satisfaction compared to men exposed to the scenery or average clips. No evidence of schema activation was found, although potential problems with the measure are noted. These preliminary findings suggest that even short term exposure to music clips can produce negative effects on men's body image and mood. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. A Super-resolution Reconstruction Algorithm for Surveillance Video

    Directory of Open Access Journals (Sweden)

    Jian Shao

    2017-01-01

    Full Text Available Recent technological developments have resulted in surveillance video becoming a primary method of preserving public security. Many city crimes are observed in surveillance video, and the most abundant evidence collected by the police is also acquired through surveillance video sources. Surveillance video footage offers very strong support for solving criminal cases; creating effective policies and applying useful methods to retrieve additional evidence is therefore becoming increasingly important. However, surveillance video has its failings, namely footage captured in low resolution (LR) and of poor visual quality. In this paper, we discuss the characteristics of surveillance video and combine manual feature registration, maximum a posteriori estimation, and projection onto convex sets to develop a super-resolution reconstruction method that improves the quality of surveillance video. With this method we not only make optimal use of the information contained in the LR video images, but also keep image edges sharp and control the convergence of the algorithm. Finally, we suggest how to adapt the algorithm by analyzing prior information about the target image.
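The paper combines manual feature registration, maximum a posteriori estimation, and projection onto convex sets. As an illustration of the data-consistency idea such methods share, and explicitly not the authors' algorithm, here is a simple iterative back-projection sketch: the high-resolution estimate is repeatedly corrected so that downsampling it reproduces the observed LR frame:

```python
import numpy as np

def iterative_backprojection(lr, scale=2, iters=10):
    """Naive single-image super-resolution by iterative back-projection.

    lr : low-resolution image; the degradation model assumed here is a
         simple block-mean downsampling by `scale`.
    """
    hr = np.kron(lr, np.ones((scale, scale)))    # initial nearest-neighbour upsample
    for _ in range(iters):
        # simulate the LR observation from the current HR estimate
        down = hr.reshape(lr.shape[0], scale, lr.shape[1], scale).mean(axis=(1, 3))
        # back-project the residual onto the HR grid
        hr += np.kron(lr - down, np.ones((scale, scale)))
    return hr
```

Real POCS/MAP methods replace the block-mean operator with a registered blur-and-decimate model over multiple frames and add prior constraints on the HR estimate.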

  17. Emergency Medicine Evaluation of Community-Acquired Pneumonia: History, Examination, Imaging and Laboratory Assessment, and Risk Scores.

    Science.gov (United States)

    Long, Brit; Long, Drew; Koyfman, Alex

    2017-11-01

    Pneumonia is a common infection, accounting for approximately one million hospitalizations in the United States annually. This potentially life-threatening disease is commonly diagnosed based on history, physical examination, and chest radiograph. This review investigates the emergency medicine evaluation of community-acquired pneumonia, including history, physical examination, imaging, and the use of risk scores in patient assessment. Pneumonia is the number one cause of death from infectious disease. The condition is broken into several categories, the most common being community-acquired pneumonia. Diagnosis centers on history, physical examination, and chest radiograph. However, all are unreliable when used alone, and misdiagnosis occurs in up to one-third of patients. Chest radiograph has a sensitivity of 46-77%, and biomarkers including white blood cell count, procalcitonin, and C-reactive protein provide little benefit in diagnosis. Biomarkers may assist admitting teams, but require further study for use in the emergency department. Ultrasound has shown utility in correctly identifying pneumonia. Clinical gestalt demonstrates greater ability to diagnose pneumonia. Clinical scores including the Pneumonia Severity Index (PSI); the Confusion, blood Urea nitrogen, Respiratory rate, Blood pressure, age 65 score (CURB-65); and several others may be helpful for disposition, but should supplement, not replace, clinical judgment. Patient socioeconomic status must be considered in disposition decisions. The diagnosis of pneumonia requires clinical gestalt using a combination of history and physical examination. Chest radiograph may be negative, particularly in patients presenting early in the disease course and in elderly patients. Clinical scores can supplement clinical gestalt and assist in disposition when used appropriately. Published by Elsevier Inc.
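The CURB-65 score cited above sums five binary criteria: Confusion, blood Urea nitrogen > 7 mmol/L, Respiratory rate ≥ 30/min, low Blood pressure (systolic < 90 mmHg or diastolic ≤ 60 mmHg), and age ≥ 65. A minimal sketch using those standard cut-offs (the function itself is ours, for illustration only, not for clinical use):

```python
def curb65(confusion, urea_mmol_l, resp_rate, sbp, dbp, age):
    """CURB-65 severity score (0-5) for community-acquired pneumonia."""
    return (int(bool(confusion))
            + int(urea_mmol_l > 7)        # blood urea nitrogen > 7 mmol/L
            + int(resp_rate >= 30)        # respiratory rate >= 30/min
            + int(sbp < 90 or dbp <= 60)  # hypotension
            + int(age >= 65))             # age >= 65 years
```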

  18. Disembodied perspective: third-person images in GoPro videos

    National Research Council Canada - National Science Library

    Bédard, Philippe

    2015-01-01

    A technical analysis of GoPro videos, focusing on the production of a third-person perspective created when the camera is turned back on the user, and the sense of disorientation that results for the spectator...

  19. Estimating age ratios and size of pacific walrus herds on coastal haulouts using video imaging.

    Directory of Open Access Journals (Sweden)

    Daniel H Monson

    Full Text Available During Arctic summers, sea ice provides resting habitat for Pacific walruses as it drifts over foraging areas in the eastern Chukchi Sea. Climate-driven reductions in sea ice have recently created ice-free conditions in the Chukchi Sea by late summer, causing walruses to rest at coastal haulouts along the Chukotka and Alaska coasts, which provides an opportunity to study walruses at relatively accessible locations. Walrus age can be determined from the ratio of tusk length to snout dimensions. We evaluated use of images obtained from a gyro-stabilized video system mounted on a helicopter flying at high altitudes (to avoid disturbance) to classify the sex and age of walruses hauled out on Alaska beaches in 2010-2011. We were able to classify 95% of randomly selected individuals to either an 8- or 3-category age class, and we found measurement-based age classifications were more repeatable than visual classifications when using images presenting the correct head profile. Herd density at coastal haulouts averaged 0.88 walruses/m2 (std. err. = 0.02), herd size ranged from 8,300 to 19,400 (CV 0.03-0.06), and we documented ∼30,000 animals along ∼1 km of beach in 2011. Within the herds, dependent walruses (0-2 yr-olds) tended to be located closer to water, and this tendency became more pronounced as the herd spent more time on the beach. Therefore, unbiased estimation of herd age-ratios will require a sampling design that allows for spatial and temporal structuring. In addition, randomly sampling walruses available at the edge of the herd for other purposes (e.g., tagging, biopsying) will not sample walruses with an age structure representative of the herd. Sea ice losses are projected to continue, and population age structure data collected with aerial videography at coastal haulouts may provide demographic information vital to ongoing efforts to understand effects of climate change on this species.

  2. Miniature stereoscopic video system provides real-time 3D registration and image fusion for minimally invasive surgery

    Science.gov (United States)

    Yaron, Avi; Bar-Zohar, Meir; Horesh, Nadav

    2007-02-01

    Sophisticated surgeries require the integration of several medical imaging modalities, like MRI and CT, which are three-dimensional. Many efforts are invested in providing the surgeon with this information in an intuitive and easy-to-use manner. A notable development, made by Visionsense, enables the surgeon to visualize the scene in 3D using a miniature stereoscopic camera. It also provides real-time 3D measurements that allow registration of navigation systems as well as 3D imaging modalities, overlaying these images on the stereoscopic video image in real time. The real-time MIS 'see through tissue' fusion solutions enable the development of new MIS procedures in various surgical segments, such as spine, abdomen, cardio-thoracic and brain. This paper describes 3D surface reconstruction and registration methods using the Visionsense camera, as a step toward fully automated multi-modality 3D registration.
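
    The real-time 3D measurements such a stereoscopic camera provides rest on the standard stereo triangulation relation; a minimal sketch (the focal length, baseline, and disparity values below are hypothetical, not Visionsense specifications):

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Pinhole stereo triangulation: depth Z = f * B / d, where f is the
    focal length in pixels, B the camera baseline, and d the disparity
    (horizontal offset of a feature between the two views, in pixels)."""
    return focal_px * baseline_mm / disparity_px

# Hypothetical miniature-camera numbers: 700 px focal length, 4 mm baseline,
# 14 px disparity -> 200 mm depth.
z_mm = depth_from_disparity(700.0, 4.0, 14.0)
```

    Larger disparities mean closer surfaces, which is why a small baseline still resolves depth well at the short working distances of endoscopy.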

  3. Video change detection for fixed wing UAVs

    Science.gov (United States)

    Bartelsen, Jan; Müller, Thomas; Ring, Jochen; Mück, Klaus; Brüstle, Stefan; Erdnüß, Bastian; Lutz, Bastian; Herbst, Theresa

    2017-10-01

    In this paper we continue the work of Bartelsen et al.1 We present the draft of a process chain for an image based change detection which is designed for videos acquired by fixed wing unmanned aerial vehicles (UAVs). From our point of view, automatic video change detection for aerial images can be useful to recognize functional activities which are typically caused by the deployment of improvised explosive devices (IEDs), e.g. excavations, skid marks, footprints, left-behind tooling equipment, and marker stones. Furthermore, in case of natural disasters, like flooding, imminent danger can be recognized quickly. Due to the necessary flight range, we concentrate on fixed wing UAVs. Automatic change detection can be reduced to a comparatively simple photogrammetric problem when the perspective change between the "before" and "after" image sets is kept as small as possible. Therefore, the aerial image acquisition demands mission planning with a clear purpose, including flight path and sensor configuration. While the latter can be enabled simply by a fixed and meaningful adjustment of the camera, ensuring a small perspective change for "before" and "after" videos acquired by fixed wing UAVs is a challenging problem. Concerning this matter, we have performed tests with an advanced commercial off the shelf (COTS) system which comprises a differential GPS and autopilot system, estimating the repetition accuracy of its trajectory. Although several similar approaches have been presented,2,3 as far as we are able to judge, the limits on this important issue have not been estimated so far. Furthermore, we design a process chain to enable the practical utilization of video change detection. It consists of a database front-end to handle large amounts of video data, an image processing and change detection implementation, and the visualization of the results. We apply our process chain to real video data acquired by the advanced COTS fixed wing UAV and to synthetic data. For the…
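
    Once "before" and "after" frames are registered (the hard photogrammetric part of the chain described above), the change detection itself can be as simple as thresholded differencing; a toy sketch under a perfect-registration assumption, not the authors' implementation:

```python
def changed_pixels(frame_a, frame_b, threshold=30):
    """Count pixels whose absolute intensity difference exceeds a
    threshold, given two co-registered grayscale frames (lists of rows).
    Real pipelines add radiometric normalization and noise filtering."""
    return sum(
        1
        for row_a, row_b in zip(frame_a, frame_b)
        for a, b in zip(row_a, row_b)
        if abs(a - b) > threshold
    )
```

    The threshold trades false alarms from illumination change against missed small changes such as disturbed soil.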

  4. Real-time intravascular photoacoustic-ultrasound imaging of lipid-laden plaque at speed of video-rate level

    Science.gov (United States)

    Hui, Jie; Cao, Yingchun; Zhang, Yi; Kole, Ayeeshik; Wang, Pu; Yu, Guangli; Eakins, Gregory; Sturek, Michael; Chen, Weibiao; Cheng, Ji-Xin

    2017-03-01

    Intravascular photoacoustic-ultrasound (IVPA-US) imaging is an emerging hybrid modality for the detection of lipid-laden plaques, providing simultaneous morphological and lipid-specific chemical information of an artery wall. The clinical utility of IVPA-US technology requires real-time imaging and display at video rate. Here, we demonstrate a compact and portable IVPA-US system capable of imaging at up to 25 frames per second in real-time display mode. This unprecedented imaging speed was achieved by concurrent innovations in the excitation laser source, rotary joint assembly, 1 mm IVPA-US catheter, differentiated A-line strategy, and real-time image processing and display algorithms. By imaging pulsatile motion at different imaging speeds, 16 frames per second was deemed adequate to suppress motion artifacts from cardiac pulsation for in vivo applications. Our lateral resolution results further verified the number of A-lines used for a cross-sectional IVPA image reconstruction. The translational capability of this system for the detection of lipid-laden plaques was validated by ex vivo imaging of an atherosclerotic human coronary artery at 16 frames per second, which showed strong correlation to gold-standard histopathology.
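
    For a rotational catheter, the achievable frame rate is bounded by the laser pulse repetition rate divided by the number of A-lines per cross-sectional image; a sketch with hypothetical numbers (the system's actual PRF and A-line count are not given in the abstract):

```python
def frame_rate(prf_hz, a_lines_per_frame):
    """One laser pulse yields one photoacoustic A-line, so the maximum
    cross-sectional frame rate is PRF / (A-lines per frame)."""
    return prf_hz / a_lines_per_frame

# Hypothetical: a 2 kHz PRF laser with 125 A-lines per image gives 16 fps,
# the rate the authors found adequate to suppress cardiac motion artifacts.
fps = frame_rate(2000.0, 125)
```

    The trade-off is visible here: fewer A-lines per frame buys speed at the cost of angular sampling density, which is why the authors verified the A-line count against their lateral resolution.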

  5. High dynamic range (HDR) virtual bronchoscopy rendering for video tracking

    Science.gov (United States)

    Popa, Teo; Choi, Jae

    2007-03-01

    In this paper, we present the design and implementation of a new rendering method based on high dynamic range (HDR) lighting and exposure control. This rendering method is applied to create video images for a 3D virtual bronchoscopy system. One of the main optical parameters of a bronchoscope's camera is the sensor exposure. The exposure adjustment is needed since the dynamic range of most digital video cameras is narrower than the high dynamic range of real scenes. The dynamic range of a camera is defined as the ratio of the brightest point of an image to the darkest point of the same image where details are present. In a video camera, exposure is controlled by shutter speed and lens aperture. To create the virtual bronchoscopic images, we first rendered a raw image in absolute units (luminance); then, we simulated exposure by mapping the computed values to the values appropriate for video-acquired images using a tone mapping operator. We generated several images with HDR and others with low dynamic range (LDR), and then compared their quality by applying them to a 2D/3D video-based tracking system. We conclude that images with HDR are closer to real bronchoscopy images than those with LDR, and thus, that HDR lighting can improve the accuracy of image-based tracking.
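
    The exposure-simulation step maps rendered absolute luminance to displayable values with a tone mapping operator. The abstract does not say which operator was used; the global Reinhard operator is one common choice, sketched here:

```python
import math

def reinhard_tonemap(luminances, key=0.18, eps=1e-6):
    """Map HDR luminance values (absolute units) into [0, 1) with the
    global Reinhard operator: Ld = Ls / (1 + Ls), where Ls is the input
    scaled by key / (log-average luminance of the whole image)."""
    log_avg = math.exp(
        sum(math.log(eps + lum) for lum in luminances) / len(luminances)
    )
    scaled = [key * lum / log_avg for lum in luminances]
    return [ls / (1.0 + ls) for ls in scaled]

# Four decades of scene luminance compressed into display range.
mapped = reinhard_tonemap([0.01, 0.1, 1.0, 10.0, 100.0])
```

    The operator compresses highlights smoothly while keeping order and mid-tone detail, which is the behavior the paper exploits to mimic a real bronchoscope camera's limited dynamic range.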

  6. Fully Automatic Software for Retinal Thickness in Eyes With Diabetic Macular Edema From Images Acquired by Cirrus and Spectralis Systems

    Science.gov (United States)

    Lee, Joo Yong; Chiu, Stephanie J.; Srinivasan, Pratul P.; Izatt, Joseph A.; Toth, Cynthia A.; Farsiu, Sina; Jaffe, Glenn J.

    2013-01-01

    Purpose. To determine whether a novel automatic segmentation program, the Duke Optical Coherence Tomography Retinal Analysis Program (DOCTRAP), can be applied to spectral-domain optical coherence tomography (SD-OCT) images obtained from different commercially available SD-OCT systems in eyes with diabetic macular edema (DME). Methods. A novel segmentation framework was used to segment the retina, inner retinal pigment epithelium, and Bruch's membrane on images from eyes with DME acquired by one of two SD-OCT systems, Spectralis or Cirrus high definition (HD)-OCT. Thickness data obtained by the DOCTRAP software were compared with those produced by Spectralis and Cirrus. Measurement agreement and its dependence were assessed using intraclass correlation (ICC). Results. A total of 40 SD-OCT scans from 20 subjects for each machine were included in the analysis. Spectralis: the mean thickness in the 1-mm central area determined by DOCTRAP and Spectralis was 463.8 ± 107.5 μm and 467.0 ± 108.1 μm, respectively (ICC, 0.999). There was also a high level of agreement in surrounding areas (out to 3 mm). Cirrus: the mean thickness in the 1-mm central area was 440.8 ± 183.4 μm and 442.7 ± 182.4 μm by DOCTRAP and Cirrus, respectively (ICC, 0.999). The thickness agreement in surrounding areas (out to 3 mm) was more variable due to Cirrus segmentation errors in one subject (ICC, 0.734–0.999). After manual correction of the errors, there was a high level of thickness agreement in surrounding areas (ICC, 0.997–1.000). Conclusions. The DOCTRAP may be useful to compare retinal thicknesses in eyes with DME across OCT platforms. PMID:24084089
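
    The agreement statistic used above, the intraclass correlation, can be computed from a two-way ANOVA decomposition; a minimal sketch of the single-measure, two-way random-effects form ICC(2,1) (the abstract does not state which ICC variant the authors used):

```python
def icc2_1(data):
    """ICC(2,1) (Shrout & Fleiss two-way random, single measure) for an
    n-subjects x k-raters table -- here, k measurement methods producing
    one thickness value each per subject."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    subj_means = [sum(row) / k for row in data]
    rater_means = [sum(row[j] for row in data) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_rater = n * sum((m - grand) ** 2 for m in rater_means)
    ms_subj = ss_subj / (n - 1)
    ms_rater = ss_rater / (k - 1)
    ms_err = (ss_total - ss_subj - ss_rater) / ((n - 1) * (k - 1))
    return (ms_subj - ms_err) / (
        ms_subj + (k - 1) * ms_err + k * (ms_rater - ms_err) / n
    )
```

    Perfect agreement between the two methods gives an ICC of 1; any disagreement, whether random or a systematic offset, pulls it below 1, which is why ICC is stricter than plain correlation for method comparison.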

  7. Robust real-time segmentation of images and videos using a smooth-spline snake-based algorithm.

    Science.gov (United States)

    Precioso, Frederic; Barlaud, Michel; Blu, Thierry; Unser, Michael

    2005-07-01

    This paper deals with fast image and video segmentation using active contours. Region-based active contours using level sets are powerful techniques for video segmentation, but they suffer from large computational cost. A parametric active contour method based on B-spline interpolation has been proposed to greatly reduce the computational cost, but this method is sensitive to noise. Here, we choose to relax the rigid interpolation constraint in order to make our method robust in the presence of noise: by using smoothing splines, we trade a tunable amount of interpolation error for a smoother spline curve. We show by experiments on natural sequences that this new flexibility yields segmentation results of higher quality at no additional computational cost. Hence, real-time processing for moving-object segmentation is preserved.
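
    The interpolation-versus-smoothness trade-off described above hinges on the fact that a B-spline curve need not pass through its control points; a minimal sketch of one uniform cubic B-spline segment (illustrative only, not the paper's smoothing-spline snake):

```python
def cubic_bspline_point(p0, p1, p2, p3, t):
    """Evaluate a uniform cubic B-spline segment at t in [0, 1], given
    four 2D control points. The curve approximates (smooths) its control
    points rather than interpolating them, which is what buys the snake
    its robustness to noisy contour samples."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6.0
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
    b3 = t**3 / 6.0
    return tuple(
        b0 * a + b1 * b + b2 * c + b3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )
```

    At the segment start the curve sits at (p0 + 4·p1 + p2)/6, so an outlier control point pulls the curve toward it without being hit exactly.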

  8. Peri-operative imaging of cancer margins with reflectance confocal microscopy during Mohs micrographic surgery: feasibility of a video-mosaicing algorithm

    Science.gov (United States)

    Flores, Eileen; Yelamos, Oriol; Cordova, Miguel; Kose, Kivanc; Phillips, William; Rossi, Anthony; Nehal, Kishwer; Rajadhyaksha, Milind

    2017-02-01

    Reflectance confocal microscopy (RCM) imaging shows promise for guiding surgical treatment of skin cancers. Recent technological advancements such as the introduction of the handheld version of the reflectance confocal microscope, video acquisition and video-mosaicing have improved RCM as an emerging tool to evaluate cancer margins during routine surgical skin procedures such as Mohs micrographic surgery (MMS). Detection of residual non-melanoma skin cancer (NMSC) tumor during MMS is feasible, as demonstrated by the introduction of real-time perioperative imaging on patients in the surgical setting. Our study is currently testing the feasibility of a new mosaicing algorithm for perioperative RCM imaging of NMSC cancer margins on patients during MMS. We report progress toward imaging and image analysis on forty-five patients who presented for MMS at the MSKCC Dermatology service. The first 10 patients were used as a training set to establish an RCM imaging algorithm, which was implemented on the remaining test set of 35 patients. RCM imaging, using 35% AlCl3 for nuclear contrast, was performed pre- and intra-operatively with the Vivascope 3000 (Caliber ID). Imaging was performed in quadrants in the wound, to simulate the Mohs surgeon's examination of pathology. Videos were taken at the epidermal and deep dermal margins. Our Mohs surgeons assessed all videos and video-mosaics for quality and correlation to histology. Overall, our RCM video-mosaicing algorithm is feasible. RCM videos and video-mosaics of the epidermal and dermal margins were found to be of clinically acceptable quality. Assessment of cancer margins was affected by the type, size and location of the NMSC. Among the test set of 35 patients, 83% showed acceptable imaging quality, resolution and contrast. Nuclear and cellular morphology of residual BCC/SCC tumor and normal skin features could be visualized in the peripheral and deep dermal margins. We observed correlation between the RCM videos/video…

  9. Real-Depth imaging: a new 3D imaging technology with inexpensive direct-view (no glasses) video and other applications

    Science.gov (United States)

    Dolgoff, Eugene

    1997-05-01

    Floating Images, Inc. has developed the software and hardware for a new, patent-pending, 'floating 3-D, off-the-screen-experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3-D graphics' technologies are actually flat on screen. Floating Images™ technology actually produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. In addition, unlike stereoscopic 3-D imaging, no glasses, headgear, or other viewing aids are used. And, unlike current autostereoscopic imaging technologies, there is virtually no restriction on where viewers can sit to view the images, with no 'bad' or 'dead' zones, flipping, or pseudoscopy. In addition to providing traditional depth cues such as perspective and background image occlusion, the new technology also provides both horizontal and vertical binocular parallax (the ability to look around foreground objects to see previously hidden background objects, with each eye seeing a different view at all times) and accommodation (the need to re-focus one's eyes when shifting attention from a near object to a distant object) which coincides with convergence (the need to re-aim one's eyes when shifting attention from a near object to a distant object). Since accommodation coincides with convergence, viewing these images doesn't produce headaches, fatigue, or eye-strain, regardless of how long they are viewed (unlike stereoscopic and autostereoscopic displays). The imagery (video or computer generated) must be formatted for the Floating Images™ platform when written, or existing software can be re-formatted without much difficulty.

  10. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510
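
    The feature extraction at the heart of the CNN approach is repeated application of small convolution kernels over the image; a dependency-free toy sketch of a single 'valid' convolution layer's core operation (not the authors' network):

```python
def conv2d(image, kernel):
    """'Valid'-mode 2D cross-correlation of a grayscale image (list of
    rows) with a small kernel -- the basic operation a CNN layer applies
    at every position to extract local features such as edges."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(
                image[i + u][j + v] * kernel[u][v]
                for u in range(kh)
                for v in range(kw)
            )
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]
```

    A simple difference kernel like [[1, -1]] responds only where intensity changes horizontally, which is why stacks of such learned kernels can pick out body contours in visible-light and thermal frames alike.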

  13. CT angiography of the head-and-neck vessels acquired with low tube voltage, low iodine, and iterative image reconstruction: clinical evaluation of radiation dose and image quality.

    Directory of Open Access Journals (Sweden)

    Wei-lan Zhang

    Full Text Available OBJECTIVES: We aimed to assess the effectiveness and feasibility of head-and-neck Computed Tomography Angiography (CTA) with low tube voltage and low concentration contrast media combined with an iterative reconstruction algorithm. METHODS: 92 patients were randomly divided into groups A and B: patients in group A received a conventional scan with 120 kVp and contrast media of 320 mgI/ml. For patients in group B, 80 kVp and contrast media of 270 mgI/ml were used along with iterative reconstruction algorithm techniques. Image quality, radiation dose and the effectively consumed iodine amount between the two groups were analyzed and compared. RESULTS: Image quality of CTA of head-and-neck vessels obtained from patients in group B was significantly improved quantitatively and qualitatively. In addition, CT attenuation values in group B were also significantly higher than those in group A (p<0.001). Furthermore, compared with the protocol whereby 120 kVp and 320 mgI/ml were administered, the mean radiation dose and consumed iodine amount in protocol B were reduced by 50% and 15.6%, respectively (p<0.001). CONCLUSIONS: With the help of iterative reconstruction algorithm techniques, head-and-neck CTA with diagnostic quality can be adequately acquired with low tube voltage and low concentration contrast media. This method could be potentially extended to any part of the body to reduce the risks related to ionizing radiation.
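
    The reported 15.6% iodine saving follows directly from the concentration change at equal injected volume; a one-line check (the 50% dose saving, by contrast, comes from measured dose metrics and cannot be derived from the kVp values alone):

```python
def percent_reduction(old, new):
    """Relative reduction, in percent, when a quantity drops from old to new."""
    return (1.0 - new / old) * 100.0

# 320 mgI/ml -> 270 mgI/ml at equal injected volume:
# ~15.6% less iodine, matching the abstract.
iodine_cut = percent_reduction(320.0, 270.0)
```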

  14. Real time three-dimensional space video rate sensors for millimeter waves imaging based very inexpensive plasma LED lamps

    Science.gov (United States)

    Levanon, Assaf; Yitzhaky, Yitzhak; Kopeika, Natan S.; Rozban, Daniel; Abramovich, Amir

    2014-10-01

    In recent years, much effort has been invested to develop inexpensive but sensitive Millimeter Wave (MMW) detectors that can be used in focal plane arrays (FPAs), in order to implement real-time MMW imaging. Real-time MMW imaging systems are required for many varied applications in fields such as homeland security, medicine, communications, military products and space technology. This is mainly because this radiation has high penetration and good navigability through dust storms, fog, heavy rain, dielectric materials, biological tissue, and diverse materials. Moreover, the atmospheric attenuation in this range of the spectrum is relatively low and the scattering is also low compared to NIR and VIS. The lack of inexpensive room-temperature imaging systems makes it difficult to provide a suitable MMW system for many of the above applications. In the last few years we have advanced in research and development of sensors using very inexpensive (30-50 cents) Glow Discharge Detector (GDD) plasma indicator lamps as MMW detectors. This paper presents three kinds of GDD-lamp-based focal plane array (FPA) sensors. These three cameras differ in the number of detectors, scanning operation, and detection method. The 1st and 2nd generations are an 8 × 8 pixel array and an 18 × 2 mono-rail scanner array, respectively, both of them for direct detection and limited to fixed imaging. The latest sensor is a multiplexed 16 × 16 GDD FPA. It permits real-time video-rate imaging at 30 frames/sec and comprehensive 3D MMW imaging. The principle of detection in this sensor is a frequency-modulated continuous-wave (FMCW) system in which each of the 16 GDD pixel lines is sampled simultaneously. Direct detection is also possible and can be done with a friendly user interface. This FPA sensor is built over 256 commercial GDD lamps with 3 mm diameter (International Light, Inc., Peabody, MA, model 527 Ne indicator lamps) as pixel detectors. All three sensors are fully supported…
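
    In the FMCW detection scheme mentioned above, target range is recovered from the beat frequency between the transmitted and received chirps; a sketch of the standard relation (the sweep parameters below are hypothetical, not those of the GDD system):

```python
def fmcw_range(beat_hz, sweep_s, bandwidth_hz, c=3.0e8):
    """Target range from an FMCW beat frequency: R = c * f_beat * T / (2 * B),
    where T is the sweep duration and B the swept bandwidth. This is what
    turns a 2D detector array into a 3D (range-resolved) imager."""
    return c * beat_hz * sweep_s / (2.0 * bandwidth_hz)

# Hypothetical sweep: 1 GHz bandwidth over 1 ms; a 200 kHz beat -> 30 m range.
r_m = fmcw_range(2.0e5, 1.0e-3, 1.0e9)
```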

  15. Towards an automated analysis of video-microscopy images of fungal morphogenesis

    Directory of Open Access Journals (Sweden)

    Diéguez-Uribeondo, Javier

    2005-06-01

    Full Text Available Fungal morphogenesis is an exciting field of cell biology and several mathematical models have been developed to describe it. These models require experimental evidence to be corroborated and, therefore, there is a continuous search for new microscopy and image analysis techniques. In this work, we have used a Canny-edge-detector based technique to automate the generation of hyphal profiles and the calculation of morphogenetic parameters such as diameter, elongation rates and hyphoid fitness. The results show that the data obtained with this technique are similar to published data generated with manual-based tracing techniques carried out on the same species or genus. Thus, we show that application of an edge-detector-based technique to hyphal growth represents an efficient and accurate method to study hyphal morphogenesis. This represents the first step towards an automated analysis of video-microscopy images of fungal morphogenesis.
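
    The Canny detector that drives the automated profile tracing localizes edges at intensity-gradient maxima; a 1-D caricature of that gradient step (omitting Canny's Gaussian smoothing, non-maximum suppression, and hysteresis):

```python
def edge_columns(row, threshold):
    """Return column indices where the central-difference intensity
    gradient along one image row exceeds the threshold -- a stripped-down
    version of the gradient test at the heart of a Canny edge detector."""
    return [
        i
        for i in range(1, len(row) - 1)
        if abs(row[i + 1] - row[i - 1]) / 2.0 > threshold
    ]
```

    Applied row by row across a micrograph, such gradient crossings trace the two sides of a hypha, from which diameter and elongation rate follow.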

  16. A framework for the recognition of high-level surgical tasks from video images for cataract surgeries

    Science.gov (United States)

    Lalys, Florent; Riffaud, Laurent; Bouget, David; Jannin, Pierre

    2012-01-01

    The need for better integration of the new generation of Computer-Assisted-Surgical (CAS) systems has been recently emphasized. One necessity to achieve this objective is to retrieve data from the Operating Room (OR) with different sensors, then to derive models from these data. Recently, the use of videos from cameras in the OR has demonstrated its efficiency. In this paper, we propose a framework to assist in the development of systems for the automatic recognition of high-level surgical tasks using microscope video analysis. We validated its use on cataract procedures. The idea is to combine state-of-the-art computer vision techniques with time series analysis. The first step of the framework consisted in the definition of several visual cues for extracting semantic information, therefore characterizing each frame of the video. Five different image-based classifiers were therefore implemented. A step of pupil segmentation was also applied for dedicated visual cue detection. Time series classification algorithms were then applied to model time-varying data. Dynamic Time Warping (DTW) and Hidden Markov Models (HMM) were tested. This combination brought together the advantages of all methods for a better understanding of the problem. The framework was finally validated through various studies. Six binary visual cues were chosen along with 12 phases to detect, obtaining accuracies of 94%. PMID:22203700
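
    Of the two time-series models tested, Dynamic Time Warping is the simpler; a compact sketch of the classic algorithm for aligning two cue signals that unfold at different speeds (illustrative, not the paper's implementation):

```python
def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance between
    two 1-D sequences: the minimum total cost of a monotone alignment,
    letting a surgical phase's cue signal stretch or compress in time."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

    A recorded procedure is then classified by the reference phase sequence to which its cue signal has the smallest warped distance.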

  17. Optimal JPWL Forward Error Correction Rate Allocation for Robust JPEG 2000 Images and Video Streaming over Mobile Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Benoit Macq

    2008-07-01

    Full Text Available Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANETs. The packet-based proposed scheme has low complexity and is compliant with JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.
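
    The building block of any FEC rate allocation of this kind is the probability that an (n, k) erasure code survives a given packet-loss rate; a sketch assuming independent losses (real MANET traces, which the scheme above is derived from, are typically burstier than this):

```python
from math import comb

def rs_recovery_prob(n, k, loss_rate):
    """Probability that a Reed-Solomon-style (n, k) erasure code recovers
    a source block: recovery succeeds whenever at most n - k of the n
    transmitted packets are lost, assuming independent losses."""
    return sum(
        comb(n, i) * loss_rate**i * (1 - loss_rate) ** (n - i)
        for i in range(n - k + 1)
    )
```

    A rate allocator sweeps candidate (n, k) pairs and picks, per quality layer, the cheapest redundancy whose recovery probability meets the QoS target.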

  18. Social Evaluations of Stereotypic Images in Video Games: Unfair, Legitimate, or "Just Entertainment"?

    Science.gov (United States)

    Brenick, Alaina; Henning, Alexandra; Killen, Melanie; O'Connor, Alexander; Collins, Michael

    2007-01-01

    The aim of this study is to assess late adolescents' evaluations of and reasoning about gender stereotypes in video games. Female (n = 46) and male (n = 41) students, predominantly European American, with a mean age 19 years, are interviewed about their knowledge of game usage, awareness and evaluation of stereotypes, beliefs about the influences…

  19. The Moving Image in Education Research: Reassembling the Body in Classroom Video Data

    Science.gov (United States)

    de Freitas, Elizabeth

    2016-01-01

    While audio recordings and observation might have dominated past decades of classroom research, video data is now the dominant form of data in the field. Ubiquitous videography is standard practice today in archiving the body of both the teacher and the student, and vast amounts of classroom and experiment clips are stored in online archives. Yet…

  20. Video monitoring in the Gadria debris flow catchment: preliminary results of large scale particle image velocimetry (LSPIV)

    Science.gov (United States)

    Theule, Joshua; Crema, Stefano; Comiti, Francesco; Cavalli, Marco; Marchi, Lorenzo

    2015-04-01

    Large scale particle image velocimetry (LSPIV) is a technique mostly used in rivers to measure two-dimensional velocities from high-resolution images at high frame rates. This technique still needs to be thoroughly explored in the field of debris flow studies. The Gadria debris flow monitoring catchment in Val Venosta (Italian Alps) has been equipped with four MOBOTIX M12 video cameras. Two cameras are located in a sediment trap close to the alluvial fan apex, one looking upstream and the other looking down, more perpendicular to the flow. The third camera is in the next reach upstream from the sediment trap, in closer proximity to the flow. These three cameras are connected to a field shelter equipped with a power supply and a server collecting all the monitoring data. The fourth camera is located in an active gully and is activated by a rain gauge after one minute of rainfall. Before LSPIV can be used, the highly distorted images need to be corrected and accurate reference points need to be established. We decided to use IMGRAFT (an open-source image georectification toolbox), which corrects distorted images using reference points and the camera location, and then rectifies the batch of images onto a DEM grid (or the DEM grid onto the image coordinates). With the orthorectified images, we used the freeware Fudaa-LSPIV (developed by EDF, IRSTEA, and the DeltaCAD Company) to perform the LSPIV calculations for the flow events. Calculated velocities can easily be checked manually thanks to the already orthorectified images. During the monitoring program (running since 2011) we recorded three debris flow events at the sediment trap (each with very different surge dynamics). The camera in the gully came into operation in 2014 and recorded granular flows and rockfalls, for which particle tracking may be more appropriate for velocity measurements. The four cameras allow us to explore the limitations of camera distance, angle, frame rate, and image resolution.
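The core LSPIV step, locating the displacement of an interrogation window between two frames at the peak of a normalized cross-correlation surface, can be sketched as follows. This is a minimal illustration on synthetic data with hypothetical window and search-radius parameters; the actual Fudaa-LSPIV implementation is considerably more elaborate.

```python
import numpy as np

def window_displacement(frame_a, frame_b, center, win=16, search=8):
    """Estimate the (dy, dx) displacement of an interrogation window between
    two frames by exhaustive normalized cross-correlation over a search radius."""
    cy, cx = center
    a = frame_a[cy - win//2:cy + win//2, cx - win//2:cx + win//2].astype(float)
    a = (a - a.mean()) / (a.std() + 1e-9)
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            b = frame_b[cy + dy - win//2:cy + dy + win//2,
                        cx + dx - win//2:cx + dx + win//2].astype(float)
            b = (b - b.mean()) / (b.std() + 1e-9)
            score = (a * b).mean()        # normalized cross-correlation
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

# Synthetic check: a textured frame shifted by (2, -3) pixels.
rng = np.random.default_rng(0)
f0 = rng.random((64, 64))
f1 = np.roll(f0, shift=(2, -3), axis=(0, 1))
print(window_displacement(f0, f1, center=(32, 32)))  # -> (2, -3)
```

Dividing the displacement by the inter-frame time and the ground sampling distance of the orthorectified images would yield a surface velocity.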

  1. An efficient HW and SW design of H.264 video compression, storage and playback on FPGA devices for handheld thermal imaging systems

    Science.gov (United States)

    Gunay, Omer; Ozsarac, Ismail; Kamisli, Fatih

    2017-05-01

    Video recording is an essential property of new-generation military imaging systems. Playback of the stored video on the same device is also desirable, as it provides several operational benefits to end users. Two very important constraints for many military imaging systems, especially for hand-held devices and thermal weapon sights, are power consumption and size. To meet these constraints, it is essential to perform most of the processing applied to the video signal, such as preprocessing, compression, storage, decoding, playback and other system functions, on a single programmable chip, such as an FPGA, DSP, GPU or ASIC. In this work, H.264/AVC (Advanced Video Coding) compatible video compression, storage, decoding and playback blocks are efficiently designed and implemented on FPGA platforms using FPGA fabric and the Altera NIOS II soft processor. Many subblocks used in video encoding are reused during video decoding in order to save FPGA resources and power. Computationally complex blocks are designed in FPGA fabric, while tasks such as SD card write/read, H.264 syntax decoding and CAVLC decoding run on the NIOS processor to benefit from software flexibility. In addition, to keep power consumption low, the system was designed to require limited external memory access. The design was tested with a 640x480, 25 fps thermal camera on a CYCLONE V FPGA, Altera's lowest-power FPGA family, and uses less than 40% of the CYCLONE V 5CEFA7 FPGA resources on average.

  2. Capturing and displaying microscopic images used in medical diagnostics and forensic science using 4K video resolution - an application in higher education.

    Science.gov (United States)

    Maier, Hans; de Heer, Gert; Ortac, Ajda; Kuijten, Jan

    2015-11-01

    To support the analysis, interpretation and evaluation of microscopic images used in medical diagnostics and forensic science, video images for educational purposes were made at a very high resolution of 4096 × 2160 pixels (4K), roughly four times as many pixels as High-Definition Video (1920 × 1080 pixels). The unprecedented high resolution makes it possible to see details that remain invisible in any other video format. The images of the specimens (blood cells, tissue sections, hair, fibre, etc.) were recorded using a 4K video camera attached to a light microscope. After processing, this resulted in very sharp and highly detailed images. This material was then used in education for classroom discussion. Spoken explanation by experts in the field of medical diagnostics and forensic science was also added to the high-resolution video images to make them suitable for self-study. © 2015 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.

  3. The advantages of using photographs and video images in telephone consultations with a specialist in paediatric surgery

    Directory of Open Access Journals (Sweden)

    Ibrahim Akkoyun

    2012-01-01

    Full Text Available Background: The purpose of this study was to evaluate the advantages of a telephone consultation with a specialist in paediatric surgery after photographs and video images had been taken by a general practitioner for the diagnosis of certain diseases. Materials and Methods: This was a prospective study of the reliability of paediatric surgery online consultation between specialists and general practitioners. Results: Of the 26 general practitioners included in the study, 12 were working in the city and 14 were working in districts outside the city. A total of 41 pictures and 3 videos of 38 patients were sent and evaluated together with the medical history and clinical findings. These patients were diagnosed with umbilical granuloma (n = 6), physiological/pathological phimosis (n = 6), balanitis (n = 6), hydrocele (n = 6), umbilical hernia (n = 4), smegma cyst (n = 2), reducible inguinal hernia (n = 1), incarcerated inguinal hernia (n = 1), paraphimosis (n = 1), buried penis (n = 1), hypospadias (n = 1), epigastric hernia (n = 1), vulvar synechia (n = 1), and rectal prolapse (n = 1). Twelve patients were asked to be referred urgently, but it was suggested that only two of these patients, who had paraphimosis and incarcerated inguinal hernia, be referred under emergency conditions. It was decided that there was no need for the other ten patients to be referred to a specialist at night or at the weekend. All diagnoses were confirmed to be correct when the patients underwent examination in the paediatric surgery clinic under elective conditions. Conclusion: Evaluation of photographs and video images of a lesion, together with the medical history and clinical findings, via a telephone consultation between a paediatric surgery specialist and a general practitioner provides a definitive diagnosis and prevents patients from being referred unnecessarily.

  4. Compression of Video-Otoscope Images for Tele-Otology: A Pilot Study

    Science.gov (United States)

    2001-10-25

    algorithm used in image compression is the one developed by the Joint Photographic Experts Group (JPEG), which has been deployed in almost all imaging ...recognised the image, nor go back to view the previous images. This was designed to minimise the effect of memory. After the assessments were tabulated ...also have contributed, such as the memory effect, or the experience of the assessor. V. CONCLUSION 1. Images can probably be compressed to about

  5. Social Evaluations of Stereotypic Images in Video Games: Unfair, Legitimate, or "Just Entertainment"?

    Science.gov (United States)

    Brenick, Alaina; Henning, Alexandra; Killen, Melanie; O'Connor, Alexander; Collins, Michael

    2007-06-01

    The aim of this study was to assess adolescents' evaluations of, and reasoning about, gender stereotypes in video games. Female (N = 46) and male (N = 41) participants, predominantly European-American (mean age = 19 years), were interviewed about their knowledge of game usage, awareness and evaluation of stereotypes, beliefs about the influences of games on the players, and authority jurisdiction over three different types of games: games with negative male stereotypes, games with negative female stereotypes, and gender-neutral games. Gender differences were found in how participants evaluated these games. Males were more likely than females to find stereotypes acceptable. Results are discussed in terms of social reasoning, video game playing, and gender differences.

  6. Social Evaluations of Stereotypic Images in Video Games: Unfair, Legitimate, or “Just Entertainment”?

    Science.gov (United States)

    Brenick, Alaina; Henning, Alexandra; Killen, Melanie; O'Connor, Alexander; Collins, Michael

    2015-01-01

    The aim of this study was to assess adolescents' evaluations of, and reasoning about, gender stereotypes in video games. Female (N = 46) and male (N = 41) participants, predominantly European-American (mean age = 19 years), were interviewed about their knowledge of game usage, awareness and evaluation of stereotypes, beliefs about the influences of games on the players, and authority jurisdiction over three different types of games: games with negative male stereotypes, games with negative female stereotypes, and gender-neutral games. Gender differences were found in how participants evaluated these games. Males were more likely than females to find stereotypes acceptable. Results are discussed in terms of social reasoning, video game playing, and gender differences. PMID:25722501

  7. Concurrent Calculations on Reconfigurable Logic Devices Applied to the Analysis of Video Images

    Directory of Open Access Journals (Sweden)

    Sergio R. Geninatti

    2010-01-01

    Full Text Available This paper presents the design and implementation on FPGA devices of an algorithm for computing similarities between neighboring frames in a video sequence using luminance information. By taking advantage of the well-known flexibility of Reconfigurable Logic Devices, we have designed a hardware implementation of the algorithm used in video segmentation and indexing. The experimental results show the tradeoff between concurrent and sequential resources and the functional blocks needed to achieve maximum operational speed while achieving minimum silicon area usage. To evaluate system efficiency, we compare the performance of the hardware solution to that of calculations done in software using general-purpose processors with and without an SIMD instruction set.
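The luminance-based frame-similarity idea behind such video segmentation can be illustrated in software. This is a hedged sketch using histogram intersection (the record does not give the paper's exact measure, and the bin count is a hypothetical parameter): a sharp drop in similarity between neighboring frames suggests a shot boundary.

```python
import numpy as np

def luminance_similarity(frame_a, frame_b, bins=64):
    """Similarity in [0, 1] between two 8-bit grayscale frames, computed as
    the intersection of their normalized luminance histograms (1 = identical
    distributions, 0 = disjoint)."""
    ha, _ = np.histogram(frame_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(frame_b, bins=bins, range=(0, 256))
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    return float(np.minimum(ha, hb).sum())

# Synthetic check: two frames from one dark shot vs. a frame after a hard cut.
rng = np.random.default_rng(1)
dark = rng.integers(0, 96, size=(48, 64))       # frame from one shot
bright = rng.integers(160, 256, size=(48, 64))  # frame after a cut
print(round(luminance_similarity(dark, dark), 6))    # -> 1.0
print(round(luminance_similarity(dark, bright), 6))  # -> 0.0
```

A hardware implementation would compute the same histograms with counters in FPGA fabric; the arithmetic itself is deliberately simple.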

  8. Social Evaluations of Stereotypic Images in Video Games: Unfair, Legitimate, or “Just Entertainment”?

    OpenAIRE

    Brenick, Alaina; Henning, Alexandra; Killen, Melanie; O'Connor, Alexander; Collins, Michael

    2007-01-01

    The aim of this study was to assess adolescents' evaluations of, and reasoning about, gender stereotypes in video games. Female (N = 46) and male (N = 41) participants, predominantly European-American (mean age = 19 years), were interviewed about their knowledge of game usage, awareness and evaluation of stereotypes, beliefs about the influences of games on the players, and authority jurisdiction over three different types of games: games with negative male stereotypes, games with negative female stereotyp...

  9. Photometric-Photogrammetric Analysis of Video Images of a Venting of Water from Space Shuttle Discovery

    Science.gov (United States)

    1990-06-15

    simulations), which are accompanied by a much less-dense cloud of submicron ice droplets produced when the evaporated/sublimed water gas overexpands and ...Focus, pan and tilt angles, and angular field are controlled from the crew cabin with the aid of a monochrome video monitor. (Some of these cameras ...ice particles when this gas has become overexpanded. 2) The angular spreads of the two types of particle are the same within experimental uncertainty

  10. Head-motion-controlled video goggles: preliminary concept for an interactive laparoscopic image display (i-LID).

    Science.gov (United States)

    Aidlen, Jeremy T; Glick, Sara; Silverman, Kenneth; Silverman, Harvey F; Luks, Francois I

    2009-08-01

    Light-weight, low-profile, and high-resolution head-mounted displays (HMDs) now allow personalized viewing of a laparoscopic image. The advantages include unobstructed viewing, regardless of position at the operating table, and the possibility to customize the image (i.e., enhanced reality, picture-in-picture, etc.). The bright image display allows use in daylight surroundings, and the low profile of the HMD provides adequate peripheral vision. Theoretic disadvantages include reliance of all viewers on the same captured image, and anti-cues (i.e., a reality disconnect) when the projected image remains static despite changes in head position. This can lead to discomfort and even nausea. We have developed a prototype of an interactive laparoscopic image display that allows hands-free control of the displayed image through changes in the spatial orientation of the operator's head. The prototype consists of an HMD, a spatial orientation device, and computer software enabling hands-free panning and zooming of a video-endoscopic image display. The spatial orientation device uses magnetic fields created by a transmitter and receiver, each containing three orthogonal coils. The transmitter coils are efficiently driven, using USB power only, by a newly developed circuit, each at a unique frequency. The HMD-mounted receiver system links to a commercially available PC-interface PCI-bus sound card (M-Audiocard Delta 44; Avid Technology, Tewksbury, MA). Analog signals at the receiver are filtered, amplified, and converted to digital signals, which are processed to control the image display. The prototype uses a proprietary static fish-eye lens and software for the distortion-free reconstitution of any portion of the captured image. Left-right and up-down motions of the head (and HMD) produce real-time panning of the displayed image. Motion of the head toward, or away from, the transmitter causes real-time zooming in or out, respectively, of the displayed image. This prototype of the interactive HMD

  11. Video compression and DICOM proxies for remote viewing of DICOM images

    Science.gov (United States)

    Khorasani, Elahe; Sheinin, Vadim; Paulovicks, Brent; Jagmohan, Ashish

    2009-02-01

    Digital medical images are rapidly growing in size and volume. A typical study includes multiple image "slices." These images have a special format and a communication protocol referred to as DICOM (Digital Imaging and Communications in Medicine). Storing, retrieving, and viewing these images are handled by DICOM-enabled systems. DICOM images are stored in central repository servers called PACS (Picture Archival and Communication Systems). Remote viewing stations are DICOM-enabled applications that can query the PACS servers and retrieve the DICOM images for viewing. Modern medical images are quite large, reaching as much as 1 GB per file. When the viewing station is connected to the PACS server via a high-bandwidth LAN, downloading the images is relatively efficient and does not cause significant wasted time for physicians. Problems arise when the viewing station is located in a remote facility that has a low-bandwidth link to the PACS server. If the link between the PACS and the remote facility is in the range of 1 Mbit/sec, downloading medical images is very slow. To overcome this problem, medical images are compressed to reduce their size for transmission. This paper describes a method of compression that maintains the diagnostic quality of images while significantly reducing the volume to be transmitted, without any change to the existing PACS servers and viewer software, and without requiring any change in the way doctors retrieve and view images today.

  12. Endoscopic Trimodal Imaging Detects Colonic Neoplasia as Well as Standard Video Endoscopy

    NARCIS (Netherlands)

    Kuiper, Teaco; van den Broek, Frank J. C.; Naber, Anton H.; van Soest, Ellert J.; Scholten, Pieter; Mallant-Hent, Rosalie Ch; van den Brande, Jan; Jansen, Jeroen M.; van Oijen, Arnoud H. A. M.; Marsman, Willem A.; Bergman, Jacques J. G. H. M.; Fockens, Paul; Dekker, Evelien

    2011-01-01

    BACKGROUND & AIMS: Endoscopic trimodal imaging (ETMI) is a novel endoscopic technique that combines high-resolution endoscopy (HRE), autofluorescence imaging (AFI), and narrow-band imaging (NBI) that has only been studied in academic settings. We performed a randomized, controlled trial in a

  13. Intraoperative stereoscopic 3D video imaging: pushing the boundaries of surgical visualisation and applications for neurosurgical education.

    Science.gov (United States)

    Heath, Michael D; Cohen-Gadol, Aaron A

    2012-10-01

    In the past decades, we have witnessed waves of interest in three-dimensional (3D) stereoscopic imaging. Previously, the complexity associated with 3D technology led to its absence from the operating room. But recently, the public's resurgent interest in this imaging modality has revived its exploration in surgery. Technological advances have also paved the way for the incorporation of 3D stereoscopic imaging in neurosurgical education. Herein, the authors discuss the advantages of intraoperative 3D recording and display for neurosurgical learning and contemplate its future directions based on their experience with 3D technology and a review of the literature. Potential benefits of stereoscopic displays include an enhancement of subjective image quality, proper identification of the structure of interest from surrounding tissues, and improved surface detection and depth judgment. Such benefits are critical during the intraoperative decision-making process and proper handling of the lesion (specifically, for surgery on aneurysms and tumours), and should therefore be available to the observers in the operating room and residents in training. Our trainees can relive the intraoperative experience of the primary surgeon by reviewing the recorded stereoscopic 3D videos. Proper 3D knowledge of surgical anatomy is important for operative success. 3D stereoscopic viewing of this anatomy may accelerate the learning curve of trainees and improve the standards of surgical teaching. More objective studies are needed to further establish the value of 3D technology in neurosurgical education.

  14. Efficient video panoramic image stitching based on an improved selection of Harris corners and a multiple-constraint corner matching.

    Directory of Open Access Journals (Sweden)

    Minchen Zhu

    Full Text Available Video panoramic image stitching is extremely time-consuming, among other challenges. We present a new algorithm: (i) Improved, self-adaptive selection of Harris corners. Successful stitching relies heavily on the accuracy of corner selection. We fragment each image into numerous regions and select corners within each region according to the normalized variance of the region grayscales. Such a selection is self-adaptive and guarantees that corners are distributed in proportion to region texture information. The possible clustering of corners is also avoided. (ii) Multiple-constraint corner matching. The traditional Random Sample Consensus (RANSAC) algorithm is inefficient, especially when handling a large number of images with similar features. We filter out many inappropriate corners according to their position information, and then generate candidate matching pairs based on the grayscales of adjacent regions around corners. Finally, we apply multiple constraints on every two pairs to remove incorrectly matched pairs. With a significantly reduced number of iterations needed in RANSAC, the stitching can be performed in a much more efficient manner. Experiments demonstrate that (i) our corner matching is four times faster than the normalized cross-correlation function (NCC) rough match in RANSAC and (ii) the generated panoramas feature a smooth transition in overlapping image areas and satisfy real-time human visual requirements.
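The region-adaptive corner selection described in step (i) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the grid size and corner budget are hypothetical parameters, and a basic Harris response with 3x3 box smoothing stands in for whatever detector variant the paper uses. Each grid cell receives a share of the corner budget proportional to its normalized grayscale variance, which spreads corners over textured regions and avoids clustering.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response from the local structure tensor (3x3 box smoothing)."""
    iy, ix = np.gradient(img.astype(float))
    def box3(a):  # 3x3 box filter via shifted sums
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
    sxx, syy, sxy = box3(ix * ix), box3(iy * iy), box3(ix * iy)
    return sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2

def region_adaptive_corners(img, grid=4, total=64):
    """Allocate the corner budget to each grid cell in proportion to its
    grayscale variance, then keep the strongest Harris responses per cell."""
    r = harris_response(img)
    h, w = img.shape
    gh, gw = h // grid, w // grid
    cells = [(img[i*gh:(i+1)*gh, j*gw:(j+1)*gw].var(), i, j)
             for i in range(grid) for j in range(grid)]
    total_var = sum(c[0] for c in cells) + 1e-12
    corners = []
    for var, i, j in cells:
        budget = int(round(total * var / total_var))
        if budget == 0:
            continue  # flat cell: no corners allocated
        cell_r = r[i*gh:(i+1)*gh, j*gw:(j+1)*gw]
        flat = np.argsort(cell_r.ravel())[-budget:]
        ys, xs = np.unravel_index(flat, cell_r.shape)
        corners += [(i*gh + y, j*gw + x) for y, x in zip(ys, xs)]
    return corners

# Synthetic check: texture only in the top-left quadrant attracts all corners.
rng = np.random.default_rng(2)
img = np.zeros((64, 64))
img[:32, :32] = rng.random((32, 32))
pts = region_adaptive_corners(img, grid=2, total=32)
print(all(y < 32 and x < 32 for y, x in pts))  # -> True
```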

  15. A New Distance Measure Based on Generalized Image Normalized Cross-Correlation for Robust Video Tracking and Image Recognition.

    Science.gov (United States)

    Nakhmani, Arie; Tannenbaum, Allen

    2013-02-01

    We propose two novel distance measures, normalized between 0 and 1, based on normalized cross-correlation for image matching. These distance measures explicitly utilize the fact that for natural images there is a high correlation between spatially close pixels. Image matching is used in various computer vision tasks, and the requirements on the distance measure are application-dependent. Image recognition applications require measures that are more robust to shift and rotation. In contrast, registration and tracking applications require better localization and noise tolerance. In this paper, we explore the different advantages of our distance measures and compare them to other popular measures, including Normalized Cross-Correlation (NCC) and Image Euclidean Distance (IMED). We show which of the proposed measures is more appropriate for tracking, and which is appropriate for image recognition tasks.
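The record does not give the authors' generalized measures; as a baseline illustration, plain NCC can be mapped to a distance normalized between 0 and 1 in the same spirit. This is a sketch, not the paper's proposed measures: d = (1 - NCC) / 2, which is 0 for perfectly correlated patches and 1 for perfectly anti-correlated ones, and inherits NCC's invariance to affine intensity changes.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches, in [-1, 1]."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def ncc_distance(a, b):
    """Map NCC to a distance in [0, 1]: 0 = identical up to gain/offset,
    1 = perfectly anti-correlated."""
    return (1.0 - ncc(a, b)) / 2.0

rng = np.random.default_rng(3)
patch = rng.random((16, 16))
print(round(ncc_distance(patch, patch), 6))          # -> 0.0
print(round(ncc_distance(patch, -patch), 6))         # -> 1.0
print(round(ncc_distance(patch, 2 * patch + 5), 6))  # -> 0.0 (gain/offset invariant)
```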

  16. The effects of physique-salient and physique non-salient exercise videos on women's body image, self-presentational concerns, and exercise motivation.

    Science.gov (United States)

    Ginis, Kathleen A Martin; Prapavessis, Harry; Haase, Anne M

    2008-06-01

    This experiment examined the effects of exposure to physique-salient (PS) and physique non-salient (PNS) exercise videos and the moderating influence of perceived physique discrepancies, on body image, self-presentational concerns, and exercise motivation. Eighty inactive women (M age=26) exercised to a 30 min instructional exercise video. In the PS condition, the video instructor wore revealing attire that emphasized her thin and toned physique. In the PNS condition, she wore attire that concealed her physique. Participants completed pre- and post-exercise measures of body image, social physique anxiety (SPA) and self-presentational efficacy (SPE) and a post-exercise measure of exercise motivation and perceived discrepancies with the instructor's body. No main or moderated effects emerged for video condition. However, greater perceived negative discrepancies were associated with poorer post-exercise body satisfaction and body evaluations, and higher state SPA. There were no effects on SPE or motivation. Results suggest that exercise videos that elicit perceived negative discrepancies can be detrimental to women's body images.

  17. SIFT-based dense pixel tracking on 0.35 T cine-MR images acquired during image-guided radiation therapy with application to gating optimization

    Energy Technology Data Exchange (ETDEWEB)

    Mazur, Thomas R., E-mail: tmazur@radonc.wustl.edu; Fischer-Valuck, Benjamin W.; Wang, Yuhe; Yang, Deshan; Mutic, Sasa; Li, H. Harold, E-mail: hli@radonc.wustl.edu [Department of Radiation Oncology, Washington University School of Medicine, 4921 Parkview Place, Campus Box 8224, St. Louis, Missouri 63110 (United States)]

    2016-01-15

    Purpose: To first demonstrate the viability of applying an image processing technique for tracking regions on low-contrast cine-MR images acquired during image-guided radiation therapy, and then to outline a scheme that uses the tracking data for optimizing gating results in a patient-specific manner. Methods: A first-generation MR-IGRT system, treating patients since January 2014, integrates a 0.35 T MR scanner into an annular gantry consisting of three independent Co-60 sources. Obtaining adequate frame rates for capturing relevant patient motion across large fields-of-view currently requires coarse in-plane spatial resolution. This study (1) initially investigated the feasibility of rapidly tracking dense pixel correspondences across single, sagittal-plane images (with both moderate signal-to-noise ratio and spatial resolution) using a matching objective over highly descriptive vectors, called scale-invariant feature transform (SIFT) descriptors, associated with all pixels, which describe intensity gradients in local regions around each pixel. To track features more accurately, (2) harmonic analysis was then applied to all pixel trajectories within a region-of-interest across a short training period. In particular, the procedure adjusts the motion of outlying trajectories whose relative spectral power within a frequency bandwidth consistent with respiration (or another form of periodic motion) does not exceed a threshold value that is manually specified following the training period. To evaluate the tracking reliability after applying this correction, conventional metrics, including Dice similarity coefficients (DSCs), mean tracking errors (MTEs), and Hausdorff distances (HDs), were used to compare target segmentations obtained via tracking to manually delineated segmentations. Upon confirming the viability of this descriptor-based procedure for reliably tracking features, the study (3) outlines a scheme for optimizing gating parameters, including relative target position and a
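The harmonic-analysis step, flagging trajectories whose relative spectral power in a respiration band falls below a manually specified threshold, can be sketched as follows. The band limits and threshold here are illustrative stand-ins, and the paper's full procedure goes on to adjust the flagged trajectories rather than merely identify them.

```python
import numpy as np

def band_power_fraction(traj, fs, band=(0.1, 0.5)):
    """Fraction of a (detrended) trajectory's spectral power lying inside a
    frequency band, e.g. roughly 0.1-0.5 Hz for quiet respiration."""
    x = np.asarray(traj, float)
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    total = power[1:].sum() + 1e-12      # skip the DC bin
    in_band = power[(freqs >= band[0]) & (freqs <= band[1])].sum()
    return float(in_band / total)

def flag_outlier_trajectories(trajs, fs, band=(0.1, 0.5), threshold=0.5):
    """Indices of trajectories whose relative band power falls below the
    manually chosen threshold (candidates for correction)."""
    return [i for i, t in enumerate(trajs)
            if band_power_fraction(t, fs, band) < threshold]

# Synthetic check: a 0.25 Hz respiratory trajectory vs. broadband noise.
fs = 4.0                                  # frames per second
t = np.arange(0, 32, 1.0 / fs)
breathing = np.sin(2 * np.pi * 0.25 * t)  # periodic respiratory motion
rng = np.random.default_rng(4)
noise = rng.standard_normal(len(t))       # outlying, non-periodic trajectory
print(flag_outlier_trajectories([breathing, noise], fs))  # -> [1]
```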

  18. Use of 64 kbits/s digital channel for image transmission: Using low scan two-way video

    Science.gov (United States)

    Rahko, K.; Hongyan, L.; Kley, M.; Peuhkuri, M.; Rahko, M.

    1993-09-01

    At the seminar 'Competition in Telecommunications in Finland' on September 3rd, 1993, a test of two-way image transfer over a 64 kbits/s digital channel was carried out. With the help of the Helsinki Telephone Company, a portrait was transferred to the lecture hall using Vistacom videophones, a Nokia and Siemens ISDN exchange, and Nokia and Siemens user terminal equipment. It was shown on a screen through a video projector, so all visitors could see it. For studies of human factors in telecommunications, every attendee was asked to comment on the transfer quality. The report presents the results of the survey and a brief assessment of the technology.

  19. An innovative technique for recording picture-in-picture ultrasound videos.

    Science.gov (United States)

    Rajasekaran, Sathish; Finnoff, Jonathan T

    2013-08-01

    Many ultrasound educational products and ultrasound researchers present diagnostic and interventional ultrasound information using picture-in-picture videos, which simultaneously show the ultrasound image and transducer and patient positions. Traditional techniques for creating picture-in-picture videos are expensive, nonportable, or time-consuming. This article describes an inexpensive, simple, and portable way of creating picture-in-picture ultrasound videos. This technique uses a laptop computer with a video capture device to acquire the ultrasound feed. Simultaneously, a webcam captures a live video feed of the transducer and patient position and live audio. Both sources are streamed onto the computer screen and recorded by screen capture software. This technique makes the process of recording picture-in-picture ultrasound videos more accessible for ultrasound educators and researchers for use in their presentations or publications.

  20. 2011 Tohoku tsunami video and TLS based measurements: hydrographs, currents, inundation flow velocities, and ship tracks

    Science.gov (United States)

    Fritz, H. M.; Phillips, D. A.; Okayasu, A.; Shimozono, T.; Liu, H.; Takeda, S.; Mohammed, F.; Skanavis, V.; Synolakis, C. E.; Takahashi, T.

    2012-12-01

    The March 11, 2011, magnitude Mw 9.0 earthquake off the coast of the Tohoku region caused catastrophic damage and loss of life in Japan. The mid-afternoon tsunami arrival, combined with survivors equipped with cameras on top of vertical evacuation buildings, provided spontaneous, spatially and temporally resolved inundation recordings. This report focuses on the surveys at 9 tsunami eyewitness video recording locations in Miyako, Kamaishi, Kesennuma and Yoriisohama along Japan's Sanriku coast and the subsequent video image calibration, processing, tsunami hydrograph and flow velocity analysis. Selected tsunami video recording sites were explored, eyewitnesses interviewed and some ground control points recorded during the initial tsunami reconnaissance in April, 2011. A follow-up survey in June, 2011 focused on terrestrial laser scanning (TLS) at locations with high-quality eyewitness videos. We acquired precise topographic data using TLS at the video sites, producing a 3-dimensional "point cloud" dataset. A camera mounted on the Riegl VZ-400 scanner yields photorealistic 3D images. Integrated GPS measurements allow accurate georeferencing. The original video recordings were recovered from eyewitnesses and the Japanese Coast Guard (JCG). The analysis of the tsunami videos follows an adapted four-step procedure originally developed for the analysis of 2004 Indian Ocean tsunami videos at Banda Aceh, Indonesia (Fritz et al., 2006). The first step requires the calibration of the sector of view present in the eyewitness video recording based on ground control points measured in the LiDAR data. In a second step, the video image motion induced by the panning of the video camera was determined from subsequent images by particle image velocimetry (PIV) applied to fixed objects. The third step involves the transformation of the raw tsunami video images from image coordinates to world coordinates with a direct linear transformation (DLT) procedure. Finally, the instantaneous tsunami
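The image-to-world transformation in the third step can be illustrated with a planar simplification: a homography fitted to ground control points by least squares (the standard DLT linear system), then applied to map pixel coordinates to world coordinates. This is a sketch assuming all control points lie on one plane, such as the water surface; the full 3D DLT used in practice has more parameters.

```python
import numpy as np

def fit_homography(img_pts, world_pts):
    """Least-squares planar homography mapping image -> world coordinates
    from >= 4 ground control points, via the standard DLT linear system."""
    rows = []
    for (u, v), (x, y) in zip(img_pts, world_pts):
        rows.append([u, v, 1, 0, 0, 0, -u * x, -v * x, -x])
        rows.append([0, 0, 0, u, v, 1, -u * y, -v * y, -y])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    return vt[-1].reshape(3, 3)   # null-space vector = homography entries

def image_to_world(H, u, v):
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

# Synthetic check: recover a known mapping from 4 control points.
H_true = np.array([[2.0, 0.1, 5.0],
                   [0.0, 1.5, -3.0],
                   [0.001, 0.0, 1.0]])
img_pts = [(0, 0), (100, 0), (0, 100), (100, 100)]
world_pts = [image_to_world(H_true, u, v) for u, v in img_pts]
H = fit_homography(img_pts, world_pts)
u, v = 40.0, 70.0
x, y = image_to_world(H, u, v)
print(np.allclose((x, y), image_to_world(H_true, u, v)))  # -> True
```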

  1. Activity Detection and Retrieval for Image and Video Data with Limited Training

    Science.gov (United States)

    2015-06-10

    Number of graduating undergraduates funded by a DoD funded Center of Excellence grant for Education, Research and Engineering: The number of ...geometric snakes to segment the image into constant-intensity regions. The Chan-Vese framework proposes to partition the image f(x), x ∈ Ω ⊆ ℝ²

  2. Extracting a Good Quality Frontal Face Image from a Low-Resolution Video Sequence

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.

    2011-01-01

    Feeding low-resolution and low-quality images, from inexpensive surveillance cameras, to systems like, e.g., face recognition, produces erroneous and unstable results. Therefore, there is a need for a mechanism to bridge the gap between, on the one hand, low-resolution and low-quality images and, on the other hand, facial analysis systems. The proposed system in this paper deals with exactly this problem. Our approach is to apply a reconstruction-based super-resolution algorithm. Such an algorithm, however, has two main problems: first, it requires relatively similar images with not too much noise...
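Reconstruction-based super-resolution, in its simplest shift-and-add form, can be sketched as follows. This is an idealized illustration, not the paper's algorithm: it assumes the inter-frame shifts are known and the low-resolution frames are ideal point samples, whereas a real system must also estimate registration and handle blur and noise.

```python
import numpy as np

def shift_and_add(lr_frames, shifts, scale):
    """Naive reconstruction-based super-resolution: place each low-res pixel
    onto a (scale x) finer grid at its registered position and average any
    overlapping samples. Shifts are given in high-res pixel units."""
    h, w = lr_frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(lr_frames, shifts):
        ys = np.arange(h) * scale + dy
        xs = np.arange(w) * scale + dx
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    cnt[cnt == 0] = 1          # leave unobserved grid points at 0
    return acc / cnt

# Synthetic check: 2x SR from 4 frames sampled at the 4 sub-pixel offsets.
rng = np.random.default_rng(5)
hi = rng.random((32, 32))                         # ground-truth high-res scene
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
frames = [hi[dy::2, dx::2] for dy, dx in shifts]  # ideal low-res samplings
sr = shift_and_add(frames, shifts, scale=2)
print(np.allclose(sr, hi))  # -> True
```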

  3. Target recognition and scene interpretation in image/video understanding systems based on network-symbolic models

    Science.gov (United States)

    Kuvich, Gary

    2004-08-01

    Vision is only a part of a system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. These mechanisms provide reliable recognition if the object is occluded or cannot be recognized as a whole. It is hard to split the entire system apart, and reliable solutions to the target recognition problems are possible only within the solution of a more generic Image Understanding Problem. The brain reduces informational and computational complexities, using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. Biologically inspired Network-Symbolic representation, where both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible for such models. It converts visual information into relational Network-Symbolic structures, avoiding artificial precise computations of 3-dimensional models. Network-Symbolic Transformations derive abstract structures, which allow for invariant recognition of an object as an exemplar of a class. Active vision helps create consistent models. Attention, separation of figure from ground, and perceptual grouping are special kinds of network-symbolic transformations. Such Image/Video Understanding Systems will reliably recognize targets.

  4. Target recognition with image/video understanding systems based on active vision principle and network-symbolic models

    Science.gov (United States)

    Kuvich, Gary

    2004-08-01

    Vision is only a part of a larger system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. This mechanism provides reliable recognition if the target is occluded or cannot be recognized as a whole. It is hard to split the entire system apart, and reliable solutions to the target recognition problems are possible only within the solution of a more generic Image Understanding Problem. The brain reduces informational and computational complexities, using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. Biologically inspired Network-Symbolic representation, where both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, converts visual information into relational Network-Symbolic structures, avoiding artificial precise computations of 3-dimensional models. The logic of visual scenes can be captured in Network-Symbolic models and used for disambiguation of visual information. Network-Symbolic Transformations derive abstract structures, which allow for invariant recognition of an object as an exemplar of a class. Active vision helps build consistent, unambiguous models. Such Image/Video Understanding Systems will be able to reliably recognize targets in real-world conditions.

  5. Acquired Techniques

    DEFF Research Database (Denmark)

    Lunde Nielsen, Espen; Halse, Karianne

    2013-01-01

    Acquired Techniques - a Leap into the Archive, at Aarhus School of Architecture. In collaboration with Karianne Halse, James Martin and Mika K. Friis. Following in the footsteps of past travelers, this is a journey into the tools and techniques of the architectural process. The workshop will focus upon...

  6. Monochromatic blue light entrains diel activity cycles in the Norway lobster, Nephrops norvegicus (L.), as measured by automated video-image analysis

    Directory of Open Access Journals (Sweden)

    Jacopo Aguzzi

    2009-12-01

    There is growing interest in developing automated, non-invasive techniques for long-lasting, laboratory-based monitoring of behaviour in organisms from deep-water continental margins which are of ecological and commercial importance. We monitored the burrow emergence rhythms of the Norway lobster, Nephrops norvegicus, which included: (a) characterising the regulation of behavioural activity outside the burrow under monochromatic blue light-darkness (LD) cycles of 0.1 lx, recreating slope photic conditions (i.e. 200-300 m depth), and under constant darkness (DD), which is necessary for the study of the circadian system; (b) testing the performance of a newly designed digital video-image analysis system for tracking locomotor activity. We used infrared USB web cameras and customised software (in Matlab 7.1) to acquire and process digital frames of eight animals at a rate of one frame per minute under consecutive photoperiod stages of nine days each: LD, DD, and LD (subdivided into two stages, LD1 and LD2, for analysis purposes). The automated analysis allowed the production of time series of locomotor activity based on movements of the animals’ centroids. Data were studied with periodogram, waveform, and Fourier analyses. For the first time, we report robust diurnal burrow emergence rhythms during the LD period, which became weak in DD. Our results fit with field data accounting for midday peaks in catches at slope depths. The comparison of the present locomotor pattern with those recorded at different light intensities clarifies the regulation of the clock of N. norvegicus at different depths.
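The periodogram/Fourier step used to detect the diurnal rhythm can be sketched in a few lines: given an activity series sampled once per minute, the dominant period falls out of the power spectrum. The series below is synthetic (the study's data are not reproduced here), and the function name is our own.

```python
import numpy as np

# Synthetic locomotor-activity series: one sample per minute over 9 days,
# with a 24 h (1440 min) diurnal component plus noise — a stand-in for the
# centroid-based time series produced by the tracking software.
rng = np.random.default_rng(1)
minutes = np.arange(9 * 1440)
activity = 1.0 + np.sin(2 * np.pi * minutes / 1440.0) + 0.3 * rng.normal(size=minutes.size)

def dominant_period_minutes(series):
    """Return the period (in minutes) of the strongest spectral peak."""
    x = series - series.mean()                 # remove the DC component
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0)     # cycles per minute
    k = np.argmax(power[1:]) + 1               # skip the zero-frequency bin
    return 1.0 / freqs[k]

period = dominant_period_minutes(activity)     # ≈ 1440 min, i.e. a 24 h rhythm
```

On real data, the same peak-picking would flag the ~24 h component under LD and its weakening under DD.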

  7. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    Energy Technology Data Exchange (ETDEWEB)

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L [UT MD Anderson Cancer Center, Houston, TX (United States); UT Graduate School of Biomedical Sciences, Houston, TX (United States); Yang, J; Beadle, B [UT MD Anderson Cancer Center, Houston, TX (United States)

    2014-06-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.
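The triangulation step described above — recovering 3D structure from tracked surface points once camera poses are known — can be illustrated with standard linear (DLT) two-view triangulation. This is a generic sketch, not the authors' implementation; the projection matrices and point below are synthetic.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : 2D image observations of the same surface point
    Returns the 3D point in non-homogeneous coordinates.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Synthetic check: project a known point with two cameras, then recover it.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # translated camera
X_true = np.array([0.3, -0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0)
x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0)
x2 = x2[:2] / x2[2]
X_est = triangulate_point(P1, P2, x1, x2)
```

In the phantom experiment, accumulated camera-path error would degrade exactly this step, which is consistent with the authors' observation that 3D point locations were the least accurate part of the pipeline.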

  8. Smartphone based automatic organ validation in ultrasound video.

    Science.gov (United States)

    Vaish, Pallavi; Bharath, R; Rajalakshmi, P

    2017-07-01

    Telesonography involves the transmission of ultrasound video from remote areas to doctors for diagnosis. Due to the lack of trained sonographers in remote areas, the ultrasound videos scanned by untrained persons do not contain the information required by a physician. As compared to standard methods for video transmission, mHealth-driven systems need to be developed for transmitting valid medical videos. To overcome this problem, we propose an organ validation algorithm that evaluates an ultrasound video based on its content. This guides the semi-skilled person in acquiring representative data from the patient. Advances in smartphone technology allow us to perform demanding medical image processing on a smartphone. In this paper we have developed an application (app) for a smartphone which can automatically detect the valid frames (with clear organ visibility) in an ultrasound video, ignore the invalid frames (with no organ visibility), and produce a video of compressed size. This is done by extracting GIST features from the region of interest (ROI) of each frame and then classifying the frame using an SVM classifier with a quadratic kernel. The developed application classified valid and invalid images with an accuracy of 94.93%.
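The frame-classification step — an SVM with a quadratic kernel over GIST features — can be illustrated with a minimal quadratic-kernel classifier. As a sketch we use a dual-form kernel perceptron as a lightweight stand-in for the SVM, and random vectors as stand-ins for GIST features; none of this reproduces the paper's actual model.

```python
import numpy as np

def quad_kernel(U, V):
    """Degree-2 polynomial ("quadratic") kernel."""
    return (1.0 + U @ V.T) ** 2

def train_kernel_perceptron(X, y, epochs=50):
    """Dual-form perceptron with a quadratic kernel — a lightweight
    stand-in for the SVM classifier used in the paper."""
    alpha = np.zeros(len(X))
    K = quad_kernel(X, X)
    for _ in range(epochs):
        for i in range(len(X)):
            if np.sign((alpha * y) @ K[:, i]) != y[i]:
                alpha[i] += 1.0   # mistake-driven dual update
    return alpha

def classify(alpha, X_train, y_train, X_new):
    return np.sign((alpha * y_train) @ quad_kernel(X_train, X_new))

# Toy stand-ins for GIST feature vectors of valid / invalid frames.
rng = np.random.default_rng(0)
valid = rng.normal(loc=2.0, scale=0.5, size=(20, 4))      # clear organ visibility
invalid = rng.normal(loc=-2.0, scale=0.5, size=(20, 4))   # no organ visibility
X = np.vstack([valid, invalid])
y = np.array([1] * 20 + [-1] * 20)
alpha = train_kernel_perceptron(X, y)
preds = classify(alpha, X, y, X)
```

The quadratic kernel lets a linear decision rule in the induced feature space capture pairwise interactions between feature dimensions, which is why it can separate frame classes that are not linearly separable in the raw feature space.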

  9. Large-volume reconstruction of brain tissue from high-resolution serial section images acquired by SEM-based scanning transmission electron microscopy.

    Science.gov (United States)

    Kuwajima, Masaaki; Mendenhall, John M; Harris, Kristen M

    2013-01-01

    With recent improvements in instrumentation and computational tools, serial section electron microscopy has become increasingly straightforward. A new method for imaging ultrathin serial sections is developed based on a field emission scanning electron microscope fitted with a transmitted electron detector. This method is capable of automatically acquiring high-resolution serial images with a large field size and very little optical and physical distortion. In this chapter, we describe the procedures leading to the generation and analyses of a large-volume stack of high-resolution images (64 μm × 64 μm × 10 μm, or larger, at 2 nm pixel size), including how to obtain large-area serial sections of uniform thickness from well-preserved brain tissue that is rapidly perfusion-fixed with mixed aldehydes, processed with a microwave-enhanced method, and embedded into epoxy resin.

  10. Learning Trajectory for Transforming Teachers' Knowledge for Teaching Mathematics and Science with Digital Image and Video Technologies in an Online Learning Experience

    Science.gov (United States)

    Niess, Margaret L.; Gillow-Wiles, Henry

    2014-01-01

    This qualitative cross-case study explores the influence of a designed learning trajectory on transforming teachers' technological pedagogical content knowledge (TPACK) for teaching with digital image and video technologies. The TPACK Learning Trajectory embeds tasks with specific instructional strategies within a social metacognitive…

  11. Global adjustment for creating extended panoramic images in video-dermoscopy

    Science.gov (United States)

    Faraz, Khuram; Blondel, Walter; Daul, Christian

    2017-07-01

    This contribution presents a fast global adjustment scheme exploiting SURF descriptor locations for constructing large skin mosaics. Precision in pairwise image registration is well-preserved while significantly reducing the global mosaicing error.
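A pairwise registration step of the kind the global adjustment builds on can be sketched as a least-squares similarity transform (scale, rotation, translation) fitted to matched descriptor locations. The matched points below are synthetic stand-ins for SURF keypoints, and the function is our own illustration, not the authors' scheme.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 2D similarity transform mapping src points onto dst.

    Solves for p = (a, c, tx, ty) in  u = a*x - c*y + tx,  v = c*x + a*y + ty,
    which encodes scale*rotation as the matrix [[a, -c], [c, a]].
    """
    n = len(src)
    A = np.zeros((2 * n, 4))
    b = np.zeros(2 * n)
    A[0::2, 0] = src[:, 0]; A[0::2, 1] = -src[:, 1]; A[0::2, 2] = 1.0
    A[1::2, 0] = src[:, 1]; A[1::2, 1] =  src[:, 0]; A[1::2, 3] = 1.0
    b[0::2] = dst[:, 0]; b[1::2] = dst[:, 1]
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    a, c, tx, ty = p
    return np.array([[a, -c], [c, a]]), np.array([tx, ty])

# Synthetic check: rotate by 30°, scale by 1.2, translate, then recover.
rng = np.random.default_rng(2)
src = rng.uniform(0, 100, size=(30, 2))          # "keypoint" locations
theta = np.deg2rad(30)
M_true = 1.2 * np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
t_true = np.array([5.0, -3.0])
dst = src @ M_true.T + t_true
M_est, t_est = fit_similarity(src, dst)
```

A global adjustment would then jointly refine all such pairwise transforms so that the accumulated error over the whole mosaic, not just each pair, is minimized.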

  12. High Dynamic Range Video

    CERN Document Server

    Myszkowski, Karol

    2008-01-01

    This book presents a complete pipeline for HDR image and video processing from acquisition, through compression and quality evaluation, to display. At the HDR image and video acquisition stage specialized HDR sensors or multi-exposure techniques suitable for traditional cameras are discussed. Then, we present a practical solution for pixel values calibration in terms of photometric or radiometric quantities, which are required in some technically oriented applications. Also, we cover the problem of efficient image and video compression and encoding either for storage or transmission purposes, in
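The multi-exposure technique mentioned for traditional cameras can be sketched as a weighted merge of differently exposed shots into a relative radiance map. The sketch assumes a linear camera response and uses a simple "hat" weighting — a didactic simplification, not the book's full pipeline.

```python
import numpy as np

def merge_exposures(images, times):
    """Merge multi-exposure shots into a relative radiance map.

    Assumes a linear camera response; each pixel's radiance is estimated as a
    weighted average of (pixel value / exposure time), with a 'hat' weight that
    trusts mid-range values more than under-/over-exposed ones.
    """
    images = np.asarray(images, dtype=float)       # shape: (n_exposures, H, W)
    times = np.asarray(times, dtype=float)
    w = 1.0 - np.abs(images / 255.0 - 0.5) * 2.0   # hat weighting in [0, 1]
    w = np.clip(w, 1e-4, None)                     # avoid division by zero
    return (w * images / times[:, None, None]).sum(axis=0) / w.sum(axis=0)

# Synthetic check: one scene photographed at exposure times 1x, 2x and 4x;
# the longest exposure saturates the brightest pixel, but the merge recovers it.
scene = np.array([[10.0, 40.0], [80.0, 55.0]])     # "true" relative radiance
times = np.array([1.0, 2.0, 4.0])
shots = [np.clip(scene * t, 0, 255) for t in times]
hdr = merge_exposures(shots, times)
```

The hat weight is what makes the merge robust: a pixel clipped at 255 in the long exposure gets near-zero weight, so its radiance comes from the shorter exposures where it is well exposed.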

  13. Series of aerial images over Quivira National Wildlife Refuge, acquired on September 27th and 29th, 1981

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This data set is of seven georeferenced aerial images taken over Quivira National Wildlife Refuge on September 27th and 29th, 1981. This data set is a clipped,...

  14. Fuzzy-Based Segmentation for Variable Font-Sized Text Extraction from Images/Videos

    Directory of Open Access Journals (Sweden)

    Samabia Tehsin

    2014-01-01

    Textual information embedded in multimedia can provide a vital tool for indexing and retrieval. A lot of work has been done in the field of text localization and detection because of its fundamental importance. One of the biggest challenges of text detection is dealing with variation in font sizes and image resolution. This problem is aggravated by undersegmentation or oversegmentation of the regions in an image. The paper addresses this problem by proposing a solution using a novel fuzzy-based method. This paper advocates a postprocessing segmentation method that can solve the problem of variation in text sizes and image resolution. The methodology is tested on the ICDAR 2011 Robust Reading Challenge dataset, which amply proves the strength of the recommended method.
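The paper's specific fuzzy method is not reproduced here, but the general flavour of fuzzy-based segmentation can be illustrated with plain fuzzy c-means on pixel intensities, where each pixel gets graded memberships in every cluster rather than a hard label:

```python
import numpy as np

def fuzzy_cmeans(x, n_clusters=2, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means on 1-D samples (e.g. pixel intensities).

    Returns cluster centers and the fuzzy membership matrix (n_samples x c).
    """
    rng = np.random.default_rng(seed)
    u = rng.random((x.size, n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = u ** m
        # Centers are membership-weighted means of the samples.
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        # Standard FCM membership update: u ∝ d^(-2/(m-1)), then normalize.
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Toy intensities: a dark background cluster and a bright text cluster.
x = np.concatenate([np.full(50, 30.0) + np.arange(50) % 5,
                    np.full(50, 200.0) + np.arange(50) % 5])
centers, u = fuzzy_cmeans(x)
labels = np.argmax(u, axis=1)
```

The soft memberships are what a fuzzy postprocessing step can exploit: pixels near a region boundary carry intermediate membership values, so under- or oversegmented regions can be re-assigned instead of being locked into a hard label.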

  15. One decade of imaging precipitation measurement by 2D-video-distrometer

    Directory of Open Access Journals (Sweden)

    M. Schönhuber

    2007-01-01

    The 2D-Video-Distrometer (2DVD) is a ground-based point-monitoring precipitation gauge. From each particle reaching the measuring area, front and side contours as well as fall velocity and a precise time stamp are recorded. The 2DVD development was started in 1991 to clarify discrepancies found when comparing weather radar data analyses with literature models. Manufactured in a small-scale series, the first 2DVD was delivered in 1996, ten years ago. An overview of present 2DVD features is given, and it is presented how the instrument was continuously improved over the past ten years. Scientific merits of 2DVD measurements are explained, including drop size readings without an upper limit, drop shape and orientation angle information, contours of solid and melting particles, and an independent measurement of particles' fall velocity, also in mixed-phase events. Plans for a next-generation instrument are described; through enhanced user-friendliness, its unique data type shall be opened to a wider user community.

  16. SECRETS OF SONG VIDEO

    Directory of Open Access Journals (Sweden)

    Chernyshov Alexander V.

    2014-04-01

    The article focuses on the origins of the song video as a TV and Internet genre. In addition, it considers problems of screen image creation depending on the musical form and the text of a song, in connection with relevant principles of accent and phraseological video editing and filming techniques, as well as with additional frames and sound elements.

  17. Tumor tracking and dose reconstruction with MV images acquired during volumetric arc therapy treatments; Seguimiento tumoral y reconstruccion de dosis con imagenes de MV adquiridas durante tratamientos de arcoterapia volumetrica

    Energy Technology Data Exchange (ETDEWEB)

    Azcona Armendariz, J. D.; Li, R.; Xing, L.

    2015-07-01

    To develop a strategy for tumor tracking on MV images acquired with a flat panel and apply it to the characterization of motion and to dose reconstruction. The research was conducted using a Varian TrueBeam linear accelerator equipped with a megavoltage imaging system. Images of patients with prostate cancer treated with volumetric arc therapy were used. (Author)

  18. Laser Graphic Video Display using Silicon Scanning Mirrors with Vertical Comb Fingers

    Science.gov (United States)

    Lee, Jin-Ho; Ko, Young-Chul; Mun, Yong-Kweun; Choi, Byoung-So; Kim, Jong-Min; Jeon, Duk Young

    2002-09-01

    We acquired a two-dimensional (2D) laser vector graphic video image using 1500 μm × 1200 μm silicon scanning mirrors with vertical comb fingers. Vector image signals from the graphic board were applied to two scanning mirrors, and an SHG green laser was directly modulated to shape independent graphic images. These scanning mirrors were originally designed for laser raster video display as a galvanometric vertical scanner, and are controlled perfectly by a 60 Hz ramp waveform with a duty cycle of 90%.

  19. Luminal volume reconstruction from angioscopic video images of casts from human coronary arteries

    NARCIS (Netherlands)

    J.C.H. Schuurbiers (Johan); C.J. Slager (Cornelis); P.W.J.C. Serruys (Patrick)

    1994-01-01

    Intravascular angioscopy has been hampered by its limitation in quantifying obtained images. To circumvent this problem, a lightwire was used, which projects a ring of light onto the endoluminal wall in front of the angioscope. This investigation was designed to quantify luminal

  20. Embedded electronics for a video-rate distributed aperture passive millimeter-wave imager

    Science.gov (United States)

    Curt, Petersen F.; Bonnett, James; Schuetz, Christopher A.; Martin, Richard D.

    2013-05-01

    Optical upconversion for a distributed aperture millimeter wave imaging system is highly beneficial due to its superior bandwidth and limited susceptibility to EMI. These features mean the same technology can be used to collect information across a wide spectrum, as well as in harsh environments. Some practical uses of this technology include safety of flight in degraded visual environments (DVE), imaging through smoke and fog, and even electronic warfare. Using fiber-optics in the distributed aperture poses a particularly challenging problem with respect to maintaining coherence of the information between channels. In order to capture an image, the antenna aperture must be electronically steered and focused to a particular distance. Further, the state of the phased array must be maintained, even as environmental factors such as vibration, temperature and humidity adversely affect the propagation of the signals through the optical fibers. This phenomenon cannot be avoided or mitigated, but rather must be compensated for using a closed-loop control system. In this paper, we present an implementation of embedded electronics designed specifically for this purpose. This novel architecture is compact, scalable to many simultaneously operating channels, and sufficiently robust. We present our results, which include integration into a 220 channel imager and phase stability measurements as the system is stressed according to MIL-STD-810F vibration profiles of an H-53E heavy-lift helicopter.

  1. The Prediction of Position and Orientation Parameters of Uav for Video Imaging

    Science.gov (United States)

    Wierzbicki, D.

    2017-08-01

    The paper presents the results of predicting the position and orientation parameters of an unmanned aerial vehicle (UAV) equipped with a compact digital camera. The focus of this paper is achieving optimal accuracy and reliability of the geo-referenced video frames on the basis of data from the navigation sensors mounted on the UAV. In the experiments, two mathematical models were used for the prediction process: a polynomial model and a trigonometric model. The forecast values of the position and orientation of the UAV were compared with readings from the low-cost GPS and INS sensors mounted on the unmanned Trimble UX-5 platform. The research experiment was conducted on navigation data from 23 measuring epochs. The forecast coordinate values and rotation angles were compared in this paper with the actual readings of the Trimble UX-5 sensors. Based on the results of this comparison it was determined that: the best coordinate results for the unmanned aerial vehicle were obtained for the coordinate X, whereas the worst for the coordinate Y, for both prediction models; the standard deviation obtained for the XYZ coordinates from both prediction models does not exceed the admissible criterion of 10 m for the positional accuracy of an unmanned aircraft. The best results for the rotation angles of the unmanned aircraft were obtained for the Pitch angle, whereas the worst for the Heading and Roll angles, for both prediction models. The standard deviation obtained for the HPR rotation angles from both prediction models stays within the admissible accuracy of 5° only for the Pitch angle, but exceeds this value for the Heading and Roll angles.
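The polynomial prediction model can be sketched as fitting a low-degree polynomial to the coordinates of past measuring epochs and extrapolating one epoch ahead (the trigonometric model would swap in sine/cosine basis functions). The trajectory below is synthetic, not the UX-5 data.

```python
import numpy as np

# Synthetic X coordinate of the platform over 23 measuring epochs,
# following a smooth (quadratic) trajectory.
epochs = np.arange(23, dtype=float)
x_track = 0.5 * epochs ** 2 - 2.0 * epochs + 100.0

def predict_polynomial(t, values, t_next, degree=2):
    """Fit a degree-`degree` polynomial to past values and extrapolate."""
    coeffs = np.polyfit(t, values, degree)
    return np.polyval(coeffs, t_next)

# One-epoch-ahead forecast, to be compared against the sensor reading.
x_pred = predict_polynomial(epochs, x_track, 23.0)   # ≈ 318.5 for this track
```

Comparing such forecasts against the GPS/INS readings per epoch yields exactly the standard deviations the paper reports against the 10 m and 5° admissibility criteria.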

  2. Ventilator Data Extraction with a Video Display Image Capture and Processing System.

    Science.gov (United States)

    Wax, David B; Hill, Bryan; Levin, Matthew A

    2017-06-01

    Medical hardware and software device interoperability standards are not uniform. The result of this lack of standardization is that information available on clinical devices may not be readily or freely available for import into other systems for research, decision support, or other purposes. We developed a novel system to import discrete data from an anesthesia machine ventilator by capturing images of the graphical display screen and using image processing to extract the data with off-the-shelf hardware and open-source software. We were able to successfully capture and verify live ventilator data from anesthesia machines in multiple operating rooms and store the discrete data in a relational database at a substantially lower cost than vendor-sourced solutions.

  3. Simple luminosity normalization of greenness, yellowness and redness/greenness for comparison of leaf spectral profiles in multi-temporally acquired remote sensing images.

    Science.gov (United States)

    Doi, Ryoichi

    2012-09-01

    Observation of leaf colour (spectral profiles) through remote sensing is an effective method of identifying the spatial distribution patterns of abnormalities in leaf colour, which enables appropriate plant management measures to be taken. However, because the brightness of remote sensing images varies with acquisition time, in the observation of leaf spectral profiles in multi-temporally acquired remote sensing images, changes in brightness must be taken into account. This study identified a simple luminosity normalization technique that enables leaf colours to be compared in remote sensing images over time. The intensity values of green and yellow (green+red) exhibited strong linear relationships with luminosity (R2 greater than 0.926) when various invariant rooftops in Bangkok or Tokyo were spectral-profiled using remote sensing images acquired at different time points. The values of the coefficient and constant or the coefficient of the formulae describing the intensity of green or yellow were comparable among the single Bangkok site and the two Tokyo sites, indicating the technique's general applicability. For single rooftops, the values of the coefficient of variation for green, yellow, and red/green were 16% or less (n=6-11), indicating an accuracy not less than those of well-established remote sensing measures such as the normalized difference vegetation index. After obtaining the above linear relationships, raw intensity values were normalized and a temporal comparison of the spectral profiles of the canopies of evergreen and deciduous tree species in Tokyo was made to highlight the changes in the canopies' spectral profiles. Future aspects of this technique are discussed herein.
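The normalization step can be sketched as follows: fit the linear intensity-luminosity relationship over invariant targets (the rooftops), then shift each observation to a common reference luminosity. The exact normalization formula used in the study is not given here, so the one below is an assumption for illustration, as are the numbers.

```python
import numpy as np

def normalize_to_reference(intensity, luminosity, ref_luminosity):
    """Normalize channel intensities from differently-bright acquisitions.

    Fits intensity = a * luminosity + b over invariant targets, then shifts
    each observation to what the fit predicts at a common reference
    luminosity (an assumed form of the study's normalization).
    """
    a, b = np.polyfit(luminosity, intensity, 1)
    predicted = a * luminosity + b
    return intensity - predicted + (a * ref_luminosity + b)

# Synthetic invariant rooftops observed at different image brightnesses:
# a perfectly linear intensity-luminosity relationship (R^2 = 1).
luminosity = np.array([60.0, 90.0, 120.0, 150.0, 180.0])
green = 0.8 * luminosity + 5.0
normalized = normalize_to_reference(green, luminosity, ref_luminosity=120.0)
```

After normalization, intensities acquired under different brightness conditions become directly comparable, which is what permits the multi-temporal canopy comparison described above.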

  4. Overhead-Based Image and Video Geo-Localization Framework (Open Access)

    Science.gov (United States)

    2013-09-12

    States using 100 street-level query photos. The problem is very challenging because we are trying to match two heterogeneous image sources: a street...system on the whole Switzerland area. Bansal et al. [2] were able to match query street-level facades to airborne LIDAR imagery under challenging...cover imagery. This data covers various areas in the continental United States and the world, but our system tested two world regions within the

  5. First results for an image processing workflow for hyperspatial imagery acquired with a low-cost unmanned aerial vehicle (UAV).

    Science.gov (United States)

    Very high-resolution images from unmanned aerial vehicles (UAVs) have great potential for use in rangeland monitoring and assessment, because the imagery fills the gap between ground-based observations and remotely sensed imagery from aerial or satellite sensors. However, because UAV imagery is ofte...

  6. Innovative Solution to Video Enhancement

    Science.gov (United States)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  7. Evaluation of a System for High-Accuracy 3D Image-Based Registration of Endoscopic Video to C-Arm Cone-Beam CT for Image-Guided Skull Base Surgery

    Science.gov (United States)

    Mirota, Daniel J.; Uneri, Ali; Schafer, Sebastian; Nithiananthan, Sajendra; Reh, Douglas D.; Ishii, Masaru; Gallia, Gary L.; Taylor, Russell H.; Hager, Gregory D.; Siewerdsen, Jeffrey H.

    2014-01-01

    The safety of endoscopic skull base surgery can be enhanced by accurate navigation in preoperative computed tomography (CT) or, more recently, intraoperative cone-beam CT (CBCT). The ability to register real-time endoscopic video with CBCT offers an additional advantage by rendering information directly within the visual scene to account for intraoperative anatomical change. However, tracker localization error (~ 1–2 mm) limits the accuracy with which video and tomographic images can be registered. This paper reports the first implementation of image-based video-CBCT registration, conducts a detailed quantitation of the dependence of registration accuracy on system parameters, and demonstrates improvement in registration accuracy achieved by the image-based approach. Performance was evaluated as a function of parameters intrinsic to the image-based approach, including system geometry, CBCT image quality, and computational runtime. Overall system performance was evaluated in a cadaver study simulating transsphenoidal skull base tumor excision. Results demonstrated significant improvement (p < 0.001) in registration accuracy with a mean reprojection distance error of 1.28 mm for the image-based approach versus 1.82 mm for the conventional tracker-based method. Image-based registration was highly robust against CBCT image quality factors of noise and resolution, permitting integration with low-dose intraoperative CBCT. PMID:23372078

  8. PET imaging of HSV1-tk mutants with acquired specificity toward pyrimidine- and acycloguanosine-based radiotracers

    Energy Technology Data Exchange (ETDEWEB)

    Likar, Yury; Dobrenkov, Konstantin; Olszewska, Malgorzata; Shenker, Larissa; Hricak, Hedvig; Ponomarev, Vladimir [Memorial Sloan-Kettering Cancer Center, Molecular Imaging Laboratory, Department of Radiology, New York, NY (United States); Cai, Shangde [Memorial Sloan-Kettering Cancer Center, Radiochemistry/Cyclotron Core Facility, New York, NY (United States)

    2009-08-15

    The aim of this study was to create an alternative mutant of the herpes simplex virus type 1 thymidine kinase (HSV1-tk) reporter gene with reduced phosphorylation capacity for acycloguanosine derivatives, but not pyrimidine-based compounds that will allow for successful PET imaging. A new mutant of HSV1-tk reporter gene, suitable for PET imaging using pyrimidine-based radiotracers, was developed. The HSV1-tk mutant contains an arginine-to-glutamine substitution at position 176 (HSV1-R176Qtk) of the nucleoside binding region of the enzyme. The mutant-gene product showed favorable enzymatic characteristics toward pyrimidine-based nucleosides, while exhibiting reduced activity with acycloguanosine derivatives. In order to enhance HSV1-R176Qtk reporter activity with pyrimidine-based radiotracers, we introduced the R176Q substitution into the more active HSV1-sr39tk mutant. U87 human glioma cells transduced with the HSV1-R176Qsr39tk double mutant reporter gene showed high ³H-FEAU pyrimidine nucleoside and low ³H-penciclovir acycloguanosine analog uptake in vitro. PET imaging also demonstrated high ¹⁸F-FEAU and low ¹⁸F-FHBG accumulation in HSV1-R176Qsr39tk+ xenografts. The feasibility of imaging two independent nucleoside-specific HSV1-tk mutants in the same animal with PET was demonstrated. Two opposite xenografts expressing the HSV1-R176Qsr39tk reporter gene and the previously described acycloguanosine-specific mutant of HSV1-tk, HSV1-A167Ysr39tk reporter gene, were imaged using a short-lived pyrimidine-based ¹⁸F-FEAU and an acycloguanosine-based ¹⁸F-FHBG radiotracer, respectively, administered on 2 consecutive days. We conclude that in combination with acycloguanosine-specific HSV1-A167Ysr39tk reporter gene, a HSV1-tk mutant containing the R176Q substitution could be used for PET imaging of two different cell populations or concurrent molecular biological processes in the same living subject. (orig.)

  9. A 3-D nonlinear recursive digital filter for video image processing

    Science.gov (United States)

    Bauer, P. H.; Qian, W.

    1991-01-01

    This paper introduces a recursive 3-D nonlinear digital filter, which is capable of performing noise suppression without degrading important image information such as edges in space or time. It also has the property of unnoticeable bandwidth reduction immediately after a scene change, which makes the filter an attractive preprocessor to many interframe compression algorithms. The filter consists of a nonlinear 2-D spatial subfilter and a 1-D temporal filter. In order to achieve the required computational speed and increase the flexibility of the filter, all of the linear shift-variant filter modules are of the IIR type.
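A minimal version of such a filter's temporal stage can be sketched as a first-order recursion whose gain is a nonlinear function of the frame difference, so noise is smoothed heavily while temporal edges (motion, scene changes) pass through almost unchanged. This illustrates the principle only; it is not the paper's filter, and the gain function is our own choice.

```python
import numpy as np

def temporal_recursive_filter(frames, noise_sigma=5.0):
    """First-order temporal IIR filter with a nonlinear, motion-adaptive gain.

    Small frame-to-frame differences (likely noise) are smoothed heavily;
    large differences (motion or a scene change) pass through almost
    unchanged, so edges in time are not blurred.
    """
    frames = np.asarray(frames, dtype=float)
    out = np.empty_like(frames)
    out[0] = frames[0]
    for t in range(1, len(frames)):
        diff = frames[t] - out[t - 1]
        # Gain rises from ~0.1 (pure noise) toward 1.0 (clear motion).
        gain = 1.0 - 0.9 * np.exp(-(diff / (3.0 * noise_sigma)) ** 2)
        out[t] = out[t - 1] + gain * diff
    return out

# Static noisy sequence followed by an abrupt scene change.
rng = np.random.default_rng(3)
static = 100.0 + rng.normal(0, 5.0, size=(10, 8, 8))
changed = 200.0 + rng.normal(0, 5.0, size=(5, 8, 8))
filtered = temporal_recursive_filter(np.concatenate([static, changed]))
```

Because the gain jumps to ~1 at the scene change, the filter effectively restarts there — the behavior the abstract describes as unnoticeable bandwidth reduction immediately after a scene change.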

  10. A Robust Concurrent Approach for Road Extraction and Urbanization Monitoring Based on Superpixels Acquired from Spectral Remote Sensing Images

    Science.gov (United States)

    Seppke, Benjamin; Dreschler-Fischer, Leonie; Wilms, Christian

    2016-08-01

    The extraction of road signatures from remote sensing images, a promising indicator for urbanization, is a classical segmentation problem. However, some segmentation algorithms often lead to insufficient results. One way to overcome this problem is the use of superpixels, which represent locally coherent clusters of connected pixels. Superpixels allow flexible, highly adaptive segmentation approaches due to the possibility of merging as well as splitting, and form new basic image entities. On the other hand, superpixels require an appropriate representation containing all relevant information about topology and geometry to maximize their advantages. In this work, we present a combined geometric and topological representation based on a special graph representation, the so-called RS-graph. Moreover, we present the use of the RS-graph by means of a case study: the extraction of partially occluded road networks in rural areas from open-source (spectral) remote sensing images by tracking. In addition, multiprocessing and GPU-based parallelization are used to speed up the construction of the representation and the application.
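The topological half of such a superpixel representation can be sketched as a region adjacency graph built from a label image; the RS-graph itself also carries geometry (boundaries, embedding), which this minimal sketch omits.

```python
import numpy as np

def region_adjacency(labels):
    """Build the adjacency (topology) part of a superpixel graph from a
    label image: one node per superpixel, one edge per touching pair."""
    edges = set()
    # Compare each pixel with its right and lower neighbour.
    for shift in ((0, 1), (1, 0)):
        a = labels[: labels.shape[0] - shift[0], : labels.shape[1] - shift[1]]
        b = labels[shift[0]:, shift[1]:]
        for u, v in zip(a.ravel(), b.ravel()):
            if u != v:
                edges.add((min(u, v), max(u, v)))
    return sorted(edges)

# Tiny label image with three superpixels: 0 | 1 on top, 2 across the bottom.
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [2, 2, 2, 2]])
adj = region_adjacency(labels)   # → [(0, 1), (0, 2), (1, 2)]
```

Merging two superpixels then amounts to contracting an edge of this graph, and road tracking amounts to walking along chains of elongated, spectrally road-like nodes.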

  11. 3D modelling of cultural heritage objects using video technology

    Directory of Open Access Journals (Sweden)

    Paulina Deliś

    2014-06-01

    In the paper, the process of creating 3D models of St. Anne’s Church’s facades is described, and some examples of architectural structures inside St. Anne’s Church are presented. Video data were acquired with a fixed focal length lens of f = 16 mm, which allowed the interior orientation parameters to be determined in a calibration process and the influence of distortion to be removed. 3D models of heritage objects were generated using the Topcon Image Master software. The process of creating a 3D model from video data involved the following steps: selection of video frames for the orientation process, orientation of video frames using points with known coordinates from Terrestrial Laser Scanning, and wireframe and TIN model generation. In order to assess the accuracy of the developed 3D models, points with known coordinates from Terrestrial Laser Scanning were used. The accuracy analysis showed that the accuracy of 3D models generated from video images is ±0.05 m. Keywords: terrestrial photogrammetry, video, terrestrial laser scanning, 3D model, heritage objects

  12. Quantifying fish swimming behavior in response to acute exposure of aqueous copper using computer assisted video and digital image analysis

    Science.gov (United States)

    Calfee, Robin D.; Puglis, Holly J.; Little, Edward E.; Brumbaugh, William G.; Mebane, Christopher A.

    2016-01-01

    Behavioral responses of aquatic organisms to environmental contaminants can be precursors of other effects such as survival, growth, or reproduction. However, these responses may be subtle, and measurement can be challenging. Using juvenile white sturgeon (Acipenser transmontanus) with copper exposures, this paper illustrates techniques used for quantifying behavioral responses using computer assisted video and digital image analysis. In previous studies severe impairments in swimming behavior were observed among early life stage white sturgeon during acute and chronic exposures to copper. Sturgeon behavior was rapidly impaired, to the extent that survival in the field would be jeopardized, as fish would be swept downstream or readily captured by predators. The objectives of this investigation were to illustrate protocols to quantify swimming activity during a series of acute copper exposures to determine time to effect during early life stage development, and to understand the significance of these responses relative to survival of these vulnerable early life stage fish. With mortality being on a time continuum, determining when copper first affects swimming ability helps us to understand the implications for population level effects. The techniques used are readily adaptable to experimental designs with other organisms and stressors.
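The core measurement — swimming activity as frame-to-frame centroid displacement — can be sketched as follows; the thresholding "fish detector" below is a deliberately minimal stand-in for the study's video analysis.

```python
import numpy as np

def centroid(frame, threshold):
    """Centroid (row, col) of the pixels brighter than `threshold` —
    a minimal stand-in for locating the fish in a video frame."""
    rows, cols = np.nonzero(frame > threshold)
    return np.array([rows.mean(), cols.mean()])

def swimming_activity(frames, threshold=0.5):
    """Per-frame activity as the distance the centroid moved."""
    cents = np.array([centroid(f, threshold) for f in frames])
    return np.linalg.norm(np.diff(cents, axis=0), axis=1)

# Synthetic frames: a bright 2x2 "fish" moving one pixel right per frame.
frames = np.zeros((5, 16, 16))
for t in range(5):
    frames[t, 7:9, 2 + t : 4 + t] = 1.0

activity = swimming_activity(frames)   # → [1.0, 1.0, 1.0, 1.0]
```

Comparing such activity series between exposed and control fish over time is what yields the time-to-effect estimates discussed in the abstract.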

  13. Developing a Video Steganography Toolkit

    OpenAIRE

    Ridgway, James; Stannett, Mike

    2014-01-01

    Although techniques for separate image and audio steganography are widely known, relatively little has been described concerning the hiding of information within video streams ("video steganography"). In this paper we review the current state of the art in this field, and describe the key issues we have encountered in developing a practical video steganography system. A supporting video is also available online at http://www.youtube.com/watch?v=YhnlHmZolRM

  14. A New Learning Control System for Basketball Free Throws Based on Real Time Video Image Processing and Biofeedback

    Directory of Open Access Journals (Sweden)

    R. Sarang

    2018-02-01

    Full Text Available Shooting free throws plays an important role in basketball. The major problem in performing a correct free throw seems to be inappropriate training. Training is performed offline and it is often not that persistent. The aim of this paper is to consciously modify and control the free throw using biofeedback. Elbow and shoulder dynamics are calculated by an image processing technique equipped with a video image acquisition system. The proposed setup, named the learning control system, is able to quantify these parameters and provide feedback in real time as audio signals, enabling correct learning and conscious control of shooting. Experimental results showed improvements in the free throw shooting style, including shot pocket and locked position. The mean values of the elbow and shoulder angles were controlled at approximately 89° and 26° for the shot pocket, and these angles were tuned to approximately 180° and 47°, respectively, for the locked position (close to the desired pattern of the free throw based on valid FIBA references). Not only did the mean values improve, but the standard deviations of these angles also decreased meaningfully, which shows shooting style convergence and uniformity. Also, in training conditions, the average percentage of successful free throws increased from about 64% to 87% after using this setup, and in competition conditions the average percentage of successful free throws improved by about 20%, although using the learning control system may not be the only reason for these outcomes. The proposed system is easy to use, inexpensive, portable and applicable in real time.

  15. Evaluation of experimental UAV video change detection

    Science.gov (United States)

    Bartelsen, J.; Saur, G.; Teutsch, C.

    2016-10-01

    During the last ten years, the availability of images acquired from unmanned aerial vehicles (UAVs) has been continuously increasing due to the improvements and economic success of flight and sensor systems. From our point of view, reliable and automatic image-based change detection may contribute to overcoming several challenging problems in military reconnaissance, civil security, and disaster management. Changes within a scene can be caused by functional activities, i.e., footprints or skid marks, excavations, or humidity penetration; these might be recognizable in aerial images, but are almost overlooked when change detection is executed manually. With respect to the circumstances, these kinds of changes may be an indication of sabotage, terroristic activity, or threatening natural disasters. Although image-based change detection is possible from both ground and aerial perspectives, in this paper we primarily address the latter. We have applied an extended approach to change detection as described by Saur and Krüger,1 and Saur et al.2 and have built upon the ideas of Saur and Bartelsen.3 The commercial simulation environment Virtual Battle Space 3 (VBS3) is used to simulate aerial "before" and "after" image acquisition concerning flight path, weather conditions and objects within the scene and to obtain synthetic videos. Video frames, which depict the same part of the scene, including "before" and "after" changes and not necessarily from the same perspective, are registered pixel-wise against each other by a photogrammetric concept, which is based on a homography. The pixel-wise registration is used to apply an automatic difference analysis, which, to a limited extent, is able to suppress typical errors caused by imprecise frame registration, sensor noise, vegetation and especially parallax effects. The primary concern of this paper is to seriously evaluate the possibilities and limitations of our current approach for image-based change detection with respect
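The homography-based pixel-wise registration and difference analysis described above can be illustrated in miniature; the homography below is hand-made and the threshold arbitrary, not the authors' values:

```python
def warp_point(H, x, y):
    """Map pixel (x, y) through a 3x3 homography H (nested lists, row-major)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def change_mask(before, after, thresh=30):
    """Per-pixel absolute-difference analysis on two registered grayscale frames."""
    return [[abs(b - a) > thresh for b, a in zip(row_b, row_a)]
            for row_b, row_a in zip(before, after)]

# a pure translation by (5, 0) expressed as a homography
H = [[1, 0, 5], [0, 1, 0], [0, 0, 1]]
```

In practice the homography is estimated from feature matches between the "before" and "after" frames, and the difference analysis must additionally suppress registration and parallax errors.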

  16. Multiple Feature Fusion Based on Co-Training Approach and Time Regularization for Place Classification in Wearable Video

    Directory of Open Access Journals (Sweden)

    Vladislavs Dovgalecs

    2013-01-01

    Full Text Available The analysis of video acquired with a wearable camera is a challenge that the multimedia community is facing with the proliferation of such sensors in various applications. In this paper, we focus on the problem of automatic visual place recognition in a weakly constrained environment, targeting the indexing of video streams by topological place recognition. We propose to combine several machine learning approaches in a time-regularized framework for image-based place recognition indoors. The framework combines the power of multiple visual cues and integrates the temporal continuity information of video. We extend it with a computationally efficient semi-supervised method leveraging unlabeled video sequences for improved indexing performance. The proposed approach was applied to challenging video corpora. Experiments on a public and a real-world video sequence database show the gain brought by the different stages of the method.
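The temporal-continuity integration described above can be sketched as a simple moving-average regularization of per-frame classifier scores, an illustrative stand-in for the paper's time-regularized framework; class names and scores are invented:

```python
def time_regularize(scores, window=3):
    """Smooth per-frame class scores with a centered moving average, then take argmax."""
    classes = scores[0].keys()
    labels = []
    for i in range(len(scores)):
        lo, hi = max(0, i - window // 2), min(len(scores), i + window // 2 + 1)
        avg = {c: sum(s[c] for s in scores[lo:hi]) / (hi - lo) for c in classes}
        labels.append(max(avg, key=avg.get))
    return labels

# per-frame classifier scores; frame 1 alone would flicker to "office"
frames = [{"kitchen": 0.9, "office": 0.1},
          {"kitchen": 0.4, "office": 0.6},
          {"kitchen": 0.9, "office": 0.1}]
```

Temporal smoothing suppresses single-frame misclassifications that are implausible given the continuity of a wearable-camera video.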

  17. Automated in-core image generation from video to aid visual inspection of nuclear power plant cores

    Energy Technology Data Exchange (ETDEWEB)

    Murray, Paul, E-mail: paul.murray@strath.ac.uk [Department of Electronic and Electrical Engineering, University of Strathclyde, Technology and Innovation Centre, 99 George Street, Glasgow, G1 1RD (United Kingdom); West, Graeme; Marshall, Stephen; McArthur, Stephen [Dept. Electronic and Electrical Engineering, University of Strathclyde, Royal College Building, 204 George Street, Glasgow G1 1XW (United Kingdom)

    2016-04-15

    Highlights: • A method is presented which improves visual inspection of reactor cores. • Significant time savings are made to activities on the critical outage path. • New information is extracted from existing data sources without additional overhead. • Examples from industrial case studies across the UK fleet of AGR stations. - Abstract: Inspection and monitoring of key components of nuclear power plant reactors is an essential activity for understanding the current health of the power plant and ensuring that they continue to remain safe to operate. As the power plants age, and the components degrade from their initial start-of-life conditions, the requirement for more and more detailed inspection and monitoring information increases. Deployment of new monitoring and inspection equipment on existing operational plant is complex and expensive, as the effect of introducing new sensing and imaging equipment to the existing operational functions needs to be fully understood. Where existing sources of data can be leveraged, the need for new equipment development and installation can be offset by the development of advanced data processing techniques. This paper introduces a novel technique for creating full 360° panoramic images of the inside surface of fuel channels from in-core inspection footage. Through the development of this technique, a number of technical challenges associated with the constraints of using existing equipment have been addressed. These include: the inability to calibrate the camera specifically for image stitching; dealing with additional data not relevant to the panorama construction; dealing with noisy images; and generalising the approach to work with two different capture devices deployed at seven different Advanced Gas Cooled Reactor nuclear power plants. The resulting data processing system is currently under formal assessment with a view to replacing the existing manual assembly of in-core defect montages. Deployment of the

  18. Power Distortion Optimization for Uncoded Linear Transformed Transmission of Images and Videos.

    Science.gov (United States)

    Xiong, Ruiqin; Zhang, Jian; Wu, Feng; Xu, Jizheng; Gao, Wen

    2017-01-01

    Recently, there is a resurgence of interest in uncoded transmission for wireless visual communication. While conventional coded systems suffer from cliff effect as the channel condition varies dynamically, uncoded linear-transformed transmission (ULT) provides elegant quality degradation for wide channel SNR range. ULT skips non-linear operations, such as quantization and entropy coding. Instead, it utilizes linear decorrelation transform and linear scaling power allocation to achieve optimized transmission. This paper presents a theoretical analysis for power-distortion optimization of ULT. In addition to the observation in our previous work that a decorrelation transform can bring significant performance gain, this paper reveals that exploiting the energy diversity in the transformed signal is the key to achieving the full potential of the decorrelation transform. In particular, we investigated the efficiency of ULT with exact or inexact signal statistics, highlighting the impact of signal energy modeling accuracy. Based on that, we further proposed two practical energy modeling schemes for ULT of visual signals. Experimental results show that the proposed schemes improve the quality of reconstructed images by 3~5 dB, while reducing the signal modeling overhead from hundreds or thousands of metadata values to only a few. The perceptual quality of reconstruction is significantly improved.
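The "linear scaling power allocation" of ULT can be illustrated with the classic SoftCast-style rule, where the gain for a transform coefficient of variance λ_i is proportional to λ_i^(-1/4) under a total power budget. A sketch of that general idea, not the specific energy-modeling schemes proposed in the paper:

```python
import math

def ult_power_allocation(variances, total_power):
    """Linear scaling gains g_i proportional to lambda_i**(-1/4), normalized so
    the transmit power sum(g_i**2 * lambda_i) equals total_power."""
    norm = math.sqrt(total_power / sum(math.sqrt(v) for v in variances))
    return [norm * v ** -0.25 for v in variances]

# hypothetical variances of decorrelated (e.g. DCT) coefficient chunks
variances = [16.0, 4.0, 1.0]
gains = ult_power_allocation(variances, total_power=10.0)
```

Weaker coefficients receive larger gains, which is what minimizes total mean-squared distortion over an additive-noise channel for this class of schemes.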

  19. Power-Distortion Optimization for Uncoded Linear-Transformed Transmission of Images and Videos.

    Science.gov (United States)

    Xiong, Ruiqin; Zhang, Jian; Wu, Feng; Xu, Jizheng; Gao, Wen

    2016-10-26

    Recently, there is a resurgence of interest in uncoded transmission for wireless visual communication. While conventional coded systems suffer from cliff effect as the channel condition varies dynamically, uncoded linear-transformed transmission (ULT) provides elegant quality degradation for wide channel SNR range. ULT skips non-linear operations such as quantization and entropy coding. Instead, it utilizes linear decorrelation transform and linear scaling power allocation to achieve optimized transmission. This paper presents a theoretical analysis for power-distortion optimization of ULT. In addition to the observation in our previous work that a decorrelation transform can bring significant performance gain, this work reveals that exploiting the energy diversity in the transformed signal is the key to achieving the full potential of the decorrelation transform. In particular, we investigated the efficiency of ULT with exact or inexact signal statistics, highlighting the impact of signal energy modeling accuracy. Based on that, we further proposed two practical energy modeling schemes for ULT of visual signals. Experimental results show that the proposed schemes improve the quality of reconstructed images by 3~5 dB, while reducing the signal modeling overhead from hundreds or thousands of metadata values to only a few. The perceptual quality of reconstruction is significantly improved.

  20. Image/video understanding systems based on network-symbolic models and active vision

    Science.gov (United States)

    Kuvich, Gary

    2004-07-01

    Vision is a part of an information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. It is hard to split the entire system apart, and vision mechanisms cannot be completely understood separately from informational processes related to knowledge and intelligence. The brain reduces informational and computational complexities, using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. Vision is a component of situation awareness, motion and planning systems. Foveal vision provides semantic analysis, recognizing objects in the scene. Peripheral vision guides the fovea to salient objects and provides scene context. Biologically inspired Network-Symbolic representation, in which both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, converts visual information into relational Network-Symbolic structures, avoiding precise artificial computations of 3-D models. Network-Symbolic transformations derive more abstract structures that allow for invariant recognition of an object as an exemplar of a class and for a reliable identification even if the object is occluded. Systems with such smart vision will be able to navigate in real environments and understand real-world situations.

  1. Active vision and image/video understanding systems for UGV based on network-symbolic models

    Science.gov (United States)

    Kuvich, Gary

    2004-09-01

    Vision evolved as a sensory system for reaching, grasping and other motion activities. In advanced creatures, it has become a vital component of situation awareness, navigation and planning systems. Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is an interpretation of visual information in terms of such knowledge models. It is hard to split such a system apart. Biologically inspired Network-Symbolic representation, where both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible for natural processing of visual information. It converts visual information into relational Network-Symbolic models, avoiding artificial precise computations of 3-dimensional models. Logic of visual scenes can be captured in such models and used for disambiguation of visual information. Network-Symbolic transformations derive abstract structures, which allows for invariant recognition of an object as exemplar of a class. Active vision helps create unambiguous network-symbolic models. This approach is consistent with NIST RCS. The UGV, equipped with such smart vision, will be able to plan path and navigate in a real environment, perceive and understand complex real-world situations and act accordingly.

  2. Imaging of female infertility: a pictorial guide to the hysterosalpingography, ultrasonography, and magnetic resonance imaging findings of the congenital and acquired causes of female infertility.

    Science.gov (United States)

    Kaproth-Joslin, Katherine; Dogra, Vikram

    2013-11-01

    Hysterosalpingography is the gold standard in assessing the patency of the fallopian tubes, which is among the most common causes of female factor infertility, making this technique the most frequent first-choice imaging modality in the assessment of female infertility. Ultrasonography and magnetic resonance imaging are typically used for evaluation of indeterminate or complicated cases of female infertility and presurgical planning. Imaging also plays a role in the detection of the secondary causes of ovarian factor infertility, including endometriosis and polycystic ovarian syndrome. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast.

  4. Image-guided depth propagation for 2-D-to-3-D video conversion using superpixel matching and adaptive autoregressive model

    Science.gov (United States)

    Cai, Jiji; Jung, Cheolkon

    2017-09-01

    We propose image-guided depth propagation for two-dimensional (2-D)-to-three-dimensional (3-D) video conversion using superpixel matching and the adaptive autoregressive (AR) model. We adopt key frame-based depth propagation that propagates the depth map in the key frame to nonkey frames. Moreover, we use the adaptive AR model for depth refinement to penalize depth-color inconsistency. First, we perform superpixel matching to estimate motion vectors at the superpixel level instead of block matching based on the fixed block size. Then, we conduct depth compensation based on motion vectors to generate the depth map in the nonkey frame. However, the size of two superpixels is not exactly the same due to the segment-based matching, which causes matching errors in the compensated depth map. Thus, we introduce an adaptive image-guided AR model to minimize matching errors and produce the final depth map by minimizing AR prediction errors. Finally, we employ depth-image-based rendering to generate stereoscopic views from 2-D videos and their depth maps. Experimental results demonstrate that the proposed method successfully performs depth propagation and produces high-quality depth maps for 2-D-to-3-D video conversion.
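Key-frame depth propagation along superpixel motion vectors, as described above, can be sketched as a motion-compensated copy that leaves holes for the later AR-model refinement. The segment labels and motion vectors below are invented:

```python
def propagate_depth(key_depth, segments, motion):
    """Propagate key-frame depth to a nonkey frame along per-superpixel motion vectors.

    key_depth : 2D list of depth values in the key frame
    segments  : 2D list of superpixel labels, same shape as key_depth
    motion    : dict label -> (dx, dy) motion vector toward the nonkey frame
    """
    h, w = len(key_depth), len(key_depth[0])
    out = [[None] * w for _ in range(h)]  # holes stay None for later refinement
    for y in range(h):
        for x in range(w):
            dx, dy = motion[segments[y][x]]
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                out[ny][nx] = key_depth[y][x]
    return out

# two superpixels: label 0 moves one pixel right, label 1 is static
depth = propagate_depth([[5, 6], [7, 8]],
                        [[0, 0], [1, 1]],
                        {0: (1, 0), 1: (0, 0)})
```

The `None` holes and segment-boundary mismatches are exactly the errors the adaptive image-guided AR model is introduced to repair.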

  5. [Video documentation in forensic practice].

    Science.gov (United States)

    Schyma, C; Schyma, P

    1995-01-01

    The authors report in part 1 about their experiences with the Canon Ex1 Hi camcorder and the possibilities of documentation with modern video technique. Application examples in legal medicine and criminalistics are described: autopsy, scene investigation, reconstruction of crimes, etc. The online video documentation of microscopic sessions makes the discussion of findings easier. The use of video films for instruction has met with a good response. The use of video documentation can be extended by digitizing (part 2). Two frame grabbers are presented, with which we obtained good results in digitizing images captured from video. The best image quality is achieved by online use of an image analysis chain. Corel 5.0 and PicEd Cora 4.0 allow complete image processing and analysis. Digital image processing influences the objectivity of the documentation. The applicability of image libraries is discussed.

  6. 2011 Japan tsunami survivor video based hydrograph and flow velocity measurements using LiDAR

    Science.gov (United States)

    Fritz, H. M.; Phillips, D. A.; Okayasu, A.; Shimozono, T.; Liu, H.; Mohammed, F.; Skanavis, V.; Synolakis, C. E.; Takahashi, T.

    2012-04-01

    On March 11, 2011, a magnitude Mw 9.0 earthquake occurred off the coast of Japan's Tohoku region causing catastrophic damage and loss of life. Numerous tsunami reconnaissance trips were conducted in Japan (Tohoku Earthquake and Tsunami Joint Survey Group). This report focuses on the surveys at 9 tsunami eyewitness video recording locations in Yoriisohama, Kesennuma, Kamaishi and Miyako along Japan's Sanriku coast and the subsequent video image calibration, processing, tsunami hydrograph and flow velocity analysis. Selected tsunami video recording sites were visited, eyewitnesses interviewed and some ground control points recorded during the initial tsunami reconnaissance from April 9 to 25. A follow-up survey from June 9 to 15, 2011 focused on terrestrial laser scanning (TLS) at locations with previously identified high quality eyewitness videos. We acquired precise topographic data using TLS at nine video sites with multiple scans acquired from different instrument positions at each site. These ground-based LiDAR measurements produce a 3-dimensional "point cloud" dataset. Digital photography from a scanner-mounted camera yields photorealistic 3D images. Integrated GPS measurements allow accurate georeferencing of the TLS data in an absolute reference frame such as WGS84. We deployed a Riegl VZ-400 scanner (1550 nm wavelength laser, 42,000 measurements/second). In a first step, the video analysis requires the calibration of the sector of view present in the eyewitness video recording based on visually identifiable ground control points measured in the LiDAR point cloud data. In a second step the video image motion induced by the panning of the video camera was determined from subsequent raw color images by means of planar particle image velocimetry (PIV) applied to fixed objects in the field of view. The third step involves the transformation of the raw tsunami video images from image coordinates to world coordinates. The mapping from video frame to real world coordinates follows the direct linear transformation.

  7. 2011 Japan tsunami current and flow velocity measurements from survivor videos using LiDAR

    Science.gov (United States)

    Fritz, H. M.; Phillips, D. A.; Okayasu, A.; Shimozono, T.; Liu, H.; Mohammed, F.; Skanavis, V.; Synolakis, C.; Takahashi, T.

    2011-12-01

    On March 11, 2011, a magnitude Mw 9.0 earthquake occurred off the coast of Japan's Tohoku region causing catastrophic damage and loss of life. Numerous tsunami reconnaissance trips were conducted in Japan (Tohoku Earthquake and Tsunami Joint Survey Group). This report focuses on the surveys at 9 tsunami eyewitness video recording locations in Yoriisohama, Kesennuma, Kamaishi and Miyako along Japan's Sanriku coast and the subsequent video image calibration, processing and tsunami flow velocity analysis. Selected tsunami video recording sites were visited, eyewitnesses interviewed and some ground control points recorded during the initial tsunami reconnaissance from April 9 to 25. A follow-up survey from June 9 to 15, 2011 focused on terrestrial laser scanning (TLS) at locations with previously identified high quality eyewitness videos. We acquired precise topographic data using TLS at nine video sites with multiple scans acquired from different instrument positions at each site. These ground-based LiDAR measurements produce a 3-dimensional "point cloud" dataset. Digital photography from a scanner-mounted camera yields photorealistic 3D images. Integrated GPS measurements allow accurate georeferencing of the TLS data in an absolute reference frame such as WGS84. We deployed a Riegl VZ-400 scanner (1550 nm wavelength laser, 42,000 measurements/second). In a first step, the video analysis requires the calibration of the sector of view present in the eyewitness video recording based on visually identifiable ground control points measured in the LiDAR point cloud data. In a second step the video image motion induced by the panning of the video camera was determined from subsequent raw color images by means of planar particle image velocimetry (PIV) applied to fixed objects in the field of view. The third step involves the transformation of the raw tsunami video images from image coordinates to world coordinates. The mapping from video frame to real world coordinates follows the direct linear transformation.
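The direct linear transformation mapping video image coordinates to world coordinates is estimated from ground control points; with exactly four non-collinear correspondences the eight parameters of a planar homography can be solved directly. A self-contained sketch with invented correspondences:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a square system A x = b."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def dlt_homography(img_pts, world_pts):
    """Direct linear transformation from 4 image/world point pairs (h8 fixed to 1)."""
    A, b = [], []
    for (x, y), (X, Y) in zip(img_pts, world_pts):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y]); b.append(X)
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y]); b.append(Y)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(H, x, y):
    """Map an image coordinate through the homography to world coordinates."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# invented control points realizing the mapping X = 2x + 1, Y = 3y - 2
H = dlt_homography([(0, 0), (1, 0), (0, 1), (1, 1)],
                   [(1, -2), (3, -2), (1, 1), (3, 1)])
```

With more than four control points the same equations are stacked and solved in a least-squares sense, which is how video-derived water surface positions are georeferenced against the LiDAR point cloud.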

  8. Free-viewpoint video synthesis from mixed resolution multi-view images and low resolution depth maps

    Science.gov (United States)

    Emori, Takaaki; Tehrani, Mehrdad P.; Takahashi, Keita; Fujii, Toshiaki

    2015-03-01

    Streaming of multi-view and free-viewpoint video is potentially attractive, but due to bandwidth limitations, transmitting all multi-view video in high resolution may not be feasible. Our goal is to propose a new streaming data format that can be adapted to the limited bandwidth and is capable of free-viewpoint video streaming using multi-view video plus depth (MVD). Given a requested free viewpoint, we use the two closest views and corresponding depth maps to perform free-viewpoint video synthesis. We propose a new data format that consists of all views and corresponding depth maps in a lowered resolution, plus the two views closest to the requested viewpoint in high resolution. When the requested viewpoint changes, the two closest viewpoints change as well, but one or both of the views are transmitted only in the low resolution during the delay time. Therefore, resolution compensation is required. In this paper, we investigated several cases where one or both of the views are transmitted only in the low resolution, and propose an adequate view synthesis method for mixed-resolution multi-view video plus depth. Experimental results show that our framework achieves view synthesis quality close to high resolution multi-view video plus depth.

  9. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. This is by far the most informative analog and digital video reference available, includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (Video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats. *This essential reference is the "bible" for digital video engineers and programmers worldwide *Contains all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video *Completely revised with all the latest and most up-to-date industry standards.

  10. Clinical evaluation of spatial accuracy of a fusion imaging technique combining previously acquired computed tomography and real-time ultrasound for imaging of liver metastases.

    Science.gov (United States)

    Hakime, Antoine; Deschamps, Frederic; De Carvalho, Enio Garcia Marques; Teriitehau, Christophe; Auperin, Anne; De Baere, Thierry

    2011-04-01

    This study was designed to evaluate the spatial accuracy of matching volumetric computed tomography (CT) data of hepatic metastases with real-time ultrasound (US) using a fusion imaging system (VNav) according to different clinical settings. Twenty-four patients with one hepatic tumor identified on enhanced CT and US were prospectively enrolled. A set of three landmark markers was chosen on CT and US for image registration. US and CT images were then superimposed using the fusion imaging display mode. The difference in spatial location between the tumor visible on CT and on US in the overlay images was measured along the lateral, anterior-posterior, and vertical axes. The maximum difference (Dmax) was evaluated for different predictive factors: CT performed 1-30 days before registration versus immediately before; use of general anesthesia for CT and US versus no anesthesia; and anatomic landmarks versus landmarks that included at least one nonanatomic structure, such as a cyst or a calcification. Overall, Dmax was 11.53 ± 8.38 mm. Dmax was 6.55 ± 7.31 mm with CT performed immediately before VNav versus 17.4 ± 5.18 mm with CT performed 1-30 days before (p < 0.0001). Dmax was 7.05 ± 6.95 mm under general anesthesia and 16.81 ± 6.77 mm without anesthesia (p < 0.0015). Landmarks including at least one nonanatomic structure increased Dmax by 5.2 mm (p < 0.0001). The lowest Dmax (1.9 ± 1.4 mm) was obtained when CT and VNav were performed under general anesthesia, one immediately after the other. VNav is accurate when an adequate clinical setup is carefully selected. Only under these conditions can liver tumors not identified on US be accurately targeted for biopsy or radiofrequency ablation using fusion imaging.

  11. Coxofemoral joint kinematics using video fluoroscopic images of treadmill-walking cats: development of a technique to assess osteoarthritis-associated disability.

    Science.gov (United States)

    Guillot, Martin; Gravel, Pierre; Gauthier, Marie-Lou; Leblond, Hugues; Tremblay, Maurice; Rossignol, Serge; Martel-Pelletier, Johanne; Pelletier, Jean-Pierre; de Guise, Jacques A; Troncy, Eric

    2015-02-01

    The objectives of this pilot study were to develop a video fluoroscopy kinematics method for the assessment of the coxofemoral joint in cats with and without osteoarthritis (OA)-associated disability. Two non-OA cats and four cats affected by coxofemoral OA were evaluated by video fluoroscopy. Video fluoroscopic images of the coxofemoral joints were captured at 120 frames/s using a customized C-arm X-ray system while cats walked freely on a treadmill at 0.4 m/s. The angle patterns over time of the coxofemoral joints were extracted using a graphic user interface following four steps: (i) correction for image distortion; (ii) image denoising and contrast enhancement; (iii) frame-to-frame anatomical marker identification; and (iv) statistical gait analysis. Reliability analysis was performed. The cats with OA presented greater intra-subject stride and gait cycle variability. Three cats with OA presented a left-right asymmetry in the range of movement of the coxofemoral joint angle in the sagittal plane (two with no overlap of the 95% confidence interval, and one with only a slight overlap) consistent with their painful OA joint, and a longer gait cycle duration. Reliability analysis revealed an absolute variation in the coxofemoral joint angle of 2°-6°, indicating that the two-dimensional video fluoroscopy technique provided reliable data. Improvement of this method is recommended: variability would likely be reduced if a larger field of view could be recorded, allowing the identification and tracking of each femoral axis, rather than the trochanter landmarks. The range of movement of the coxofemoral joint has the potential to be an objective marker of OA-associated disability. © ISFM and AAFP 2014.
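Extracting a joint angle from the frame-to-frame anatomical markers of step (iii) reduces to the angle between two 2D vectors. A minimal sketch; the marker coordinates are hypothetical:

```python
import math

def joint_angle(a, b, c):
    """Angle (degrees) at marker b formed by the segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

# hypothetical 2D fluoroscopy marker positions: pelvis landmark, trochanter, femoral landmark
hip_angle = joint_angle((120, 40), (100, 80), (130, 150))
```

Tracking this angle across all frames of a gait cycle yields the angle-over-time patterns on which the study's range-of-movement and reliability statistics are computed.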

  12. Images.

    Science.gov (United States)

    Barr, Catherine, Ed.

    1997-01-01

    The theme of this month's issue is "Images"--from early paintings and statuary to computer-generated design. Resources on the theme include Web sites, CD-ROMs and software, videos, books, and others. A page of reproducible activities is also provided. Features include photojournalism, inspirational Web sites, art history, pop art, and myths. (AEF)

  13. Toward real-time remote processing of laparoscopic video.

    Science.gov (United States)

    Ronaghi, Zahra; Duffy, Edward B; Kwartowitz, David M

    2015-10-01

    Laparoscopic surgery is a minimally invasive surgical technique where surgeons insert a small video camera into the patient's body to visualize internal organs and use small tools to perform surgical procedures. However, the benefit of small incisions has a drawback of limited visualization of subsurface tissues, which can lead to navigational challenges in the delivering of therapy. Image-guided surgery uses the images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic camera system of interest is the vision system of the daVinci-Si robotic surgical system (Intuitive Surgical, Sunnyvale, California). The video streams generate approximately 360 MB of data per second, demonstrating a trend toward increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Processing this data on a bedside PC has become challenging and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical 30 frames per second (fps) rate, it is required that each 11.9 MB video frame be processed by a server and returned within 1/30th of a second. The ability to acquire, process, and visualize data in real time is essential for the performance of complex tasks as well as minimizing risk to the patient. As a result, utilizing high-speed networks to access computing clusters will lead to real-time medical image processing and improve surgical experiences by providing real-time augmented laparoscopic data. We have performed image processing algorithms on a high-definition head phantom video (1920 × 1080 pixels) and transferred the video using a message passing interface. The total transfer time is around 53 ms per frame, corresponding to about 19 fps. We will optimize and parallelize these algorithms to reduce the total time to 30 ms.
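The data-rate arithmetic above (11.9 MB frames, a 1/30 s budget, a measured 53 ms round trip) can be checked directly:

```python
MB = 1024 * 1024

def required_bandwidth(frame_bytes, fps):
    """Bytes per second needed to stream frames at the target rate."""
    return frame_bytes * fps

def achievable_fps(seconds_per_frame):
    """Frame rate sustainable when each frame takes seconds_per_frame end to end."""
    return 1.0 / seconds_per_frame

stream_rate = required_bandwidth(11.9 * MB, 30) / MB  # about 357 MB/s, matching the ~360 MB/s figure
measured_fps = achievable_fps(0.053)                  # about 19 fps at 53 ms per frame
```

Hitting the stated 30 ms target would bring the round trip just inside the 1/30 s real-time budget.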

  14. Video imaging of cytosolic Ca2+ in pancreatic beta-cells stimulated by glucose, carbachol, and ATP.

    Science.gov (United States)

    Theler, J M; Mollard, P; Guérineau, N; Vacher, P; Pralong, W F; Schlegel, W; Wollheim, C B

    1992-09-05

    In order to define the differences in the distribution of cytosolic free Ca2+ ([Ca2+]i) in pancreatic beta-cells stimulated with the fuel secretagogue glucose or the Ca(2+)-mobilizing agents carbachol and ATP, we applied digital video imaging to beta-cells loaded with fura-2. 83% of the cells responded to glucose with an increase in [Ca2+]i after a latency of 117 +/- 24 s (mean +/- S.E., 85 cells). Of these cells, 16% showed slow wave oscillations (frequency 0.35/min). In order to assess the relationship between membrane potential and the distribution of the [Ca2+]i rise, digital image analysis and perforated patch-clamp methods were applied simultaneously. The system used allowed sufficient temporal resolution to visualize a subplasmalemmal Ca2+ transient due to a single glucose-induced action potential. Glucose could also elicit a slow depolarization which did not cause Ca2+ influx until the appearance of the first of a train of action potentials. [Ca2+]i rose progressively during spike firing. Inhibition of Ca2+ influx by EGTA abolished the glucose-induced rise in [Ca2+]i. In contrast, the peak amplitude of the [Ca2+]i response to carbachol was not significantly different in normal or in Ca(2+)-deprived medium. Occasionally, the [Ca2+]i rise was polarized to one area of the cell, distinct from the subplasmalemmal rise caused by glucose. The amplitude of the response and the number of responding cells were significantly increased when carbachol was applied after the addition of high glucose (11.2 mM). ATP also raised [Ca2+]i and promoted both Ca2+ mobilization and Ca2+ influx. The intracellular distribution of [Ca2+]i was homogeneous during the onset of the response. A polarity in the [Ca2+]i distribution could be detected either in the descending phase of the peak or in subsequent peaks during [Ca2+]i oscillations caused by ATP. 
In the absence of extracellular Ca2+, the sequential application of ATP and carbachol revealed that carbachol was still
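Ratiometric fura-2 imaging of this kind converts the measured 340/380 nm fluorescence ratio into a Ca2+ concentration via the standard Grynkiewicz calibration equation. A sketch with illustrative calibration constants (this abstract does not give the study's own calibration values):

```python
def fura2_ca_nM(R, Rmin, Rmax, beta, Kd_nM=224.0):
    """Grynkiewicz ratio equation: [Ca2+]i = Kd * beta * (R - Rmin) / (Rmax - R).

    R     : measured 340/380 nm fluorescence ratio
    Rmin  : ratio at zero Ca2+ (calibration)
    Rmax  : ratio at saturating Ca2+ (calibration)
    beta  : Sf2/Sb2, 380 nm fluorescence of free vs. Ca2+-bound dye
    Kd_nM : fura-2 dissociation constant (commonly taken as ~224 nM)
    """
    return Kd_nM * beta * (R - Rmin) / (Rmax - R)

# Illustrative calibration values, not from this study:
print(fura2_ca_nM(R=2.0, Rmin=0.5, Rmax=5.0, beta=3.0))  # → 336.0 nM
```

Applied pixel by pixel to the ratio images, this is what turns the video data into the spatial [Ca2+]i maps described above.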

  15. Development of a video image-based QA system for the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system

    Energy Technology Data Exchange (ETDEWEB)

    Ebe, Kazuyu, E-mail: nrr24490@nifty.com; Tokuyama, Katsuichi; Baba, Ryuta; Ogihara, Yoshisada; Ichikawa, Kosuke; Toyama, Joji [Joetsu General Hospital, 616 Daido-Fukuda, Joetsu-shi, Niigata 943-8507 (Japan); Sugimoto, Satoru [Juntendo University Graduate School of Medicine, Bunkyo-ku, Tokyo 113-8421 (Japan); Utsunomiya, Satoru; Kagamu, Hiroshi; Aoyama, Hidefumi [Graduate School of Medical and Dental Sciences, Niigata University, Niigata 951-8510 (Japan); Court, Laurence [The University of Texas MD Anderson Cancer Center, Houston, Texas 77030-4009 (United States)

    2015-08-15

    Purpose: To develop and evaluate a new video image-based QA system, including in-house software, that can display the tracking state visually and quantify the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system. Methods: Sixteen trajectories in six patients with pulmonary cancer were obtained with the ExacTrac in the Vero4DRT system. Motion data in the cranio–caudal direction (Y direction) were used as the input for a programmable motion table (Quasar). A target phantom was placed on the motion table, which was placed on the 2D ionization chamber array (MatriXX). Then, the 4D modeling procedure was performed on the target phantom during a reproduction of the patient’s tumor motion. A substitute target with the patient’s tumor motion was irradiated with 6-MV x-rays under the surrogate infrared system. The 2D dose images obtained from the MatriXX (33 frames/s; 40 s) were exported to in-house video-image analysis software. The absolute differences in the Y direction between the center of the exposed target and the center of the exposed field were calculated as positional errors. The authors’ QA results were compared to 4D modeling function errors and gimbal motion errors obtained from log analyses in the ExacTrac to verify the accuracy of their QA system. The patients’ tumor motions were evaluated as waveforms, and the peak-to-peak distances were also measured to verify their reproducibility. Results: Thirteen of sixteen trajectories (81.3%) were successfully reproduced with the Quasar; their peak-to-peak distances ranged from 2.7 to 29.0 mm. The remaining three trajectories (18.7%) could not be reproduced due to the limited motion range of the Quasar, so the 13 reproducible trajectories were summarized. The mean number of video images used for analysis was 1156. The positional errors (absolute mean difference + 2 standard deviations) ranged from 0.54 to 1.55 mm. The error values differed by less than 1 mm from the 4D modeling function errors
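The error statistic used here (absolute mean difference + 2 standard deviations) is straightforward to reproduce. A hedged sketch, with synthetic positions standing in for the exposed-target and exposed-field centres extracted from the MatriXX dose images (all names and numbers below are illustrative, not the study's data):

```python
import numpy as np

def positional_error_mm(target_y, field_y):
    """Per-frame absolute difference between the centre of the exposed target
    and the centre of the exposed field (both in mm, one value per video frame),
    summarised as in the paper: absolute mean difference + 2 standard deviations."""
    diff = np.abs(np.asarray(target_y, float) - np.asarray(field_y, float))
    return diff.mean() + 2 * diff.std()

# Synthetic example: tracking lags the target by ~0.4 mm with small jitter,
# over ~1156 frames (the mean number of analysed images reported in the study).
rng = np.random.default_rng(0)
target = np.sin(np.linspace(0, 20, 1156)) * 10
field = target - 0.4 + rng.normal(0, 0.1, target.size)
print(f"{positional_error_mm(target, field):.2f} mm")
```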

  16. Feasibility of Radon projection acquisition for compressive imaging in MMW region based new video rate 16×16 GDD FPA camera

    Science.gov (United States)

    Levanon, Assaf; Konstantinovsky, Michael; Kopeika, Natan S.; Yitzhaky, Yitzhak; Stern, A.; Turak, Svetlana; Abramovich, Amir

    2015-05-01

    In this article we present preliminary results combining two fields of recent interest: (1) compressed imaging (CI), a joint sensing and compression process that exploits the large redundancy in typical images in order to capture fewer samples than usual, and (2) millimeter wave (MMW) imaging. MMW-based imaging systems are required for a large variety of applications in growing fields such as medical treatment, homeland security, concealed weapon detection, and space technology. Moreover, the possibility of reliable imaging under low-visibility conditions such as heavy cloud, smoke, fog, and sandstorms makes the MMW region of high interest to military users. The lack of inexpensive room-temperature imaging sensors makes it difficult to provide a suitable MMW system for many of these applications. A system based on Glow Discharge Detector (GDD) Focal Plane Arrays (FPAs) can be very efficient for real-time imaging. The GDD is located in free space and can detect MMW radiation almost isotropically. In this article, we present a new approach to MMW image reconstruction based on rotational scanning of the target. The collection process, based on Radon projections, allows the principles of compressive sensing to be applied in the MMW region. Feasibility of the concept was demonstrated with Radon line imaging results, and MMW imaging results with our recent sensor are also presented for the first time. The multiplexing frame rate of the 16×16 GDD FPA permits real-time video imaging at 30 frames per second and comprehensive 3D MMW imaging. It uses commercial Ne indicator GDD lamps of 3 mm diameter as pixel detectors. The combination of these two fields should significantly advance MMW imaging research and open various new possibilities for compressive sensing techniques.
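The rotational acquisition described above amounts to collecting Radon projections: each projection is the set of line integrals of the scene along one viewing direction. A minimal sketch of this collection model (pure NumPy with nearest-neighbour rotation; the real system forms its projections optically with the GDD FPA, so this is only an illustration of the geometry):

```python
import numpy as np

def rotate_nn(img, angle_deg):
    """Nearest-neighbour rotation of a 2-D image about its centre."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    theta = np.deg2rad(angle_deg)
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse mapping: for each output pixel, find its source coordinate.
    xsrc = np.cos(theta) * (xs - cx) + np.sin(theta) * (ys - cy) + cx
    ysrc = -np.sin(theta) * (xs - cx) + np.cos(theta) * (ys - cy) + cy
    xi, yi = np.rint(xsrc).astype(int), np.rint(ysrc).astype(int)
    valid = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
    out = np.zeros_like(img)
    out[valid] = img[yi[valid], xi[valid]]
    return out

def radon(img, angles_deg):
    """One Radon projection per angle: column sums of the rotated image."""
    return np.stack([rotate_nn(img, a).sum(axis=0) for a in angles_deg])
```

With a sparse set of angles, these projections become the compressed measurements from which CI reconstruction algorithms recover the scene.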

  17. TRAFFIC SIGN RECOGNITION WITH VIDEO PROCESSING TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Musa AYDIN

    2013-01-01

    Full Text Available In this study, traffic signs are recognized and identified from a video image taken by a video camera. To accomplish this aim, a traffic sign recognition program was developed in the MATLAB/Simulink environment. The target traffic signs are recognized in the video image with the developed program.

  18. Video quality assessment for web content mirroring

    Science.gov (United States)

    He, Ye; Fei, Kevin; Fernandez, Gustavo A.; Delp, Edward J.

    2014-03-01

    Due to increasing user expectations for the viewing experience, moving high-quality web video streaming content from the small screens of mobile devices to the larger TV screen has become popular. It is crucial to develop video quality metrics to measure the quality change under various devices or network conditions. In this paper, we propose an automated scoring system to quantify user satisfaction. We compare the quality of local videos with that of the videos transmitted to a TV. Four video quality metrics, namely Image Quality, Rendering Quality, Freeze Time Ratio, and Rate of Freeze Events, are used to measure video quality change during web content mirroring. To measure image quality and rendering quality, we compare the matched frames between the source video and the destination video using barcode tools. Freeze time ratio and rate of freeze events are measured after extracting video timestamps. Several user studies were conducted to evaluate the impact of each objective video quality metric on the subjective user watching experience.
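Of the four metrics, the two freeze-related ones can be derived from frame timestamps alone. A hedged sketch under the assumption that a freeze is any inter-frame gap well above the nominal frame interval (the abstract does not spell out the paper's exact detection rule, so the threshold below is an assumption):

```python
def freeze_metrics(timestamps_s, nominal_fps=30.0, gap_factor=2.0):
    """Freeze Time Ratio and Rate of Freeze Events from frame timestamps.

    A gap between consecutive frames longer than gap_factor nominal frame
    intervals counts as a freeze. Returns (frozen time / total duration,
    freeze events per second).
    """
    nominal = 1.0 / nominal_fps
    gaps = [b - a for a, b in zip(timestamps_s, timestamps_s[1:])]
    freezes = [g for g in gaps if g > gap_factor * nominal]
    duration = timestamps_s[-1] - timestamps_s[0]
    return sum(freezes) / duration, len(freezes) / duration

# Example: a 30 fps stream with a single ~0.5 s stall in the middle.
ts = [i / 30 for i in range(31)] + [1.5 + i / 30 for i in range(1, 31)]
ratio, rate = freeze_metrics(ts)
```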

  19. Acquired cystic kidney disease

    Energy Technology Data Exchange (ETDEWEB)

    Choyke, P.L. [National Institutes of Health, Bethesda, MD (United States). Dept. of Diagnostic Radiology

    2000-11-01

    Acquired cystic kidney disease (ACKD), also known as acquired renal cystic disease (ARCD), occurs in patients who are on dialysis for end-stage renal disease. It is generally accepted that ACKD develops as a consequence of sustained uremia and can first manifest even before dialysis is initiated while the patient is still in chronic renal failure. The role of immune suppression, particularly in transplant recipients, in the development of ACKD is still under investigation. The prevalence of ACKD is directly related to the duration of dialysis, and the risk of cancer is directly related to the presence of cysts. Herein we review the current understanding of the pathophysiology and imaging implications of ACKD. (orig.)

  20. Monitoring of Wheat Growth Status and Mapping of Wheat Yield’s within-Field Spatial Variations Using Color Images Acquired from UAV-camera System

    Directory of Open Access Journals (Sweden)

    Mengmeng Du

    2017-03-01

    Full Text Available Applications of remote sensing using unmanned aerial vehicles (UAVs) in agriculture have proved to be an effective and efficient way of obtaining field information. In this study, we validated the feasibility of using multi-temporal color images acquired from a low-altitude UAV-camera system to monitor real-time wheat growth status and to map within-field spatial variations of wheat yield for smallholder wheat growers, which could serve as references for site-specific operations. First, eight orthomosaic images covering a small winter wheat field in Hokkaido, Japan were generated to monitor wheat growth status from the heading stage to the ripening stage. The multi-temporal orthomosaic images gave a straightforward sense of canopy color changes and of spatial variations in tiller density. In addition, the last two orthomosaic images, taken about two weeks prior to harvesting, also revealed the occurrence of lodging on visual inspection; this could be used to generate navigation maps guiding drivers or autonomous harvesting vehicles to adjust operating speed to the specific lodging situation for lower harvesting loss. Subsequently, the orthomosaic images were geo-referenced so that a stepwise regression analysis between nine wheat yield samples and five color vegetation indices (CVIs) could be conducted. This showed that wheat yield correlated with four accumulative CVIs: the visible-band difference vegetation index (VDVI), normalized green-blue difference index (NGBDI), green-red ratio index (GRRI), and excess green vegetation index (ExG), with a coefficient of determination of 0.94 and an RMSE of 0.02. The average sampled wheat yield was 8.6 t/ha. The regression model was also validated using the leave-one-out cross validation (LOOCV) method, for which the root-mean-square error of prediction (RMSEP) was 0.06. Finally, based on the stepwise regression model, a map of estimated wheat yield was generated, so that within
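The four retained CVIs are simple per-pixel functions of the R, G, B channels. A sketch using common definitions of these indices (the paper may normalize or scale the channels differently, so treat the exact formulas as assumptions):

```python
import numpy as np

def cvis(rgb):
    """Four color vegetation indices, computed per pixel.

    rgb: array of shape (..., 3) with channels in R, G, B order.
    """
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    eps = 1e-9  # guard against division by zero on dark pixels
    vdvi = (2 * g - r - b) / np.maximum(2 * g + r + b, eps)  # visible-band difference VI
    ngbdi = (g - b) / np.maximum(g + b, eps)                 # normalized green-blue difference
    grri = g / np.maximum(r, eps)                            # green-red ratio index
    s = np.maximum(r + g + b, eps)                           # ExG on chromatic coordinates
    exg = 2 * g / s - r / s - b / s
    return vdvi, ngbdi, grri, exg
```

Averaging an index over the pixels of each sampling plot gives the per-plot accumulative predictors that enter a stepwise yield regression.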

  1. Probabilistic recognition of human faces from video

    DEFF Research Database (Denmark)

    Zhou, Saohua; Krüger, Volker; Chellappa, Rama

    2003-01-01

    Recognition of human faces using a gallery of still or video images and a probe set of videos is systematically investigated using a probabilistic framework. In still-to-video recognition, where the gallery consists of still images, a time series state space model is proposed to fuse temporal...... demonstrate that, due to the propagation of the identity variable over time, a degeneracy in posterior probability of the identity variable is achieved to give improved recognition. The gallery is generalized to videos in order to realize video-to-video recognition. An exemplar-based learning strategy...... of the identity variable produces the recognition result. The model formulation is very general and it allows a variety of image representations and transformations. Experimental results using videos collected by NIST/USF and CMU illustrate the effectiveness of this approach for both still-to-video and video...

  2. Video tracking in the extreme: video analysis for nocturnal underwater animal movement.

    Science.gov (United States)

    Patullo, B W; Jolley-Rogers, G; Macmillan, D L

    2007-11-01

    Computer analysis of video footage is one option for recording locomotor behavior for a range of neurophysiological and behavioral studies. This technique is reasonably well established and accepted, but its use for some behavioral analyses remains a challenge. For example, filming through water can lead to reflection, and filming nocturnal activity can reduce the resolution and clarity of filmed images. The aim of this study was to develop a noninvasive method for recording nocturnal activity in aquatic decapods and to test the accuracy of analysis by video tracking software. We selected crayfish, Cherax destructor, because they are often active at night, they live underwater, and data on their locomotion are important for answering biological and physiological questions such as how they explore and navigate. We constructed recording arenas and filmed animals in infrared light. We then compared human observer data and software-acquired values. In this article, we outline important apparatus and software issues for obtaining reliable computer tracking.
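Software tracking of this kind is commonly built on background subtraction. A minimal sketch of one plausible approach (not the specific tracker evaluated in the study): each infrared frame is compared against a static background image, and the animal's position is taken as the centroid of the changed pixels.

```python
import numpy as np

def track_centroids(frames, background, thresh=25):
    """Locate the animal in each IR frame as the centroid of pixels that
    differ from a static background by more than `thresh` grey levels.

    frames     : iterable of 2-D uint8 arrays
    background : 2-D uint8 array of the empty arena
    Returns one (x, y) tuple per frame, or None if nothing moved.
    """
    positions = []
    for frame in frames:
        mask = np.abs(frame.astype(int) - background.astype(int)) > thresh
        ys, xs = np.nonzero(mask)
        positions.append((xs.mean(), ys.mean()) if xs.size else None)
    return positions
```

Comparing such software-acquired coordinates against human observer annotations, frame by frame, is exactly the kind of accuracy check the study performs.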

  3. System of video observation for electron beam welding process

    Science.gov (United States)

    Laptenok, V. D.; Seregin, Y. N.; Bocharov, A. N.; Murygin, A. V.; Tynchenko, V. S.

    2016-04-01

    Equipment for a video observation system for the electron beam welding process was developed. The design of the video observation system reduces negative effects on the video camera during electron beam welding and yields high-quality images of the process.

  4. A reconsideration of the noise equivalent power and the data analysis procedure for the infrared imaging video bolometers

    Energy Technology Data Exchange (ETDEWEB)

    Pandya, Shwetang N., E-mail: pandya.shwetang@LHD.nifs.ac.jp; Sano, Ryuichi [High Temperature Plasma Physics Research Division, The Graduate University of Advanced Studies, 322-6 Oroshi-cho, Toki 509-5292 (Japan); Peterson, Byron J.; Kobayashi, Masahiro; Mukai, Kiyofumi [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki 509-5292 (Japan); Pandya, Santosh P. [Institute for Plasma Research, Nr. Indira Bridge, Village Bhat, Gandhinagar 382-428 (India)

    2014-12-15

    The infrared imaging video bolometer (IRVB) used for measurement of the two-dimensional (2D) radiation profiles from the Large Helical Device has been significantly upgraded recently to improve its signal to noise ratio, sensitivity, and calibration, which ultimately provides quantitative measurements of the radiation from the plasma. The reliability of the quantified data needs to be established by various checks. The noise estimates also need to be revised and more realistic values need to be established. It is shown that the 2D heat diffusion equation can be used for estimating the power falling on the IRVB foil, even with a significant amount of spatial variation in the thermal diffusivity across the area of the platinum foil found experimentally during foil calibration. The equation for the noise equivalent power density (NEPD) is re-derived to include the errors in the measurement of the thermophysical and the optical properties of the IRVB foil. The theoretical value estimated using this newly derived equation matches closely, within 5.5%, with the mean experimental value. The change in the contribution of each error term of the NEPD equation with rising foil temperature is also studied and the blackbody term is found to dominate the other terms at elevated operating temperatures. The IRVB foil is also sensitive to the charge exchange (CX) neutrals escaping from the plasma. The CX neutral contribution is estimated to be marginally higher than the noise equivalent power (NEP) of the IRVB. It is also established that the radiation measured by the IRVB originates from the impurity line radiation from the plasma and not from the heated divertor tiles. The change in the power density due to noise reduction measures such as data smoothing and averaging is found to be comparable to the IRVB NEPD. The precautions that need to be considered during background subtraction are also discussed with experimental illustrations. 
Finally, the analysis algorithm with all the
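The power balance behind this procedure can be written explicitly. A commonly quoted form of the IRVB foil equation (notation ours; the paper's exact formulation, including its error terms, may differ) recovers the incident radiated power density from the measured 2-D foil temperature field:

```latex
% Foil of thickness t_f, conductivity k, diffusivity \alpha = k/(\rho c),
% emissivity \varepsilon; \sigma is the Stefan-Boltzmann constant and
% T_0 the ambient (frame) temperature.
P_{\mathrm{rad}}(x,y,t) \;=\;
  k\, t_f \left( \frac{1}{\alpha}\,\frac{\partial T}{\partial t}
                 \;-\; \nabla^2 T \right)
  \;+\; 2\,\varepsilon\,\sigma\,\bigl(T^4 - T_0^4\bigr)
```

The last, blackbody term is the one the abstract finds dominant at elevated foil temperatures; the NEPD then follows by propagating the measurement errors of the thermophysical (k, t_f, α) and optical (ε) foil properties through this expression.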

  5. Digital video.

    Science.gov (United States)

    Johnson, Don; Johnson, Mike

    2004-04-01

    The process of digitally capturing, editing, and archiving video has become an important aspect of documenting arthroscopic surgery. Recording the arthroscopic findings before and after surgery is an essential part of the patient's medical record. The hardware and software have become more affordable, but the learning curve to master the software is steep. Digital video is captured at the time of arthroscopy to a hard disk, and written to a CD at the end of the operative procedure. The process of obtaining video of open procedures is more complex. Outside video of the procedure is recorded on digital tape with a digital video camera. The camera must be plugged into a computer to capture the video on the hard disk. Adobe Premiere software is used to edit the video and render the finished video to the hard drive. This finished video is burned onto a CD. We outline the choice of computer hardware and software for the manipulation of digital video. The techniques for backup and archiving of the completed projects and files also are outlined. The uses of digital video for education and the formats that can be used in PowerPoint presentations are discussed.

  6. Representing videos in tangible products

    Science.gov (United States)

    Fageth, Reiner; Weiting, Ralf

    2014-03-01

    Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones, and, increasingly, so-called action cameras mounted on sports devices. The embedding of videos in printed products by generating QR codes and representative pictures from the video stream via software was the content of last year's paper. This year we present first data on what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted from the video in order to represent it, the positions in the book, and different design strategies compared to regular books.

  7. Chip-on-the-tip compact flexible endoscopic epifluorescence video-microscope for in-vivo imaging in medicine and biomedical research

    OpenAIRE

    Matz, Gregor; Messerschmidt, Bernhard; Göbel, Werner; Filser, Severin; Betz, Christian S.; Kirsch, Matthias; Uckermann, Ortrud; Kunze, Marcel; Flämig, Sven; Ehrhardt, André; Irion, Klaus-Martin; Haack, Mareike; Dorostkar, Mario M.; Herms, Jochen; Gross, Herbert

    2017-01-01

    We demonstrate a lightweight (60 mg) video-endomicroscope with a cylindrical rigid tip of only 1.6 mm diameter and 6.7 mm length. A novel implementation of the illumination unit in the endomicroscope is presented. It allows illumination of the biological sample with fiber-coupled LED light at 455 nm and imaging of the red-shifted fluorescence light above 500 nm in the epi-direction. A large numerical aperture of 0.7 leads to sub-cellular resolution and yields high-con...

  8. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  9. Games people play: How video games improve probabilistic learning.

    Science.gov (United States)

    Schenk, Sabrina; Lech, Robert K; Suchan, Boris

    2017-09-29

    Recent research suggests that video game playing is associated with many cognitive benefits. However, little is known about the neural mechanisms mediating such effects, especially with regard to probabilistic categorization learning, which is a widely unexplored area in gaming research. Therefore, the present study aimed to investigate the neural correlates of probabilistic classification learning in video gamers in comparison to non-gamers. Subjects were scanned in a 3T magnetic resonance imaging (MRI) scanner while performing a modified version of the weather prediction task. Behavioral data yielded evidence for better categorization performance of video gamers, particularly under conditions characterized by stronger uncertainty. Furthermore, a post-experimental questionnaire showed that video gamers had acquired higher declarative knowledge about the card combinations and the related weather outcomes. Functional imaging data revealed for video gamers stronger activation clusters in the hippocampus, the precuneus, the cingulate gyrus and the middle temporal gyrus as well as in occipital visual areas and in areas related to attentional processes. All these areas are connected with each other and represent critical nodes for semantic memory, visual imagery and cognitive control. Apart from this, and in line with previous studies, both groups showed activation in brain areas that are related to attention and executive functions as well as in the basal ganglia and in memory-associated regions of the medial temporal lobe. These results suggest that playing video games might enhance the usage of declarative knowledge as well as hippocampal involvement, and enhance overall learning performance during probabilistic learning. In contrast to non-gamers, video gamers showed better categorization performance, independently of the uncertainty of the condition. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. A comparison of the quality of image acquisition between the incident dark field and sidestream dark field video-microscopes

    NARCIS (Netherlands)

    E. Gilbert-Kawai; J. Coppel (Jonny); V. Bountziouka (Vassiliki); C. Ince (Can); D. Martin (Daniel)

    2016-01-01

    Background: The ‘Cytocam’ is a third generation video-microscope, which enables real time visualisation of the in vivo microcirculation. Based upon the principle of incident dark field (IDF) illumination, this hand held computer-controlled device was designed to address the

  11. A comparison of the quality of image acquisition between the incident dark field and sidestream dark field video-microscopes

    NARCIS (Netherlands)

    Gilbert-Kawai, Edward; Coppel, Jonny; Bountziouka, Vassiliki; Ince, Can; Martin, Daniel; Ahuja, V.; Aref-Adib, G.; Burnham, R.; Chisholm, A.; Clarke, K.; Coates, D.; Coates, M.; Cook, D.; Cox, M.; Dhillon, S.; Dougall, C.; Doyle, P.; Duncan, P.; Edsell, M.; Edwards, L.; Evans, L.; Gardiner, P.; Grocott, M.; Gunning, P.; Hart, N.; Harrington, J.; Harvey, J.; Holloway, C.; Howard, D.; Hurlbut, D.; Imray, C.; Jonas, M.; van der Kaaij, J.; Khosravi, M.; Kolfschoten, N.; Levett, D.; Luery, H.; Luks, A.; Martin, D.; McMorrow, R.; Meale, P.; Mitchell, K.; Montgomery, H.; Morgan, G.; Morgan, J.; Murray, A.; Mythen, M.; Newman, S.; O'Dwyer, M.; Pate, J.; Plant, T.; Pun, M.; Richards, P.; Richardson, A.; Rodway, G.; Simpson, J.; Stroud, C.; Stroud, M.; Stygal, J.; Symons, B.; Szawarski, P.; van Tulleken, A.; van Tulleken, C.; Vercueil, A.; Wandrag, L.; Wilson, M.; Windsor, J.; Basnyat, B.; Clarke, C.; Hornbein, T.; Milledge, J.; West, J.; Abraham, S.; Adams, T.; Anseeuw, W.; Astin, R.; Burdall, O.; Carroll, J.; Cobb, A.; Coppel, J.; Couppis, O.; Court, J.; Cumptsey, A.; Davies, T.; Diamond, N.; Geliot, T.; Gilbert-Kawai, E.; Gilbert-Kawai, G.; Gnaiger, E.; Haldane, C.; Hennis, P.; Horscroft, J.; Jack, S.; Jarvis, B.; Jenner, W.; Jones, G.; Kenth, J.; Kotwica, A.; Kumar, R. B. C.; Lacey, J.; Laner, V.; Mahomed, Z.; Moonie, J.; Mythen, P.; O'Brien, K.; Ruggles-Brice, I.; Salmon, K.; Sheperdigian, A.; Smedley, T.; Tomlinson, C.; Ward, S.; Wight, A.; Wilkinson, C.; Wythe, S.; Feelisch, M.; Hanson, M.; Moon, R.; Peters, M.

    2016-01-01

    Background: The 'Cytocam' is a third generation video-microscope, which enables real time visualisation of the in vivo microcirculation. Based upon the principle of incident dark field (IDF) illumination, this hand held computer-controlled device was designed to address the technical limitations of

  12. Pilot study on real-time motion detection in UAS video data by human observer and image exploitation algorithm

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Voit, Michael; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2017-05-01

    Real-time motion video analysis is a challenging and exhausting task for the human observer, particularly in safety and security critical domains. Hence, customized video analysis systems providing functions for the analysis of subtasks like motion detection or target tracking are welcome. While such automated algorithms relieve the human operators from performing basic subtasks, they impose additional interaction duties on them. Prior work shows that, e.g., for interaction with target tracking algorithms, a gaze-enhanced user interface is beneficial. In this contribution, we present an investigation on interaction with an independent motion detection (IDM) algorithm. Besides identifying an appropriate interaction technique for the user interface - again, we compare gaze-based and traditional mouse-based interaction - we focus on the benefit an IDM algorithm might provide for a UAS video analyst. In a pilot study, we exposed ten subjects to the task of moving target detection in UAS video data twice, once performing with automatic support and once without it. We compare the two conditions considering performance in terms of effectiveness (correct target selections). Additionally, we report perceived workload (measured using the NASA-TLX questionnaire) and user satisfaction (measured using the ISO 9241-411 questionnaire). The results show that a combination of gaze input and automated IDM algorithm provides valuable support for the human observer, increasing the number of correct target selections up to 62% and reducing workload at the same time.

  13. Interactive case vignettes utilizing simulated pathologist-clinician encounters with whole slide imaging and video tutorials of whole slide scans improves student understanding of disease processes.

    Science.gov (United States)

    Horn, Adam J; Czarnecki, Donna; Lele, Subodh M

    2012-01-01

    One of the drawbacks of studying pathology in the second year of medical school in a classroom setting is the relatively limited exposure to patient encounters/clinical rotations, making it difficult to understand and fully appreciate the significance of the course material, specifically the molecular and tissue aspects of disease. In this study, we determined if case vignettes incorporating pathologist-clinician encounters with whole slide imaging (WSI) and narrated/annotated videos of whole slide (WS) scans in addition to clinical data improved student understanding of pathologic disease processes. Case vignettes were created for several genitourinary disease processes that utilized clinical data including narratives of pathologist-clinician encounters, WSI, and annotated video tutorials of WS scans (designed to simulate "double-heading"). The students were encouraged to view the virtual slide first, with the video tutorials being provided to offer additional assistance. The case vignettes were created to be interactive with a detailed explanation of each correct and incorrect question choice. The cases were made available to all second year medical students via a website and could be viewed only after completing a 10 question pre-test. A post-test could be completed after viewing all cases followed by a brief satisfaction survey. Ninety-six students completed the pre-test with an average score of 7.7/10. Fifty-seven students completed the post-test with an average score of 9.4/10. Thirty-six students completed the satisfaction survey. 94% agreed or strongly agreed that this was a useful exercise and 91% felt that it helped them better understand the topics. The development of interactive case vignettes incorporating simulated pathologist-clinician encounters with WSI and video tutorials of WS scans helps to improve student enthusiasm to learn and grasp pathologic aspects of disease processes that lead to clinical therapeutic decision making.

  14. Interactive case vignettes utilizing simulated pathologist-clinician encounters with whole slide imaging and video tutorials of whole slide scans improves student understanding of disease processes

    Directory of Open Access Journals (Sweden)

    Adam J Horn

    2012-01-01

    Full Text Available Background: One of the drawbacks of studying pathology in the second year of medical school in a classroom setting is the relatively limited exposure to patient encounters/clinical rotations, making it difficult to understand and fully appreciate the significance of the course material, specifically the molecular and tissue aspects of disease. In this study, we determined if case vignettes incorporating pathologist-clinician encounters with whole slide imaging (WSI) and narrated/annotated videos of whole slide (WS) scans in addition to clinical data improved student understanding of pathologic disease processes. Materials and Methods: Case vignettes were created for several genitourinary disease processes that utilized clinical data including narratives of pathologist-clinician encounters, WSI, and annotated video tutorials of WS scans (designed to simulate "double-heading"). The students were encouraged to view the virtual slide first, with the video tutorials being provided to offer additional assistance. The case vignettes were created to be interactive with a detailed explanation of each correct and incorrect question choice. The cases were made available to all second year medical students via a website and could be viewed only after completing a 10 question pre-test. A post-test could be completed after viewing all cases followed by a brief satisfaction survey. Results: Ninety-six students completed the pre-test with an average score of 7.7/10. Fifty-seven students completed the post-test with an average score of 9.4/10. Thirty-six students completed the satisfaction survey. 94% agreed or strongly agreed that this was a useful exercise and 91% felt that it helped them better understand the topics. Conclusion: The development of interactive case vignettes incorporating simulated pathologist-clinician encounters with WSI and video tutorials of WS scans helps to improve student enthusiasm to learn and grasp pathologic aspects of disease

  15. Preliminary study of synthetic aperture tissue harmonic imaging on in-vivo data

    DEFF Research Database (Denmark)

    Rasmussen, Joachim Hee; Hemmsen, Martin Christian; Sloth Madsen, Signe

    2013-01-01

    Synthetic aperture sequential beamforming tissue harmonic imaging (SASB-THI) was implemented on a commercially available BK 2202 Pro Focus UltraView ultrasound system and compared to dynamic receive focused tissue harmonic imaging (DRF-THI) in clinical scans. The scan sequence that was implemented on the UltraView system acquires both SASB-THI and DRF-THI simultaneously. Twenty-four simultaneously acquired video sequences of in-vivo abdominal SASB-THI and DRF-THI scans on 3 volunteers of 4 different sections of liver and kidney tissues were created. Videos of the in-vivo scans were…

  16. Estimation of the Above Ground Biomass of Tropical Forests using Polarimetric and Tomographic SAR Data Acquired at P Band and 3-D Imaging Techniques

    Science.gov (United States)

    Ferro-Famil, L.; El Hajj Chehade, B.; Ho Tong Minh, D.; Tebaldini, S.; LE Toan, T.

    2016-12-01

    Developing and improving methods to monitor forest biomass in space and time is a timely challenge, especially for tropical forests, for which SAR imaging at longer wavelengths presents interesting potential. Nevertheless, directly estimating tropical forest biomass from classical 2-D SAR images may prove a very complex and ill-conditioned problem, since a SAR echo is composed of numerous contributions whose features and importance depend on many geophysical parameters, such as ground humidity, roughness, topography…, that are not related to biomass. Recent studies showed that SAR modes of diversity, i.e. polarimetric intensity ratios or interferometric phase centers, do not fully resolve this under-determined problem, whereas Pol-InSAR tree height estimates may be related to biomass through allometric relationships, in general with significant levels of uncertainty and lack of robustness over tropical forests. In this context, 3-D imaging using SAR tomography represents an appealing solution at longer wavelengths, for which wave penetration properties ensure a high-quality mapping of tropical forest reflectivity in the vertical direction. This paper presents a series of studies, conducted in the framework of the preparation of the next ESA mission BIOMASS, on the estimation of biomass over a tropical forest in French Guiana, using polarimetric SAR tomography (Pol-TomSAR) data acquired at P band by ONERA. It is then shown that Pol-TomSAR significantly improves the retrieval of forest above-ground biomass (AGB) in a high-biomass forest (200 up to 500 t/ha), with an error of only 10% at 1.5-ha resolution using reflectivity estimates sampled at a predetermined elevation. The robustness of this technique is tested by applying the same approach to another site, and results show a similar relationship between AGB and tomographic reflectivity over both sites. The excellent ability of Pol-TomSAR to retrieve both canopy top heights and ground topography with an error…

  17. Three-dimensional image reconstruction with free open-source OsiriX software in video-assisted thoracoscopic lobectomy and segmentectomy.

    Science.gov (United States)

    Yao, Fei; Wang, Jian; Yao, Ju; Hang, Fangrong; Lei, Xu; Cao, Yongke

    2017-03-01

    The aim of this retrospective study was to evaluate the practice and feasibility of OsiriX, a free and open-source medical imaging software package, in performing accurate video-assisted thoracoscopic lobectomy and segmentectomy. From July 2014 to April 2016, 63 patients received anatomical video-assisted thoracoscopic surgery (VATS), either lobectomy or segmentectomy, in our department. Three-dimensional (3D) reconstruction images of 61 (96.8%) patients were preoperatively obtained with contrast-enhanced computed tomography (CT). Preoperative resection simulations were accomplished with patient-individual reconstructed 3D images. For lobectomy, pulmonary lobar veins, arteries and bronchi were identified meticulously by carefully reviewing the 3D images on the display. For segmentectomy, the intrasegmental veins in the affected segment for division and the intersegmental veins to be preserved were identified on the 3D images. Patient preoperative characteristics, surgical outcomes and postoperative data were reviewed from a prospective database. The study cohort of 63 patients included 33 (52.4%) men and 30 (47.6%) women, of whom 46 (73.0%) underwent VATS lobectomy and 17 (27.0%) underwent VATS segmentectomy. There was 1 conversion from VATS lobectomy to open thoracotomy because of fibrocalcified lymph nodes. A VATS lobectomy was performed in 1 case after completing the segmentectomy because invasive adenocarcinoma was detected by intraoperative frozen-section analysis. There were no 30-day or 90-day operative mortalities. CONCLUSIONS: The free, simple, and user-friendly software program OsiriX can provide a 3D anatomic structure of pulmonary vessels and a clear view of the space between the lesion and adjacent tissues, which allows surgeons to make preoperative simulations and improve the accuracy and safety of actual surgery. Copyright © 2017 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  18. Video Stabilization Using Feature Point Matching

    Science.gov (United States)

    Kulkarni, Shamsundar; Bormane, D. S.; Nalbalwar, S. L.

    2017-01-01

    Video capture by non-professionals often leads to unwanted effects such as image distortion and image blurring; hence, many researchers study these drawbacks in order to enhance video quality. In this paper an algorithm is proposed to stabilize jittery videos: a stable output video is attained without the jitter caused by the shaking of a handheld camera during recording. First, salient points in each frame of the input video are identified and matched across frames; the estimated camera trajectory is then optimized to stabilize the video, and the quality of the stabilization depends on this optimization step. The method has shown good results in terms of stabilization and removes distortion from output videos recorded in a variety of circumstances.
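    The pipeline summarized above (estimate inter-frame motion from matched points, smooth the accumulated camera path, shift each frame by the residual) can be sketched compactly. This is not the authors' code: as a simplifying assumption, phase correlation stands in for feature-point matching and only translational jitter is modeled.

    ```python
    import numpy as np

    def estimate_shift(ref, frame):
        """Translation (dy, dx) aligning `frame` back onto `ref`, found by
        phase correlation (a stand-in for feature-point matching)."""
        cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
        corr = np.abs(np.fft.ifft2(cross / (np.abs(cross) + 1e-12)))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        h, w = ref.shape
        return (dy - h if dy > h // 2 else dy,
                dx - w if dx > w // 2 else dx)

    def stabilize(frames, window=5):
        """Accumulate the camera path, smooth it with a moving average, and
        roll each frame by the path-minus-smoothed-path correction."""
        window = max(1, min(window, len(frames)))
        shifts = [np.zeros(2)]
        for prev, cur in zip(frames, frames[1:]):
            shifts.append(shifts[-1] + np.array(estimate_shift(cur, prev)))
        path = np.array(shifts)                     # jittery camera trajectory
        kernel = np.ones(window) / window
        smooth = np.stack([np.convolve(path[:, i], kernel, mode='same')
                           for i in range(2)], axis=1)
        return [np.roll(f, tuple(np.round(s - p).astype(int)), axis=(0, 1))
                for f, p, s in zip(frames, path, smooth)], path
    ```

    A production stabilizer would estimate a full similarity or affine transform from the matched features and inpaint the borders exposed by the correction.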

  19. Diversity-Aware Multi-Video Summarization

    Science.gov (United States)

    Panda, Rameswar; Mithun, Niluthpol Chowdhury; Roy-Chowdhury, Amit K.

    2017-10-01

    Most video summarization approaches have focused on extracting a summary from a single video; we propose an unsupervised framework for summarizing a collection of videos. We observe that each video in the collection may contain some information that other videos do not have, and thus exploring the underlying complementarity could be beneficial in creating a diverse informative summary. We develop a novel diversity-aware sparse optimization method for multi-video summarization by exploring the complementarity within the videos. Our approach extracts a multi-video summary which is both interesting and representative in describing the whole video collection. To efficiently solve our optimization problem, we develop an alternating minimization algorithm that minimizes the overall objective function with respect to one video at a time while fixing the other videos. Moreover, we introduce a new benchmark dataset, Tour20, that contains 140 videos with multiple human-created summaries, which were acquired in a controlled experiment. Finally, by extensive experiments on the new Tour20 dataset and several other multi-view datasets, we show that the proposed approach clearly outperforms the state-of-the-art methods on two problems: topic-oriented video summarization and multi-view video summarization in a camera network.

  20. Video games

    OpenAIRE

    Kolář, Vojtěch

    2012-01-01

    This thesis is based on a detailed analysis of various topics related to the question of whether video games can be art. First, it analyzes the current academic discussion on this subject and confronts the differing opinions of both supporters and objectors of the idea that video games can be a full-fledged art form. The second aim of this paper is to analyze the properties inherent to video games in order to find the reason why the cultural elite considers video games as i...

  1. Computationally efficient video restoration for Nyquist sampled imaging sensors combining an affine-motion-based temporal Kalman filter and adaptive Wiener filter.

    Science.gov (United States)

    Rucci, Michael; Hardie, Russell C; Barnard, Kenneth J

    2014-05-01

    In this paper, we present a computationally efficient video restoration algorithm to address both blur and noise for a Nyquist sampled imaging system. The proposed method utilizes a temporal Kalman filter followed by a correlation-model based spatial adaptive Wiener filter (AWF). The Kalman filter employs an affine background motion model and novel process-noise variance estimate. We also propose and demonstrate a new multidelay temporal Kalman filter designed to more robustly treat local motion. The AWF is a spatial operation that performs deconvolution and adapts to the spatially varying residual noise left in the Kalman filter stage. In image areas where the temporal Kalman filter is able to provide significant noise reduction, the AWF can be aggressive in its deconvolution. In other areas, where less noise reduction is achieved with the Kalman filter, the AWF balances the deconvolution with spatial noise reduction. In this way, the Kalman filter and AWF work together effectively, but without the computational burden of full joint spatiotemporal processing. We also propose a novel hybrid system that combines a temporal Kalman filter and BM3D processing. To illustrate the efficacy of the proposed methods, we test the algorithms on both simulated imagery and video collected with a visible camera.
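    The temporal stage described above can be illustrated with a drastically simplified stand-in: a per-pixel scalar Kalman filter run along time. This sketch assumes a static scene (no affine motion compensation) and hypothetical process/measurement variances `q` and `r`; the paper's actual filter is motion-compensated and followed by the adaptive Wiener deconvolution stage.

    ```python
    import numpy as np

    def temporal_kalman(frames, q=1e-4, r=1e-2):
        """Per-pixel scalar Kalman filter along time: predict (variance grows
        by process noise q), then update with each noisy frame z."""
        x = frames[0].astype(float)          # state estimate (denoised frame)
        p = np.full_like(x, r)               # per-pixel error variance
        out = [x.copy()]
        for z in frames[1:]:
            p = p + q                        # predict
            k = p / (p + r)                  # Kalman gain
            x = x + k * (z - x)              # correct with the new frame
            p = (1 - k) * p
            out.append(x.copy())
        return out, p
    ```

    With `q = 0` the recursion reduces to a running mean; a nonzero `q` limits the temporal memory, which is what leaves the spatially varying residual noise that the AWF stage then adapts to.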

  2. Myocardial iron overload assessed by magnetic resonance imaging (MRI)T2* in multi-transfused patients with thalassemia and acquired anemias.

    Science.gov (United States)

    Fragasso, Alberto; Ciancio, Angela; Mannarella, Clara; Gaudiano, Carlo; Scarciolla, Oronzo; Ottonello, Carlo; Francone, Marco; Nardella, Michele; Peluso, Angelo; Melpignano, Angela; Veglio, Maria Rosaria; Quarta, Giovanni; Turchetti, Cristiano

    2011-02-01

    Cardiac complications secondary to iron overload remain a significant concern in patients with transfusion-dependent anemias. To evaluate cardiac siderosis, magnetic resonance imaging T2* (MRI T2*) was performed in 3 cohorts of transfusion-dependent patients: 99 with thalassemia major (TM), 20 with thalassemia intermedia (TI), and 10 with acquired anemias (AA). Serum ferritin was measured and all patients underwent echocardiographic evaluation. In TM patients, pathologic cardiac T2* values (below 20 ms) were found in 37 patients. Serum ferritin was negatively associated with age (r=-0.32, p=0.001) and weakly with T2* values (r=-0.19, p=0.057). A positive correlation was found between T2* and LVEF (r=0.27, p=0.006). Of the 37 patients with T2*<20 ms, 18 (48%) had serum ferritin values <1000 ng/ml. In the TI cohort, 3 patients had pathologic cardiac T2* values. In the AA cohort, pathologic T2* values were found in 2 patients, who had received 234 and 199 PRBC units, respectively, and were both on chelation therapy (in one patient the ferritin value was 399 ng/ml). T2* values were negatively, but not significantly, associated with the number of PRBC units transfused (r=-0.53, p=0.07). In our experience, 37% of TM patients had myocardial iron overload as assessed by MRI T2*; this proportion is higher than in TI patients. Serum ferritin measurement was a poor predictor of myocardial siderosis. In patients with AA, more than 200 transfused PRBC units were required to induce cardiac hemosiderosis, in spite of chelation therapy and, in one patient, of normal ferritin values. Copyright © 2010 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

  3. The Image as a Relate: Video as a Resource for Listening to and Giving Voice to Persons with Learning Disabilities

    Science.gov (United States)

    Rojas, Susana; Sanahuja, Josep Ma

    2012-01-01

    Our work, based on two pieces of research, aims to show how we try to get the voice of persons with learning disabilities. In both pieces of work, the second of which is still under progress, we have used video as the means by which we can collect and show what people are saying in the context and situation in which they find themselves. As we…

  4. CHARACTER RECOGNITION OF VIDEO SUBTITLES

    Directory of Open Access Journals (Sweden)

    Satish S Hiremath

    2016-11-01

    Full Text Available An important task in content-based video indexing is to extract text information from videos. The challenges involved in text extraction and recognition are the varying illumination of video frames containing text, text appearing on complex backgrounds, and differing font sizes of the text. Using image processing algorithms such as morphological operations, blob detection, and histograms of oriented gradients, character recognition of video subtitles is implemented. Segmentation, feature extraction, and classification are the major steps of character recognition. Several experimental results are shown to demonstrate the performance of the proposed algorithm.
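    The recognition stage (HOG features plus a classifier) can be sketched in miniature. As assumptions not in the abstract: this uses one global orientation histogram per glyph patch rather than true cell/block HOG, and a 1-nearest-neighbour classifier against template glyphs.

    ```python
    import numpy as np

    def hog_descriptor(patch, bins=9):
        """One global histogram of oriented gradients for a glyph patch
        (real HOG tiles the patch into cells with block normalization)."""
        gy, gx = np.gradient(patch.astype(float))
        mag = np.hypot(gx, gy)
        ang = np.mod(np.arctan2(gy, gx), np.pi)       # unsigned orientations
        hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
        return hist / (np.linalg.norm(hist) + 1e-12)

    def classify_glyph(patch, templates, labels):
        """Nearest-neighbour classification on HOG descriptors."""
        d = [np.linalg.norm(hog_descriptor(patch) - hog_descriptor(t))
             for t in templates]
        return labels[int(np.argmin(d))]
    ```

    Because the histogram pools orientations over the whole patch, the descriptor is insensitive to small glyph translations, which is one reason gradient histograms tolerate imperfect segmentation.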

  5. Video processing project

    CSIR Research Space (South Africa)

    Globisch, R

    2009-03-01

    Full Text Available Video processing source code for algorithms and tools used in software media pipelines (e.g. image scalers, colour converters, etc.). The currently available source code is written in C++ with its associated libraries and DirectShow filters.

  6. Borehole-explosion and air-gun data acquired in the 2011 Salton Seismic Imaging Project (SSIP), southern California: description of the survey

    Science.gov (United States)

    Rose, Elizabeth J.; Fuis, Gary S.; Stock, Joann M.; Hole, John A.; Kell, Annie M.; Kent, Graham; Driscoll, Neal W.; Goldman, Mark; Reusch, Angela M.; Han, Liang; Sickler, Robert R.; Catchings, Rufus D.; Rymer, Michael J.; Criley, Coyn J.; Scheirer, Daniel S.; Skinner, Steven M.; Slayday-Criley, Coye J.; Murphy, Janice M.; Jensen, Edward G.; McClearn, Robert; Ferguson, Alex J.; Butcher, Lesley A.; Gardner, Max A.; Emmons, Iain; Loughran, Caleb L.; Svitek, Joseph R.; Bastien, Patrick C.; Cotton, Joseph A.; Croker, David S.; Harding, Alistair J.; Babcock, Jeffrey M.; Harder, Steven H.; Rosa, Carla M.

    2013-01-01

    earthquake energy can travel through the sediments. All of these factors determine how hard the earth will shake during a major earthquake. If we can improve on our understanding of how and where earthquakes will occur, and how strong their resultant shaking will be, then buildings can be designed or retrofitted accordingly in order to resist damage and collapse, and emergency plans can be adequately prepared. In addition, SSIP will investigate the processes of rifting and magmatism in the Salton Trough in order to better understand this important plate-boundary region. The Salton Trough is a unique rift in that subsidence is accompanied by huge influxes of infilling sediment from the Colorado River. Volcanism that accompanies the subsidence here is muted by these influxes of sediment. The Salton Trough, in the central part of the Imperial Valley, is apparently made up of entirely new crust: young sediment in the upper crust and basaltic intrusive rocks in the mid-to-lower crust (Fuis and others, 1984). Similar to the ultrasound and computed tomography (CT) scans performed by the medical industry, seismic imaging is a collection of techniques that enable scientists to obtain a picture of what is underground. The petroleum industry routinely uses these techniques to search for oil and gas at relatively shallow depths; however, the scope of this project demanded that we image as much as 30 km into the Earth’s crust. This project generated and recorded seismic waves, similar to sound waves, which move downward into the Earth and are bent (refracted) or echoed (reflected) back to the surface. SSIP acquired data in a series of intersecting lines that cover key areas of the Salton Trough. The sources of sound waves were detonations (shots) in deep boreholes, designed to create energy equivalent to magnitude 1–2 earthquakes. 
The study region routinely experiences earthquakes of these magnitudes, but earthquakes are not located in such a way as to permit us to create the

  7. Towards real-time remote processing of laparoscopic video

    Science.gov (United States)

    Ronaghi, Zahra; Duffy, Edward B.; Kwartowitz, David M.

    2015-03-01

    Laparoscopic surgery is a minimally invasive surgical technique in which surgeons insert a small video camera into the patient's body to visualize internal organs and use small tools to perform surgical procedures. However, the benefit of small incisions comes with the drawback of limited visualization of subsurface tissues, which can lead to navigational challenges in the delivery of therapy. Image-guided surgery (IGS) uses images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic camera system of interest is the vision system of the daVinci-Si robotic surgical system (Intuitive Surgical, Sunnyvale, CA, USA). Its video streams generate approximately 360 megabytes of data per second, demonstrating a trend towards increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Processing this data on a bedside PC has become challenging, and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical 30 frames per second (fps) rate, each 11.9 MB video frame must be processed by a server and returned within 1/30th of a second. The ability to acquire, process and visualize data in real time is essential for the performance of complex tasks as well as for minimizing risk to the patient. As a result, utilizing high-speed networks to access computing clusters will enable real-time medical image processing and improve surgical experiences by providing real-time augmented laparoscopic data. We aim to develop a medical video processing system using an OpenFlow software-defined network that is capable of connecting to multiple remote medical facilities and HPC servers.
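    The figures quoted above fix the budget for any remote-processing design. A quick check (the Gbit/s conversion assumes decimal megabytes, an assumption of ours, not the abstract's):

    ```python
    # Back-of-the-envelope budget for the remote-processing figures quoted
    # above: 11.9 MB frames at 30 fps.
    frame_mb = 11.9
    fps = 30

    rate_mb_s = frame_mb * fps            # sustained stream rate, ~357 MB/s
    rate_gbit_s = rate_mb_s * 8 / 1000    # ~2.9 Gbit/s of network capacity
    deadline_ms = 1000.0 / fps            # ~33.3 ms round trip per frame

    print(f"{rate_mb_s:.0f} MB/s, {rate_gbit_s:.1f} Gbit/s, {deadline_ms:.1f} ms")
    ```

    The ~33 ms deadline must cover transmission in both directions plus server-side processing, which is why the authors reach for software-defined networking rather than a best-effort link.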

  8. Tiny videos: a large data set for nonparametric video retrieval and frame classification.

    Science.gov (United States)

    Karpenko, Alexandre; Aarabi, Parham

    2011-03-01

    In this paper, we present a large database of over 50,000 user-labeled videos collected from YouTube. We develop a compact representation called "tiny videos" that achieves high video compression rates while retaining the overall visual appearance of the video as it varies over time. We show that frame sampling using affinity propagation, an exemplar-based clustering algorithm, achieves the best trade-off between compression and video recall. We use this large collection of user-labeled videos in conjunction with simple data mining techniques to perform related-video retrieval, as well as classification of images and video frames. The classification results achieved by tiny videos are compared with the tiny images framework [24] for a variety of recognition tasks. The tiny images data set consists of 80 million images collected from the Internet. These are the largest labeled research data sets of videos and images available to date. We show that tiny videos are better suited for classifying scenery and sports activities, while tiny images perform better at recognizing objects. Furthermore, we demonstrate that combining the tiny images and tiny videos data sets improves classification precision in a wider range of categories.
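    The frame-sampling step above relies on affinity propagation, which selects exemplar frames by message passing. As an illustration (not the authors' implementation), the standard responsibility/availability updates fit in a short NumPy function; for frames, the similarities might be negative squared distances between per-frame feature vectors.

    ```python
    import numpy as np

    def affinity_propagation(S, damping=0.7, iters=100):
        """Minimal affinity-propagation message passing (Frey-Dueck style).
        S is an n x n similarity matrix; its diagonal holds the preferences."""
        n = S.shape[0]
        R = np.zeros((n, n)); A = np.zeros((n, n)); idx = np.arange(n)
        for _ in range(iters):
            # responsibilities: evidence that k should be i's exemplar
            AS = A + S
            best = AS.argmax(axis=1)
            first = AS[idx, best]
            AS[idx, best] = -np.inf
            second = AS.max(axis=1)
            Rnew = S - first[:, None]
            Rnew[idx, best] = S[idx, best] - second
            R = damping * R + (1 - damping) * Rnew
            # availabilities: evidence that k is an exemplar at all
            Rp = np.maximum(R, 0)
            Rp[idx, idx] = R[idx, idx]
            Anew = Rp.sum(axis=0)[None, :] - Rp
            dA = Anew[idx, idx].copy()
            Anew = np.minimum(Anew, 0)
            Anew[idx, idx] = dA
            A = damping * A + (1 - damping) * Anew
        exemplars = np.flatnonzero(np.diag(A + R) > 0)
        if exemplars.size == 0:                     # degenerate fallback
            exemplars = np.array([int(np.argmax(np.diag(S)))])
        labels = exemplars[np.argmax(S[:, exemplars], axis=1)]
        return exemplars, labels
    ```

    Applied to per-frame features, the returned exemplars are the sampled key frames; the diagonal preference controls how many exemplars (and hence how much compression) you get.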

  9. New algorithm for iris recognition based on video sequences

    Science.gov (United States)

    Bourennane, Salah; Fossati, Caroline; Ketchantang, William

    2010-07-01

    Among existing biometrics, iris recognition systems are among the most accurate personal biometric identification systems. However, the acquisition of a workable iris image requires strict cooperation of the user; otherwise, the image will be rejected by the verification module because of its poor quality, inducing a high false reject rate (FRR). The FRR may also increase when iris localization fails or when the pupil is too dilated. To improve on existing methods, we propose to use video sequences acquired in real time by a camera. In order to keep the computational load of iris identification unchanged, we propose a new method to estimate the iris characteristics. First, we propose a new iris texture characterization based on the Fourier-Mellin transform, which is less sensitive to pupil dilation than previous methods. Then, we develop a new iris localization algorithm that is robust to variations in quality (partial occlusions due to eyelids and eyelashes, light reflections, etc.), and finally, we introduce a fast new criterion for selecting suitable images from an iris video sequence for accurate recognition. The accuracy of each step of the algorithm in the whole proposed recognition process is tested and evaluated using our own iris video database and several public image databases, such as CASIA, UBIRIS, and BATH.
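    The abstract does not spell out its Fourier-Mellin variant, but the standard construction behind such descriptors is: take the FFT magnitude (translation-invariant), resample it in log-polar coordinates (rotation and scale become shifts), then take the FFT magnitude again (invariant to those shifts). A minimal sketch, with nearest-neighbour log-polar sampling as a simplification:

    ```python
    import numpy as np

    def log_polar(img, n_r=32, n_t=32):
        """Nearest-neighbour log-polar resampling around the image centre."""
        h, w = img.shape
        cy, cx = h / 2, w / 2
        r_max = min(cy, cx)
        rs = np.exp(np.linspace(0, np.log(r_max), n_r))
        ts = np.linspace(0, 2 * np.pi, n_t, endpoint=False)
        ys = np.clip((cy + rs[:, None] * np.sin(ts)).astype(int), 0, h - 1)
        xs = np.clip((cx + rs[:, None] * np.cos(ts)).astype(int), 0, w - 1)
        return img[ys, xs]

    def fourier_mellin_descriptor(img):
        """|FFT| -> log-polar -> |FFT|: a translation-invariant descriptor
        in which rotation/scale changes reduce to cyclic shifts."""
        mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
        lp = log_polar(np.log1p(mag))
        d = np.abs(np.fft.fft2(lp))
        return d / (np.linalg.norm(d) + 1e-12)
    ```

    For iris texture, the relevant property is that moderate pupil dilation acts approximately as a radial rescaling, which this construction absorbs in the same way as a shift.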

  10. Deep video deblurring

    KAUST Repository

    Su, Shuochen

    2016-11-25

    Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result, the best-performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task that requires high-level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high-frame-rate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.

  11. Point cloud generation from aerial image data acquired by a quadrocopter type micro unmanned aerial vehicle and a digital still camera

    National Research Council Canada - National Science Library

    Rosnell, Tomi; Honkavaara, Eija

    2012-01-01

    The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems...

  12. Video-Based Physiologic Monitoring During an Acute Hypoxic Challenge: Heart Rate, Respiratory Rate, and Oxygen Saturation.

    Science.gov (United States)

    Addison, Paul S; Jacquel, Dominique; Foo, David M H; Antunes, André; Borg, Ulf R

    2017-09-01

    The physiologic information contained in the video photoplethysmogram is well documented. However, extracting this information during challenging conditions requires new analysis techniques to capture and process the video image streams to extract clinically useful physiologic parameters. We hypothesized that heart rate, respiratory rate, and oxygen saturation trending can be evaluated accurately from video information during acute hypoxia. Video footage was acquired from multiple desaturation episodes during a porcine model of acute hypoxia using a standard visible light camera. A novel in-house algorithm was used to extract photoplethysmographic cardiac pulse and respiratory information from the video image streams and process it to extract a continuously reported video-based heart rate (HRvid), respiratory rate (RRvid), and oxygen saturation (SvidO2). This information was then compared with HR and oxygen saturation references from commercial pulse oximetry and the known rate of respiration from the ventilator. Eighty-eight minutes of data were acquired during 16 hypoxic episodes in 8 animals. A linear mixed-effects regression showed excellent responses relative to a nonhypoxic reference signal with slopes of 0.976 (95% confidence interval [CI], 0.973-0.979) for HRvid; 1.135 (95% CI, 1.101-1.168) for RRvid, and 0.913 (95% CI, 0.905-0.920) for video-based oxygen saturation. These results were obtained while maintaining continuous uninterrupted vital sign monitoring for the entire study period. Video-based monitoring of HR, RR, and oxygen saturation may be performed with reasonable accuracy during acute hypoxic conditions in an anesthetized porcine hypoxia model using standard visible light camera equipment. However, the study was conducted during relatively low motion. A better understanding of the effect of motion and the effect of ambient light on the video photoplethysmogram may help refine this monitoring technology for use in the clinical environment.
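    The HR extraction above rests on a standard idea: the spatially averaged intensity of the skin region oscillates at the pulse rate, so the dominant spectral peak in a plausible band gives HR. The study's in-house algorithm is far more involved; this is only the textbook spectral sketch, with a hypothetical band of 0.7-4.0 Hz (42-240 beats/min).

    ```python
    import numpy as np

    def rate_from_video(frames, fps, band=(0.7, 4.0)):
        """Estimate a pulse rate (beats/min) from the dominant FFT peak of
        the spatially averaged frame intensity, restricted to `band` (Hz)."""
        sig = np.array([f.mean() for f in frames])
        sig = sig - sig.mean()                       # remove DC
        freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
        spec = np.abs(np.fft.rfft(sig))
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return 60.0 * freqs[mask][spec[mask].argmax()]
    ```

    The same machinery with a 0.1-0.5 Hz band would target respiratory rate; motion artifacts, which the authors flag as the open problem, show up as competing spectral peaks.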

  13. Chip-on-the-tip compact flexible endoscopic epifluorescence video-microscope for in-vivo imaging in medicine and biomedical research.

    Science.gov (United States)

    Matz, Gregor; Messerschmidt, Bernhard; Göbel, Werner; Filser, Severin; Betz, Christian S; Kirsch, Matthias; Uckermann, Ortrud; Kunze, Marcel; Flämig, Sven; Ehrhardt, André; Irion, Klaus-Martin; Haack, Mareike; Dorostkar, Mario M; Herms, Jochen; Gross, Herbert

    2017-07-01

    We demonstrate a lightweight (60 mg) video-endomicroscope with a cylindrical rigid tip of only 1.6 mm diameter and 6.7 mm length. A novel method of implementing the illumination unit in the endomicroscope is presented. It allows the biological sample to be illuminated with fiber-coupled LED light at 455 nm and the red-shifted fluorescence light above 500 nm to be imaged in the epi-direction. A large numerical aperture of 0.7 provides sub-cellular resolution and yields high-contrast images within a field of view of 160 μm. A miniaturized chip-on-the-tip CMOS image sensor with more than 150,000 pixels captures the multicolor images at 30 fps. Considering size, plug-and-play capability, optical performance, flexibility and weight, we hence present a probe that sets a new benchmark in the field of epifluorescence endomicroscopes. Several ex-vivo and in-vivo experiments in rodents and humans suggest future application in biomedical fields, especially in the neuroscience community, as well as in medical applications targeting optical biopsies or the detection of cellular anomalies.

  14. The learning curve for narrow-band imaging in the diagnosis of precancerous gastric lesions by using Web-based video.

    Science.gov (United States)

    Dias-Silva, Diogo; Pimentel-Nunes, Pedro; Magalhães, Joana; Magalhães, Ricardo; Veloso, Nuno; Ferreira, Carlos; Figueiredo, Pedro; Moutinho, Pedro; Dinis-Ribeiro, Mário

    2014-06-01

    A simplified narrow-band imaging (NBI) endoscopy classification of gastric precancerous and cancerous lesions was derived and validated in a multicenter study. This classification comes with the need for dissemination through adequate training. To address the learning curve of this classification by endoscopists with differing expertise and to assess the feasibility of a YouTube-based learning program to disseminate it. Prospective study. Five centers. Six gastroenterologists (3 trainees, 3 fully trained endoscopists [FTs]). Twenty tests, provided through a Web-based program and each containing 10 randomly ordered NBI videos of gastric mucosa, were taken. Feedback was sent 7 days after every test submission. Measures of the accuracy of the NBI classification over time. From the first to the last 50 videos, a learning curve was observed with a 10% increase in global accuracy, for both trainees (from 64% to 74%) and FTs (from 56% to 65%). After 200 videos, sensitivity and specificity of 80% and higher for intestinal metaplasia were observed in half the participants, and a specificity for dysplasia greater than 95%, along with a relevant likelihood ratio for a positive result of 7 to 28 and a likelihood ratio for a negative result of 0.21 to 0.82, were achieved by all of the participants. No consistent learning curve was observed for the identification of Helicobacter pylori gastritis or for sensitivity to dysplasia. The trainees had better results on all of the parameters, except specificity for dysplasia, compared with the FTs. Globally, participants agreed that the program's structure was adequate, except for the feedback, which they felt should have consisted of a more detailed explanation of each answer. No formal sample size estimate. A Web-based learning program could be used to teach and disseminate classifications in the endoscopy field. In this study, an NBI classification of gastric mucosal features seems to be easily learned for the identification of gastric preneoplastic
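    The likelihood ratios quoted in this abstract follow directly from sensitivity and specificity. The example values below (sensitivity 0.80, specificity 0.95) are illustrative choices of ours, picked to land inside the reported ranges:

    ```python
    def likelihood_ratios(sens, spec):
        """LR+ = P(test+|disease)/P(test+|no disease); LR- analogous."""
        lr_pos = sens / (1 - spec)
        lr_neg = (1 - sens) / spec
        return lr_pos, lr_neg

    lr_pos, lr_neg = likelihood_ratios(0.80, 0.95)
    print(f"{lr_pos:.1f} {lr_neg:.2f}")   # prints "16.0 0.21"
    ```

    Both values sit within the abstract's reported intervals (LR+ of 7 to 28, LR- of 0.21 to 0.82), showing how a >95% specificity drives the large positive likelihood ratios.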

  15. Acquired Porphyria Cutanea Tarda

    Science.gov (United States)

    Koval, Andrew; Danby, C. W. E.; Petermann, H.

    1965-01-01

    Currently, the porphyrias are classified in four main groups: congenital porphyria, acute intermittent porphyria, porphyria cutanea tarda hereditaria, and porphyria cutanea tarda symptomatica. The acquired form of porphyria (porphyria cutanea tarda symptomatica) occurs in older males and is nearly always associated with chronic alcoholism and hepatic cirrhosis. The main clinical changes are dermatological, with excessive skin fragility and photosensitivity resulting in erosions and bullae. Biochemically, high levels of uroporphyrin are found in the urine and stools. Treatment to date has been symptomatic and usually unsuccessful. A case of porphyria cutanea tarda symptomatica is presented showing dramatic improvement of both the skin lesions and porphyrin levels in urine and blood following repeated phlebotomy. Possible mechanisms of action of phlebotomy on porphyria cutanea tarda symptomatica are discussed. PMID:14341652

  16. Video Podcasts

    DEFF Research Database (Denmark)

    Nortvig, Anne Mette; Sørensen, Birgitte Holm

    2016-01-01

    This project’s aim was to support and facilitate master’s students’ preparation and collaboration by making video podcasts of short lectures available on YouTube prior to students’ first face-to-face seminar. The empirical material stems from group interviews, from statistical data created through YouTube analytics and from surveys answered by students after the seminar. The project sought to explore how video podcasts support learning and reflection online and how students use and reflect on the integration of online activities in the videos. Findings showed that students engaged actively…

  17. A Novel Morphometry-Based Protocol of Automated Video-Image Analysis for Species Recognition and Activity Rhythms Monitoring in Deep-Sea Fauna

    Directory of Open Access Journals (Sweden)

    Paolo Menesatti

    2009-10-01

    Full Text Available The understanding of ecosystem dynamics in deep-sea areas is to date limited by technical constraints on sampling repetition. We have elaborated a morphometry-based protocol for automated video-image analysis in which animal movement tracking (by frame subtraction) is accompanied by species identification from the animals' outlines using Fourier descriptors and standard K-nearest-neighbour methods. One week of footage from a permanent video station located at 1,100 m depth in Sagami Bay (Central Japan) was analysed. Out of 150,000 frames (1 per 4 s), a subset of 10,000 was analysed by a trained operator to increase the efficiency of the automated procedure. Error estimation for the automated and trained-operator procedures was computed as a measure of protocol performance. Three displacing species were identified as the most recurrent: Zoarcid fishes (eelpouts), red crabs (Paralomis multispina), and snails (Buccinum soyomaruae). Species identification with KNN thresholding produced better results in automated motion detection. Results are discussed considering that this technological bottleneck currently constrains the exploration of the deep sea.
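    The frame-subtraction stage of such a protocol is simple to sketch: build a background image from the frame stack, then flag pixels that deviate from it. This is a generic illustration (median background, fixed threshold), not the authors' pipeline, and it omits the Fourier-descriptor species-identification step.

    ```python
    import numpy as np

    def motion_masks(frames, thresh=0.2):
        """Frame-subtraction movement detection: a per-pixel median over the
        stack serves as the background; moving animals appear wherever the
        current frame deviates from it by more than `thresh`."""
        bg = np.median(np.stack(frames), axis=0)
        return [np.abs(f - bg) > thresh for f in frames]
    ```

    The median background works because each moving animal occupies any given pixel in only a minority of frames; the resulting binary masks are what the outline-extraction and classification stages would then consume.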

  18. Small UAV-Acquired, High-resolution, Georeferenced Still Imagery

    Energy Technology Data Exchange (ETDEWEB)

    Ryan Hruska

    2005-09-01

    Currently, small Unmanned Aerial Vehicles (UAVs) are primarily used for capturing and down-linking real-time video. To date, their role as a low-cost airborne platform for capturing high-resolution, georeferenced still imagery has not been fully utilized. On-going work within the Unmanned Vehicle Systems Program at the Idaho National Laboratory (INL) is attempting to exploit this small UAV-acquired, still imagery potential. Initially, a UAV-based still imagery work flow model was developed that includes initial UAV mission planning, sensor selection, UAV/sensor integration, and imagery collection, processing, and analysis. Components to support each stage of the work flow are also being developed. Critical to use of acquired still imagery is the ability to detect changes between images of the same area over time. To enhance the analysts’ change detection ability, a UAV-specific, GIS-based change detection system called SADI or System for Analyzing Differences in Imagery is under development. This paper will discuss the associated challenges and approaches to collecting still imagery with small UAVs. Additionally, specific components of the developed work flow system will be described and graphically illustrated using varied examples of small UAV-acquired still imagery.

  19. A video event trigger for high frame rate, high resolution video technology

    Science.gov (United States)

    Williams, Glenn L.

    1991-01-01

    When video replaces film the digitized video data accumulates very rapidly, leading to a difficult and costly data storage problem. One solution exists for cases when the video images represent continuously repetitive 'static scenes' containing negligible activity, occasionally interrupted by short events of interest. Minutes or hours of redundant video frames can be ignored, and not stored, until activity begins. A new, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term or short term changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pretrigger and post-trigger storage techniques are then adaptable for archiving the digital stream from only the significant video images.
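The pre-trigger/post-trigger storage idea can be sketched in software. The paper describes a highly parallel hardware state machine, so the following ring-buffer sketch (with arbitrary buffer sizes) is only a conceptual analogue:

```python
from collections import deque

class VideoEventTrigger:
    """Keep a rolling pre-trigger buffer of recent frames; once an event
    fires, also archive a fixed number of post-trigger frames.  Buffer
    sizes here are arbitrary illustration values."""

    def __init__(self, pre_frames=3, post_frames=2):
        self.pre = deque(maxlen=pre_frames)  # discards oldest automatically
        self.post_needed = post_frames
        self.archive = None                  # None until an event fires

    def feed(self, frame, event_detected):
        if self.archive is None:
            self.pre.append(frame)
            if event_detected:
                self.archive = list(self.pre)  # snapshot pre-trigger history
        elif self.post_needed > 0:
            self.archive.append(frame)
            self.post_needed -= 1
        return self.archive

trig = VideoEventTrigger()
for i, frame in enumerate(["f0", "f1", "f2", "f3", "f4", "f5", "f6"]):
    trig.feed(frame, event_detected=(i == 3))  # event occurs at frame f3
print(trig.archive)  # → ['f1', 'f2', 'f3', 'f4', 'f5']
```

Frames before the pre-trigger window (here `f0`) and after the post-trigger count (`f6`) are never stored, which is exactly how the redundant "static scene" frames are discarded.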

  20. Rate-Adaptive Video Compression (RAVC) Universal Video Stick (UVS)

    Science.gov (United States)

    Hench, David L.

    2009-05-01

The H.264 video compression standard, also known as MPEG-4 Part 10 or Advanced Video Coding (AVC), allows new flexibility in the use of video in the battlefield, and necessitates encoder chips that effectively utilize its increased capabilities. Such chips are designed to cover the full range of the standard, with designers of individual products selecting the parameters that differentiate a broadcast system from a video-conferencing system. The SmartCapture commercial product and the Universal Video Stick (UVS) military version are about the size of a thumb drive, with analog video input and USB (Universal Serial Bus) output, and allow the user to select the imaging parameters. The user can thereby trade video bandwidth against video quality along four dimensions, on the fly, without stopping video transmission: 1) spatial, changing from 720 x 480 pixels to 320 x 360 pixels to 160 x 180 pixels; 2) temporal, changing from 30 frames/s to 5 frames/s; 3) transform quality, with a 5-to-1 range; and 4) Group of Pictures (GOP) size, which affects noise immunity. The host processor simply wraps the H.264 network abstraction layer packets into the appropriate network packets. We also discuss the recently adopted scalable amendment to H.264, which will allow RAVC-style rate limiting at any point in the communication chain by discarding preselected packets.
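As a rough illustration of how the spatial and temporal dimensions trade off against bandwidth, one can assume bit rate scales with pixel count, frame rate, and a quality factor. This is a toy model for orders of magnitude only, not H.264 rate control:

```python
def relative_bandwidth(width, height, fps, quality,
                       ref=(720, 480, 30, 1.0)):
    """Relative bit rate of a configuration versus a reference,
    assuming rate scales linearly with pixels, frame rate and a
    quality factor.  Purely an illustrative assumption."""
    rw, rh, rf, rq = ref
    return (width * height * fps * quality) / (rw * rh * rf * rq)

# Dropping from 720x480 at 30 fps to 320x360 at 5 fps, same quality:
print(round(relative_bandwidth(320, 360, 5, 1.0), 3))  # → 0.056
```

Under this crude model, the lowest spatial/temporal settings need only a few percent of the reference bandwidth, which is the kind of headroom that lets the user adapt "on the fly".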

  1. Image quality improvement in adaptive optics scanning laser ophthalmoscopy assisted capillary visualization using B-spline-based elastic image registration.

    Science.gov (United States)

    Uji, Akihito; Ooto, Sotaro; Hangai, Masanori; Arichika, Shigeta; Yoshimura, Nagahisa

    2013-01-01

To investigate the effect of B-spline-based elastic image registration on adaptive optics scanning laser ophthalmoscopy (AO-SLO)-assisted capillary visualization, AO-SLO videos were acquired from parafoveal areas in the eyes of healthy subjects and patients with various diseases. The image quality of capillary images constructed from AO-SLO videos using motion-contrast enhancement was compared before and after B-spline-based elastic (nonlinear) image registration performed using ImageJ. For objective comparison of image quality, contrast-to-noise ratios (CNRs) for vessel images were calculated; for subjective comparison, experienced ophthalmologists ranked images on a 5-point scale. All AO-SLO videos were successfully stabilized by elastic image registration. CNR was significantly higher in capillary images stabilized by elastic image registration than in those stabilized without it. The average ratio of CNR in images with elastic image registration to CNR in images without it was 2.10 ± 1.73, with no significant difference in the ratio between patients and healthy subjects. The improvement in image quality was also supported by the expert comparison. The use of B-spline-based elastic image registration in AO-SLO-assisted capillary visualization was effective for enhancing image quality both objectively and subjectively.
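A common definition of contrast-to-noise ratio is |μ_signal − μ_background| / σ_background. The abstract does not spell out the exact formula used in the study, so the sketch below is illustrative, with made-up pixel values:

```python
from statistics import mean, pstdev

def contrast_to_noise_ratio(vessel_pixels, background_pixels):
    """CNR = |mean(signal) - mean(background)| / std(background).
    One common definition; the study's exact formula is not given
    in the abstract."""
    noise = pstdev(background_pixels)
    if noise == 0:
        raise ValueError("background has zero variance")
    return abs(mean(vessel_pixels) - mean(background_pixels)) / noise

# Hypothetical intensity samples from a vessel region and background.
vessel = [120, 125, 130, 118]
background = [80, 82, 78, 80]
print(round(contrast_to_noise_ratio(vessel, background), 2))  # → 30.58
```

Better registration reduces motion blur in the averaged motion-contrast image, which lowers the background noise term and so raises the CNR.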

  2. 2011 Tohoku tsunami hydrographs, currents, flow velocities and ship tracks based on video and TLS measurements

    Science.gov (United States)

    Fritz, Hermann M.; Phillips, David A.; Okayasu, Akio; Shimozono, Takenori; Liu, Haijiang; Takeda, Seiichi; Mohammed, Fahad; Skanavis, Vassilis; Synolakis, Costas E.; Takahashi, Tomoyuki

    2013-04-01

The March 11, 2011, magnitude Mw 9.0 earthquake off the Tohoku coast of Japan caused catastrophic damage and loss of life among a tsunami-aware population. The mid-afternoon tsunami arrival, combined with survivors equipped with cameras on top of vertical evacuation buildings, provided fragmented, spatially and temporally resolved inundation recordings. This report focuses on the surveys at 9 tsunami eyewitness video recording locations in Miyako, Kamaishi, Kesennuma and Yoriisohama along Japan's Sanriku coast, and on the subsequent video image calibration, processing, tsunami hydrograph and flow velocity analysis. Selected tsunami video recording sites were explored, eyewitnesses interviewed and some ground control points recorded during the initial tsunami reconnaissance in April 2011. A follow-up survey in June 2011 focused on terrestrial laser scanning (TLS) at locations with high-quality eyewitness videos. We acquired precise topographic data using TLS at the video sites, producing a 3-dimensional "point cloud" dataset; a camera mounted on the Riegl VZ-400 scanner yields photorealistic 3D images, and integrated GPS measurements allow accurate georeferencing. The original video recordings were recovered from eyewitnesses and the Japanese Coast Guard (JCG). The analysis of the tsunami videos follows an adapted four-step procedure originally developed for the analysis of 2004 Indian Ocean tsunami videos at Banda Aceh, Indonesia (Fritz et al., 2006). The first step requires the calibration of the sector of view present in the eyewitness video recording based on ground control points measured in the LiDAR data. In a second step, the video image motion induced by the panning of the video camera is determined from subsequent images by particle image velocimetry (PIV) applied to fixed objects. The third step involves the transformation of the raw tsunami video images from image coordinates to world coordinates with a direct linear transformation (DLT) procedure. Finally, the
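The third step, mapping image coordinates to world coordinates on a plane, has the standard eight-parameter DLT form. The sketch below only applies the transformation; the coefficients are made up for illustration, whereas in practice they are solved from the surveyed ground control points:

```python
def dlt_transform(x, y, L):
    """Map image coordinates (x, y) to planar world coordinates (X, Y)
    with the standard 8-parameter direct linear transformation:
        X = (L1*x + L2*y + L3) / (L7*x + L8*y + 1)
        Y = (L4*x + L5*y + L6) / (L7*x + L8*y + 1)
    L = [L1..L8]; the values used below are hypothetical, not
    calibration results from the study."""
    denom = L[6] * x + L[7] * y + 1.0
    return ((L[0] * x + L[1] * y + L[2]) / denom,
            (L[3] * x + L[4] * y + L[5]) / denom)

# Identity-like parameters that only shift the origin:
L = [1, 0, 5, 0, 1, -2, 0, 0]
print(dlt_transform(10, 20, L))  # → (15.0, 18.0)
```

Once pixel tracks of the tsunami front are in world coordinates, differencing positions between frames yields the flow velocities reported in such analyses.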

  3. Development of a THz spectroscopic imaging system

    Energy Technology Data Exchange (ETDEWEB)

    Usami, M [TOCHIGI Nikon Corporation, 770 Midori, Ohtawara, Tochigi (Japan); Iwamoto, T [TOCHIGI Nikon Corporation, 770 Midori, Ohtawara, Tochigi (Japan); Fukasawa, R [TOCHIGI Nikon Corporation, 770 Midori, Ohtawara, Tochigi (Japan); Tani, M [Kansai Advanced Research Center, Communications Research Laboratory, 588-2 Iwaoka, Nishi-ku, Kobe (Japan); Watanabe, M [Kansai Advanced Research Center, Communications Research Laboratory, 588-2 Iwaoka, Nishi-ku, Kobe (Japan); Sakai, K [Kansai Advanced Research Center, Communications Research Laboratory, 588-2 Iwaoka, Nishi-ku, Kobe (Japan)

    2002-11-07

    We have developed a real-time THz imaging system based on the two-dimensional (2D) electro-optic (EO) sampling technique. Employing the 2D EO-sampling technique, we can obtain THz images using a CCD camera at a video rate of up to 30 frames per second. A spatial resolution of 1.4 mm was achieved. This resolution was reasonably close to the theoretical limit determined by diffraction. We observed not only static objects but also moving ones. To acquire spectroscopic information, time-domain images were collected. By processing these images on a computer, we can obtain spectroscopic images. Spectroscopy for silicon wafers was demonstrated.

  4. Non-intrusive telemetry applications in the oilsands: from visible light and x-ray video to acoustic imaging and spectroscopy

    Science.gov (United States)

    Shaw, John M.

    2013-06-01

While the production, transport and refining of oils from the oilsands of Alberta, and comparable resources elsewhere, are performed at industrial scales, numerous technical and technological challenges and opportunities persist due to the ill-defined nature of the resource. For example, bitumen and heavy oil comprise multiple bulk phases, with self-organizing constituents at the microscale (liquid crystals) and the nanoscale, and there are no quantitative measures available at the molecular level. Non-intrusive telemetry is providing promising paths toward solutions, be they enabling technologies targeting process design, development or optimization, or more prosaic process-control and process-monitoring applications. Operational examples include automated detection of large objects and poor-quality ore during mining, and monitoring the thickness and location of oil-water interfacial zones within separation vessels; these applications involve real-time video image processing. X-ray transmission video imaging is used to enumerate the organic phases present within a vessel, and to detect individual phase volumes, densities and elemental compositions. This is an enabling technology that provides phase-equilibrium and phase-composition data for production and refining process development, and for fluid-property myth debunking. A high-resolution two-dimensional acoustic mapping technique, now at the proof-of-concept stage, is expected to provide simultaneous fluid-flow and fluid-composition data within porous inorganic media; again, this is an enabling technology targeting visualization of diverse oil-production process fundamentals at the pore scale. Far-infrared spectroscopy, coupled with detailed quantum-mechanical calculations, may provide the characteristic molecular motifs and intermolecular association data required for fluid characterization and process modeling. X-ray scattering (SAXS/WAXS/USAXS) provides characteristic supramolecular structure information that impacts fluid rheology and process

  5. The 15 March 2007 paroxysm of Stromboli: video-image analysis, and textural and compositional features of the erupted deposit

    Science.gov (United States)

    Andronico, Daniele; Taddeucci, Jacopo; Cristaldi, Antonio; Miraglia, Lucia; Scarlato, Piergiorgio; Gaeta, Mario

    2013-07-01

On 15 March 2007, a paroxysmal event occurred within the crater terrace of Stromboli, in the Aeolian Islands (Italy). Infrared and visible video recordings from the monitoring network reveal that there was a succession of highly explosive pulses, lasting about 5 min, from at least four eruptive vents. Initially, brief jets with low apparent temperature were simultaneously erupted from the three main vent regions, becoming hotter and transitioning to bomb-rich fountaining that lasted for 14 s. Field surveys estimate the corresponding fallout deposit to have a mass of ˜1.9 × 10⁷ kg that, coupled with the video information on eruption duration, provides a mean mass eruption rate of ˜5.4 × 10⁵ kg/s. Textural and chemical analyses of the erupted tephra reveal unexpected complexity, with grain-size bimodality in the samples associated with different percentages of ash types (juvenile, lithics, and crystals) that reflects almost simultaneous deposition from multiple and evolving plumes. Juvenile glass chemistry ranges from a gas-rich, low-porphyricity end member (typical of other paroxysmal events) to a gas-poor, high-porphyricity one usually associated with low-intensity Strombolian explosions. Integration of our diverse data sets reveals that (1) the 2007 event was a paroxysmal explosion driven by a magma sharing common features with large-scale paroxysms as well as with "ordinary" Strombolian explosions; (2) the eruption began with vent opening by the release of a pressurized gas slug, followed by rapid magma vesiculation and ejection, recorded both by the infrared camera and in the texture of the fallout products; and (3) lesser paroxysmal events can be highly dynamic and produce surprisingly complex fallout deposits, which would be difficult to interpret from the geological record alone.

  6. Virtual unenhanced CT images acquired from dual-energy CT urography: accuracy of attenuation values and variation with contrast material phase.

    Science.gov (United States)

    Sahni, V A; Shinagare, A B; Silverman, S G

    2013-03-01

To determine how representative virtual unenhanced (VNE) images are of true unenhanced (TNE) images when performing computed tomography (CT) urography on a dual-energy CT (DECT) system, and whether the images are affected by the contrast material phase. In this retrospective, institutional review board-approved, Health Insurance Portability and Accountability Act (HIPAA)-compliant study, TNE images were compared with VNE images derived from the nephrographic (VNEn) and excretory (VNEe) phases in 100 consecutive CT urograms. Two readers in consensus measured attenuation values of abdominal organs, fat, and renal lesions (>1 cm). Image noise was correlated with patient thickness. Detectability of renal stones was evaluated. Image quality and acceptability were assessed using a five-point scale. The expected dose saving from removing the TNE phase was calculated. VNE attenuation values of liver, renal parenchyma, and aorta were significantly different to TNE values (p < 0.05); spleen and fat attenuation values showed no significant difference. No significant difference was found between VNEn and VNEe images. Image noise was significantly greater in TNE images (p < 0.0001) and correlated with patient thickness. VNEn and VNEe images had sensitivities of 76.6% and 65.6% for the detection of stones, identifying all stones greater than 3 and 4 mm, respectively. Both VNE image types received significantly lower image quality scores than TNE images (p < 0.0001); however, the majority of images were deemed acceptable. The mean theoretical dose saving from removing the TNE phase was 35%. Although VNE images demonstrate high reader acceptability, the accuracy of attenuation values and the detection of small stones are limited. The contrast material phase, however, does not affect attenuation values. Further validation of VNE images is recommended prior to clinical implementation. Copyright © 2012 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  7. High-speed digital video tracking system for generic applications

    Science.gov (United States)

    Walton, James S.; Hallamasek, Karen G.

    2001-04-01

The value of high-speed imaging for making subjective assessments is widely recognized, but the inability to acquire useful data from image sequences in a timely fashion has severely limited the use of the technology. 4DVideo has created a foundation for a generic instrument that can capture kinematic data from high-speed images. The new system has been designed to acquire (1) two-dimensional trajectories of points; (2) three-dimensional kinematics of structures or linked rigid bodies; and (3) morphological reconstructions of boundaries. The system has been designed to work with an unlimited number of cameras configured as nodes in a network, with each camera able to acquire images at 1000 frames per second (fps) or better, with a spatial resolution of 512 × 512 or better, and an 8-bit gray scale. However, less demanding configurations are anticipated. The critical technology is contained in the custom hardware that services the cameras. This hardware optimizes the amount of information stored, and maximizes the available bandwidth. The system identifies targets using an algorithm implemented in hardware. When complete, the system software will provide all of the functionality required to capture and process video data from multiple perspectives. Thereafter it will extract, edit and analyze the motions of finite targets and boundaries.

  8. Fast Aerial Video Stitching

    Directory of Open Access Journals (Sweden)

    Jing Li

    2014-10-01

The highly efficient and robust stitching of aerial video captured by unmanned aerial vehicles (UAVs) is a challenging problem in the field of robot vision. Existing commercial image stitching systems have seen success with offline stitching tasks, but they cannot guarantee high-speed performance when dealing with online aerial video sequences. In this paper, we present a novel system with the unique ability to stitch high-frame-rate aerial video at a speed of 150 frames per second (FPS). In addition, rather than using a high-speed vision platform such as an FPGA or CUDA, our system runs on a normal personal computer. To achieve this, after careful comparison of existing invariant features, we choose the FAST corner and a binary descriptor for efficient feature extraction and representation, and present a spatially and temporally coherent filter that fuses the UAV motion information into the feature matching. The proposed filter can remove the majority of feature-correspondence outliers and increase the speed of robust feature matching by up to 20 times. To achieve a balance between robustness and efficiency, a dynamic key-frame-based stitching framework is used to reduce accumulation errors. Extensive experiments on challenging UAV datasets demonstrate that our approach can break through the speed limitation and generate accurate stitched images for aerial video stitching tasks.
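The idea of fusing motion coherence into feature matching can be illustrated with a simplified filter that rejects correspondences deviating from the dominant frame-to-frame displacement. This is a stand-in for the paper's spatial and temporal coherent filter, with an assumed tolerance and the median used as a proxy for the predicted UAV motion:

```python
from statistics import median

def coherence_filter(matches, tol=3.0):
    """Keep feature correspondences whose displacement is close to the
    median displacement between two frames; reject the rest as outliers.
    `matches` is a list of ((x1, y1), (x2, y2)) point pairs.  The
    tolerance is an assumed illustration value."""
    dxs = [bx - ax for (ax, ay), (bx, by) in matches]
    dys = [by - ay for (ax, ay), (bx, by) in matches]
    mdx, mdy = median(dxs), median(dys)
    return [m for m, dx, dy in zip(matches, dxs, dys)
            if abs(dx - mdx) <= tol and abs(dy - mdy) <= tol]

matches = [((0, 0), (10, 1)), ((5, 5), (15, 6)),
           ((2, 8), (12, 9)), ((7, 3), (40, 30))]  # last pair is an outlier
print(len(coherence_filter(matches)))  # → 3
```

Discarding outliers before the robust estimation step (e.g., RANSAC on the remaining pairs) is what yields the large matching speed-up the abstract reports.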

  9. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    OpenAIRE

    Dat Tien Nguyen; Ki Wan Kim; Hyung Gil Hong; Ja Hyung Koo; Min Cheol Kim; Kang Ryoung Park

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has ...

  10. Edge Polynomial Fractal Compression Algorithm for High Quality Video Transmission. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Freddie

    1999-06-01

    In this final report, Physical Optics Corporation (POC) provides a review of its Edge Polynomial Autonomous Compression (EPAC) technology. This project was undertaken to meet the need for low bandwidth transmission of full-motion video images. In addition, this report offers a synopsis of the logical data representation study that was performed to compress still images and video. The mapping singularities and polynomial representation of 3-D surfaces were found to be ideal for very high image compression. Our efforts were then directed to extending the EPAC algorithm for the motion of singularities by tracking the 3-D coordinates of characteristic points and the development of system components. Finally, we describe the integration of the software with the hardware components. This process consists of acquiring and processing each separate camera view, combining the information from different cameras to calculate the location of an object in three dimensions, and tracking the information history and the behavior of the objects.

  11. Improved signal to noise ratio and sensitivity of an infrared imaging video bolometer on large helical device by using an infrared periscope

    Energy Technology Data Exchange (ETDEWEB)

    Pandya, Shwetang N., E-mail: pandya.shwetang@LHD.nifs.ac.jp; Sano, Ryuichi [High Temperature Plasma Physics Research Division, The Graduate University of Advanced Studies, 322-6 Oroshi-cho, Toki 509-5292 (Japan); Peterson, Byron J.; Mukai, Kiyofumi [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki 509-5292 (Japan); Enokuchi, Akito; Takeyama, Norihide [GENESIA Corporation, 3-38-4-601 Shimo-Renjaku, Mitaka, Tokyo 181-0013 (Japan)

    2014-07-15

An Infrared imaging Video Bolometer (IRVB) diagnostic is currently being used in the Large Helical Device (LHD) for studying the localization of radiation structures near the magnetic island and helical divertor X-points during plasma detachment, and for 3D tomography. This research demands a high signal-to-noise ratio (SNR) and high sensitivity, to improve the temporal resolution for studying the evolution of radiation structures during plasma detachment, and a wide IRVB field of view (FoV) for tomography. Introduction of an infrared periscope allows a higher SNR and higher sensitivity to be achieved, which in turn permits a twofold improvement in the temporal resolution of the diagnostic. Higher SNR and a wide FoV are achieved simultaneously by reducing the separation of the IRVB detector (metal foil) from the bolometer's aperture and the LHD plasma. Altering the distances to meet these requirements results in an increased separation between the foil and the IR camera, which by itself degrades the sensitivity of the diagnostic 1.5-fold. Using an infrared periscope to image the IRVB foil results in a 7.5-fold increase in the number of IR camera pixels imaging the foil; this improves the IRVB sensitivity, which depends on the square root of the number of IR camera pixels averaged per bolometer channel. Despite the slower f-number (f/# = 1.35) and reduced transmission (τ₀ = 89%, due to an increased number of lens elements) of the periscope, the diagnostic with an infrared periscope operational on LHD has improved in sensitivity and SNR by factors of 1.4 and 4.5, respectively, compared with the original diagnostic without a periscope (i.e., the IRVB foil imaged directly by the IR camera through conventional optics). The bolometer's field of view has also doubled. The paper discusses these improvements in detail.

  12. Design of a Lossless Image Compression System for Video Capsule Endoscopy and Its Performance in In-Vivo Trials

    Directory of Open Access Journals (Sweden)

    Tareq H. Khan

    2014-11-01

In this paper, a new low-complexity, lossless image compression system for capsule endoscopy (CE) is presented. The compressor consists of a low-cost YEF color-space converter and a variable-length predictive coder combining Golomb-Rice and unary encoding. All components have been heavily optimized for low power and low cost, and the compression is lossless in nature; as a result, the system does not incur any loss of image information. Unlike transform-based algorithms, the compressor can be interfaced with commercial image sensors that send pixel data in raster-scan fashion, which eliminates the need for a large buffer memory. The compression algorithm is capable of working with white-light imaging (WLI) and narrow-band imaging (NBI), with average compression ratios of 78% and 84%, respectively. Finally, a complete capsule endoscopy system is developed on a single, low-power, 65-nm field-programmable gate array (FPGA) chip. The prototype is developed using circular PCBs with a diameter of 16 mm. Several in-vivo and ex-vivo trials using pig intestine have been conducted with the prototype to validate the performance of the proposed lossless compression algorithm. The results show that, compared with all other existing works, the proposed algorithm offers a solution to wireless capsule endoscopy with lossless and yet acceptable levels of compression.
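The Golomb-Rice component of such a coder has a textbook form: a non-negative residual is split into a quotient sent in unary and a k-bit remainder. The sketch below is that textbook encoder, not the paper's hardware design; the parameter k would in practice be tuned to the residual statistics.

```python
def golomb_rice_encode(value, k):
    """Golomb-Rice code for a non-negative integer with parameter 2**k:
    the quotient (value >> k) in unary ('1'*q followed by '0'),
    then the remainder in k plain bits.  Signed prediction residuals
    would first be mapped to non-negative integers (e.g., zigzag)."""
    q = value >> k                       # quotient, sent in unary
    r = value & ((1 << k) - 1)           # remainder, sent in k bits
    remainder_bits = format(r, "b").zfill(k) if k > 0 else ""
    return "1" * q + "0" + remainder_bits

print(golomb_rice_encode(9, 2))  # → '11001' (quotient 2, remainder 01)
print(golomb_rice_encode(0, 2))  # → '000'
```

Small residuals, which dominate after good prediction, get short codes; with k = 0 the scheme degenerates to pure unary coding, matching the "combination of Golomb-Rice and unary encoding" in the abstract.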

  13. An Evaluation of Video-to-Video Face Verification

    NARCIS (Netherlands)

    Poh, N.; Chan, C.H.; Kittler, J.; Marcel, S.; Mc Cool, C.; Argones Rúa, E.; Alba Castro, J.L.; Villegas, M.; Paredes, R.; Štruc, V.; Pavešić, N.; Salah, A.A.; Fang, H.; Costen, N.

    2010-01-01

    Person recognition using facial features, e.g., mug-shot images, has long been used in identity documents. However, due to the widespread use of web-cams and mobile devices embedded with a camera, it is now possible to realize facial video recognition, rather than resorting to just still images. In

  14. Akademisk video

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2017-01-01

This chapter focuses on methodological issues that arise when using (digital) video for research communication, not least online. Video has long been used in research for data collection and for communicating research. With digitization and the internet, however, new opportunities and challenges have arisen for conveying and distributing research results to different target groups via video. At the same time, classic methodological problems, such as the researcher's positioning in relation to what is being studied, remain current. Both classic and new issues are discussed in the chapter, which frames the discussion around different possible positionings: communicator, storyteller, or dialogist. These positions relate to genres within 'academic video'. Finally, a methodological toolbox is presented, with tools for planning...

  15. Intelligent network video understanding modern video surveillance systems

    CERN Document Server

    Nilsson, Fredrik

    2008-01-01

    Offering ready access to the security industry's cutting-edge digital future, Intelligent Network Video provides the first complete reference for all those involved with developing, implementing, and maintaining the latest surveillance systems. Pioneering expert Fredrik Nilsson explains how IP-based video surveillance systems provide better image quality, and a more scalable and flexible system at lower cost. A complete and practical reference for all those in the field, this volume:Describes all components relevant to modern IP video surveillance systemsProvides in-depth information about ima

  16. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...... World Videos. The workshops were run on December 4, 2016, in Cancun in Mexico. The two workshops together received 13 papers. Each paper was then reviewed by at least two expert reviewers in the field. In all, 11 papers were accepted to be presented at the workshops. The topics covered in the papers...

  17. Satellite Video Stabilization with Geometric Distortion

    OpenAIRE

    Wang, Xia; Zhang, Guo; Shen, Xin; Li, Beibei; Jiang, Yonghua

    2016-01-01

Each frame of a satellite video has a different exterior orientation, so corresponding points occupy different image locations in adjacent frames, which introduces geometric distortion. The projective model, the affine model, and other classical image-stabilization registration models therefore cannot accurately describe the relationship between adjacent frames. This paper proposes a new satellite video image stabilization method that accounts for geometric distortion to solve this problem, based on the simulate...

  18. Point Cloud Generation from Aerial Image Data Acquired by a Quadrocopter Type Micro Unmanned Aerial Vehicle and a Digital Still Camera

    Directory of Open Access Journals (Sweden)

    Tomi Rosnell

    2012-01-01

The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter-type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one method is based on BAE Systems' SOCET SET classical commercial photogrammetric software and the other is built using Microsoft®'s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but some artifacts were also detected. The point clouds from the Photosynth processing were sparser and noisier, largely because the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and show that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for the properties of the imaging sensor, and for the collection and processing of UAV image data, to ensure accurate point cloud generation.

  19. Images from Bits: Non-Iterative Image Reconstruction for Quanta Image Sensors

    Directory of Open Access Journals (Sweden)

    Stanley H. Chan

    2016-11-01

A quanta image sensor (QIS) is a class of single-photon imaging devices that measure light intensity using oversampled binary observations. Because of the stochastic nature of photon arrivals, the data acquired by a QIS is a massive stream of random binary bits, and the goal of image reconstruction is to recover the underlying image from these bits. In this paper, we present a non-iterative image reconstruction algorithm for QIS. Unlike existing reconstruction methods that formulate the problem from an optimization perspective, the new algorithm directly recovers the images through a pair of nonlinear transformations and an off-the-shelf image denoising algorithm. By skipping the usual optimization procedure, we achieve orders-of-magnitude improvement in speed and even better image reconstruction quality. We validate the new algorithm on synthetic datasets, as well as real videos collected by one-bit single-photon avalanche diode (SPAD) cameras.
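The statistical core of one-bit QIS reconstruction is inverting the binary measurement model: with Poisson photon arrivals and a one-photon threshold, P(bit = 1) = 1 − exp(−λ), so the maximum-likelihood intensity from T oversampled bits containing K ones is λ̂ = −ln(1 − K/T). The per-pixel sketch below shows only this standard inversion; the paper's full pipeline adds nonlinear transforms and denoising on top of it.

```python
from math import log

def qis_mle_intensity(bits):
    """Maximum-likelihood light intensity (photons per binary frame)
    for a one-bit QIS pixel with unit threshold: lambda_hat = -ln(1 - K/T),
    where K ones are observed in T oversampled binary measurements."""
    T, K = len(bits), sum(bits)
    if K == T:
        raise ValueError("all bits set: intensity estimate diverges")
    return -log(1.0 - K / T)

bits = [1, 0, 1, 1, 0, 0, 1, 0]  # K = 4 ones out of T = 8 binary frames
print(round(qis_mle_intensity(bits), 3))  # → 0.693, i.e. -ln(0.5)
```

Because the estimate diverges when every bit fires, practical pipelines keep the mean photon count per binary frame well below saturation, which is why QIS oversamples so heavily.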

  20. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  1. Binocular video ophthalmoscope for simultaneous recording of sequences of the human retina to compare dynamic parameters

    Science.gov (United States)

    Tornow, Ralf P.; Milczarek, Aleksandra; Odstrcilik, Jan; Kolar, Radim

    2017-07-01

    A parallel video ophthalmoscope was developed to acquire short video sequences (25 fps, 250 frames) of both eyes simultaneously with exact synchronization. Video sequences were registered off-line to compensate for eye movements. From registered video sequences dynamic parameters like cardiac cycle induced reflection changes and eye movements can be calculated and compared between eyes.
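The off-line registration step (compensating for eye movements between frames) is commonly done by estimating the translation between each frame and a reference via phase correlation; the sketch below is a generic illustration of that standard technique, not the authors' implementation.

```python
import numpy as np

def register_shift(ref, frame):
    """Integer-pixel translation to apply to `frame` to realign it with `ref`,
    estimated by phase correlation (peak of the normalized cross-power spectrum)."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    r = np.fft.ifft2(F / np.maximum(np.abs(F), 1e-12)).real
    dy, dx = np.unravel_index(r.argmax(), r.shape)
    h, w = ref.shape
    # wrap large positive indices around to signed shifts
    return (dy - h if dy > h // 2 else dy), (dx - w if dx > w // 2 else dx)
```

Once the per-frame shifts are known, the frames can be re-aligned and pixel-wise reflection time series extracted for cardiac-cycle analysis.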

  2. Algorithm combination of deblurring and denoising on video frames using the method search of local features on image

    Directory of Open Access Journals (Sweden)

    Semenishchev Evgeny

    2017-01-01

    Full Text Available In this paper, we propose an approach that reduces errors in the form of noise and blurring. To improve the processing speed and to allow parallelization of the process, we use an approach based on the search for local features in the image.

  3. Short Project-Based Learning with MATLAB Applications to Support the Learning of Video-Image Processing

    Science.gov (United States)

    Gil, Pablo

    2017-01-01

    University courses concerning Computer Vision and Image Processing are generally taught using a traditional methodology that is focused on the teacher rather than on the students. This approach is consequently not effective when teachers seek to attain cognitive objectives involving their students' critical thinking. This manuscript covers the…

  4. Flexible Fiber-Optic High-Speed Imaging of Vocal Fold Vibration: A Preliminary Report.

    Science.gov (United States)

    Woo, Peak; Baxter, Peter

    2017-03-01

    High-speed video (HSV) imaging of vocal fold vibration has been possible only through the rigid endoscope. This study reports that a fiberscope-based high-speed imaging system may allow HSV imaging of naturalistic voicing. Twenty-two subjects were recorded using a commercially available black-and-white high-speed camera (Photron Motion Tools, 256 × 120 pixels, 2000 frames per second, 8-second acquisition time). The camera gain was set to +6 dB. The camera was coupled to a standard fiber-optic laryngoscope (Olympus ENF P-4) with a 300-W xenon light. Image acquisition was done by asking the subject to perform repeated phonation at modal pitch. Video images were processed using commercial video editing and video noise reduction software (After Effects, Magix, and Neat Video 4.1). After video processing, the video images were analyzed using digital kymography (DKG). The black-and-white HSV acquired by the camera is gray and lacks contrast. By adjusting image contrast, brightness, and gamma and using noise reduction software, the flexible laryngoscopy image can be converted to video image files suitable for DKG and waveform analysis. The increased noise still makes edge tracking for objective analysis difficult, but subjective analysis of the DKG plot is possible. This is the first report of HSV acquisition in an unsedated patient using a fiberscope. Image enhancement and noise reduction can enhance the HSV to allow extraction of the digital kymogram. Further image enhancement may allow for objective analysis of the vibratory waveform. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  5. Lossless Compression of Broadcast Video

    DEFF Research Database (Denmark)

    Martins, Bo; Eriksen, N.; Faber, E.

    1998-01-01

    We investigate several techniques for lossless and near-lossless compression of broadcast video. The emphasis is placed on the emerging international standard for compression of continuous-tone still images, JPEG-LS, due to its excellent compression performance and moderate complexity. Except for one... artificial sequence containing uncompressible data, all the 4:2:2, 8-bit test video material easily compresses losslessly to a rate below 125 Mbit/s. At this rate, video plus overhead can be contained in a single telecom 4th-order PDH channel or a single STM-1 channel. Difficult 4:2:2, 10-bit test material...

  6. Estimation of Web video multiplicity

    Science.gov (United States)

    Cheung, SenChing S.; Zakhor, Avideh

    1999-12-01

    With the ever-growing popularity of video web-publishing, many popular contents are being mirrored, reformatted, modified and republished, resulting in excessive content duplication. While such redundancy provides fault tolerance for continuous availability of information, it could potentially create problems for multimedia search engines in that the search results for a given query might become repetitious, and cluttered with a large number of duplicates. As such, developing techniques for detecting similarity and duplication is important to multimedia search engines. In addition, content providers might be interested in identifying duplicates of their content for legal, contractual or other business-related reasons. In this paper, we propose an efficient algorithm called video signature to detect similar video sequences for large databases such as the web. The idea is to first form a 'signature' for each video sequence by selecting a small number of its frames that are most similar to a number of randomly chosen seed images. Then the similarity between any two video sequences can be reliably estimated by comparing their respective signatures. Using this method, we achieve 85 percent recall and precision ratios on a test database of 377 video sequences. As a proof of concept, we have applied our proposed algorithm to a collection of 1800 hours of video corresponding to around 45000 clips from the web. Our results indicate that, on average, every video in our collection from the web has around five similar copies.
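The signature idea lends itself to a compact sketch: pick m random seed images, keep for each seed the most similar frame of the video, and compare two videos by the distance between their signatures. This is a schematic reconstruction from the abstract, using raw pixel vectors and Euclidean distance as stand-ins for whatever frame features and similarity measure the authors actually used.

```python
import numpy as np

def signature(frames, seeds):
    """frames: (N, d) frame feature vectors; seeds: (m, d) random seed images.
    For each seed, keep the closest frame -> an (m, d) video signature."""
    dist = np.linalg.norm(frames[:, None, :] - seeds[None, :, :], axis=2)
    return frames[dist.argmin(axis=0)]

def video_distance(sig_a, sig_b):
    """Mean distance between corresponding signature frames (0 = identical)."""
    return np.linalg.norm(sig_a - sig_b, axis=1).mean()
```

Because only m signature frames per video are stored and compared, the pairwise comparison cost is independent of video length, which is what makes the scheme practical at web scale.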

  7. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real W...

  8. Single molecule dynamics in a virtual cell: a three-dimensional model that produces simulated fluorescence video-imaging data.

    Science.gov (United States)

    Mashanov, Gregory I

    2014-09-06

    The analysis of single molecule imaging experiments is complicated by the stochastic nature of single molecule events, by instrument noise, and by the limited information that can be gathered about any individual molecule observed. Consequently, it is important to cross-check experimental results using a model simulating single molecule dynamics (e.g. movements and binding events) in a virtual cell-like environment. The output of such a model should match the real data format, allowing researchers to compare simulated results with real experiments. The proposed model exploits the advantages of 'object-oriented' computing, first among them the ability to create and manipulate a number of classes, each containing an arbitrary number of single molecule objects. These classes may include objects moving within the 'cytoplasm'; objects moving at the 'plasma membrane'; and static objects located inside the 'body'. The objects of a given class can interact with each other and/or with the objects of other classes according to their physical and chemical properties. Each model run generates a sequence of images, each containing summed images of all fluorescent objects emitting light under given illumination conditions, with realistic levels of noise and emission fluctuations. The model accurately reproduces reported single molecule experiments and predicts the outcome of future experiments.
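A minimal version of such a simulator, with diffusing point emitters rendered through a Gaussian point-spread function and Poisson shot noise, can be sketched as follows; all parameter names and values are illustrative assumptions, not the published model.

```python
import numpy as np

def simulate_frames(n_mol=20, n_frames=50, size=64, d_px=1.0, psf_sigma=1.5,
                    photons=200, bg=5, seed=0):
    """Render 2-D Brownian walkers as a noisy fluorescence movie.

    Each frame: molecules take a Gaussian random step (wrapping at the edges),
    are rendered with a Gaussian PSF, and the image is corrupted by Poisson
    shot noise on top of a constant background. Parameters are illustrative.
    """
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, size, (n_mol, 2))
    yy, xx = np.mgrid[0:size, 0:size]
    frames = np.empty((n_frames, size, size))
    for t in range(n_frames):
        pos = (pos + rng.normal(0, d_px, pos.shape)) % size   # diffusion step
        img = np.full((size, size), float(bg))
        for y, x in pos:
            img += photons * np.exp(-((yy - y) ** 2 + (xx - x) ** 2)
                                    / (2 * psf_sigma ** 2)) / (2 * np.pi * psf_sigma ** 2)
        frames[t] = rng.poisson(img)                           # shot noise
    return frames
```

Feeding such synthetic movies through the same tracking software used on real data is what lets simulated and experimental results be compared on equal footing.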

  9. Modeling camera orientation and 3D structure from a sequence of images taken by a perambulating commercial video camera

    Science.gov (United States)

    M-Rouhani, Behrouz; Anderson, James A. D. W.

    1997-04-01

    In this paper we report the degree of reliability of image sequences taken by off-the-shelf TV cameras for modeling camera rotation and reconstructing 3D structure using computer vision techniques. This is done in spite of the fact that computer vision systems usually use imaging devices that are specifically designed for human vision. Our scenario consists of a static scene and a mobile camera moving through the scene. The scene is any long axial building dominated by features along the three principal orientations and with at least one wall containing prominent repetitive planar features such as doors, windows, bricks, etc. The camera is an ordinary commercial camcorder moving along the axial axis of the scene and is allowed to rotate freely within the range +/- 10 degrees in all directions. This makes it possible for the camera to be held by a walking unprofessional cameraman with normal gait, or to be mounted on a mobile robot. The system has been tested successfully on sequences of images of a variety of structured, but fairly cluttered, scenes taken by different walking cameramen. The potential application areas of the system include medicine, robotics and photogrammetry.

  10. Motion tracking and electromyography assist the removal of mirror hand contributions to fNIRS images acquired during a finger tapping task performed by children with cerebral palsy

    Science.gov (United States)

    Hervey, Nathan; Khan, Bilal; Shagman, Laura; Tian, Fenghua; Delgado, Mauricio R.; Tulchin-Francis, Kirsten; Shierk, Angela; Smith, Linsley; Reid, Dahlia; Clegg, Nancy J.; Liu, Hanli; MacFarlane, Duncan; Alexandrakis, George

    2013-03-01

    Functional neurological imaging has been shown to be valuable in evaluating brain plasticity in children with cerebral palsy (CP). In recent studies it has been demonstrated that functional near-infrared spectroscopy (fNIRS) is a viable and sensitive method for imaging motor cortex activities in children with CP. However, during unilateral finger tapping tasks children with CP often exhibit mirror motions (unintended motions in the non-tapping hand), and current fNIRS image formation techniques do not account for this. Therefore, the resulting fNIRS images contain activation from intended and unintended motions. In this study, cortical activity was mapped with fNIRS on four children with CP and five controls during a finger tapping task. Finger motion and arm muscle activation were concurrently measured using motion tracking cameras and electromyography (EMG). Subject-specific regressors were created from motion capture and EMG data and used in a general linear model (GLM) analysis in an attempt to create fNIRS images representative of different motions. The analysis provided an fNIRS image representing activation due to motion and muscle activity for each hand. This method could prove to be valuable in monitoring brain plasticity in children with CP by providing more consistent images between measurements. Additionally, muscle effort versus cortical effort was compared between control and CP subjects. More cortical effort was required to produce similar muscle effort in children with CP. It is possible this metric could be a valuable diagnostic tool in determining response to treatment.

  11. Port Video and Logo

    OpenAIRE

    Whitehead, Stuart; Rush, Joshua

    2013-01-01

    Logo PDF files should be accessible by any PDF reader such as Adobe Reader. SVG files of the logo are vector graphics accessible by programs such as Inkscape or Adobe Illustrator. PNG files are image files of the logo that should be able to be opened by any operating system's default image viewer. The final report is submitted in both .doc (Microsoft Word) and .pdf formats. The video is submitted in .avi format and can be viewed with Windows Media Player or VLC. Audio .wav files are also ...

  12. Astronomy Video Contest

    Science.gov (United States)

    McFarland, John

    2008-05-01

    During Galileo's lifetime his staunchest supporter was Johannes Kepler, Imperial Mathematician to the Holy Roman Emperor. Johannes Kepler will be in St. Louis to personally offer a tribute to Galileo. Set Galileo's astronomy discoveries to music and you get the newest song by the well-known a cappella group, THE CHROMATICS. The song, entitled "Shoulders of Giants", was written specifically for IYA-2009 and will be debuted at this conference. The song will also be used as a base to create a music video by synchronizing a person's own images to the song's lyrics and tempo. Thousands of people already do this for fun and post their videos on YouTube and other sites. The ASTRONOMY VIDEO CONTEST will be launched as a vehicle to excite, enthuse and educate people about astronomy and science. It will be an annual event administered by the Johannes Kepler Project and will continue to foster the goals of IYA-2009 for years to come. During this presentation the basic categories, rules, and prizes for the Astronomy Video Contest will be covered, and finally the new song "Shoulders of Giants" by THE CHROMATICS will be unveiled.

  13. Digital systems to acquire radiological imaging. Characteristics and quality control; Sistemas digitales de adquisicion de imagenes radiograficas. Caracteristicas y Control de Calidad

    Energy Technology Data Exchange (ETDEWEB)

    Torres Cabrera, R.; Hernando Gonzalez, I.

    2006-07-01

    Due to its special characteristics, quality control in digital radiographic systems is very important, even more so than in conventional film-screen systems. Differences between digital and analog images, in terms of dynamic range, spatial and contrast resolution, and the flexibility of data post-processing, require some actions to maintain clinical images at an optimum quality level. Revision 1 of the Spanish Protocol of Quality Control in Diagnostic Radiology includes a chapter dedicated to the quality control of these digital systems for the acquisition of radiographic images. In this paper the different parameters for quality control procedures are described. Some difficulties to be aware of (absence of tolerance levels, access to the raw-data images and related information, availability of anthropomorphic phantoms, etc.) are also noted, as well as the most significant aspects of the differences in relation to analog systems. (Author) 15 refs.

  14. Achievable Resolution from Images of Biological Specimens Acquired from a 4k × 4k CCD Camera in a 300-kV Electron Cryomicroscope

    Science.gov (United States)

    Chen, Dong-Hua; Jakana, Joanita; Liu, Xiangan; Schmid, Michael F.; Chiu, Wah

    2008-01-01

    Bacteriorhodopsin and ε15 bacteriophage were used as biological test specimens to evaluate the potential structural resolution with images captured from a 4k × 4k charge-coupled device (CCD) camera in a 300-kV electron cryomicroscope. The phase residuals computed from the bacteriorhodopsin CCD images taken at 84,000× effective magnification averaged 15.7° out to 5.8-Å resolution relative to Henderson’s published values. Using a single-particle reconstruction technique, we obtained an 8.2-Å icosahedral structure of ε15 bacteriophage with the CCD images collected at an effective magnification of 56,000×. These results demonstrate that it is feasible to retrieve biological structures to a resolution close to 2/3 of the Nyquist frequency from the CCD images recorded in a 300-kV electron cryomicroscope at a moderately high but practically acceptable microscope magnification. PMID:18514542

  16. 4K Video-Laryngoscopy and Video-Stroboscopy: Preliminary Findings.

    Science.gov (United States)

    Woo, Peak

    2016-01-01

    4K video is a new format. At 3840 × 2160 resolution, it has 4 times the resolution of standard 1080 high-definition (HD) video. Magnification can be done without loss of resolution. This study uses 4K video for video-stroboscopy. Forty-six patients were examined by conventional video-stroboscopy (digital 3-chip CCD) and compared with 4K video-stroboscopy. The video was recorded on a Blackmagic 4K cinema production camera in CinemaDNG RAW format. The video was played back on a 4K monitor and compared to standard video. Pathological conditions included: polyps, scar, cysts, cancer, sulcus, and nodules. Successful 4K video recordings were achieved in all subjects using a 70° rigid endoscope. The camera system is bulky. The examination is performed similarly to standard video-stroboscopy. Playback requires a 4K monitor. As expected, the images were far clearer in detail than standard video. Stroboscopy video using the 4K camera was consistently able to show more detail. Two patients had their diagnosis changed after 4K viewing. 4K video is an exciting new technology that can be applied to laryngoscopy. It allows for cinematic 4K-quality recordings. Both continuous and stroboscopic light can be used for visualization. Its clinical use is feasible, but its usefulness must be proven. © The Author(s) 2015.

  17. Utilising E-on Vue and Unity 3D scenes to generate synthetic images and videos for visible signature analysis

    Science.gov (United States)

    Madden, Christopher S.; Richards, Noel J.; Culpepper, Joanne B.

    2016-10-01

    This paper investigates the ability to develop synthetic scenes in an image generation tool, E-on Vue, and a gaming engine, Unity 3D, which can be used to generate synthetic imagery of target objects across a variety of conditions in land environments. Developments within these tools and gaming engines have allowed the computer gaming industry to dramatically enhance the realism of the games they develop; however, they utilise shortcuts to ensure that the games run smoothly in real time to create an immersive effect. Whilst these shortcuts may have an impact upon the realism of the synthetic imagery, they do promise a much more time-efficient method of developing imagery of different environmental conditions and of investigating the dynamic aspect of military operations that is currently not evaluated in signature analysis. The results presented investigate how some of the common image metrics used in target acquisition modelling, namely the Δμ1, Δμ2, Δμ3, RSS, and Doyle metrics, perform on the synthetic scenes generated by E-on Vue and Unity 3D compared to real imagery of similar scenes. An exploration of the time required to develop the various aspects of the scene to enhance its realism is included, along with an overview of the difficulties associated with trying to recreate specific locations as a virtual scene. This work is an important start towards utilising virtual worlds for visible signature evaluation, and towards evaluating how equivalent synthetic imagery is to real photographs.

  18. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...... include: re-identification, consumer behavior analysis, utilizing pupillary response for task difficulty measurement, logo detection, saliency prediction, classification of facial expressions, face recognition, face verification, age estimation, super-resolution, pose estimation, and pain recognition...

  20. Photoplethysmography Signal Analysis for Optimal Region-of-Interest Determination in Video Imaging on a Built-In Smartphone under Different Conditions

    Directory of Open Access Journals (Sweden)

    Yunyoung Nam

    2017-10-01

    Full Text Available Smartphones and tablets are widely used in medical fields, which can improve healthcare and reduce healthcare costs. Many medical applications for smartphones and tablets have already been developed and are widely used by both health professionals and patients. Specifically, video recordings of fingertips made using a smartphone camera contain a pulsatile component caused by the cardiac pulse, equivalent to that present in a photoplethysmographic (PPG) signal. By performing peak detection on the pulsatile signal, it is possible to estimate a continuous heart rate and a respiratory rate. To estimate the heart rate and respiratory rate accurately, it must be determined which pixel regions of the color bands give the best signal quality. In this paper, we investigate signal quality, judged by the largest amplitude values, for three different smartphones under different conditions. We conducted several experiments to obtain reliable PPG signals and compared the PPG signal strength in the three color bands when the flashlight was both on and off. We also evaluated the intensity changes of PPG signals obtained from the smartphones with motion artifacts and fingertip pressure force. Furthermore, we have compared the PSNR of PPG signals of the full-size images with that of the regions of interest (ROIs).
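The basic pipeline behind such fingertip PPG measurements (spatially average a color channel per frame, detrend, detect cardiac peaks) can be sketched as follows. The green channel and the crude zero-crossing peak detector here are illustrative choices for the sketch, not the paper's protocol, which compares all three color bands and ROI sizes.

```python
import numpy as np

def heart_rate_bpm(frames, fps):
    """Estimate heart rate from a fingertip video.

    frames: (T, H, W, 3) RGB frames. The per-frame spatial mean of the green
    channel is taken as the PPG signal; local maxima above the mean are
    counted as cardiac peaks (a deliberately simple, noise-free detector).
    """
    sig = frames[..., 1].mean(axis=(1, 2))   # one sample per frame
    sig = sig - sig.mean()                   # remove the DC component
    peaks = (sig[1:-1] > sig[:-2]) & (sig[1:-1] > sig[2:]) & (sig[1:-1] > 0)
    return peaks.sum() * 60.0 * fps / len(sig)
```

Real recordings would need band-pass filtering and motion-artifact rejection before peak detection, which is exactly the signal-quality question the paper investigates.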

  1. Robotic video photogrammetry system

    Science.gov (United States)

    Gustafson, Peter C.

    1997-07-01

    For many years, photogrammetry has been in use at TRW. During that time, needs have arisen for highly repetitive measurements. In an effort to satisfy these needs in a timely manner, a specialized Robotic Video Photogrammetry System (RVPS) was developed by TRW in conjunction with outside vendors. The primary application for the RVPS has strict accuracy requirements that demand significantly more images than the previously used film-based system. The time involved in taking these images was prohibitive, but by automating the data acquisition process, video techniques became a practical alternative to the more traditional film-based approach. In fact, by applying video techniques, measurement productivity was enhanced significantly. Analysis was also brought `on-board' to the RVPS, allowing shop-floor acquisition and delivery of results. The RVPS has also been applied in other tasks and was found to make a critical improvement in productivity, allowing many more tests to be run in a shorter time cycle. This paper will discuss the creation of the system and TRW's experiences with the RVPS. Highlighted will be the lessons learned during these efforts and significant attributes of the process not common to the standard application of photogrammetry for industrial measurement. As productivity and ease of use continue to drive the application of photogrammetry in today's manufacturing climate, TRW expects several systems, with technological improvements applied, to be in use in the near future.

  2. DSPACE hardware architecture for on-board real-time image/video processing in European space missions

    Science.gov (United States)

    Saponara, Sergio; Donati, Massimiliano; Fanucci, Luca; Odendahl, Maximilian; Leupers, Reiner; Errico, Walter

    2013-02-01

    On-board data processing is a vital task for any satellite or spacecraft, because the sensing data must be processed before being sent to Earth in order to exploit the bandwidth to the ground station effectively. In recent years the amount of sensing data collected by scientific and commercial space missions has increased significantly, while the available downlink bandwidth has remained comparatively stable. The increasing demand for on-board real-time processing capabilities represents one of the critical issues in forthcoming European missions. Ever faster signal and image processing algorithms are required to accomplish planetary observation, surveillance, Synthetic Aperture Radar imaging, and telecommunications. The only available space-qualified Digital Signal Processor (DSP) free of International Traffic in Arms Regulations (ITAR) restrictions offers inadequate performance, so the need for a next-generation European DSP is well known to the space community. The DSPACE space-qualified DSP architecture fills the gap between the computational requirements and the available devices. It leverages a pipelined and massively parallel core based on the Very Long Instruction Word (VLIW) paradigm, with 64 registers and 8 operational units, along with cache memories, memory controllers and SpaceWire interfaces. Both the synthesizable VHDL and the software development tools are generated from the LISA high-level model. A Xilinx XC7K325T FPGA was chosen to realize a CompactPCI demonstrator board. Finally, first synthesis results on CMOS standard cell technology (ASIC, 180 nm) show an area of around 380 kgates and a peak performance of 1000 MIPS and 750 MFLOPS at 125 MHz.

  3. Characterizing popularity dynamics of online videos

    OpenAIRE

    Ren, Zhuo-Ming; Shi; Liao, Hao

    2016-01-01

    Online popularity has a major impact on videos, music, news and other content in online systems. Characterizing online popularity dynamics is a natural way to explain the observed properties in terms of the already acquired popularity of each individual item. In this paper, we provide a quantitative, large-scale, temporal analysis of the popularity dynamics in two online video-provision websites, namely MovieLens and Netflix. The two collected data sets contain over 100 million records and even span...

  4. Interactive Video, The Next Step

    Science.gov (United States)

    Strong, L. R.; Wold-Brennon, R.; Cooper, S. K.; Brinkhuis, D.

    2012-12-01

    Video has the ingredients to reach us emotionally -- with amazing images, enthusiastic interviews, music, and video game-like animations -- and it's emotion that motivates us to learn more about our new interest. However, watching video is usually passive. New web-based technology is expanding and enhancing the video experience, creating opportunities to use video with more direct interaction. This talk will look at an Education and Outreach team's experience producing video-centric curriculum using innovative interactive media tools from TED-Ed and FlixMaster. The Consortium for Ocean Leadership's Deep Earth Academy has partnered with the Center for Dark Energy Biosphere Investigations (C-DEBI) to send educators and a video producer aboard three deep sea research expeditions to the Juan de Fuca plate to install and service sub-seafloor observatories. This collaboration between teachers, students, scientists and media producers has proved a productive confluence, providing new ways of understanding both ground-breaking science and the process of science itself -- by experimenting with new ways to use multimedia during ocean-going expeditions and developing curriculum and other projects post-cruise.

  5. Acquired Functional Asplenia in Sarcoidosis

    Science.gov (United States)

    Stone, Richard W.; McDaniel, Willie R.; Armstrong, Earl M.; Young, Roscoe C.; Higginbotham-Ford, Edith A.

    1985-01-01

    Sarcoidosis is a recently identified cause of functional asplenia that can be diagnosed by radionuclide imaging. A 31-year-old woman with a five-year history of histologically compatible sarcoidosis was found to have nonvisualization of the spleen on technetium 99m sulfur colloid (radiopharmaceutical) liver-spleen scan. This scintigraphic finding was accompanied by poikilocytosis and Howell-Jolly bodies in the peripheral blood smear. A subsequent gallium 67 citrate scan reflected an abnormal increase in concentration of activity in the spleen, suggesting an active inflammatory process. Based upon this constellation of findings, it was concluded that acquired functional asplenia is the result of reticuloendothelial cell replacement via infiltration of the spleen by epithelioid cell granulomas of active sarcoidosis. This case also illustrates the reversibility of functional asplenia of sarcoidosis following adrenocorticosteroid therapy. Functional asplenia in sarcoidosis is now found to have a recognizable radionuclide imaging pattern. PMID:3908697

  6. Quantification and recognition of parkinsonian gait from monocular video imaging using kernel-based principal component analysis

    Science.gov (United States)

    2011-01-01

    Background The computer-aided identification of specific gait patterns is an important issue in the assessment of Parkinson's disease (PD). In this study, a computer vision-based gait analysis approach is developed to assist the clinical assessments of PD with kernel-based principal component analysis (KPCA). Method Twelve PD patients and twelve healthy adults with no neurological history or motor disorders within the past six months were recruited and separated according to their "Non-PD", "Drug-On", and "Drug-Off" states. The participants were asked to wear light-colored clothing and perform three walking trials through a corridor decorated with a navy curtain at their natural pace. The participants' gait performance during the steady-state walking period was captured by a digital camera for gait analysis. The collected walking image frames were then transformed into binary silhouettes for noise reduction and compression. Using the developed KPCA-based method, the features within the binary silhouettes can be extracted to quantitatively determine the gait cycle time, stride length, walking velocity, and cadence. Results and Discussion The KPCA-based method uses a feature-extraction approach, which was verified to be more effective than traditional image area and principal component analysis (PCA) approaches in classifying "Non-PD" controls and "Drug-Off/On" PD patients. Encouragingly, this method has a high accuracy rate, 80.51%, for recognizing different gaits. Quantitative gait parameters are obtained, and the power spectrums of the patients' gaits are analyzed. We show that the slow and irregular actions of PD patients during walking tend to transfer some of the power from the main lobe frequency to a lower frequency band. Our results indicate the feasibility of using gait performance to evaluate the motor function of patients with PD. Conclusion This KPCA-based method requires only a digital camera and a decorated corridor setup. The ease of use and
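The feature-extraction step (kernel PCA on flattened binary silhouettes) can be sketched generically as below; the RBF kernel, its bandwidth, and the component count are illustrative assumptions rather than the authors' exact configuration.

```python
import numpy as np

def kpca_features(X, n_components=3, gamma=1e-3):
    """Project samples onto the leading kernel-PCA components.

    X: (n, d) matrix of flattened binary silhouettes. An RBF kernel matrix is
    built, centered in feature space, eigendecomposed, and each sample is
    projected onto the top eigenvectors (scaled to unit feature-space norm).
    """
    sq = (X ** 2).sum(axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # RBF kernel
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                  # center in feature space
    w, V = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_components]        # largest eigenvalues first
    return Kc @ (V[:, idx] / np.sqrt(np.maximum(w[idx], 1e-12)))
```

The resulting low-dimensional features are what a downstream classifier would use to separate "Non-PD" controls from "Drug-Off/On" PD gait patterns.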

  7. What Makes a Word Easy to Acquire? The Effects of Word Class, Frequency, Imageability and Phonological Neighbourhood Density on Lexical Development

    Science.gov (United States)

    Hansen, Pernille

    2017-01-01

    This article analyses how a set of psycholinguistic factors may account for children's lexical development. Age of acquisition is compared to a measure of lexical development based on vocabulary size rather than age, and robust regression models are used to assess the individual and joint effects of word class, frequency, imageability and…

  8. Formulation and error analysis for a generalized image point correspondence algorithm

    Science.gov (United States)

    Shapiro, Linda (Editor); Rosenfeld, Azriel (Editor); Fotedar, Sunil; Defigueiredo, Rui J. P.; Krishen, Kumar

    1992-01-01

    A Generalized Image Point Correspondence (GIPC) algorithm, which enables the determination of 3-D motion parameters of an object in a configuration where both the object and the camera are moving, is discussed. A detailed error analysis of this algorithm has been carried out. Furthermore, the algorithm was tested on both simulated and video-acquired data, and its accuracy was determined.

  9. A Computational Framework for Vertical Video Editing

    OpenAIRE

    Gandhi, Vineet; Ronfard, Rémi

    2015-01-01

    Vertical video editing is the process of digitally editing the image within the frame as opposed to horizontal video editing, which arranges the shots along a timeline. Vertical editing can be a time-consuming and error-prone process when using manual key-framing and simple interpolation. In this paper, we present a general framework for automatically computing a variety of cinematically plausible shots from a single input video suitable to the special case of live per...

  10. Smoking in Video Games: A Systematic Review.

    OpenAIRE

    Forsyth, SR; Malone, RE

    2016-01-01

    INTRODUCTION: Video games are played by a majority of adolescents, yet little is known about whether and how video games are associated with smoking behavior and attitudes. This systematic review examines research on the relationship between video games and smoking. METHODS: We searched MEDLINE, psycINFO, and Web of Science through August 20, 2014. Twenty-four studies met inclusion criteria. Studies were synthesized qualitatively in four domains: the prevalence and incidence of smoking imager...

  11. Light-Emitting Diode-Assisted Narrow Band Imaging Video Endoscopy System in Head and Neck Cancer

    Science.gov (United States)

    Chang, Hsin-Jen; Wang, Wen-Hung; Chang, Yen-Liang; Jeng, Tzuan-Ren; Wu, Chun-Te; Angot, Ludovic; Lee, Chun-Hsing

    2015-01-01

    Background/Aims To validate the effectiveness of a newly developed light-emitting diode (LED)-narrow band imaging (NBI) system for detecting early malignant tumors in the oral cavity. Methods Six men (mean age, 51.5 years) with early oral mucosa lesions were screened using both the conventional white light and LED-NBI systems. Results Small elevated or ulcerative lesions were found under the white light view, and typical scattered brown spots were identified after shifting to the LED-NBI view for all six patients. Histopathological examination confirmed squamous cell carcinoma. The clinical stage was early malignant lesions (T1), and the patients underwent wide excision for primary cancer. This is the pilot study documenting the utility of a new LED-NBI system as an adjunctive technique to detect early oral cancer using the diagnostic criterion of the presence of typical scattered brown spots in six high-risk patients. Conclusions Although large-scale screening programs should be established to further verify the accuracy of this technology, its lower power consumption, lower heat emission, and higher luminous efficiency appear promising for future clinical applications. PMID:25844342

  12. Full-frame video stabilization with motion inpainting.

    Science.gov (United States)

    Matsushita, Yasuyuki; Ofek, Eyal; Ge, Weina; Tang, Xiaoou; Shum, Heung-Yeung

    2006-07-01

    Video stabilization is an important video enhancement technology which aims at removing annoying shaky motion from videos. We propose a practical and robust approach of video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up producing smaller-size stabilized videos, our completion method can produce full-frame videos by naturally filling in missing image parts by locally aligning image data of neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels of neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer which can naturally keep the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments over a wide variety of videos.
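The motion-smoothing step common to stabilizers of this kind can be sketched in a few lines; the paper's motion-inpainting and deblurring contributions are not reproduced here, and the centered moving-average filter below is a simplification standing in for the actual motion smoothing:

```python
def stabilization_corrections(frame_offsets, window=5):
    """Given per-frame camera offsets along one axis (e.g. horizontal
    translation in pixels), accumulate them into a camera path, smooth
    the path with a centered moving average, and return the per-frame
    correction a stabilizer would apply to cancel the shake."""
    path, acc = [], 0.0
    for d in frame_offsets:          # integrate offsets into a trajectory
        acc += d
        path.append(acc)
    half = window // 2
    corrections = []
    for i in range(len(path)):       # smoothed path minus actual path
        lo, hi = max(0, i - half), min(len(path), i + half + 1)
        smooth = sum(path[lo:hi]) / (hi - lo)
        corrections.append(smooth - path[i])
    return corrections
```

A perfectly steady pan (constant offsets) yields zero correction away from the sequence boundaries, since a linear path is its own moving average there; only deviations from the smooth trajectory are compensated.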

  13. A Method of Sharing Tacit Knowledge by a Bulletin Board Link to Video Scene and an Evaluation in the Field of Nursing Skill

    Science.gov (United States)

    Shimada, Satoshi; Azuma, Shouzou; Teranaka, Sayaka; Kojima, Akira; Majima, Yukie; Maekawa, Yasuko

    We developed a system, based on the SECI model of knowledge management, with which knowledge can be discovered and shared cooperatively within an organization. The system realizes three processes by the following methods. (1) A video that demonstrates a skill is segmented into a number of scenes according to its content, and tacit knowledge is shared per scene. (2) Tacit knowledge is extracted through a bulletin board linked to each scene. (3) Knowledge is acquired by repeatedly viewing a video scene together with the comments that describe the technique to be practiced. We conducted experiments in which the system was used by nurses working at general hospitals. The experimental results show that practical nursing know-how can be collected by using a bulletin board linked to video scenes. The results of this study confirmed the possibility of expressing the tacit knowledge behind nurses' empirical nursing skills sensitively, with the video images serving as cues.

  14. Characterizing popularity dynamics of online videos

    Science.gov (United States)

    Ren, Zhuo-Ming; Shi, Yu-Qiang; Liao, Hao

    2016-07-01

    Online popularity has a major impact on videos, music, news and other contexts in online systems. Characterizing online popularity dynamics is a natural way to explain the observed properties in terms of the already acquired popularity of each individual item. In this paper, we provide a quantitative, large-scale, temporal analysis of the popularity dynamics in two online video-providing websites, namely MovieLens and Netflix. The two collected data sets contain over 100 million records and span a decade. We show that the popularity dynamics of online videos evolve over time, and find that they can be characterized by burst behaviors, typically occurring in the early life span of a video, followed by the classic preferential popularity increase mechanism.
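The interplay of early bursts and preferential increase described above can be illustrated with a toy simulation; the mixing parameter, seed, and starting counts below are assumptions for illustration, not values from the study:

```python
import random

def simulate_popularity(n_videos, n_events, burst_bias=0.5, seed=0):
    """Toy popularity model: each new view goes to a uniformly random
    video with probability burst_bias (the 'burst' term), otherwise to
    video i with probability proportional to its current popularity
    (the preferential-increase term)."""
    random.seed(seed)
    pop = [1] * n_videos                 # every video starts with one view
    for _ in range(n_events):
        if random.random() < burst_bias:
            i = random.randrange(n_videos)       # burst: uniform choice
        else:
            r = random.uniform(0, sum(pop))      # preferential: roulette wheel
            acc = 0
            for i, p in enumerate(pop):
                acc += p
                if r <= acc:
                    break
        pop[i] += 1
    return pop
```

With a low burst_bias the rich-get-richer term dominates and popularity concentrates on a few items; with a high burst_bias views spread more evenly, mimicking the early life span of videos.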

  15. MPnRAGE: A technique to simultaneously acquire hundreds of differently contrasted MPRAGE images with applications to quantitative T1 mapping

    Science.gov (United States)

    Kecskemeti, Steven; Samsonov, Alexey; Hurley, Samuel A.; Dean, Douglas C.; Field, Aaron; Alexander, Andrew L.

    2015-01-01

    Purpose To introduce a new technique called MPnRAGE, which produces hundreds of images with different T1 contrasts and a B1-corrected T1 map. Theory and Methods An interleaved 3D radial k-space trajectory with a sliding window reconstruction is used in conjunction with magnetization preparation pulses. This work modifies the SNAPSHOT-FLASH T1 fitting equations for radial imaging with view-sharing and develops a new rapid B1 correction procedure. MPnRAGE is demonstrated in phantoms and volunteers, including 2 volunteers with 8 scans each and 8 volunteers with 2 scans each. T1 values from MPnRAGE were compared with those from fast spin echo inversion recovery (FSE-IR) in phantoms and a healthy human brain at 3T. Results The T1 fit for human white and gray matter was T1,MPnRAGE = 1.00 · T1,FSE-IR + 24 ms, r² = 0.990. The voxel-wise coefficient of variation in T1 measurements across 8 time points was between 0.02 and 0.08. ROI-based T1 values were reproducible to within 2% and agree well with literature values. Conclusions In the same amount of time as a traditional MPRAGE exam (7.5 minutes), MPnRAGE was shown to produce hundreds of images with alternate T1 contrasts as well as an accurate and reproducible T1 map that is robust to B1 errors. PMID:25885265
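The voxel-wise coefficient of variation quoted above (0.02 to 0.08 across 8 time points) is the standard deviation divided by the mean. A minimal sketch using the population standard deviation (an assumption; the study does not state which estimator was used) and made-up T1 values:

```python
def coefficient_of_variation(values):
    """Population standard deviation divided by the mean, e.g. of one
    voxel's T1 estimates (ms) across repeated scans."""
    mean = sum(values) / len(values)
    var = sum((x - mean) ** 2 for x in values) / len(values)
    return (var ** 0.5) / mean
```

For instance, T1 estimates of 90 and 110 ms have mean 100 ms and population standard deviation 10 ms, giving a coefficient of variation of 0.1.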

  16. [Application of functional MR-images acquired at low field in planning of neurosurgical operation close to an eloquent brain area].

    Science.gov (United States)

    Auer, Tibor; Schwarcz, Attila; Janszky, József; Horváth, Zsolt; Kosztolányi, Péter; Dóczi, Tamás

    2007-01-20

    We present functional MRI performed at a low magnetic field (1 Tesla) for planning a microsurgical operation in a patient suffering from a tumor close to an eloquent brain area. Microsurgical removal, navigated by frameless stereotaxy, of an intrinsic tumor located in an eloquent area is indicated if speech function is not damaged, i.e., if the exact localization and relationship of the tumor and the speech area can be defined. Before the operation, an optimized EPI-based 2D sequence was applied to yield functional MR images. When planning the operation, the paradigm used for localization of the sensory language cortex consisted of passive listening to a text. Control investigations were performed one month postoperatively. A specific psychological test, as an additional investigation to estimate the accurate level of sensory language function, was also conducted. Low-resolution (64 × 64 matrix) functional MR images visualized the sensory speech center and auditory cortex satisfactorily. The scans showed clearly that Wernicke's region was situated just above the tumor (WHO grade II glioma), and this finding increased the safety of intraoperative localization and reduced the risk of morbidity. Control examinations revealed a minimal decrease in sensory language function; however, it was not noticeable to either the patient or her surroundings. Optimized functional MR imaging performed at a low magnetic field can support the planning of neurosurgical operations and reduce the morbidity of microsurgical interventions.

  17. Human dental age estimation by calculation of pulp-tooth volume ratios yielded on clinically acquired cone beam computed tomography images of monoradicular teeth.

    Science.gov (United States)

    Star, Hazha; Thevissen, Patrick; Jacobs, Reinhilde; Fieuws, Steffen; Solheim, Tore; Willems, Guy

    2011-01-01

    Secondary dentine is responsible for a decrease in the volume of the dental pulp cavity with aging. The aim of this study is to evaluate a human dental age estimation method based on the ratio between the volume of the pulp and the volume of its corresponding tooth, calculated on clinically taken cone beam computed tomography (CBCT) images of monoradicular teeth. On 111 clinically obtained 3D CBCT images (Scanora(®) 3D dental cone beam unit) of 57 female and 54 male patients ranging in age between 10 and 65 years, the pulp-tooth volume ratio of 64 incisors, 32 canines, and 15 premolars was calculated with Simplant(®) Pro software. A linear regression model was fit with age as dependent variable and ratio as predictor, allowing for interactions of specific gender or tooth type. The obtained pulp-tooth volume ratios were most strongly related to age for incisors. © 2010 American Academy of Forensic Sciences.
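The regression described above, with age as the dependent variable and the pulp-tooth volume ratio as the predictor, can be sketched with ordinary least squares; the coefficients and data below are illustrative, not the study's:

```python
def fit_age_model(ratios, ages):
    """Ordinary least squares for age = slope * ratio + intercept.
    Returns (slope, intercept); hypothetical single-predictor model."""
    n = len(ratios)
    mx = sum(ratios) / n
    my = sum(ages) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(ratios, ages))
    sxx = sum((x - mx) ** 2 for x in ratios)
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept
```

Since the pulp cavity shrinks with age, the fitted slope is expected to be negative: larger pulp-tooth volume ratios correspond to younger patients.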

  18. Image analyzers for bioscience applications.

    Science.gov (United States)

    Ramm, P

    1990-01-01

    Image analysis systems are becoming more sophisticated, less costly, and very common in research laboratories. Therefore, the bioscience researcher is faced with a bewildering array of choices in establishing an image analysis facility. Critical components and characteristics of commercial image analyzers are discussed. State-of-the-art systems feature a graphical user interface, a powerful operating system (e.g., Microsoft OS/2), 1,000-line image acquisition, processing and display, true color imaging, and very flexible scanner interfaces. Such systems are best suited to technically difficult applications, such as ratio fluorescence, or to automated analysis of anatomical features, particularly in stained material. Less powerful image analyzers offer medium resolution, and typically work with monochrome data acquired from video cameras. Such systems are suitable for many bioscience applications, including quantitative autoradiography and routine morphometry.

  19. 13 point video tape quality guidelines

    Energy Technology Data Exchange (ETDEWEB)

    Gaunt, R.

    1997-05-01

    Until high definition television (ATV) arrives, in the U.S. we must still contend with the National Television Systems Committee (NTSC) video standard (or PAL or SECAM, depending on your country). NTSC, a 40-year-old standard designed for transmission of color video camera images over a small bandwidth, is not well suited for the sharp, full-color images that today's computers are capable of producing. PAL and SECAM also suffer from many of NTSC's problems, but to varying degrees. Video professionals, when working with computer graphic (CG) images, use two monitors: a computer monitor for producing CGs and an NTSC monitor to view how a CG will look on video. More often than not, the NTSC image will differ significantly from the CG image, and outputting to NTSC as the artist works enables him or her to see the image as others will see it. Below are thirteen guidelines designed to increase the quality of computer graphics recorded onto video tape. Viewing your work in NTSC and attempting to follow the tips below will enable you to create higher quality videos. No video is perfect, so don't expect to abide by every guideline every time.

  20. Repeatability of Brain Volume Measurements Made with the Atlas-based Method from T1-weighted Images Acquired Using a 0.4 Tesla Low Field MR Scanner.

    Science.gov (United States)

    Goto, Masami; Suzuki, Makoto; Mizukami, Shinya; Abe, Osamu; Aoki, Shigeki; Miyati, Tosiaki; Fukuda, Michinari; Gomi, Tsutomu; Takeda, Tohoru

    2016-10-11

    An understanding of the repeatability of measured results is important for both the atlas-based and voxel-based morphometry (VBM) methods of magnetic resonance (MR) brain volumetry. However, many recent studies that have investigated the repeatability of brain volume measurements have been performed using static magnetic fields of 1-4 tesla, and no study has used a low-strength static magnetic field. The aim of this study was to investigate the repeatability of volumes measured using the atlas-based method and a low-strength static magnetic field (0.4 tesla). Ten healthy volunteers participated in this study. Using a 0.4 tesla magnetic resonance imaging (MRI) scanner and a quadrature head coil, three-dimensional T1-weighted images (3D-T1WIs) were obtained from each subject, twice on the same day. VBM8 software was used to construct segmented normalized images [gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) images]. The regions-of-interest (ROIs) of GM, WM, CSF, hippocampus (HC), orbital gyrus (OG), and cerebellum posterior lobe (CPL) were generated using WFU PickAtlas. The percentage change was defined as [100 × (measured volume from the first segmented image - mean volume in each subject)/(mean volume in each subject)]. The average percentage change was calculated over the 6 ROIs of the 10 subjects. The mean of the average percentage changes for each ROI was as follows: GM, 0.556%; WM, 0.324%; CSF, 0.573%; HC, 0.645%; OG, 1.74%; and CPL, 0.471%. The average percentage change was higher for the orbital gyrus than for the other ROIs. We consider that the repeatability of the atlas-based method is similar between 0.4 and 1.5 tesla MR scanners. To our knowledge, this is the first report to show that the level of repeatability with a 0.4 tesla MR scanner is adequate for the estimation of brain volume change by the atlas-based method.
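With two scans per subject, the percentage-change definition above reduces to a one-line function; the volumes below are made up for illustration:

```python
def percentage_change(first_volume, second_volume):
    """100 x (first-scan volume - per-subject mean) / per-subject mean,
    for a subject scanned twice (mean taken over the two scans)."""
    mean = (first_volume + second_volume) / 2.0
    return 100.0 * (first_volume - mean) / mean
```

For example, first- and second-scan gray-matter volumes of 102 and 98 ml have a per-subject mean of 100 ml, giving a percentage change of 2.0%; averaging such values over ROIs and subjects yields the figures reported above.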