WorldWideScience

Sample records for cell imaging videos

  1. High-speed video imaging and digital analysis of microscopic features in contracting striated muscle cells

    Science.gov (United States)

    Roos, Kenneth P.; Taylor, Stuart R.

    1993-02-01

    The rapid motion of microscopic features such as the cross striations of single contracting muscle cells is difficult to capture with conventional optical microscopes, video systems, and image processing approaches. An integrated digital video imaging microscope system specifically designed to capture images from single contracting muscle cells at speeds of up to 240 Hz and to analyze images to extract features critical for the understanding of muscle contraction is described. This system consists of a brightfield microscope with immersion optics coupled to a high-speed charge-coupled device (CCD) video camera, super-VHS (S-VHS) and optical media disk video recording (OMDR) systems, and a semiautomated digital image analysis system. Components are modified to optimize spatial and temporal resolution to permit the evaluation of submicrometer features in real physiological time. This approach permits the critical evaluation of the magnitude, time course, and uniformity of contractile function throughout the volume of a single living cell with higher temporal and spatial resolutions than previously possible.

  2. Ultrafast video imaging of cell division from zebrafish egg using multimodal microscopic system

    Science.gov (United States)

    Lee, Sung-Ho; Jang, Bumjoon; Kim, Dong Hee; Park, Chang Hyun; Bae, Gyuri; Park, Seung Woo; Park, Seung-Han

    2017-07-01

    Unlike ordinary laser scanning microscopies of the past, nonlinear optical laser scanning microscopy (SHG and THG microscopy) applies ultrafast laser technology, which delivers high peak powers from relatively inexpensive, low-average-power sources. Its short-pulse nature reduces ionization damage in organic molecules and enables bright, label-free imaging. In this study, we measured cell division of a zebrafish egg with ultrafast video imaging using a multimodal nonlinear optical microscope. The result shows in vivo, label-free imaging of cell division with sub-cellular resolution.

  3. Feature point tracking and trajectory analysis for video imaging in cell biology.

    Science.gov (United States)

    Sbalzarini, I F; Koumoutsakos, P

    2005-08-01

    This paper presents a computationally efficient, two-dimensional, feature point tracking algorithm for the automated detection and quantitative analysis of particle trajectories as recorded by video imaging in cell biology. The tracking process requires no a priori mathematical modeling of the motion, it is self-initializing, it discriminates spurious detections, and it can handle temporary occlusion as well as particle appearance and disappearance from the image region. The efficiency of the algorithm is validated on synthetic video data where it is compared to existing methods and its accuracy and precision are assessed for a wide range of signal-to-noise ratios. The algorithm is well suited for video imaging in cell biology relying on low-intensity fluorescence microscopy. Its applicability is demonstrated in three case studies involving transport of low-density lipoproteins in endosomes, motion of fluorescently labeled Adenovirus-2 particles along microtubules, and tracking of quantum dots on the plasma membrane of live cells. The present automated tracking process enables the quantification of dispersive processes in cell biology using techniques such as moment scaling spectra.
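
    The linking step can be illustrated with a much-reduced sketch (not the authors' full algorithm, which also handles occlusion, appearance/disappearance, and spurious detections): greedy nearest-neighbour association of detections between consecutive frames within a maximum displacement. The array layout and the `max_disp` threshold below are assumptions made for the example.

```python
import numpy as np

def link_frames(points_prev, points_next, max_disp=5.0):
    """Greedy nearest-neighbour linking of feature points between two frames.

    points_prev, points_next : (N, 2) and (M, 2) arrays of (x, y) detections.
    Returns a list of (i, j) index pairs; unmatched points start or end trajectories.
    """
    links, taken = [], set()
    for i, p in enumerate(points_prev):
        d = np.linalg.norm(points_next - p, axis=1)   # distances to all candidates
        for j in np.argsort(d):
            if d[j] > max_disp:
                break                                  # no candidate close enough
            if int(j) not in taken:
                links.append((i, int(j)))
                taken.add(int(j))
                break
    return links

# Example: two synthetic frames with slightly shifted particles
prev_pts = np.array([[10.0, 10.0], [40.0, 25.0]])
next_pts = np.array([[11.2, 10.5], [41.0, 24.1], [80.0, 80.0]])
print(link_frames(prev_pts, next_pts))   # [(0, 0), (1, 1)]
```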

  4. Color image and video enhancement

    CERN Document Server

    Lecca, Michela; Smolka, Bogdan

    2015-01-01

    This text covers state-of-the-art color image and video enhancement techniques. The book examines the multivariate nature of color image/video data as it pertains to contrast enhancement, color correction (equalization, harmonization, normalization, balancing, constancy, etc.), noise removal and smoothing. It also discusses color and contrast enhancement in vision sensors and applications of image and video enhancement. The book focuses on enhancement of color images/video, addresses algorithms for enhancing color images and video, and presents coverage of super resolution, restoration, inpainting, and colorization.

  5. Video: reprogramming cells.

    Science.gov (United States)

    2008-12-19

    This video introduction to Science's year-end special issue features Shinya Yamanaka of Kyoto University, George Daley of Harvard University, and Science's Gretchen Vogel reviewing some of the work that led studies in reprogramming cells to be tagged the top scientific story for 2008.

  6. Video image stabilization and registration--plus

    Science.gov (United States)

    Hathaway, David H. (Inventor)

    2009-01-01

    A method of stabilizing a video image displayed in multiple video fields of a video sequence includes the steps of: subdividing a selected area of a first video field into nested pixel blocks; determining horizontal and vertical translation of each of the pixel blocks in each of the pixel block subdivision levels from the first video field to a second video field; and determining translation of the image from the first video field to the second video field by determining a change in magnification of the image from the first video field to the second video field in each of horizontal and vertical directions, and determining shear of the image from the first video field to the second video field in each of the horizontal and vertical directions.
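
    As an illustration of the block-translation step (the nested subdivision and the magnification/shear estimation of the claim are not reproduced here), the following sketch estimates the (dy, dx) displacement of a single pixel block between two fields by exhaustive search minimising the sum of absolute differences; the block size and search radius are arbitrary choices.

```python
import numpy as np

def block_translation(field_a, field_b, top, left, size=32, search=8):
    """Estimate the (dy, dx) translation of one pixel block from field_a to
    field_b by exhaustive search minimising the sum of absolute differences."""
    block = field_a[top:top + size, left:left + size].astype(np.float32)
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > field_b.shape[0] or x + size > field_b.shape[1]:
                continue                              # candidate block falls outside the field
            cand = field_b[y:y + size, x:x + size].astype(np.float32)
            sad = np.abs(block - cand).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best
```

    Fitting a low-order model (translation, magnification, shear) to the displacements found for the nested blocks would recover the remaining quantities described above.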

  7. Radiation effects on video imagers

    Science.gov (United States)

    Yates, G. J.; Bujnosek, J. J.; Jaramillo, S. A.; Walton, R. B.; Martinez, T. M.

    1986-02-01

    Radiation sensitivity of several photoconductive, photoemissive, and solid state silicon-based video imagers was measured by analysing stored photo-charge induced by irradiation with continuous and pulsed sources of high energy photons and neutrons. Transient effects as functions of absorbed dose, dose rate, fluences, and ionizing particle energy are presented.

  8. Multimedia image and video processing

    CERN Document Server

    Guan, Ling

    2012-01-01

    As multimedia applications have become part of contemporary daily life, numerous paradigm-shifting technologies in multimedia processing have emerged over the last decade. Substantially updated with 21 new chapters, Multimedia Image and Video Processing, Second Edition explores the most recent advances in multimedia research and applications. This edition presents a comprehensive treatment of multimedia information mining, security, systems, coding, search, hardware, and communications as well as multimodal information fusion and interaction. Clearly divided into seven parts, the book begins w

  9. Enhancement system of nighttime infrared video image and visible video image

    Science.gov (United States)

    Wang, Yue; Piao, Yan

    2016-11-01

    Visibility of nighttime video imagery has great significance for military and medical applications, but nighttime video is of such poor quality that targets and background are hard to recognize. We therefore enhance nighttime video by fusing infrared and visible video images. Based on the characteristics of infrared and visible images, we propose an improved SIFT algorithm and an αβ-weighted algorithm to fuse heterologous nighttime images. A transfer matrix is deduced from the improved SIFT algorithm and rapidly registers the heterologous nighttime images, while the αβ-weighted algorithm can be applied in any scene. In the video image fusion system, we use the transfer matrix to register every frame and then apply the αβ-weighted method to fuse every frame, which meets the timing requirements of video. The fused video not only retains the clear target information of the infrared video, but also retains the detail and color information of the visible video, and it plays back smoothly.
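
    The registration-plus-fusion pipeline described above can be approximated with standard OpenCV building blocks, shown below as a hedged sketch: plain SIFT (not the paper's improved variant) estimates the transfer matrix, and a fixed-weight blend stands in for the αβ-weighted fusion; the function and variable names are ours.

```python
import cv2
import numpy as np

def fuse_ir_visible(ir_gray, vis_color, alpha=0.6):
    """Register an infrared frame to a visible frame with SIFT + homography,
    then blend them with fixed weights (a stand-in for the paper's
    alpha-beta weighting). Weights and names are illustrative."""
    vis_gray = cv2.cvtColor(vis_color, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(ir_gray, None)
    k2, d2 = sift.detectAndCompute(vis_gray, None)

    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]  # Lowe ratio test

    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # the "transfer matrix"

    h, w = vis_gray.shape
    ir_warped = cv2.warpPerspective(ir_gray, H, (w, h))
    ir_bgr = cv2.cvtColor(ir_warped, cv2.COLOR_GRAY2BGR)
    return cv2.addWeighted(ir_bgr, alpha, vis_color, 1.0 - alpha, 0)
```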

  10. Real-time video-image analysis

    Science.gov (United States)

    Eskenazi, R.; Rayfield, M. J.; Yakimovsky, Y.

    1979-01-01

    Digitizer and storage system allow rapid random access to video data by computer. RAPID (random-access picture digitizer) uses two commercially-available, charge-injection, solid-state TV cameras as sensors. It can continuously update its memory with each frame of video signal, or it can hold given frame in memory. In either mode, it generates composite video output signal representing digitized image in memory.

  11. EFFICIENT VIDEO ANNOTATIONS BY AN IMAGE GROUPS

    Directory of Open Access Journals (Sweden)

    K. Mahi balan

    2015-10-01

    Full Text Available Searching for desirable events in uncontrolled videos is a challenging task, so research has mainly focused on learning concepts from numerous labelled videos. However, it is time consuming and labour expensive to collect the large amount of labelled videos required for training event models under various conditions. To avoid this problem, we propose to leverage abundant Web images for videos, since Web images contain a rich source of information, with many events roughly annotated and taken under various conditions. However, information from the Web is noisy, so brute-force knowledge transfer of images may hurt video annotation performance. We therefore propose a novel group-based domain adaptation learning framework to leverage different groups of knowledge (the source domain) queried from a Web image search engine to consumer videos (the target domain). Different from earlier methods using multiple source domains of images, our method groups the Web images according to their intrinsic semantic relationships instead of their sources. Specifically, two different types of groups (event-specific groups and concept-specific groups) are exploited to describe, respectively, the event-level and concept-level semantic meanings of target-domain videos.

  12. Enhanced Video Surveillance (EVS) with speckle imaging

    Energy Technology Data Exchange (ETDEWEB)

    Carrano, C J

    2004-01-13

    Enhanced Video Surveillance (EVS) with Speckle Imaging is a high-resolution imaging system that substantially improves resolution and contrast in images acquired over long distances. This technology will increase image resolution up to an order of magnitude or greater for video surveillance systems. The system's hardware components are all commercially available and consist of a telescope or large-aperture lens assembly, a high-performance digital camera, and a personal computer. The system's software, developed at LLNL, extends standard speckle-image-processing methods (used in the astronomical community) to solve the atmospheric blurring problem associated with imaging over medium to long distances (hundreds of meters to tens of kilometers) through horizontal or slant-path turbulence. This novel imaging technology will not only enhance national security but also will benefit law enforcement, security contractors, and any private or public entity that uses video surveillance to protect their assets.

  13. Still image and video compression with MATLAB

    CERN Document Server

    Thyagarajan, K

    2010-01-01

    This book describes the principles of image and video compression techniques and introduces current and popular compression standards, such as the MPEG series. Derivations of relevant compression algorithms are developed in an easy-to-follow fashion. Numerous examples are provided in each chapter to illustrate the concepts. The book includes complementary software written in MATLAB SIMULINK to give readers hands-on experience in using and applying various video compression methods. Readers can enhance the software by including their own algorithms.

  14. Quality assessment of video image capture systems

    Science.gov (United States)

    Rowberg, Alan H.; Lian, Jing

    1991-05-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. A series of digital phantoms has been developed for display on either a CT9800 or Hilite Advantage scanner. The phantom images have been stored on magnetic tape in the standard tape archive format used by General Electric, so that the images may be loaded onto the scanner at any time. These images are then captured using a commercial video image capture board in a PC/286 computer, where the images are not only displayed but also analyzed with the use of an automated process implemented in a computer program on the same PC. Results of the analyses are saved, together with the date and time of image acquisition, so that the results can be displayed graphically, as trend plots.

  15. Intergraph video and images exploitation capabilities

    Science.gov (United States)

    Colla, Simone; Manesis, Charalampos

    2013-08-01

    The current paper focuses on the capture, fusion and processing of aerial imagery in order to leverage full motion video, giving analysts the ability to collect, analyze, and maximize the value of video assets. Unmanned aerial vehicles (UAVs) have provided critical real-time surveillance and operational support to military organizations, and are a key source of intelligence, particularly when integrated with other geospatial data. In the current workflow, the UAV operators first plan the flight using flight planning software. During the flight, the UAV sends a live video stream directly to the field to be processed by Intergraph software, which generates and disseminates georeferenced images through a service-oriented architecture based on the ERDAS Apollo suite. The raw video-based data sources provide the most recent view of a situation and can augment other forms of geospatial intelligence - such as satellite imagery and aerial photos - to provide a richer, more detailed view of the area of interest. To effectively use video as a source of intelligence, however, the analyst needs to seamlessly fuse the video with these other types of intelligence, such as map features and annotations. Intergraph has developed an application that automatically generates mosaicked, georeferenced images and tags along the video route, which can then be seamlessly integrated with other forms of static data, such as aerial photos, satellite imagery, or geospatial layers and features. Consumers will finally have the ability to use a single, streamlined system to complete the entire geospatial information lifecycle: capturing geospatial data using sensor technology; processing vector, raster, and terrain data into actionable information; managing, fusing, and sharing geospatial data and video together; and finally, rapidly and securely delivering integrated information products, ensuring individuals can make timely decisions.

  16. Super resolution of images and video

    CERN Document Server

    Katsaggelos, Aggelos K

    2007-01-01

    This book focuses on the super resolution of images and video. The authors use the term super resolution (SR) to describe the process of obtaining a high resolution (HR) image, or a sequence of HR images, from a set of low resolution (LR) observations. This process has also been referred to in the literature as resolution enhancement (RE). SR has been applied primarily to spatial and temporal RE, but also to hyperspectral image enhancement. This book concentrates on motion based spatial RE, although the authors also describe motion free and hyperspectral image SR problems. Also exa

  17. Video surveillance with speckle imaging

    Science.gov (United States)

    Carrano, Carmen J.; Brase, James M.

    2007-07-17

    A surveillance system looks through the atmosphere along a horizontal or slant path. Turbulence along the path causes blurring. The blurring is corrected by speckle processing short exposure images recorded with a camera. The exposures are short enough to effectively freeze the atmospheric turbulence. Speckle processing is used to recover a better quality image of the scene.

  18. Video-based noncooperative iris image segmentation.

    Science.gov (United States)

    Du, Yingzi; Arslanturk, Emrah; Zhou, Zhi; Belcher, Craig

    2011-02-01

    In this paper, we propose a video-based noncooperative iris image segmentation scheme that incorporates a quality filter to quickly eliminate images without an eye, employs a coarse-to-fine segmentation scheme to improve the overall efficiency, uses a direct least squares fitting of ellipses method to model the deformed pupil and limbic boundaries, and develops a window gradient-based method to remove noise in the iris region. A remote iris acquisition system is set up to collect noncooperative iris video images. An objective method is used to quantitatively evaluate the accuracy of the segmentation results. The experimental results demonstrate the effectiveness of this method. The proposed method would make noncooperative iris recognition or iris surveillance possible.

  19. Image and Video for Hearing Impaired People

    Directory of Open Access Journals (Sweden)

    Aran Oya

    2007-01-01

    Full Text Available We present a global overview of image- and video-processing-based methods to help the communication of hearing impaired people. Two directions of communication have to be considered: from a hearing person to a hearing impaired person and vice versa. In this paper, firstly, we describe sign language (SL) and the cued speech (CS) language, which are two different languages used by the deaf community. Secondly, we present existing tools which employ SL and CS video processing and recognition for the automatic communication between deaf people and hearing people. Thirdly, we present the existing tools for reverse communication, from hearing people to deaf people, that involve SL and CS video synthesis.

  20. Image and Video for Hearing Impaired People

    Directory of Open Access Journals (Sweden)

    Thomas Burger

    2008-04-01

    Full Text Available We present a global overview of image- and video-processing-based methods to help the communication of hearing impaired people. Two directions of communication have to be considered: from a hearing person to a hearing impaired person and vice versa. In this paper, firstly, we describe sign language (SL) and the cued speech (CS) language, which are two different languages used by the deaf community. Secondly, we present existing tools which employ SL and CS video processing and recognition for the automatic communication between deaf people and hearing people. Thirdly, we present the existing tools for reverse communication, from hearing people to deaf people, that involve SL and CS video synthesis.

  1. Structural image and video understanding

    NARCIS (Netherlands)

    Lou, Z.

    2016-01-01

    In this thesis, we have discussed how to exploit the structures in several computer vision topics. The five chapters addressed five computer vision topics using the image structures. In chapter 2, we proposed a structural model to jointly predict the age, expression and gender of a face. By modeling

  2. Image Space and Time Interpolation for Video Navigation

    OpenAIRE

    2011-01-01

    The aim of image-based video navigation is essentially to achieve a continuous change in the viewpoint without the need for complete camera coverage of the space of interest. By making use of image interpolation, the need for video hardware can be reduced drastically by replacing physical cameras, at the desired viewpoints, with virtual video cameras. In this work, based on previously published approaches, an algorithm for time and space image interpolation is developed with a video application...

  3. Real-time image and video processing

    CERN Document Server

    Kehtarnavaz, Nasser

    2006-01-01

    This book presents an overview of the guidelines and strategies for transitioning an image or video processing algorithm from a research environment into a real-time constrained environment. Such guidelines and strategies are scattered in the literature of various disciplines, including image processing, computer engineering, and software engineering, and thus have not previously appeared in one place. By bringing these strategies into one place, the book is intended to serve the greater community of researchers, practicing engineers, and industrial professionals who are interested in taking an im

  4. Practical image and video processing using MATLAB

    CERN Document Server

    Marques, Oge

    2011-01-01

    "The book provides a practical introduction to the most important topics in image and video processing using MATLAB (and its Image Processing Toolbox) as a tool to demonstrate the most important techniques and algorithms. The contents are presented in a clear, technically accurate, objective way, with just enough mathematical detail. Most of the chapters are supported by figures, examples, illustrative problems, MATLAB scripts, suggestions for further reading, bibliographical references, useful Web sites, and exercises and computer projects to extend the understanding of their contents"--

  5. Image and video processing in the compressed domain

    CERN Document Server

    Mukhopadhyay, Jayanta

    2011-01-01

    As more images and videos are becoming available in compressed formats, researchers have begun designing algorithms for different image operations directly in their domains of representation, leading to faster computation and lower buffer requirements. Image and Video Processing in the Compressed Domain presents the fundamentals, properties, and applications of a variety of image transforms used in image and video compression. It illustrates the development of algorithms for processing images and videos in the compressed domain. Developing concepts from first principles, the book introduces po

  6. Automatic annotation of image and video using semantics

    Science.gov (United States)

    Yasaswy, A. R.; Manikanta, K.; Sri Vamshi, P.; Tapaswi, Shashikala

    2010-02-01

    The accumulation of large collections of digital images has created the need for efficient and intelligent schemes for content-based image retrieval. Our goal is to organize the contents semantically, according to meaningful categories. Automatic annotation is the process of automatically assigning descriptions to an image or video that describes the contents of the image or video. In this paper, we examine the problem of automatic captioning of multimedia containing round and square objects. On a given set of images and videos we were able to recognize round and square objects in the images with accuracy up to 80% and videos with accuracy up to 70%.

  7. Multimodal location estimation of videos and images

    CERN Document Server

    Friedland, Gerald

    2015-01-01

    This book presents an overview of the field of multimodal location estimation, i.e. using acoustic, visual, and/or textual cues to estimate the shown location of a video recording. The authors present sample research results in this field in a unified way, integrating research work on this topic that focuses on different modalities, viewpoints, and applications. The book describes fundamental methods of acoustic, visual, textual, social graph, and metadata processing as well as multimodal integration methods used for location estimation. In addition, the text covers benchmark metrics and explores the limits of the technology based on a human baseline. The book discusses localization of multimedia data, examines fundamental methods of establishing location metadata for images and videos (other than GPS tagging), and covers data-driven as well as semantic location estimation.

  8. A novel Kalman filter based video image processing scheme for two-photon fluorescence microscopy

    Science.gov (United States)

    Sun, Wenqing; Huang, Xia; Li, Chunqiang; Xiao, Chuan; Qian, Wei

    2016-03-01

    Two-photon fluorescence microscopy (TPFM) is an ideal optical imaging instrument for monitoring the interaction between fast-moving viruses and their hosts. However, due to strong, unavoidable background noise from the culture, videos obtained with this technique are too noisy to resolve this fast infection process without video image processing. In this study, we developed a novel scheme to eliminate background noise, recover background bacteria images, and improve video quality. In our scheme, we modified and implemented the following methods for both host and virus videos: a correlation method, a round-identification method, tree-structured nonlinear filters, Kalman filters, and a cell tracking method. After these procedures, most of the noise was eliminated and the host images were recovered, with their moving directions and speeds highlighted in the videos. From the analysis of the processed videos, 93% of bacteria and 98% of viruses were correctly detected in each frame on average.
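
    One ingredient of such a scheme, Kalman filtering of a noisy centroid track, can be sketched as below; this is a generic constant-velocity filter with assumed noise parameters, not the paper's full pipeline (the correlation, round-identification, and tree-structured nonlinear filtering steps are omitted).

```python
import numpy as np

def kalman_track(measurements, q=1e-2, r=4.0):
    """Constant-velocity Kalman filter over 2-D centroid measurements.

    measurements : (T, 2) array of noisy (x, y) positions, one per frame.
    Returns the (T, 2) filtered positions.
    """
    dt = 1.0
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
    Q = q * np.eye(4)                 # process noise covariance (assumed)
    R = r * np.eye(2)                 # measurement noise covariance (assumed)
    x = np.array([*measurements[0], 0.0, 0.0])
    P = np.eye(4) * 10.0
    out = [x[:2].copy()]
    for z in measurements[1:]:
        x = F @ x                     # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R           # update with the new measurement
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.array(out)
```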

  9. Single molecule dynamics in a virtual cell: a three-dimensional model that produces simulated fluorescence video-imaging data.

    Science.gov (United States)

    Mashanov, Gregory I

    2014-09-06

    The analysis of single molecule imaging experiments is complicated by the stochastic nature of single molecule events, by instrument noise, and by the limited information which can be gathered about any individual molecule observed. Consequently, it is important to cross-check experimental results using a model simulating single molecule dynamics (e.g. movements and binding events) in a virtual cell-like environment. The output of such a model should match the real data format, allowing researchers to compare simulated results with real experiments. The proposed model exploits the advantages of 'object-oriented' computing: first of all, the ability to create and manipulate a number of classes, each containing an arbitrary number of single molecule objects. These classes may include objects moving within the 'cytoplasm'; objects moving at the 'plasma membrane'; and static objects located inside the 'body'. The objects of a given class can interact with each other and/or with the objects of other classes according to their physical and chemical properties. Each model run generates a sequence of images, each containing summed images of all fluorescent objects emitting light under the given illumination conditions, with realistic levels of noise and emission fluctuations. The model accurately reproduces reported single molecule experiments and predicts the outcome of future experiments.
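
    A toy two-dimensional version of such an object-oriented simulation might look as follows: molecule objects diffuse by Brownian steps and each frame is rendered as a sum of Gaussian spots with Poisson shot noise. All parameters (diffusion step, PSF width, photon counts, frame size) are placeholders, not values from the paper.

```python
import numpy as np

class Molecule:
    """A single fluorescent molecule diffusing in a 2-D 'membrane' (toy model)."""
    def __init__(self, rng, size):
        self.pos = rng.uniform(0, size, 2)

    def step(self, rng, sigma_step, size):
        self.pos = np.clip(self.pos + rng.normal(0, sigma_step, 2), 0, size - 1)

def render(molecules, rng, size=64, psf_sigma=1.5, photons=200.0, bg=10.0):
    """Sum Gaussian spots for all molecules, then add Poisson shot noise."""
    yy, xx = np.mgrid[0:size, 0:size]
    frame = np.full((size, size), bg, float)
    for m in molecules:
        r2 = (xx - m.pos[0]) ** 2 + (yy - m.pos[1]) ** 2
        frame += photons * np.exp(-r2 / (2 * psf_sigma ** 2))
    return rng.poisson(frame).astype(np.uint16)

rng = np.random.default_rng(0)
mols = [Molecule(rng, 64) for _ in range(20)]
video = []
for _ in range(100):                       # 100 simulated frames
    for m in mols:
        m.step(rng, sigma_step=0.8, size=64)
    video.append(render(mols, rng))
```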

  10. Hiding image to video: A new approach of LSB replacement

    Directory of Open Access Journals (Sweden)

    Saurabh Singh,

    2010-12-01

    Full Text Available Steganography has become a great area of interest for researchers as the need for secure transmission of information increases day by day. The information may be text, image, audio or video. Steganography is a technique in which the required information is hidden inside other information such that the second information does not change significantly and appears the same as the original. This paper presents a novel approach to hiding an image in a video. The proposed algorithm replaces one LSB of each pixel in the video frames. It becomes very difficult for an intruder to guess that an image is hidden in the video, as individual frames are very difficult to analyze in a video running at 30 frames per second. The analysis is made more difficult by hiding each row of image pixels in multiple frames of the video, so an intruder cannot even try to recover the hidden image until he gets the full video.
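
    A minimal single-frame sketch of the LSB replacement idea is shown below (the paper additionally spreads the rows of the secret image across multiple frames); the array shapes and the use of one greyscale frame are simplifications for illustration.

```python
import numpy as np

def embed_bits(frame, bits):
    """Replace the least-significant bit of the first len(bits) pixels of a
    greyscale frame with the given bit sequence (a simplified, single-frame
    version of the row-per-frame scheme described in the abstract)."""
    flat = frame.flatten()
    if len(bits) > flat.size:
        raise ValueError("payload does not fit in this frame")
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(frame.shape)

def extract_bits(frame, n):
    """Read back the first n embedded bits."""
    return frame.flatten()[:n] & 1

# Hide one 8-bit row of a secret image (here random) in one video frame
rng = np.random.default_rng(1)
frame = rng.integers(0, 256, (480, 640), dtype=np.uint8)
secret_row = rng.integers(0, 256, 640, dtype=np.uint8)
bits = np.unpackbits(secret_row)              # 640 * 8 = 5120 bits
stego = embed_bits(frame, bits)
assert np.array_equal(np.packbits(extract_bits(stego, bits.size)), secret_row)
```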

  11. Mirror Image Video Artifact: An Under-Reported Digital Video-EEG Artifact.

    Science.gov (United States)

    Babcock, Michael A; Levis, William H; Bhatt, Amar B

    2017-01-01

    Synchronous video recording can be helpful in EEG recordings, especially in recognition of seizures and in rejection of artifacts. However, video recordings themselves are also subject to the risk of contamination by artifacts. We report a unique case in which a digital video artifact was identified, occurring during synchronous video-EEG recording, albeit independently of the EEG tracing itself. A synchronous digital video-EEG recording was performed on a 67-year-old male who presented in focal motor status epilepticus. During the initial review of the data, right-sided abnormalities on EEG apparently corresponded with (ipsilateral) right arm motor activity on video, suggesting a nonsensical anatomical localization. However, review of the patient's chart and discussion with the EEG technologist led to the recognition that the video data recorded a mirror image of the true findings of left arm motor activity. Review of the software settings led to the discovery that the video recording was inverted along the vertical axis, leading to mirror image video artifact. Recognition of this video artifact allowed for accurate interpretation of the study: right hemispheric EEG abnormalities correlated appropriately with (contralateral) left arm twitching. Effective communication between the EEG reading physician, the treating team, and the EEG technologist is critical for recognition of such artifacts, for proper EEG interpretation, and for appropriate patient management. Mirror image video artifact affirms that bedside evaluation, astute technologists, and attentive EEG reading physicians remain important, even in the presence of video recording.

  12. Image and video restorations via nonlocal kernel regression.

    Science.gov (United States)

    Zhang, Haichao; Yang, Jianchao; Zhang, Yanning; Huang, Thomas S

    2013-06-01

    A nonlocal kernel regression (NL-KR) model is presented in this paper for various image and video restoration tasks. The proposed method exploits both the nonlocal self-similarity and local structural regularity properties in natural images. The nonlocal self-similarity is based on the observation that image patches tend to repeat themselves in natural images and videos, and the local structural regularity observes that image patches have regular structures where accurate estimation of pixel values via regression is possible. By unifying both properties explicitly, the proposed NL-KR framework is more robust in image estimation, and the algorithm is applicable to various image and video restoration tasks. In this paper, we apply the proposed model to image and video denoising, deblurring, and superresolution reconstruction. Extensive experimental results on both single images and realistic video sequences demonstrate that the proposed framework performs favorably with previous works both qualitatively and quantitatively.

  13. Video Field Studies with your Cell Phone

    DEFF Research Database (Denmark)

    Buur, Jacob; Fraser, Euan

    2010-01-01

    … and in particular practitioners from smaller organizations are understandably nervous about embarking on video projects out of fear that it is difficult to get consent in the first place, that the ethics is difficult to handle, that video shooting makes the social relations awkward, that the editing task is monumental, that equipment is difficult to handle, etc. This tutorial presents a lightweight entry into video field studies, using cheap devices like cell phones and portable webcams for informal shooting and simple computer handling for editing. E.g. how far can you get with an iPhone or a video capable iPod? Or with the GoPRO sports camera? Our approach has a strong focus on how to use video in design, rather than on the technical side. The goal is to engage design teams in meaningful discussions based on user empathy, rather than to produce beautiful videos. Basically it is a search for a minimalist way...

  14. Does Instructor's Image Size in Video Lectures Affect Learning Outcomes?

    Science.gov (United States)

    Pi, Z.; Hong, J.; Yang, J.

    2017-01-01

    One of the most commonly used forms of video lectures is a combination of an instructor's image and accompanying lecture slides as a picture-in-picture. As the image size of the instructor varies significantly across video lectures, and so do the learning outcomes associated with this technology, the influence of the instructor's image size should…

  15. Video Vortex reader II: moving images beyond YouTube

    NARCIS (Netherlands)

    Lovink, G.; Somers Miles, R.

    2011-01-01

    Video Vortex Reader II is the Institute of Network Cultures' second collection of texts that critically explore the rapidly changing landscape of online video and its use. With the success of YouTube ('2 billion views per day') and the rise of other online video sharing platforms, the moving image

  16. Video Vortex reader II: moving images beyond YouTube

    NARCIS (Netherlands)

    Lovink, G.; Somers Miles, R.

    2011-01-01

    Video Vortex Reader II is the Institute of Network Cultures' second collection of texts that critically explore the rapidly changing landscape of online video and its use. With the success of YouTube ('2 billion views per day') and the rise of other online video sharing platforms, the moving image h

  17. Dynamic Image Stitching for Panoramic Video

    Directory of Open Access Journals (Sweden)

    Jen-Yu Shieh

    2014-10-01

    Full Text Available This paper presents dynamic image stitching for panoramic video. Using the OpenCV vision library and the SIFT algorithm as its basis, the article puts forward a Gaussian second-difference (MoG) scheme, processed on the basis of the DoG (difference of Gaussians) map, to reduce the order of dynamic image synthesis and simplify the Gaussian pyramid algorithm. MSIFT matching with an overlapping segmentation method narrows the scope of feature extraction in order to increase speed. Through this method, traditional image synthesis can be improved without requiring large amounts of computation or being limited by space and angle. The system uses four ordinary webcams and two IP cameras coupled with several wide-angle lenses; the wide-angle lenses monitor a wide area, and image stitching then produces the panoramic effect. For the overall image application and control interface, Microsoft Visual Studio C# is adopted to construct the software interface. On a personal computer with a 2.4-GHz CPU and 2 GB of RAM, with the cameras fixed to it, the execution speed is three images per second, which reduces the calculation time compared with the traditional algorithm.
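
    A hedged stand-in for the pipeline described above, using OpenCV's high-level stitcher rather than the paper's MoG/MSIFT variant; the camera indices and output file name are placeholders.

```python
import cv2

# Grab one frame from each of several cameras (indices are placeholders)
frames = []
for cam_id in (0, 1, 2, 3):
    cap = cv2.VideoCapture(cam_id)
    ok, frame = cap.read()
    cap.release()
    if ok:
        frames.append(frame)

# OpenCV's built-in stitcher (feature detection + matching + blending)
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)
if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("stitching failed, status =", status)
```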

  18. Image and video compression fundamentals, techniques, and applications

    CERN Document Server

    Joshi, Madhuri A; Dandawate, Yogesh H; Joshi, Kalyani R; Metkar, Shilpa P

    2014-01-01

    Image and video signals require large transmission bandwidth and storage, leading to high costs. The data must be compressed without a loss or with a small loss of quality. Thus, efficient image and video compression algorithms play a significant role in the storage and transmission of data. Image and Video Compression: Fundamentals, Techniques, and Applications explains the major techniques for image and video compression and demonstrates their practical implementation using MATLAB® programs. Designed for students, researchers, and practicing engineers, the book presents both basic principles

  19. Despeckle filtering for ultrasound imaging and video II selected applications

    CERN Document Server

    Loizou, Christos P

    2015-01-01

    In ultrasound imaging and video, visual perception is hindered by speckle, a multiplicative noise that degrades quality. Noise reduction is therefore essential for improving the visual observation quality or as a pre-processing step for further automated analysis, such as image/video segmentation, texture analysis and encoding in ultrasound imaging and video. The goal of the first book (book 1 of 2) was to introduce the problem of speckle in ultrasound image and video as well as the theoretical background, algorithmic steps, and the MATLAB™ code for the following group of despeckle filters:

  20. Multimodal interaction in image and video applications

    CERN Document Server

    Sappa, Angel D

    2013-01-01

    Traditional Pattern Recognition (PR) and Computer Vision (CV) technologies have mainly focused on full automation, even though full automation often proves elusive or unnatural in many applications, where the technology is expected to assist rather than replace the human agents. However, not all problems can be solved automatically; in some applications, human interaction is the only way to tackle them. Recently, multimodal human interaction has become an important field of increasing interest in the research community. Advanced man-machine interfaces with high cognitive capabilities are a hot research topic that aims at solving challenging problems in image and video applications. Actually, the idea of computer interactive systems was already proposed in the early stages of computer science. Nowadays, the ubiquity of image sensors together with the ever-increasing computing performance has opened new and challenging opportunities for research in multimodal human interaction. This book aims to show how existi...

  1. VLSI-based Video Event Triggering for Image Data Compression

    Science.gov (United States)

    Williams, Glenn L.

    1994-01-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.

  2. Video indexing based on image and sound

    Science.gov (United States)

    Faudemay, Pascal; Montacie, Claude; Caraty, Marie-Jose

    1997-10-01

    Video indexing is a major challenge for both scientific and economic reasons. Information extraction can sometimes be easier from the sound channel than from the image channel. We first present a multi-channel and multi-modal query interface, to query sound, image and script through 'pull' and 'push' queries. We then summarize the segmentation phase, which needs information from the image channel. Detection of critical segments is proposed; it should speed up both automatic and manual indexing. We then present an overview of the information extraction phase. Information can be extracted from the sound channel through speaker recognition, vocal dictation with unconstrained vocabularies, and script alignment with speech. We present experimental results for these various techniques. Speaker recognition methods were tested on the TIMIT and NTIMIT databases. Vocal dictation was experimented with on newspaper sentences spoken by several speakers. Script alignment was tested on part of a cartoon movie, 'Ivanhoe'. For good quality sound segments, error rates are low enough for use in indexing applications. Major issues are the processing of sound segments with noise or music, and performance improvement through the use of appropriate, low-cost architectures or networks of workstations.

  3. Despeckle filtering for ultrasound imaging and video, v.I algorithms and software

    CERN Document Server

    Loizou, Christos P

    2015-01-01

    It is well known that speckle is a multiplicative noise that degrades image and video quality and the visual expert's evaluation in ultrasound imaging and video. This necessitates robust image and video despeckling techniques for both routine

  4. Multiresolutional encoding and decoding in embedded image and video coders

    Science.gov (United States)

    Xiong, Zixiang; Kim, Beong-Jo; Pearlman, William A.

    1998-07-01

    We address multiresolutional encoding and decoding within the embedded zerotree wavelet (EZW) framework for both images and video. By varying a resolution parameter, one can obtain decoded images at different resolutions from one single encoded bitstream, which is already rate scalable for EZW coders. Similarly one can decode video sequences at different rates and different spatial and temporal resolutions from one bitstream. Furthermore, a layered bitstream can be generated with multiresolutional encoding, from which the higher resolution layers can be used to increase the spatial/temporal resolution of the images/video obtained from the low resolution layer. In other words, we have achieved full scalability in rate and partial scalability in space and time. This added spatial/temporal scalability is significant for emerging multimedia applications such as fast decoding, image/video database browsing, telemedicine, multipoint video conferencing, and distance learning.
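
    The resolution-scalable decoding idea can be illustrated with PyWavelets: dropping the finest detail subbands before reconstruction yields a lower-resolution image. This only illustrates the principle; a real EZW/SPIHT decoder operates on the embedded bitstream itself. The wavelet choice and test image below are arbitrary.

```python
import numpy as np
import pywt

def decode_at_resolution(coeffs, wavelet, drop_levels):
    """Reconstruct an image from a wavelet decomposition while discarding the
    `drop_levels` finest detail levels, yielding a lower-resolution image."""
    kept = coeffs[:len(coeffs) - drop_levels] if drop_levels else coeffs
    return pywt.waverec2(kept, wavelet)

image = np.random.rand(256, 256)                 # stand-in for a decoded frame
coeffs = pywt.wavedec2(image, "haar", level=4)   # 4-level decomposition
full = decode_at_resolution(coeffs, "haar", 0)   # 256 x 256
half = decode_at_resolution(coeffs, "haar", 1)   # 128 x 128
print(full.shape, half.shape)
```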

  5. Spatio-temporal image inpainting for video applications

    Directory of Open Access Journals (Sweden)

    Voronin Viacheslav

    2017-01-01

    Full Text Available Video inpainting, or completion, is a vital video improvement technique used to repair or edit digital videos. This paper describes a framework for temporally consistent video completion. The proposed method removes dynamic objects or restores missing or tainted regions in a video sequence by utilizing spatial and temporal information from neighboring scenes. A masking algorithm is used to detect scratches or damaged portions in video frames. The algorithm iteratively performs the following operations: acquire a frame; update the scene model; update the positions of moving objects; and replace the parts of the frame occupied by objects marked for removal using a background model. In this paper, we extend an image inpainting algorithm based on texture and structure reconstruction by incorporating an improved strategy for video. Our algorithm is able to deal with a variety of challenging situations which naturally arise in video inpainting, such as the correct reconstruction of dynamic textures, multiple moving objects and moving background. Experimental comparisons to state-of-the-art video completion methods demonstrate the effectiveness of the proposed approach. It is shown that the proposed spatio-temporal image inpainting method allows restoring missing blocks and removing text from scenes in videos.
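
    As a frame-by-frame stand-in (it ignores the temporal information that the proposed method exploits), a fixed damaged region could be filled with OpenCV's image inpainting, as sketched below; the file names, frame size, and mask geometry are placeholders.

```python
import cv2
import numpy as np

def inpaint_video_framewise(in_path, out_path, mask):
    """Per-frame inpainting of a fixed damaged region (e.g. a scratch or burnt-in
    text). This is only a frame-by-frame stand-in; the method in the abstract
    additionally exploits temporal information from neighbouring frames."""
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA))
    cap.release()
    writer.release()

# mask: non-zero where pixels must be filled; it must match the frame size
# of the input video (a 640 x 480 input is assumed here for illustration)
mask = np.zeros((480, 640), np.uint8)
mask[440:470, 20:300] = 255                      # hypothetical text box region
inpaint_video_framewise("input.mp4", "restored.mp4", mask)
```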

  6. Image Segmentation in Video Sequences Using Modified Background Subtraction

    Directory of Open Access Journals (Sweden)

    D. W. Chinchkhede

    2012-03-01

    Full Text Available In computer vision, "background subtraction" is a technique for finding moving objects in a video sequence, for example, a vehicle driving on a freeway. To detect non-stationary (dynamic) objects, it is necessary to subtract the current image from a time-averaged background image. There are various background subtraction algorithms for detecting moving vehicles or other moving objects, such as pedestrians, in urban traffic video sequences. The modified background subtraction method presented here classifies each pixel of the current image frame, locates slow-moving objects even in videos of poor image quality, and distinguishes shadows from moving objects. In classifying each pixel of the current frame under foreground and background conditions, each pixel is classified using a model of how that pixel looks when it is part of a given video frame class. A mixture-of-Gaussians classification model for each pixel is fitted using an unsupervised technique, an efficient, incremental version of Expectation Maximization (EM). Unlike the standard image-averaging approach, this method automatically updates the mixture component for each video frame class according to the likelihood of membership; hence slow-moving objects and poor-quality video are also handled correctly. Our approach identifies and eliminates shadows much more effectively than other techniques such as thresholding.
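
    A readily available counterpart to the approach described above is OpenCV's Gaussian-mixture background subtractor (MOG2), which likewise maintains a per-pixel mixture model updated online and flags shadows separately. The sketch below uses it with a placeholder file name and thresholds; it is not the authors' modified method.

```python
import cv2

cap = cv2.VideoCapture("traffic.mp4")            # placeholder input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = subtractor.apply(frame)                 # 255 = foreground, 127 = shadow
    moving = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]  # drop shadows
    moving = cv2.morphologyEx(moving, cv2.MORPH_OPEN,
                              cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    cv2.imshow("moving objects", moving)
    if cv2.waitKey(30) & 0xFF == 27:             # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```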

  7. PIZZARO: Forensic analysis and restoration of image and video data.

    Science.gov (United States)

    Kamenicky, Jan; Bartos, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozamsky, Adam; Saic, Stanislav; Sroubek, Filip; Sorel, Michal; Zita, Ales; Zitova, Barbara; Sima, Zdenek; Svarc, Petr; Horinek, Jan

    2016-07-01

    This paper introduces a set of methods for image and video forensic analysis. They were designed to help to assess image and video credibility and origin and to restore and increase image quality by diminishing unwanted blur, noise, and other possible artifacts. The motivation came from the best practices used in the criminal investigation utilizing images and/or videos. The determination of the image source, the verification of the image content, and image restoration were identified as the most important issues of which automation can facilitate criminalists work. Novel theoretical results complemented with existing approaches (LCD re-capture detection and denoising) were implemented in the PIZZARO software tool, which consists of the image processing functionality as well as of reporting and archiving functions to ensure the repeatability of image analysis procedures and thus fulfills formal aspects of the image/video analysis work. Comparison of new proposed methods with the state of the art approaches is shown. Real use cases are presented, which illustrate the functionality of the developed methods and demonstrate their applicability in different situations. The use cases as well as the method design were solved in tight cooperation of scientists from the Institute of Criminalistics, National Drug Headquarters of the Criminal Police and Investigation Service of the Police of the Czech Republic, and image processing experts from the Czech Academy of Sciences.

  8. Method and apparatus for reading meters from a video image

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, T.J.; Ferguson, J.J.

    1995-12-31

    A method and system enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower.

  9. Half-Tone Video Images Of Drifting Sinusoidal Gratings

    Science.gov (United States)

    Mulligan, Jeffrey B.; Stone, Leland S.

    1991-01-01

    Digital technique for generation of slowly moving video image of sinusoidal grating avoids difficulty of transferring full image data from disk storage to image memory at conventional frame rates. Depends partly on trigonometric identity by which moving sinusoidal grating is decomposed into two stationary patterns spatially and temporally modulated in quadrature. Makes motion appear smooth, even at speeds much less than one-tenth picture element per frame period. Applicable to digital video system in which image memory consists of at least 2 bits per picture element, and final brightness of picture element is determined by contents of "lookup-table" memory programmed anew each frame period and indexed by coordinates of each picture element.
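
    The quadrature decomposition referred to above is the standard angle-difference identity; writing it for a grating of spatial frequency k drifting at temporal frequency ω (symbols chosen here for illustration, not taken from the brief):

```latex
\underbrace{\sin\!\left(kx - \omega t\right)}_{\text{drifting grating}}
  \;=\; \sin(kx)\,\cos(\omega t)\;-\;\cos(kx)\,\sin(\omega t)
```

    The two stationary patterns sin(kx) and cos(kx) are in spatial quadrature, and their contrast weights cos(ωt) and -sin(ωt) are in temporal quadrature, so only two scalar weights change from frame to frame; this is what the per-frame reprogramming of the lookup-table memory supplies.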

  10. Estimating the Video Registration Using Image Motions

    Directory of Open Access Journals (Sweden)

    N.Kannaiya Raja

    2012-07-01

    Full Text Available In this research, we consider the problem of registering multiple video sequences of dynamic scenes, not limited to rigid objects, such as fireworks, blasting, and high-speed moving cars, taken from different vantage points. We propose a simple algorithm that creates corresponding frames across the videos for matching such complex scenes. Our algorithm does not require the cameras to be synchronized, and is not based on frame-by-frame or volume-by-volume registration. Instead, we model each video as the output of a linear dynamical system and transform the task of registering the video sequences to that of registering the parameters of the corresponding dynamical models. In this paper we use a joint framework to bring distinct frames together concurrently. The joint identification and the Jordan canonical form are not only applicable to the case of registering video sequences, but also to the entire genre of algorithms based on the dynamic texture model. We have also shown that, out of all the possible choices for the method of identification and canonical form, joint identification (JID) using the Jordan canonical form (JCF) performs the best.

  11. Image and Video Quality Assessment Using Neural Network and SVM

    Institute of Scientific and Technical Information of China (English)

    DING Wenrui; TONG Yubing; ZHANG Qishan; YANG Dongkai

    2008-01-01

    An image and video quality assessment method was developed using a neural network and support vector machines (SVM), with the peak signal to noise ratio (PSNR) and the structure similarity indexes used to describe image quality. The neural network was used to obtain the mapping functions between the objective quality assessment indexes and subjective quality assessment. The SVM was used to classify the images into different types, which were assessed using different mapping functions. Video quality was assessed based on the quality of each frame in the video sequence, with various weights to describe motion and scene changes in the video. The number of isolated points in the correlations of the image and video subjective and objective quality assessments was reduced by this method. Simulation results show that the method accurately assesses image quality. The monotonicity of the method for images is 6.94% higher than with the PSNR method, and the root mean square error is at least 35.90% higher than with the PSNR.
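
    For reference, the PSNR feature used above can be computed as follows (a standard definition for 8-bit images; this is not code from the paper):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized 8-bit images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")           # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```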

  12. Communicating pictures a course in image and video coding

    CERN Document Server

    Bull, David R

    2014-01-01

    Communicating Pictures starts with a unique historical perspective of the role of images in communications and then builds on this to explain the applications and requirements of a modern video coding system. It draws on the author's extensive academic and professional experience of signal processing and video coding to deliver a text that is algorithmically rigorous, yet accessible, relevant to modern standards, and practical. It offers a thorough grounding in visual perception, and demonstrates how modern image and video compression methods can be designed in order to meet the rate-quality performance levels demanded by today's applications, networks and users. With this book you will learn: Practical issues when implementing a codec, such as picture boundary extension and complexity reduction, with particular emphasis on efficient algorithms for transforms, motion estimators and error resilience Conflicts between conventional video compression, based on variable length coding and spatiotemporal prediction,...

  13. Indexing Film and Video Images for Storage and Retrieval.

    Science.gov (United States)

    Turner, James

    1994-01-01

    Discussion of indexing needs for film and video images focuses on appropriate access points for the storage and retrieval of individual shots which have not yet been included in a production. A study at the National Film Board of Canada is described that investigated ways to index non-art images. (18 references) (LRW)

  14. Video stabilization with sub-image phase correlation

    Institute of Scientific and Technical Information of China (English)

    Juanjuan Zhu; Baolong Guo

    2006-01-01

    A fast video stabilization method is presented, which consists of sub-image phase correlation based global motion estimation, Kalman filtering based motion smoothing, and motion modification based compensation. Global motion is determined using phase correlation in four sub-images. Then, the motion vectors are accumulated and Kalman filtered for smoothing. Motion compensation is applied to each frame with modification to prevent error propagation. Experimental results show that this stabilization system can remove unwanted translational jitter from video sequences and follow intentional scanning at real-time speed.
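
    The phase-correlation ingredient can be sketched with a plain FFT implementation, as below; the Kalman smoothing and compensation stages are not reproduced. Applied to corresponding sub-images of consecutive frames, the four estimated shifts could then be combined (e.g. by a median) into the global motion vector.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) shift between two equally sized sub-images
    using FFT-based phase correlation."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12               # keep phase information only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2:                     # wrap large indices to negative shifts
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```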

  15. Recent advances in intelligent image search and video retrieval

    CERN Document Server

    2017-01-01

    This book initially reviews the major feature representation and extraction methods and effective learning and recognition approaches, which have broad applications in the context of intelligent image search and video retrieval. It subsequently presents novel methods, such as improved soft assignment coding, Inheritable Color Space (InCS) and the Generalized InCS framework, the sparse kernel manifold learner method, the efficient Support Vector Machine (eSVM), and the Scale-Invariant Feature Transform (SIFT) features in multiple color spaces. Lastly, the book presents clothing analysis for subject identification and retrieval, and performance evaluation methods of video analytics for traffic monitoring. Digital images and videos are proliferating at an amazing speed in the fields of science, engineering and technology, media and entertainment. With the huge accumulation of such data, keyword searches and manual annotation schemes may no longer be able to meet the practical demand for retrieving relevant conte...

  16. Can social tagged images aid concept-based video search?

    NARCIS (Netherlands)

    Setz, A.T.; Snoek, C.G.M.

    2009-01-01

    This paper seeks to unravel whether commonly available social tagged images can be exploited as a training resource for concept-based video search. Since social tags are known to be ambiguous, overly personalized, and often error prone, we place special emphasis on the role of disambiguation. We pre

  17. Multiple-Instance Learning for Medical Image and Video Analysis.

    Science.gov (United States)

    Quellec, Gwenole; Cazuguel, Guy; Cochener, Beatrice; Lamard, Mathieu

    2017-01-10

    Multiple-Instance Learning (MIL) is a recent machine learning paradigm that is particularly well suited to Medical Image and Video Analysis (MIVA) tasks. Based solely on class labels assigned globally to images or videos, MIL algorithms learn to detect relevant patterns locally in images or videos. These patterns are then used for classification at a global level. Because supervision relies on global labels, manual segmentations are not needed to train MIL algorithms, unlike traditional Single-Instance Learning (SIL) algorithms. Consequently, these solutions are attracting increasing interest from the MIVA community: since the term was coined by Dietterich et al. in 1997, 73 research papers about MIL have been published in the MIVA literature. This paper reviews the existing strategies for modeling MIVA tasks as MIL problems, recommends general-purpose MIL algorithms for each type of MIVA task, and discusses MIVA-specific MIL algorithms. Various experiments performed on medical image and video datasets are compiled in order to back up these discussions. This meta-analysis shows that, besides being more convenient than SIL solutions, MIL algorithms are also more accurate in many cases. In other words, MIL is the ideal solution for many MIVA tasks. Recent trends are discussed and future directions are proposed for this emerging paradigm.

  18. Text Based Approach For Indexing And Retrieval Of Image And Video: A Review

    OpenAIRE

    2014-01-01

    Text data present in multimedia contain useful information for automatic annotation and indexing. The extracted information is used for recognizing overlay or scene text in a given video or image, and the extracted text can then be used for retrieving those videos and images. In this paper, we first discuss the different techniques for text extraction from images and videos. We then review the techniques for indexing and retrieval of images and videos using the extracted text.

  19. Sub-component modeling for face image reconstruction in video communications

    Science.gov (United States)

    Shiell, Derek J.; Xiao, Jing; Katsaggelos, Aggelos K.

    2008-08-01

    Emerging communications trends point to streaming video as a new form of content delivery. These systems are implemented over wired systems, such as cable or ethernet, and wireless networks, cell phones, and portable game systems. These communications systems require sophisticated methods of compression and error-resilience encoding to enable communications across band-limited and noisy delivery channels. Additionally, the transmitted video data must be of high enough quality to ensure a satisfactory end-user experience. Traditionally, video compression makes use of temporal and spatial coherence to reduce the information required to represent an image. In many communications systems, the communications channel is characterized by a probabilistic model which describes the capacity or fidelity of the channel. The implication is that information is lost or distorted in the channel, and requires concealment on the receiving end. We demonstrate a generative model based transmission scheme to compress human face images in video, which has the advantages of a potentially higher compression ratio, while maintaining robustness to errors and data corruption. This is accomplished by training an offline face model and using the model to reconstruct face images on the receiving end. We propose a sub-component AAM modeling the appearance of sub-facial components individually, and show face reconstruction results under different types of video degradation using a weighted and non-weighted version of the sub-component AAM.

  20. Evaluation of Skybox Video and Still Image products

    Science.gov (United States)

    d'Angelo, P.; Kuschk, G.; Reinartz, P.

    2014-11-01

    The SkySat-1 satellite launched by Skybox Imaging on November 21, 2013 opens a new chapter in civilian earth observation, as it is the first civilian satellite to image a target in high-definition panchromatic video for up to 90 seconds. The small satellite with a mass of 100 kg carries a telescope with 3 frame sensors. Two products are available: panchromatic video with a resolution of around 1 meter and a frame size of 2560 × 1080 pixels at 30 frames per second. Additionally, the satellite can collect still imagery with a swath of 8 km in the panchromatic band, and multispectral images with 4 bands. Using super-resolution techniques, sub-meter accuracy is reached for the still imagery. The paper provides an overview of the satellite design and imaging products. The still imagery product consists of 3 stripes of frame images with a footprint of approximately 2.6 × 1.1 km. Using bundle block adjustment, the frames are registered, and their accuracy is evaluated. The image quality of the panchromatic, multispectral and pansharpened products is evaluated. The video product used in this evaluation consists of a 60 second gazing acquisition of Las Vegas. A DSM is generated by dense stereo matching. Multiple techniques such as pairwise matching and multi-image matching are used and compared. As no ground-truth height reference model is available to the authors, comparisons are performed on flat surfaces and between differently matched DSMs. Additionally, visual inspection of the DSM and DSM profiles shows a detailed reconstruction of small features and large skyscrapers.

  1. The importance of video editing in automated image analysis in studies of the cerebral cortex.

    Science.gov (United States)

    Terry, R D; Deteresa, R

    1982-03-01

    Editing of the video image in computerized image analysis is readily accomplished with the appropriate apparatus, but slows the assay very significantly. In dealing with the cerebral cortex, however, video editing is of considerable importance in that cells are very often contiguous to one another or partially superimposed, which gives erroneous measurements unless those cells are artificially separated. Also important is the elimination of vascular cells from consideration by the automated counting apparatus. A third available mode of editing allows the filling-in of the cytoplasm of cell bodies which are not stained with sufficient intensity to be wholly detected. This study, which utilizes 23 samples, demonstrates that, in a given area of a histologic section of cerebral cortex, the number of small cells is greater and the number of large neurons is smaller with editing than without. Because not all cases follow this general pattern, inadequate editing may lead to significant errors in individual specimens as well as in the calculated mean. Video editing is therefore an essential part of the morphometric study of cerebral cortex by means of automated image analysis.

  2. Block-based embedded color image and video coding

    Science.gov (United States)

    Nagaraj, Nithin; Pearlman, William A.; Islam, Asad

    2004-01-01

    The Set Partitioned Embedded bloCK coder (SPECK) has been found to perform comparably to the best-known still grayscale image coders such as EZW, SPIHT and JPEG2000. In this paper, we first propose Color-SPECK (CSPECK), a natural extension of SPECK to handle color still images in the YUV 4:2:0 format. Extensions to other YUV formats are also possible. PSNR results indicate that CSPECK is among the best known color coders, while the perceptual quality of reconstruction is superior to that of SPIHT and JPEG2000. We then propose a moving-picture based coding system called Motion-SPECK with CSPECK as the core algorithm in an intra-based setting. Specifically, we demonstrate two modes of operation of Motion-SPECK, namely the constant-rate mode, where every frame is coded at the same bit-rate, and the constant-distortion mode, where we ensure the same quality for each frame. Results on well-known CIF sequences indicate that Motion-SPECK performs comparably to Motion-JPEG2000, while the visual quality of the sequence is in general superior. Both CSPECK and Motion-SPECK automatically inherit all the desirable features of SPECK such as embeddedness, low computational complexity, highly efficient performance, fast decoding and low dynamic memory requirements. The intended applications of Motion-SPECK would be high-end and emerging video applications such as high-quality digital video recording systems, Internet video and medical imaging.

  3. Registration and recognition in images and videos

    CERN Document Server

    Battiato, Sebastiano; Farinella, Giovanni

    2014-01-01

    Computer vision is the science and technology of making machines that see. It is concerned with the theory, design and implementation of algorithms that can automatically process visual data to recognize objects, track and recover their shape and spatial layout. The International Computer Vision Summer School - ICVSS was established in 2007 to provide both an objective and clear overview and an in-depth analysis of the state-of-the-art  research in Computer Vision. The courses are delivered by world renowned experts in the field, from both academia and industry, and cover both theoretical and practical aspects of real Computer Vision problems.  The school is organized every year by University of Cambridge (Computer Vision and Robotics Group) and University of Catania (Image Processing Lab). Different topics are covered each year.This edited volume contains a selection of articles covering some of the talks and tutorials held during the last editions of the school. The chapters provide an in-depth overview o...

  4. On-board processing of video image sequences

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl; Chanrion, Olivier Arnaud; Forchhammer, Søren

    2008-01-01

    of the mission is to study transient luminous events (TLE) above severe thunderstorms: the sprites, jets and elves. Other atmospheric phenomena are also studied including aurora, gravity waves and meteors. As part of the ASIM Phase B study, on-board processing of data from the cameras is being developed...... and evaluated. On-board there are six video cameras each capturing images of 1024times1024 pixels of 12 bpp at a frame rate of 15 fps, thus totalling 1080 Mbits/s. In comparison the average downlink data rate for these images is projected to be 50 kbit/s. This calls for efficient on-board processing to select...... and compress the data. Algorithms for on-board processing of the image data are presented as well as evaluation of the performance. The main processing steps are event detection, image cropping and image compression. The on-board processing requirements are also evaluated....

  5. The Implementation of Mirror-Image Effect in MPEG-2 Compressed Video

    Institute of Scientific and Technical Information of China (English)

    NI Qiang; ZHOU Lei; ZHANG Wen-jun

    2005-01-01

    Straightforward techniques for spatial-domain digital video editing (DVE) of compressed video via decompression and recompression are computationally expensive. In this paper, a novel algorithm is proposed for mirror-image special-effect editing in compressed video without full-frame decompression and motion estimation. The results show that, although the computational complexity is reduced, the quality of video edited in the compressed domain remains close to the quality of video edited in the uncompressed domain at the same bit rate.

  6. Computer simulation of orthognathic surgery with video imaging

    Science.gov (United States)

    Sader, Robert; Zeilhofer, Hans-Florian U.; Horch, Hans-Henning

    1994-04-01

    Patients with extreme jaw imbalance must often undergo operative corrections. The goal of therapy is to harmonize the stomatognathic system and to correct the aesthetics of the facial profile. A new procedure is presented which supports the maxillo-facial surgeon in planning the operation and which also presents the expected result of the treatment to the patient through video images. Once an x-ray has been digitized, it is possible to produce individualized cephalometric analyses. Using a ceph on screen, all current orthognathic operations can be simulated, whereby the bony segments are moved according to given parameters and a new soft-tissue profile is calculated. The profile of the patient is fed into the computer by way of a video system and correlated to the ceph. Using the simulated operation, the computer calculates a new video image of the patient which presents the expected postoperative appearance. In studies of patients treated between 1987 and 1991, 76 out of 121 patients were able to be evaluated. The deviation in profile change varied between 0.0 and 1.6 mm. A side effect of the practical application was an increase in patient compliance.

  7. Advanced methods for image registration applied to JET videos

    Energy Technology Data Exchange (ETDEWEB)

    Craciunescu, Teddy, E-mail: teddy.craciunescu@jet.uk [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Murari, Andrea [Consorzio RFX, Associazione EURATOM-ENEA per la Fusione, Padova (Italy); Gelfusa, Michela [Associazione EURATOM-ENEA – University of Rome “Tor Vergata”, Roma (Italy); Tiseanu, Ion; Zoita, Vasile [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Arnoux, Gilles [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon (United Kingdom)

    2015-10-15

    Graphical abstract: - Highlights: • Development of an image registration method for JET IR and fast visible cameras. • Method based on SIFT descriptors and coherent point drift points set registration technique. • Method able to deal with extremely noisy images and very low luminosity images. • Computation time compatible with the inter-shot analysis. - Abstract: The last years have witnessed a significant increase in the use of digital cameras on JET. They are routinely applied for imaging in the IR and visible spectral regions. One of the main technical difficulties in interpreting the data of camera based diagnostics is the presence of movements of the field of view. Small movements occur due to machine shaking during normal pulses while large ones may arise during disruptions. Some cameras show a correlation of image movement with change of magnetic field strength. For deriving unaltered information from the videos and for allowing correct interpretation an image registration method, based on highly distinctive scale invariant feature transform (SIFT) descriptors and on the coherent point drift (CPD) points set registration technique, has been developed. The algorithm incorporates a complex procedure for rejecting outliers. The method has been applied for vibrations correction to videos collected by the JET wide angle infrared camera and for the correction of spurious rotations in the case of the JET fast visible camera (which is equipped with an image intensifier). The method has proved to be able to deal with the images provided by this camera frequently characterized by low contrast and a high level of blurring and noise.
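
    As an illustration of the feature-based registration idea described above, the sketch below matches binary ORB features and fits a robust similarity transform with RANSAC before warping the frame onto the reference. It is a simplified stand-in assuming OpenCV 3.2 or later is available, not the authors' SIFT plus coherent point drift pipeline or their outlier-rejection procedure.

        import cv2
        import numpy as np

        def register_frame(reference, frame):
            """Align a gray-scale frame to a reference frame: detect and match ORB
            features, fit a robust similarity transform with RANSAC, and warp."""
            orb = cv2.ORB_create(nfeatures=2000)
            k1, d1 = orb.detectAndCompute(reference, None)
            k2, d2 = orb.detectAndCompute(frame, None)
            if d1 is None or d2 is None:
                return frame                          # not enough texture to register
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
            matches = sorted(matches, key=lambda m: m.distance)[:200]
            if len(matches) < 3:
                return frame
            src = np.float32([k2[m.trainIdx].pt for m in matches])   # points in frame
            dst = np.float32([k1[m.queryIdx].pt for m in matches])   # points in reference
            # RANSAC plays the role of an outlier-rejection step
            M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC,
                                               ransacReprojThreshold=3.0)
            if M is None:
                return frame
            h, w = reference.shape[:2]
            return cv2.warpAffine(frame, M, (w, h))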

  8. Multilingual Artificial Text Extraction and Script Identification from Video Images

    Directory of Open Access Journals (Sweden)

    Akhtar Jamil

    2016-04-01

    Full Text Available This work presents a system for extraction and script identification of multilingual artificial text appearing in video images. As opposed to most of the existing text extraction systems which target textual occurrences in a particular script or language, we have proposed a generic multilingual text extraction system that relies on a combination of unsupervised and supervised techniques. The unsupervised approach is based on application of image analysis techniques which exploit the contrast, alignment and geometrical properties of text and identify candidate text regions in an image. Potential text regions are then validated by an Artificial Neural Network (ANN using a set of features computed from Gray Level Co-occurrence Matrices (GLCM. The script of the extracted text is finally identified using texture features based on Local Binary Patterns (LBP. The proposed system was evaluated on video images containing textual occurrences in five different languages including English, Urdu, Hindi, Chinese and Arabic. The promising results of the experimental evaluations validate the effectiveness of the proposed system for text extraction and script identification.
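
    As an illustration of the texture features mentioned above, the sketch below computes a gray-level co-occurrence matrix and a few standard GLCM statistics in plain NumPy for a candidate text region. The offset, quantization levels, and feature choice are assumptions for illustration; the ANN validation and LBP-based script identification stages are not reproduced.

        import numpy as np

        def glcm(patch, levels=8, dx=1, dy=0):
            """Gray Level Co-occurrence Matrix of an 8-bit gray patch for one
            pixel offset (dx, dy), after quantization to `levels` gray levels."""
            q = np.clip((patch.astype(np.int32) * levels) // 256, 0, levels - 1)
            h, w = q.shape
            a = q[0:h - dy, 0:w - dx]                 # reference pixels
            b = q[dy:h, dx:w]                         # neighbours at offset (dx, dy)
            m = np.zeros((levels, levels), dtype=np.float64)
            np.add.at(m, (a.ravel(), b.ravel()), 1.0)
            return m / m.sum()

        def glcm_features(m):
            """A few standard GLCM statistics used as texture descriptors."""
            i, j = np.indices(m.shape)
            return {"contrast": float(np.sum(m * (i - j) ** 2)),
                    "energy": float(np.sum(m ** 2)),
                    "homogeneity": float(np.sum(m / (1.0 + np.abs(i - j))))}

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            candidate_region = rng.integers(0, 256, size=(32, 96), dtype=np.uint8)
            print(glcm_features(glcm(candidate_region)))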

  9. Video multiple watermarking technique based on image interlacing using DWT.

    Science.gov (United States)

    Ibrahim, Mohamed M; Abdel Kader, Neamat S; Zorkany, M

    2014-01-01

    Digital watermarking is one of the important techniques for securing digital media files in the domains of data authentication and copyright protection. In non-blind watermarking systems, the need for the original host file during watermark recovery imposes an overhead on system resources, doubling the required memory capacity and communications bandwidth. In this paper, a robust video multiple-watermarking technique based on image interlacing is proposed to solve this problem. In this technique, a three-level discrete wavelet transform (DWT) is used as the watermark embedding/extracting domain, the Arnold transform is used as the watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks, such as geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.
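
    A minimal sketch of DWT-domain embedding is shown below, assuming PyWavelets is available: a binary watermark is added to the level-3 approximation sub-band of a frame and recovered non-blindly purely for illustration. The frame interlacing and Arnold-transform encryption steps of the proposed technique are not reproduced, and the embedding strength alpha is an arbitrary choice.

        import numpy as np
        import pywt

        def embed_watermark(frame, watermark_bits, alpha=10.0, level=3, wavelet="haar"):
            """Additively embed a binary watermark into the level-3 approximation
            sub-band of a gray-scale frame."""
            coeffs = pywt.wavedec2(frame.astype(float), wavelet, level=level)
            approx = coeffs[0]
            wm = np.resize(watermark_bits.astype(float), approx.shape)  # tile/crop to fit
            coeffs[0] = approx + alpha * (2.0 * wm - 1.0)               # +alpha / -alpha per bit
            marked = pywt.waverec2(coeffs, wavelet)[:frame.shape[0], :frame.shape[1]]
            return np.clip(marked, 0, 255).astype(np.uint8)

        def extract_watermark(marked, original, level=3, wavelet="haar"):
            """Non-blind extraction for illustration: compare the approximation
            coefficients of the marked frame with those of the original."""
            c_marked = pywt.wavedec2(marked.astype(float), wavelet, level=level)[0]
            c_orig = pywt.wavedec2(original.astype(float), wavelet, level=level)[0]
            return (c_marked - c_orig) > 0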

  10. Refocusing images and videos with a conventional compact camera

    Science.gov (United States)

    Kang, Lai; Wu, Lingda; Wei, Yingmei; Song, Hanchen; Yang, Zheng

    2015-03-01

    Digital refocusing is an interesting and useful tool for generating dynamic depth-of-field (DOF) effects in many types of photography such as portraits and creative photography. Since most existing digital refocusing methods rely on four-dimensional light field captured by special precisely manufactured devices or a sequence of images captured by a single camera, existing systems are either expensive for wide practical use or incapable of handling dynamic scenes. We present a low-cost approach for refocusing high-resolution (up to 8 mega pixels) images and videos based on a single shot using an easy to build camera-mirror stereo system. Our proposed method consists of four main steps, namely system calibration, image rectification, disparity estimation, and refocusing rendering. The effectiveness of our proposed method has been evaluated extensively using both static and dynamic scenes with various depth ranges. Promising experimental results demonstrate that our method is able to simulate various controllable realistic DOF effects. To the best of our knowledge, our method is the first that allows one to refocus high-resolution images and videos of dynamic scenes captured by a conventional compact camera.

  11. Live-cell tracking using SIFT features in DIC microscopic videos.

    Science.gov (United States)

    Jiang, Richard M; Crookes, Danny; Luo, Nie; Davidson, Michael W

    2010-09-01

    In this paper, a novel motion-tracking scheme using scale-invariant features is proposed for automatic cell motility analysis in gray-scale microscopic videos, particularly for live-cell tracking in low-contrast differential interference contrast (DIC) microscopy. In the proposed approach, scale-invariant feature transform (SIFT) points around live cells in the microscopic image are detected, and a structure locality preservation (SLP) scheme using the Laplacian Eigenmap is proposed to track the SIFT feature points along successive frames of low-contrast DIC videos. Experiments on low-contrast DIC microscopic videos of various live-cell lines show that, in comparison with principal component analysis (PCA) based SIFT tracking, the proposed Laplacian-SIFT can significantly reduce the error rate of SIFT feature tracking. With this enhancement, further experimental results demonstrate that the proposed scheme is a robust and accurate approach to tackling the challenge of live-cell tracking in DIC microscopy.

  13. High Resolution Image Correspondences for Video Post-Production

    Directory of Open Access Journals (Sweden)

    Marcus Magnor

    2012-12-01

    Full Text Available We present an algorithm for estimating dense image correspondences. Our versatile approach lends itself to various tasks typical for video post-processing, including image morphing, optical flow estimation, stereo rectification, disparity/depth reconstruction, and baseline adjustment. We incorporate recent advances in feature matching, energy minimization, stereo vision, and data clustering into our approach. At the core of our correspondence estimation we use Efficient Belief Propagation for energy minimization. While state-of-the-art algorithms only work on thumbnail-sized images, our novel feature downsampling scheme in combination with a simple, yet efficient data term compression, can cope with high-resolution data. The incorporation of SIFT (Scale-Invariant Feature Transform features into data term computation further resolves matching ambiguities, making long-range correspondence estimation possible. We detect occluded areas by evaluating the correspondence symmetry, we further apply Geodesic matting to automatically determine plausible values in these regions.

  14. Capturing and displaying microscopic images used in medical diagnostics and forensic science using 4K video resolution - an application in higher education.

    Science.gov (United States)

    Maier, Hans; de Heer, Gert; Ortac, Ajda; Kuijten, Jan

    2015-11-01

    To analyze, interpret and evaluate microscopic images, used in medical diagnostics and forensic science, video images for educational purposes were made with a very high resolution of 4096 × 2160 pixels (4K), which is four times as many pixels as High-Definition Video (1920 × 1080 pixels). The unprecedented high resolution makes it possible to see details that remain invisible to any other video format. The images of the specimens (blood cells, tissue sections, hair, fibre, etc.) are recorded using a 4K video camera which is attached to a light microscope. After processing, this resulted in very sharp and highly detailed images. This material was then used in education for classroom discussion. Spoken explanation by experts in the field of medical diagnostics and forensic science was also added to the high-resolution video images to make it suitable for self-study.

  15. Video Field Studies with your Cell Phone

    DEFF Research Database (Denmark)

    Buur, Jacob; Fraser, Euan

    2010-01-01

    Many researchers and practitioners in HCI, Interaction Design, and Design Anthropology swear by video when doing field studies of potential users. This is due to the power of the medium for capturing practices and contexts, conveying empathy, and engaging audiences. Newcomers to the field, and in parti...

  16. DCT Based Secret Image Hiding In Video Sequence

    Directory of Open Access Journals (Sweden)

    M. Suresh Kumar

    2014-08-01

    Full Text Available The Internet is ever more exposed to interference by unauthorized people around the world, so reducing the chance of information being intercepted during transmission is a major issue these days. Cryptography is one solution to this problem, but it offers no secrecy once the content is decoded, so data hiding is used to keep information confidential. Watermarking for copyright protection is one such use of data hiding and provides security for digital media; its significance and the techniques used to implement data hiding are discussed briefly. The existing LSB modification technique randomly distributes the bits of the message in the image, which makes it harder for unauthorized persons to extract the original message but opens the door to losing important hidden information. Here, a hiding and extraction method is used for AVI (Audio Video Interleave) files, in which higher-order DCT coefficients carry the secret message bits. The hidden information takes the form of gray-scale image pixel values, which are converted into binary values; the resulting binary values are assigned to the higher-order DCT coefficients of the AVI video frames. These experiments were successful, and the results can be analyzed using MATLAB simulation software.
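
    The DCT-domain hiding idea can be sketched as follows: the toy functions below write one secret bit into a mid-frequency coefficient of each 8x8 block of a gray-scale frame and read it back from the coefficient's sign. This is only an assumed simplification of the scheme summarized above (block size, coefficient position, and embedding strength are arbitrary choices), not the paper's exact mapping of gray-level pixel values onto higher-order coefficients.

        import numpy as np

        # Orthonormal 8x8 DCT-II basis, so dct2/idct2 are exact inverses.
        _N = 8
        _k = np.arange(_N).reshape(-1, 1)
        _n = np.arange(_N).reshape(1, -1)
        _D = np.sqrt(2.0 / _N) * np.cos(np.pi * (2 * _n + 1) * _k / (2 * _N))
        _D[0, :] = np.sqrt(1.0 / _N)

        def dct2(block):
            return _D @ block @ _D.T

        def idct2(coefs):
            return _D.T @ coefs @ _D

        def embed_bits(frame, bits, strength=12.0, coef=(4, 3)):
            """Write one bit into a mid-frequency DCT coefficient of each 8x8 block."""
            f = frame.astype(np.float64).copy()
            idx = 0
            for y in range(0, f.shape[0] - 7, 8):
                for x in range(0, f.shape[1] - 7, 8):
                    if idx >= len(bits):
                        return np.clip(f, 0, 255).astype(np.uint8)
                    c = dct2(f[y:y + 8, x:x + 8])
                    c[coef] = strength if bits[idx] else -strength
                    f[y:y + 8, x:x + 8] = idct2(c)
                    idx += 1
            return np.clip(f, 0, 255).astype(np.uint8)

        def extract_bits(stego, n_bits, coef=(4, 3)):
            """Read the hidden bits back from the sign of the same coefficient."""
            f = stego.astype(np.float64)
            bits = []
            for y in range(0, f.shape[0] - 7, 8):
                for x in range(0, f.shape[1] - 7, 8):
                    if len(bits) >= n_bits:
                        return np.array(bits, dtype=np.uint8)
                    bits.append(int(dct2(f[y:y + 8, x:x + 8])[coef] > 0))
            return np.array(bits, dtype=np.uint8)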

  17. Image and Video Processing for Visually Handicapped People

    Directory of Open Access Journals (Sweden)

    Dimitrios Tzovaras

    2008-03-01

    Full Text Available This paper reviews the state of the art in the field of assistive devices for sight-handicapped people. It concentrates in particular on systems that use image and video processing for converting visual data into an alternate rendering modality that will be appropriate for a blind user. Such alternate modalities can be auditory, haptic, or a combination of both. There is thus the need for modality conversion, from the visual modality to another one; this is where image and video processing plays a crucial role. The possible alternate sensory channels are examined with the purpose of using them to present visual information to totally blind persons. Aids that are either already existing or still under development are then presented, where a distinction is made according to the final output channel. Haptic encoding is the most often used by means of either tactile or combined tactile/kinesthetic encoding of the visual data. Auditory encoding may lead to low-cost devices, but there is need to handle high information loss incurred when transforming visual data to auditory one. Despite a higher technical complexity, audio/haptic encoding has the advantage of making use of all available user's sensory channels.

  18. Image and Video Processing for Visually Handicapped People

    Directory of Open Access Journals (Sweden)

    Bologna Guido

    2007-01-01

    Full Text Available This paper reviews the state of the art in the field of assistive devices for sight-handicapped people. It concentrates in particular on systems that use image and video processing for converting visual data into an alternate rendering modality that will be appropriate for a blind user. Such alternate modalities can be auditory, haptic, or a combination of both. There is thus the need for modality conversion, from the visual modality to another one; this is where image and video processing plays a crucial role. The possible alternate sensory channels are examined with the purpose of using them to present visual information to totally blind persons. Aids that are either already existing or still under development are then presented, where a distinction is made according to the final output channel. Haptic encoding is the most often used by means of either tactile or combined tactile/kinesthetic encoding of the visual data. Auditory encoding may lead to low-cost devices, but there is need to handle high information loss incurred when transforming visual data to auditory one. Despite a higher technical complexity, audio/haptic encoding has the advantage of making use of all available user's sensory channels.

  19. Image and Video Indexing Using Networks of Operators

    Directory of Open Access Journals (Sweden)

    Jérôme Gensel

    2007-11-01

    Full Text Available This article presents a framework for the design of concept detection systems for image and video indexing. This framework integrates in a homogeneous way all the data and processing types. The semantic gap is crossed in a number of steps, each producing a small increase in the abstraction level of the handled data. All the data inside the semantic gap and on both sides included are seen as a homogeneous type called numcept, and all the processing modules between the various numcepts are seen as a homogeneous type called operator. Concepts are extracted from the raw signal using networks of operators operating on numcepts. These networks can be represented as data-flow graphs, and the introduced homogenizations allow fusing elements regardless of their nature. Low-level descriptors can be fused with intermediate or final concepts. This framework has been used to build a variety of indexing networks for images and videos and to evaluate many aspects of them. Using annotated corpora and protocols of the 2003 to 2006 TRECVID evaluation campaigns, the benefit brought by the use of individual features, the use of several modalities, the use of various fusion strategies, and the use of topologic and conceptual contexts was measured. The framework proved its efficiency for the design and evaluation of a series of network architectures while factorizing the training effort for common sub-networks.

  20. Control of Perceptual Image Quality Based on PID for Streaming Video

    Institute of Scientific and Technical Information of China (English)

    SONG Jian-xin

    2003-01-01

    Constant levels of perceptual video quality are what streaming users ideally expect. In most cases, however, they receive time-varying levels of video quality. In this paper, the author proposes a new method for controlling perceptual quality in variable-bit-rate video encoding for streaming video. An image quality calculation based on the perception of the human visual system is presented. The quantization properties of DCT coefficients are analyzed for effective control, and quantization scale factors are determined based on the visual masking effect. A Proportional-Integral-Derivative (PID) controller is used to control the image quality. Experimental results show that this method improves the uniformity of the perceptual quality of encoded video.
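
    A minimal sketch of the control idea is given below: a PID loop adjusts the quantization scale so that a measured quality score tracks a constant target. The gains, the quantization range, and the quality measurement are hypothetical placeholders; the paper's perceptual quality calculation and visual masking model are not reproduced.

        class PIDQuantizerControl:
            """PID loop that nudges the quantization scale so a measured quality
            score tracks a constant target."""

            def __init__(self, target, kp=0.8, ki=0.1, kd=0.05, q_min=1.0, q_max=31.0):
                self.target, self.kp, self.ki, self.kd = target, kp, ki, kd
                self.q_min, self.q_max = q_min, q_max
                self.integral = 0.0
                self.prev_error = 0.0
                self.q = 0.5 * (q_min + q_max)

            def update(self, measured_quality):
                # quality below target -> positive error -> lower the quantization scale
                error = self.target - measured_quality
                self.integral += error
                derivative = error - self.prev_error
                self.prev_error = error
                self.q -= self.kp * error + self.ki * self.integral + self.kd * derivative
                self.q = min(max(self.q, self.q_min), self.q_max)
                return self.q

        # per frame: quality = measure_quality(decoded_frame); q = controller.update(quality)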

  1. Video rate spectral imaging using a coded aperture snapshot spectral imager.

    Science.gov (United States)

    Wagadarikar, Ashwin A; Pitsianis, Nikos P; Sun, Xiaobai; Brady, David J

    2009-04-13

    We have previously reported on coded aperture snapshot spectral imagers (CASSI) that can capture a full frame spectral image in a snapshot. Here we describe the use of CASSI for spectral imaging of a dynamic scene at video rate. We describe significant advances in the design of the optical system, system calibration procedures and reconstruction method. The new optical system uses a double Amici prism to achieve an in-line, direct view configuration, resulting in a substantial improvement in image quality. We describe NeAREst, an algorithm for estimating the instantaneous three-dimensional spatio-spectral data cube from CASSI's two-dimensional array of encoded and compressed measurements. We utilize CASSI's snapshot ability to demonstrate a spectral image video of multi-colored candles with live flames captured at 30 frames per second.

  2. Quantification of video-taped images in microcirculation research using inexpensive imaging software (Adobe Photoshop).

    Science.gov (United States)

    Brunner, J; Krummenauer, F; Lehr, H A

    2000-04-01

    Study end-points in microcirculation research are usually video-taped images rather than numeric computer print-outs. Analysis of these video-taped images for the quantification of microcirculatory parameters usually requires computer-based image analysis systems. Most software programs for image analysis are custom-made, expensive, and limited in their applicability to selected parameters and study end-points. We demonstrate herein that an inexpensive, commercially available computer software (Adobe Photoshop), run on a Macintosh G3 computer with inbuilt graphic capture board provides versatile, easy to use tools for the quantification of digitized video images. Using images obtained by intravital fluorescence microscopy from the pre- and postischemic muscle microcirculation in the skinfold chamber model in hamsters, Photoshop allows simple and rapid quantification (i) of microvessel diameters, (ii) of the functional capillary density and (iii) of postischemic leakage of FITC-labeled high molecular weight dextran from postcapillary venules. We present evidence of the technical accuracy of the software tools and of a high degree of interobserver reliability. Inexpensive commercially available imaging programs (i.e., Adobe Photoshop) provide versatile tools for image analysis with a wide range of potential applications in microcirculation research.

  3. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    Science.gov (United States)

    Liang, Yu-Li

    Multimedia data is increasingly important in scientific discovery and people's daily lives. The content of massive multimedia collections is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Among all formats, still images and videos are the most commonly used. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are sets of continuous images with low frame rate, stand out because they are smaller than videos and still maintain motion information. This thesis investigates features in different types of noisy sequential images and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes above the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes above the ice is hampered by diverse image quality and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, even in cloudy images. The proposed system fully automates the procedure and tracks lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes, which led to new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of videos captured by home web cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, which is the first solution that achieves satisfactory

  4. Video lensfree microscopy of 2D and 3D culture of cells

    Science.gov (United States)

    Allier, C. P.; Vinjimore Kesavan, S.; Coutard, J.-G.; Cioni, O.; Momey, F.; Navarro, F.; Menneteau, M.; Chalmond, B.; Obeid, P.; Haguet, V.; David-Watine, B.; Dubrulle, N.; Shorte, S.; van der Sanden, B.; Di Natale, C.; Hamard, L.; Wion, D.; Dolega, M. E.; Picollet-D'hahan, N.; Gidrol, X.; Dinten, J.-M.

    2014-03-01

    Innovative imaging methods are continuously developed to investigate the function of biological systems at the microscopic scale. As an alternative to advanced cell microscopy techniques, we are developing lensfree video microscopy, which opens new ranges of capabilities, in particular at the mesoscopic level. Lensfree video microscopy allows the observation of a cell culture in an incubator over a very large field of view (24 mm2) for extended periods of time. As a result, a large set of comprehensive data can be gathered with strong statistics, both in space and time. Video lensfree microscopy can capture images of cells cultured in various physical environments. We focus on two different case studies: the quantitative analysis of the spontaneous network formation of HUVEC endothelial cells, and the coupling of lensfree microscopy with 3D cell culture to study epithelial tissue morphogenesis. In summary, we demonstrate that lensfree video microscopy is a powerful tool for conducting cell assays in 2D and 3D culture experiments. The applications are in the realms of fundamental biology, tissue regeneration, drug development and toxicology studies.

  5. Super deep 3D images from a 3D omnifocus video camera.

    Science.gov (United States)

    Iizuka, Keigo

    2012-02-20

    When using stereographic image pairs to create three-dimensional (3D) images, a deep depth of field in the original scene enhances the depth perception in the 3D image. The omnifocus video camera has no depth of field limitations and produces images that are in focus throughout. By installing an attachment on the omnifocus video camera, real-time super deep stereoscopic pairs of video images were obtained. The deeper depth of field creates a larger perspective image shift, which makes greater demands on the binocular fusion of human vision. A means of reducing the perspective shift without harming the depth of field was found.

  6. Acquisition and Analysis of Dynamic Responses of a Historic Pedestrian Bridge using Video Image Processing

    Science.gov (United States)

    O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; O'Donnell, Deirdre; Wright, Robert; Pakrashi, Vikram

    2015-07-01

    Video based tracking is capable of analysing bridge vibrations that are characterised by large amplitudes and low frequencies. This paper presents the use of video images and associated image processing techniques to obtain the dynamic response of a pedestrian suspension bridge in Cork, Ireland. This historic structure is one of the four suspension bridges in Ireland and is notable for its dynamic nature. A video camera is mounted on the river-bank and the dynamic responses of the bridge have been measured from the video images. The dynamic response is assessed without the need for a reflector on the bridge and in the presence of various forms of luminous complexities in the video image scenes. Vertical deformations of the bridge were measured in this regard. The video image tracking for the measurement of dynamic responses of the bridge was based on correlating patches in time-lagged scenes in video images and utilising a zero mean normalised cross correlation (ZNCC) metric. The bridge was excited by designed pedestrian movement and by individual cyclists traversing the bridge. The time series data of dynamic displacement responses of the bridge were analysed to obtain the frequency domain response. Frequencies obtained from video analysis were checked against accelerometer data from the bridge obtained while carrying out the same set of experiments used for video image based recognition.
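
    A minimal sketch of ZNCC-based patch tracking is given below: a template patch from one frame is located in a small search window of the next frame by maximising the zero mean normalised cross correlation. The patch size, search radius, and exhaustive search strategy are assumptions for illustration, not the exact implementation used in the paper.

        import numpy as np

        def zncc(a, b):
            """Zero mean normalised cross correlation of two equal-size patches."""
            a = a.astype(float) - a.mean()
            b = b.astype(float) - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return float((a * b).sum() / denom) if denom > 0 else 0.0

        def track_patch(prev_frame, next_frame, top_left, size=21, search=15):
            """Find the offset (dy, dx) of a patch between consecutive frames by
            maximising ZNCC over a small search window."""
            y0, x0 = top_left
            template = prev_frame[y0:y0 + size, x0:x0 + size]
            best_score, best_offset = -2.0, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = y0 + dy, x0 + dx
                    if y < 0 or x < 0:
                        continue                       # search window leaves the image
                    candidate = next_frame[y:y + size, x:x + size]
                    if candidate.shape != template.shape:
                        continue
                    score = zncc(template, candidate)
                    if score > best_score:
                        best_score, best_offset = score, (dy, dx)
            return best_offset, best_score             # vertical deflection is best_offset[0]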

  7. A comparison of continuous vs. discrete image models for probabilistic image and video retrieval

    NARCIS (Netherlands)

    Vries, A.P. de; Westerveld, T.H.W.

    2004-01-01

    The language modeling approach to retrieval is based on the philosophy that the language in a relevant document follows the same distribution as that in the query. This same philosophy can also be applied to content-based image and video retrieval, where the only difference lies in the definition of

  8. Spatially reduced image extraction from MPEG-2 video: fast algorithms and applications

    Science.gov (United States)

    Song, Junehwa; Yeo, Boon-Lock

    1997-12-01

    The MPEG-2 video standards are targeted for high-quality video broadcast and distribution, and are optimized for efficient storage and transmission. However, it is difficult to process MPEG-2 for video browsing and database applications without first decompressing the video. Yeo and Liu have proposed fast algorithms for the direct extraction of spatially reduced images from MPEG-1 video. Reduced images have been demonstrated to be effective for shot detection, shot browsing and editing, and temporal processing of video for video presentation and content annotation. In this paper, we develop new tools to handle the extra complexity in MPEG-2 video for extracting spatially reduced images. In particular, we propose new classes of discrete cosine transform (DCT) domain and DCT inverse motion compensation operations for handling the interlaced modes in the different frame types of MPEG-2, and design new and efficient algorithms for generating spatially reduced images of an MPEG-2 video. We also describe key video applications on the extracted reduced images.
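
    The central idea of spatially reduced images can be sketched as follows. In the compressed domain the reduced image is assembled directly from each 8x8 block's DC DCT coefficient (which is proportional to the block mean); the NumPy sketch below averages pixel blocks of an already-decoded frame purely to show the equivalence, and does not perform the DCT-domain inverse motion compensation developed in the paper.

        import numpy as np

        def dc_image(frame):
            """Spatially reduced image: one value per 8x8 block.  In the compressed
            domain this value comes straight from the block's DC DCT coefficient
            (DC/8 equals the block mean); here pixel blocks of a decoded gray-scale
            frame are averaged to show the equivalence."""
            h, w = frame.shape
            h8, w8 = h - h % 8, w - w % 8
            blocks = frame[:h8, :w8].astype(float).reshape(h8 // 8, 8, w8 // 8, 8)
            return blocks.mean(axis=(1, 3))            # (h/8, w/8) reduced image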

  9. Human body motion capture from multi-image video sequences

    Science.gov (United States)

    D'Apuzzo, Nicola

    2003-01-01

    In this paper is presented a method to capture the motion of the human body from multi image video sequences without using markers. The process is composed of five steps: acquisition of video sequences, calibration of the system, surface measurement of the human body for each frame, 3-D surface tracking and tracking of key points. The image acquisition system is currently composed of three synchronized progressive scan CCD cameras and a frame grabber which acquires a sequence of triplet images. Self calibration methods are applied to gain exterior orientation of the cameras, the parameters of internal orientation and the parameters modeling the lens distortion. From the video sequences, two kinds of 3-D information are extracted: a three-dimensional surface measurement of the visible parts of the body for each triplet and 3-D trajectories of points on the body. The approach for surface measurement is based on multi-image matching, using the adaptive least squares method. A full automatic matching process determines a dense set of corresponding points in the triplets. The 3-D coordinates of the matched points are then computed by forward ray intersection using the orientation and calibration data of the cameras. The tracking process is also based on least squares matching techniques. Its basic idea is to track triplets of corresponding points in the three images through the sequence and compute their 3-D trajectories. The spatial correspondences between the three images at the same time and the temporal correspondences between subsequent frames are determined with a least squares matching algorithm. The results of the tracking process are the coordinates of a point in the three images through the sequence, thus the 3-D trajectory is determined by computing the 3-D coordinates of the point at each time step by forward ray intersection. Velocities and accelerations are also computed. The advantage of this tracking process is twofold: it can track natural points

  10. Enhancement of Video Images Degraded by Turbid Water

    Science.gov (United States)

    1986-12-01

    Naval Postgraduate School thesis, Monterey, California, by Jorge A. ... (only fragments of the scanned abstract and its program listing are legible).

  11. Accelerating Image Based Scientific Applications using Commodity Video Graphics Adapters

    Directory of Open Access Journals (Sweden)

    Randy P. Broussard

    2009-06-01

    Full Text Available The processing power available in current video graphics cards is approaching supercomputer levels. State-of-the-art graphics processing units (GPUs) boast computational performance in the range of 1.0-1.1 trillion floating point operations per second (1.0-1.1 teraflops). Making this processing power accessible to the scientific community would benefit many fields of research. This research takes a relatively computationally expensive image-based iris segmentation algorithm and hosts it on a GPU using the High Level Shader Language (HLSL), which is part of DirectX 9.0. The selected segmentation algorithm uses basic image processing techniques such as image inversion, value squaring, thresholding, dilation, erosion and a computationally intensive local kurtosis (fourth central moment) calculation. Strengths and limitations of the DirectX rendering pipeline are discussed. The primary source of the graphical processing power, the pixel or fragment shader, is discussed in detail. Impressive acceleration results were obtained: the iris segmentation algorithm was accelerated by a factor of 40 over the highly optimized C++ version hosted on the computer's central processing unit, and some parts of the algorithm ran at speeds over 100 times faster than their C++ counterparts. GPU programming details and HLSL code samples are presented as part of the acceleration discussion.
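
    For reference, the local kurtosis mentioned above can be expressed as the normalised fourth central moment over a sliding window. The CPU sketch below, assuming SciPy is available, computes it from running raw moments; the window size is an assumption, and this is only a readable reference for what the pixel shader evaluates per pixel, not the paper's HLSL implementation.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def local_kurtosis(img, size=15, eps=1e-8):
            """Normalised fourth central moment of each pixel's size x size
            neighbourhood, computed from running raw moments."""
            x = img.astype(np.float64)
            m1 = uniform_filter(x, size)
            m2 = uniform_filter(x * x, size)
            m3 = uniform_filter(x ** 3, size)
            m4 = uniform_filter(x ** 4, size)
            var = np.maximum(m2 - m1 ** 2, eps)
            mu4 = m4 - 4 * m1 * m3 + 6 * m1 ** 2 * m2 - 3 * m1 ** 4
            return mu4 / (var ** 2)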

  12. Energy Efficient Image/Video Data Transmission on Commercial Multi-Core Processors

    Directory of Open Access Journals (Sweden)

    Daihee Park

    2012-11-01

    Full Text Available In transmitting image/video data over Video Sensor Networks (VSNs, energy consumption must be minimized while maintaining high image/video quality. Although image/video compression is well known for its efficiency and usefulness in VSNs, the excessive costs associated with encoding computation and complexity still hinder its adoption for practical use. However, it is anticipated that high-performance handheld multi-core devices will be used as VSN processing nodes in the near future. In this paper, we propose a way to improve the energy efficiency of image and video compression with multi-core processors while maintaining the image/video quality. We improve the compression efficiency at the algorithmic level or derive the optimal parameters for the combination of a machine and compression based on the tradeoff between the energy consumption and the image/video quality. Based on experimental results, we confirm that the proposed approach can improve the energy efficiency of the straightforward approach by a factor of 2~5 without compromising image/video quality.

  13. Heterogeneity image patch index and its application to consumer video summarization.

    Science.gov (United States)

    Dang, Chinh T; Radha, Hayder

    2014-06-01

    Automatic video summarization is indispensable for fast browsing and efficient management of large video libraries. In this paper, we introduce an image feature that we refer to as the heterogeneity image patch (HIP) index. The proposed HIP index provides a new entropy-based measure of the heterogeneity of patches within any picture. By evaluating this index for every frame in a video sequence, we generate a HIP curve for that sequence. We exploit the HIP curve in solving two categories of video summarization applications: key frame extraction and dynamic video skimming. Under the key frame extraction framework, a set of candidate key frames is selected from abundant video frames based on the HIP curve. Then, a proposed patch-based image dissimilarity measure is used to create an affinity matrix of these candidates. Finally, a set of key frames is extracted from the affinity matrix using a min–max based algorithm. Under video skimming, we propose a method to measure the distance between a video and its skimmed representation. The video skimming problem is then mapped into an optimization framework and solved by minimizing a HIP-based distance for a set of extracted excerpts. The HIP framework is pixel-based and does not require semantic information or complex camera motion estimation. Our simulation results are based on experiments performed on consumer videos and are compared with state-of-the-art methods. It is shown that the HIP approach outperforms other leading methods, while maintaining low complexity.
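
    The notion of an entropy-based patch heterogeneity measure evaluated per frame can be sketched as follows. The formulation below (patch mean as the descriptor, a fixed histogram, Shannon entropy) is an assumed simplification for 8-bit gray-scale frames, not the paper's exact HIP definition.

        import numpy as np

        def patch_heterogeneity(frame, patch=16, bins=32):
            """Entropy of a histogram of per-patch descriptors (here the patch
            mean) for an 8-bit gray-scale frame: homogeneous frames score low,
            cluttered frames score high."""
            h, w = frame.shape
            hp, wp = h - h % patch, w - w % patch
            means = (frame[:hp, :wp].astype(float)
                     .reshape(hp // patch, patch, wp // patch, patch)
                     .mean(axis=(1, 3)))
            hist, _ = np.histogram(means, bins=bins, range=(0.0, 255.0))
            p = hist / max(hist.sum(), 1)
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        def heterogeneity_curve(frames):
            """Per-frame curve whose peaks and valleys drive key-frame selection."""
            return np.array([patch_heterogeneity(f) for f in frames])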

  14. Frequency Identification of Vibration Signals Using Video Camera Image Data

    Directory of Open Access Journals (Sweden)

    Chia-Hung Wu

    2012-10-01

    Full Text Available This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC can precisely capture most dominant modes of vibration signal, but may involve the non-physical modes induced by the insufficient frame rates. Using a simple model, frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has the critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that the non-contact type image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system.

  15. Frequency identification of vibration signals using video camera image data.

    Science.gov (United States)

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-10-16

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of vibration signal, but may involve the non-physical modes induced by the insufficient frame rates. Using a simple model, frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has the critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that the non-contact type image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system.
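
    The frequency identification step described above amounts to taking the spectrum of the summed gray-level signal of a small image region over time. A minimal NumPy sketch is given below; the sampling rate, the synthetic test signal, and the peak-picking rule are illustrative assumptions, and it does not model the suppression factors examined in the paper beyond noting that components above half the frame rate alias into the band.

        import numpy as np

        def dominant_frequencies(brightness, fps, top_k=3):
            """Strongest frequencies in the summed gray-level signal of a small
            region sampled at `fps` frames per second.  Components above fps/2
            alias back into the band (the non-physical modes discussed above)."""
            x = np.asarray(brightness, dtype=float)
            x = x - x.mean()                           # drop the DC component
            spectrum = np.abs(np.fft.rfft(x))
            freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
            order = np.argsort(spectrum)[::-1][:top_k]
            return [(float(freqs[i]), float(spectrum[i])) for i in order]

        if __name__ == "__main__":
            fps, seconds = 120.0, 5.0
            t = np.arange(int(fps * seconds)) / fps
            signal = 3.0 * np.sin(2 * np.pi * 17.0 * t)        # a 17 Hz vibration
            signal += np.random.default_rng(3).normal(0.0, 0.3, t.size)
            print(dominant_frequencies(signal, fps))           # ~17 Hz should dominate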

  16. AUTOMATED VIDEO IMAGE MORPHOMETRY OF THE CORNEAL ENDOTHELIUM

    NARCIS (Netherlands)

    SIERTSEMA, JV; LANDESZ, M; VANDENBROM, H; VANRIJ, G

    1993-01-01

    The central corneal endothelium of 13 eyes in 13 subjects was visualized with a non-contact specular microscope. This report describes the computer-assisted morphometric analysis of enhanced digitized images, using a direct input by means of a frame grabber. The output consisted of mean cell area, c

  17. Quality Assessment of Adaptive Bitrate Videos using Image Metrics and Machine Learning

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Brunnström, Kjell

    2015-01-01

    Adaptive bitrate (ABR) streaming is widely used for distribution of videos over the internet. In this work, we investigate how well we can predict the quality of such videos using well-known image metrics, information about the bitrate levels, and a relatively simple machine learning method...

  18. Potential usefulness of a video printer for producing secondary images from digitized chest radiographs

    Science.gov (United States)

    Nishikawa, Robert M.; MacMahon, Heber; Doi, Kunio; Bosworth, Eric

    1991-05-01

    Communication between radiologists and clinicians could be improved if a secondary image (copy of the original image) accompanied the radiologic report. In addition, the number of lost original radiographs could be decreased, since clinicians would have less need to borrow films. The secondary image should be simple and inexpensive to produce, while providing sufficient image quality for verification of the diagnosis. We are investigating the potential usefulness of a video printer for producing copies of radiographs, i.e. images printed on thermal paper. The video printer we examined (Seikosha model VP-3500) can provide 64 shades of gray. It is capable of recording images up to 1,280 pixels by 1,240 lines and can accept any raster-type video signal. The video printer was characterized in terms of its linearity, contrast, latitude, resolution, and noise properties. The quality of video-printer images was also evaluated in an observer study using portable chest radiographs. We found that observers could confirm up to 90% of the reported findings in the thorax using video-printer images, when the original radiographs were of high quality. The number of verified findings was diminished when high spatial resolution was required (e.g. detection of a subtle pneumothorax) or when a low-contrast finding was located in the mediastinal area or below the diaphragm (e.g. nasogastric tubes).

  19. Objectification of perceptual image quality for mobile video

    Science.gov (United States)

    Lee, Seon-Oh; Sim, Dong-Gyu

    2011-06-01

    This paper presents an objective video quality evaluation method for quantifying the subjective quality of digital mobile video. The proposed method aims to objectify the subjective quality by extracting edgeness and blockiness parameters. To evaluate the performance of the proposed algorithms, we carried out subjective video quality tests with the double-stimulus continuous quality scale method and obtained differential mean opinion score values for 120 mobile video clips. We then compared the performance of the proposed methods with that of existing methods in terms of the differential mean opinion score with 120 mobile video clips. Experimental results showed that the proposed methods were approximately 10% better than the edge peak signal-to-noise ratio of the J.247 method in terms of the Pearson correlation.

  20. What do we do with all this video? Better understanding public engagement for image and video annotation

    Science.gov (United States)

    Wiener, C.; Miller, A.; Zykov, V.

    2016-12-01

    Advanced robotic vehicles are increasingly being used by oceanographic research vessels to enable more efficient and widespread exploration of the ocean, particularly the deep ocean. With cutting-edge capabilities mounted onto robotic vehicles, data at high resolutions is being generated more than ever before, enabling enhanced data collection and the potential for broader participation. For example, high resolution camera technology not only improves visualization of the ocean environment, but also expands the capacity to engage participants remotely through increased use of telepresence and virtual reality techniques. Schmidt Ocean Institute is a private, non-profit operating foundation established to advance the understanding of the world's oceans through technological advancement, intelligent observation and analysis, and open sharing of information. Telepresence-enabled research is an important component of Schmidt Ocean Institute's science research cruises, which this presentation will highlight. Schmidt Ocean Institute is one of the only research programs that make their entire underwater vehicle dive series available online, creating a collection of video that enables anyone to follow deep sea research in real time. We encourage students, educators and the general public to take advantage of freely available dive videos. Additionally, other SOI-supported internet platforms, have engaged the public in image and video annotation activities. Examples of these new online platforms, which utilize citizen scientists to annotate scientific image and video data will be provided. This presentation will include an introduction to SOI-supported video and image tagging citizen science projects, real-time robot tracking, live ship-to-shore communications, and an array of outreach activities that enable scientists to interact with the public and explore the ocean in fascinating detail.

  1. Video object's behavior analyzing based on motion history image and hidden markov model

    Institute of Scientific and Technical Information of China (English)

    Meng Fanfeng; Qu Zhenshen; Zeng Qingshuang; Li li

    2009-01-01

    A novel method was proposed, which extracted the video object's track and analyzed the video object's behavior. Firstly, this method tracked the video object based on the motion history image, and obtained the coordinate-based and orientation-based track sequences of the video object. Then the proposed hidden Markov model (HMM) based algorithm was used to analyze the behavior of the video object with the track sequence as input. Experimental results on traffic objects show that this method can efficiently compute statistics over a large number of traffic objects' behaviors, can derive a reasonable velocity behavior curve for a traffic object, and can recognize a traffic object's various behaviors accurately. It provides a basis for further research on video object behavior.
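
    A minimal numpy sketch of the motion history image (MHI) update that such a tracker builds on is given below; the decay length tau and the frame-difference threshold are assumed values, and the HMM stage is omitted.

```python
import numpy as np

def update_mhi(mhi, prev_frame, frame, tau=30.0, thresh=25):
    """Update a motion history image: moving pixels are set to tau, the rest decay by 1.

    mhi, prev_frame and frame are 2D grayscale arrays of the same shape; tau is
    the history length in frames and thresh the frame-difference threshold.
    """
    motion = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) > thresh
    return np.where(motion, tau, np.maximum(mhi - 1.0, 0.0))

# Usage: iterate over the video, keeping the previous frame and the running MHI;
# a per-frame track point (e.g. the centroid of the strongest MHI region) would
# then form the coordinate/orientation sequences fed to the HMM described above.
```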

  2. 17 CFR 232.304 - Graphic, image, audio and video material.

    Science.gov (United States)

    2010-04-01

    ... delivered to investors and others is deemed part of the electronic filing and subject to the civil liability..., image, audio or video material, they are not subject to the civil liability and anti-fraud provisions...

  3. Field methods to measure surface displacement and strain with the Video Image Correlation method

    Science.gov (United States)

    Maddux, Gary A.; Horton, Charles M.; Mcneill, Stephen R.; Lansing, Matthew D.

    1994-01-01

    The objective of this project was to develop methods and application procedures to measure displacement and strain fields during the structural testing of aerospace components using paint speckle in conjunction with the Video Image Correlation (VIC) system.

  4. Compact Video Microscope Imaging System Implemented in Colloid Studies

    Science.gov (United States)

    McDowell, Mark

    2002-01-01

    [Photographs show the fiber-optic light source; the microscope and charge-coupled device (CCD) camera head connected to the camera body; the CCD camera body feeding data to an image acquisition board in a PC; and a Cartesian robot controlled via a PC board.] The Compact Microscope Imaging System (CMIS) is a diagnostic tool with intelligent controls for use in space, industrial, medical, and security applications. CMIS can be used in situ with a minimum amount of user intervention. This system can scan, find areas of interest in, focus on, and acquire images automatically. Many multiple-cell experiments require microscopy for in situ observations; this is feasible only with compact microscope systems. CMIS is a miniature machine vision system that combines intelligent image processing with remote control. The software also has a user-friendly interface, which can be used independently of the hardware for further post-experiment analysis. CMIS has been successfully developed in the SML Laboratory at the NASA Glenn Research Center, adapted for use in colloid studies, and is available for telescience experiments. The main innovations this year are an improved interface, optimized algorithms, and the ability to control conventional full-sized microscopes in addition to compact microscopes. The CMIS software-hardware interface is being integrated into our SML Analysis package, which will be a robust general-purpose image-processing package that can handle over 100 space and industrial applications.

  5. Simultaneous compression and encryption of closely resembling images: application to video sequences and polarimetric images.

    Science.gov (United States)

    Aldossari, M; Alfalou, A; Brosseau, C

    2014-09-22

    This study presents and validates an optimized method of simultaneous compression and encryption designed to process images with close spectra. This approach is well adapted to the compression and encryption of images of a time-varying scene but also to static polarimetric images. We use the recently developed spectral fusion method [Opt. Lett. 35, 1914-1916 (2010)] to deal with the close resemblance of the images. The spectral plane (containing the information to send and/or to store) is decomposed into several independent areas which are assigned in a specific way. In addition, each spectrum is shifted in order to minimize their overlap. The dual purpose of these operations is to optimize the spectral plane, allowing us to keep the low- and high-frequency information (compression) and to introduce an additional noise for reconstructing the images (encryption). Our results show that not only can the control of the spectral plane enhance the number of spectra to be merged, but also that a compromise between the compression rate and the quality of the reconstructed images can be tuned. We use a root-mean-square (RMS) optimization criterion to treat compression. Image encryption is realized at different security levels. Firstly, we add a specific encryption level which is related to the different areas of the spectral plane, and then we make use of several random phase keys. An in-depth analysis of the spectral fusion methodology is done in order to find a good trade-off between the compression rate and the quality of the reconstructed images. Our newly proposed spectral shift allows us to minimize the image overlap. We further analyze the influence of the spectral shift on the reconstructed image quality and compression rate. The performance of the multiple-image optical compression and encryption method is verified by analyzing several video sequences and polarimetric images.

  6. Single-channel stereoscopic video imaging modality based on transparent rotating deflector.

    Science.gov (United States)

    Radfar, Edalat; Jang, Won Hyuk; Freidoony, Leila; Park, Jihoon; Kwon, Kichul; Jung, Byungjo

    2015-10-19

    In this study, we developed a single-channel stereoscopic video imaging modality based on a transparent rotating deflector (TRD). Sequential two-dimensional (2D) left and right images were obtained through the TRD synchronized with a camera, and the components of the imaging modality were controlled by a microcontroller unit. The imaging modality was characterized by evaluating the stereoscopic video image generation, rotation of the TRD, heat generation by the stepping motor, and image quality and its stability in terms of the structural similarity index. The degree of depth perception was estimated and subjective analysis was performed to evaluate the depth perception improvement. The results show that the single-channel stereoscopic video imaging modality may: 1) overcome some limitations of conventional stereoscopic video imaging modalities; 2) be a potential economical compact stereoscopic imaging modality if the system components can be miniaturized; 3) be easily integrated into current 2D optical imaging modalities to produce a stereoscopic image; and 4) be applied to various medical and industrial fields.
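
    The abstract reports image-quality stability in terms of the structural similarity index; a minimal sketch of how SSIM between a reference frame and a test frame could be computed with scikit-image (the paper does not name an implementation) is shown below.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

# Hypothetical reference and test grayscale frames (e.g. left view at two times).
reference = np.random.rand(480, 640)
test = reference + 0.02 * np.random.randn(480, 640)

# data_range must be supplied explicitly for floating-point images.
score = ssim(reference, test, data_range=float(test.max() - test.min()))
print(f"SSIM: {score:.4f}")
```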

  7. Automatic classification of images with appendiceal orifice in colonoscopy videos.

    Science.gov (United States)

    Cao, Yu; Liu, Danyu; Tavanapong, Wallapak; Wong, Johnny; Oh, JungHwan; de Groen, Piet C

    2006-01-01

    Colonoscopy is an endoscopic technique that allows a physician to inspect the inside of the human colon. In current practice, videos captured from colonoscopic procedures are not routinely stored for either manual or automated post-procedure analysis. In this paper, we introduce new algorithms for automated detection of the presence of the shape of the opening of the appendix in a colonoscopy video frame. The appearance of the appendix in colonoscopy videos indicates traversal of the colon, which is an important measurement for evaluating the quality of colonoscopic procedures. The proposed techniques are valuable for (1) establishment of an effective content-based retrieval system to facilitate endoscopic research and education; and (2) assessment and improvement of the procedural skills of endoscopists, both in training and practice.

  8. The compressed average image intensity metric for stereoscopic video quality assessment

    Science.gov (United States)

    Wilczewski, Grzegorz

    2016-09-01

    This article presents the design, creation and testing of a new metric for 3DTV video quality evaluation. The Compressed Average Image Intensity (CAII) mechanism is based upon stereoscopic video content analysis and is intended as a versatile tool for effective 3DTV service quality assessment. As an objective quality metric, it may be utilized as a reliable source of information about the actual performance of a given 3DTV system under a provider's evaluation. Concerning testing and the overall performance analysis of the CAII metric, the paper presents a comprehensive study of results gathered across several testing routines on a selected set of stereoscopic video content samples. As a result, the designed method for stereoscopic video quality evaluation is investigated across a range of synthetic visual impairments injected into the original video stream.

  9. Image and video compression for multimedia engineering fundamentals, algorithms, and standards

    CERN Document Server

    Shi, Yun Q

    2008-01-01

    Part I: Fundamentals Introduction Quantization Differential Coding Transform Coding Variable-Length Coding: Information Theory Results (II) Run-Length and Dictionary Coding: Information Theory Results (III) Part II: Still Image Compression Still Image Coding: Standard JPEG Wavelet Transform for Image Coding: JPEG2000 Nonstandard Still Image Coding Part III: Motion Estimation and Compensation Motion Analysis and Motion Compensation Block Matching Pel-Recursive Technique Optical Flow Further Discussion and Summary on 2-D Motion Estimation Part IV: Video Compression Fundam

  10. Video Surveillance of Epilepsy Patients using Color Image Processing

    DEFF Research Database (Denmark)

    Bager, Gitte; Vilic, Kenan; Alving, Jørgen

    2007-01-01

    This report introduces a method for tracking of patients under video surveillance based on a marker system. The patients are not restricted in their movements, which requires a tracking system that can overcome non-ideal scenes, e.g. occlusions, very fast movements, lighting issues and other movi...

  11. Video surveillance of epilepsy patients using color image processing

    DEFF Research Database (Denmark)

    Bager, Gitte; Vilic, Kenan; Vilic, Adnan

    2014-01-01

    This paper introduces a method for tracking patients under video surveillance based on a color marker system. The patients are not restricted in their movements, which requires a tracking system that can overcome non-ideal scenes e.g. occlusions, very fast movements, lighting issues and other mov...

  12. On-board processing of video image sequences

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl; Chanrion, Olivier Arnaud; Forchhammer, Søren

    2008-01-01

    The "Atmosphere-Space Interactions Monitor" (ASIM) is a payload to be mounted on one of the external platforms of the Columbus module of the International Space Station (ISS). The instruments include six video cameras, six photometers and one X-ray detector. The main scientific objective...

  13. Correction of spatially varying image and video motion blur using a hybrid camera.

    Science.gov (United States)

    Tai, Yu-Wing; Du, Hao; Brown, Michael S; Lin, Stephen

    2010-06-01

    We describe a novel approach to reduce spatially varying motion blur in video and images using a hybrid camera system. A hybrid camera is a standard video camera that is coupled with an auxiliary low-resolution camera sharing the same optical path but capturing at a significantly higher frame rate. The auxiliary video is temporally sharper but at a lower resolution, while the lower frame-rate video has higher spatial resolution but is susceptible to motion blur. Our deblurring approach uses the data from these two video streams to reduce spatially varying motion blur in the high-resolution camera with a technique that combines both deconvolution and super-resolution. Our algorithm also incorporates a refinement of the spatially varying blur kernels to further improve results. Our approach can reduce motion blur from the high-resolution video as well as estimate new high-resolution frames at a higher frame rate. Experimental results on a variety of inputs demonstrate notable improvement over current state-of-the-art methods in image/video deblurring.
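
    The paper's method jointly estimates spatially varying kernels from the auxiliary stream; as a much-simplified, hedged illustration of the deconvolution component alone, the sketch below applies Wiener deconvolution with a single assumed box-shaped motion PSF using scikit-image.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.restoration import wiener

# Hypothetical sharp frame blurred by a 1x9 horizontal box PSF (a crude motion-blur model).
frame = np.random.rand(240, 320)
psf = np.full((1, 9), 1.0 / 9.0)
blurred = uniform_filter(frame, size=(1, 9))  # box blur matching the PSF for this demo

# Wiener deconvolution; the balance term trades noise amplification against sharpness.
deblurred = wiener(blurred, psf, balance=0.1)
```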

  14. Evaluation of image qualities on the international standard video-conferencing.

    Science.gov (United States)

    Shiotsuki, Hiroyuki; Okada, Yoshikazu; Ogushi, Yoichi; Tsutsumi, Yutaka; Kuwahira, Ichiro; Kawai, Naoki; Yamauchi, Kazunobu

    2003-12-01

    International standard-based video-conferencing systems are widely used in telemedicine activities worldwide. Through our experiences with these systems, it is apparent that the image quality is high enough to conduct educational sessions and conferences. However, because the terminals are intended for common use, we evaluated their quality. After having an international standard system evaluated by a general practitioner using medical images, we prepared non-medical graphic images to examine the characteristics of the video-conferencing equipment. ROC (Receiver Operating Characteristic) analysis was employed for the evaluation. We concluded that the international standard video-conferencing systems are of sufficient quality for medical presentations, and that their interactivity and the use of proper software will aid the understanding of images in specific medical areas.

  15. Achieving real-time capsule endoscopy (CE) video visualization through panoramic imaging

    Science.gov (United States)

    Yi, Steven; Xie, Jean; Mui, Peter; Leighton, Jonathan A.

    2013-02-01

    In this paper, we present a novel real-time capsule endoscopy (CE) video visualization concept based on panoramic imaging. Typical CE videos run about 8 hours and are manually reviewed by physicians to locate diseases such as bleeding and polyps. To date, there is no commercially available tool capable of providing stabilized and processed CE video that is easy to analyze in real time. The burden on physicians' disease-finding efforts is thus significant. In fact, since the CE camera sensor has a limited forward-looking view and a low image frame rate (typically 2 frames per second), and captures very close-range images of the GI tract surface, it is no surprise that traditional visualization methods based on tracking and registration often fail to work. This paper presents a novel concept for real-time CE video stabilization and display. Instead of directly working on traditional forward-looking FOV (field of view) images, we work on panoramic images to bypass many problems facing traditional imaging modalities. Methods for panoramic image generation based on optical lens principles, leading to real-time data visualization, will be presented. In addition, non-rigid panoramic image registration methods will be discussed.

  16. Endoscopic video-autofluorescence imaging followed by narrow band imaging for detecting early neoplasia in Barrett's esophagus

    NARCIS (Netherlands)

    M.A. Kara; F.P. Peters; P. Fockens; F.J.W. ten Kate; J.J.G.H.M. Bergman

    2006-01-01

    Background: Video-autofluorescence imaging (AFI) and narrow band imaging (NBI) are new endoscopic techniques that may improve the detection of high-grade intraepithelial neoplasia (HGIN) in Barrett's esophagus (BE). AFI improves the detection of lesions but may give false-positive findings. NBI allo

  17. [Development of a video image system for wireless capsule endoscopes based on DSP].

    Science.gov (United States)

    Yang, Li; Peng, Chenglin; Wu, Huafeng; Zhao, Dechun; Zhang, Jinhua

    2008-02-01

    A video image recorder for wireless capsule endoscopes was designed. The TMS320C6211 DSP from Texas Instruments Inc. is the core processor of this system. Images are periodically acquired from a Composite Video Broadcast Signal (CVBS) source and scaled by a video decoder (SAA7114H). Video data are transported from a high-speed First-In First-Out (FIFO) buffer to the Digital Signal Processor (DSP) under the control of a Complex Programmable Logic Device (CPLD). This paper adopts the JPEG algorithm for image coding, and the compressed data in the DSP are stored to a Compact Flash (CF) card. The TMS320C6211 DSP is mainly used for image compression and data transport. A fast Discrete Cosine Transform (DCT) algorithm and a fast coefficient quantization algorithm are used to accelerate the operation speed of the DSP and reduce the size of the executable code. At the same time, proper addresses are assigned to each memory, which have different speeds; the memory structure is also optimized. In addition, this system makes extensive use of Extended Direct Memory Access (EDMA) to transport and process image data, which results in stable and high performance.
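
    As a hedged, high-level illustration of the JPEG-style block coding performed on the DSP (not the actual fixed-point TMS320 code), the sketch below applies an 8x8 DCT and coefficient quantization with the standard JPEG luminance table.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Standard JPEG luminance quantization table (the capsule recorder's exact table is not given).
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]], dtype=np.float64)

def encode_block(block):
    """Level-shift, DCT-transform and quantize one 8x8 block of pixel values (0..255)."""
    coeffs = dctn(block - 128.0, norm='ortho')
    return np.round(coeffs / Q).astype(np.int16)

def decode_block(qcoeffs):
    """Dequantize and inverse-transform back to approximate pixel values."""
    return idctn(qcoeffs * Q, norm='ortho') + 128.0
```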

  18. Experimental design and analysis of JND test on coded image/video

    Science.gov (United States)

    Lin, Joe Yuchieh; Jin, Lina; Hu, Sudeng; Katsavounidis, Ioannis; Li, Zhi; Aaron, Anne; Kuo, C.-C. Jay

    2015-09-01

    The visual Just-Noticeable-Difference (JND) metric is characterized by the minimum detectable difference between two visual stimuli. Conducting a subjective JND test is a labor-intensive task. In this work, we present a novel interactive method for performing the visual JND test on compressed image/video. JND has been used to enhance perceptual visual quality in the context of image/video compression. Given a set of coding parameters, a JND test is designed to determine the distinguishable quality level against a reference image/video, which is called the anchor. The JND metric can be used to save coding bitrate by exploiting the special characteristics of the human visual system. The proposed JND test is conducted using a binary forced choice, which is often adopted to discriminate differences in perception in a psychophysical experiment. The assessors are asked to compare coded image/video pairs and determine whether they are of the same quality or not. A bisection procedure is designed to find the JND locations so as to reduce the required number of comparisons over a wide range of bitrates. We demonstrate the efficiency of the proposed JND test and report experimental results on the image and video JND tests.
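
    A minimal sketch of the bisection idea is given below, assuming responses are monotonic in bitrate and that a callable reports whether the assessor can distinguish a coded level from the anchor; both names are hypothetical, not from the paper.

```python
def find_jnd_index(levels, distinguishable):
    """Bisection over coded versions ordered from lowest to highest bitrate.

    `levels` is an ordered list of coded clips; `distinguishable(level)` returns
    True if the assessor can tell `level` apart from the anchor. Returns the
    index of the first level judged indistinguishable (the JND boundary).
    """
    lo, hi = 0, len(levels) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if distinguishable(levels[mid]):
            lo = mid + 1   # still visibly different: the boundary lies at a higher bitrate
        else:
            hi = mid       # indistinguishable: the boundary is at mid or below
    return lo
```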

  19. Viral video: Live imaging of virus-host encounters

    Science.gov (United States)

    Son, Kwangmin; Guasto, Jeffrey S.; Cubillos-Ruiz, Andres; Chisholm, Sallie W.; Sullivan, Matthew B.; Stocker, Roman

    2014-11-01

    Viruses are non-motile infectious agents that rely on Brownian motion to encounter and subsequently adsorb to their hosts. Paradoxically, the viral adsorption rate is often reported to be larger than the theoretical limit imposed by the virus-host encounter rate, highlighting a major gap in the experimental quantification of virus-host interactions. Here we present the first direct quantification of the viral adsorption rate, obtained using live imaging of individual host cells and viruses for thousands of encounter events. The host-virus pair consisted of Prochlorococcus MED4, an 800 nm non-motile bacterium that dominates photosynthesis in the oceans, and its virus PHM-2, a myovirus that has an 80 nm icosahedral capsid and a 200 nm long rigid tail. We simultaneously imaged hosts and viruses moving by Brownian motion using two-channel epifluorescence microscopy in a microfluidic device. This detailed quantification of viral transport yielded a 20-fold smaller adsorption efficiency than previously reported, indicating the need for a major revision in infection models for marine and likely other ecosystems.

  20. Registering aerial video images using the projective constraint.

    Science.gov (United States)

    Jackson, Brian P; Goshtasby, A Ardeshir

    2010-03-01

    To separate object motion from camera motion in an aerial video, consecutive frames are registered at their planar background. Feature points are selected in consecutive frames and those that belong to the background are identified using the projective constraint. Corresponding background feature points are then used to register and align the frames. By aligning video frames at the background and knowing that objects move against the background, a means to detect and track moving objects is provided. Only scenes with planar background are considered in this study. Experimental results show improvement in registration accuracy when using the projective constraint to determine the registration parameters as opposed to finding the registration parameters without the projective constraint.
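
    A hedged OpenCV sketch of the registration step is shown below: feature points are matched between consecutive frames, a homography is estimated with RANSAC (which enforces the projective constraint, so only background points consistent with the planar scene survive as inliers), and the previous frame is warped onto the current one. Feature type, match strategy and thresholds are assumptions, not the paper's exact choices.

```python
import cv2
import numpy as np

def register_frames(prev_gray, curr_gray):
    """Align two consecutive aerial frames at their planar background."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(prev_gray, None)
    k2, d2 = orb.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # RANSAC keeps only correspondences consistent with one homography
    # (the planar background); outliers largely belong to moving objects.
    H, inlier_mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    h, w = curr_gray.shape
    aligned_prev = cv2.warpPerspective(prev_gray, H, (w, h))
    return aligned_prev, H, inlier_mask
```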

  1. Computer Vision Tools for Finding Images and Video Sequences.

    Science.gov (United States)

    Forsyth, D. A.

    1999-01-01

    Computer vision offers a variety of techniques for searching for pictures in large collections of images. Appearance methods compare images based on the overall content of the image using certain criteria. Finding methods concentrate on matching subparts of images, defined in a variety of ways, in hope of finding particular objects. These ideas…

  3. Change Detection in Uav Video Mosaics Combining a Feature Based Approach and Extended Image Differencing

    Science.gov (United States)

    Saur, Günter; Krüger, Wolfgang

    2016-06-01

    Change detection is an important task when using unmanned aerial vehicles (UAV) for video surveillance. We address changes of short time scale using observations in time distances of a few hours. Each observation (previous and current) is a short video sequence acquired by UAV in near-Nadir view. Relevant changes are, e.g., recently parked or moved vehicles. Examples for non-relevant changes are parallaxes caused by 3D structures of the scene, shadow and illumination changes, and compression or transmission artifacts. In this paper we present (1) a new feature based approach to change detection, (2) a combination with extended image differencing (Saur et al., 2014), and (3) the application to video sequences using temporal filtering. In the feature based approach, information about local image features, e.g., corners, is extracted in both images. The label "new object" is generated at image points, where features occur in the current image and no or weaker features are present in the previous image. The label "vanished object" corresponds to missing or weaker features in the current image and present features in the previous image. This leads to two "directed" change masks and differs from image differencing where only one "undirected" change mask is extracted which combines both label types to the single label "changed object". The combination of both algorithms is performed by merging the change masks of both approaches. A color mask showing the different contributions is used for visual inspection by a human image interpreter.
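
    As a rough, hedged sketch of the "directed" change-mask idea, the code below compares corner-strength maps of the registered previous frame and the current frame: "new object" is labeled where the current response clearly exceeds the previous one, and "vanished object" for the reverse. The corner detector and threshold are stand-ins, not the authors' exact feature measure.

```python
import cv2
import numpy as np

def directed_change_masks(prev_registered, curr, delta=0.01):
    """Label 'new' and 'vanished' objects from corner-strength differences.

    prev_registered is the previous frame already registered to the current
    frame; delta is an assumed threshold on the normalized corner response.
    """
    def corner_map(img):
        response = cv2.cornerHarris(np.float32(img), blockSize=3, ksize=3, k=0.04)
        return response / (response.max() + 1e-12)

    c_prev, c_curr = corner_map(prev_registered), corner_map(curr)
    new_objects = (c_curr - c_prev) > delta       # features only in the current frame
    vanished_objects = (c_prev - c_curr) > delta  # features only in the previous frame
    return new_objects, vanished_objects
```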

  4. VQone MATLAB toolbox: A graphical experiment builder for image and video quality evaluations: VQone MATLAB toolbox.

    Science.gov (United States)

    Nuutinen, Mikko; Virtanen, Toni; Rummukainen, Olli; Häkkinen, Jukka

    2016-03-01

    This article presents VQone, a graphical experiment builder, written as a MATLAB toolbox, developed for image and video quality ratings. VQone contains the main elements needed for the subjective image and video quality rating process. This includes building and conducting experiments and data analysis. All functions can be controlled through graphical user interfaces. The experiment builder includes many standardized image and video quality rating methods. Moreover, it enables the creation of new methods or modified versions from standard methods. VQone is distributed free of charge under the terms of the GNU general public license and allows code modifications to be made so that the program's functions can be adjusted according to a user's requirements. VQone is available for download from the project page (http://www.helsinki.fi/psychology/groups/visualcognition/).

  5. A professional and cost effective digital video editing and image storage system for the operating room.

    Science.gov (United States)

    Scollato, A; Perrini, P; Benedetto, N; Di Lorenzo, N

    2007-06-01

    We propose an easy-to-construct digital video editing system ideal for producing video documentation and still images. A digital video editing system applicable to many video sources in the operating room is described in detail. The proposed system has proved easy to use and permits one to obtain videography quickly and easily. Mixing different streams of video input from all the devices in use in the operating room, with the application of filters and effects, produces a final, professional end-product. Recording on a DVD provides an inexpensive, portable and easy-to-use medium to store, re-edit or copy to tape at a later time. From stored videography it is easy to extract high-quality still images useful for teaching, presentations and publications. In conclusion, digital videography and still photography can easily be recorded by the proposed system, producing high-quality video recordings. The use of FireWire ports provides good compatibility with next-generation hardware and software. The high standard of quality makes the proposed system one of the lowest-priced products available today.

  6. Technical report. Video imaging of ethidium bromide-stained DNA gels with surface UV illumination.

    Science.gov (United States)

    Solioz, M

    1994-06-01

    We describe here the use of surface UV illumination to record ethidium bromide-stained DNA gels with a video camera. This mode of illumination allows the use of a standard video camera equipped with a red filter and results in a high signal strength. The assembly of a low-cost video system on this basis is described. It uses the public domain software called Image on a Macintosh computer and PostScript laser printer or a thermal printer to generate hard copies. The setup is sensitive enough to detect 500 pg of DNA on an ethidium bromide-stained DNA gel. The UV illumination method described here can also greatly improve the sensitivity of existing video recording equipment.

  7. Video Outside Versus Video Inside the Web: Do Media Setting and Image Size Have an Impact on the Emotion-Evoking Potential of Video?

    Science.gov (United States)

    Verleur, Ria; Verhagen, Plon W.

    To explore the educational potential of video-evoked affective responses in a Web-based environment, the question was raised whether video in a Web-based environment is experienced differently from video in a traditional context. An experiment was conducted that studied the affect-evoking power of video segments in a window on a computer screen…

  8. GUI Video and Image Processing of UAV Reconnaissance Videos Based on Matlab

    Institute of Scientific and Technical Information of China (English)

    穆武第; 张广政; 王东

    2012-01-01

    In view of the problems of image jitter and poor focus in UAV reconnaissance videos, a UAV reconnaissance video processing GUI package based on the Matlab video and image processing blocksets and the image processing toolbox is proposed. The interface can successfully process reconnaissance sequences for video stabilization, focus estimation, target tracking and matching, and key frame extraction. Simulation results indicate that the proposed GUI is promising in the field of video and image processing for UAV reconnaissance videos.

  9. Image and Video based double watermark extraction spread spectrum watermarking in low variance region

    Directory of Open Access Journals (Sweden)

    Mriganka Gogoi

    2013-07-01

    Full Text Available Digital watermarking plays a very important role in copyright protection. It is one of the techniques used for safeguarding the origins of images, audio and video by protecting them against piracy. This paper proposes a low-variance-based spread spectrum watermarking scheme for image and video in which the watermark is obtained twice at the receiver. The watermark to be added is a binary image of comparatively smaller size than the cover image. The cover image is divided into a number of 8x8 blocks and transformed into the frequency domain using the Discrete Cosine Transform. A Gold sequence is added as well as subtracted in each block for each watermark bit. In most cases, researchers have generally used algorithms that extract a single watermark, and finding the location of watermark bits distorted by attacks is one of the most challenging tasks. However, in this paper the same watermark is embedded as well as extracted twice with a Gold code without much distortion of the image, and comparing these two watermarks helps in finding the distorted bits. Another feature is that, as this algorithm is based on embedding the watermark in low-variance regions, proper extraction of the watermark is obtained at a smaller modulation factor. The proposed algorithm is very useful in applications like real-time broadcasting, image and video authentication and secure camera systems. The experimental results show that the watermarking technique is robust against various attacks.
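
    A hedged sketch of the spread-spectrum embedding and detection in 8x8 DCT blocks is given below; a generic +/-1 pseudo-noise sequence stands in for the Gold code, and the mid-band coefficient range and modulation factor alpha are assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
pn = rng.choice([-1.0, 1.0], size=16)  # stand-in for the Gold sequence

def embed_bit(block, bit, pn, alpha=2.0):
    """Embed one watermark bit by adding +/- alpha * pn to mid-band DCT coefficients."""
    coeffs = dctn(block.astype(np.float64), norm='ortho')
    sign = 1.0 if bit else -1.0
    coeffs[2:6, 2:6] += sign * alpha * pn.reshape(4, 4)
    return idctn(coeffs, norm='ortho')

def detect_bit(block, pn):
    """Recover the bit from the sign of the correlation with the PN sequence."""
    coeffs = dctn(block.astype(np.float64), norm='ortho')
    return float(np.dot(coeffs[2:6, 2:6].ravel(), pn)) > 0.0
```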

  10. Video and image retrieval beyond the cognitive level: the needs and possibilities

    Science.gov (United States)

    Hanjalic, Alan

    2001-01-01

    The worldwide research efforts in the area of image and video retrieval have concentrated so far on increasing the efficiency and reliability of extracting the elements of image and video semantics, and thus on improving search and retrieval performance at the cognitive level of content abstraction. At this abstraction level, the user is searching for 'factual' or 'objective' content such as an image showing a panorama of San Francisco, an outdoor or an indoor image, a broadcast news report on a defined topic, a movie dialog between the actors A and B, or the parts of a basketball game showing fast breaks, steals and scores. These efforts, however, do not address retrieval applications at the so-called affective level of content abstraction, where the 'ground truth' is not strictly defined. Such applications are, for instance, those where the subjectivity of the user plays the major role, e.g. the task of retrieving all images that the user 'likes most', and those that are based on 'recognizing emotions' in audiovisual data. Typical examples are searching for all images that 'radiate happiness', identifying all 'sad' movie fragments and looking for 'romantic landscapes', 'sentimental' movie segments, 'movie highlights' or the 'most exciting' moments of a sport event. This paper discusses the needs and possibilities for widening the current scope of research in the area of image and video search and retrieval in order to enable applications at the affective level of content abstraction.

  11. The ImageNet Shuffle: Reorganized Pre-training for Video Event Detection

    NARCIS (Netherlands)

    P. Mettes; D.C. Koelma; C.G.M. Snoek

    2016-01-01

    This paper strives for video event detection using a representation learned from deep convolutional neural networks. Different from the leading approaches, who all learn from the 1,000 classes defined in the ImageNet Large Scale Visual Recognition Challenge, we investigate how to leverage the comple

  12. Artifact reduction of compressed images and video combining adaptive fuzzy filtering and directional anisotropic diffusion

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Forchhammer, Søren; Korhonen, Jari

    2011-01-01

    Fuzzy filtering is one of the recently developed methods for reducing distortion in compressed images and video. In this paper, we combine the powerful anisotropic diffusion equations with fuzzy filtering in order to reduce the impact of artifacts. Based on the directional nature of the blocking ...

  13. Modeling the Color Image and Video Quality on Liquid Crystal Displays with Backlight Dimming

    DEFF Research Database (Denmark)

    Korhonen, Jari; Mantel, Claire; Burini, Nino

    2013-01-01

    Objective image and video quality metrics focus mostly on the digital representation of the signal. However, the display characteristics are also essential for the overall Quality of Experience (QoE). In this paper, we use a model of a backlight dimming system for Liquid Crystal Display (LCD...

  14. Geometric Distortion in Image and Video Watermarking. Robustness and Perceptual Quality Impact

    NARCIS (Netherlands)

    Setyawan, I.

    2004-01-01

    The main focus of this thesis is the problem of geometric distortion in image and video watermarking. In this thesis we discuss the two aspects of the geometric distortion problem, namely the watermark desynchronization aspect and the perceptual quality assessment aspect. Furthermore, this thesis al

  15. Sequential error concealment for video/images by weighted template matching

    DEFF Research Database (Denmark)

    Koloda, Jan; Østergaard, Jan; Jensen, Søren Holdt;

    2012-01-01

    In this paper we propose a novel spatial error concealment algorithm for video and images based on convex optimization. Block-based coding schemes in packet loss environment are considered. Missing macro blocks are sequentially reconstructed by filling them with a weighted set of templates...

  16. Class Energy Image Analysis for Video Sensor-Based Gait Recognition: A Review

    Directory of Open Access Journals (Sweden)

    Zhuowen Lv

    2015-01-01

    Full Text Available Gait is a unique biometric feature that is perceptible at larger distances, and the gait representation approach plays a key role in a video sensor-based gait recognition system. The Class Energy Image is one of the most important appearance-based gait representation methods and has received a lot of attention. In this paper, we review the expressions and meanings of various Class Energy Image approaches and analyze the information contained in Class Energy Images. Furthermore, the effectiveness and robustness of these approaches are compared on benchmark gait databases. We outline the research challenges and provide promising future directions for the field. To the best of our knowledge, this is the first review that focuses on the Class Energy Image. It can provide a useful reference in the literature on video sensor-based gait representation approaches.
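
    The best-known class energy image, the Gait Energy Image, is simply the average of aligned, size-normalized binary silhouettes over a gait cycle; a minimal sketch is given below, with alignment and cycle segmentation assumed to be done upstream.

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Average a sequence of aligned, size-normalized binary silhouettes (H x W, values in {0, 1})."""
    stack = np.stack([s.astype(np.float64) for s in silhouettes], axis=0)
    return stack.mean(axis=0)
```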

  17. Operational prediction of rip currents using numerical model and nearshore bathymetry from video images

    Science.gov (United States)

    Sembiring, L.; Van Ormondt, M.; Van Dongeren, A. R.; Roelvink, J. A.

    2017-07-01

    Rip currents are one of the most dangerous coastal hazards for swimmers. In order to minimize the risk, an operational, process-based coastal model system can be utilized to provide forecasts of nearshore waves and currents that may endanger beach goers. In this paper, an operational model for rip current prediction utilizing nearshore bathymetry obtained from a video imaging technique is demonstrated. For the nearshore-scale model, XBeach is used, with which tidal currents and wave-induced currents (including the effect of wave groups) can be simulated simultaneously. Up-to-date bathymetry is obtained using the video imaging technique cBathy. The system is tested for the Egmond aan Zee beach, located in the northern part of the Dutch coastline. This paper tests the applicability of bathymetry obtained from the video technique as input for the numerical modelling system by comparing simulation results using surveyed bathymetry with model results using video bathymetry. Results show that the video technique is able to produce bathymetry converging towards the ground truth observations. This bathymetry validation is followed by an example of an operational forecasting simulation for predicting rip currents. Rip current flow fields simulated over measured and modeled bathymetries are compared in order to assess the performance of the proposed forecast system.

  18. Efficient super-resolution image reconstruction applied to surveillance video captured by small unmanned aircraft systems

    Science.gov (United States)

    He, Qiang; Schultz, Richard R.; Chu, Chee-Hung Henry

    2008-04-01

    The concept surrounding super-resolution image reconstruction is to recover a highly-resolved image from a series of low-resolution images via between-frame subpixel image registration. In this paper, we propose a novel and efficient super-resolution algorithm, and then apply it to the reconstruction of real video data captured by a small Unmanned Aircraft System (UAS). Small UAS aircraft generally have a wingspan of less than four meters, so that these vehicles and their payloads can be buffeted by even light winds, resulting in potentially unstable video. This algorithm is based on a coarse-to-fine strategy, in which a coarsely super-resolved image sequence is first built from the original video data by image registration and bi-cubic interpolation between a fixed reference frame and every additional frame. It is well known that the median filter is robust to outliers. If we calculate pixel-wise medians in the coarsely super-resolved image sequence, we can restore a refined super-resolved image. The primary advantage is that this is a noniterative algorithm, unlike traditional approaches based on highly-computational iterative algorithms. Experimental results show that our coarse-to-fine super-resolution algorithm is not only robust, but also very efficient. In comparison with five well-known super-resolution algorithms, namely the robust super-resolution algorithm, bi-cubic interpolation, projection onto convex sets (POCS), the Papoulis-Gerchberg algorithm, and the iterated back projection algorithm, our proposed algorithm gives both strong efficiency and robustness, as well as good visual performance. This is particularly useful for the application of super-resolution to UAS surveillance video, where real-time processing is highly desired.
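
    A hedged sketch of the fusion step alone is shown below: each already-registered low-resolution frame is upscaled with bicubic interpolation and the pixel-wise median of the stack forms the refined image. The registration step, which the paper performs against a fixed reference frame, is assumed to have been done beforehand.

```python
import cv2
import numpy as np

def median_super_resolve(registered_frames, scale=2):
    """Fuse pre-registered low-resolution frames into one higher-resolution image.

    Each frame is bicubically upscaled and the per-pixel median across the stack
    suppresses outliers such as residual registration errors.
    """
    upscaled = [cv2.resize(f, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
                for f in registered_frames]
    return np.median(np.stack(upscaled, axis=0), axis=0)
```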

  19. Lidar-Incorporated Traffic Sign Detection from Video Log Images of Mobile Mapping System

    Science.gov (United States)

    Li, Y.; Fan, J.; Huang, Y.; Chen, Z.

    2016-06-01

    A Mobile Mapping System (MMS) simultaneously collects Lidar points and video log images of a scene with a laser profiler and a digital camera. Besides the textural details of the video log images, it also captures the 3D geometric shape of the point cloud. It is widely used to survey street views and roadside transportation infrastructure, such as traffic signs, guardrails, etc., in many transportation agencies. Although much literature on traffic sign detection is available, it only focuses on either the Lidar or the imagery data of traffic signs. Based on the well-calibrated extrinsic parameters of the MMS, 3D Lidar points are, for the first time, incorporated into 2D video log images to enhance the detection of traffic signs both physically and visually. Based on the local elevation, the 3D pavement area is first located. Within a certain distance and height of the pavement, points of overhead and roadside traffic signs can be obtained according to the setup specifications of traffic signs in different transportation agencies. The 3D candidate planes of traffic signs are then fitted using RANSAC plane-fitting of those points. By projecting the candidate planes onto the image, Regions of Interest (ROIs) of traffic signs are found physically with the geometric constraints between laser profiling and camera imaging. Random forest learning of the visual color and shape features of traffic signs is adopted to validate the sign ROIs from the video log images. The sequential occurrence of a traffic sign among consecutive video log images is defined by the geometric constraint of the imaging geometry and GPS movement. Candidate ROIs are predicted in this temporal context to double-check the salient traffic sign among video log images. The proposed algorithm is tested on a diverse set of scenarios on the interstate highway G-4 near Beijing, China under varying lighting conditions and occlusions. Experimental results show the proposed algorithm enhances the rate of detecting

  20. Laser Imaging Video Camera Sees Through Fire, Fog, Smoke

    Science.gov (United States)

    2015-01-01

    Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

  1. Study of the Video Monitoring System Image Recognition Solutions Based on Mathematic models

    Directory of Open Access Journals (Sweden)

    Peilong Xu

    2013-01-01

    Full Text Available Objective: To develop an automatic alarm solution for a video monitoring system by establishing an image recognition system based on mathematical models. Methods: Images collected by the video monitoring system were compared according to their time sequence. After binarization and filtering, the images were converted into numerical values using an autocorrelation function, and the alarm threshold value was determined empirically. Results: In the experiments, the change ratio between the two images before and after image processing was inversely proportional to the autocorrelation function value. A function value of less than 0.8 indicates that an object with a volume larger than 1 m3 has intruded within a distance of 15 m, and a value of less than 0.6 indicates that such an object has intruded within a distance of 30 m. Conclusion: Through the calculation of autocorrelation functions, automatic alarms for the images collected by a video monitoring system can be effectively realized.
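
    The abstract does not give the exact autocorrelation formulation, so the sketch below uses a normalized correlation coefficient between consecutive binarized frames as a stand-in, with an alarm raised when the value drops below the empirically chosen threshold.

```python
import numpy as np

def frame_correlation(prev_bin, curr_bin):
    """Normalized correlation between two binarized frames (0/1 arrays of equal shape)."""
    a = prev_bin.astype(np.float64) - prev_bin.mean()
    b = curr_bin.astype(np.float64) - curr_bin.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def should_alarm(corr, threshold=0.8):
    """Alarm when correlation falls below the threshold (0.8 and 0.6 are the reported cut-offs)."""
    return corr < threshold
```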

  2. Principal components null space analysis for image and video classification.

    Science.gov (United States)

    Vaswani, Namrata; Chellappa, Rama

    2006-07-01

    We present a new classification algorithm, principal component null space analysis (PCNSA), which is designed for classification problems like object recognition where different classes have unequal and nonwhite noise covariance matrices. PCNSA first obtains a principal components subspace (PCA space) for the entire data. In this PCA space, it finds for each class "i," an Mi-dimensional subspace along which the class' intraclass variance is the smallest. We call this subspace an approximate null space (ANS) since the lowest variance is usually "much smaller" than the highest. A query is classified into class "i" if its distance from the class' mean in the class' ANS is a minimum. We derive upper bounds on classification error probability of PCNSA and use these expressions to compare classification performance of PCNSA with that of subspace linear discriminant analysis (SLDA). We propose a practical modification of PCNSA called progressive-PCNSA that also detects "new" (untrained classes). Finally, we provide an experimental comparison of PCNSA and progressive PCNSA with SLDA and PCA and also with other classification algorithms-linear SVMs, kernel PCA, kernel discriminant analysis, and kernel SLDA, for object recognition and face recognition under large pose/expression variation. We also show applications of PCNSA to two classification problems in video--an action retrieval problem and abnormal activity detection.

  3. Using smart phone video to supplement communication of radiology imaging in a neurosurgical unit: technical note.

    Science.gov (United States)

    Shivapathasundram, Ganeshwaran; Heckelmann, Michael; Sheridan, Mark

    2012-04-01

    The use of smart phones within medicine continues to grow at the same rate as mobile phone technology continues to evolve. One use of smart phones within medicine is the transmission of radiological images to consultant neurosurgeons who are off-site in an emergency setting. In our unit, this has allowed quick, efficient, and safe communication between consultant neurosurgeons and trainees, aiding in rapid patient assessment and management in emergency situations. OBJECTIVE: To describe a new use of smart phone technology in the neurosurgical setting, in which the video application of smart phones allows transfer of a whole series of patient neuroimaging via multimedia messaging service to off-site consultant neurosurgeons. METHOD/TECHNIQUE: Using the video application of smart phones, a 30-second video of an entire series of patient neuroimaging was transmitted to consultant neurosurgeons. With this information, combined with a clinical history, accurate management decisions were made. This technique has been used in a number of emergency situations in our unit to date. Thus far, the imaging received by consultants has been a very useful adjunct to the clinical information provided by the on-site trainee, and has helped expedite the management of patients. While the aim should always be for the specialist neurosurgeon to review the imaging in person, in emergency settings this is not always possible, and we feel that this technique of smart phone video is a very useful means of rapid communication with neurosurgeons.

  4. Background Extraction Method Based on Block Histogram Analysis for Video Image

    Institute of Scientific and Technical Information of China (English)

    Li Hua; Peng Qiang

    2005-01-01

    A novel method of histogram analysis for background extraction from video images is proposed, which is derived from pixel-based histogram analysis. Not only the statistical properties of pixels across temporal frames, but also the correlation of local pixels within a single frame is exploited in this method. When carrying out histogram analysis for background extraction, the proposed method is based not on a single pixel but on a 2×2 block, which requires much less computation and can simultaneously extract a sound background image from the video sequence. A comparative experiment between the proposed method and pixel-based histogram analysis shows that the proposed method is faster at background extraction and that the obtained background image is of better quality.
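
    A hedged numpy sketch of the block-based idea is given below: every 2x2 block is reduced to its mean, a per-block histogram is accumulated over the frame sequence, and the histogram mode is taken as that block's background value. Bin count and block size are assumptions.

```python
import numpy as np

def block_background(frames, block=2, bins=64):
    """Estimate a background image from per-block intensity histograms.

    frames: iterable of grayscale frames (H x W, values 0..255) with H and W
    divisible by `block`. Returns a (H/block) x (W/block) background estimate.
    """
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    h, w = frames[0].shape
    bh, bw = h // block, w // block
    hist = np.zeros((bh, bw, bins), dtype=np.int32)
    edges = np.linspace(0.0, 256.0, bins + 1)
    rows, cols = np.arange(bh)[:, None], np.arange(bw)[None, :]
    for f in frames:
        means = f.reshape(bh, block, bw, block).mean(axis=(1, 3))
        idx = np.clip(np.digitize(means, edges) - 1, 0, bins - 1)
        np.add.at(hist, (rows, cols, idx), 1)  # accumulate per-block histograms
    centers = (edges[:-1] + edges[1:]) / 2.0
    return centers[hist.argmax(axis=2)]       # histogram mode per block
```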

  5. Comparison of Image Transform-Based Features for Visual Speech Recognition in Clean and Corrupted Videos

    Directory of Open Access Journals (Sweden)

    Ji Ming

    2008-03-01

    Full Text Available We present results of a study into the performance of a variety of different image transform-based feature types for speaker-independent visual speech recognition of isolated digits. This includes the first reported use of features extracted using a discrete curvelet transform. The study will show a comparison of some methods for selecting features of each feature type and show the relative benefits of both static and dynamic visual features. The performance of the features will be tested on both clean video data and also video data corrupted in a variety of ways to assess each feature type's robustness to potential real-world conditions. One of the test conditions involves a novel form of video corruption we call jitter which simulates camera and/or head movement during recording.

  6. Face Recognition from Still Images to Video Sequences: A Local-Feature-Based Framework

    Directory of Open Access Journals (Sweden)

    Chen Shaokang

    2011-01-01

    Full Text Available Although automatic face recognition has shown success for high-quality images under controlled conditions, for video-based recognition it is hard to attain similar levels of performance. We describe in this paper recent advances in a project being undertaken to trial and develop advanced surveillance systems for public safety. In this paper, we propose a local facial feature based framework for both still image and video-based face recognition. The evaluation is performed on a still image dataset, LFW, and a video sequence dataset, MOBIO, to compare four methods that operate on features: feature averaging (Avg-Feature), the Mutual Subspace Method (MSM), Manifold-to-Manifold Distance (MMD), and the Affine Hull Method (AHM), and four methods that operate on distances, on three different features. The experimental results show that the Multi-region Histogram (MRH) feature is more discriminative for face recognition compared to Local Binary Patterns (LBP) and raw pixel intensity. Under the limitation of a small number of images available per person, feature averaging is more reliable than MSM, MMD, and AHM and is much faster. Thus, our proposed framework of averaging MRH features is more suitable for CCTV surveillance systems with constraints on the number of images and the speed of processing.

  7. High performance computational integral imaging system using multi-view video plus depth representation

    Science.gov (United States)

    Shi, Shasha; Gioia, Patrick; Madec, Gérard

    2012-12-01

    Integral imaging is an attractive auto-stereoscopic three-dimensional (3D) technology for next-generation 3DTV. But its application is obstructed by poor image quality, huge data volume and high processing complexity. In this paper, a new computational integral imaging (CII) system using multi-view video plus depth (MVD) representation is proposed to solve these problems. The originality of this system lies in three aspects. Firstly, a particular depth-image-based rendering (DIBR) technique is used in the encoding process to exploit the inter-view correlation between different sub-images (SIs). Thereafter, the same DIBR method is applied on the display side to interpolate virtual SIs and improve the reconstructed 3D image quality. Finally, a novel parallel group projection (PGP) technique is proposed to simplify the reconstruction process. According to experimental results, the proposed CII system improves compression efficiency and displayed image quality, while reducing calculation complexity.

  8. Coastal morphodynamic features/patterns analysis through a video-based system and image processing

    Science.gov (United States)

    Santos, Fábio; Pais-Barbosa, Joaquim; Teodoro, Ana C.; Gonçalves, Hernâni; Baptista, Paolo; Moreira, António; Veloso-Gomes, Fernando; Taveira-Pinto, Francisco; Gomes-Costa, Paulo; Lopes, Vítor; Neves-Santos, Filipe

    2012-10-01

    The Portuguese coastline, like many others worldwide, is often subject to several types of extreme events resulting in erosion; thus, the acquisition of high-quality field measurements has become a common concern. Nearshore survey systems have traditionally been based on in situ measurements or on satellite or aircraft-mounted remote sensing systems. As an alternative, video-monitoring systems have proved to be an economic and efficient way to collect useful and continuous data and to document extreme events. In this context, the project MoZCo (Advanced Methodologies and Techniques Development for Coastal Zone Monitoring) is under development, which intends to develop and implement monitoring techniques for the coastal zone based on a low-cost video monitoring system. The pilot study area is Ofir beach (north of Portugal), a critical coastal area. At the beginning of the project (2010), a monitoring video station was developed, collecting snapshots and 10-minute videos every hour. In order to process the data, several video image processing algorithms were implemented in Matlab®, providing the main video-monitoring system products, such as shoreline detection. An algorithm based on image processing techniques was developed using the HSV color space: the idea is to select a study area and a sample area containing pixels associated with dry and wet regions, over which thresholding and some morphological operators are applied. Comparison of the results with manual digitization shows promising results despite the method's simplicity, and the method is under continuous development in order to optimize the results.
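
    As a hedged sketch of the HSV-based wet/dry separation described above, the code below converts an image to HSV, thresholds the saturation channel and cleans the mask with morphological operators; the channel choice and threshold are placeholders that, in the actual workflow, would be derived from the selected sample areas of known wet and dry pixels.

```python
import numpy as np
from skimage.color import rgb2hsv
from skimage.morphology import binary_opening, binary_closing, disk

def wet_dry_mask(rgb_image, sat_threshold=0.25):
    """Rough wet/dry segmentation of a shoreline image in HSV space."""
    hsv = rgb2hsv(rgb_image)
    wet = hsv[..., 1] > sat_threshold  # wet sand and water tend to be more saturated
    wet = binary_closing(binary_opening(wet, disk(3)), disk(3))  # remove speckle, fill gaps
    return wet
```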

  9. A novel rain removal technology based on video image

    Science.gov (United States)

    Liu, Shuo; Piao, Yan

    2016-11-01

    Due to bad weather conditions, outdoor vision systems often suffer visual distortions in their images. Rain is one specific example of bad weather. Generally, rain streaks are small and fall at high velocity. Traditional rain removal methods often cause a blurred visual effect, have high time complexity, and leave some rain streaks in the de-rained image. Based on the characteristics of rain streaks, a novel rain removal technology is proposed. The proposed method not only removes rain streaks effectively, but also retains much detail information. Experiments show that the proposed method outperforms traditional rain removal methods. It can be widely used in intelligent traffic, civilian surveillance, national security and so on.

  10. Efficient Watermarking Technique for Digital Media (Images and Videos

    Directory of Open Access Journals (Sweden)

    Chirag Sharma

    2012-05-01

    Full Text Available In this paper we propose an efficient watermarking technique for digital media content protection and copyright protection. Watermarking is a technique to embed a hidden and unnoticeable signal into digital media in such a way that if an intruder copies it, he can be caught on the basis of copyright protection and ownership identification. Many techniques are available to watermark data; in our proposal we discuss the DWT technique, which is more robust to attacks than LSB, for the protection of digital images. We try to find the quality loss after the addition of the watermark when applying various attacks on the watermarked image; the greater the quality loss, the lower the efficiency of the watermarking. There are many factors that can affect the quality of images after the addition of a watermark, and these are discussed in a later section. The GUI creation and implementation of our proposed algorithm are realized using MATLAB.

  11. Automatic Polyp Detection in Pillcam Colon 2 Capsule Images and Videos: Preliminary Feasibility Report

    Directory of Open Access Journals (Sweden)

    Pedro N. Figueiredo

    2011-01-01

    Full Text Available Background. The aim of this work is to present an automatic colorectal polyp detection scheme for capsule endoscopy. Methods. PillCam COLON2 capsule-based images and videos were used in our study. The database consists of full exam videos from five patients. The algorithm is based on the assumption that the polyps show up as a protrusion in the captured images and is expressed by means of a P-value, defined by geometrical features. Results. Seventeen PillCam COLON2 capsule videos are included, containing frames with polyps, flat lesions, diverticula, bubbles, and trash liquids. Polyps larger than 1 cm express a P-value higher than 2000, and 80% of the polyps show a P-value higher than 500. Diverticula, bubbles, trash liquids, and flat lesions were correctly interpreted by the algorithm as nonprotruding images. Conclusions. These preliminary results suggest that the proposed geometry-based polyp detection scheme works well, not only by allowing the detection of polyps but also by differentiating them from nonprotruding images found in the films.

  12. A method for analyzing on-line video images of crystallization at high-solid concentrations

    Institute of Scientific and Technical Information of China (English)

    Jian Wan; Cai Y.Ma; Xue Z.Wang

    2008-01-01

    Recent research has demonstrated that on-line video imaging is a very promising technique for monitoring crystallization processes. The bottleneck in applying the technique to real-time closed-loop control is image analysis, which needs to be robust, fast and able to handle varied image qualities caused by temporal variations of operating conditions such as mixing and solid concentration. Image analysis at high-solid concentrations turns out to be extremely challenging because crystals tend to overlap or attach to each other and the boundaries between the crystals are usually ambiguous. This paper presents an image segmentation algorithm that can effectively deal with images taken at high-solid concentrations. The method segments crystals that are attached to each other by cutting along the most closely related concave points on the contours of crystal blocks. The detailed procedure is introduced with application to crystallization of L-glutamic acid in a hot-stage reactor.
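
    A common way to find candidate split locations between touching objects is to look for deep concavities on the blob contour; the sketch below does this with OpenCV convexity defects. It illustrates only the concave-point detection step, not the paper's pairing of concave points or its full segmentation procedure; the depth threshold is an assumption.

    ```python
    # Sketch of locating concave contour points on touching blobs with OpenCV
    # convexity defects (a generic approach, not the paper's algorithm).
    import cv2
    import numpy as np

    def concave_points(binary_mask, min_depth=5.0):
        """Return concave points (candidate split locations) on each blob contour."""
        contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        points = []
        for cnt in contours:
            if len(cnt) < 5:
                continue
            hull = cv2.convexHull(cnt, returnPoints=False)
            defects = cv2.convexityDefects(cnt, hull)
            if defects is None:
                continue
            for start, end, far, depth in defects[:, 0]:
                if depth / 256.0 > min_depth:      # defect depth is stored as fixed point
                    points.append(tuple(cnt[far][0]))
        return points

    if __name__ == "__main__":
        # Two overlapping discs stand in for two attached crystals.
        mask = np.zeros((200, 200), np.uint8)
        cv2.circle(mask, (80, 100), 40, 255, -1)
        cv2.circle(mask, (130, 100), 40, 255, -1)
        print("candidate split points:", concave_points(mask))
    ```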

  13. Video Object Tracking in Neural Axons with Fluorescence Microscopy Images

    Directory of Open Access Journals (Sweden)

    Liang Yuan

    2014-01-01

    tracking. In this paper, we describe two automated tracking methods for analyzing neurofilament movement based on two different techniques: constrained particle filtering and tracking-by-detection. First, we introduce the constrained particle filtering approach. In this approach, the orientation and position of a particle are constrained by the axon’s shape, so that fewer particles are necessary for tracking neurofilament movement than in object tracking techniques based on generic particle filtering. Secondly, a tracking-by-detection approach to neurofilament tracking is presented. For this approach, the axon is decomposed into blocks, and the blocks encompassing the moving neurofilaments are detected by graph labeling using a Markov random field. Finally, we compare the two tracking methods by performing tracking experiments on real time-lapse image sequences of neurofilament movement. The experimental results show that both methods perform well in comparison with existing approaches, with the tracking-by-detection approach being slightly more accurate of the two.
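
    To make the constrained-particle-filter idea concrete, the following sketch restricts particles to an arc-length coordinate along a known axon centerline, so a modest number of particles suffices; the motion noise, intensity-based likelihood, and resampling scheme are simplifications assumed for illustration, not the paper's implementation.

    ```python
    # Minimal particle filter constrained to an axon centerline: each particle
    # carries only an arc-length position, illustrating why far fewer particles
    # are needed than for unconstrained 2-D tracking.
    import numpy as np

    def track_along_axon(centerline, frames, n_particles=100, motion_sigma=2.0):
        """centerline: (K, 2) integer pixel (x, y) coordinates; frames: list of 2-D images."""
        rng = np.random.default_rng(0)
        K = len(centerline)
        positions = rng.uniform(0, K - 1, n_particles)        # arc-length indices
        track = []
        for frame in frames:
            positions = np.clip(positions + rng.normal(0, motion_sigma, n_particles), 0, K - 1)
            xy = centerline[positions.astype(int)]
            weights = frame[xy[:, 1], xy[:, 0]].astype(float) + 1e-9   # brighter = more likely
            weights /= weights.sum()
            estimate = centerline[int(np.round(np.sum(weights * positions)))]
            track.append(tuple(estimate))
            # resample particles in proportion to their weights
            idx = rng.choice(n_particles, n_particles, p=weights)
            positions = positions[idx]
        return track
    ```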

  14. Survey of Region-Based Text Extraction Techniques for Efficient Indexing of Image/Video Retrieval

    Directory of Open Access Journals (Sweden)

    Samabia Tehsin

    2014-11-01

    Full Text Available With the dramatic increase in multimedia data, the escalating growth of the internet, and the expanding use of image/video capturing devices, content-based indexing and text extraction are gaining more and more importance in the research community. In the last decade, many techniques for text extraction have been reported in the literature. Text extraction from images/videos generally comprises text detection and localization, text tracking, text segmentation, and optical character recognition (OCR). This paper highlights the contributions and limitations of the text detection, localization and tracking phases. The problem is challenging due to variations in font styles, size and color, text orientations, animations and backgrounds. The paper can serve as a beacon for novice researchers in the text extraction community.

  15. An Image/Video Self-Description Scheme for MPEG-7

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    In this paper, we propose a self-describing scheme for interoperable image/video content descriptions, which can accommodate the objective of the MPEG-7 standard. The objective of this standard is to maximize content-focused multimedia applications. In order to provide full interoperability and flexibility, we use the eXtensible Markup Language (XML) to express the self-describing scheme. We demonstrate the flexibility and efficiency of our self-describing scheme.

  16. The effect of music video clips on adolescent boys' body image, mood, and schema activation.

    Science.gov (United States)

    Mulgrew, Kate E; Volcevski-Kostas, Diana; Rendell, Peter G

    2014-01-01

    There is limited research that has examined experimentally the effects of muscular images on adolescent boys' body image, with no research specifically examining the effects of music television. The aim of the current study was to examine the effects of viewing muscular and attractive singers in music video clips on early, mid, and late adolescent boys' body image, mood, and schema activation. Participants were 180 boys in grade 7 (mean age = 12.73 years), grade 9 (mean age = 14.40 years) or grade 11 (mean age = 16.15 years) who completed pre- and post-test measures of mood and body satisfaction after viewing music videos containing male singers of muscular or average appearance. They also completed measures of schema activation and social comparison after viewing the clips. The results showed that the boys who viewed the muscular clips reported poorer upper body satisfaction, lower appearance satisfaction, lower happiness, and more depressive feelings compared to boys who viewed the clips depicting singers of average appearance. There was no evidence of increased appearance schema activation but the boys who viewed the muscular clips did report higher levels of social comparison to the singers. The results suggest that music video clips are a powerful form of media in conveying information about the male ideal body shape and that negative effects are found in boys as young as 12 years.

  17. High-speed digital video imaging system to record cardiac action potentials

    Science.gov (United States)

    Mishima, Akira; Arafune, Tatsuhiko; Masamune, Ken; Sakuma, Ichiro; Dohi, Takeyoshi; Shibata, Nitaro; Honjo, Haruo; Kodama, Itsuo

    2001-01-01

    A new digital video imaging system was developed and its performance was evaluated to analyze the spiral wave dynamics during polymorphic ventricular tachycardia (PVT) with high spatio-temporal resolution (1 ms, 0.1 mm). The epicardial surface of isolated rabbit heart stained with di-4-ANEPPS was illuminated by 72 high-power bluish-green light-emitting diodes (BGLED: λ0 = 500 nm, 10 mW). The emitted fluorescence image (256x256 pixels) passing through a long-pass filter (λc = 660 nm) was monitored by a high-speed digital video camera recorder (FASTCAM-Ultima-UV3, Photron) at 1125 fps. The data stored in DRAM were processed by PC for background subtraction. 2D images of excitation wave and single-pixel action potentials at target sites during PVT induced by DC shocks (S2: 10 ms, 20 V) were displayed for 4.5 s. The wave form quality is high enough to observe phase 0 upstroke and to identify repolarization timing. Membrane potentials at the center of spiral were characterized by double-peak or oscillatory depolarization. Singular points during PVT were obtained from isophase mapping. Our new digital video-BGLED system has an advantage over previous ones for more accurate and longer time action potential analysis during spiral wave reentry.

  18. A low-cost, high-resolution, video-rate imaging optical radar

    Energy Technology Data Exchange (ETDEWEB)

    Sackos, J.T.; Nellums, R.O.; Lebien, S.M.; Diegert, C.F. [Sandia National Labs., Albuquerque, NM (United States); Grantham, J.W.; Monson, T. [Air Force Research Lab., Eglin AFB, FL (United States)

    1998-04-01

    Sandia National Laboratories has developed a unique type of portable low-cost range imaging optical radar (laser radar or LADAR). This innovative sensor is comprised of an active floodlight scene illuminator and an image intensified CCD camera receiver. It is a solid-state device (no moving parts) that offers significant size, performance, reliability, and simplicity advantages over other types of 3-D imaging sensors. This unique flash LADAR is based on low cost, commercially available hardware, and is well suited for many government and commercial uses. This paper presents an update of Sandia's development of the Scannerless Range Imager technology and applications, and discusses the progress that has been made in evolving the sensor into a compact, low-cost, high-resolution, video-rate Laser Dynamic Range Imager.

  19. Video Transect Images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP): Data from 2000 (NODC Accession 0000728)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (TIF files) from CRAMP surveys taken in 2000 at 23 sites, some of which had multiple depths. Estimates of substrate...

  20. Video Transect Images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP): Data from 2003 (NODC Accession 0001732)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (TIF files) from CRAMP surveys taken in 2003 at 15 sites, some of which had multiple depths. Estimates of substrate...

  1. Video transect images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP): data from year 1999 (NODC Accession 0000671)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (JPEG files) from CRAMP surveys taken in 1999 at 26 sites, some of which had multiple depths. Estimates of substrate...

  2. Video transect images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP): data from 2002 (NODC Accession 0000961)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (TIF files) from CRAMP surveys taken in 2002 at 23 sites, some of which had multiple depths. Estimates of substrate...

  3. Video Transect Images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP): Data from 2003 (NODC Accession 0001732)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (TIF files) from CRAMP surveys taken in 2003 at 15 sites, some of which had multiple depths. Estimates of substrate...

  4. Video transect images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP): data from 2000 (NODC Accession 0000728)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (TIF files) from CRAMP surveys taken in 2000 at 23 sites, some of which had multiple depths. Estimates of substrate...

  5. Video Transect Images from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP): Data from 2002 (NODC Accession 0000961)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (TIF files) from CRAMP surveys taken in 2002 at 23 sites, some of which had multiple depths. Estimates of substrate...

  6. Video Transect Images (1999) from the Hawaii Coral Reef Assessment and Monitoring Program (CRAMP) (NODC Accession 0000671)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of video transect images (JPEG files) from CRAMP surveys taken in 1999 at 26 sites, some of which had multiple depths. Estimates of substrate...

  7. A real-time remote video streaming platform for ultrasound imaging.

    Science.gov (United States)

    Ahmadi, Mehdi; Gross, Warren J; Kadoury, Samuel

    2016-08-01

    Ultrasound is a viable imaging technology in remote and resource-limited areas. Ultrasonography is a user-dependent skill that requires a high degree of training and hands-on experience; however, skilled sonographers are scarce in remote areas. In this work, we aim to develop a real-time video streaming platform that allows specialist physicians to remotely monitor ultrasound exams. To this end, an ultrasound stream is captured and transmitted through a wireless network to remote computers, smartphones and tablets. In addition, the system is equipped with a camera to track the position of the ultrasound probe. The main advantage of our work is the use of an open-source platform for video streaming, which gives us more control over streaming parameters than the available commercial products. The transmission delays of the system were evaluated for several ultrasound video resolutions, and the results show that ultrasound video close to high-definition (HD) resolution can be received and displayed on an Android tablet with a delay of 0.5 seconds, which is acceptable for accurate real-time diagnosis.

  8. High-Performance Motion Estimation for Image Sensors with Video Compression

    Directory of Open Access Journals (Sweden)

    Weizhi Xu

    2015-08-01

    Full Text Available It is important to reduce the time cost of video compression for image sensors in video sensor networks. Motion estimation (ME) is the most time-consuming part of video compression. Previous work on ME exploited intra-frame data reuse in a reference frame to improve the time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed frames are kept on-chip until they are used by the next current frame, to avoid off-chip memory access. On-chip buffers with smart schedules of data access are designed to perform the new data reuse scheme. Three levels of the proposed inter-frame data reuse scheme are presented and analyzed. They give different choices with a tradeoff between off-chip bandwidth requirement and on-chip memory size. All three levels have better data reuse efficiency than their intra-frame counterparts, so off-chip memory traffic is reduced effectively. Compared with the traditional intra-frame data reuse scheme, the new inter-frame scheme reduces memory traffic by 50% for VC-ME.
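
    For reference, the sketch below shows the classic full-search block-matching ME that such data-reuse schemes accelerate: every candidate block read from the reference frame is the off-chip traffic that intra- and inter-frame reuse try to keep on-chip. The block and search-range sizes are illustrative.

    ```python
    # Full-search block-matching motion estimation with SAD (illustrative only;
    # the on-chip buffering scheme itself is not modeled here).
    import numpy as np

    def block_matching(cur, ref, block=16, search=8):
        """Return an array of (dy, dx) motion vectors for each block of `cur`."""
        h, w = cur.shape
        mvs = np.zeros((h // block, w // block, 2), int)
        for by in range(0, h - block + 1, block):
            for bx in range(0, w - block + 1, block):
                target = cur[by:by + block, bx:bx + block].astype(int)
                best, best_mv = None, (0, 0)
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        y, x = by + dy, bx + dx
                        if y < 0 or x < 0 or y + block > h or x + block > w:
                            continue
                        cand = ref[y:y + block, x:x + block].astype(int)
                        sad = np.abs(target - cand).sum()
                        if best is None or sad < best:
                            best, best_mv = sad, (dy, dx)
                mvs[by // block, bx // block] = best_mv
        return mvs

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        ref = rng.integers(0, 256, (64, 64)).astype(np.uint8)
        cur = np.roll(ref, (2, 3), axis=(0, 1))   # cur is ref shifted down 2, right 3
        print(block_matching(cur, ref)[1, 1])     # expect [-2 -3] for an interior block
    ```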

  9. Active millimeter-wave video rate imaging with a staring 120-element microbolometer array

    Science.gov (United States)

    Luukanen, Arttu; Miller, Aaron J.; Grossman, Erich N.

    2004-08-01

    Passive indoors imaging of weapons concealed under clothing poses a formidable challenge for millimeter-wave imagers due to the sub-picowatt signal levels present in the scene. Moreover, video-rate imaging requires a large number of pixels, which leads to a very complex and expensive front end for the imager. To meet the concealed weapons detection challenge, our approach uses a low cost pulsed-noise source as an illuminator and an array of room-temperature antenna-coupled microbolometers as the detectors. The reflected millimeter-wave power is detected by the bolometers, gated, integrated and amplified by audio-frequency amplifiers, and after digitization, displayed in real time on a PC display. We present recently acquired videos obtained with the 120-element array, and comprehensively describe the performance characteristics of the array in terms of sensitivity, optical efficiency, uniformity and spatial resolution. Our results show that active imaging with antenna-coupled microbolometers can yield imagery comparable to that obtained with systems using MMIC amplifiers but with a cost per pixel that is orders of magnitude lower.

  10. CNN intelligent early warning for apple skin lesion image acquired by infrared video sensors

    Institute of Scientific and Technical Information of China (English)

    谭文学

    2016-01-01

    Video sensors and the agricultural IoT (internet of things) are widely used in informationized orchards. To realize intelligent, unattended early warning for diseases and pests, this paper presents a convolutional neural network (CNN) early-warning approach for apple skin lesion images acquired in real time by an infrared video sensor. More specifically, a suite of processing methods is devised to simulate the disturbances of variable orientation and lighting conditions that occur in orchards. A CNN-based method for recognizing apple pathologic images is designed, and a self-adaptive momentum rule is formulated to update the CNN parameters. A series of experiments is carried out on the recognition of apple fruit lesion images for early warning. The results demonstrate that, compared with shallow learning algorithms and other well-known deep learning methods considered, the recognition accuracy of the proposal reaches 96.08%, with fairly quick convergence, and it also shows satisfactory smoothness and stability after convergence. In addition, statistics on different benchmark datasets show that it is also effective on other image patterns of concern.
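
    A minimal sketch of this kind of CNN classifier, trained with a momentum update, is shown below in PyTorch; the architecture, the crude loss-driven momentum adjustment standing in for the paper's self-adaptive momentum rule, and all hyperparameters are assumptions for illustration.

    ```python
    # Small CNN classifier with a loss-driven momentum tweak (illustrative only).
    import torch
    import torch.nn as nn

    class LesionCNN(nn.Module):
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, n_classes)   # for 64x64 inputs

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = LesionCNN()
    criterion = nn.CrossEntropyLoss()
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

    x = torch.randn(8, 3, 64, 64)                 # stand-in for lesion image patches
    y = torch.randint(0, 2, (8,))
    prev_loss = float("inf")
    for epoch in range(5):
        opt.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        opt.step()
        # crude "adaptive" momentum: relax momentum whenever the loss increases
        new_m = 0.9 if loss.item() < prev_loss else 0.5
        for group in opt.param_groups:
            group["momentum"] = new_m
        prev_loss = loss.item()
        print(epoch, round(loss.item(), 4))
    ```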

  11. Automatic Rotation Recovery Algorithm for Accurate Digital Image and Video Watermarks Extraction

    Directory of Open Access Journals (Sweden)

    Nasr addin Ahmed Salem Al-maweri

    2016-11-01

    Full Text Available Research in digital watermarking has evolved rapidly in the current decade. This evolution has brought various methods and algorithms for watermarking digital images and videos. The methods introduced in the field vary from weak to robust, according to how well the watermark withstands attacks. Rotation attacks applied to the watermarked media are among the serious attacks that many, if not most, algorithms cannot survive. In this paper, a new automatic rotation recovery algorithm is proposed. This algorithm can be plugged into the extraction component of any image or video watermarking algorithm. Its main job is to detect the geometric distortion applied to the watermarked image or image sequence, recover the distorted scene to its original state in a blind and automatic way, and then pass it on to the extraction procedure. The work is currently limited to recovering zero-padded rotations; images cropped after rotation are left as future work. The proposed algorithm is tested on top of an extraction component. Both the recovery accuracy and the accuracy of the extracted watermarks showed a high level of performance.
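
    As a simplified illustration of rotation recovery, the sketch below searches for the angle that best re-aligns a zero-padded rotated image. Unlike the proposed algorithm it uses the original image as a reference rather than operating blindly, so it only conveys the idea of estimating and undoing the rotation before watermark extraction.

    ```python
    # Brute-force rotation estimation against a reference image (conceptual
    # stand-in; the paper's method is blind and automatic).
    import numpy as np
    from scipy.ndimage import gaussian_filter, rotate

    def estimate_rotation(rotated, reference, angles=np.arange(-45, 46, 1)):
        best_angle, best_score = 0.0, -np.inf
        for a in angles:
            candidate = rotate(rotated, -a, reshape=False, order=1)
            score = np.corrcoef(candidate.ravel(), reference.ravel())[0, 1]
            if score > best_score:
                best_angle, best_score = a, score
        return best_angle

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        ref = gaussian_filter(rng.random((128, 128)), 3)      # smooth test image
        attacked = rotate(ref, 17, reshape=False, order=1)    # zero-padded rotation
        print("estimated rotation:", estimate_rotation(attacked, ref))   # expect ~17
    ```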

  12. Integration of Video Images and CAD Wireframes for 3d Object Localization

    Science.gov (United States)

    Persad, R. A.; Armenakis, C.; Sohn, G.

    2012-07-01

    The tracking of moving objects from single images has received widespread attention in photogrammetric computer vision and is considered to be at a state of maturity. This paper presents a model-driven solution for localizing moving objects detected from monocular, rotating and zooming video images in a 3D reference frame. To realize such a system, the recovery of 2D to 3D projection parameters is essential. Automatic estimation of these parameters is critical, particularly for pan-tilt-zoom (PTZ) surveillance cameras where parameters change spontaneously upon camera motion. In this work, an algorithm for automated parameter retrieval is proposed. This is achieved by matching linear features between incoming images from video sequences and simple geometric 3D CAD wireframe models of man-made structures. The feature matching scheme uses a hypothesis-and-verify optimization framework referred to as LR-RANSAC. This novel method improves the computational efficiency of the matching process in comparison to the standard RANSAC robust estimator. To demonstrate the applicability and performance of the method, experiments have been performed on indoor and outdoor image sequences under varying conditions with lighting changes and occlusions. Reliability of the matching algorithm has been analyzed by comparing the automatically determined camera parameters with ground truth (GT). Dependability of the retrieved parameters for 3D localization has also been assessed by comparing the 3D positions of moving image objects estimated using the LR-RANSAC-derived parameters with those computed using GT parameters.

  13. Comparison of ultrasound imaging and video otoscopy with cross-sectional imaging for the diagnosis of canine otitis media.

    Science.gov (United States)

    Classen, J; Bruehschwein, A; Meyer-Lindenberg, A; Mueller, R S

    2016-11-01

    Ultrasound imaging (US) of the tympanic bulla (TB) for diagnosis of canine otitis media (OM) is less expensive and less invasive than cross-sectional imaging techniques including computed tomography (CT) and magnetic resonance imaging (MRI). Video otoscopy (VO) is used to clean inflamed ears. The objective of this study was to investigate the diagnostic value of US and VO in OM using cross-sectional imaging as the reference standard. Client owned dogs with clinical signs of OE and/or OM were recruited for the study. Physical, neurological, otoscopic and otic cytological examinations were performed on each dog and both TB were evaluated using US with an 8 MHz micro convex probe, cross-sectional imaging (CT or MRI) and VO. Of 32 dogs enrolled, 24 had chronic otitis externa (OE; five also had clinical signs of OM), four had acute OE without clinical signs of OM, and four had OM without OE. Ultrasound imaging was positive in three of 14 ears, with OM identified on cross-sectional imaging. One US was false positive. Sensitivity, specificity, positive and negative predictive values and accuracy of US were 21%, 98%, 75%, 81% and 81%, respectively. The corresponding values of VO were 91%, 98%, 91%, 98% and 97%, respectively. Video otoscopy could not identify OM in one case, while in another case, although the tympanum was ruptured, the CT was negative. Ultrasound imaging should not replace cross-sectional imaging for the diagnosis of canine OM, but can be helpful, and VO was much more reliable than US. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. The temporomandibular joint in video motion--noninvasive image techniques to present the functional anatomy.

    Science.gov (United States)

    Kordass, B

    1999-01-01

    Presenting the functional anatomy of the temporomandibular joint (TMJ) is difficult when dynamic aspects are of prime interest and must be demonstrated at the highest resolution. Noninvasive techniques such as MRI and sonography are usually available for presenting the function of the temporomandibular joint in video motion. Such images reflect the functional anatomy much better than single pictures or figures could. Combined with computer-aided records of the condyle movements, video motion from MR and sonographic images provides a tool for better understanding the relationships between functional or dysfunctional patterns and the morphological or dysmorphological shape and structure of the temporomandibular joint. The possibilities of such tools are explained and discussed in detail, including the loading effects caused by occlusal pressure transmitted onto the joint compartments. Under pressure, the condyle slides mainly more or less retrocranially, whereas the articular disc assumes a more displaced position and a deformed shape. In a few extreme cases the disc prolapses out of the joint space. These video images offer new aspects for the diagnosis of disc-condyle stability and can also be used for educational programs on the complex relationship between dysfunction and dysmorphology in temporomandibular diseases.

  15. Evaluating Cell Processes, Quality, and Biomarkers in Pluripotent Stem Cells Using Video Bioinformatics.

    Science.gov (United States)

    Zahedi, Atena; On, Vincent; Lin, Sabrina C; Bays, Brett C; Omaiye, Esther; Bhanu, Bir; Talbot, Prue

    2016-01-01

    There is a foundational need for quality control tools in stem cell laboratories engaged in basic research, regenerative therapies, and toxicological studies. These tools require automated methods for evaluating cell processes and quality during in vitro passaging, expansion, maintenance, and differentiation. In this paper, an unbiased, automated high-content profiling toolkit, StemCellQC, is presented that non-invasively extracts information on cell quality and cellular processes from time-lapse phase-contrast videos. Twenty four (24) morphological and dynamic features were analyzed in healthy, unhealthy, and dying human embryonic stem cell (hESC) colonies to identify those features that were affected in each group. Multiple features differed in the healthy versus unhealthy/dying groups, and these features were linked to growth, motility, and death. Biomarkers were discovered that predicted cell processes before they were detectable by manual observation. StemCellQC distinguished healthy and unhealthy/dying hESC colonies with 96% accuracy by non-invasively measuring and tracking dynamic and morphological features over 48 hours. Changes in cellular processes can be monitored by StemCellQC and predictions can be made about the quality of pluripotent stem cell colonies. This toolkit reduced the time and resources required to track multiple pluripotent stem cell colonies and eliminated handling errors and false classifications due to human bias. StemCellQC provided both user-specified and classifier-determined analysis in cases where the affected features are not intuitive or anticipated. Video analysis algorithms allowed assessment of biological phenomena using automatic detection analysis, which can aid facilities where maintaining stem cell quality and/or monitoring changes in cellular processes are essential. In the future StemCellQC can be expanded to include other features, cell types, treatments, and differentiating cells.
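
    The sketch below extracts two simple colony features, growth of the segmented area and total centroid displacement, from a phase-contrast time-lapse; the Otsu segmentation and the particular features are illustrative assumptions and are not StemCellQC's actual 24-feature set.

    ```python
    # Toy morphological/dynamic feature extraction from a time-lapse of one colony.
    import cv2
    import numpy as np

    def colony_features(frames):
        """frames: list of grayscale uint8 images of one colony over time."""
        areas, centroids = [], []
        for frame in frames:
            blur = cv2.GaussianBlur(frame, (5, 5), 0)
            _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                continue
            colony = max(contours, key=cv2.contourArea)
            m = cv2.moments(colony)
            if m["m00"] == 0:
                continue
            areas.append(cv2.contourArea(colony))
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        centroids = np.array(centroids)
        motility = (np.linalg.norm(np.diff(centroids, axis=0), axis=1).sum()
                    if len(centroids) > 1 else 0.0)
        growth = (areas[-1] - areas[0]) / max(areas[0], 1) if areas else 0.0
        return {"growth_rate": growth, "total_displacement": float(motility)}
    ```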

  16. Video-rate imaging of microcirculation with single-exposure oblique back-illumination microscopy

    Science.gov (United States)

    Ford, Tim N.; Mertz, Jerome

    2013-06-01

    Oblique back-illumination microscopy (OBM) is a new technique for simultaneous, independent measurements of phase gradients and absorption in thick scattering tissues based on widefield imaging. To date, OBM has been used with sequential camera exposures, which reduces temporal resolution, and can produce motion artifacts in dynamic samples. Here, a variation of OBM that allows single-exposure operation with wavelength multiplexing and image splitting with a Wollaston prism is introduced. Asymmetric anamorphic distortion induced by the prism is characterized and corrected in real time using a graphics-processing unit. To demonstrate the capacity of single-exposure OBM to perform artifact-free imaging of blood flow, video-rate movies of microcirculation in ovo in the chorioallantoic membrane of the developing chick are presented. Imaging is performed with a high-resolution rigid Hopkins lens suitable for endoscopy.

  17. A METHOD OF IMAGE QUALITY ASSESSMENT FOR COMPRESSIVE SAMPLING VIDEO TRANSMISSION

    Institute of Scientific and Technical Information of China (English)

    Chen Shouning; Zheng Baoyu; Li Jing

    2012-01-01

    Based on a compressive sampling transmission model, we demonstrate a method for evaluating the quality of reconstructed images, which is promising for the transmission of unstructured signals with reduced dimension. With this method, auxiliary information about the quality of the recovered image is obtained and fed back to control the number of measurements taken from the compressive sampling video stream. The number of measurements can therefore be derived easily even in the absence of sparsity information, and the quality of the recovered image is effectively improved. Theoretical and experimental results show that this algorithm can estimate image quality effectively and is in good agreement with traditional objective evaluation algorithms.

  18. Video Retrieval using Histogram and Sift Combined with Graph-based Image Segmentation

    Directory of Open Access Journals (Sweden)

    Bui Ngo Da Thao

    2012-01-01

    Full Text Available Problem statement: Content-Based Video Retrieval (CBVR) is still a hard, open problem because of the semantic gap between low-level and high-level features, the size of the database, the content of keyframes, and the choice of features. In this study we introduce a new approach to this problem based on the Scale-Invariant Feature Transform (SIFT) feature, a new metric, and an object retrieval method. Conclusion/Recommendations: Our algorithm is built on a Content-Based Image Retrieval (CBIR) method in which the keyframe database consists of keyframes detected from the video database by our shot detection method. Experiments show that our approach achieves fairly high accuracy.
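
    A generic version of the keyframe-matching step is sketched below using OpenCV SIFT descriptors and Lowe's ratio test to score query-to-keyframe similarity; the paper's own distance metric, histogram features, and graph-based segmentation are not reproduced.

    ```python
    # SIFT-based keyframe ranking with a ratio-test match count (generic CBIR baseline).
    import cv2

    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)

    def sift_descriptors(gray):
        _, desc = sift.detectAndCompute(gray, None)
        return desc

    def match_score(query_desc, frame_desc, ratio=0.75):
        """Number of descriptor matches that pass Lowe's ratio test."""
        if query_desc is None or frame_desc is None:
            return 0
        matches = matcher.knnMatch(query_desc, frame_desc, k=2)
        return sum(1 for pair in matches
                   if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance)

    def rank_keyframes(query_gray, keyframes_gray):
        """Return keyframe indices sorted by descending similarity to the query image."""
        q = sift_descriptors(query_gray)
        scores = [match_score(q, sift_descriptors(f)) for f in keyframes_gray]
        return sorted(range(len(scores)), key=lambda i: -scores[i])
    ```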

  19. A Novel Method to Get Super-Resolution Images from Low-Resolution Compressed Video

    Institute of Scientific and Technical Information of China (English)

    ZHOU Liang; ZHU Xiu-chang

    2005-01-01

    To address the unsatisfactory restoration quality and limited scope of application of current compressed-video restoration algorithms, this paper proposes a novel method for obtaining super-resolution images from low-resolution compressed video. First, a unified model is presented and the restoration problem is formulated in a Bayesian framework under the MAP criterion; the focus is then placed on hybrid motion-compensation and transform-coding schemes, and finally methods for obtaining the parameters are provided. Simulation results clearly demonstrate that, under the same conditions, our method not only produces better visual quality and has a wider scope of application, but also outperforms current classical algorithms in terms of Peak Signal-to-Noise Ratio (PSNR).

  20. Towards Realising Secure and Efficient Image and Video Processing Applications on Quantum Computers

    Directory of Open Access Journals (Sweden)

    Abdullah M. Iliyasu

    2013-07-01

    Full Text Available Exploiting the promise of security and efficiency that quantum computing offers, the basic foundations leading to commercial applications for quantum image processing are proposed. Two mathematical frameworks and algorithms to accomplish the watermarking of quantum images, authentication of ownership of already watermarked images, and recovery of their unmarked versions on quantum computers are proposed. Encoding the images as 2n-sized normalised Flexible Representation of Quantum Images (FRQI) states, with n qubits and 1 qubit dedicated to capturing the information about the colour and position of every pixel in the image, respectively, the proposed algorithms utilise the flexibility inherent to the FRQI representation in order to confine the transformations on an image to any predetermined chromatic or spatial (or a combination of both) content of the image, as dictated by the watermark embedding, authentication or recovery circuits. Furthermore, by adopting an apt generalisation of the criteria required to realise physical quantum computing hardware, three standalone components that make up the framework to prepare, manipulate and recover the various contents required to represent and produce movies on quantum computers are also proposed. Each of the algorithms and the mathematical foundations for their execution were simulated using classical (i.e., conventional or non-quantum) computing resources, and their results were analysed alongside other longstanding classical computing equivalents. The work presented here, combined with the extensions suggested, provides the basic foundations towards effectuating secure and efficient classical-like image and video processing applications on the quantum-computing framework.

  1. Lock-in-detection-free line-scan stimulated Raman scattering microscopy for near video-rate Raman imaging.

    Science.gov (United States)

    Wang, Zi; Zheng, Wei; Huang, Zhiwei

    2016-09-01

    We report on the development of a unique lock-in-detection-free line-scan stimulated Raman scattering microscopy technique based on a linear detector with a large full well capacity controlled by a field-programmable gate array (FPGA) for near video-rate Raman imaging. With the use of parallel excitation and detection scheme, the line-scan SRS imaging at 20 frames per second can be acquired with a ∼5-fold lower excitation power density, compared to conventional point-scan SRS imaging. The rapid data communication between the FPGA and the linear detector allows a high line-scanning rate to boost the SRS imaging speed without the need for lock-in detection. We demonstrate this lock-in-detection-free line-scan SRS imaging technique using the 0.5 μm polystyrene and 1.0 μm poly(methyl methacrylate) beads mixed in water, as well as living gastric cancer cells.

  2. Screen-imaging guidance using a modified portable video macroscope for middle cerebral artery occlusion

    Institute of Scientific and Technical Information of China (English)

    Xingbao Zhu; Xinghua Pan; Junli Luo; Yun Liu; Guolong Chen; Song Liu; Qiangjin Ruan; Xunding Deng; Dianchun Wang; Quanshui Fan

    2012-01-01

    The use of operating microscopes is limited by the focal length. Surgeons using these instruments cannot simultaneously view and access the surgical field and must choose one or the other. The longer focal length (more than 1 000 mm) of an operating telescope permits a position away from the operating field, above the surgeon and out of the field of view. This gives the telescope an advantage over an operating microscope. We developed a telescopic system using screen-imaging guidance and a modified portable video macroscope constructed from a Computar MLH-10 × macro lens, a DFK-21AU04 USB CCD Camera and a Dell laptop computer as monitor screen. This system was used to establish a middle cerebral artery occlusion model in rats. Results showed that magnification of the modified portable video macroscope was appropriate (5-20 ×) even though the Computar MLH-10 × macro lens was placed 800 mm away from the operating field rather than at the specified working distance of 152.4 mm with a zoom of 1-40 ×. The screen-imaging telescopic technique was clear, life-like, stereoscopic and matched the actual operation. Screen-imaging guidance led to an accurate, smooth, minimally invasive and comparatively easy surgical procedure. The success rate of model establishment, evaluated by neurological function using the modified neurological score system, was 74.07%. There was no significant difference in model establishment time, sensorimotor deficit and infarct volume percentage. Our findings indicate that the telescopic lens is effective in the screen surgical operation mode referred to as "long distance observation and short distance operation" and that screen-imaging guidance using a modified portable video macroscope can be utilized for the establishment of a middle cerebral artery occlusion model and for micro-neurosurgery.

  3. Screen-imaging guidance using a modified portable video macroscope for middle cerebral artery occlusion.

    Science.gov (United States)

    Zhu, Xingbao; Luo, Junli; Liu, Yun; Chen, Guolong; Liu, Song; Ruan, Qiangjin; Deng, Xunding; Wang, Dianchun; Fan, Quanshui; Pan, Xinghua

    2012-04-25

    The use of operating microscopes is limited by the focal length. Surgeons using these instruments cannot simultaneously view and access the surgical field and must choose one or the other. The longer focal length (more than 1 000 mm) of an operating telescope permits a position away from the operating field, above the surgeon and out of the field of view. This gives the telescope an advantage over an operating microscope. We developed a telescopic system using screen-imaging guidance and a modified portable video macroscope constructed from a Computar MLH-10 × macro lens, a DFK-21AU04 USB CCD Camera and a Dell laptop computer as monitor screen. This system was used to establish a middle cerebral artery occlusion model in rats. Results showed that magnification of the modified portable video macroscope was appropriate (5-20 ×) even though the Computar MLH-10 × macro lens was placed 800 mm away from the operating field rather than at the specified working distance of 152.4 mm with a zoom of 1-40 ×. The screen-imaging telescopic technique was clear, life-like, stereoscopic and matched the actual operation. Screen-imaging guidance led to an accurate, smooth, minimally invasive and comparatively easy surgical procedure. The success rate of model establishment, evaluated by neurological function using the modified neurological score system, was 74.07%. There was no significant difference in model establishment time, sensorimotor deficit and infarct volume percentage. Our findings indicate that the telescopic lens is effective in the screen surgical operation mode referred to as "long distance observation and short distance operation" and that screen-imaging guidance using a modified portable video macroscope can be utilized for the establishment of a middle cerebral artery occlusion model and for micro-neurosurgery.

  4. A flexible software architecture for scalable real-time image and video processing applications

    Science.gov (United States)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2012-06-01

    Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility because they are normally oriented towards particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty for reuse and inefficient execution on multicore processors. This paper presents a novel software architecture for real-time image and video processing applications which addresses these issues. The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the application layer. The platform abstraction layer provides a high level application programming interface for the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic publish/subscribe pattern. A topic-based filtering in which messages are published to topics is used to route the messages from the publishers to the subscribers interested in a particular type of messages. The application layer provides a repository for reusable application modules designed for real-time image and video processing applications. These modules, which include acquisition, visualization, communication, user interface and data processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed architecture.
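
    An in-process sketch of the topic-based publish/subscribe pattern used by the messaging layer is given below; it shows only the routing idea (handlers registered for a topic receive the messages published to it) and not the architecture's actual API, threading model, or inter-process transport.

    ```python
    # Minimal in-process topic-based publish/subscribe bus (illustrative only).
    from collections import defaultdict

    class MessageBus:
        """Routes published messages to the handlers subscribed to their topic."""

        def __init__(self):
            self._subscribers = defaultdict(list)   # topic -> list of handler callables

        def subscribe(self, topic, handler):
            self._subscribers[topic].append(handler)

        def publish(self, topic, message):
            # topic-based filtering: only handlers registered for this topic run
            for handler in self._subscribers.get(topic, []):
                handler(message)

    if __name__ == "__main__":
        bus = MessageBus()
        bus.subscribe("frames/raw", lambda msg: print("processing frame", msg["id"]))
        bus.subscribe("frames/raw", lambda msg: print("visualizing frame", msg["id"]))
        bus.publish("frames/raw", {"id": 42, "data": b"..."})
    ```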

  5. Reflection imaging in the millimeter-wave range using a video-rate terahertz camera

    Science.gov (United States)

    Marchese, Linda E.; Terroux, Marc; Doucet, Michel; Blanchard, Nathalie; Pancrati, Ovidiu; Dufour, Denis; Bergeron, Alain

    2016-05-01

    The ability of millimeter waves (1-10 mm, or 30-300 GHz) to penetrate through dense materials, such as leather, wool, wood and gyprock, and to also transmit over long distances due to low atmospheric absorption, makes them ideal for numerous applications, such as body scanning, building inspection and seeing in degraded visual environments. Current drawbacks of millimeter-wave imaging systems are that they use single-detector or linear arrays that require scanning, or two-dimensional arrays that are bulky, often consisting of rather large antenna-coupled focal plane arrays (FPAs). Previous work from INO has demonstrated the capability of its compact lightweight camera, based on a 384 x 288 microbolometer pixel FPA with custom optics, for active video-rate imaging at wavelengths of 118 μm (2.54 THz), 432 μm (0.69 THz), 663 μm (0.45 THz), and 750 μm (0.4 THz). Most of the work focused on transmission imaging, as a first step, but some preliminary demonstrations of reflection imaging at these wavelengths were also reported. In addition, previous work also showed that the broadband FPA remains sensitive to wavelengths at least up to 3.2 mm (94 GHz). The work presented here demonstrates the ability of the INO terahertz camera for reflection imaging at millimeter wavelengths. Video-rate snapshots of objects show the excellent quality of the images. In addition, a description of the imaging system, which includes the terahertz camera and different millimeter-wave sources, is provided.

  6. Study of a prototype high quantum efficiency thick scintillation crystal video-electronic portal imaging device.

    Science.gov (United States)

    Samant, Sanjiv S; Gopal, Arun

    2006-08-01

    Image quality in portal imaging suffers significantly from the loss in contrast and spatial resolution that results from the excessive Compton scatter associated with megavoltage x rays. In addition, portal image quality is further reduced by the poor quantum efficiency (QE) of current electronic portal imaging devices (EPIDs). Commercial video-camera-based EPIDs, or VEPIDs, which utilize a thin phosphor screen in conjunction with a metal buildup plate to convert the incident x rays to light, suffer from reduced light production due to low QE, which in turn limits the achievable detective quantum efficiency (DQE). A theoretical expression for DQE(0) was developed to be used as a predictive model to propose improvements in the optics associated with light detection. The prototype thick scintillation crystal (TSC) device provides DQE(0) = 0.02 with its current imaging geometry, which is an order of magnitude greater than that of commercial VEPID systems and comparable to flat-panel imaging systems. Following optimization of the imaging geometry and the use of a high-end, cooled charge-coupled-device (CCD) camera system, the performance of the TSC is expected to improve even further. Based on our theoretical model, the expected DQE(0) = 0.12 for the TSC system with the proposed improvements, which exceeds the performance of current flat-panel EPIDs. The prototype TSC provides high-quality imaging even at sub-MU exposures (the typical imaging dose is 0.2 MU per image), which offers the potential for daily patient localization imaging without increasing the weekly dose to the patient. Currently, the TSC is capable of limited frame-rate fluoroscopy for intra-treatment visualization of patient motion at approximately 3 frames/second, since the achievable frame rate is significantly reduced by the limitations of the camera-control processor. With optimized processor control, the TSC is expected to be capable of intra-treatment imaging exceeding 10 frames/second to monitor patient motion.

  7. Video and thermal imaging system for monitoring interiors of high temperature reaction vessels

    Science.gov (United States)

    Saveliev, Alexei V [Chicago, IL; Zelepouga, Serguei A [Hoffman Estates, IL; Rue, David M [Chicago, IL

    2012-01-10

    A system and method for real-time monitoring of the interior of a combustor or gasifier wherein light emitted by the interior surface of a refractory wall of the combustor or gasifier is collected using an imaging fiber optic bundle having a light receiving end and a light output end. Color information in the light is captured with primary color (RGB) filters or complementary color (GMCY) filters placed over individual pixels of color sensors disposed within a digital color camera in a Bayer mosaic layout, producing RGB signal outputs or GMCY signal outputs. The signal outputs are processed using intensity ratios of the primary color filters or the complementary color filters, producing video images and/or thermal images of the interior of the combustor or gasifier.

  8. On-the-fly learning for visual search of large-scale image and video datasets.

    Science.gov (United States)

    Chatfield, Ken; Arandjelović, Relja; Parkhi, Omkar; Zisserman, Andrew

    The objective of this work is to visually search large-scale video datasets for semantic entities specified by a text query. The paradigm we explore is constructing visual models for such semantic entities on-the-fly, i.e. at run time, by using an image search engine to source visual training data for the text query. The approach combines fast and accurate learning and retrieval, and enables videos to be returned within seconds of specifying a query. We describe three classes of queries, each with its associated visual search method: object instances (using a bag of visual words approach for matching); object categories (using a discriminative classifier for ranking key frames); and faces (using a discriminative classifier for ranking face tracks). We discuss the features suitable for each class of query, for example Fisher vectors or features derived from convolutional neural networks (CNNs), and how these choices impact on the trade-off between three important performance measures for a real-time system of this kind, namely: (1) accuracy, (2) memory footprint, and (3) speed. We also discuss and compare a number of important implementation issues, such as how to remove 'outliers' in the downloaded images efficiently, and how to best obtain a single descriptor for a face track. We also sketch the architecture of the real-time on-the-fly system. Quantitative results are given on a number of large-scale image and video benchmarks (e.g.  TRECVID INS, MIRFLICKR-1M), and we further demonstrate the performance and real-world applicability of our methods over a dataset sourced from 10,000 h of unedited footage from BBC News, comprising 5M+ key frames.
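
    The paradigm can be sketched as follows: gather positive examples for the text query, extract fixed-length features, train a linear classifier, and rank video key frames by classifier score. The feature extractor here is an explicit placeholder, and the linear SVM from scikit-learn is only one plausible choice; neither is claimed to be the system's actual component.

    ```python
    # Conceptual on-the-fly visual search: train a linear model at query time
    # from search-engine images, then rank key frames by classifier score.
    import numpy as np
    from sklearn.svm import LinearSVC

    def extract_features(images):
        """Placeholder for CNN / Fisher-vector features (random projection plus mean)."""
        rng = np.random.default_rng(0)
        return np.stack([rng.standard_normal(256) + img.mean() for img in images])

    def train_on_the_fly(query_images, background_images):
        X = np.vstack([extract_features(query_images), extract_features(background_images)])
        y = np.concatenate([np.ones(len(query_images)), np.zeros(len(background_images))])
        clf = LinearSVC(C=1.0)
        clf.fit(X, y)
        return clf

    def rank_keyframes(clf, keyframes):
        scores = clf.decision_function(extract_features(keyframes))
        return np.argsort(-scores)        # most query-like key frames first
    ```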

  9. Compression of compound images and video for enabling rich media in embedded systems

    Science.gov (United States)

    Said, Amir

    2004-01-01

    It is possible to improve the features supported by devices with embedded systems by increasing the processor computing power, but this always results in higher costs, complexity, and power consumption. An interesting alternative is to use the growing networking infrastructures to do remote processing and visualization, with the embedded system mainly responsible for communications and user interaction. This enables devices to appear much more "intelligent" to users, at very low cost and power. In this article we explain how compression can make some of these solutions more bandwidth-efficient, enabling devices to simply decompress very rich graphical information and user interfaces that have been rendered elsewhere. The mixture of natural images and video with text, graphics, and animations simultaneously in the same frame is called compound video. We present a new method for compression of compound images and video, which is able to efficiently identify the different components during compression and use an appropriate coding method for each. Our system uses lossless compression for graphics and text, and, on natural images and highly detailed parts, it uses lossy compression with dynamically varying quality. Since it was designed for embedded systems with very limited resources, it has a small executable size and low complexity for classification, compression and decompression. Other compression methods (e.g., MPEG) can do the same, but are very inefficient for compound content. High-level graphics languages can be bandwidth-efficient, but are much less reliable (e.g., supporting Asian fonts), and are many orders of magnitude more complex. Numerical tests show the very significant gains in compression achieved by these systems.
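
    The block-classification idea can be illustrated as below: blocks with few distinct colours are treated as text/graphics and stored losslessly, while busy natural-image blocks are stored lossily. The block size, colour-count threshold, and the use of PNG/JPEG as the two coders are assumptions for illustration, not the paper's actual coding methods.

    ```python
    # Toy compound-content compression: per-block lossless/lossy decision.
    import cv2
    import numpy as np

    def compress_compound(image_bgr, block=16, max_colors=16, jpeg_quality=80):
        """Return a list of (y, x, kind, encoded_bytes) tuples for each block."""
        h, w = image_bgr.shape[:2]
        encoded = []
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                tile = image_bgr[y:y + block, x:x + block]
                n_colors = len(np.unique(tile.reshape(-1, 3), axis=0))
                if n_colors <= max_colors:                        # text / graphics block
                    ok, buf = cv2.imencode(".png", tile)
                    kind = "lossless"
                else:                                             # natural-image block
                    ok, buf = cv2.imencode(".jpg", tile,
                                           [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
                    kind = "lossy"
                if ok:
                    encoded.append((y, x, kind, buf.tobytes()))
        return encoded
    ```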

  10. Determination of quasi-static microaccelerations onboard a satellite using video images of moving objects

    Science.gov (United States)

    Levtov, V. L.; Romanov, V. V.; Boguslavsky, A. A.; Sazonov, V. V.; Sokolov, S. M.; Glotov, Yu. N.

    2009-12-01

    A space experiment aimed at determination of quasi-static microaccelerations onboard an artificial satellite of the Earth using video images of the objects executing free motion is considered. The experiment was carried out onboard the Foton M-3 satellite. Several pellets moved in a cubic box fixed on the satellite’s mainframe and having two transparent adjacent walls. Their motion was photographed by a digital video camera. The camera was installed facing one of the transparent walls; a mirror was placed at an angle to another transparent wall. Such an optical system allowed us to have in a single frame two images of the pellets from differing viewpoints. The motion of the pellets was photographed on time intervals lasting 96 s. Pauses between these intervals were also equal to 96 s. A special processing of a separate image allowed us to determine coordinates of the pellet centers in the camera’s coordinate system. The sequence of frames belonging to a continuous interval of photography was processed in the following way. The time dependence of each coordinate of every pellet was approximated by a second degree polynomial using the least squares method. The coefficient of squared time is equal to a half of the corresponding microacceleration component. As has been shown by processing made, the described method of determination of quasi-static microaccelerations turned out to be sufficiently sensitive and accurate.
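
    The estimation step described above reduces to a least-squares quadratic fit per coordinate, with the acceleration component equal to twice the fitted quadratic coefficient; a sketch with synthetic pellet trajectories (the data are purely illustrative) is shown below.

    ```python
    # Quadratic least-squares fit of pellet trajectories: acceleration = 2 * c2.
    import numpy as np

    def microacceleration(times, coords):
        """times: (N,) seconds; coords: (N, 3) pellet centre coordinates in metres."""
        accel = []
        for axis in range(coords.shape[1]):
            c2, _, _ = np.polyfit(times, coords[:, axis], deg=2)
            accel.append(2.0 * c2)               # x(t) ~ x0 + v*t + 0.5*a*t^2
        return np.array(accel)

    if __name__ == "__main__":
        t = np.linspace(0.0, 96.0, 200)          # one 96 s imaging interval
        a_true = np.array([2e-6, -1e-6, 5e-7])   # quasi-static acceleration, m/s^2
        xyz = 0.5 * a_true * t[:, None] ** 2 + 0.01 * t[:, None] + 0.05
        print(microacceleration(t, xyz))          # recovers a_true
    ```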

  11. Performance measure of image and video quality assessment algorithms: subjective root-mean-square error

    Science.gov (United States)

    Nuutinen, Mikko; Virtanen, Toni; Häkkinen, Jukka

    2016-03-01

    Evaluating algorithms used to assess image and video quality requires performance measures. Traditional performance measures (e.g., Pearson's linear correlation coefficient, Spearman's rank-order correlation coefficient, and root mean square error) compare quality predictions of algorithms to subjective mean opinion scores (mean opinion score/differential mean opinion score). We propose a subjective root-mean-square error (SRMSE) performance measure for evaluating the accuracy of algorithms used to assess image and video quality. The SRMSE performance measure takes into account dispersion between observers. The other important property of the SRMSE performance measure is its measurement scale, which is calibrated to units of the number of average observers. The results of the SRMSE performance measure indicate the extent to which the algorithm can replace the subjective experiment (as the number of observers). Furthermore, we have presented the concept of target values, which define the performance level of the ideal algorithm. We have calculated the target values for all sample sets of the CID2013, CVD2014, and LIVE multiply distorted image quality databases. The target values and MATLAB implementation of the SRMSE performance measure are available on the project page of this study.

  12. Applying GA for Optimizing the User Query in Image and Video Retrieval

    Directory of Open Access Journals (Sweden)

    Ehsan Lotfi

    2014-12-01

    Full Text Available In an information retrieval system, the query can be made by a user sketch. The new method presented here optimizes the user sketch and applies the optimized query to retrieve the information. This optimization may be used in Content-Based Image Retrieval (CBIR) and Content-Based Video Retrieval (CBVR), which is based on trajectory extraction. To optimize the retrieval process, one stage of retrieval is performed with the user sketch. The retrieval criterion is based on the proposed distance metric from the user query. The retrieved answers are taken as the primary population for evolutionary optimization. The optimized query is then obtained by reproduction and by minimizing the proposed measure using a Genetic Algorithm (GA). The optimized query can then be used to retrieve concepts from a given Data Base (DB). The proposed algorithms are evaluated for trajectory retrieval from urban traffic surveillance video and image retrieval from a DB. Practical implementations have demonstrated the high efficiency of this system in trajectory retrieval and image indexing.
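
    A toy version of the evolutionary refinement is sketched below: the first-round retrieved feature vectors seed the population, and the query is evolved to minimize its mean distance to them. The distance measure, crossover, mutation rate, and generation count are illustrative assumptions, not the paper's settings.

    ```python
    # Toy GA refinement of a sketch-based query vector.
    import numpy as np

    rng = np.random.default_rng(3)

    def fitness(query, retrieved):
        return -np.mean(np.linalg.norm(retrieved - query, axis=1))   # higher is better

    def optimize_query(retrieved, generations=50, mutation_sigma=0.05):
        population = retrieved.copy()               # primary population = retrieved answers
        for _ in range(generations):
            scores = np.array([fitness(q, retrieved) for q in population])
            parents = population[np.argsort(-scores)[: len(population) // 2]]
            # crossover: average random pairs of parents
            pairs = rng.integers(0, len(parents), (len(population), 2))
            children = (parents[pairs[:, 0]] + parents[pairs[:, 1]]) / 2.0
            # mutation: small Gaussian perturbation
            population = children + rng.normal(0, mutation_sigma, children.shape)
        scores = np.array([fitness(q, retrieved) for q in population])
        return population[np.argmax(scores)]

    if __name__ == "__main__":
        retrieved = rng.normal(0, 1, (20, 8))        # feature vectors of first-round hits
        print(optimize_query(retrieved).round(2))
    ```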

  13. Performance characterization of image and video analysis systems at Siemens Corporate Research

    Science.gov (United States)

    Ramesh, Visvanathan; Jolly, Marie-Pierre; Greiffenhagen, Michael

    2000-06-01

    There has been a significant increase in commercial products using imaging analysis techniques to solve real-world problems in diverse fields such as manufacturing, medical imaging, document analysis, transportation and public security, etc. This has been accelerated by various factors: more advanced algorithms, the availability of cheaper sensors, and faster processors. While algorithms continue to improve in performance, a major stumbling block in translating improvements in algorithms to faster deployment of image analysis systems is the lack of characterization of limits of algorithms and how they affect total system performance. The research community has realized the need for performance analysis and there have been significant efforts in the last few years to remedy the situation. Our efforts at SCR have been on statistical modeling and characterization of modules and systems. The emphasis is on both white-box and black box methodologies to evaluate and optimize vision systems. In the first part of this paper we review the literature on performance characterization and then provide an overview of the status of research in performance characterization of image and video understanding systems. The second part of the paper is on performance evaluation of medical image segmentation algorithms. Finally, we highlight some research issues in performance analysis in medical imaging systems.

  14. Overview of image processing tools to extract physical information from JET videos

    Science.gov (United States)

    Craciunescu, T.; Murari, A.; Gelfusa, M.; Tiseanu, I.; Zoita, V.; EFDA Contributors, JET

    2014-11-01

    In magnetic confinement nuclear fusion devices such as JET, the last few years have witnessed a significant increase in the use of digital imagery, not only for the surveying and control of experiments, but also for the physical interpretation of results. More than 25 cameras are routinely used for imaging on JET in the infrared (IR) and visible spectral regions. These cameras can produce up to tens of Gbytes per shot and their information content can be very different, depending on the experimental conditions. However, the relevant information about the underlying physical processes is generally of much reduced dimensionality compared to the recorded data. The extraction of this information, which allows full exploitation of these diagnostics, is a challenging task. The image analysis consists, in most cases, of inverse problems which are typically ill-posed mathematically. The typology of objects to be analysed is very wide, and usually the images are affected by noise, low levels of contrast, low grey-level in-depth resolution, reshaping of moving objects, etc. Moreover, the plasma events have time constants of ms or tens of ms, which imposes tough conditions for real-time applications. On JET, in the last few years new tools and methods have been developed for physical information retrieval. The methodology of optical flow has allowed, under certain assumptions, the derivation of information about the dynamics of video objects associated with different physical phenomena, such as instabilities, pellets and filaments. The approach has been extended in order to approximate the optical flow within the MPEG compressed domain, allowing the manipulation of the large JET video databases and, in specific cases, even real-time data processing. The fast visible camera may provide new information that is potentially useful for disruption prediction. A set of methods, based on the extraction of structural information from the visual scene, have been developed for the
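
    As an example of the optical-flow analysis mentioned above, the sketch below computes dense Farneback flow between two consecutive frames with OpenCV; the parameters are generic defaults, and the MPEG-compressed-domain approximation used on JET is not reproduced here.

    ```python
    # Dense optical flow between two consecutive camera frames (generic parameters).
    import cv2
    import numpy as np

    def dense_flow(prev_gray, next_gray):
        """Return per-pixel (dx, dy) displacement between two grayscale frames."""
        return cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                            pyr_scale=0.5, levels=3, winsize=15,
                                            iterations=3, poly_n=5, poly_sigma=1.2,
                                            flags=0)

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        prev = rng.integers(0, 256, (128, 128)).astype(np.uint8)
        nxt = np.roll(prev, 3, axis=1)               # simulate horizontal motion
        flow = dense_flow(prev, nxt)
        print("median displacement:", np.median(flow[..., 0]), np.median(flow[..., 1]))
    ```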

  15. SVD-based quality metric for image and video using machine learning.

    Science.gov (United States)

    Narwaria, Manish; Lin, Weisi

    2012-04-01

    We study the use of machine learning for visual quality evaluation with comprehensive singular value decomposition (SVD)-based visual features. In this paper, the two-stage process and the relevant work in the existing visual quality metrics are first introduced followed by an in-depth analysis of SVD for visual quality assessment. Singular values and vectors form the selected features for visual quality assessment. Machine learning is then used for the feature pooling process and demonstrated to be effective. This is to address the limitations of the existing pooling techniques, like simple summation, averaging, Minkowski summation, etc., which tend to be ad hoc. We advocate machine learning for feature pooling because it is more systematic and data driven. The experiments show that the proposed method outperforms the eight existing relevant schemes. Extensive analysis and cross validation are performed with ten publicly available databases (eight for images with a total of 4042 test images and two for video with a total of 228 videos). We use all publicly accessible software and databases in this study, as well as making our own software public, to facilitate comparison in future research.
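
    A minimal sketch of the overall idea, assuming per-block singular-value differences as features and a support vector regressor as the learned pooling stage, is given below; the block size, feature definition, and regressor are illustrative choices rather than the paper's exact design.

    ```python
    # SVD-based features pooled by a learned regressor (illustrative sketch).
    import numpy as np
    from sklearn.svm import SVR

    def svd_features(ref, dist, block=32):
        """Per-block singular-value differences, concatenated into one feature vector."""
        feats = []
        for y in range(0, ref.shape[0] - block + 1, block):
            for x in range(0, ref.shape[1] - block + 1, block):
                s_ref = np.linalg.svd(ref[y:y + block, x:x + block], compute_uv=False)
                s_dis = np.linalg.svd(dist[y:y + block, x:x + block], compute_uv=False)
                feats.append(np.abs(s_ref - s_dis))
        return np.concatenate(feats)

    def train_metric(pairs, mos_scores):
        """pairs: list of (reference, distorted) same-size grayscale images; mos_scores: subjective scores."""
        X = np.stack([svd_features(r, d) for r, d in pairs])
        model = SVR(kernel="rbf", C=10.0)
        model.fit(X, mos_scores)
        return model
    ```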

  16. Color Image and Video Compression Based on Direction Adaptive Partitioned Discrete Wavelet Transform

    Directory of Open Access Journals (Sweden)

    K. Dinesh

    2012-12-01

    Full Text Available The main objective of this study is to use the Direction-Adaptive Partitioned Block Transform (DA-PBT) for compressing color images and videos. It is the same as the direction-adaptive block transform, but it also has an additional direction-adaptive block partitioning step to improve energy concentration. The selection of a directional mode determines the transform direction that provides directional basis functions; it reduces complexity and yields more efficient coefficient ordering for entropy coding. For image coding, the DA-PBT significantly outperforms the directional DCT. As a block transform, the DA-PBT can be directly incorporated into prediction-based video coding standards to work with block-based intra prediction as well as block-based motion-compensated interframe prediction. The performance of the DA-PBT is compared with the 2D-DCT using the Peak Signal-to-Noise Ratio (PSNR) and Compression Ratio (CR). The experimental results show that the DA-PBT performs better than the 2D-DCT.
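
    The two figures of merit quoted above are standard; a minimal sketch of how they could be computed is given below (it does not implement the DA-PBT itself, and the peak value of 255 assumes 8-bit images).

      # PSNR and compression ratio, the two metrics used in the comparison.
      import numpy as np

      def psnr(original, reconstructed, peak=255.0):
          mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
          return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

      def compression_ratio(raw_bytes, compressed_bytes):
          return raw_bytes / compressed_bytes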

  17. Overview on Selective Encryption of Image and Video: Challenges and Perspectives

    Directory of Open Access Journals (Sweden)

    Massoudi A

    2008-01-01

    Full Text Available In traditional image and video content protection schemes, called fully layered, the whole content is first compressed. Then, the compressed bitstream is entirely encrypted using a standard cipher (DES, AES, IDEA, etc.). The specific characteristics of this kind of data (high transmission rate with limited bandwidth) make standard encryption algorithms inadequate. Another limitation of fully layered systems consists of altering the whole bitstream syntax, which may disable some codec functionalities. Selective encryption is a new trend in image and video content protection. It consists of encrypting only a subset of the data. The aim of selective encryption is to reduce the amount of data to encrypt while preserving a sufficient level of security. This computation saving is very desirable, especially in constrained communications (real-time networking, high-definition delivery, and mobile communications with limited computational power devices). In addition, selective encryption allows preserving some codec functionalities such as scalability. This tutorial is intended to give an overview of selective encryption algorithms. The theoretical background of selective encryption, potential applications, challenges, and perspectives is presented.
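
    The core idea (encrypting only a subset of the bitstream) can be illustrated with a toy sketch: below, only every eighth byte of a compressed payload is encrypted with AES-CTR, leaving the rest in the clear. Real selective-encryption schemes select syntactically meaningful elements (e.g. DC coefficients, sign bits or intra frames) rather than fixed byte positions; the byte-level selection here is purely illustrative and assumes the pycryptodome package is available.

      # Toy illustration of selective encryption: encrypt roughly 1/8 of the bytes.
      from Crypto.Cipher import AES
      from Crypto.Random import get_random_bytes

      def selective_encrypt(bitstream: bytes, key: bytes, nonce: bytes, step: int = 8) -> bytes:
          selected = bytes(bitstream[i] for i in range(0, len(bitstream), step))
          encrypted = AES.new(key, AES.MODE_CTR, nonce=nonce).encrypt(selected)
          out = bytearray(bitstream)
          for j, i in enumerate(range(0, len(bitstream), step)):
              out[i] = encrypted[j]          # only the selected positions change
          return bytes(out)

      key, nonce = get_random_bytes(16), get_random_bytes(8)
      payload = get_random_bytes(1024)       # stand-in for a compressed bitstream
      protected = selective_encrypt(payload, key, nonce)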

  18. Realization of a video-rate distributed aperture millimeter-wave imaging system using optical upconversion

    Science.gov (United States)

    Schuetz, Christopher; Martin, Richard; Dillon, Thomas; Yao, Peng; Mackrides, Daniel; Harrity, Charles; Zablocki, Alicia; Shreve, Kevin; Bonnett, James; Curt, Petersen; Prather, Dennis

    2013-05-01

    Passive imaging using millimeter waves (mmWs) has many advantages and applications in the defense and security markets. All terrestrial bodies emit mmW radiation and these wavelengths are able to penetrate smoke, fog/clouds/marine layers, and even clothing. One primary obstacle to imaging in this spectrum is that longer wavelengths require larger apertures to achieve the resolutions desired for many applications. Accordingly, lens-based focal plane systems and scanning systems tend to require large aperture optics, which increases the size and weight of such systems beyond what many applications can support. To overcome this limitation, a distributed aperture detection scheme is used in which the effective aperture size can be increased without the associated volumetric increase in imager size. This distributed aperture system is realized through conversion of the received mmW energy into sidebands on an optical carrier. This conversion serves, in essence, to scale the mmW sparse aperture array signals onto a complementary optical array. The sidebands are subsequently stripped from the optical carrier and recombined to provide a real-time snapshot of the mmW signal. Using this technique, we have constructed a real-time, video-rate imager operating at 75 GHz. A distributed aperture consisting of 220 upconversion channels is used to realize 2.5k pixels with passive sensitivity. Details of the construction and operation of this imager as well as field testing results will be presented herein.

  19. Cherenkov Video Imaging Allows for the First Visualization of Radiation Therapy in Real Time

    Energy Technology Data Exchange (ETDEWEB)

    Jarvis, Lesley A., E-mail: Lesley.a.jarvis@hitchcock.org [Department of Medicine, Geisel School of Medicine at Dartmouth College, Hanover, New Hampshire (United States); Norris Cotton Cancer Center at the Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire (United States); Zhang, Rongxiao [Department of Physics and Astronomy, Dartmouth College, Hanover, New Hampshire (United States); Gladstone, David J. [Department of Medicine, Geisel School of Medicine at Dartmouth College, Hanover, New Hampshire (United States); Norris Cotton Cancer Center at the Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire (United States); Jiang, Shudong [Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire (United States); Hitchcock, Whitney [Geisel School of Medicine at Dartmouth College, Hanover, New Hampshire (United States); Friedman, Oscar D.; Glaser, Adam K.; Jermyn, Michael [Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire (United States); Pogue, Brian W. [Department of Physics and Astronomy, Dartmouth College, Hanover, New Hampshire (United States); Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire (United States)

    2014-07-01

    Purpose: To determine whether Cherenkov light imaging can visualize radiation therapy in real time during breast radiation therapy. Methods and Materials: An intensified charge-coupled device (CCD) camera was synchronized to the 3.25-μs radiation pulses of the clinical linear accelerator with the intensifier set × 100. Cherenkov images were acquired continuously (2.8 frames/s) during fractionated whole breast irradiation with each frame an accumulation of 100 radiation pulses (approximately 5 monitor units). Results: The first patient images ever created are used to illustrate that Cherenkov emission can be visualized as a video during conditions typical for breast radiation therapy, even with complex treatment plans, mixed energies, and modulated treatment fields. Images were generated correlating to the superficial dose received by the patient and potentially the location of the resulting skin reactions. Major blood vessels are visible in the image, providing the potential to use these as biological landmarks for improved geometric accuracy. The potential for this system to detect radiation therapy misadministrations, which can result from hardware malfunction or patient positioning setup errors during individual fractions, is shown. Conclusions: Cherenkoscopy is a unique method for visualizing surface dose resulting in real-time quality control. We propose that this system could detect radiation therapy errors in everyday clinical practice at a time when these errors can be corrected to result in improved safety and quality of radiation therapy.

  20. Interaction between High-Level and Low-Level Image Analysis for Semantic Video Object Extraction

    Directory of Open Access Journals (Sweden)

    Andrea Cavallaro

    2004-06-01

    Full Text Available The task of extracting a semantic video object is split into two subproblems, namely, object segmentation and region segmentation. Object segmentation relies on a priori assumptions, whereas region segmentation is data-driven and can be solved in an automatic manner. These two subproblems are not mutually independent, and they can benefit from interactions with each other. In this paper, a framework for such interaction is formulated. This representation scheme based on region segmentation and semantic segmentation is compatible with the view that image analysis and scene understanding problems can be decomposed into low-level and high-level tasks. Low-level tasks pertain to region-oriented processing, whereas the high-level tasks are closely related to object-level processing. This approach emulates the human visual system: what one “sees” in a scene depends on the scene itself (region segmentation as well as on the cognitive task (semantic segmentation at hand. The higher-level segmentation results in a partition corresponding to semantic video objects. Semantic video objects do not usually have invariant physical properties and the definition depends on the application. Hence, the definition incorporates complex domain-specific knowledge and is not easy to generalize. For the specific implementation used in this paper, motion is used as a clue to semantic information. In this framework, an automatic algorithm is presented for computing the semantic partition based on color change detection. The change detection strategy is designed to be immune to the sensor noise and local illumination variations. The lower-level segmentation identifies the partition corresponding to perceptually uniform regions. These regions are derived by clustering in an N-dimensional feature space, composed of static as well as dynamic image attributes. We propose an interaction mechanism between the semantic and the region partitions which allows to
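
    The change-detection step can be illustrated with a minimal sketch (not the paper's algorithm): frame differencing with a noise-adaptive threshold, yielding a binary mask of changed pixels from which a semantic partition could be seeded. The robust noise estimate and the factor k are assumptions.

      # Minimal sketch of change detection between two grayscale frames.
      import numpy as np

      def change_mask(frame_t, frame_t1, k=3.0):
          diff = np.abs(frame_t1.astype(np.float64) - frame_t.astype(np.float64))
          # Robust estimate of the noise level (median absolute deviation),
          # so that small illumination fluctuations are not flagged as change.
          sigma = 1.4826 * np.median(np.abs(diff - np.median(diff)))
          return diff > k * max(sigma, 1.0)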

  1. Image ranking in video sequences using pairwise image comparisons and temporal smoothing

    CSIR Research Space (South Africa)

    Burke, Michael

    2016-12-01

    Full Text Available theoretic approaches to novelty detection have been proposed previously [2], but these are typically measure and data dependent. For example, ranking images using entropy is unlikely to flag images of interest to humans, as images with high texture content... its interest value. Novelty detection is relatively well studied and detailed survey papers can be found on the topic [2]. However, information of interest to an end-user not only includes unique observations (novelty), but also observations...

  2. Visual Perception Based Objective Stereo Image Quality Assessment for 3D Video Communication

    Directory of Open Access Journals (Sweden)

    Gangyi Jiang

    2014-04-01

    Full Text Available Stereo image quality assessment is a crucial and challenging issue in 3D video communication. One of the major difficulties is how to weigh the binocular masking effect. In order to establish an assessment model more in line with the human visual system, the Watson model is adopted in this study, which defines the visibility threshold under no distortion as composed of contrast sensitivity, masking effect and error. As a result, we propose an Objective Stereo Image Quality Assessment method (OSIQA), organically combining a new Left-Right view Image Quality Assessment (LR-IQA) metric and a Depth Perception Image Quality Assessment (DP-IQA) metric. The new LR-IQA metric is first given to calculate the changes of perception coefficients in each sub-band, utilizing the Watson model and the human visual system, after wavelet decomposition of the left and right images in a stereo image pair, respectively. Then, a concept of absolute difference map is defined to describe the abstract differential value between the left and right view images, and the DP-IQA metric is presented to measure the structure distortion of the original and distorted abstract difference maps through a luminance function, error sensitivity and a contrast function. Finally, an OSIQA metric is generated by weighted multiplicative fitting of the LR-IQA and DP-IQA metrics. Experimental results show that the proposed method is highly correlated with human visual judgments (Mean Opinion Score); the correlation coefficient and monotonicity exceed 0.92 under five types of distortion: Gaussian blur, Gaussian noise, JP2K compression, JPEG compression and H.264 compression.

  3. The motion analysis of fire video images based on moment features and flicker frequency

    Institute of Scientific and Technical Information of China (English)

    LI Jin; FONG N. K.; CHOW W. K.; WONG L.T.; LU Puyi; XU Dian-guo

    2004-01-01

    In this paper, motion analysis methods based on moment features and flicker frequency features for early fire flame detection from an ordinary CCD video camera are proposed, and, in order to further describe the changes of flame and the disturbance of non-flame phenomena, the average changing pixel number of the first-order moments of consecutive flames is also defined in the moment analysis. The first-order moments of all kinds of flames used in our experiments present irregular flickering, and their average changing pixel numbers of first-order moments are greater than those of fire-like disturbances. The flicker frequency of the flame is extracted and calculated in the spatial domain and is therefore computationally simple and fast. The method of extracting flicker frequency from video images is not affected by the type of combustion material or the distance. In the experiments, we adopted two kinds of flames, i.e., fixed flame and movable flame. Many comparison and disturbance experiments were carried out and verified that the methods can be used as criteria for early fire detection.
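
    A minimal sketch of the two quantities discussed above is given below: the first-order moments (centroid) of the bright flame region are tracked per frame, and the flicker frequency is taken as the dominant peak of the centroid's spectrum. The intensity threshold and frame rate are assumptions, not values from the paper.

      # Minimal sketch: flame centroid per frame and its dominant flicker frequency.
      import numpy as np

      def flame_centroids(frames, threshold=200):
          """frames: (T, H, W) grayscale stack; returns a (T, 2) array of centroids."""
          out = []
          for frame in frames:
              ys, xs = np.nonzero(frame >= threshold)
              out.append((xs.mean(), ys.mean()) if len(xs) else (np.nan, np.nan))
          return np.asarray(out)

      def flicker_frequency(series, fps=25.0):
          """Dominant non-DC frequency (Hz) of one centroid coordinate over time."""
          series = np.nan_to_num(series - np.nanmean(series))
          spectrum = np.abs(np.fft.rfft(series))
          freqs = np.fft.rfftfreq(len(series), d=1.0 / fps)
          return float(freqs[1:][np.argmax(spectrum[1:])])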

  4. Visualization of glucagon secretion from pancreatic α cells by bioluminescence video microscopy: Identification of secretion sites in the intercellular contact regions.

    Science.gov (United States)

    Yokawa, Satoru; Suzuki, Takahiro; Inouye, Satoshi; Inoh, Yoshikazu; Suzuki, Ryo; Kanamori, Takao; Furuno, Tadahide; Hirashima, Naohide

    2017-04-15

    We have visualized glucagon secretion for the first time using video-rate bioluminescence imaging. The fusion protein of proglucagon and Gaussia luciferase (PGCG-GLase) was used as a reporter to detect glucagon secretion and was efficiently expressed in mouse pancreatic α cells (αTC1.6) using a preferred human codon-optimized gene. In the culture medium of the cells expressing PGCG-GLase, luminescence activity determined with a luminometer was increased with low-glucose stimulation and KCl-induced depolarization, as observed for glucagon secretion. From immunochemical analyses, PGCG-GLase stably expressed in clonal αTC1.6 cells was correctly processed and released by secretory granules. Luminescence signals of the secreted PGCG-GLase from the stable cells were visualized by video-rate bioluminescence microscopy. The video images showed an increase in glucagon secretion from clustered cells in response to stimulation by KCl. The secretory events were observed frequently at the intercellular contact regions. Thus, the localization and frequency of glucagon secretion might be regulated by cell-cell adhesion.

  5. Video-mosaicking of in vivo reflectance confocal microscopy images for noninvasive examination of skin lesion (Conference Presentation)

    Science.gov (United States)

    Kose, Kivanc; Gou, Mengran; Yelamos, Oriol; Cordova, Miguel A.; Rossi, Anthony; Nehal, Kishwer S.; Camps, Octavia I.; Dy, Jennifer G.; Brooks, Dana H.; Rajadhyaksha, Milind

    2017-02-01

    In this report we describe a computer vision based pipeline to convert in-vivo reflectance confocal microscopy (RCM) videos collected with a handheld system into large field of view (FOV) mosaics. For many applications such as imaging of hard to access lesions, intraoperative assessment of MOHS margins, or delineation of lesion margins beyond clinical borders, raster scan based mosaicing techniques have clinically significant limitations. In such cases, clinicians often capture RCM videos by freely moving a handheld microscope over the area of interest, but the resulting videos lose large-scale spatial relationships. Videomosaicking is a standard computational imaging technique to register, and stitch together consecutive frames of videos into large FOV high resolution mosaics. However, mosaicing RCM videos collected in-vivo has unique challenges: (i) tissue may deform or warp due to physical contact with the microscope objective lens, (ii) discontinuities or "jumps" between consecutive images and motion blur artifacts may occur, due to manual operation of the microscope, and (iii) optical sectioning and resolution may vary between consecutive images due to scattering and aberrations induced by changes in imaging depth and tissue morphology. We addressed these challenges by adapting or developing new algorithmic methods for videomosaicking, specifically by modeling non-rigid deformations, followed by automatically detecting discontinuities (cut locations) and, finally, applying a data-driven image stitching approach that fully preserves resolution and tissue morphologic detail without imposing arbitrary pre-defined boundaries. We will present example mosaics obtained by clinical imaging of both melanoma and non-melanoma skin cancers. The ability to combine freehand mosaicing for handheld microscopes with preserved cellular resolution will have high impact application in diverse clinical settings, including low-resource healthcare systems.
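
    The registration-and-stitching core of videomosaicking can be sketched with off-the-shelf feature matching and a homography, as below; the paper's additional handling of non-rigid deformation, cut detection and resolution-preserving blending is not reproduced, and the ORB/RANSAC settings are assumptions.

      # Minimal pairwise registration and stitching sketch for consecutive frames.
      import cv2
      import numpy as np

      def register_pair(frame_a, frame_b):
          """Homography mapping grayscale frame_b onto frame_a."""
          orb = cv2.ORB_create(2000)
          kp_a, des_a = orb.detectAndCompute(frame_a, None)
          kp_b, des_b = orb.detectAndCompute(frame_b, None)
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)[:500]
          src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
          dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
          H, _mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
          return H

      def stitch_pair(frame_a, frame_b):
          """Warp frame_b into frame_a's coordinates on an enlarged canvas."""
          H = register_pair(frame_a, frame_b)
          h, w = frame_a.shape[:2]
          canvas = cv2.warpPerspective(frame_b, H, (2 * w, 2 * h))
          canvas[:h, :w] = frame_a
          return canvas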

  6. Towards an automated analysis of video-microscopy images of fungal morphogenesis

    Directory of Open Access Journals (Sweden)

    Diéguez-Uribeondo, Javier

    2005-06-01

    Full Text Available Fungal morphogenesis is an exciting field of cell biology and several mathematical models have been developed to describe it. These models require experimental evidence to be corroborated and, therefore, there is a continuous search for new microscopy and image analysis techniques. In this work, we have used a Canny-edge-detector based technique to automate the generation of hyphal profiles and the calculation of morphogenetic parameters such as diameter, elongation rate and hyphoid fitness. The results show that the data obtained with this technique are similar to published data generated with manual tracing techniques carried out on the same species or genus. Thus, we show that the application of an edge-detector-based technique to hyphal growth represents an efficient and accurate method to study hyphal morphogenesis. This represents the first step towards an automated analysis of video-microscopy images of fungal morphogenesis.
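
    A minimal sketch of the Canny-based step is shown below; the blur, thresholds and the crude width measure are illustrative assumptions, not the parameters of the study.

      # Minimal sketch: Canny edges of a hypha and a crude width estimate.
      import cv2

      def hyphal_profile(gray):
          """Return the edge map and the smaller bounding-box side (pixels) of the
          largest contour, as a rough proxy for hyphal diameter."""
          edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
          contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          if not contours:
              return edges, 0
          _, _, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
          return edges, min(w, h)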

  7. Three-dimensional imaging applications in Earth Sciences using video data acquired from an unmanned aerial vehicle

    Science.gov (United States)

    McLeod, Tara

    For three dimensional (3D) aerial images, unmanned aerial vehicles (UAVs) are cheaper to operate and easier to fly than the typical manned craft mounted with a laser scanner. This project explores the feasibility of using 2D video images acquired with a UAV and transforming them into 3D point clouds. The Aeryon Scout -- a quad-copter micro UAV -- flew two missions: the first at York University Keele campus and the second at the Canadian Wollastonite Mine Property. Neptec's ViDAR software was used to extract 3D information from the 2D video using structure from motion. The resulting point clouds were sparsely populated, yet captured vegetation well. They were used successfully to measure fracture orientation in rock walls. Any improvement in the video resolution would cascade through the processing and improve the overall results.

  8. An adaptive fuzzy filter for coding artifacts removal in video and image

    Institute of Scientific and Technical Information of China (English)

    WU Jing; YE Xiu-qing; GU Wei-kang

    2007-01-01

    This paper proposes a new adaptive post-filtering algorithm to remove coding artifacts in block-based video coder. The proposed method concentrates on blocking and ringing artifacts removal. For de-blocking, the blocking strength is identified to determine the filtering range, and the maximum quantization parameter of the image is used to adapt the 1D fuzzy filter. For de-ringing, besides the edge detection, a complementary ringing detection method is proposed to locate the neglected ringing blocks, and the gradient threshold is adopted to adjust the parameter of 2D fuzzy filter. Experiments are performed on the MPEG-4 sequences. Compared with other methods, the proposed one achieves better detail preservation and artifacts removal performance with lower computational cost.

  9. Advances in EEG: home video telemetry, high frequency oscillations and electrical source imaging.

    Science.gov (United States)

    Patel, Anjla C; Thornton, Rachel C; Mitchell, Tejal N; Michell, Andrew W

    2016-10-01

    Over the last two decades, technological advances in electroencephalography (EEG) have allowed us to extend its clinical utility for the evaluation of patients with epilepsy. This article reviews three main areas in which substantial advances have been made in the diagnosis and pre-surgical planning of patients with epilepsy. Firstly, the development of small portable video-EEG systems have allowed some patients to record their attacks at home, thereby improving diagnosis, with consequent substantial healthcare and economic implications. Secondly, in specialist centres carrying out epilepsy surgery, there has been considerable interest in whether bursts of very high frequency EEG activity can help to determine the regions of the brain likely to be generating the seizures. Identification of these discharges, initially only recorded from intracranial electrodes, may thus allow better surgical planning and improve surgical outcomes. Finally we discuss the contribution of electrical source imaging in the pre-surgical evaluation of patients with focal epilepsy, and its prospects for the future.

  10. Video Image Block-matching Motion Estimation Algorithm Based on Two-step Search

    Institute of Scientific and Technical Information of China (English)

    Wei-qi JIN; Yan CHEN; Ling-xue WANG; Bin LIU; Chong-liang LIU; Ya-zhong SHEN; Gui-qing ZHANG

    2010-01-01

    Aiming at the shortcoming that certain existing block-matching algorithms, such as the full search, three-step search, and diamond search algorithms, usually cannot keep a good balance between high accuracy and low computational complexity, a block-matching motion estimation algorithm based on a two-step search is proposed in this paper. Based on the fact that the gray values of adjacent pixels do not vary quickly, the algorithm employs an interlaced search pattern in the search window to estimate the motion vector of the object block. Simulations and actual experiments demonstrate that the proposed algorithm greatly outperforms the well-known three-step search and diamond search algorithms, whether the motion vector is large or small. Compared with the full search algorithm, the proposed one achieves similar performance but requires much less computation; therefore, the algorithm is well qualified for real-time video image processing.
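
    For reference, a minimal block-matching sketch with a plain full search and the sum of absolute differences (SAD) criterion is given below; the paper's interlaced two-step search is not reproduced, and the block size and search radius are assumptions.

      # Minimal SAD-based block matching for one block (full search).
      import numpy as np

      def block_motion(prev, curr, y, x, block=16, radius=7):
          """Motion vector (dy, dx) of the block at (y, x) in curr, relative to prev."""
          target = curr[y:y+block, x:x+block].astype(np.int32)
          best_sad, best_vec = None, (0, 0)
          for dy in range(-radius, radius + 1):
              for dx in range(-radius, radius + 1):
                  yy, xx = y + dy, x + dx
                  if yy < 0 or xx < 0 or yy + block > prev.shape[0] or xx + block > prev.shape[1]:
                      continue
                  sad = int(np.abs(prev[yy:yy+block, xx:xx+block].astype(np.int32) - target).sum())
                  if best_sad is None or sad < best_sad:
                      best_sad, best_vec = sad, (dy, dx)
          return best_vec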

  11. Bioluminescent system for dynamic imaging of cell and animal behavior

    Energy Technology Data Exchange (ETDEWEB)

    Hara-Miyauchi, Chikako [Department of Physiology, Keio University School of Medicine, Tokyo 160-8582 (Japan); Laboratory for Cell Function Dynamics, Brain Science Institute, RIKEN, Saitama 351-0198 (Japan); Department of Biophysics and Biochemistry, Graduate School of Health Care Sciences, Tokyo Medical and Dental University, Tokyo 113-8510 (Japan); Tsuji, Osahiko [Department of Physiology, Keio University School of Medicine, Tokyo 160-8582 (Japan); Department of Orthopedic Surgery, Keio University School of Medicine, Tokyo 160-8582 (Japan); Hanyu, Aki [Division of Biochemistry, The Cancer Institute of the Japanese Foundation for Cancer Research, Tokyo 135-8550 (Japan); Okada, Seiji [Department of Advanced Medical Initiatives, Faculty of Medical Sciences, Kyushu University, Fukuoka 812-8582 (Japan); Yasuda, Akimasa [Department of Physiology, Keio University School of Medicine, Tokyo 160-8582 (Japan); Department of Orthopedic Surgery, Keio University School of Medicine, Tokyo 160-8582 (Japan); Fukano, Takashi [Laboratory for Cell Function Dynamics, Brain Science Institute, RIKEN, Saitama 351-0198 (Japan); Akazawa, Chihiro [Department of Biophysics and Biochemistry, Graduate School of Health Care Sciences, Tokyo Medical and Dental University, Tokyo 113-8510 (Japan); Nakamura, Masaya [Department of Orthopedic Surgery, Keio University School of Medicine, Tokyo 160-8582 (Japan); Imamura, Takeshi [Department of Molecular Medicine for Pathogenesis, Ehime University Graduate School of Medicine, Toon, Ehime 791-0295 (Japan); Core Research for Evolutional Science and Technology, The Japan Science and Technology Corporation, Tokyo 135-8550 (Japan); Matsuzaki, Yumi [Department of Physiology, Keio University School of Medicine, Tokyo 160-8582 (Japan); Okano, Hirotaka James, E-mail: hjokano@jikei.ac.jp [Department of Physiology, Keio University School of Medicine, Tokyo 160-8582 (Japan); Division of Regenerative Medicine Jikei University School of Medicine, Tokyo 150-8461 (Japan); and others

    2012-03-09

    Highlights: • We combined a yellow variant of GFP and firefly luciferase to make ffLuc-cp156. • ffLuc-cp156 showed improved photon yield in cultured cells and transgenic mice. • ffLuc-cp156 enabled video-rate bioluminescence imaging of freely-moving animals. • ffLuc-cp156 mice enabled tracking real-time drug delivery in conscious animals. -- Abstract: The current utility of bioluminescence imaging is constrained by a low photon yield that limits temporal sensitivity. Here, we describe an imaging method that uses a chemiluminescent/fluorescent protein, ffLuc-cp156, which consists of a yellow variant of Aequorea GFP and firefly luciferase. We report an improvement in photon yield by over three orders of magnitude over current bioluminescent systems. We imaged cellular movement at high resolution including neuronal growth cones and microglial cell protrusions. Transgenic ffLuc-cp156 mice enabled video-rate bioluminescence imaging of freely moving animals, which may provide a reliable assay for drug distribution in behaving animals for pre-clinical studies.

  12. Video Image Analysis of Turbulent Buoyant Jets Using a Novel Laboratory Apparatus

    Science.gov (United States)

    Crone, T. J.; Colgan, R. E.; Ferencevych, P. G.

    2012-12-01

    Turbulent buoyant jets play an important role in the transport of heat and mass in a variety of environmental settings on Earth. Naturally occurring examples include the discharges from high-temperature seafloor hydrothermal vents and from some types of subaerial volcanic eruptions. Anthropogenic examples include flows from industrial smokestacks and the flow from the damaged well after the Deepwater Horizon oil leak of 2010. Motivated by a desire to find non-invasive methods for measuring the volumetric flow rates of turbulent buoyant jets, we have constructed a laboratory apparatus that can generate these types of flows with easily adjustable nozzle velocities and fluid densities. The jet fluid comprises a variable mixture of nitrogen and carbon dioxide gas, which can be injected at any angle with respect to the vertical into the quiescent surrounding air. To make the flow visible we seed the jet fluid with a water fog generated by an array of piezoelectric diaphragms oscillating at ultrasonic frequencies. The system can generate jets that have initial densities ranging from approximately 2-48% greater than the ambient air. We obtain independent estimates of the volumetric flow rates using well-calibrated rotameters, and collect video image sequences for analysis at frame rates up to 120 frames per second using a machine vision camera. We are using this apparatus to investigate several outstanding problems related to the physics of these flows and their analysis using video imagery. First, we are working to better constrain several theoretical parameters that describe the trajectory of these flows when their initial velocities are not parallel to the buoyancy force. The ultimate goal of this effort is to develop well-calibrated methods for establishing volumetric flow rates using trajectory analysis. Second, we are working to refine optical plume velocimetry (OPV), a non-invasive technique for estimating flow rates using temporal cross-correlation of image
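
    The cross-correlation idea behind optical plume velocimetry can be sketched as follows: the rise velocity is estimated from the time lag that maximizes the correlation between intensity time series recorded at two heights a known distance apart. The frame rate and separation are assumptions, and both series must have the same length.

      # Minimal sketch of velocity estimation by temporal cross-correlation.
      import numpy as np

      def rise_velocity(lower_series, upper_series, separation_m, fps=120.0):
          a = lower_series - lower_series.mean()
          b = upper_series - upper_series.mean()
          corr = np.correlate(b, a, mode="full")        # lag of upper relative to lower
          lag_frames = int(np.argmax(corr)) - (len(a) - 1)
          if lag_frames <= 0:
              return float("nan")                       # no upward propagation detected
          return separation_m / (lag_frames / fps)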

  13. Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization.

    Science.gov (United States)

    Kedzierski, Michal; Delis, Paulina

    2016-06-23

    The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery, in which the external orientation for the first image was typical for aerial photogrammetry whereas the external orientation of the second was typical for terrestrial photogrammetry. Starting from the collinearity equations, assuming that the camera tilt angle is equal to 90°, a simplified mathematical model is proposed. The proposed method can be used to determine the X, Y, Z coordinates of points based on a set of collinearity equations of a pair of images. The use of simplified collinearity equations can considerably shorten the processing time of image data from Unmanned Aerial Vehicles (UAVs), especially in low-cost systems. The conducted experiments have shown that it is possible to carry out a complete photogrammetric project of an architectural structure using a camera tilted 85°-90° (φ or ω) and simplified collinearity equations. It is also concluded that there is a correlation between the speed of the UAV and the discrepancy between the established and actual camera tilt angles.

  14. Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization

    Directory of Open Access Journals (Sweden)

    Michal Kedzierski

    2016-06-01

    Full Text Available The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery, in which the external orientation for the first image was typical for aerial photogrammetry whereas the external orientation of the second was typical for terrestrial photogrammetry. Starting from the collinearity equations, assuming that the camera tilt angle is equal to 90°, a simplified mathematical model is proposed. The proposed method can be used to determine the X, Y, Z coordinates of points based on a set of collinearity equations of a pair of images. The use of simplified collinearity equations can considerably shorten the processing time of image data from Unmanned Aerial Vehicles (UAVs), especially in low-cost systems. The conducted experiments have shown that it is possible to carry out a complete photogrammetric project of an architectural structure using a camera tilted 85°–90° (φ or ω) and simplified collinearity equations. It is also concluded that there is a correlation between the speed of the UAV and the discrepancy between the established and actual camera tilt angles.

  15. Analyzing Structure and Function of Vascularization in Engineered Bone Tissue by Video-Rate Intravital Microscopy and 3D Image Processing.

    Science.gov (United States)

    Pang, Yonggang; Tsigkou, Olga; Spencer, Joel A; Lin, Charles P; Neville, Craig; Grottkau, Brian

    2015-10-01

    Vascularization is a key challenge in tissue engineering. Three-dimensional structure and microcirculation are two fundamental parameters for evaluating vascularization. Microscopic techniques with cellular level resolution, fast continuous observation, and robust 3D postimage processing are essential for evaluation, but have not been applied previously because of technical difficulties. In this study, we report novel video-rate confocal microscopy and 3D postimage processing techniques to accomplish this goal. In an immune-deficient mouse model, vascularized bone tissue was successfully engineered using human bone marrow mesenchymal stem cells (hMSCs) and human umbilical vein endothelial cells (HUVECs) in a poly (D,L-lactide-co-glycolide) (PLGA) scaffold. Video-rate (30 FPS) intravital confocal microscopy was applied in vitro and in vivo to visualize the vascular structure in the engineered bone and the microcirculation of the blood cells. Postimage processing was applied to perform 3D image reconstruction, by analyzing microvascular networks and calculating blood cell viscosity. The 3D volume reconstructed images show that the hMSCs served as pericytes stabilizing the microvascular network formed by HUVECs. Using orthogonal imaging reconstruction and transparency adjustment, both the vessel structure and blood cells within the vessel lumen were visualized. Network length, network intersections, and intersection densities were successfully computed using our custom-developed software. Viscosity analysis of the blood cells provided functional evaluation of the microcirculation. These results show that by 8 weeks, the blood vessels in peripheral areas function quite similarly to the host vessels. However, the viscosity drops about fourfold where it is only 0.8 mm away from the host. In summary, we developed novel techniques combining intravital microscopy and 3D image processing to analyze the vascularization in engineered bone. These techniques have broad
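
    Two of the network metrics mentioned above (total length and branch points) can be approximated from a binary vessel mask by skeletonization, as in the minimal sketch below; this is an illustration, not the custom software used in the study, and the pixel size is an assumed calibration.

      # Minimal sketch: skeleton-based vessel network length and branch-point count.
      import numpy as np
      from scipy.ndimage import convolve
      from skimage.morphology import skeletonize

      def network_metrics(vessel_mask, pixel_size_um=1.0):
          skeleton = skeletonize(vessel_mask.astype(bool))
          length_um = skeleton.sum() * pixel_size_um      # crude length estimate
          # A skeleton pixel with three or more skeleton neighbours is a branch point.
          neighbour_sum = convolve(skeleton.astype(np.uint8), np.ones((3, 3)), mode="constant")
          branch_points = int(np.logical_and(skeleton, neighbour_sum >= 4).sum())
          return length_um, branch_points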

  16. Dynamic imaging of cell-free and cell-associated viral capture in mature dendritic cells.

    Science.gov (United States)

    Izquierdo-Useros, Nuria; Esteban, Olga; Rodriguez-Plata, Maria T; Erkizia, Itziar; Prado, Julia G; Blanco, Julià; García-Parajo, Maria F; Martinez-Picado, Javier

    2011-12-01

    Dendritic cells (DCs) capture human immunodeficiency virus (HIV) through a non-fusogenic mechanism that enables viral transmission to CD4(+) T cells, contributing to in vivo viral dissemination. Although previous studies have provided important clues to cell-free viral capture by mature DCs (mDCs), dynamic and kinetic insight into this process is still missing. Here, we used three-dimensional video microscopy and single-particle tracking approaches to dynamically dissect both cell-free and cell-associated viral capture by living mDCs. We show that cell-free virus capture by mDCs operates through three sequential phases: virus binding through specific determinants expressed in the viral particle, polarized or directional movements toward concrete regions of the cell membrane, and virus accumulation in a sac-like structure where trapped viral particles display a hindered diffusive behavior. Moreover, real-time imaging of cell-associated viral transfer to mDCs showed dynamics similar to those exhibited by cell-free virus endocytosis, leading to viral accumulation in compartments. However, cell-associated HIV type 1 transfer to mDCs was the most effective pathway, boosted through enhanced cellular contacts with infected CD4(+) T cells. Our results suggest that in lymphoid tissues, mDC viral uptake could occur by encountering either cell-free or cell-associated virus produced by infected cells, generating the perfect scenario to promote HIV pathogenesis and impact disease progression.

  17. Mapping preictal and ictal haemodynamic networks using video-electroencephalography and functional imaging.

    Science.gov (United States)

    Chaudhary, Umair J; Carmichael, David W; Rodionov, Roman; Thornton, Rachel C; Bartlett, Phillipa; Vulliemoz, Serge; Micallef, Caroline; McEvoy, Andrew W; Diehl, Beate; Walker, Matthew C; Duncan, John S; Lemieux, Louis

    2012-12-01

    Ictal patterns on scalp-electroencephalography are often visible only after propagation, therefore rendering localization of the seizure onset zone challenging. We hypothesized that mapping haemodynamic changes before and during seizures using simultaneous video-electroencephalography and functional imaging will improve the localization of the seizure onset zone. Fifty-five patients with ≥2 refractory focal seizures/day, and who had undergone long-term video-electroencephalography monitoring were included in the study. 'Preictal' (30 s immediately preceding the electrographic seizure onset) and ictal phases, 'ictal-onset', 'ictal-established' and 'late ictal', were defined based on the evolution of the electrographic pattern and clinical semiology. The functional imaging data were analysed using statistical parametric mapping to map ictal phase-related haemodynamic changes consistent across seizures. The resulting haemodynamic maps were overlaid on co-registered anatomical scans, and the spatial concordance with the presumed and invasively defined seizure onset zone was determined. Twenty patients had typical seizures during functional imaging. Seizures were identified on video-electroencephalography in 15 of 20, on electroencephalography alone in two and on video alone in three patients. All patients showed significant ictal-related haemodynamic changes. In the six cases that underwent invasive evaluation, the ictal-onset phase-related maps had a degree of concordance with the presumed seizure onset zone for all patients. The most statistically significant haemodynamic cluster within the presumed seizure onset zone was between 1.1 and 3.5 cm from the invasively defined seizure onset zone, which was resected in two of three patients undergoing surgery (Class I post-surgical outcome) and was not resected in one patient (Class III post-surgical outcome). In the remaining 14 cases, the ictal-onset phase-related maps had a degree of concordance with the presumed

  18. A comprehensive of transforms, Gabor filter and k-means clustering for text detection in images and video

    Directory of Open Access Journals (Sweden)

    V.N. Manjunath Aradhya

    2016-07-01

    Full Text Available This paper presents an efficient approach to multilingual text detection for video indexing. We propose a method for detecting text located in varying and complex backgrounds in images and video. The approach comprises four stages. In the first stage, a combination of the wavelet transform and a Gabor filter is applied: by applying a single-level 2D wavelet decomposition with a Gabor filter, intrinsic features comprising sharpened edges and texture features of the input image are obtained. In the second stage, the resultant Gabor image is classified using the k-means clustering algorithm. In the third stage, morphological operations are performed on the clustered pixels, and a linked-list approach is used to build a true text-line sequence of connected components. In the final stage, the wavelet entropy of the input image is measured, signifying the complexity of unsteady signals corresponding to the position of the text-line sequence of connected components, leading to the determination of the true text region of the input image. The performance of the approach is demonstrated by promising experimental results on 101 video images, the standard ICDAR 2003 Scene Trial Test dataset, the ICDAR 2013 dataset and our own collected South Indian Language dataset.
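
    The first stages of such a pipeline can be sketched as below: a single-level 2D wavelet decomposition, a Gabor filter applied to the approximation band, and k-means clustering of simple per-pixel features into candidate text and non-text regions. The kernel parameters, feature choice and number of clusters are assumptions, and PyWavelets, OpenCV and scikit-learn are assumed to be available.

      # Minimal sketch: wavelet + Gabor features clustered with k-means.
      import cv2
      import numpy as np
      import pywt
      from sklearn.cluster import KMeans

      def text_candidates(gray, n_clusters=3):
          cA, (cH, cV, cD) = pywt.dwt2(gray.astype(np.float64), "haar")
          # Gabor kernel parameters: ksize, sigma, theta, lambd, gamma, psi.
          kernel = cv2.getGaborKernel((15, 15), 3.0, 0.0, 8.0, 0.5, 0.0)
          response = cv2.filter2D(cA.astype(np.float32), -1, kernel)
          features = np.stack([cA.ravel(), np.abs(response).ravel(),
                               np.abs(cH).ravel() + np.abs(cV).ravel()], axis=1)
          labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
          # Assume the cluster with the strongest mean Gabor response contains text.
          text_label = int(np.argmax([features[labels == c, 1].mean() for c in range(n_clusters)]))
          return (labels == text_label).reshape(cA.shape)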

  19. A Review on Video/Image Authentication and Tamper Detection Techniques

    Science.gov (United States)

    Parmar, Zarna; Upadhyay, Saurabh

    2013-02-01

    With innovations and developments in sophisticated video editing technology and the widespread use of video information and services in our society, it is becoming increasingly important to ensure the trustworthiness of video information. Therefore, in surveillance, medical and various other fields, video contents must be protected against attempts to manipulate them. Such malicious alterations could affect decisions based on these videos. Many techniques have been proposed in the literature that assure the authenticity of video information, each in its own way. In this paper, we present a brief survey of video authentication techniques together with their classification. These authentication techniques are generally classified into the following categories: digital signature based techniques, watermark based techniques, and other authentication techniques.

  20. Thermal image analysis of plastic deformation and fracture behavior by a thermo-video measurement system

    Science.gov (United States)

    Ohbuchi, Yoshifumi; Sakamoto, Hidetoshi; Nagatomo, Nobuaki

    2016-12-01

    The visualization of the plastic region and the measurement of its size are necessary and indispensable to evaluate the deformation and fracture behavior of a material. In order to evaluate the plastic deformation and fracture behavior in a structural member with flaws, the authors focused on the surface temperature generated by plastic strain energy. The visualization of the plastic deformation was developed by analyzing the relationship between the extension of the plastic deformation range and the surface temperature distribution, which was obtained by an infrared thermo-video system. Furthermore, FEM elasto-plastic analysis was carried out alongside the experiment, and the effectiveness of this non-contact, thermography-based measurement of the plastic deformation and fracture process was discussed. The evaluation method using an infrared imaging device proposed in this research has a feature that current evaluation methods lack: the heat distribution on the surface of the material is measured over a wide area, without contact, as a 2D image, and at high speed. The new measuring technique proposed here can measure the macroscopic plastic deformation distribution on the material surface widely and precisely as a 2D image, and at high speed, by calculation from the heat generation and heat propagation distributions.

  1. Color, Scale, and Rotation Independent Multiple License Plates Detection in Videos and Still Images

    Directory of Open Access Journals (Sweden)

    Narasimha Reddy Soora

    2016-01-01

    Full Text Available Most of the existing license plate (LP) detection systems have shown significant development in image processing, with restrictions related to environmental conditions and plate variations. With increased mobility and internationalization, there is a need to develop a universal LP detection system that can handle multiple LPs of many countries and any vehicle, in an open environment and all weather conditions, with different plate variations. This paper presents a novel LP detection method using different clustering techniques based on geometrical properties of the LP characters, and proposes a new character extraction method for noisy/missed character components of the LP caused by noise between the LP characters and the LP border. The proposed method detects multiple LPs from an input image or video, with different plate variations, under different environmental and weather conditions, because of the geometrical properties of the set of characters in the LP. The proposed method is tested using the standard media-lab and Application Oriented License Plate (AOLP) benchmark LP recognition databases and achieved success rates of 97.3% and 93.7%, respectively. Results clearly indicate that the proposed approach is comparable to previously published methods, which evaluated their performance on publicly available benchmark LP databases.

  2. NEI You Tube Videos: Amblyopia

    Medline Plus

  4. Analysis and segmentation of images in case of solving problems of detecting and tracing objects on real-time video

    Science.gov (United States)

    Ezhova, Kseniia; Fedorenko, Dmitriy; Chuhlamov, Anton

    2016-04-01

    The article deals with methods of image segmentation based on color space conversion, which allow the most efficient detection of a single color against a complex background and under varying lighting, as well as the detection of objects on a homogeneous background. The results of the analysis of segmentation algorithms of this type demonstrate the possibility of implementing them in software. The implemented algorithm is computationally expensive, which limits its application to video analysis; however, it allows us to solve the problem of analyzing objects in an image when no image dictionary or knowledge base is available, as well as the problem of choosing the optimal frame quantization parameters for video analysis.
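
    A minimal sketch of single-colour detection via colour-space conversion is given below; the HSV bounds (roughly corresponding to green) and the morphological clean-up are illustrative assumptions, not values from the article.

      # Minimal sketch: detect one colour by converting to HSV and thresholding.
      import cv2
      import numpy as np

      def detect_color(frame_bgr, lower_hsv=(35, 80, 80), upper_hsv=(85, 255, 255)):
          hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
          mask = cv2.inRange(hsv, np.array(lower_hsv, np.uint8), np.array(upper_hsv, np.uint8))
          # Morphological opening suppresses isolated noise pixels before tracking.
          kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
          return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)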

  5. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    Directory of Open Access Journals (Sweden)

    Nakamura Satoshi

    2004-01-01

    Full Text Available We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.

  6. A Brief Study of Video Encryption Algorithms

    Directory of Open Access Journals (Sweden)

    Pranali Pasalkar,

    2015-02-01

    Full Text Available Video is a set of images, and video encryption is the encryption of that set of images; thus, video encryption simply hides your video from prying eyes. Video monitoring has always been a concern. Multimedia security is very important for multimedia commerce on the Internet, such as video on demand and real-time video multicast. There are various video encryption algorithms, and all have some kind of weakness. In this paper, a classification of various existing algorithms, with their advantages and disadvantages, is discussed.

  7. Live cell imaging of in vitro human trophoblast syncytialization.

    Science.gov (United States)

    Wang, Rui; Dang, Yan-Li; Zheng, Ru; Li, Yue; Li, Weiwei; Lu, Xiaoyin; Wang, Li-Juan; Zhu, Cheng; Lin, Hai-Yan; Wang, Hongmei

    2014-06-01

    Human trophoblast syncytialization, a process of cell-cell fusion, is one of the most important yet least understood events during placental development. Investigating the fusion process in a placenta in vivo is very challenging given the complexity of this process. Application of primary cultured cytotrophoblast cells isolated from term placentas and BeWo cells derived from human choriocarcinoma formulates a biphasic strategy to study the mechanism of trophoblast cell fusion, as the former can spontaneously fuse to form the multinucleated syncytium and the latter is capable of fusing under treatment with forskolin (FSK). Live-cell imaging is a powerful tool that is widely used to investigate many physiological or pathological processes in various animal models or humans; however, to our knowledge, the mechanism of trophoblast cell fusion has not been reported using live-cell imaging. In this study, a live-cell imaging system was used to delineate the fusion process of primary term cytotrophoblast cells and BeWo cells. By using live staining with Hoechst 33342 or cytoplasmic dyes or by stably transfecting enhanced green fluorescent protein (EGFP) and DsRed2-Nuc reporter plasmids, we observed finger-like protrusions on the cell membranes of fusion partners before fusion and the exchange of cytoplasmic contents during fusion. In summary, this study provides the first video recording of the process of trophoblast syncytialization. Furthermore, the various live-cell imaging systems used in this study will help to yield molecular insights into the syncytialization process during placental development. © 2014 by the Society for the Study of Reproduction, Inc.

  8. All-optical video-image encryption with enforced security level using independent component analysis

    Science.gov (United States)

    Alfalou, A.; Mansour, A.

    2007-10-01

    In the last two decades, wireless communications have been introduced in various applications. However, the transmitted data can be, at any moment, intercepted by unauthorized people. That could explain why data encryption and secure transmission have gained enormous popularity. In order to secure data transmission, we should pay attention to two aspects: transmission rate and encryption security level. In this paper, we address these two aspects by proposing a new video-image transmission scheme. This new system consists of exploiting the high transmission rate of optics and some powerful signal processing tools to secure the transmitted data. The main idea of our approach is to secure transmitted information at two levels: at the classical level by using an adaptation of standard optical techniques, and at a second level (spatial diversity) by using independent transmitters. In the second level, a hacker would need to intercept not only one channel but all of them in order to retrieve information. At the receiver, we can easily apply ICA algorithms to decrypt the received signals and retrieve information.
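
    The receiver-side idea can be sketched with a generic ICA implementation, as below: several linear mixtures of independently transmitted image channels are unmixed with FastICA, recovering the sources up to scale and ordering. This is a statistical illustration under assumed linear mixing, not a model of the optical system described in the paper.

      # Minimal sketch: unmix linearly mixed channels with FastICA.
      import numpy as np
      from sklearn.decomposition import FastICA

      def unmix_channels(mixtures):
          """mixtures: (n_channels, n_pixels) received signals; returns estimated sources."""
          ica = FastICA(n_components=mixtures.shape[0], random_state=0)
          sources = ica.fit_transform(mixtures.T)   # samples are pixels, features are channels
          return sources.T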

  9. Behavior and identification of ephemeral sand dunes at the backshore zone using video images

    Directory of Open Access Journals (Sweden)

    PEDRO V. GUIMARÃES

    2016-01-01

    Full Text Available ABSTRACT The backshore zone is a transitional environment strongly affected by ocean, air and sand movements. On dissipative beaches, the formation of ephemeral dunes over the backshore zone makes a significant contribution to the beach morphodynamics and sediment budget. The aim of this work is to describe a novel method to identify ephemeral dunes in the backshore region and to discuss their morphodynamic behavior. The beach morphology is identified using Argus video imagery, which reveals the behavior of morphologies at Cassino Beach, Rio Grande do Sul, Brasil. Daily images from 2005 to 2007, topographic profiles, meteorological data, and sedimentological parameters were used to determine the frequency and pervasiveness of these features on the backshore. Results indicated that the coastline orientation relative to the dominant NE and E winds and the dissipative morphological beach state favored aeolian sand transport towards the backshore. Prevailing NE winds increase sand transport to the backshore, resulting in the formation of barchan, transverse, and barchanoid-linguoid dunes. Precipitation inhibits aeolian transport and ephemeral dune formation and maintains the existing morphologies during strong SE and SW winds, provided the storm surge is not too high.

  10. Behavior and identification of ephemeral sand dunes at the backshore zone using video images.

    Science.gov (United States)

    Guimarães, Pedro V; Pereira, Pedro S; Calliari, Lauro J; Ellis, Jean T

    2016-09-01

    The backshore zone is a transitional environment strongly affected by ocean, air and sand movements. On dissipative beaches, the formation of ephemeral dunes over the backshore zone makes a significant contribution to the beach morphodynamics and sediment budget. The aim of this work is to describe a novel method to identify ephemeral dunes in the backshore region and to discuss their morphodynamic behavior. The beach morphology is identified using Argus video imagery, which reveals the behavior of morphologies at Cassino Beach, Rio Grande do Sul, Brasil. Daily images from 2005 to 2007, topographic profiles, meteorological data, and sedimentological parameters were used to determine the frequency and pervasiveness of these features on the backshore. Results indicated that the coastline orientation relative to the dominant NE and E winds and the dissipative morphological beach state favored aeolian sand transport towards the backshore. Prevailing NE winds increase sand transport to the backshore, resulting in the formation of barchan, transverse, and barchanoid-linguoid dunes. Precipitation inhibits aeolian transport and ephemeral dune formation and maintains the existing morphologies during strong SE and SW winds, provided the storm surge is not too high.

  11. Comparison of Inter-Observer Variability and Diagnostic Performance of the Fifth Edition of BI-RADS for Breast Ultrasound of Static versus Video Images.

    Science.gov (United States)

    Youk, Ji Hyun; Jung, Inkyung; Yoon, Jung Hyun; Kim, Sung Hun; Kim, You Me; Lee, Eun Hye; Jeong, Sun Hye; Kim, Min Jung

    2016-09-01

    Our aim was to compare the inter-observer variability and diagnostic performance of the Breast Imaging Reporting and Data System (BI-RADS) lexicon for breast ultrasound of static and video images. Ninety-nine breast masses visible on ultrasound examination from 95 women 19-81 y of age at five institutions were enrolled in this study. They were scheduled to undergo biopsy or surgery or had been stable for at least 2 y of ultrasound follow-up after benign biopsy results or typically benign findings. For each mass, representative long- and short-axis static ultrasound images were acquired; real-time long- and short-axis B-mode video images through the mass area were separately saved as cine clips. Each image was reviewed independently by five radiologists who were asked to classify ultrasound features according to the fifth edition of the BI-RADS lexicon. Inter-observer variability was assessed using kappa (κ) statistics. Diagnostic performance on static and video images was compared using the area under the receiver operating characteristic curve. No significant difference was found in κ values between static and video images for all descriptors, although κ values of video images were higher than those of static images for shape, orientation, margin and calcifications. After receiver operating characteristic curve analysis, the video images (0.83, range: 0.77-0.87) had higher areas under the curve than the static images (0.80, range: 0.75-0.83; p = 0.08). Inter-observer variability and diagnostic performance of video images was similar to that of static images on breast ultrasonography according to the new edition of BI-RADS.

  12. Extraction of Benthic Cover Information from Video Tows and Photographs Using Object-Based Image Analysis

    Science.gov (United States)

    Estomata, M. T. L.; Blanco, A. C.; Nadaoka, K.; Tomoling, E. C. M.

    2012-07-01

    Mapping benthic cover in deep waters accounts for only a very small proportion of studies in this field. The majority of benthic cover mapping makes use of satellite images, and classification is usually carried out only for shallow waters. To map the seafloor in optically deep waters, underwater videos and photos are needed. Some researchers have applied this method to underwater photos, but used different classification methods, such as neural networks and rapid classification via down-sampling. In this study, we attempted to use accurate bathymetric data obtained with a multi-beam echo sounder (MBES) as complementary data to the underwater photographs. Due to the absence of a motion reference unit (MRU), which applies corrections to the data gathered by the MBES, the accuracy of the depth data was compromised. Nevertheless, even without accurate bathymetric data, object-based image analysis (OBIA), which used rule sets based on information such as shape, size, area, relative distance, and spectral information, was still applied. Compared to pixel-based classifications, OBIA was able to classify more specific benthic cover types beyond coral and sand, such as rubble and fish. Through the use of rule sets on area (less than or equal to 700 pixels for fish and between 700 and 10,000 pixels for rubble), as well as standard deviation values to distinguish texture, fish and rubble were identified. OBIA produced benthic cover maps with a higher overall accuracy, 93.78±0.85%, than pixel-based methods, which had an average accuracy of only 87.30±6.11% (p-value = 0.0001, α = 0.05).
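
    As a rough illustration of the rule-set logic described above, the sketch below filters segmented image objects by area and a texture proxy. The area thresholds follow the figures quoted in the abstract, but the Segment attributes, the texture cut-off value and the function name are hypothetical illustrations, not the rule set implemented in the study.

        from dataclasses import dataclass

        @dataclass
        class Segment:
            area_px: int      # object size, in pixels, from the segmentation step
            std_dev: float    # spectral standard deviation, used here as a texture proxy

        def classify_segment(seg: Segment, texture_cutoff: float = 20.0) -> str:
            """Assign a benthic cover label to one image object using simple area/texture rules."""
            if seg.area_px <= 700 and seg.std_dev > texture_cutoff:
                return "fish"
            if 700 < seg.area_px <= 10_000 and seg.std_dev > texture_cutoff:
                return "rubble"
            # remaining objects would fall through to spectral rules (e.g. coral vs. sand)
            return "other"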

  13. EXTRACTION OF BENTHIC COVER INFORMATION FROM VIDEO TOWS AND PHOTOGRAPHS USING OBJECT-BASED IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    M. T. L. Estomata

    2012-07-01

    Full Text Available Mapping benthic cover in deep waters accounts for only a very small proportion of studies in this field. The majority of benthic cover mapping makes use of satellite images, and classification is usually carried out only for shallow waters. To map the seafloor in optically deep waters, underwater videos and photos are needed. Some researchers have applied this method to underwater photos, but used different classification methods, such as neural networks and rapid classification via down-sampling. In this study, we attempted to use accurate bathymetric data obtained with a multi-beam echo sounder (MBES) as complementary data to the underwater photographs. Due to the absence of a motion reference unit (MRU), which applies corrections to the data gathered by the MBES, the accuracy of the depth data was compromised. Nevertheless, even without accurate bathymetric data, object-based image analysis (OBIA), which used rule sets based on information such as shape, size, area, relative distance, and spectral information, was still applied. Compared to pixel-based classifications, OBIA was able to classify more specific benthic cover types beyond coral and sand, such as rubble and fish. Through the use of rule sets on area (less than or equal to 700 pixels for fish and between 700 and 10,000 pixels for rubble), as well as standard deviation values to distinguish texture, fish and rubble were identified. OBIA produced benthic cover maps with a higher overall accuracy, 93.78±0.85%, than pixel-based methods, which had an average accuracy of only 87.30±6.11% (p-value = 0.0001, α = 0.05).

  14. Instruction document on multimedia formats:optimal accessibility of audio, video and images

    NARCIS (Netherlands)

    Folmer, E.J.A.; Wams, N.; Knubben, B.

    2010-01-01

    We increasingly express ourselves through multimedia, and Internet traffic already consists for the most part of audio and video. A variety of formats are used for this purpose, often without due consideration. This document provides background for the format choices that can be made to make video and audio accessible.

  15. Enhancing Perceived Quality of Compressed Images and Video with Anisotropic Diffusion and Fuzzy Filtering

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Korhonen, Jari; Forchhammer, Søren

    2013-01-01

    video sequences. For the video sequences, different filters are applied to luminance (Y) and chrominance (U,V) components. The performance of the proposed method has been compared against several other methods by using different objective quality metrics and a subjective comparison study. Both objective...

  16. AFM imaging of fenestrated liver sinusoidal endothelial cells.

    Science.gov (United States)

    Braet, F; Wisse, E

    2012-12-01

    Each microscope, with its dedicated sample preparation technique, provides the investigator with a specific set of data, giving an instrument-determined (or restricted) insight into the structure and function of a tissue, a cell or parts thereof. Stepwise improvements in existing techniques, both instrumental and preparative, can sometimes cross barriers in resolution and image quality. Of course, investigators get really excited when completely new principles of microscopy and imaging are offered in promising new instruments, such as the AFM. The present paper summarizes a first phase of studies on the thin endothelial cells of the liver. It describes the preparation-dependent differences in AFM imaging of these cells after isolation. A special point of interest was the dynamics of the fenestrae, which are thought to filter lipid-carrying particles during their transport from the blood to the liver cells. It also describes the attempts to image the details of these cells when alive in cell cultures, and explains which physical conditions, mainly attributed to the scanning stylus, are thought to play a part in the limitations of imaging these cells. The AFM also offers promising specifications to those interested in cell surface details, such as membrane-associated structures, receptors, coated pits, cellular junctions and molecular aggregations or domains. The AFM also offers nano-manipulation possibilities, strength and elasticity measurements, force interactions, affinity measurements, stiffness and other physical aspects of membranes and the cytoskeleton. The potential for molecular approaches is there. New developments in cantilever construction and computer software promise to bring real-time video imaging to the AFM. Home-made accessories for the first generation of AFMs are now commodities in commercial instruments and make the life of the AFM microscopist easier. Also, the combination of different microscopies, such as AFM and TEM, or AFM and SEM, find their way to the

  17. Short term exposure to attractive and muscular singers in music video clips negatively affects men's body image and mood.

    Science.gov (United States)

    Mulgrew, K E; Volcevski-Kostas, D

    2012-09-01

    Viewing idealized images has been shown to reduce men's body satisfaction; however no research has examined the impact of music video clips. This was the first study to examine the effects of exposure to muscular images in music clips on men's body image, mood and cognitions. Ninety men viewed 5 min of clips containing scenery, muscular or average-looking singers, and completed pre- and posttest measures of mood and body image. Appearance schema activation was also measured. Men exposed to the muscular clips showed poorer posttest levels of anger, body and muscle tone satisfaction compared to men exposed to the scenery or average clips. No evidence of schema activation was found, although potential problems with the measure are noted. These preliminary findings suggest that even short term exposure to music clips can produce negative effects on men's body image and mood.

  18. Imaging of sickle cell disease

    Energy Technology Data Exchange (ETDEWEB)

    Crowley, J.J. [Department of Pediatric Imaging, Children's Hospital of Michigan, Detroit (United States)]; Sarnaik, S. [Sickle Cell Center, Children's Hospital of Michigan, Detroit (United States)]

    1999-09-01

    Sickle cell disease is an important health care issue in the United States and in certain areas in Africa, the Middle East and India. Although a great deal of progress has been made in understanding the disease at the molecular and pathophysiologic level, specific treatment which is safe and accessible for most patients is still elusive. Going into the next millennium, the management of this disease is still largely dependent on early diagnosis and the treatment of complications with supportive care. Thus, diagnosis and evaluation of the complications of the disease are crucial in directing clinical care at the bedside. Modern imaging modalities have greatly improved, and their application in the patient with the sickling disorders has enhanced the decision - making process. The purpose of this article is to review the clinical aspects of common complications of the disease and to discuss imaging approaches which are useful in their evaluation. (orig.) With 15 figs., 102 refs.

  19. The Automatically Triggered Video or Imaging Station (ATVIS): An Inexpensive Way to Catch Geomorphic Events on Camera

    Science.gov (United States)

    Wickert, A. D.

    2010-12-01

    To understand how single events can affect landscape change, we must catch the landscape in the act. Direct observations are rare and often dangerous. While video is a good alternative, commercially available video systems for field installation cost $11,000, weigh ~100 pounds (45 kg), and shoot 640x480 pixel video at 4 frames per second. This is the same resolution as a cheap point-and-shoot camera, with a frame rate that is nearly an order of magnitude worse. To overcome these limitations of resolution, cost, and portability, I designed and built a new observation station. This system, called ATVIS (Automatically Triggered Video or Imaging Station), costs $450-$500 and weighs about 15 pounds. It can take roughly 3 hours of 1280x720 pixel video, 6.5 hours of 640x480 video, or 98,000 1600x1200 pixel photos (one photo every 7 seconds for 8 days). The design calls for a simple Canon point-and-shoot camera fitted with custom firmware that allows 5V pulses through its USB cable to trigger it to take a picture or to initiate or stop video recording. These pulses are provided by a programmable microcontroller that can take input from either sensors or a data logger. The design is easily modifiable to a variety of camera and sensor types, and can also be used for continuous time-lapse imagery. We currently have prototypes set up at a gully near West Bijou Creek on the Colorado high plains and at tributaries to Marble Canyon in northern Arizona. Hopefully, a relatively inexpensive and portable system such as this will allow geomorphologists to supplement sensor networks with photo or video monitoring and allow them to see—and better quantify—the fantastic array of processes that modify landscapes as they unfold. Camera station set up at Badger Canyon, Arizona. Inset: view into box. Clockwise from bottom right: camera, microcontroller (blue), DC converter (red), solar charge controller, 12V battery. Materials and installation assistance courtesy of Ron Griffiths and the

  20. Use of ImageJ to recover information from individual cells in a G protein-coupled receptor assay.

    Science.gov (United States)

    Trabuco, João R C; Martins, Sofia Aires M; Prazeres, Duarte Miguel F

    2015-01-01

    Live-cell assays used in GPCR research often rely on fluorescence techniques that generate large amounts of raw image data. Consequently, the capacity to extract useful information from image and video data accurately and in a timely manner has become increasingly important. ImageJ is an open-source program that provides powerful tools with a simple interface designed to fit the image-analysis needs of most researchers. In this chapter, ImageJ routines to extract information from individual cells in a calcium GPCR assay are described. In these routines, individual cells in the same image/video data can be separated using either a progressive threshold or a local threshold method. Both methods can be optimized for either the maximum number of selections or the maximum selected area, resulting in conceptually distinct selections.
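
    A minimal Python sketch of the same idea, i.e. separating cells with either a single global threshold or a local (neighbourhood) threshold and then reporting per-cell mean intensity, is given below. It uses scikit-image rather than the chapter's ImageJ routines, and the function name, block size and small-object cut-off are illustrative assumptions.

        import numpy as np
        from skimage import filters, measure

        def per_cell_intensity(frame: np.ndarray, local: bool = False, block_size: int = 51):
            """Segment cells in one fluorescence frame and return {label: mean intensity}."""
            if local:
                # neighbourhood threshold: tolerates uneven illumination across the field
                mask = frame > filters.threshold_local(frame, block_size)
            else:
                # single global Otsu threshold for the whole frame
                mask = frame > filters.threshold_otsu(frame)
            labels = measure.label(mask)
            regions = measure.regionprops(labels, intensity_image=frame)
            return {r.label: r.mean_intensity for r in regions if r.area > 20}  # drop tiny specks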

  1. Estimating age ratios and size of pacific walrus herds on coastal haulouts using video imaging.

    Directory of Open Access Journals (Sweden)

    Daniel H Monson

    Full Text Available During Arctic summers, sea ice provides resting habitat for Pacific walruses as it drifts over foraging areas in the eastern Chukchi Sea. Climate-driven reductions in sea ice have recently created ice-free conditions in the Chukchi Sea by late summer causing walruses to rest at coastal haulouts along the Chukotka and Alaska coasts, which provides an opportunity to study walruses at relatively accessible locations. Walrus age can be determined from the ratio of tusk length to snout dimensions. We evaluated use of images obtained from a gyro-stabilized video system mounted on a helicopter flying at high altitudes (to avoid disturbance) to classify the sex and age of walruses hauled out on Alaska beaches in 2010-2011. We were able to classify 95% of randomly selected individuals to either an 8- or 3-category age class, and we found measurement-based age classifications were more repeatable than visual classifications when using images presenting the correct head profile. Herd density at coastal haulouts averaged 0.88 walruses/m(2) (std. err. = 0.02), herd size ranged from 8,300 to 19,400 (CV 0.03-0.06) and we documented ∼30,000 animals along ∼1 km of beach in 2011. Within the herds, dependent walruses (0-2 yr-olds) tended to be located closer to water, and this tendency became more pronounced as the herd spent more time on the beach. Therefore, unbiased estimation of herd age-ratios will require a sampling design that allows for spatial and temporal structuring. In addition, randomly sampling walruses available at the edge of the herd for other purposes (e.g., tagging, biopsying) will not sample walruses with an age structure representative of the herd. Sea ice losses are projected to continue, and population age structure data collected with aerial videography at coastal haulouts may provide demographic information vital to ongoing efforts to understand effects of climate change on this species.

  2. Estimating age ratios and size of pacific walrus herds on coastal haulouts using video imaging.

    Science.gov (United States)

    Monson, Daniel H; Udevitz, Mark S; Jay, Chadwick V

    2013-01-01

    During Arctic summers, sea ice provides resting habitat for Pacific walruses as it drifts over foraging areas in the eastern Chukchi Sea. Climate-driven reductions in sea ice have recently created ice-free conditions in the Chukchi Sea by late summer causing walruses to rest at coastal haulouts along the Chukotka and Alaska coasts, which provides an opportunity to study walruses at relatively accessible locations. Walrus age can be determined from the ratio of tusk length to snout dimensions. We evaluated use of images obtained from a gyro-stabilized video system mounted on a helicopter flying at high altitudes (to avoid disturbance) to classify the sex and age of walruses hauled out on Alaska beaches in 2010-2011. We were able to classify 95% of randomly selected individuals to either an 8- or 3-category age class, and we found measurement-based age classifications were more repeatable than visual classifications when using images presenting the correct head profile. Herd density at coastal haulouts averaged 0.88 walruses/m(2) (std. err. = 0.02), herd size ranged from 8,300 to 19,400 (CV 0.03-0.06) and we documented ∼30,000 animals along ∼1 km of beach in 2011. Within the herds, dependent walruses (0-2 yr-olds) tended to be located closer to water, and this tendency became more pronounced as the herd spent more time on the beach. Therefore, unbiased estimation of herd age-ratios will require a sampling design that allows for spatial and temporal structuring. In addition, randomly sampling walruses available at the edge of the herd for other purposes (e.g., tagging, biopsying) will not sample walruses with an age structure representative of the herd. Sea ice losses are projected to continue, and population age structure data collected with aerial videography at coastal haulouts may provide demographic information vital to ongoing efforts to understand effects of climate change on this species.

  3. Interventional video tomography

    Science.gov (United States)

    Truppe, Michael J.; Pongracz, Ferenc; Ploder, Oliver; Wagner, Arne; Ewers, Rolf

    1995-05-01

    Interventional Video Tomography (IVT) is a new imaging modality for Image Directed Surgery to visualize in real-time intraoperatively the spatial position of surgical instruments relative to the patient's anatomy. The video imaging detector is based on a special camera equipped with an optical viewing and lighting system and electronic 3D sensors. When combined with an endoscope it is used for examining the inside of cavities or hollow organs of the body from many different angles. The surface topography of objects is reconstructed from a sequence of monocular video or endoscopic images. To increase accuracy and speed of the reconstruction the relative movement between objects and endoscope is continuously tracked by electronic sensors. The IVT image sequence represents a 4D data set in stereotactic space and contains image, surface topography and motion data. In ENT surgery an IVT image sequence of the planned and so far accessible surgical path is acquired prior to surgery. To simulate the surgical procedure the cross sectional imaging data is superimposed with the digitally stored IVT image sequence. During surgery the video sequence component of the IVT simulation is substituted by the live video source. The IVT technology makes obsolete the use of 3D digitizing probes for the patient image coordinate transformation. The image fusion of medical imaging data with live video sources is the first practical use of augmented reality in medicine. During surgery a head-up display is used to overlay real-time reformatted cross sectional imaging data with the live video image.

  4. Video-rate processing in tomographic phase microscopy of biological cells using CUDA.

    Science.gov (United States)

    Dardikman, Gili; Habaza, Mor; Waller, Laura; Shaked, Natan T

    2016-05-30

    We suggest a new implementation for rapid reconstruction of three-dimensional (3-D) refractive index (RI) maps of biological cells acquired by tomographic phase microscopy (TPM). The TPM computational reconstruction process is extremely time consuming, making the analysis of large data sets unreasonably slow and the real-time 3-D visualization of the results impossible. Our implementation uses new phase extraction, phase unwrapping and Fourier slice algorithms, suitable for efficient CPU or GPU implementations. The experimental setup includes an external off-axis interferometric module connected to an inverted microscope illuminated coherently. We used single cell rotation by micro-manipulation to obtain interferometric projections from 73 viewing angles over a 180° angular range. Our parallel algorithms were implemented using Nvidia's CUDA C platform, running on Nvidia's Tesla K20c GPU. This implementation yields, for the first time to our knowledge, a 3-D reconstruction rate higher than video rate of 25 frames per second for 256 × 256-pixel interferograms with 73 different projection angles (64 × 64 × 64 output). This allows us to calculate additional cellular parameters, while still processing faster than video rate. This technique is expected to find uses for real-time 3-D cell visualization and processing, while yielding fast feedback for medical diagnosis and cell sorting.
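
    The phase-extraction step for a single off-axis interferogram can be sketched with standard Fourier-domain demodulation, as below. This is a generic CPU illustration in NumPy rather than the authors' GPU kernels, and the carrier location and crop-window size are assumptions that depend on the optical setup.

        import numpy as np

        def extract_wrapped_phase(interferogram: np.ndarray, carrier_shift: tuple) -> np.ndarray:
            """Recover the wrapped phase map from one off-axis interferogram.

            carrier_shift is the (row, col) offset of the +1 diffraction order from the
            spectrum centre; the order is cropped, re-centred and inverse transformed.
            """
            spectrum = np.fft.fftshift(np.fft.fft2(interferogram))
            rows, cols = interferogram.shape
            r0 = rows // 2 + carrier_shift[0]
            c0 = cols // 2 + carrier_shift[1]
            half = min(rows, cols) // 8                      # crop-window half-size (assumed)
            order1 = spectrum[r0 - half:r0 + half, c0 - half:c0 + half]
            field = np.fft.ifft2(np.fft.ifftshift(order1))   # complex object field
            return np.angle(field)                           # wrapped phase in radians

    In practice, a 2-D phase-unwrapping step (e.g. skimage.restoration.unwrap_phase) and a Fourier-slice or filtered back-projection step over the 73 angular projections would follow before the 3-D refractive index map is obtained.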

  5. Live cell imaging in Drosophila melanogaster.

    Science.gov (United States)

    Parton, Richard M; Vallés, Ana Maria; Dobbie, Ian M; Davis, Ilan

    2010-04-01

    Although many of the techniques of live cell imaging in Drosophila melanogaster are also used by the greater community of cell biologists working on other model systems, studying living fly tissues presents unique difficulties with regard to keeping the cells alive, introducing fluorescent probes, and imaging through thick, hazy cytoplasm. This article outlines the major tissue types amenable to study by time-lapse cinematography and different methods for keeping the cells alive. It describes various imaging and associated techniques best suited to following changes in the distribution of fluorescently labeled molecules in real time in these tissues. Imaging, in general, is a rapidly developing discipline, and recent advances in imaging technology are able to greatly extend what can be achieved with live cell imaging of Drosophila tissues. As far as possible, this article includes the latest technical developments and discusses likely future developments in imaging methods that could have an impact on research using Drosophila.

  6. Peri-operative imaging of cancer margins with reflectance confocal microscopy during Mohs micrographic surgery: feasibility of a video-mosaicing algorithm

    Science.gov (United States)

    Flores, Eileen; Yelamos, Oriol; Cordova, Miguel; Kose, Kivanc; Phillips, William; Rossi, Anthony; Nehal, Kishwer; Rajadhyaksha, Milind

    2017-02-01

    Reflectance confocal microscopy (RCM) imaging shows promise for guiding surgical treatment of skin cancers. Recent technological advancements such as the introduction of the handheld version of the reflectance confocal microscope, video acquisition and video-mosaicing have improved RCM as an emerging tool to evaluate cancer margins during routine surgical skin procedures such as Mohs micrographic surgery (MMS). Detection of residual non-melanoma skin cancer (NMSC) tumor during MMS is feasible, as demonstrated by the introduction of real-time perioperative imaging on patients in the surgical setting. Our study is currently testing the feasibility of a new mosaicing algorithm for perioperative RCM imaging of NMSC cancer margins on patients during MMS. We report progress toward imaging and image analysis on forty-five patients, who presented for MMS at the MSKCC Dermatology service. The first 10 patients were used as a training set to establish an RCM imaging algorithm, which was implemented on the remaining test set of 35 patients. RCM imaging, using 35% AlCl3 for nuclear contrast, was performed pre- and intra-operatively with the Vivascope 3000 (Caliber ID). Imaging was performed in quadrants in the wound, to simulate the Mohs surgeon's examination of pathology. Videos were taken at the epidermal and deep dermal margins. Our Mohs surgeons assessed all videos and video-mosaics for quality and correlation to histology. Overall, our RCM video-mosaicing algorithm is feasible. RCM videos and video-mosaics of the epidermal and dermal margins were found to be of clinically acceptable quality. Assessment of cancer margins was affected by type of NMSC, size and location. Among the test set of 35 patients, 83% showed acceptable imaging quality, resolution and contrast. Visualization of nuclear and cellular morphology of residual BCC/SCC tumor and normal skin features could be detected in the peripheral and deep dermal margins. We observed correlation between the RCM videos/video

  7. A framework for the recognition of high-level surgical tasks from video images for cataract surgeries.

    Science.gov (United States)

    Lalys, F; Riffaud, L; Bouget, D; Jannin, P

    2012-04-01

    The need for a better integration of the new generation of computer-assisted surgical systems has recently been emphasized. One necessity for achieving this objective is to retrieve data from the operating room (OR) with different sensors, and then to derive models from these data. Recently, the use of videos from cameras in the OR has demonstrated its efficiency. In this paper, we propose a framework to assist in the development of systems for the automatic recognition of high-level surgical tasks using microscope video analysis. We validated its use on cataract procedures. The idea is to combine state-of-the-art computer vision techniques with time series analysis. The first step of the framework consisted in the definition of several visual cues for extracting semantic information, therefore characterizing each frame of the video. Five different image-based classifiers were therefore implemented. A step of pupil segmentation was also applied for dedicated visual cue detection. Time series classification algorithms were then applied to model time-varying data; dynamic time warping and hidden Markov models were tested. This association combined the advantages of all methods for a better understanding of the problem. The framework was finally validated through various studies. Six binary visual cues were chosen along with 12 phases to detect, obtaining accuracies of 94%.
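
    Dynamic time warping, one of the two time-series techniques mentioned above, can be written compactly as a dynamic program over a cost matrix. The sketch below is a textbook O(n·m) implementation for 1-D feature sequences, not the authors' framework; multidimensional visual-cue vectors would only change the per-frame cost function.

        import numpy as np

        def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
            """Dynamic time warping distance between two 1-D feature sequences."""
            n, m = len(a), len(b)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = abs(a[i - 1] - b[j - 1])             # per-frame dissimilarity
                    cost[i, j] = d + min(cost[i - 1, j],     # insertion
                                         cost[i, j - 1],     # deletion
                                         cost[i - 1, j - 1]) # match
            return float(cost[n, m])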

  8. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.
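
    A toy single-stream network of the kind used for such image feature extraction is sketched below in PyTorch. It is not the architecture proposed in the paper; in particular, the visible-light and thermal inputs would each need their own stream with some form of feature- or score-level fusion, and the input size, layer widths and class count here are assumptions.

        import torch
        import torch.nn as nn

        class TinyBodyCNN(nn.Module):
            """Minimal CNN mapping a 64x64 single-channel body crop to 2 class scores."""
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Linear(32 * 16 * 16, 2)   # 64x64 input -> 16x16 feature map

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                return self.classifier(self.features(x).flatten(1))

        # e.g. scores = TinyBodyCNN()(torch.randn(8, 1, 64, 64))  # batch of 8 body crops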

  9. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  10. Determination of Bingham Rheological Parameters of SCC using On-line Video Image Analysis of Automatic Slump Flow Testing

    DEFF Research Database (Denmark)

    Thrane, Lars Nyholm; Pade, Claus

    2005-01-01

    A “touch one button” prototype system for estimating the Bingham rheological parameters of SCC has been developed. Video image analysis is used to obtain a series of corresponding values of concrete spread versus time during an automatic slump flow test. The spread-versus-time curve is subsequently used to estimate the Bingham rheological parameters by a least-squares search against a database. It takes less than 120 seconds from the start of the slump flow test until the SCC’s Bingham rheological parameters appear on the system’s PC.
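
    The least-squares search into a database can be pictured as a nearest-neighbour lookup over pre-computed spread-versus-time curves, as in the sketch below. The data layout and sampling grid are assumptions; the prototype's actual database and search are not described in the record.

        import numpy as np

        def estimate_bingham(measured_spread: np.ndarray, database: list) -> tuple:
            """Return (yield stress, plastic viscosity) of the database curve closest, in the
            least-squares sense, to the measured spread-versus-time curve.

            database: list of (yield_stress_Pa, plastic_viscosity_Pas, spread_curve) entries,
            with every spread_curve sampled at the same time points as measured_spread.
            """
            best = min(database,
                       key=lambda rec: float(np.sum((rec[2] - measured_spread) ** 2)))
            return best[0], best[1]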

  11. In situ calibration of the foil detector for an infrared imaging video bolometer using a carbon evaporation technique

    Science.gov (United States)

    Mukai, K.; Peterson, B. J.; Takayama, S.; Sano, R.

    2016-11-01

    The InfraRed imaging Video Bolometer (IRVB) is a useful diagnostic for the multi-dimensional measurement of plasma radiation profiles. For the application of IRVB measurement to the neutron environment in fusion plasma devices such as the Large Helical Device (LHD), in situ calibration of the thermal characteristics of the foil detector is required. Laser irradiation tests of sample foils show that the reproducibility and uniformity of the carbon coating for the foil were improved using a vacuum evaporation method. Also, the principle of the in situ calibration system was justified.

  12. Optimal JPWL Forward Error Correction Rate Allocation for Robust JPEG 2000 Images and Video Streaming over Mobile Ad Hoc Networks

    Science.gov (United States)

    Agueh, Max; Diouris, Jean-François; Diop, Magaye; Devaux, François-Olivier; De Vleeschouwer, Christophe; Macq, Benoit

    2008-12-01

    Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for a robust streaming of images and videos over MANET. The packet-based proposed scheme has a low complexity and is compliant to JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application; and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.

  13. Optimal JPWL Forward Error Correction Rate Allocation for Robust JPEG 2000 Images and Video Streaming over Mobile Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Benoit Macq

    2008-07-01

    Full Text Available Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for a robust streaming of images and videos over MANET. The packet-based proposed scheme has a low complexity and is compliant to JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application; and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.

  14. Real time three-dimensional space video rate sensors for millimeter waves imaging based very inexpensive plasma LED lamps

    Science.gov (United States)

    Levanon, Assaf; Yitzhaky, Yitzhak; Kopeika, Natan S.; Rozban, Daniel; Abramovich, Amir

    2014-10-01

    In recent years, much effort has been invested in developing inexpensive but sensitive Millimeter Wave (MMW) detectors that can be used in focal plane arrays (FPAs) in order to implement real-time MMW imaging. Real-time MMW imaging systems are required for many varied applications in fields such as homeland security, medicine, communications, military products and space technology, mainly because this radiation has high penetration and good navigability through dust storms, fog, heavy rain, dielectric materials, biological tissue, and diverse other materials. Moreover, the atmospheric attenuation in this range of the spectrum is relatively low, and the scattering is also low compared to NIR and VIS. The lack of inexpensive room-temperature imaging systems makes it difficult to provide a suitable MMW system for many of the above applications. In the last few years we have advanced the research and development of sensors using very inexpensive (30-50 cents) Glow Discharge Detector (GDD) plasma indicator lamps as MMW detectors. This paper presents three kinds of GDD-lamp-based Focal Plane Array (FPA) sensors. The three cameras differ in the number of detectors, the scanning operation, and the detection method. The 1st and 2nd generations are an 8 × 8 pixel array and an 18 × 2 mono-rail scanner array, respectively; both are for direct detection and limited to fixed imaging. The last sensor is a multiplexed 16 × 16 GDD FPA. It permits real-time video-rate imaging at 30 frames/sec and comprehensive 3D MMW imaging. The principle of detection in this sensor is a frequency-modulated continuous-wave (FMCW) system in which each of the 16 GDD pixel lines is sampled simultaneously. Direct detection is also possible and can be done with a friendly user interface. This FPA sensor is built from 256 commercial GDD lamps (3 mm diameter, International Light, Inc., Peabody, MA, model 527 Ne indicator lamps) as pixel detectors. All three sensors are fully supported

  15. Social Evaluations of Stereotypic Images in Video Games: Unfair, Legitimate, or "Just Entertainment"?

    Science.gov (United States)

    Brenick, Alaina; Henning, Alexandra; Killen, Melanie; O'Connor, Alexander; Collins, Michael

    2007-01-01

    The aim of this study is to assess late adolescents' evaluations of and reasoning about gender stereotypes in video games. Female (n = 46) and male (n = 41) students, predominantly European American, with a mean age 19 years, are interviewed about their knowledge of game usage, awareness and evaluation of stereotypes, beliefs about the influences…

  16. The Impact of Video Compression on Remote Cardiac Pulse Measurement Using Imaging Photoplethysmography

    Science.gov (United States)

    2017-05-30

    samples used to represent chromaticity, where the BVP predominantly resides [17]. Furthermore, the BVP is an imperceptible color change that often... MATLAB (R2016b on Windows 10) due to the popularity of the MATLAB environment for data and video processing in this area of research. The resulting

  17. The effects of video compression on acceptability of images for monitoring life sciences' experiments

    Science.gov (United States)

    Haines, Richard F.; Chuang, Sherry L.

    1993-01-01

    Current plans indicate that there will be a large number of life science experiments carried out during the thirty-year-long mission of the Biological Flight Research Laboratory (BFRL) on board Space Station Freedom (SSF). Non-human life science experiments will be performed in the BFRL. Two distinct types of activities have already been identified for this facility: (1) collect, store, distribute, analyze and manage engineering and science data from the Habitats, Glovebox and Centrifuge, and (2) perform a broad range of remote science activities in the Glovebox and Habitat chambers in conjunction with the remotely located principal investigator (PI). These activities require extensive video coverage, viewing and/or recording and distribution to video displays on board SSF and to the ground. This paper concentrates mainly on the second type of activity. Each of the two BFRL habitat racks is designed to be configurable for either six rodent habitats per rack, four plant habitats per rack, or a combination of the above. Two video cameras will be installed in each habitat with a spare attachment for a third camera when needed. Therefore, a video system that can accommodate up to 12-18 camera inputs per habitat rack must be considered.

  18. Functional magnetic resonance imaging--video diagnosis of soft-tissue trauma to the craniocervical joints and ligaments.

    Science.gov (United States)

    Volle, E

    2000-01-01

    Patients suffering from distortion of the cervical spine after an acceleration trauma present problems with respect to the correct diagnostic recognition of the existing injuries. To define instability of the craniocervical junction, attention should be given to the position of the dens and the dimension of its subarachnoid space during the entire rotational maneuver. Our diagnosis via functional magnetic resonance imaging (fMRI) with video did not focus on injuries to the ligamentous microstructure as visualized with high-resolution MRI. Our purpose was to demonstrate the cause of instability of the craniocervical junction by direct visualization during fMRI-video technique. Between December 1997 and March 1999, 200 patients were studied using fMRI on a 0.2-Tesla Magnetom Open. Routine evaluation of the extracranial vertebral circulation by MRI angiography as an additional preinvestigative requirement is recommended. The earliest examination time from injury to MRI evaluation was 3 months and the maximum, 5 years (average, 2.6 years). Among the 200 patients investigated, 30 showed instability of the ligamentous dens complex. Of the same 200, 8 (4%) had a complete rupture and 22 (11%) an incomplete rupture of the alar ligament, with instability signs. In another 45 patients (22.5%), fMRI-video showed evidence of instability, and all these patients had coexisting intraligamentous signal pattern variation, probably due to granulation tissue. Eighty patients of the 200 (40%) had signal indifference without demonstrable video instability signs, and 43 patients (21.5%) showed no evidence of instability and no signal variation in the alar ligaments. On the basis of recognition of instability and the malfunction of the ligaments, the fibrous capsula, and the tiny dens capsula, we now can distinguish between lesions caused by rotatory trauma to the craniocervical junction and those from classic whiplash injury.

  19. An efficient HW and SW design of H.264 video compression, storage and playback on FPGA devices for handheld thermal imaging systems

    Science.gov (United States)

    Gunay, Omer; Ozsarac, Ismail; Kamisli, Fatih

    2017-05-01

    Video recording is an essential feature of new-generation military imaging systems. Playback of the stored video on the same device is also desirable, as it provides several operational benefits to end users. Two very important constraints for many military imaging systems, especially for hand-held devices and thermal weapon sights, are power consumption and size. To meet these constraints, it is essential to perform most of the processing applied to the video signal, such as preprocessing, compression, storage, decoding, playback and other system functions, on a single programmable chip, such as an FPGA, DSP, GPU or ASIC. In this work, H.264/AVC (Advanced Video Coding) compatible video compression, storage, decoding and playback blocks are efficiently designed and implemented on FPGA platforms using FPGA fabric and the Altera NIOS II soft processor. Many subblocks that are used in video encoding are also used during video decoding in order to save FPGA resources and power. Computationally complex blocks are designed using FPGA fabric, while blocks such as SD card write/read, H.264 syntax decoding and CAVLC decoding are done using the NIOS processor to benefit from software flexibility. In addition, to keep power consumption low, the system was designed to require limited external memory access. The design was tested using a 640x480 25 fps thermal camera on a CYCLONE V FPGA, which is Altera's lowest-power FPGA family, and consumes less than 40% of the CYCLONE V 5CEFA7 FPGA resources on average.

  20. Probing bacterial cell biology using image cytometry.

    Science.gov (United States)

    Cass, Julie A; Stylianidou, Stella; Kuwada, Nathan J; Traxler, Beth; Wiggins, Paul A

    2017-03-01

    Advances in automated fluorescence microscopy have made snapshot and time-lapse imaging of bacterial cells commonplace, yet fundamental challenges remain in analysis. The vast quantity of data collected in high-throughput experiments requires a fast and reliable automated method to analyze fluorescence intensity and localization, cell morphology and proliferation as well as other descriptors. Inspired by effective yet tractable methods of population-level analysis using flow cytometry, we have developed a framework and tools for facilitating analogous analyses in image cytometry. These tools can both visualize and gate (generate subpopulations) more than 70 cell descriptors, including cell size, age and fluorescence. The method is well suited to multi-well imaging, analysis of bacterial cultures with high cell density (thousands of cells per frame) and complete cell cycle imaging. We give a brief description of the analysis of four distinct applications to emphasize the broad applicability of the tool.
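
    Gating on cell descriptors, as in flow cytometry, amounts to selecting the rows of a per-cell table that fall inside user-defined bounds. The pandas sketch below illustrates the idea only; the column names and bounds are hypothetical, and the tool described above exposes its own gating interface.

        import pandas as pd

        def gate(cells: pd.DataFrame, **bounds) -> pd.DataFrame:
            """Return the subpopulation whose descriptor values lie inside the given (lo, hi) bounds."""
            mask = pd.Series(True, index=cells.index)
            for column, (lo, hi) in bounds.items():
                mask &= cells[column].between(lo, hi)
            return cells[mask]

        # e.g. long_bright = gate(cells, length_um=(3.0, 6.0), mean_gfp=(500, 50_000))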

  1. Video-assisted Thoracoscope versus Video-assisted Mini-thoracotomy for Non-small Cell Lung Cancer: A Meta-analysis

    Directory of Open Access Journals (Sweden)

    Bing WANG

    2017-05-01

    Full Text Available Background and objective The aim of this study is to assess the effect of video-assisted thoracoscopic surgery (VATS) and video-assisted mini-thoracotomy (VAMT) in the treatment of non-small cell lung cancer (NSCLC). Methods We searched PubMed, EMbase, CNKI, VIP and ISI Web of Science to collect randomized controlled trials (RCTs) of VATS versus VAMT for NSCLC. Each database was searched from May 2006 to May 2016. Two reviewers independently assessed the quality of the included studies and extracted relevant data, using RevMan 5.3 meta-analysis software. Results We finally identified 13 RCTs involving 1,605 patients. There were 815 patients in the VATS group and 790 patients in the VAMT group. The results of the meta-analysis were as follows: statistically significant differences were found in the harvested lymph nodes (SMD=-0.48, 95%CI: -0.80 to -0.17), operating time (SMD=13.56, 95%CI: 4.96-22.16), operative bleeding volume (SMD=-33.68, 95%CI: -45.70 to -21.66), chest tube placement time (SMD=-1.05, 95%CI: -1.48 to -0.62), chest tube drainage flow (SMD=-83.69, 95%CI: -143.33 to -24.05), postoperative pain scores (SMD=-1.68, 95%CI: -1.98 to -1.38) and postoperative hospital stay (SMD=-2.27, 95%CI: -3.23 to -1.31). No statistically significant difference was found in postoperative complications (SMD=0.83, 95%CI: 0.54-1.29) or postoperative mortality (SMD=0.95, 95%CI: 0.55-1.63) between video-assisted thoracoscopic surgery lobectomy and video-assisted mini-thoracotomy lobectomy in the treatment of NSCLC. Conclusion Compared with video-assisted mini-thoracotomy lobectomy in the treatment of non-small cell lung cancer, rates of postoperative complications and postoperative mortality were almost the same for video-assisted thoracoscopic lobectomy, but the number of harvested lymph nodes, operating time, blood loss, chest tube drainage flow, and postoperative hospital stay differed. VATS is safe and effective in the treatment of NSCLC.

  2. The advantages of using photographs and video images in telephone consultations with a specialist in paediatric surgery

    Directory of Open Access Journals (Sweden)

    Ibrahim Akkoyun

    2012-01-01

    Full Text Available Background: The purpose of this study was to evaluate the advantages of a telephone consultation with a specialist in paediatric surgery after photographs and video images have been taken by a general practitioner for the diagnosis of certain diseases. Materials and Methods: This was a prospective study of the reliability of paediatric surgery online consultation between specialists and general practitioners. Results: Of the 26 general practitioners included in the study, 12 were working in the city and 14 were working in districts outside the city. A total of 41 pictures and 3 videos of 38 patients were sent and evaluated together with the medical history and clinical findings. These patients were diagnosed with umbilical granuloma (n = 6), physiological/pathological phimosis (n = 6), balanitis (n = 6), hydrocele (n = 6), umbilical hernia (n = 4), smegma cyst (n = 2), reducible inguinal hernia (n = 1), incarcerated inguinal hernia (n = 1), paraphimosis (n = 1), buried penis (n = 1), hypospadias (n = 1), epigastric hernia (n = 1), vulvar synechia (n = 1), and rectal prolapse (n = 1). Urgent referral was requested for twelve patients, but it was suggested that only two of them, who had paraphimosis and incarcerated inguinal hernia, be referred under emergency conditions. It was decided that there was no need for the other ten patients to be referred to a specialist at night or at the weekend. All diagnoses were confirmed when the patients underwent examination in the paediatric surgery clinic under elective conditions. Conclusion: Evaluation of photographs and video images of a lesion, together with the medical history and clinical findings, via a telephone consultation between a paediatric surgery specialist and a general practitioner provides a definitive diagnosis and prevents patients from being referred unnecessarily.

  3. Red blood cells, sickle cell (image)

    Science.gov (United States)

    Sickle cell anemia is an inherited blood disease in which the red blood cells produce abnormal pigment (hemoglobin). ... abnormal hemoglobin causes deformity of the red blood cells into crescent or sickle-shapes, as seen in this photomicrograph.

  4. Red blood cells, multiple sickle cells (image)

    Science.gov (United States)

    Sickle cell anemia is an inherited disorder in which abnormal hemoglobin (the red pigment inside red blood cells) is produced. The abnormal hemoglobin causes red blood cells to assume a sickle shape, like the ones seen in this photomicrograph.

  5. In vivo cell tracking with bioluminescence imaging.

    Science.gov (United States)

    Kim, Jung Eun; Kalimuthu, Senthilkumar; Ahn, Byeong-Cheol

    2015-03-01

    Molecular imaging is a fast-growing biomedical research field that allows the visual representation, characterization and quantification of biological processes at the cellular and subcellular levels within intact living organisms. In vivo tracking of cells is an indispensable technology for the development and optimization of cell therapy, in which damaged or diseased tissue is replaced or renewed using transplanted, often autologous, cells. Owing to the outstanding advantages of bioluminescence imaging, this imaging approach is most commonly applied for the in vivo monitoring of transplanted stem cells or immune cells in order to assess the viability of administered cells and their therapeutic efficacy in preclinical small-animal models. In this review, a general overview of bioluminescence is provided and recent updates on in vivo cell tracking using the bioluminescence signal are discussed.

  6. In vivo cell tracking with bioluminescence imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung Eun; Kalimuthu, Senthilkumar; Ahn, Byeong Cheol [Dept. of Nuclear Medicine, Kyungpook National University School of Medicine and Hospital, Daegu (Korea, Republic of)]

    2015-03-15

    Molecular imaging is a fast-growing biomedical research field that allows the visual representation, characterization and quantification of biological processes at the cellular and subcellular levels within intact living organisms. In vivo tracking of cells is an indispensable technology for the development and optimization of cell therapy, in which damaged or diseased tissue is replaced or renewed using transplanted, often autologous, cells. Owing to the outstanding advantages of bioluminescence imaging, this imaging approach is most commonly applied for the in vivo monitoring of transplanted stem cells or immune cells in order to assess the viability of administered cells and their therapeutic efficacy in preclinical small-animal models. In this review, a general overview of bioluminescence is provided and recent updates on in vivo cell tracking using the bioluminescence signal are discussed.

  7. Concurrent Calculations on Reconfigurable Logic Devices Applied to the Analysis of Video Images

    Directory of Open Access Journals (Sweden)

    Sergio R. Geninatti

    2010-01-01

    Full Text Available This paper presents the design and implementation on FPGA devices of an algorithm for computing similarities between neighboring frames in a video sequence using luminance information. By taking advantage of the well-known flexibility of Reconfigurable Logic Devices, we have designed a hardware implementation of the algorithm used in video segmentation and indexing. The experimental results show the tradeoff between concurrent sequential resources and the functional blocks needed to achieve maximum operational speed while achieving minimum silicon area usage. To evaluate system efficiency, we compare the performance of the hardware solution to that of calculations done via software using general-purpose processors with and without an SIMD instruction set.

  8. Social Evaluations of Stereotypic Images in Video Games: Unfair, Legitimate, or “Just Entertainment”?

    OpenAIRE

    Brenick, Alaina; Henning, Alexandra; Killen, Melanie; O'Connor, Alexander; Collins, Michael

    2007-01-01

    The aim of this study was to assess adolescents' evaluations of, and reasoning about, gender stereotypes in video games. Female (N = 46) and male (N = 41) students, predominantly European-American, with a mean age of 19 years, were interviewed about their knowledge of game usage, awareness and evaluation of stereotypes, beliefs about the influences of games on the players, and authority jurisdiction over 3 different types of games: games with negative male stereotypes, and games with negative female stereotyp...

  9. Intracellular viscoelasticity of HeLa cells during cell division studied by video particle-tracking microrheology.

    Science.gov (United States)

    Chen, Yin-Quan; Kuo, Chia-Yu; Wei, Ming-Tzo; Wu, Kelly; Su, Pin-Tzu; Huang, Chien-Shiou; Chiou, Arthur

    2014-01-01

    Cell division plays an important role in regulating cell proliferation and differentiation. It is managed by a complex sequence of cytoskeleton alteration that induces dividing cells to change their morphology to facilitate their division. The change in cytoskeleton structure is expected to affect the intracellular viscoelasticity, which may also contribute to cellular dynamic deformation during cell division. However, the intracellular viscoelasticity during cell division is not yet well understood. In this study, we injected 100-nm (diameter) carboxylated polystyrene beads into the cytoplasm of HeLa cells and applied video particle tracking microrheology to measure their intracellular viscoelasticity at different phases during cell division. The Brownian motion of the intracellular nanoprobes was analyzed to compute the viscoelasticity of HeLa cells in terms of the elastic modulus and viscous modulus as a function of frequency. Our experimental results indicate that during the course of cell division, both intracellular elasticity and viscosity increase in the transition from the metaphase to the anaphase, plausibly due to the remodeling of cytoskeleton and redistributions of molecular motors, but remain approximately the same from the anaphase to the telophase.
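
    The first computational step in video particle tracking microrheology is the time-averaged mean-squared displacement (MSD) of each probe trajectory; the frequency-dependent moduli then follow from a generalized Stokes–Einstein relation. The sketch below covers only the MSD step and assumes drift-corrected (x, y) tracks, e.g. in micrometres.

        import numpy as np

        def time_averaged_msd(track: np.ndarray, max_lag: int) -> np.ndarray:
            """MSD of one trajectory for lag times 1..max_lag (track has shape (T, 2))."""
            msd = np.empty(max_lag)
            for lag in range(1, max_lag + 1):
                displacements = track[lag:] - track[:-lag]
                msd[lag - 1] = np.mean(np.sum(displacements ** 2, axis=1))
            return msd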

  10. Optical cell sorting with multiple imaging modalities

    DEFF Research Database (Denmark)

    Banas, Andrew; Carrissemoux, Caro; Palima, Darwin

    2017-01-01

    techniques. Scattering forces from beams actuated via efficient phase-only modulation have been adopted. This has lowered the required power for sorting cells to a tenth of our previous approach, and also makes the cell sorter safer for use in clinical settings. With the versatility of dynamically programmable phase spatial light modulators, a plurality of light shaping techniques, including hybrid approaches, can be utilized in cell sorting. ... healthy cells. With the richness of visual information, a lot of microscopy techniques have been developed and have been crucial in biological studies. To utilize their complementary advantages we adopt both fluorescence and brightfield imaging in our optical cell sorter. Brightfield imaging has...

  11. Automatic cell counting with ImageJ.

    Science.gov (United States)

    Grishagin, Ivan V

    2015-03-15

    Cell counting is an important routine procedure. However, to date there is no comprehensive, easy to use, and inexpensive solution for routine cell counting, and this procedure usually needs to be performed manually. Here, we report a complete solution for automatic cell counting in which a conventional light microscope is equipped with a web camera to obtain images of a suspension of mammalian cells in a hemocytometer assembly. Based on the ImageJ toolbox, we devised two algorithms to automatically count these cells. This approach is approximately 10 times faster and yields more reliable and consistent results compared with manual counting.
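
    Once the cells in the hemocytometer image have been counted, the concentration follows from the chamber geometry: each large Neubauer square covers 1 mm × 1 mm with a 0.1 mm chamber depth, i.e. 10⁻⁴ mL. A minimal sketch of that arithmetic is given below; it is not part of the reported ImageJ algorithms, and the dilution handling is an assumption.

        def cells_per_ml(counts_per_square: list, dilution_factor: float = 1.0) -> float:
            """Cell concentration (cells/mL) from per-square hemocytometer counts."""
            mean_count = sum(counts_per_square) / len(counts_per_square)
            # one large square = 1 mm x 1 mm x 0.1 mm = 1e-4 mL, hence the 1e4 factor
            return mean_count * dilution_factor * 1.0e4

        # e.g. cells_per_ml([212, 198, 205, 220], dilution_factor=2.0) ≈ 4.2e6 cells/mL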

  12. Quantum dot imaging for embryonic stem cells

    Directory of Open Access Journals (Sweden)

    Gambhir Sanjiv S

    2007-10-01

    Full Text Available Abstract Background Semiconductor quantum dots (QDs) hold increasing potential for cellular imaging both in vitro and in vivo. In this report, we aimed to evaluate in vivo multiplex imaging of mouse embryonic stem (ES) cells labeled with Qtracker-delivered quantum dots (QDs). Results Murine embryonic stem (ES) cells were labeled with six different QDs using Qtracker. ES cell viability, proliferation, and differentiation were not adversely affected by QDs compared with non-labeled control cells (P = NS). Afterward, labeled ES cells were injected subcutaneously onto the backs of athymic nude mice. These labeled ES cells could be imaged with good contrast with one single excitation wavelength. With the same excitation wavelength, the signal intensity, defined as (total signal - background)/exposure time in milliseconds, was 11 ± 2 for cells labeled with QD 525, 12 ± 9 for QD 565, 176 ± 81 for QD 605, 176 ± 136 for QD 655, 167 ± 104 for QD 705, and 1,713 ± 482 for QD 800. Finally, we have shown that QD 800 offers greater fluorescent intensity than the other QDs tested. Conclusion In summary, this is the first demonstration of in vivo multiplex imaging of mouse ES cells labeled with QDs. Upon further improvements, QDs will have a greater potential for tracking stem cells within deep tissues. These results provide a promising tool for imaging stem cell therapy non-invasively in vivo.

  13. Echo-power estimation from log-compressed video data in dynamic contrast-enhanced ultrasound imaging.

    Science.gov (United States)

    Payen, Thomas; Coron, Alain; Lamuraglia, Michele; Le Guillou-Buffello, Delphine; Gaud, Emmanuel; Arditi, Marcel; Lucidarme, Olivier; Bridal, S Lori

    2013-10-01

    Ultrasound (US) scanners typically apply lossy, non-linear modifications to the US data for visualization purposes. The resulting images are then stored as compressed video data. Some system manufacturers provide dedicated software for quantification purposes to eliminate such processing distortions, at least partially. This is currently the recommended approach for quantitatively assessing changes in contrast-agent concentration from clinical data. However, the machine-specific access to US data and the limited set of analysis functionalities offered by each dedicated-software package make it difficult to perform comparable analyses with different US systems. The objective of this work was to establish if linearization of compressed video images obtained with an arbitrary US system can provide an alternative to dedicated-software analysis of machine-specific files for the estimation of echo-power. For this purpose, an Aplio 50 system (Toshiba Medical Systems, Tochigi, Japan), coupled with dedicated CHI-Q (Contrast Harmonic Imaging Quantification) software by Toshiba Medical Systems, was used. Results were compared with two approaches that apply algorithms to estimate relative echo-power from compressed video images: commercially available VueBox software by Bracco Suisse SA (Geneva, Switzerland) and in-laboratory software called PixPower. The echo-power estimated by CHI-Q analysis indicated a strong linear relationship versus agent concentration in vitro (R(2) ≥ 0.9996) for dynamic range (DR) settings of DR60 and DR80, with slopes between 9.22 and 9.57 dB/decade (p = 0.05). These values approach the theoretically predicted dependence of 10.0 dB/decade (equivalent to 3 dB for each concentration doubling). Echo-power estimations obtained from compressed video images with VueBox and PixPower also exhibited strong linear proportionality with concentration (R(2) ≥ 0.9996), with slopes between 9.30 and 9.68 dB/decade (p = 0.05). On an independent in vivo data set (N
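
    A common simplifying model for undoing display log-compression, assuming the full 0-255 grey scale spans the displayed dynamic range linearly in dB, is sketched below. Real scanners apply additional machine-specific processing (which is the point of the comparison above), so this mapping is an illustrative assumption, not the VueBox or PixPower algorithm.

        import numpy as np

        def linearize_to_echo_power(pixel_values: np.ndarray, dynamic_range_db: float) -> np.ndarray:
            """Map 8-bit log-compressed pixel values to relative echo-power (linear units)."""
            level_db = pixel_values.astype(float) / 255.0 * dynamic_range_db
            return 10.0 ** (level_db / 10.0)   # 10 dB/decade, i.e. ~3 dB per concentration doubling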

  14. Unattended video surveillance systems for international safeguards

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, C.S.

    1979-01-01

    The use of unattended video surveillance systems places some unique requirements on the systems and their hardware. The systems have the traditional requirements of video imaging, video storage, and video playback but also have some special requirements such as tamper safing. The technology available to meet these requirements and how it is being applied to unattended video surveillance systems are discussed in this paper.

  15. Studies on Human Red Blood Cells by Micro-video CE System

    Institute of Scientific and Technical Information of China (English)

    戴东升; 齐莉; 余晓; 陈义

    2005-01-01

    A micro-video imaging system on-line coupled with UV detection and capillary electrophoresis has been set up and used for the investigation of human red blood cells (hRBC). Both free and aggregated cells were observed via the imaging, and the broad peak overlain by bar-like peaks measured by UV detection was shown to correspond to the freely migrating and the aggregated cells, respectively. More importantly, fast measurement of cell mobility was achieved within one second using the acquired images, and the calculated mobility data agreed with the UV data, with a deviation of less than 7%. Furthermore, this micro-video system allows us to vividly observe the adsorption-desorption process. In a fused-silica capillary, about 10% of the human red blood cells turned out to adsorb on the tubing surface; they left the surface after 0.04 s to 3.12 s, giving an average retarding time of less than 1 s. This causes a loss of migration mobility of 5×10⁻⁶ cm²·V⁻¹·s⁻¹. It is thought that this system should be applicable to the study of the adsorption of other types of molecules with some modification.
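
    For readers unfamiliar with the mobility calculation behind such image-based measurements, a minimal sketch follows. It assumes the standard capillary-electrophoresis relation μ = v/E = (Δx/Δt)·(L/V), where Δx is the migration distance observed in the image over time Δt, L the total capillary length and V the applied voltage; the numerical values are illustrative only, not values from the paper.

```python
def electrophoretic_mobility(dx_cm, dt_s, total_length_cm, voltage_v):
    """Apparent mobility (cm^2 V^-1 s^-1) from an observed displacement
    dx_cm of a cell in the image over dt_s seconds."""
    velocity = dx_cm / dt_s                   # cm/s
    field = voltage_v / total_length_cm       # V/cm
    return velocity / field

# Illustrative numbers only: a cell moving 0.02 cm in 0.5 s
# in a 50 cm capillary at 15 kV.
mu = electrophoretic_mobility(dx_cm=0.02, dt_s=0.5,
                              total_length_cm=50.0, voltage_v=15000.0)
print(f"mobility = {mu:.2e} cm^2 V^-1 s^-1")
```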

  16. Measuring the viscous and elastic properties of single cells using video particle tracking microrheology

    CERN Document Server

    Warren, Rebecca Louisa; Li, Xiang; Glidle, Andrew; Carlsson, Allan; Cooper, Jonathan M

    2011-01-01

    We present a simple and non-invasive experimental procedure to measure the linear viscoelastic properties of cells by passive video particle tracking microrheology. In order to do this, a generalised Langevin equation is adopted to relate the time-dependent thermal fluctuations of a bead, chemically bound to the cell's exterior, to the frequency-dependent viscoelastic moduli of the cell. It is shown that these moduli are related to the cell's cytoskeletal structure, which in this work is changed by varying the solution osmolarity from iso- to hypo-osmotic conditions. At high frequencies, the frequency dependence of the viscoelastic moduli changes from ∝ ω^(3/4) in iso-osmotic solutions to ∝ ω^(1/2) in hypo-osmotic solutions; the first situation is typical of bending modes in isotropic in vitro reconstituted F-actin networks, and the second could indicate that the restructured cytoskeleton behaves as a gel with "dangling branches". The insights gained ...
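
    As a rough illustration of the passive-microrheology analysis chain, the sketch below computes the mean-squared displacement (MSD) of a tracked bead and its local power-law exponent, the quantity whose high-frequency value (3/4 vs. 1/2) is discussed above. Converting the MSD to G'(ω) and G''(ω) additionally requires a generalized Stokes-Einstein inversion, which is omitted here; the trajectory is synthetic, not data from the paper.

```python
import numpy as np

def msd(x, y, max_lag):
    """Mean-squared displacement of a 2-D trajectory for lags 1..max_lag."""
    out = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        dx = x[lag:] - x[:-lag]
        dy = y[lag:] - y[:-lag]
        out[lag - 1] = np.mean(dx * dx + dy * dy)
    return out

rng = np.random.default_rng(0)
# Synthetic Brownian trajectory (purely viscous case, exponent close to 1).
x = np.cumsum(rng.normal(0.0, 0.01, 5000))   # positions in micrometres
y = np.cumsum(rng.normal(0.0, 0.01, 5000))

lags = np.arange(1, 101)
m = msd(x, y, 100)
# Local logarithmic slope alpha(tau): 1 for purely viscous, <1 for elastic.
alpha = np.gradient(np.log(m), np.log(lags))
print(f"MSD exponent near lag 10: {alpha[9]:.2f}")
```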

  17. Assessment of the Potential of UAV Video Image Analysis for Planning Irrigation Needs of Golf Courses

    Directory of Open Access Journals (Sweden)

    Alberto-Jesús Perea-Moreno

    2016-12-01

    Full Text Available Golf courses can be considered a form of precision agriculture: as playing surfaces, their appearance is of vital importance. Areas with good weather tend to have low rainfall, so the water management of golf courses in these climates is a crucial issue due to the high water demand of turfgrass. Golf courses are rapidly transitioning to reuse water; for example, municipalities in the USA provide price incentives or mandate the use of reuse water for irrigation purposes, and in Europe this is mandatory. Knowing the turfgrass surface of a large area can therefore help plan the treated sewage effluent needs. Recycled water is usually of poor quality, so it is crucial to check the real turfgrass surface in order to be able to plan the global irrigation needs using this type of water. In this way, the irrigation of golf courses does not detract from the natural water resources of the area. The aim of this paper is to propose a new methodology for analysing geometric patterns of video data acquired from UAVs (Unmanned Aerial Vehicles) using a new Hierarchical Temporal Memory (HTM) algorithm. A case study concerning maintained turfgrass, especially for golf courses, has been developed. It shows very good results, better than 98% in the confusion matrix. The results obtained in this study represent a first step toward video imagery classification. In summary, technical progress in computing power and software has shown that video imagery is one of the most promising environmental data acquisition techniques available today. This rapid classification of turfgrass can play an important role in planning water management.

  18. Efficient video panoramic image stitching based on an improved selection of Harris corners and a multiple-constraint corner matching.

    Directory of Open Access Journals (Sweden)

    Minchen Zhu

    Full Text Available Video panoramic image stitching is extremely time-consuming, among other challenges. We present a new algorithm: (i) Improved, self-adaptive selection of Harris corners. Successful stitching relies heavily on the accuracy of corner selection. We fragment each image into numerous regions and select corners within each region according to the normalized variance of region grayscales. Such a selection is self-adaptive and guarantees that corners are distributed in proportion to region texture information. The possible clustering of corners is also avoided. (ii) Multiple-constraint corner matching. The traditional Random Sample Consensus (RANSAC) algorithm is inefficient, especially when handling a large number of images with similar features. We filter out many inappropriate corners according to their position information, and then generate candidate matching pairs based on grayscales of adjacent regions around corners. Finally, we apply multiple constraints on every two pairs to remove incorrectly matched pairs. With a significantly reduced number of iterations needed in RANSAC, the stitching can be performed much more efficiently. Experiments demonstrate that (i) our corner matching is four times faster than normalized cross-correlation (NCC) rough matching in RANSAC and (ii) the generated panoramas feature a smooth transition in overlapping image areas and satisfy real-time human visual requirements.
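
    A minimal sketch of the region-wise corner-selection idea, not the authors' exact implementation: the image is tiled, and the number of Harris corners kept per tile scales with the tile's grey-level variance, which avoids corner clustering in a few highly textured areas. The grid size, corner budget, Harris parameters and the file name "frame.jpg" are illustrative assumptions.

```python
import cv2
import numpy as np

def adaptive_harris(gray, grid=(4, 4), total_corners=400):
    """Select Harris corners per tile, in proportion to tile grey-level variance."""
    h, w = gray.shape
    resp = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    tiles_y, tiles_x = grid
    variances = np.zeros(grid)
    for i in range(tiles_y):
        for j in range(tiles_x):
            tile = gray[i*h//tiles_y:(i+1)*h//tiles_y, j*w//tiles_x:(j+1)*w//tiles_x]
            variances[i, j] = tile.var()
    weights = variances / max(variances.sum(), 1e-9)

    corners = []
    for i in range(tiles_y):
        for j in range(tiles_x):
            n = int(round(total_corners * weights[i, j]))
            y0, y1 = i*h//tiles_y, (i+1)*h//tiles_y
            x0, x1 = j*w//tiles_x, (j+1)*w//tiles_x
            tile_resp = resp[y0:y1, x0:x1]
            # Keep the n strongest Harris responses inside this tile.
            idx = np.argsort(tile_resp, axis=None)[-n:] if n > 0 else []
            for flat in idx:
                ty, tx = np.unravel_index(flat, tile_resp.shape)
                corners.append((x0 + tx, y0 + ty))
    return corners

img = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input frame
if img is not None:
    print(f"{len(adaptive_harris(img))} corners selected")
```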

  19. The effects of physique-salient and physique non-salient exercise videos on women's body image, self-presentational concerns, and exercise motivation.

    Science.gov (United States)

    Ginis, Kathleen A Martin; Prapavessis, Harry; Haase, Anne M

    2008-06-01

    This experiment examined the effects of exposure to physique-salient (PS) and physique non-salient (PNS) exercise videos and the moderating influence of perceived physique discrepancies, on body image, self-presentational concerns, and exercise motivation. Eighty inactive women (M age=26) exercised to a 30 min instructional exercise video. In the PS condition, the video instructor wore revealing attire that emphasized her thin and toned physique. In the PNS condition, she wore attire that concealed her physique. Participants completed pre- and post-exercise measures of body image, social physique anxiety (SPA) and self-presentational efficacy (SPE) and a post-exercise measure of exercise motivation and perceived discrepancies with the instructor's body. No main or moderated effects emerged for video condition. However, greater perceived negative discrepancies were associated with poorer post-exercise body satisfaction and body evaluations, and higher state SPA. There were no effects on SPE or motivation. Results suggest that exercise videos that elicit perceived negative discrepancies can be detrimental to women's body images.

  20. Cataloguing artists' videos

    OpenAIRE

    Cooke, Jacqueline

    2009-01-01

    Artist’s videos present some challenges to cataloguers. How to select the source of information, how to describe them in ways which will help library users to find them, and particularly how to facilitate subject access are matters addressed in this article. With reference to the artist’s video collections at Goldsmiths, I consider interpretations of the rules for cataloguing art documentation and moving image material and discuss how they can be applied to video works and art documentation f...

  1. Individual cell motility studied by time-lapse video recording: influence of experimental conditions

    DEFF Research Database (Denmark)

    Hartmann-Petersen, R; Walmod, P S; Berezin, A

    2000-01-01

    BACKGROUND: Eukaryotic cell motility plays a key role during development, wound healing, and tumour invasion. Computer-assisted image analysis now makes it a realistic task to quantify individual cell motility of a large number of cells. However, the influence of culture conditions before...... line. Cellular morphology and organization of filamentous actin were assessed by means of phase-contrast and confocal laser scanning microscopy and compared to the corresponding motility data. RESULTS: Cell dissociation procedure, seeding density, time of cultivation, and substrate concentration were...

  2. New method for identifying features of an image on a digital video display

    Science.gov (United States)

    Doyle, Michael D.

    1991-04-01

    The MetaMap process extends the concept of direct-manipulation human-computer interfaces to new limits. Its specific capabilities include the correlation of discrete image elements to relevant text information and the correlation of these image features to other images as well as to program control mechanisms. The correlation is accomplished through reprogramming of both the color map and the image so that discrete image elements comprise unique sets of color indices. This process allows the correlation to be accomplished with very efficient data storage and program execution times. Image databases adapted to this process become object-oriented as a result. Very sophisticated interrelationships can be set up between images, text, and program control mechanisms using this process. An application of this interfacing process to the design of an interactive atlas of medical histology, as well as other possible applications, is described. The MetaMap process is protected by U. S. patent #4

  3. Laser-induced dental caries and plaque diagnosis on patients by sensitive autofluorescence spectroscopy and time-gated video imaging: preliminary studies

    Science.gov (United States)

    Koenig, Karsten; Schneckenburger, Herbert

    1994-09-01

    The laser-induced in vivo autofluorescence of human teeth was investigated by means of time- resolved/time-gated fluorescence techniques. The aim of these studies was non-contact caries and plaque detection. Carious lesions and dental plaque fluoresce in the red spectral region. This autofluorescence seems to be based on porphyrin-producing bacteria. We report on preliminary studies on patients using a novel method of autofluorescence imaging. A special device was constructed for time-gated video imaging. Nanosecond laser pulses for fluorescence excitation were provided by a frequency-doubled, Q-switched Nd:YAG laser. Autofluorescence was detected in an appropriate nanosecond time window using a video camera with a time-gated image intensifier (minimal time gate: 5 ns). Laser-induced autofluorescence based on porphyrin-producing bacteria seems to be an appropriate tool for detecting dental lesions and for creating `caries-images' and `dental plaque' images.

  4. [Research on measuring the velocity and displacement of the coxa and knee based on video image processing].

    Science.gov (United States)

    Chen, Zhao; Zhao, Hang; Zheng, Jianli; An, Meijun; Xu, Xiulin; Chen, Wenhuan

    2014-12-01

    Based on repeated experiments as well as continuous research and improvement, an efficient scheme to measure the velocity and displacement of coxa and knee movements based on video image processing is presented in this paper. The scheme performs precise, real-time quantitative measurement of the 2D velocity or displacement of the coxa and knee using a video camera mounted on one side of the healing and training beds, based on a simplified pinhole projection model. In addition, we used a specially designed auxiliary calibration target, composed of 24 circle points uniformly located on two concentric circles and two straight rods which can rotate freely about the concentric center within the vertical plane, to do the measurements. Experiments carried out in our laboratory showed that the proposed scheme could basically satisfy the precision and processing-speed requirements of such a system, and would be very suitable for application in smart evaluation/training and healing systems for muscle/balance function disability as an advanced and intuitive aid.

  5. DCT/DST-based transform coding for intra prediction in image/video coding.

    Science.gov (United States)

    Saxena, Ankur; Fernandes, Felix C

    2013-10-01

    In this paper, we present a DCT/DST-based transform scheme that applies either the conventional DCT or the type-7 DST for all the video-coding intra-prediction modes: vertical, horizontal, and oblique. Our approach is applicable to any block-based intra-prediction scheme in a codec that employs transforms along the horizontal and vertical directions separably. Previously, Han, Saxena, and Rose showed that for the intra-predicted residuals of the horizontal and vertical modes, the DST is the optimal transform with performance close to the KLT. Here, we prove that this is indeed the case for the other oblique modes. The optimal choice of DCT or DST is based on the intra-prediction mode and requires no additional signaling information or rate-distortion search. The DCT/DST scheme presented in this paper was adopted in the HEVC standardization in March 2011. Further simplifications, especially to reduce implementation complexity, which remove the mode-dependency between DCT and DST and simply always use the DST for the 4 × 4 intra luma blocks, were adopted in the HEVC standard in July 2012. Simulation results for the DCT/DST algorithm, obtained with the reference software of the ongoing HEVC standardization, are presented. Our results show that the DCT/DST scheme provides significant BD-rate improvement over the conventional DCT-based scheme for intra prediction in video sequences.
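
    As a rough numerical illustration of the mode-dependent transform choice (a sketch of the general idea, not the HEVC reference implementation), the code below builds 4×4 orthonormal DCT-II and DST-VII matrices and selects the horizontal/vertical transform with a simplified rule of the kind described above: DST-VII along a direction in which the intra-predicted residual tends to grow away from the reference samples, DCT otherwise. The mode names and the toy residual block are assumptions for illustration.

```python
import numpy as np

N = 4

def dct2_matrix(n=N):
    """Orthonormal DCT-II matrix (rows are basis vectors)."""
    c = np.array([[np.cos(np.pi * (2*j + 1) * k / (2*n)) for j in range(n)]
                  for k in range(n)]) * np.sqrt(2.0 / n)
    c[0, :] /= np.sqrt(2.0)
    return c

def dst7_matrix(n=N):
    """Orthonormal DST-VII matrix (the 4x4 form used for HEVC intra residuals)."""
    return np.array([[np.sin(np.pi * (2*k + 1) * (j + 1) / (2*n + 1))
                      for j in range(n)]
                     for k in range(n)]) * (2.0 / np.sqrt(2*n + 1))

def transform_residual(residual, mode):
    """Apply a separable transform to a 4x4 intra residual.

    Simplified assumed rule: for a 'vertical' prediction mode the residual
    grows with the row index, so DST-VII is used vertically and DCT
    horizontally; 'horizontal' is the mirror case; 'oblique' here uses
    DST-VII in both directions.
    """
    dct, dst = dct2_matrix(), dst7_matrix()
    vert = dst if mode in ("vertical", "oblique") else dct
    horiz = dst if mode in ("horizontal", "oblique") else dct
    return vert @ residual @ horiz.T

residual = np.arange(16, dtype=float).reshape(4, 4)   # toy residual block
coeffs = transform_residual(residual, "vertical")
print(np.round(coeffs, 2))
```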

  6. An Alternative Scalable Video Coding Scheme Used For Efficient Image Representation In Multimedia Applications

    Directory of Open Access Journals (Sweden)

    Aravinda T.V

    2010-07-01

    Full Text Available This paper describes a novel video coding scheme based on a three-dimensional Matching Pursuit algorithm. In addition to good compression performance at low bit rates, the proposed coder allows for flexible spatial, temporal and rate scalability thanks to its progressive coding structure. The Matching Pursuit algorithm generates a sparse decomposition of a video sequence into a series of spatio-temporal atoms, taken from an overcomplete dictionary of three-dimensional basis functions. The dictionary is generated by shifting, scaling and rotating two different mother atoms in order to cover the whole frequency cube. An embedded stream is then produced from the series of atoms. They are first distributed into sets through the set-partitioned position map algorithm (SPPM) to form the index map, inspired by bit-plane encoding. Scalar quantization is then applied to the coefficients, which are finally arithmetic coded. A complete MP3D codec has been implemented, and its performance is shown to compare favorably with other scalable coders like MPEG-4 FGS and SPIHT-3D. In addition, the MP3D streams offer incomparable flexibility for multiresolution streaming or adaptive decoding.
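
    The core greedy loop of Matching Pursuit is independent of the 3-D dictionary construction described above; a minimal generic sketch (1-D signal, random unit-norm dictionary) is shown below to illustrate how atoms and coefficients are peeled off the residual one at a time. The dictionary size and the synthetic signal are assumptions for illustration.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy Matching Pursuit: dictionary columns are unit-norm atoms."""
    residual = signal.astype(float).copy()
    atoms, coeffs = [], []
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual
        best = int(np.argmax(np.abs(correlations)))
        c = correlations[best]
        residual -= c * dictionary[:, best]   # remove the atom's contribution
        atoms.append(best)
        coeffs.append(c)
    return atoms, coeffs, residual

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
x = D[:, 10] * 3.0 + D[:, 99] * 1.5         # sparse synthetic signal
atoms, coeffs, res = matching_pursuit(x, D, n_atoms=5)
print(atoms[:2], np.round(coeffs[:2], 2), np.linalg.norm(res))
```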

  7. Automated live cell imaging systems reveal dynamic cell behavior.

    Science.gov (United States)

    Chirieleison, Steven M; Bissell, Taylor A; Scelfo, Christopher C; Anderson, Jordan E; Li, Yong; Koebler, Doug J; Deasy, Bridget M

    2011-07-01

    Automated time-lapsed microscopy provides unique research opportunities to visualize cells and subcellular components in experiments with time-dependent parameters. As accessibility to these systems is increasing, we review here their use in cell science with a focus on stem cell research. Although the use of time-lapsed imaging to answer biological questions dates back nearly 150 years, only recently have the use of an environmentally controlled chamber and robotic stage controllers allowed for high-throughput continuous imaging over long periods at the cell and subcellular levels. Numerous automated imaging systems are now available from both companies that specialize in live cell imaging and from major microscope manufacturers. We discuss the key components of robots used for time-lapsed live microscopic imaging, and the unique data that can be obtained from image analysis. We show how automated features enhance experimentation by providing examples of uniquely quantified proliferation and migration live cell imaging data. In addition to providing an efficient system that drastically reduces man-hours and consumes fewer laboratory resources, this technology greatly enhances cell science by providing a unique dataset of temporal changes in cell activity. Copyright © 2011 American Institute of Chemical Engineers (AIChE).

  8. Image mosaic based on video sequence%基于视频序列的图像拼接

    Institute of Scientific and Technical Information of China (English)

    李战明; 施颖迪

    2015-01-01

    In order to eliminate the redundant information between video sequences and express the main content of a video in a simple summary form, an image mosaic method based on video sequences was proposed. First, key frames were extracted using an improved inter-frame clustering algorithm. Second, feature points of the key frames were extracted using the SIFT algorithm and matched with a nearest-neighbor algorithm; guided mutual matching and vote filtering were used to improve the matching accuracy. Then the homography matrices between the selected frames were obtained with the RANSAC robust estimation algorithm and refined using LM nonlinear iteration. Finally, cascaded homography matrices combined with a weighted fusion algorithm were used to realize the seamless stitching of the video sequences. Experiments demonstrate that the method is effective.
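
    The cascading of homographies mentioned above (mapping every key frame into the reference frame of the first one) reduces to matrix products; a minimal sketch, with OpenCV used for the pairwise SIFT/RANSAC estimation, is given below under the assumption that consecutive key frames overlap. This is not the authors' implementation, and the file names in the usage comment are hypothetical.

```python
import cv2
import numpy as np

def pairwise_homography(img_a, img_b):
    """Estimate the homography mapping img_b into img_a (SIFT + RANSAC)."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des_b, des_a)
    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def cascade_homographies(key_frames):
    """Homographies mapping every key frame into the first frame's coordinates."""
    cumulative = [np.eye(3)]
    for prev, curr in zip(key_frames, key_frames[1:]):
        cumulative.append(cumulative[-1] @ pairwise_homography(prev, curr))
    return cumulative

# Usage (hypothetical file names): frames would then be warped with
# cv2.warpPerspective and blended with per-pixel weights.
# key_frames = [cv2.imread(f, cv2.IMREAD_GRAYSCALE) for f in ("k0.png", "k1.png")]
# H_list = cascade_homographies(key_frames)
```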

  9. Dynamic measurements of flowing cells labeled by gold nanoparticles using full-field photothermal interferometric imaging

    Science.gov (United States)

    Turko, Nir A.; Roitshtain, Darina; Blum, Omry; Kemper, Björn; Shaked, Natan T.

    2017-06-01

    We present highly dynamic photothermal interferometric phase microscopy for quantitative, selective contrast imaging of live cells during flow. Gold nanoparticles can be biofunctionalized to bind to specific cells, and stimulated for local temperature increase due to plasmon resonance, causing a rapid change of the optical phase. These phase changes can be recorded by interferometric phase microscopy and analyzed to form an image of the binding sites of the nanoparticles in the cells, gaining molecular specificity. Since the nanoparticle excitation frequency might overlap with the sample dynamics frequencies, photothermal phase imaging was performed on stationary or slowly dynamic samples. Furthermore, the computational analysis of the photothermal signals is time consuming. This makes photothermal imaging unsuitable for applications requiring dynamic imaging or real-time analysis, such as analyzing and sorting cells during fast flow. To overcome these drawbacks, we utilized an external interferometric module and developed new algorithms, based on discrete Fourier transform variants, enabling fast analysis of photothermal signals in highly dynamic live cells. Due to the self-interference module, the cells are imaged with and without excitation in video-rate, effectively increasing signal-to-noise ratio. Our approach holds potential for using photothermal cell imaging and depletion in flow cytometry.
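
    One common way to realize the fast per-pixel analysis described above is a single-bin discrete Fourier transform (a digital lock-in) evaluated only at the known excitation frequency; the sketch below is a generic illustration under that assumption, not the authors' exact algorithm, and the frame rate, modulation frequency and synthetic phase stack are illustrative.

```python
import numpy as np

def photothermal_amplitude(phase_stack, frame_rate_hz, excitation_hz):
    """Per-pixel amplitude of the phase signal at the excitation frequency.

    phase_stack: array of shape (n_frames, height, width) of quantitative
    phase maps acquired during modulated nanoparticle excitation.
    """
    n = phase_stack.shape[0]
    t = np.arange(n) / frame_rate_hz
    # Single-bin DFT: project the time series of every pixel onto the
    # complex exponential at the excitation frequency.
    kernel = np.exp(-2j * np.pi * excitation_hz * t)
    spectrum = np.tensordot(kernel, phase_stack, axes=(0, 0)) / n
    return 2.0 * np.abs(spectrum)            # amplitude map (same units as phase)

# Synthetic demo: 10 Hz modulation sampled at 100 frames/s, 200 frames.
frames, h, w = 200, 32, 32
t = np.arange(frames) / 100.0
stack = 0.05 * np.sin(2 * np.pi * 10.0 * t)[:, None, None] * np.ones((frames, h, w))
amp = photothermal_amplitude(stack, frame_rate_hz=100.0, excitation_hz=10.0)
print(f"recovered amplitude ≈ {amp.mean():.3f} (expected 0.05)")
```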

  10. Implementation of Image Registration Algorithms for Real-time Target Tracking Through Video Sequences

    Directory of Open Access Journals (Sweden)

    Jharna Majumdar

    2002-07-01

    Full Text Available "Automatic detection and tracking of interesting targets from a sequence of images obtained from a reconnaissance platform is an interesting area of research for defence-related applications. Image registration is the basic step used in target tracking application. The paper briefly reviews some of the image registration algorithms, analyse their performance using a suitable image processing hardware, and selects the most suitable algorithm for a real-time target tracking application using cubic-spline model and spline model Kalman filter for the prediction of an occluded target. The algorithms developed are implemented in a ground-based image exploitation system (GIES developed at the Aeronautical Development Establishment for unmanned aerial vehicle application, and the results presented for the images obtained during actual flight trial.

  11. Low cost multi-purpose balloon-borne platform for wide-field imaging and video observation

    CERN Document Server

    Ocaña, Francisco; Conde, Aitor

    2016-01-01

    Atmospheric layers, especially the troposphere, hinder astronomical observation. For more than 100 years astronomers have tried observing from balloons to avoid turbulence and extinction. New developments in card-size computers, RF equipment and satellite navigation have democratised access to the stratosphere. As a result of a ProAm collaboration with the Daedalus Team we have developed a low-cost multi-purpose platform with stratospheric balloons carrying up to 3 kg of scientific payload. The Daedalus Team is an amateur group that has been launching sounding probes since 2010. Since then the first two authors have provided scientific payloads for nighttime flights with the purpose of technology demonstration for astronomical observation. We have successfully observed meteor showers (Geminids 2012, Camelopardalis 2014, Quadrantids 2016 and Lyrids 2016) and city light-pollution emission with image and video sensors covering the 400-1000 nm range.

  12. High Dynamic Range Video

    CERN Document Server

    Myszkowski, Karol

    2008-01-01

    This book presents a complete pipeline for HDR image and video processing from acquisition, through compression and quality evaluation, to display. At the HDR image and video acquisition stage, specialized HDR sensors or multi-exposure techniques suitable for traditional cameras are discussed. Then, we present a practical solution for calibrating pixel values in terms of photometric or radiometric quantities, which are required in some technically oriented applications. Also, we cover the problem of efficient image and video compression and encoding either for storage or transmission purposes, in

  13. SECRETS OF SONG VIDEO

    Directory of Open Access Journals (Sweden)

    Chernyshov Alexander V.

    2014-04-01

    Full Text Available The article focuses on the origins of the song video as a TV and Internet genre. In addition, it considers problems of screen-image creation depending on the musical form and the lyrics of a song, in connection with relevant principles of accent and phraseological video editing and filming techniques, as well as with additional frames and sound elements.

  14. Micro-Imagers for Spaceborne Cell-Growth Experiments

    Science.gov (United States)

    Behar, Alberto; Matthews, Janet; SaintAnge, Beverly; Tanabe, Helen

    2006-01-01

    A document discusses selected aspects of a continuing effort to develop five micro-imagers for both still and video monitoring of cell cultures to be grown aboard the International Space Station. The approach taken in this effort is to modify and augment pre-existing electronic micro-cameras. Each such camera includes an image-detector integrated-circuit chip, signal-conditioning and image-compression circuitry, and connections for receiving power from, and exchanging data with, external electronic equipment. Four white and four multicolor light-emitting diodes are to be added to each camera for illuminating the specimens to be monitored. The lens used in the original version of each camera is to be replaced with a shorter-focal-length, more-compact singlet lens to make it possible to fit the camera into the limited space allocated to it. Initially, the lenses in the five cameras are to have different focal lengths: the focal lengths are to be 1, 1.5, 2, 2.5, and 3 cm. Once one of the focal lengths is determined to be the most nearly optimum, the remaining four cameras are to be fitted with lenses of that focal length.

  15. The STOne Transform: Multi-Resolution Image Enhancement and Compressive Video.

    Science.gov (United States)

    Goldstein, Tom; Xu, Lina; Kelly, Kevin F; Baraniuk, Richard

    2015-12-01

    Compressive sensing enables the reconstruction of high-resolution signals from under-sampled data. While the compressive methods simplify data acquisition, they require the solution of difficult recovery problems to make use of the resulting measurements. This paper presents a new sensing framework that combines the advantages of both the conventional and the compressive sensing. Using the proposed sum-to-one transform, the measurements can be reconstructed instantly at the Nyquist rates at any power-of-two resolution. The same data can then be enhanced to higher resolutions using the compressive methods that leverage sparsity to beat the Nyquist limit. The availability of a fast direct reconstruction enables the compressive measurements to be processed on small embedded devices. We demonstrate this by constructing a real-time compressive video camera.

  16. VidCat: an image and video analysis service for personal media management

    Science.gov (United States)

    Begeja, Lee; Zavesky, Eric; Liu, Zhu; Gibbon, David; Gopalan, Raghuraman; Shahraray, Behzad

    2013-03-01

    Cloud-based storage and consumption of personal photos and videos provides increased accessibility, functionality, and satisfaction for mobile users. One cloud service frontier that is recently growing is that of personal media management. This work presents a system called VidCat that assists users in the tagging, organization, and retrieval of their personal media by faces and visual content similarity, time, and date information. Evaluations of the effectiveness of the copy detection and face recognition algorithms on standard datasets are also discussed. Finally, the system includes a set of application programming interfaces (APIs) allowing content to be uploaded, analyzed, and retrieved on any client with simple HTTP-based methods, as demonstrated with a prototype developed on the iOS and Android mobile platforms.

  17. Langerhans cell histiocytosis of bone: MR imaging

    Energy Technology Data Exchange (ETDEWEB)

    George, J.C. [Indiana Univ., Indianapolis, IN (United States). Dept. of Radiology; Buckwalter, K.A. [Indiana Univ., Indianapolis, IN (United States). Dept. of Radiology; Cohen, M.D. [Indiana Univ., Indianapolis, IN (United States). Dept. of Radiology; Edwards, M.K. [Indiana Univ., Indianapolis, IN (United States). Dept. of Radiology; Smith, R.R. [Indiana Univ., Indianapolis, IN (United States). Dept. of Radiology

    1994-03-01

    Magnetic resonance (MR) images of 12 pathologically proven lesions of Langerhans cell histiocytosis (LCH) of bone were reviewed retrospectively. MR identified all lesions, three of which were not identified on plain radiographs. In all cases, MR showed greater abnormality than did plain radiographs. With one exception, all lesions were hypointense on T1-weighted images and hyperintense on T2-weighted images. The lesions and associated soft tissue abnormalities were very conspicuous on short TI inversion sequences and T1-weighted post-contrast images. Follow-up MR studies in two patients after chemotherapy showed decreased size and enhancement of lesions compared with baseline studies. (orig.)

  18. Langerhans cell histiocytosis of bone: MR imaging.

    Science.gov (United States)

    George, J C; Buckwalter, K A; Cohen, M D; Edwards, M K; Smith, R R

    1994-01-01

    Magnetic resonance (MR) images of 12 pathologically proven lesions of Langerhans cell histiocytosis (LCH) of bone were reviewed retrospectively. MR identified all lesions, three of which were not identified on plain radiographs. In all cases, MR showed greater abnormality than did plain radiographs. With one exception, all lesions were hypointense on T1-weighted images and hyperintense on T2-weighted images. The lesions and associated soft tissue abnormalities were very conspicuous on short TI inversion sequences and T1-weighted post-contrast images. Follow-up MR studies in two patients after chemotherapy showed decreased size and enhancement of lesions compared with baseline studies.

  19. Bioluminescence imaging in live cells and animals.

    Science.gov (United States)

    Tung, Jack K; Berglund, Ken; Gutekunst, Claire-Anne; Hochgeschwender, Ute; Gross, Robert E

    2016-04-01

    The use of bioluminescent reporters in neuroscience research continues to grow at a rapid pace as their applications and unique advantages over conventional fluorescent reporters become more appreciated. Here, we describe practical methods and principles for detecting and imaging bioluminescence from live cells and animals. We systematically tested various components of our conventional fluorescence microscope to optimize it for long-term bioluminescence imaging. High-resolution bioluminescence images from live neurons were obtained with our microscope setup, which could be continuously captured for several hours with no signs of phototoxicity. Bioluminescence from the mouse brain was also imaged noninvasively through the intact skull with a conventional luminescence imager. These methods demonstrate how bioluminescence can be routinely detected and measured from live cells and animals in a cost-effective way with common reagents and equipment.

  20. Extracting a Good Quality Frontal Face Image from a Low-Resolution Video Sequence

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.

    2011-01-01

    Feeding low-resolution and low-quality images, from inexpensive surveillance cameras, to systems like, e.g., face recognition, produces erroneous and unstable results. Therefore, there is a need for a mechanism to bridge the gap between, on the one hand, low-resolution and low-quality images and, on the other hand, facial analysis systems. The proposed system in this paper deals with exactly this problem. Our approach is to apply a reconstruction-based super-resolution algorithm. Such an algorithm, however, has two main problems: first, it requires relatively similar images with not too much noise...

  2. Learning Trajectory for Transforming Teachers' Knowledge for Teaching Mathematics and Science with Digital Image and Video Technologies in an Online Learning Experience

    Science.gov (United States)

    Niess, Margaret L.; Gillow-Wiles, Henry

    2014-01-01

    This qualitative cross-case study explores the influence of a designed learning trajectory on transforming teachers' technological pedagogical content knowledge (TPACK) for teaching with digital image and video technologies. The TPACK Learning Trajectory embeds tasks with specific instructional strategies within a social metacognitive…

  3. PET imaging of adoptive progenitor cell therapies.

    Energy Technology Data Exchange (ETDEWEB)

    Gelovani, Juri G.

    2008-05-13

    Objectives. The overall objective of this application is to develop novel technologies for non-invasive imaging of adoptive stem cell-based therapies with positron emission tomography (PET) that would be applicable to human patients. To achieve this objective, stem cells will be genetically labeled with a PET-reporter gene and repetitively imaged to assess their distribution, migration, differentiation, and persistence using a radiolabeled reporter probe. This new imaging technology will be tested in adoptive progenitor cell-based therapy models in animals, including delivery of pro-apoptotic genes to tumors and T-cell reconstitution for immunostimulatory therapy during allogeneic bone marrow progenitor cell transplantation. Technical and Scientific Merits. Non-invasive whole-body imaging would significantly aid in the development and clinical implementation of various adoptive progenitor cell-based therapies by providing the means for non-invasive monitoring of the fate of injected progenitor cells over a long period of observation. The proposed imaging approaches could help to address several questions related to stem cell migration and homing, their long-term viability, and their subsequent differentiation. The ability to image these processes non-invasively in 3D and repetitively over a long period of time is very important and will help the development and clinical application of various strategies to control and direct stem cell migration and differentiation. Approach to accomplish the work. Stem cells will be genetically labeled with a reporter gene which will allow for repetitive non-invasive “tracking” of the migration and localization of the genetically labeled stem cells and their progeny. This is a radically new approach that is being developed for future human applications and should allow for long-term (many years) repetitive imaging of the fate of tissues that develop from the transplanted stem cells. Why the approach is appropriate. The novel approach to

  4. Two-photon imaging of stem cells

    Science.gov (United States)

    Uchugonova, A.; Gorjup, E.; Riemann, I.; Sauer, D.; König, K.

    2008-02-01

    A variety of human and animal stem cells (rat and human adult pancreatic stem cells, salivary gland stem cells, dental pulpa stem cells) have been investigated by femtosecond laser 5D two-photon microscopy. Autofluorescence and second harmonic generation have been imaged with submicron spatial resolution, 270 ps temporal resolution, and 10 nm spectral resolution. In particular, NADH and flavoprotein fluorescence was detected in stem cells. Major emission peaks at 460nm and 530nm with typical mean fluorescence lifetimes of 1.8 ns and 2.0 ns, respectively, were measured using time-correlated single photon counting and spectral imaging. Differentiated stem cells produced the extracellular matrix protein collagen which was detected by SHG signals at 435 nm.

  5. Red blood cells, spherocytosis (image)

    Science.gov (United States)

    Spherocytosis is a hereditary disorder of the red blood cells (RBCs), which may be associated with a mild anemia. Typically, the affected RBCs are small, spherically shaped, and lack the light centers seen ...

  6. Video Information Communication and Retrieval/Image Based Information System (VICAR/IBIS)

    Science.gov (United States)

    Wherry, D. B.

    1981-01-01

    The acquisition, operation, and planning stages of installing a VICAR/IBIS system are described. The system operates in an IBM mainframe environment, and provides image processing of raster data. System support problems with software and documentation are discussed.

  7. Application of 3D Morphable Models to faces in video images

    NARCIS (Netherlands)

    van Rootseler, R.T.A.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.; van den Biggelaar, Olivier

    2011-01-01

    The 3D Morphable Face Model (3DMM) has been used for over a decade for creating 3D models from single images of faces. This model is based on a PCA model of the 3D shape and texture generated from a limited number of 3D scans. The goal of fitting a 3DMM to an image is to find the model coefficients,

  8. Content Based Video Retrieval

    Directory of Open Access Journals (Sweden)

    B. V. Patel

    2012-10-01

    Full Text Available Content-based video retrieval is an approach for facilitating the searching and browsing of large image collections over the World Wide Web. In this approach, video analysis is conducted on low-level visual properties extracted from video frames. We believed that in order to create an effective video retrieval system, visual perception must be taken into account. We conjectured that a technique which employs multiple features for indexing and retrieval would be more effective in the discrimination and search tasks of videos. In order to validate this claim, content-based indexing and retrieval systems were implemented using color histograms, various texture features and other approaches. Videos were stored in an Oracle 9i database, and a user study measured the correctness of responses.
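
    A minimal sketch of the color-histogram side of such a retrieval system (indexing keyframes by a normalized histogram and ranking by histogram intersection); database storage is omitted, the bin count is an assumption, and the frames are synthetic rather than real keyframes.

```python
import numpy as np

def color_histogram(frame, bins_per_channel=8):
    """Normalized joint RGB histogram of a frame (H, W, 3), values 0-255."""
    hist, _ = np.histogramdd(frame.reshape(-1, 3),
                             bins=(bins_per_channel,) * 3,
                             range=((0, 256),) * 3)
    return (hist / hist.sum()).ravel()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical histograms."""
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(0)
database = {f"keyframe_{i}": color_histogram(rng.integers(0, 256, (120, 160, 3)))
            for i in range(5)}
query = color_histogram(rng.integers(0, 256, (120, 160, 3)))

ranked = sorted(database.items(),
                key=lambda kv: histogram_intersection(query, kv[1]),
                reverse=True)
print("best match:", ranked[0][0])
```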

  9. Video Analytics for Indexing, Summarization and Searching of Video Archives

    Energy Technology Data Exchange (ETDEWEB)

    Trease, Harold E.; Trease, Lynn L.

    2009-08-01

    This paper will be submitted to the proceedings of the Eleventh IASTED International Conference on Signal and Image Processing. Given a video or video archive, how does one effectively and quickly summarize, classify, and search the information contained within the data? This paper addresses these issues by describing a process for the automated generation of a table-of-contents and keyword, topic-based index tables that can be used to catalogue, summarize, and search large amounts of video data. Having the ability to index and search the information contained within the videos, beyond just metadata tags, provides a mechanism to extract and identify "useful" content from image and video data.

  10. Rapid diagnostic imaging and pathologic evaluation of surgical tissue using video rate structured illumination microscopy (VR-SIM) (Conference Presentation)

    Science.gov (United States)

    Wang, Mei; Tulman, David; Elfer, Kate; Sholl, Andrew; Brown, J. Quincy

    2016-03-01

    Currently available pathology techniques for obtaining a rapid tissue diagnosis, or for determining the adequacy of specimens intended for downstream analysis, are too slow, labor-intensive, and destructive for point-of-care (POC) applications. We previously demonstrated video-rate structured illumination microscopy (VR-SIM) for accurate, high-throughput, non-destructive diagnostic imaging of fluorescently-stained prostate biopsies in seconds per biopsy, with an area under the ROC curve of 0.82-0.88 after pathologist review. In addition, we have demonstrated that it is feasible to use VR-SIM to routinely image very large gross pathology specimens, such as entire prostate resection surfaces, in relatively short timeframes at subcellular resolution. However, our prior work has focused on applications in prostate cancer; the utility in other organ sites has not been explored. Here we extended our technology to kidney, liver, and lung biopsies of varying size. We conducted a validation study of VR-SIM against histopathology on a variety of human tissues, including both small biopsies and large slices of tissue. We conducted a blinded study in which the study pathologist accurately identified the organs based on VR-SIM images alone. The results were then used to create a clinical atlas relating VR-SIM and H&E images for the different tissues of interest. This clinical atlas will be used to aid pathologist interpretation in future POC clinical applications of VR-SIM in kidney, liver, and lung. Such applications could include on-site identification of the presence of kidney glomeruli to ensure successful downstream IHC analysis, or determination of the adequacy of lung cancer biopsies for genomic analysis.

  11. Short Project-Based Learning with MATLAB Applications to Support the Learning of Video-Image Processing

    Science.gov (United States)

    Gil, Pablo

    2017-10-01

    University courses concerning Computer Vision and Image Processing are generally taught using a traditional methodology that is focused on the teacher rather than on the students. This approach is consequently not effective when teachers seek to attain cognitive objectives involving their students' critical thinking. This manuscript covers the development, implementation and assessment of a short project-based engineering course with MATLAB applications in Multimedia Engineering being taken by Bachelor's degree students. The principal goal of all course lectures and hands-on laboratory activities was for the students to not only acquire image-specific technical skills but also a general knowledge of data analysis so as to locate phenomena in pixel regions of images and video frames. This would hopefully enable the students to develop skills regarding the implementation of the filters, operators, methods and techniques used for image processing and computer vision software libraries. Our teaching-learning process thus permits the accomplishment of knowledge assimilation, student motivation and skill development through the use of a continuous evaluation strategy to solve practical and real problems by means of short projects designed using MATLAB applications. Project-based learning is not new. This approach has been used in STEM learning in recent decades. But there are many types of projects. The aim of the current study is to analyse the efficacy of short projects as a learning tool when compared to long projects during which the students work with more independence. This work additionally presents the impact of different types of activities, and not only short projects, on students' overall results in this subject. Moreover, a statistical study has allowed the author to suggest a link between the students' success ratio and the type of content covered and activities completed on the course. The results described in this paper show that those students who took part

  12. Short Project-Based Learning with MATLAB Applications to Support the Learning of Video-Image Processing

    Science.gov (United States)

    Gil, Pablo

    2017-05-01

    University courses concerning Computer Vision and Image Processing are generally taught using a traditional methodology that is focused on the teacher rather than on the students. This approach is consequently not effective when teachers seek to attain cognitive objectives involving their students' critical thinking. This manuscript covers the development, implementation and assessment of a short project-based engineering course with MATLAB applications in Multimedia Engineering being taken by Bachelor's degree students. The principal goal of all course lectures and hands-on laboratory activities was for the students to not only acquire image-specific technical skills but also a general knowledge of data analysis so as to locate phenomena in pixel regions of images and video frames. This would hopefully enable the students to develop skills regarding the implementation of the filters, operators, methods and techniques used for image processing and computer vision software libraries. Our teaching-learning process thus permits the accomplishment of knowledge assimilation, student motivation and skill development through the use of a continuous evaluation strategy to solve practical and real problems by means of short projects designed using MATLAB applications. Project-based learning is not new. This approach has been used in STEM learning in recent decades. But there are many types of projects. The aim of the current study is to analyse the efficacy of short projects as a learning tool when compared to long projects during which the students work with more independence. This work additionally presents the impact of different types of activities, and not only short projects, on students' overall results in this subject. Moreover, a statistical study has allowed the author to suggest a link between the students' success ratio and the type of content covered and activities completed on the course. The results described in this paper show that those students who took part

  13. One decade of imaging precipitation measurement by 2D-video-distrometer

    Directory of Open Access Journals (Sweden)

    M. Schönhuber

    2007-01-01

    Full Text Available The 2D-Video-Distrometer (2DVD) is a ground-based point-monitoring precipitation gauge. For each particle reaching the measuring area, front and side contours as well as fall velocity and a precise time stamp are recorded. Development of the 2DVD was started in 1991 to clarify discrepancies found when comparing weather radar data analyses with literature models. The instrument was then manufactured in a small-scale series, and the first 2DVD delivery took place in 1996, ten years ago. An overview of present 2DVD features is given, and it is shown how the instrument has been continuously improved over the past ten years. Scientific merits of 2DVD measurements are explained, including drop size readings without upper limit, drop shape and orientation angle information, contours of solid and melting particles, and an independent measurement of particles' fall velocity also in mixed-phase events. Plans for a next-generation instrument are described; through enhanced user-friendliness the unique data type shall be opened to a wider user community.

  14. Sensor Fusion of a Mobile Device to Control and Acquire Videos or Images of Coffee Branches and for Georeferencing Trees

    Directory of Open Access Journals (Sweden)

    Paula Jimena Ramos Giraldo

    2017-04-01

    Full Text Available Smartphones show potential for controlling and monitoring variables in agriculture. Their processing capacity, instrumentation, connectivity, low cost, and accessibility allow farmers (among other users in rural areas) to operate them easily with applications adjusted to their specific needs. In this investigation, the integration of inertial sensors, a GPS, and a camera is presented for the monitoring of a coffee crop. An Android-based application was developed with two operating modes: (i) Navigation: for georeferencing trees, which can be as close as 0.5 m from each other; and (ii) Acquisition: control of video acquisition, based on the movement of the mobile device over a branch, and measurement of image quality, using clarity indexes to select the most appropriate frames for application in future processes. The integration of inertial sensors in navigation mode shows a mean relative error of ±0.15 m and a total error of ±5.15 m. In acquisition mode, the system correctly identifies the beginning and end of mobile-phone movement in 99% of cases, and image quality is determined by means of a sharpness factor which measures blurriness. With the developed system, it will be possible to obtain georeferenced information about coffee trees, such as their production, nutritional state, and presence of pests or diseases.

  15. Detailed in situ laser calibration of the infrared imaging video bolometer for the JT-60U tokamak

    Science.gov (United States)

    Parchamy, H.; Peterson, B. J.; Konoshima, S.; Hayashi, H.; Seo, D. C.; Ashikawa, N.; JT-60U Team

    2006-10-01

    The infrared imaging video bolometer (IRVB) in JT-60U includes a single graphite-coated gold foil with an effective area of 9 × 7 cm² and a thickness of 2.5 μm. The thermal images of the foil resulting from the plasma radiation are provided by an IR camera. The calibration technique of the IRVB gives confidence in the absolute levels of the measured values of the plasma radiation. The in situ calibration is carried out in order to obtain local foil properties such as the thermal diffusivity κ and the product of the thermal conductivity k and the foil thickness t_f. These quantities are necessary for solving the two-dimensional heat diffusion equation of the foil which is used in the experiments. These parameters are determined by comparing the measured temperature profiles (for k·t_f) and their decays (for κ) with the corresponding results of a finite element model using the measured HeNe laser power profile as a known radiation power source. The infrared camera (Indigo/Omega) is calibrated by fitting the temperature rise of a heated plate to the resulting camera data using the Stefan-Boltzmann law.
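
    For reference, the foil power balance that these calibrated parameters (κ and k·t_f) enter is commonly written in IRVB analyses in the following simplified form; this is a standard textbook-style expression reproduced as an assumption, with blackbody and convective loss terms neglected, since the exact form used in the JT-60U analysis is not given in the abstract.

```latex
% Radiated power density absorbed by the foil, expressed through the
% measured foil temperature field T(x, y, t):
P_{\mathrm{rad}}(x,y,t) \;=\; k\, t_f \left[ \frac{1}{\kappa}\,
  \frac{\partial T}{\partial t} \;-\; \nabla^{2} T \right]
```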

  16. Fuzzy-Based Segmentation for Variable Font-Sized Text Extraction from Images/Videos

    Directory of Open Access Journals (Sweden)

    Samabia Tehsin

    2014-01-01

    Full Text Available Textual information embedded in multimedia can provide a vital tool for indexing and retrieval. A lot of work has been done in the field of text localization and detection because of its fundamental importance. One of the biggest challenges of text detection is dealing with variation in font sizes and image resolution. This problem is exacerbated by the undersegmentation or oversegmentation of regions in an image. The paper addresses this problem by proposing a solution using a novel fuzzy-based method. This paper advocates a post-processing segmentation method that can solve the problem of variation in text sizes and image resolution. The methodology is tested on the ICDAR 2011 Robust Reading Challenge dataset, which amply proves the strength of the recommended method.

  17. Assessment of the quality of durum wheat products by spectrofluorometry and fluorescence video image analysis

    Science.gov (United States)

    Novales, Bruno; Abecassis, Joel; Bertrand, Dominique; Devaux, Marie-Francoise; Robert, Paul

    1995-01-01

    Because assessment of Durum wheat semolina purity by the standard ash test has been widely criticized, we attempted to characterize products of a semolina mill by spectrofluorometry and fluorescence imaging. A collection of milled wheat products ranging from very pure semolina to brans were chosen for this study. Multidimensional statistical analyses (principal component analysis) were applied to the spectral and image data. Maps showing a classification of the products according to purity were obtained without biochemical calibration. Principal component regression was applied to the data in order to test the relationship of aleurone fluorescence to ash content. Both spectrofluorometry and fluorescence imaging gave similar results with good determination coefficients (r² = 0.97 and 0.92) for the study of a single wheat variety. Products obtained from different wheat varieties were more difficult to compare.

  18. High-Fidelity Image/Video Processing Technologies

    Institute of Scientific and Technical Information of China (English)

    武筱林

    2012-01-01

    Through years of intensive research and heavy investment in imaging technologies, the spatial, spectral and temporal fidelities of digital images have steadily improved and can now match or even exceed those of traditional film. However, no matter how much sensor technologies advance, new, more exciting and exotic applications will always present themselves that demand even higher image precision. Researchers in medicine, space, engineering and the sciences all have an insatiable desire to image ever more minuscule and subtle details. Users cannot count solely on raw sensor capability to satisfy their needs; there exist hard physical limits on the native fidelity of imaging devices. Therefore, signal processing techniques that algorithmically improve native sensor precision are and will be playing an important role in the fields of image and video processing and computer vision. In this talk, we examine challenging technical problems in the field of high-fidelity image/video processing and review scientific and engineering approaches, both established and emerging, to overcoming these technical challenges.

  19. DHM (Digital Holography Microscope) for imaging cells

    Energy Technology Data Exchange (ETDEWEB)

    Emery, Yves [Lyncee Tec SA, PSE-A, 1015 Lausanne (Switzerland); Cuche, Etienne [Lyncee Tec SA, PSE-A, 1015 Lausanne (Switzerland); Colomb, Tristan [STI-IOA-EPFL, 1015 Lausanne (Switzerland); Depeursinge, Christian [STI-IOA-EPFL, 1015 Lausanne (Switzerland); Rappaz, Benjamin [SV-BM-EPFL, 1015 Lausanne (Switzerland); Marquet, Pierre [CNP-CHUV, Site de Cery, 1008 Prilly (Switzerland); Magistretti, Pierre [SV-BM-EPFL, 1015 Lausanne (Switzerland)

    2007-04-15

    Light interaction with a sample modifies both the intensity and the phase of the illuminating wave. Any available supports for image recording are only sensitive to intensity, but Denis Gabor [D. Gabor, A new microscopic principle, Nature, 1948] invented in 1948 a way to encode the phase as an intensity variation: the "hologram". Digital Holographic Microscopy (DHM) [P. Marquet, B. Rappaz, P. Magistretti, et al., Digital holography for quantitative phase-contrast imaging, Optics Letters, 30, 5, pp 291-93 (2005)] implements this powerful hologram digitally. Characterization of various pollen grains and of morphology changes of neurones associated with hypotonic shock demonstrates the potential of DHM for imaging cells.

  20. Multiparameter fluorescence spectroscopic imaging of cell function

    Science.gov (United States)

    Bright, Gary R.

    1994-08-01

    The ability to quantitate physiological parameters in single living cells using fluorescence spectroscopic imaging has expanded our understanding of many cell regulatory processes. Previous studies have focussed on the measurement of single parameters, such as the concentration of calcium, and more recently two parameters, such as calcium and pH using fluorescence ratio imaging. The complexity of the interrelationships among cell biochemical reactions suggests a need to extend the measurement scheme to several parameters. Expansion of the number of parameters involves several complexities associated with fluorescent probe selection and instrumentation design as well as the processing and management of the data. A system has been assembled which provides maximum flexibility in multiparameter fluorescence imaging measurements. The system provides multiple combinations of excitation, dichroic mirror, and emission wavelengths. It has automatic acquisition of any number of parameters. The number of parameters is primarily limited by the selection of fluorescent probes with nonoverlapping spectra. We demonstrate the utility of the system by the coordinated monitoring of stimulated changes in the concentrations of calcium, magnesium, and pH using fluorescence ratio imaging coupled with a conventional transmitted light image of single smooth muscle cells. The results demonstrate coordinated changes in some instances but uncoordinated changes in others.

  1. Developing a Video Steganography Toolkit

    OpenAIRE

    Ridgway, James; Stannett, Mike

    2014-01-01

    Although techniques for separate image and audio steganography are widely known, relatively little has been described concerning the hiding of information within video streams ("video steganography"). In this paper we review the current state of the art in this field, and describe the key issues we have encountered in developing a practical video steganography system. A supporting video is also available online at http://www.youtube.com/watch?v=YhnlHmZolRM

  2. A Feasibility Study of Smartphone-Based Telesonography for Evaluating Cardiac Dynamic Function and Diagnosing Acute Appendicitis with Control of the Image Quality of the Transmitted Videos.

    Science.gov (United States)

    Kim, Changsun; Cha, Hyunmin; Kang, Bo Seung; Choi, Hyuk Joong; Lim, Tae Ho; Oh, Jaehoon

    2016-06-01

    Our aim was to prove the feasibility of the remote interpretation of real-time transmitted ultrasound videos of dynamic and static organs using a smartphone with control of the image quality given a limited internet connection speed. For this study, 100 cases of echocardiography videos (dynamic organ), 50 with an ejection fraction (EF) of ≥50% and 50 with EF <50%, together with abdominal ultrasonography videos for suspected acute appendicitis (static organ), were reviewed on both an LCD monitor and a smartphone, to which the images were transmitted from the ultrasound machine. The resolution of the transmitted echocardiography videos was reduced by approximately 20 % to increase the frame rate of transmission given the limited internet speed. The differences in diagnostic performance between the two devices when evaluating left ventricular (LV) systolic function by measuring the EF and when evaluating the presence of acute appendicitis were investigated using a five-point Likert scale. The average areas under the receiver operating characteristic curves for each reviewer's interpretations using the LCD monitor and smartphone were respectively 0.968 (0.949-0.986) and 0.963 (0.945-0.982) (P = 0.548) for echocardiography and 0.972 (0.954-0.989) and 0.966 (0.947-0.984) (P = 0.175) for abdominal ultrasonography. We confirmed the feasibility of remotely interpreting ultrasound images using smartphones, specifically for evaluating LV function and diagnosing pediatric acute appendicitis; the images were transferred from the ultrasound machine using image quality-controlled telesonography.

  3. Subjective evaluation of the accuracy of video imaging prediction following orthognathic surgery in Chinese patients

    NARCIS (Netherlands)

    Chew, Ming Tak; Koh, Chay Hui; Sandham, John; Wong, Hwee Bee

    Purpose: The aims of this retrospective study were to assess the subjective accuracy of predictions generated by computer imaging software in Chinese patients who had undergone orthognathic surgery and to determine the influence of initial dysgnathia and complexity of the surgical procedure on the accuracy of the predictions.

  4. Twente Optical Perfusion Camera: system overview and performance for video rate laser Doppler perfusion imaging

    NARCIS (Netherlands)

    M. Draijer; E. Hondebrink; T. van Leeuwen; W. Steenbergen

    2009-01-01

    We present the Twente Optical Perfusion Camera (TOPCam), a novel laser Doppler perfusion imager based on CMOS technology. The tissue under investigation is illuminated and the resulting dynamic speckle pattern is recorded with a high-speed CMOS camera. Based on an overall analysis of the signal-to-noise ratio

  5. Twente Optical Perfusion Camera: system overview and performance for video rate laser Doppler perfusion imaging

    NARCIS (Netherlands)

    Draijer, M.; Hondebrink, E.; van Leeuwen, T.; Steenbergen, W.

    2009-01-01

    We present the Twente Optical Perfusion Camera (TOPCam), a novel laser Doppler perfusion imager based on CMOS technology. The tissue under investigation is illuminated and the resulting dynamic speckle pattern is recorded with a high-speed CMOS camera. Based on an overall analysis of the signal-to-noise ratio

  6. Subjective evaluation of the accuracy of video imaging prediction following orthognathic surgery in Chinese patients

    NARCIS (Netherlands)

    Chew, Ming Tak; Koh, Chay Hui; Sandham, John; Wong, Hwee Bee

    2008-01-01

    Purpose: The aims of this retrospective study were to assess the subjective accuracy of predictions generated by computer imaging software in Chinese patients who had undergone orthognathic surgery and to determine the influence of initial dysgnathia and complexity of the surgical procedure on prediction accuracy.

  7. Video Stream Processors: A Cost-Effective Computational Architecture for Image Processing.

    Science.gov (United States)

    1980-06-01

    …The most sophisticated application of correlation to date on the De Anza was a fast implementation of David Marr's theory of edge detection ["Theory of Edge Detection," AI Memo 518, M.I.T., Cambridge, Massachusetts, 1979].

  8. Dashboard Videos

    Science.gov (United States)

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-01-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his "Lab Out Loud" blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing…

  9. Evaluation of performance of the Omni mode for detecting video capsule endoscopy images: A multicenter randomized controlled trial.

    Science.gov (United States)

    Hosoe, Naoki; Watanabe, Kenji; Miyazaki, Takako; Shimatani, Masaaki; Wakamatsu, Takahiro; Okazaki, Kazuichi; Esaki, Motohiro; Matsumoto, Takayuki; Abe, Takayuki; Kanai, Takanori; Ohtsuka, Kazuo; Watanabe, Mamoru; Ikeda, Keiichi; Tajiri, Hisao; Ohmiya, Naoki; Nakamura, Masanao; Goto, Hidemi; Tsujikawa, Tomoyuki; Ogata, Haruhiko

    2016-08-01

    Olympus recently developed a new algorithm called Omni mode that discards redundant video capsule endoscopy (VCE) images. The current study aimed to demonstrate the non-inferiority of the Omni mode in terms of true positives (TPs) and the superiority of the Omni mode with regard to reading time against a control (ordinary ES-10 system). This multicenter prospective study included 40 patients with various small bowel diseases. VCE images were evaluated by 7 readers and 3 judging committee members. Two randomly allocated readers assessed the VCE images obtained using the 2 modalities for each patient. The order of the modalities was switched between the 2 readers and the interval between readings by the same reader was 2 weeks. The judging committee predefined clinically relevant lesions as major lesions and irrelevant lesions as minor lesions. The number of TPs for major and minor lesions and the reading times were compared between the modalities. The predefined non-inferiority margin for the TP ratio of the Omni mode compared with the control was 0.9. The estimated TP ratios and 95 % confidence intervals for total, major, and minor lesions were 0.87 (0.80 - 0.95), 0.93 (0.83 - 1.04), and 0.83 (0.74 - 0.94), respectively. Although non-inferiority was not demonstrated, the rate of detection of major lesions was not significantly different between the modalities. The reading time was significantly lower when using the Omni mode than when using the control. The Omni mode may be only appropriate for the assessment of major lesions.

  10. A stripe noise removal method for video surveillance images

    Institute of Scientific and Technical Information of China (English)

    武楠; 王珩

    2012-01-01

    For the problem of removing evenly spaced horizontal stripe noise caused by image sensor faults in video images, we propose a stripe noise removal algorithm based on a notch filter. First, the video stripe image is Fourier transformed and an adaptive filter is constructed in the frequency domain using a cumulative distribution function mapping method. The transformed stripe image is filtered with this filter and then inverse Fourier transformed to obtain the restored image. Finally, the restored image is sharpened with a Laplacian operator and the sharpened result is overlaid on the restored image to produce the final denoised image. Experimental results show that the proposed algorithm is well suited to removing evenly spaced horizontal stripe noise caused by image sensor faults.
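
    As a rough illustration of the frequency-domain idea described above, the following NumPy sketch notches the narrow spectral band in which horizontal stripes concentrate their energy. It is a minimal sketch only: the hand-built rectangular notch mask and its parameters stand in for the paper's adaptive, CDF-based filter, and the Laplacian sharpening step is not reproduced here.

```python
import numpy as np

def remove_horizontal_stripes(img, notch_halfwidth=2, keep_dc=3):
    """Suppress evenly spaced horizontal stripes by notching the spectral
    band in which they concentrate (illustrative sketch only)."""
    spectrum = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    rows, cols = img.shape
    cy, cx = rows // 2, cols // 2

    # A pattern that varies only from row to row has its energy in the
    # spectrum column passing through the centre; attenuate that column,
    # except for the lowest frequencies, which carry overall brightness.
    mask = np.ones((rows, cols))
    mask[:, cx - notch_halfwidth:cx + notch_halfwidth + 1] = 0.0
    mask[cy - keep_dc:cy + keep_dc + 1,
         cx - notch_halfwidth:cx + notch_halfwidth + 1] = 1.0

    restored = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
    return restored
```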

  11. Cherenkov radiation dosimetry in water tanks - video rate imaging, tomography and IMRT & VMAT plan verification

    Science.gov (United States)

    Pogue, Brian W.; Glaser, Adam K.; Zhang, Rongxiao; Gladstone, David J.

    2015-01-01

    This paper presents a survey of three types of imaging of radiation beams in water tanks for comparison to dose maps. The first was simple depth and lateral profile verification, showing excellent agreement between Cherenkov and planned dose, as predicted by the treatment planning system for a square 5cm beam. The second approach was 3D tomography of such beams, using a rotating water tank with camera attached, and using filtered backprojection for the recovery of the 3D volume. The final presentation was real time 2D imaging of IMRT or VMAT treatments in a water tank. In all cases the match to the treatment planning system was within what would be considered acceptable for clinical medical physics acceptance.

  12. Guiding synchrotron X-ray diffraction by multimodal video-rate protein crystal imaging

    OpenAIRE

    Newman, Justin A.; ZHANG, Shijie; Sullivan, Shane Z.; Dow, Ximeng Y.; Becker, Michael; Sheedlo, Michael J.; Stepanov, Sergey; Carlsen, Mark S.; Everly, R. Michael; Das, Chittaranjan; Fischetti, Robert F.; Simpson, Garth J.

    2016-01-01

    Synchronous digitization, in which an optical sensor is probed synchronously with the firing of an ultrafast laser, was integrated into an optical imaging station for macromolecular crystal positioning prior to synchrotron X-ray diffraction. Using the synchronous digitization instrument, second-harmonic generation, two-photon-excited fluorescence, one-photon-excited fluorescence, two-photon-excited ultraviolet fluorescence and bright field by laser transmittance were all acquired with perfect...

  13. Imaging Tumor Cell Movement In Vivo

    OpenAIRE

    Entenberg, David; Kedrin, Dmitriy; Wyckoff, Jeffrey; Sahai, Erik; Condeelis, John; Segall, Jeffrey E

    2013-01-01

    This unit describes the methods that we have been developing for analyzing tumor cell motility in mouse and rat models of breast cancer metastasis. Rodents are commonly used both to provide a mammalian system for studying human tumor cells (as xenografts in immunocompromised mice) as well as for following the development of tumors from a specific tissue type in transgenic lines. The Basic Protocol in this unit describes the standard methods used for generation of mammary tumors and imaging th...

  14. Measurement of steady and transient liquid coiling with high-speed video and digital image processing

    Science.gov (United States)

    Mier, Frank Austin; Bhakta, Raj; Castano, Nicolas; Thackrah, Joshua; Marquis, Tyler; Garcia, John; Hargather, Michael

    2016-11-01

    Liquid coiling occurs as a gravitationally-accelerated viscous fluid flows into a stagnant reservoir causing a localized accumulation of settling material, commonly designated as stack. This flow is broadly characterized by a vertical rope of liquid, the tail, flowing into the stack in a coiled motion with frequency defined parametrically within four different flow regimes. These regimes are defined as viscous, gravitational, inertial-gravitational, and inertial. Relations include parameters such as flow rate, drop height, rope radius, gravitational acceleration, and kinematic viscosity. While previous work on the subject includes high speed imaging, only basic and often averaged measurements have been taken by visual inspection of images. Through the implementation of additional image processing routines in MATLAB, time resolved measurements are taken on coiling frequency, tail diameter, stack diameter and height. Synchronization between a high speed camera and stepper motor driven syringe pump provides accurate correlation with flow rate. Additionally, continuous measurement of unsteady transition between flow regimes is visualized and quantified. This capability allows a deeper experimental understanding of processes involved in the liquid coiling phenomenon.
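
    As an illustration of the kind of time-resolved measurement described (not the authors' MATLAB routines), the short Python sketch below estimates the coiling frequency from a per-frame trace of the tail's lateral position, assuming the camera frame rate is known; the function name and inputs are illustrative.

```python
import numpy as np

def coiling_frequency(tail_x, fps):
    """Dominant oscillation frequency (Hz) of a per-frame tail-position
    trace, taken from the peak of its one-sided amplitude spectrum."""
    x = np.asarray(tail_x, dtype=float)
    x = x - x.mean()                       # remove the DC offset
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    return freqs[np.argmax(spec[1:]) + 1]  # skip the zero-frequency bin
```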

  15. Recommendations to quantify villous atrophy in video capsule endoscopy images of celiac disease patients

    Science.gov (United States)

    Ciaccio, Edward J; Bhagat, Govind; Lewis, Suzanne K; Green, Peter H

    2016-01-01

    AIM To quantify the presence of villous atrophy in endoscopic images for improved automation. METHODS There are two main categories of quantitative descriptors helpful to detect villous atrophy: (1) Statistical and (2) Syntactic. Statistical descriptors measure the small intestinal substrate in endoscope-acquired images based on mathematical methods. Texture is the most commonly used statistical descriptor to quantify villous atrophy. Syntactic descriptors comprise a syntax, or set of rules, for analyzing and parsing the substrate into a set of objects with boundaries. The syntax is designed to identify and distinguish three-dimensional structures based on their shape. RESULTS The variance texture statistical descriptor is useful to describe the average variability in image gray level representing villous atrophy, but does not determine the range in variability and the spatial relationships between regions. Improved textural descriptors will incorporate these factors, so that areas with variability gradients and regions that are orientation dependent can be distinguished. The protrusion syntactic descriptor is useful to detect three-dimensional architectural components, but is limited to identifying objects of a certain shape. Improvement in this descriptor will require incorporating flexibility to the prototypical template, so that protrusions of any shape can be detected, measured, and distinguished. CONCLUSION Improved quantitative descriptors of villous atrophy are being developed, which will be useful in detecting subtle, varying patterns of villous atrophy in the small intestinal mucosa of suspected and known celiac disease patients. PMID:27803772
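
    As a concrete illustration of the variance texture descriptor discussed above (a generic formulation, not the authors' implementation), a local gray-level variance map can be computed with two box filters; the window size is an assumed parameter.

```python
import numpy as np
from scipy import ndimage as ndi

def local_variance_map(gray, window=9):
    """Local gray-level variance texture map, E[x^2] - E[x]^2 over a
    square window (simple variance descriptor, illustrative only)."""
    img = np.asarray(gray, dtype=float)
    mean = ndi.uniform_filter(img, size=window)
    mean_sq = ndi.uniform_filter(img ** 2, size=window)
    return mean_sq - mean ** 2
```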

  16. Calcium Imaging of Sonoporation of Mammalian Cells

    Science.gov (United States)

    Sabens, David; Aehle, Matthew; Steyer, Grant; Kourennyi, Dmitri; Deng, Cheri X.

    2006-05-01

    Ultrasound mediated delivery of compounds is a relatively recent development in drug delivery and gene transfection techniques. Due to the lack of methods for real-time monitoring of sonoporation at the cellular level, the efficiency of drug/gene delivery and sonoporation associated side effects, such as the loss of cell viability and enhanced apoptosis, have been studied only through post US exposure analyses, requiring days for cell incubation. Furthermore, because microporation appears to be transient in nature, it was not possible to correlate transfection with microporation on an individual cellular basis. By studying the role of calcium in the cell and using fluorescent calcium imaging to study sonoporation it is possible to quantify both cell porosity and sonoporation side effects. Since both post sonoporation cell survival and delivery efficiency are related to the dynamic process of the cell membrane poration, calcium imaging of sonoporation will provide important knowledge to obtain improved understanding of sonoporation mechanism. Our experimental results demonstrated the feasibility of calcium imaging of sonoporation in Chinese Hamster Ovary (CHO) cells. We have measured the changes in the intracellular calcium concentration using Fura-2, a fluorescent probe, which indicate influx or flow of Calcium across the cell membrane. Analysis of data identified key aspects in the dynamic sonoporation process including the formation of pores in the cell membrane, and the relative temporal duration of the pores and their resealing. These observations are obtained through the analysis of the rate the calcium concentration changes within the cells, making it possible to visualize membrane opening and repair in real-time through such changes in the intracellular calcium concentration.

  17. Computerized video time-lapse (CVTL) analysis of cell death kinetics in human bladder carcinoma cells (EJ30) X-irradiated in different phases of the cell cycle.

    Science.gov (United States)

    Chu, Kenneth; Leonhardt, Edith A; Trinh, Maxine; Prieur-Carrillo, Geraldine; Lindqvist, Johan; Albright, Norman; Ling, C Clifton; Dewey, William C

    2002-12-01

    The purpose of this study was to quantify the modes and kinetics of cell death for EJ30 human bladder carcinoma cells irradiated in different phases of the cell cycle. Asynchronous human bladder carcinoma cells were observed in multiple fields by computerized video time-lapse (CVTL) microscopy for one to two cell divisions before irradiation (6 Gy) and for 6-11 days afterward. By analyzing time-lapse movies collected from these fields, pedigrees were constructed showing the behaviors of 231 cells irradiated in different phases of the cell cycle (i.e. at different times after mitosis). A total of 219 irradiated cells were determined to be non-colony-forming over the time spans of the experiments. In these nonclonogenic pedigrees, cells died primarily by necrosis either without entering mitosis or over 1 to 10 postirradiation generations. A total of 105 giant cells developed from the irradiated cells or their progeny, and 30% (31/105) divided successfully. Most nonclonogenic cells irradiated in mid-S phase (9-12 h after mitosis) died by the second generation, while those irradiated either before or after this short period in mid-S phase had cell deaths occurring over one to nine postirradiation generations. The nonclonogenic cells irradiated in mid-S phase also experienced the longest average delay before their first division. Clonogenic cells (11/12 cells) divided sooner after irradiation than the average nonclonogenic cells derived from the same phase of the cell cycle. The early death and long division delay observed for nonclonogenic cells irradiated in mid-S phase could possibly result from an increase in damage induced during the transition from the replication of euchromatin to the replication of heterochromatin.

  18. A 3-D nonlinear recursive digital filter for video image processing

    Science.gov (United States)

    Bauer, P. H.; Qian, W.

    1991-01-01

    This paper introduces a recursive 3-D nonlinear digital filter, which is capable of performing noise suppression without degrading important image information such as edges in space or time. It also has the property of unnoticeable bandwidth reduction immediately after a scene change, which makes the filter an attractive preprocessor to many interframe compression algorithms. The filter consists of a nonlinear 2-D spatial subfilter and a 1-D temporal filter. In order to achieve the required computational speed and increase the flexibility of the filter, all of the linear shift-variant filter modules are of the IIR type.
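
    A common building block behind such filters is a motion-adaptive first-order temporal recursive stage: small frame-to-frame differences (likely noise) are averaged heavily, while large differences (motion, edges, scene changes) pass through almost unchanged. The sketch below shows only that generic temporal stage, not the authors' full 3-D design; the gain schedule and threshold are assumptions.

```python
import numpy as np

def temporal_iir_denoise(frames, low_gain=0.15, high_gain=1.0, thresh=20.0):
    """Motion-adaptive first-order recursive temporal filter over a stack
    of frames (T, H, W). Generic sketch, not the paper's 3-D filter."""
    frames = np.asarray(frames, dtype=float)
    out = np.empty_like(frames)
    out[0] = frames[0]
    for t in range(1, len(frames)):
        diff = frames[t] - out[t - 1]
        # Blend gain: near low_gain where the change is small (noise),
        # near high_gain where it is large (motion or a scene change).
        k = low_gain + (high_gain - low_gain) * np.clip(np.abs(diff) / thresh, 0, 1)
        out[t] = out[t - 1] + k * diff
    return out
```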

  19. Multisensor fusion in gastroenterology domain through video and echo endoscopic image combination: a challenge

    Science.gov (United States)

    Debon, Renaud; Le Guillou, Clara; Cauvin, Jean-Michel; Solaiman, Basel; Roux, Christian

    2001-08-01

    The medical domain makes intensive use of information fusion. Gastroenterology in particular is a discipline where physicians can choose among several imaging modalities that offer complementary advantages. Among existing systems, videoendoscopy (based on a CCD sensor) and echoendoscopy (based on an ultrasound sensor) are the most effective, and each corresponds to a given step in the physician's diagnostic work-up. Several current efforts aim to achieve automatic interpretation of videoendoscopic sequences; such systems can quantify the color and superficial texture of the digestive tract, but the relief information, which is important for diagnosis, is very difficult to retrieve. On the other hand, studies have shown that 3D information can easily be quantified from echoendoscopy image sequences. Combining this information, acquired from two very different points of view, is therefore a real challenge for medical image fusion. In this paper, after a review of current work on the numerical exploitation of videoendoscopy and echoendoscopy, we discuss how the complementary aspects of the two systems can ease the automatic exploitation of videoendoscopy. We then evaluate the feasibility of a realistic 3D reconstruction based on information from both echoendoscopy (relief) and videoendoscopy (texture), enumerate potential applications of such a fusion system, and conclude this first study with further discussion and perspectives.

  20. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. By far the most informative analog and digital video reference available, it covers the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a one-stop reference guide to the various digital video technologies. The fourth edition is completely updated with all-new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and streaming video (video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats.

  1. Quantifying fish swimming behavior in response to acute exposure of aqueous copper using computer assisted video and digital image analysis

    Science.gov (United States)

    Calfee, Robin D.; Puglis, Holly J.; Little, Edward E.; Brumbaugh, William G.; Mebane, Christopher A.

    2016-01-01

    Behavioral responses of aquatic organisms to environmental contaminants can be precursors of other effects such as survival, growth, or reproduction. However, these responses may be subtle, and measurement can be challenging. Using juvenile white sturgeon (Acipenser transmontanus) with copper exposures, this paper illustrates techniques used for quantifying behavioral responses using computer assisted video and digital image analysis. In previous studies severe impairments in swimming behavior were observed among early life stage white sturgeon during acute and chronic exposures to copper. Sturgeon behavior was rapidly impaired and to the extent that survival in the field would be jeopardized, as fish would be swept downstream, or readily captured by predators. The objectives of this investigation were to illustrate protocols to quantify swimming activity during a series of acute copper exposures to determine time to effect during early lifestage development, and to understand the significance of these responses relative to survival of these vulnerable early lifestage fish. With mortality being on a time continuum, determining when copper first affects swimming ability helps us to understand the implications for population level effects. The techniques used are readily adaptable to experimental designs with other organisms and stressors.

  2. Monitoring stem cells in phase contrast imaging

    Science.gov (United States)

    Lam, K. P.; Dempsey, K. P.; Collins, D. J.; Richardson, J. B.

    2016-04-01

    Understanding the mechanisms behind the proliferation of mesenchymal stem cells (MSCs) can offer greater insight into the behaviour of these cells throughout their life cycles. Traditional methods of determining the rate of MSC differentiation rely on population-based studies over an extended time period. However, such methods can be inadequate because they cannot track cells as they interact; for example, in autologous cell therapies for osteoarthritis, developing biological assays that could predict in vivo functional activity and biological action is particularly challenging. Further research is required to identify non-histochemical biomarkers that correlate cell survival with predicted functional outcome. This paper proposes using a previously developed, advanced texture-based analysis algorithm to facilitate in vitro cell tracking with time-lapse microscopy. The technique was adopted to monitor stem cells in unlabelled, phase contrast imaging, with the goal of examining cell-to-cell interactions in both monoculture and co-culture systems. The results obtained are analysed using established exploratory procedures developed for time series data and compared with the typical fluorescence-based approach of cell labelling. A review of the progress and the lessons learned is also presented.

  3. Video modeling and imaging training on performance of tennis service of 9- to 12-year-old children.

    Science.gov (United States)

    Atienza, F L; Balaguer, I; García-Merita, M L

    1998-10-01

    The purpose of this pilot study was to analyze the effects of 24 weeks of video modeling and imagery training on tennis service performance. Three groups of 9- to 12-yr.-old tennis players participated: (a) a physical practice group, who received physical training only; (b) a physical practice + video group, who received physical training plus video modeling as mental training; and (c) a physical practice + video + imagery group, who received physical training plus video modeling and imagery mental training. Intragroup pre-post-test comparisons showed that tennis performance did not improve significantly for the physical-training-only group, whereas the groups given mental training improved from pre- to postintervention. Posttest comparisons between groups indicated significant differences between the group given physical training only and the groups given mental training, but the latter two did not differ significantly from each other.

  4. Real-Time Image Mosaicing for Multiple Live Video Streams

    Institute of Scientific and Technical Information of China (English)

    N.R. Wanigasekara; 曾勇勤; 严壮志

    2003-01-01

    This paper describes a real-time image mosaicing system that creates panoramas from video streams captured by multiple video devices such as webcams. The implemented system consists of two principal modules: live video capture, and real-time image mosaicing with automatic updating. It uses a multi-resolution, feature-based method for effective image registration, which has many advantages over traditional intensity-based approaches. Experimental results demonstrate the validity of the proposed system, which produces high-quality real-time image mosaics with a significant speedup in computation. The proposed system is suitable for applications such as video surveillance and virtual reality.
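
    As a rough illustration of feature-based registration for mosaicing, the Python/OpenCV sketch below matches ORB keypoints between two frames, estimates a homography with RANSAC, and warps one frame onto the other. It is a minimal sketch only, not the paper's multi-resolution method; the feature type, match count and canvas handling are illustrative assumptions.

```python
import cv2
import numpy as np

def stitch_pair(base, new):
    """Warp `new` into the coordinate frame of `base` using ORB features
    and a RANSAC homography (illustrative sketch, not the paper's method)."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(base, None)
    k2, d2 = orb.detectAndCompute(new, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]

    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = base.shape[:2]
    canvas = cv2.warpPerspective(new, H, (w * 2, h))   # simple fixed-size canvas
    canvas[:h, :w] = base                              # naive overwrite blending
    return canvas
```

    In a live system this step would be repeated for each incoming frame, updating the panorama only when the new frame adds coverage, which is roughly the "automatic updating" behaviour the abstract describes.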

  5. Imaging in haematopoietic stem cell transplantation

    Energy Technology Data Exchange (ETDEWEB)

    Evans, A.; Steward, C.G.; Lyburn, I.D.; Grier, D.J

    2003-03-01

    Haematopoietic stem cell transplantation (SCT) is used to treat a wide range of malignant and non-malignant haematological conditions, solid malignancies, and metabolic and autoimmune diseases. Although imaging has a limited role before SCT, it is important after transplantation when it may support the clinical diagnosis of a variety of complications. It may also be used to monitor the effect of therapy and to detect recurrence of the underlying disease if the transplant is unsuccessful. We present a pictorial review of the imaging of patients who have undergone SCT, based upon 15 years experience in a large unit performing both adult and paediatric transplants.

  6. Power Distortion Optimization for Uncoded Linear Transformed Transmission of Images and Videos.

    Science.gov (United States)

    Xiong, Ruiqin; Zhang, Jian; Wu, Feng; Xu, Jizheng; Gao, Wen

    2017-01-01

    Recently, there is a resurgence of interest in uncoded transmission for wireless visual communication. While conventional coded systems suffer from cliff effect as the channel condition varies dynamically, uncoded linear-transformed transmission (ULT) provides elegant quality degradation for wide channel SNR range. ULT skips non-linear operations, such as quantization and entropy coding. Instead, it utilizes linear decorrelation transform and linear scaling power allocation to achieve optimized transmission. This paper presents a theoretical analysis for power-distortion optimization of ULT. In addition to the observation in our previous work that a decorrelation transform can bring significant performance gain, this paper reveals that exploiting the energy diversity in transformed signal is the key to achieve the full potential of decorrelation transform. In particular, we investigated the efficiency of ULT with exact or inexact signal statistics, highlighting the impact of signal energy modeling accuracy. Based on that, we further proposed two practical energy modeling schemes for ULT of visual signals. Experimental results show that the proposed schemes improve the quality of reconstructed images by 3~5 dB, while reducing the signal modeling overhead from hundreds or thousands of meta data to only a few meta data. The perceptual quality of reconstruction is significantly improved.
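
    For context on how the linear scaling in such schemes is typically derived (a SoftCast-style result, not necessarily the exact formulation of this paper): with transform coefficients grouped into chunks of variance λ_i and each chunk transmitted as g_i·x_i over an additive-noise channel, scaling with gains proportional to λ_i^(-1/4) minimizes the total mean squared error under a sum-power constraint (exactly so for a simple inverse-scaling decoder, and approximately for an LLSE decoder). A minimal NumPy sketch, with the chunk-variance model and function names being assumptions:

```python
import numpy as np

def ult_power_allocation(chunk_vars, total_power):
    """Classic linear-scaling gains g_i ~ lambda_i**(-1/4), normalized so
    that the transmitted power sum(g_i^2 * lambda_i) equals total_power."""
    lam = np.asarray(chunk_vars, dtype=float)
    g = lam ** -0.25
    g *= np.sqrt(total_power / np.sum(g ** 2 * lam))
    return g

def expected_mse(chunk_vars, gains, noise_var):
    """Total LLSE reconstruction error:
    sum_i lambda_i * sigma^2 / (g_i^2 * lambda_i + sigma^2)."""
    lam = np.asarray(chunk_vars, dtype=float)
    return np.sum(lam * noise_var / (gains ** 2 * lam + noise_var))
```

    Coarser energy models roughly correspond to replacing chunk_vars with fewer, averaged values, which is the metadata-versus-accuracy trade-off that the paper's energy-modeling schemes address.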

  7. Power-Distortion Optimization for Uncoded Linear-Transformed Transmission of Images and Videos.

    Science.gov (United States)

    Xiong, Ruiqin; Zhang, Jian; Wu, Feng; Xu, Jizheng; Gao, Wen

    2016-10-26

    Recently there is a resurgence of interest in uncoded transmission for wireless visual communication. While conventional coded systems suffer from cliff effect as the channel condition varies dynamically, uncoded linear-transformed transmission (ULT) provides elegant quality degradation for wide channel SNR range. ULT skips non-linear operations such as quantization and entropy coding. Instead, it utilizes linear decorrelation transform and linear scaling power allocation to achieve optimized transmission. This paper presents a theoretical analysis for power-distortion optimization of ULT. In addition to the observation in our previous work that a decorrelation transform can bring significant performance gain, this work reveals that exploiting the energy diversity in transformed signal is the key to achieve the full potential of decorrelation transform. In particular, we investigated the efficiency of ULT with exact or inexact signal statistics, highlighting the impact of signal energy modeling accuracy. Based on that, we further proposed two practical energy modeling schemes for ULT of visual signals. Experimental results show that the proposed schemes improve the quality of reconstructed images by 3-5 dB, while reducing the signal modeling overhead from hundreds or thousands of meta data to only a few meta data. The perceptual quality of reconstruction is significantly improved.

  8. Molecular imaging of stem cell transplantation for neurodegenerative diseases.

    Science.gov (United States)

    Wang, Ping; Moore, Anna

    2012-01-01

    Cell replacement therapy with stem cells holds tremendous therapeutic potential for treating neurodegenerative diseases. Over the last decade, molecular imaging techniques have proven to be of great value in tracking transplanted cells and assessing the therapeutic efficacy. This current review summarizes the role and capabilities of different molecular imaging modalities including optical imaging, nuclear imaging and magnetic resonance imaging in the field of stem cell therapy for neurodegenerative disorders. We discuss current challenges and perspectives of these techniques and encompass updated information such as theranostic imaging and optogenetics in stem cell-based treatment of neurodegenerative diseases.

  9. Bioorthogonal probes for imaging sterols in cells.

    Science.gov (United States)

    Jao, Cindy Y; Nedelcu, Daniel; Lopez, Lyle V; Samarakoon, Thilani N; Welti, Ruth; Salic, Adrian

    2015-03-01

    Cholesterol is a fundamental lipid component of eukaryotic membranes and a precursor of potent signaling molecules, such as oxysterols and steroid hormones. Cholesterol and oxysterols are also essential for Hedgehog signaling, a pathway critical in embryogenesis and cancer. Despite their importance, the use of imaging sterols in cells is currently very limited. We introduce a robust and versatile method for sterol microscopy based on C19 alkyne cholesterol and oxysterol analogues. These sterol analogues are fully functional; they rescue growth of cholesterol auxotrophic cells and faithfully recapitulate the multiple roles that sterols play in Hedgehog signal transduction. Alkyne sterol analogues incorporate efficiently into cellular membranes and can be imaged with high resolution after copper(I)-catalyzed azide-alkyne cycloaddition reaction with fluorescent azides. We demonstrate the use of alkyne sterol probes for visualizing the subcellular distribution of cholesterol and for two-color imaging of sterols and choline phospholipids. Our imaging strategy should be broadly applicable to studying the role of sterols in normal physiology and disease.

  10. The Effect of the Teacher's Image on Multimedia Video Learning

    Institute of Scientific and Technical Information of China (English)

    郑俊; 赵欢欢; 颜志强; 王福兴; 马征; 张红萍

    2012-01-01

    The image of the teacher is widely used in multimedia instructional videos. Two experiments were conducted to explore the effect of the teacher's image in multimedia video learning, with eye tracking used to record online learning data. In Experiment 1, the multimedia teaching video was divided into a teacher's-image region and a PPT text region. In Experiment 2, the PPT text was replaced by PPT pictures to generalize the results of Experiment 1. The results indicated that the dynamic teacher's-image region attracted the learners' attention more easily and was processed more thoroughly than the text region. When the teaching video contained both the teacher's image and PPT text, the teacher's image appeared to facilitate the processing of the instructional material and led to better learning performance, with fixations on the teacher's image positively correlated with learning outcomes. However, when the teacher's image was presented together with PPT pictures, there was no clear relationship between the teacher's image and learning outcomes.

  11. A new level set model for cell image segmentation

    Science.gov (United States)

    Ma, Jing-Feng; Hou, Kai; Bao, Shang-Lian; Chen, Chun

    2011-02-01

    In this paper we first determine three phases of cell images (background, cytoplasm and nucleolus) according to the general physical characteristics of cell images, and then develop a variational model, based on these characteristics, to segment the nucleolus and cytoplasm from their relatively complicated backgrounds. Information obtained by preprocessing the cell images with the Otsu algorithm is used to initialize the level set function in the model, which speeds up the segmentation and gives satisfactory results in cell image processing.
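
    The initialization step described above can be sketched as follows: an Otsu threshold gives a rough cell/background split, from which a signed-distance function is built to seed the level set. This is a minimal sketch of the initialization only, not the authors' variational model; function names are illustrative.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu

def otsu_initial_level_set(image):
    """Signed-distance initialization for a level set, seeded by an Otsu
    threshold of the cell image (initialization step only)."""
    fg = image > threshold_otsu(image)      # rough cell / background split
    # Positive inside the thresholded cells, negative outside.
    inside = ndi.distance_transform_edt(fg)
    outside = ndi.distance_transform_edt(~fg)
    return inside - outside
```

    The returned function can then be evolved by any standard Chan-Vese-style active contour solver.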

  12. A new level set model for cell image segmentation

    Institute of Scientific and Technical Information of China (English)

    Ma Jing-Feng; Hou Kai; Bao Shang-Lian; Chen Chun

    2011-01-01

    In this paper we first determine three phases of cell images (background, cytoplasm and nucleolus) according to the general physical characteristics of cell images, and then develop a variational model, based on these characteristics, to segment the nucleolus and cytoplasm from their relatively complicated backgrounds. Information obtained by preprocessing the cell images with the Otsu algorithm is used to initialize the level set function in the model, which speeds up the segmentation and gives satisfactory results in cell image processing.

  13. Cell Wall Biology: Perspectives from Cell Wall Imaging

    Institute of Scientific and Technical Information of China (English)

    Kieran J.D.Lee; Susan E.Marcus; J.Paul Knox

    2011-01-01

    Polysaccharide-rich plant cell walls are important biomaterials that underpin plant growth,are major repositories for photosynthetically accumulated carbon,and,in addition,impact greatly on the human use of plants. Land plant cell walls contain in the region of a dozen major polysaccharide structures that are mostly encompassed by cellulose,hemicelluloses,and pectic polysaccharides. During the evolution of land plants,polysaccharide diversification appears to have largely involved structural elaboration and diversification within these polysaccharide groups. Cell wall chemistry is well advanced and a current phase of cell wall science is aimed at placing the complex polysaccharide chemistry in cellular contexts and developing a detailed understanding of cell wall biology. Imaging cell wall glycomes is a challenging area but recent developments in the establishment of cell wall molecular probe panels and their use in high throughput procedures are leading to rapid advances in the molecular understanding of the spatial heterogeneity of individual cell walls and also cell wall differences at taxonomic levels. The challenge now is to integrate this knowledge of cell wall heterogeneity with an understanding of the molecular and physiological mechanisms that underpin cell wall properties and functions.

  14. Current MR imaging of renal cell carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Sae Lin; Sung, Seuk Jae [Dept. of Radiology, Anam Hospital, Korea University College of Medicine, Seoul (Korea, Republic of)

    2016-08-15

    Renal cell carcinoma (RCC) consists of approximately 85-90% of renal masses, and its incidence is increasing due to widespread use of modern imaging modalities such as ultrasonography or computed tomography. Computed tomography has served an important role in the diagnosis and staging of RCC; however, recent advances in magnetic resonance imaging (MRI) techniques have considerably improved our ability to predict tumor biology beyond the morphologic assessment. Multiparametric MRI protocols include standard sequences tailored for the morphologic evaluation and acquisitions that provide information about the tumor microenvironment such as diffusion-weighted imaging and dynamic contrast-enhanced MRI. The role of multiparametric MRI in the evaluation of RCC now extends to preoperative characterization of RCC subtypes, histologic grade, and quantitative assessment of tumor response to targeted therapies in patients with metastatic disease. Herein, the clinical applications and recent advances in MRI applied to RCC are reviewed along with its merits and demerits. We aimed to review MRI techniques and image analysis that can improve the management of patients with RCC. Familiarity with the advanced MRI techniques and various imaging findings of RCC would also facilitate optimal clinical recommendations for patients.

  15. Probabilistic recognition of human faces from video

    DEFF Research Database (Denmark)

    Zhou, Saohua; Krüger, Volker; Chellappa, Rama

    2003-01-01

    Recognition of human faces using a gallery of still or video images and a probe set of videos is systematically investigated using a probabilistic framework. In still-to-video recognition, where the gallery consists of still images, a time series state space model is proposed to fuse temporal...... of the identity variable produces the recognition result. The model formulation is very general and it allows a variety of image representations and transformations. Experimental results using videos collected by NIST/USF and CMU illustrate the effectiveness of this approach for both still-to-video and video...... demonstrate that, due to the propagation of the identity variable over time, a degeneracy in posterior probability of the identity variable is achieved to give improved recognition. The gallery is generalized to videos in order to realize video-to-video recognition. An exemplar-based learning strategy...

  16. Artificial Video for Video Analysis

    Science.gov (United States)

    Gallis, Michael R.

    2010-01-01

    This paper discusses the use of video analysis software and computer-generated animations for student activities. The use of artificial video affords the opportunity for students to study phenomena for which a real video may not be easy or even possible to procure, using analysis software with which the students are already familiar. We will…

  17. Embryonic stem cell biology: insights from molecular imaging.

    Science.gov (United States)

    Sallam, Karim; Wu, Joseph C

    2010-01-01

    Embryonic stem (ES) cells have therapeutic potential in disorders of cellular loss such as myocardial infarction, type I diabetes and neurodegenerative disorders. ES cell biology in living subjects was largely poorly understood until incorporation of molecular imaging into the field. Reporter gene imaging works by integrating a reporter gene into ES cells and using a reporter probe to induce a signal detectable by normal imaging modalities. Reporter gene imaging allows for longitudinal tracking of ES cells within the same host for a prolonged period of time. This has advantages over postmortem immunohistochemistry and traditional imaging modalities. The advantages include expression of reporter gene is limited to viable cells, expression is conserved between generations of dividing cells, and expression can be linked to a specific population of cells. These advantages were especially useful in studying a dynamic cell population such as ES cells and proved useful in elucidating the biology of ES cells. Reporter gene imaging identified poor integration of differentiated ES cells transplanted into host tissue as well as delayed donor cell death as reasons for poor long-term survival in vivo. This imaging technology also confirmed that ES cells indeed have immunogenic properties that factor into cell survival and differentiation. Finally, reporter gene imaging improved our understanding of the neoplastic risk of undifferentiated ES cells in forming teratomas. Despite such advances, much remains to be understood about ES cell biology to translate this technology to the bedside, and reporter gene imaging will certainly play a key role in formulating this understanding.

  18. Trends in Scientific Literature on Addiction to the Internet, Video Games, and Cell Phones from 2006 to 2010

    Science.gov (United States)

    Carbonell, Xavier; Guardiola, Elena; Fuster, Héctor; Gil, Frederic; Panova, Tayana

    2016-01-01

    Background: The goals of the present work were to retrieve the scientific articles published on addiction to the Internet, video games, and cell phones and to analyze the pattern of publications in this area (who is doing the research, when and where it is taking place, and in which journals it is being published), to determine the research being conducted as well as to document geographical trends in publication over time in three types of technological addictions: Internet, cell phones, and video games. Methods: Articles indexed in PubMed and PsycINFO between 2006 and 2010 related to the pathological use of Internet, cell phones, and video games were retrieved. Search results were reviewed to eliminate articles that were not relevant or were duplicates. Results: Three hundred and thirty valid articles were retrieved from PubMed and PsycINFO from 2006 to 2010. Results were compared with those of 1996–2005. The year with the highest number of articles published was 2008 (n = 96). The most productive countries, in terms of number of articles published, were China (n = 67), the United States (n = 56), the United Kingdom (n = 47), and Taiwan (n = 33). The most commonly used language was English (70.3%), followed by Chinese (15.4%). Articles were published in 153 different journals. The journal that published the most articles was Cyberpsychology and Behavior (n = 73), followed by Chinese Journal of Clinical Psychology (n = 27) and International Journal of Mental Health and Addiction (n = 16). Internet was the area most frequently studied, with an increasing interest in other areas such as online video games and cell phones. Conclusions: The number of publications on technological addictions reached a peak in 2008. The scientific contributions of China, Taiwan, and Korea are overrepresented compared to other scientific fields such as drug addiction. The inclusion of Internet Gaming Disorder in the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition

  19. Coded illumination for motion-blur free imaging of cells on cell-phone based imaging flow cytometer

    Science.gov (United States)

    Saxena, Manish; Gorthi, Sai Siva

    2014-10-01

    Cell-phone based imaging flow cytometry can be realized by flowing cells through microfluidic devices and capturing their images with the optically enhanced camera of the cell-phone. Throughput in flow cytometers is usually increased by raising the flow rate of the cells. However, the maximum frame rate of the camera system limits the achievable flow rate; beyond it, the images become highly blurred due to motion smear. We propose to address this issue with coded illumination, which enables recovery of high-fidelity images of cells far beyond their motion-blur limit. This paper presents simulation results of deblurring synthetically generated cell/bead images under such coded illumination.
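
    As background on why coded illumination helps: a conventional box-shaped exposure has spectral nulls, so motion blur cannot be inverted stably, whereas a broadband binary illumination code keeps the blur kernel invertible. The 1-D NumPy simulation below illustrates this general flutter-shutter idea, not the paper's specific method; the code pattern and regularization constant are illustrative assumptions.

```python
import numpy as np

def simulate_and_deblur(signal, code, noise_std=0.01, eps=1e-3):
    """Blur a 1-D intensity profile with an exposure `code` (1 = light on),
    then invert the blur by regularized division in the Fourier domain."""
    n = len(signal)
    kernel = np.zeros(n)
    kernel[:len(code)] = np.asarray(code, dtype=float) / np.sum(code)

    blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))
    blurred += np.random.normal(0.0, noise_std, n)

    K = np.fft.fft(kernel)
    # A broadband code keeps |K| away from zero, so this division stays
    # well conditioned, unlike a plain box exposure of the same length.
    est = np.real(np.fft.ifft(np.fft.fft(blurred) * np.conj(K) / (np.abs(K) ** 2 + eps)))
    return est

# Example comparison: box_code = [1] * 32 versus a pseudo-random binary
# code of the same length, e.g. list(np.random.randint(0, 2, 32)).
```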

  20. Imaging nanoparticles in cells by nanomechanical holography

    Energy Technology Data Exchange (ETDEWEB)

    Tetard, Laurene [ORNL; Passian, Ali [ORNL; Venmar, Katherine T [ORNL; Lynch, Rachel M [ORNL; Voy, Brynn H [ORNL; Shekhawat, Gajendra [Northwestern University, Evanston; Dravid, Vinayak [Northwestern University, Evanston; Thundat, Thomas George [ORNL

    2008-06-01

    Nanomaterials have potential medical applications, for example in the area of drug delivery, and their possible adverse effects and cytotoxicity are curently receiving attention1,2. Inhalation of nanoparticles is of great concern, because nanoparticles can be easily aerosolized. Imaging techniques that can visualize local populations of nanoparticles at nanometre resolution within the structures of cells are therefore important3. Here we show that cells obtained from mice exposed to single-walled carbon nanohorns can be probed using a scanning probe microscopy technique called scanning near field ultrasonic holography. The nanohorns were observed inside the cells, and this was further confirmed using micro Raman spectroscopy. Scanning near field ultrasonic holography is a useful technique for probing the interactions of engineered nanomaterials in biological systems, which will greatly benefit areas in drug delivery and nanotoxicology.

  1. Development of a video image-based QA system for the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system

    Energy Technology Data Exchange (ETDEWEB)

    Ebe, Kazuyu, E-mail: nrr24490@nifty.com; Tokuyama, Katsuichi; Baba, Ryuta; Ogihara, Yoshisada; Ichikawa, Kosuke; Toyama, Joji [Joetsu General Hospital, 616 Daido-Fukuda, Joetsu-shi, Niigata 943-8507 (Japan); Sugimoto, Satoru [Juntendo University Graduate School of Medicine, Bunkyo-ku, Tokyo 113-8421 (Japan); Utsunomiya, Satoru; Kagamu, Hiroshi; Aoyama, Hidefumi [Graduate School of Medical and Dental Sciences, Niigata University, Niigata 951-8510 (Japan); Court, Laurence [The University of Texas MD Anderson Cancer Center, Houston, Texas 77030-4009 (United States)

    2015-08-15

    Purpose: To develop and evaluate a new video image-based QA system, including in-house software, that can display a tracking state visually and quantify the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system. Methods: Sixteen trajectories in six patients with pulmonary cancer were obtained with the ExacTrac in the Vero4DRT system. Motion data in the cranio–caudal direction (Y direction) were used as the input for a programmable motion table (Quasar). A target phantom was placed on the motion table, which was placed on the 2D ionization chamber array (MatriXX). Then, the 4D modeling procedure was performed on the target phantom during a reproduction of the patient’s tumor motion. A substitute target with the patient’s tumor motion was irradiated with 6-MV x-rays under the surrogate infrared system. The 2D dose images obtained from the MatriXX (33 frames/s; 40 s) were exported to in-house video-image analyzing software. The absolute differences in the Y direction between the center of the exposed target and the center of the exposed field were calculated. Positional errors were observed. The authors’ QA results were compared to 4D modeling function errors and gimbal motion errors obtained from log analyses in the ExacTrac to verify the accuracy of their QA system. The patients’ tumor motions were evaluated in the wave forms, and the peak-to-peak distances were also measured to verify their reproducibility. Results: Thirteen of sixteen trajectories (81.3%) were successfully reproduced with Quasar. The peak-to-peak distances ranged from 2.7 to 29.0 mm. Three trajectories (18.7%) were not successfully reproduced due to the limited motions of the Quasar. Thus, 13 of 16 trajectories were summarized. The mean number of video images used for analysis was 1156. The positional errors (absolute mean difference + 2 standard deviation) ranged from 0.54 to 1.55 mm. The error values differed by less than 1 mm from 4D modeling function errors

  2. Video-assisted thoracoscopic surgery versus open lobectomy for primary non-small-cell lung cancer

    DEFF Research Database (Denmark)

    Falcoz, Pierre-Emmanuel; Puyraveau, Marc; Thomas, Pascal-Alexandre

    2016-01-01

    OBJECTIVES: Video-assisted thoracoscopic anatomical resections are increasingly used in Europe to manage primary lung cancer. The purpose of this study was to compare the outcome following thoracoscopic versus open lobectomy in case-matched groups of patients from the European Society of Thoracic Surgeons (ESTS) database. METHODS: All patients having lobectomy as the primary procedure via thoracoscopy [video-assisted thoracoscopic surgery (VATS-L)] or thoracotomy (TH-L) were identified in the ESTS database (January 2007 to December 2013). A propensity score was constructed using several of the patients' baseline characteristics. The matching using the propensity score was responsible for the minimization of selection bias. A propensity score-matched analysis was performed to compare the incidence of postoperative major complications (according to the ESTS database definitions) and mortality at hospital

  3. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  4. Cardiac Sarcoidosis or Giant Cell Myocarditis? On Treatment Improvement of Fulminant Myocarditis as Demonstrated by Cardiovascular Magnetic Resonance Imaging

    Directory of Open Access Journals (Sweden)

    Hari Bogabathina

    2012-01-01

    Giant cell myocarditis, but not cardiac sarcoidosis, is known to cause fulminant myocarditis resulting in severe heart failure. However, giant cell myocarditis and cardiac sarcoidosis are pathologically similar, and attempts at pathological differentiation between the two remain difficult. We present a case of fulminant myocarditis with pathological features suggestive of cardiac sarcoidosis but clinically mimicking giant cell myocarditis. The patient was treated with cyclosporine and prednisone and recovered well. We believe this case challenges our current understanding of these intertwined conditions. By gauging the severity of cardiac involvement via delayed hyperenhancement on cardiac magnetic resonance imaging, we were more inclined to treat this patient as giant cell myocarditis with cyclosporine. This resulted in excellent improvement of the patient's cardiac function, as shown by delayed hyperenhancement images, early perfusion images, and SSFP videos.

  5. Multimodality molecular imaging of stem cells therapy for stroke.

    Science.gov (United States)

    Chao, Fangfang; Shen, Yehua; Zhang, Hong; Tian, Mei

    2013-01-01

    Stem cells have been proposed as a promising therapy for treating stroke. While several studies have demonstrated the therapeutic benefits of stem cells, the exact mechanism remains elusive. Molecular imaging provides the possibility of the visual representation of biological processes at the cellular and molecular level. In order to facilitate research efforts to understand the stem cells therapeutic mechanisms, we need to further develop means of monitoring these cells noninvasively, longitudinally and repeatedly. Because of tissue depth and the blood-brain barrier (BBB), in vivo imaging of stem cells therapy for stroke has unique challenges. In this review, we describe existing methods of tracking transplanted stem cells in vivo, including magnetic resonance imaging (MRI), nuclear medicine imaging, and optical imaging (OI). Each of the imaging techniques has advantages and drawbacks. Finally, we describe multimodality imaging strategies as a more comprehensive and potential method to monitor transplanted stem cells for stroke.

  6. Extracting Text from Video

    Directory of Open Access Journals (Sweden)

    Jayshree Ghorpade

    2011-09-01

    The text data present in images and video contain useful information for automatic annotation, indexing, and structuring of images. However, variations in the text due to differences in style, font, size, orientation and alignment, as well as low image contrast and complex backgrounds, make automatic text extraction an extremely difficult and challenging job. A large number of techniques have been proposed to address this problem, and the purpose of this paper is to design algorithms for each phase of extracting text from a video using Java libraries and classes. First, we frame the input video into a stream of images using the Java Media Framework (JMF), with the input being either a real-time stream or a video from the database. We then apply preprocessing algorithms to convert the images to gray scale and remove disturbances such as lines superimposed over the text, discontinuities, and dots. We then continue with algorithms for localization, segmentation and recognition, for which we use a neural network pattern matching technique. The performance of our approach is demonstrated by presenting experimental results for a set of static images.

  7. EXTRACTING TEXT FROM VIDEO

    Directory of Open Access Journals (Sweden)

    Jayshree Ghorpade

    2011-06-01

    Full Text Available The text data present in images and video contain useful information for automatic annotation, indexing, and structuring of images. However, variations in text style, font, size, orientation and alignment, as well as low image contrast and complex backgrounds, make automatic text extraction an extremely difficult and challenging job. A large number of techniques have been proposed to address this problem, and the purpose of this paper is to design algorithms for each phase of extracting text from a video using Java libraries and classes. First, we frame the input video into a stream of images using the Java Media Framework (JMF), with the input being either a real-time feed or a video from the database. We then apply pre-processing algorithms to convert the images to grey scale and remove disturbances such as lines superimposed over the text, discontinuities and dots. We then continue with the algorithms for localization, segmentation and recognition, for which we use a neural network pattern matching technique. The performance of our approach is demonstrated by presenting experimental results for a set of static images.

  8. Image and Video Processing Based on PyCv and Its Libraries in Linux

    Institute of Scientific and Technical Information of China (English)

    姬争强; 李凌云

    2012-01-01

    Based on PyCv, the Python interface provided by OpenCV, on a framework built with wxPython, and combined with Python's image processing libraries, this project implements image processing, 3D image trajectory tracking and video motion capture functions in a Linux environment. It also provides a number of image processing applications: through simple software operations, users can work not only on each frame of a video but also on single images, including similarity checking, image thinning, relationship-graph generation and image patching. Small utilities for image download, screenshots and file queries are included, together with detailed usage instructions, making the software convenient to use.

  9. A reconsideration of the noise equivalent power and the data analysis procedure for the infrared imaging video bolometers.

    Science.gov (United States)

    Pandya, Shwetang N; Peterson, Byron J; Kobayashi, Masahiro; Pandya, Santosh P; Mukai, Kiyofumi; Sano, Ryuichi

    2014-12-01

    The infrared imaging video bolometer (IRVB) used for measurement of the two-dimensional (2D) radiation profiles from the Large Helical Device has been significantly upgraded recently to improve its signal to noise ratio, sensitivity, and calibration, which ultimately provides quantitative measurements of the radiation from the plasma. The reliability of the quantified data needs to be established by various checks. The noise estimates also need to be revised and more realistic values need to be established. It is shown that the 2D heat diffusion equation can be used for estimating the power falling on the IRVB foil, even with a significant amount of spatial variation in the thermal diffusivity across the area of the platinum foil found experimentally during foil calibration. The equation for the noise equivalent power density (NEPD) is re-derived to include the errors in the measurement of the thermophysical and the optical properties of the IRVB foil. The theoretical value estimated using this newly derived equation matches closely, within 5.5%, with the mean experimental value. The change in the contribution of each error term of the NEPD equation with rising foil temperature is also studied and the blackbody term is found to dominate the other terms at elevated operating temperatures. The IRVB foil is also sensitive to the charge exchange (CX) neutrals escaping from the plasma. The CX neutral contribution is estimated to be marginally higher than the noise equivalent power (NEP) of the IRVB. It is also established that the radiation measured by the IRVB originates from the impurity line radiation from the plasma and not from the heated divertor tiles. The change in the power density due to noise reduction measures such as data smoothing and averaging is found to be comparable to the IRVB NEPD. The precautions that need to be considered during background subtraction are also discussed with experimental illustrations. Finally, the analysis algorithm with all the

  10. Background Detection Parameter Extraction for Video Frame Images with Abrupt Color Changes

    Institute of Scientific and Technical Information of China (English)

    黄荣梅

    2015-01-01

    Extracting moving objects from video frames is a key topic in computer vision research, but background detection in video frames with abrupt color changes is often disturbed by background color interference, so target detection performance suffers. This paper proposes a background detection and parameter extraction algorithm for such frames based on dual background modeling of the video sequence; the background model incorporates a mechanism for handling sudden illumination changes and performs color compensation. The perception probability and statistical probability of abrupt changes within the frame-difference background are computed, the video images are modeled with a moving average, edges are detected with the modulus-maxima method, and a discriminant function for color stability detection based on gray-level variance is obtained. The current frame is subtracted from the background frame, a sliding-average background model is established, and the characteristic parameters of the background difference are extracted. Simulation results show that the algorithm achieves good detection performance when the background changes, for example under sudden illumination variation or a sudden increase in crowd density, and that it handles image smoothing under illumination changes well.
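
    A minimal sketch of the sliding-average background model and frame-difference step named above, assuming OpenCV; the learning rate, threshold and file name are illustrative, and the illumination-compensation and modulus-maxima edge-detection stages of the paper are not reproduced.

        import cv2
        import numpy as np

        cap = cv2.VideoCapture("scene.avi")                     # placeholder file name
        ok, frame = cap.read()
        background = np.float32(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)) if ok else None
        while ok:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            cv2.accumulateWeighted(gray, background, 0.05)      # sliding-average update
            diff = cv2.absdiff(gray, cv2.convertScaleAbs(background))
            _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
            # 'mask' marks pixels whose intensity changed abruptly against the background
            ok, frame = cap.read()
        cap.release()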

  11. Practical video indexing and retrieval system

    Science.gov (United States)

    Liang, Yiqing; Wolf, Wayne H.; Liu, Bede; Huang, Jeffrey R.

    1998-03-01

    We integrated a practical digital video database system based on language and image analysis with components from digital video processing, still image search, information retrieval, closed captioning processing. The attempt is to utilize the multiple modalities of information in video and implement data fusion among the multiple modalities; image information, speech/dialog information, closed captioning information, sound track information such as music, gunfire, explosion, caption information, motion information, temporal information. Effort is made to allow access video contents at different levels including video program level, scene level, shot level, and object level. Approaches of browsing, subject-based classification, and random retrieving are available to gain access to the contents.

  12. Innovative Video Diagnostic Equipment for Material Science

    Science.gov (United States)

    Capuano, G.; Titomanlio, D.; Soellner, W.; Seidel, A.

    2012-01-01

    Materials science experiments under microgravity increasingly rely on advanced optical systems to determine the physical properties of the samples under investigation. This includes video systems with high spatial and temporal resolution. The acquisition, handling, storage and transmission to ground of the resulting video data are very challenging. Since the available downlink data rate is limited, the capability to compress the video data significantly without compromising the data quality is essential. We report on the development of a Digital Video System (DVS) for EML (Electro Magnetic Levitator) which provides real-time video acquisition, high compression using advanced Wavelet algorithms, storage and transmission of a continuous flow of video with different characteristics in terms of image dimensions and frame rates. The DVS is able to operate with the latest generation of high-performance cameras acquiring high resolution video images up to 4Mpixels@60 fps or high frame rate video images up to about 1000 fps@512x512pixels.

  13. SIFT Key-point Self-adaptive Extraction Algorithm for Video Images

    Institute of Scientific and Technical Information of China (English)

    余宏生; 金伟其

    2013-01-01

    Before video frames can be matched with the Scale-Invariant Feature Transform (SIFT) algorithm, key-points must first be extracted. If the size or characteristics of the input images change, the gray-level threshold for key-point extraction must be reset to avoid excessive computation or registration failure. In this paper, a self-adaptive SIFT key-point extraction algorithm for video images is developed. The algorithm sets an appropriate gray-level threshold automatically by feeding the parameter of the previous frame back to the current frame, so that the number of key-points extracted from the current frame stays close to an expected value. Experiments show that, when the input image changes, the number of key-points per video frame is kept near the expected value by this self-adaptive thresholding. The method enables digital video images to be registered adaptively with the SIFT algorithm while keeping the number of feature points stable, reducing computational cost and avoiding registration failure.
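
    A minimal sketch of the feedback idea described above: the key-point threshold for the current frame is adjusted from the key-point count of the previous frame so that the number of key-points stays near an expected value. It uses OpenCV's SIFT contrast threshold as the tunable parameter; the target count, feedback gain and file name are assumptions rather than the paper's values.

        import cv2

        TARGET = 500                  # desired key-points per frame (assumed)
        threshold = 0.04              # initial SIFT contrast threshold (OpenCV default)

        cap = cv2.VideoCapture("sequence.avi")                  # placeholder file name
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            sift = cv2.SIFT_create(contrastThreshold=threshold)
            keypoints = sift.detect(gray, None)
            # proportional feedback: too many key-points -> raise the threshold
            error = (len(keypoints) - TARGET) / float(TARGET)
            threshold = max(0.005, threshold * (1.0 + 0.2 * error))
        cap.release()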

  14. Video Histories, Memories, and Coincidences

    DEFF Research Database (Denmark)

    Kacunko, Slavko

    2012-01-01

    Looping images allows us to notice things that we have never noticed before. Looping a small but exquisite selection of the video tapes of Marcel Odenbach, Dieter Kiessling and Matthias Neuenhofer may allow the discovery of Histories, Coincidences, and Infinitesimal Aesthetics inscribed into the Video medium as its unsurpassed topicality.

  15. Pilot study on real-time motion detection in UAS video data by human observer and image exploitation algorithm

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Voit, Michael; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2017-05-01

    Real-time motion video analysis is a challenging and exhausting task for the human observer, particularly in safety and security critical domains. Hence, customized video analysis systems providing functions for the analysis of subtasks like motion detection or target tracking are welcome. While such automated algorithms relieve the human operators from performing basic subtasks, they impose additional interaction duties on them. Prior work shows that, e.g., for interaction with target tracking algorithms, a gaze-enhanced user interface is beneficial. In this contribution, we present an investigation on interaction with an independent motion detection (IDM) algorithm. Besides identifying an appropriate interaction technique for the user interface (again, we compare gaze-based and traditional mouse-based interaction), we focus on the benefit an IDM algorithm might provide for a UAS video analyst. In a pilot study, we exposed ten subjects to the task of moving target detection in UAS video data twice, once performing with automatic support, once performing without it. We compare the two conditions considering performance in terms of effectiveness (correct target selections). Additionally, we report perceived workload (measured using the NASA-TLX questionnaire) and user satisfaction (measured using the ISO 9241-411 questionnaire). The results show that a combination of gaze input and an automated IDM algorithm provides valuable support for the human observer, increasing the number of correct target selections by up to 62% and reducing workload at the same time.

  16. A comparison of the quality of image acquisition between the incident dark field and sidestream dark field video-microscopes

    NARCIS (Netherlands)

    E. Gilbert-Kawai; J. Coppel (Jonny); V. Bountziouka (Vassiliki); C. Ince (Can); D. Martin (Daniel)

    2016-01-01

    Background: The ‘Cytocam’ is a third-generation video-microscope which enables real-time visualisation of the in vivo microcirculation. Based upon the principle of incident dark field (IDF) illumination, this hand-held, computer-controlled device was designed to address the technical lim

  17. The research forefront of rain removal for videos and images

    Institute of Scientific and Technical Information of China (English)

    徐波; 朱青松; 熊艳海

    2015-01-01

    Outdoor vision systems are increasingly widely used in military, transportation and security surveillance applications, but bad weather severely degrades their performance. Rain is one of the most frequent kinds of adverse weather and seriously impairs video image quality, so detecting and removing raindrops from videos and images is essential for an all-weather outdoor vision system. Rain removal techniques not only restore rain-affected videos and images, but also benefit their further processing, improving the performance of computer vision algorithms for object detection, recognition, tracking, segmentation and video surveillance. To remove raindrops from videos and images, the imaging behaviour of rain must first be analysed, studying the geometric, photometric, chromatic and spatio-temporal characteristics of raindrops; raindrops are then detected on the basis of these characteristics and finally removed so that the video images can be restored. This paper reviews current rain removal techniques, starting from the characteristics of raindrops in videos and images, and describes the various classes of rain removal algorithms together with their advantages and disadvantages. For completeness, representative algorithms are analysed both quantitatively and qualitatively. Finally, open problems in this research area are summarised and future research directions are discussed.

  18. Parameter estimation method for blurred cell images from fluorescence microscope

    Science.gov (United States)

    He, Fuyun; Zhang, Zhisheng; Luo, Xiaoshu; Zhao, Shulin

    2016-10-01

    Microscopic cell image analysis is indispensable to cell biology. Images of cells can easily degrade due to optical diffraction or focus shift, resulting in a low signal-to-noise ratio (SNR) and poor image quality, which affects the accuracy of cell analysis and identification. For a quantitative analysis of cell images, restoring blurred images to improve the SNR is the first step. A parameter estimation method for defocused microscopic cell images based on the power-law properties of the power spectrum of cell images is proposed. The circular radon transform (CRT) is used to identify the zero-mode of the power spectrum. The parameter of the CRT curve is initially estimated by an improved differential evolution algorithm, and the parameters are then optimized through the gradient descent method. Synthetic experiments confirmed that the proposed method effectively increased the peak SNR (PSNR) of the recovered images with high accuracy. Furthermore, experimental results on actual microscopic cell images verified the superiority of the proposed parameter estimation method over other methods, both in qualitative visual quality and in quantitative gradient and PSNR measures.

  19. CHARACTER RECOGNITION OF VIDEO SUBTITLES

    Directory of Open Access Journals (Sweden)

    Satish S Hiremath

    2016-11-01

    Full Text Available An important task in content-based video indexing is to extract text information from videos. The challenges involved in text extraction and recognition are variation of illumination across the video frames containing text, text appearing on complex backgrounds, and differing font sizes. Using various image processing algorithms such as morphological operations, blob detection and histograms of oriented gradients, character recognition of video subtitles is implemented. Segmentation, feature extraction and classification are the major steps of character recognition. Several experimental results are shown to demonstrate the performance of the proposed algorithm.
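
    A minimal sketch of the feature-extraction and classification stages (HOG descriptors feeding a simple classifier). The scikit-learn digits set stands in for segmented subtitle characters, and the K-nearest-neighbour classifier is a placeholder for whatever classifier the authors actually used.

        import numpy as np
        from skimage.feature import hog
        from sklearn.datasets import load_digits
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier

        digits = load_digits()                                  # stand-in for segmented characters
        features = np.array([hog(img, orientations=9, pixels_per_cell=(4, 4),
                                 cells_per_block=(1, 1)) for img in digits.images])

        X_tr, X_te, y_tr, y_te = train_test_split(features, digits.target, random_state=0)
        clf = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
        print("held-out recognition accuracy:", clf.score(X_te, y_te))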

  20. Employing temporal information for cell segmentation using max-flow/min-cut in phase-contrast video microscopy.

    Science.gov (United States)

    Massoudi, Amir; Sowmya, Arcot; Mele, Katarina; Semenovich, Dimitri

    2011-01-01

    Cell segmentation is a crucial step in many bio-medical image analysis applications and it can be considered as an important part of a tracking system. Segmentation in phase-contrast images is a challenging task since in this imaging technique, the background intensity is approximately similar to the cell pixel intensity. In this paper we propose an interactive automatic pixel level segmentation algorithm, that uses temporal information to improve the segmentation result. This algorithm is based on the max-flow/min-cut algorithm and can be solved in polynomial time. This method is not restricted to any specific cell shape and segments cells of various shapes and sizes. The results of the proposed algorithm show that using the temporal information does improve segmentation considerably.

  1. Video processing project

    CSIR Research Space (South Africa)

    Globisch, R

    2009-03-01

    Full Text Available Video processing source code for algorithms and tools used in software media pipelines (e.g. image scalers, colour converters, etc.). The currently available source code is written in C++ with its associated libraries and DirectShow filters.

  2. Tiny videos: a large data set for nonparametric video retrieval and frame classification.

    Science.gov (United States)

    Karpenko, Alexandre; Aarabi, Parham

    2011-03-01

    In this paper, we present a large database of over 50,000 user-labeled videos collected from YouTube. We develop a compact representation called "tiny videos" that achieves high video compression rates while retaining the overall visual appearance of the video as it varies over time. We show that frame sampling using affinity propagation (an exemplar-based clustering algorithm) achieves the best trade-off between compression and video recall. We use this large collection of user-labeled videos in conjunction with simple data mining techniques to perform related video retrieval, as well as classification of images and video frames. The classification results achieved by tiny videos are compared with the tiny images framework [24] for a variety of recognition tasks. The tiny images data set consists of 80 million images collected from the Internet. These are the largest labeled research data sets of videos and images available to date. We show that tiny videos are better suited for classifying scenery and sports activities, while tiny images perform better at recognizing objects. Furthermore, we demonstrate that combining the tiny images and tiny videos data sets improves classification precision in a wider range of categories.
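
    A minimal sketch of exemplar-based frame sampling with affinity propagation, the clustering step highlighted above: each frame is reduced to a small colour thumbnail and the cluster exemplars are kept as representative frames. The thumbnail size and file name are assumptions, and this is not the authors' full tiny-videos pipeline.

        import cv2
        import numpy as np
        from sklearn.cluster import AffinityPropagation

        cap = cv2.VideoCapture("clip.avi")                      # placeholder short clip
        thumbs = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            thumbs.append(cv2.resize(frame, (16, 16)).astype(np.float32).ravel())
        cap.release()

        if thumbs:
            ap = AffinityPropagation(random_state=0).fit(np.array(thumbs))
            exemplars = ap.cluster_centers_indices_             # indices of retained frames
            print("kept", len(exemplars), "of", len(thumbs), "frames")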

  3. Computationally efficient video restoration for Nyquist sampled imaging sensors combining an affine-motion-based temporal Kalman filter and adaptive Wiener filter.

    Science.gov (United States)

    Rucci, Michael; Hardie, Russell C; Barnard, Kenneth J

    2014-05-01

    In this paper, we present a computationally efficient video restoration algorithm to address both blur and noise for a Nyquist sampled imaging system. The proposed method utilizes a temporal Kalman filter followed by a correlation-model based spatial adaptive Wiener filter (AWF). The Kalman filter employs an affine background motion model and novel process-noise variance estimate. We also propose and demonstrate a new multidelay temporal Kalman filter designed to more robustly treat local motion. The AWF is a spatial operation that performs deconvolution and adapts to the spatially varying residual noise left in the Kalman filter stage. In image areas where the temporal Kalman filter is able to provide significant noise reduction, the AWF can be aggressive in its deconvolution. In other areas, where less noise reduction is achieved with the Kalman filter, the AWF balances the deconvolution with spatial noise reduction. In this way, the Kalman filter and AWF work together effectively, but without the computational burden of full joint spatiotemporal processing. We also propose a novel hybrid system that combines a temporal Kalman filter and BM3D processing. To illustrate the efficacy of the proposed methods, we test the algorithms on both simulated imagery and video collected with a visible camera.
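
    A heavily simplified sketch of the two-stage idea only: temporal recursive noise reduction followed by a spatial Wiener-type filter. It omits the affine background motion model, the multidelay Kalman filter and the correlation-model AWF of the paper; the blending weight, window size and file name are assumptions.

        import cv2
        import numpy as np
        from scipy.signal import wiener

        cap = cv2.VideoCapture("noisy.avi")                     # placeholder file name
        state, restored = None, []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64)
            # temporal stage: simple recursive (scalar-gain) averaging
            state = gray if state is None else 0.8 * state + 0.2 * gray
            # spatial stage: Wiener filter adapts to the residual noise
            restored.append(wiener(state, mysize=5))
        cap.release()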

  4. Application of OpenCV-Based Image Preprocessing Technology in Unmanned Aircraft Video

    Institute of Scientific and Technical Information of China (English)

    吴川平; 黄文恺; 伍冯洁; 张雯雯; 梁俊杰

    2015-01-01

    OpenCV (Open Source Computer Vision) is an open-source function library for digital image processing and computer vision. Starting from the application of image processing to unmanned aerial vehicle (UAV) video, this paper proposes a preprocessing method that applies Gaussian filtering, bilateral smoothing, image gain adjustment and image fusion operations to UAV footage. By operating on the video frame images, the method addresses problems such as jitter and low resolution in UAV video and effectively reduces the influence of ambient light and noise.
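
    A minimal sketch of the pre-processing chain named above (Gaussian filtering, bilateral smoothing, gain adjustment and a simple weighted fusion of the results), assuming OpenCV-Python; kernel sizes, sigmas, gain and fusion weights are illustrative, not the paper's values.

        import cv2

        cap = cv2.VideoCapture("uav.mp4")                       # placeholder file name
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gauss = cv2.GaussianBlur(frame, (5, 5), 1.5)              # Gaussian noise suppression
            bilat = cv2.bilateralFilter(frame, 9, 75, 75)             # edge-preserving smoothing
            gained = cv2.convertScaleAbs(bilat, alpha=1.2, beta=10)   # simple gain/offset
            fused = cv2.addWeighted(gauss, 0.5, gained, 0.5, 0)       # image fusion step
        cap.release()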

  5. A correction method for the distortion of video logging images

    Institute of Scientific and Technical Information of China (English)

    胡宏涛; 张静娜; 李周利

    2013-01-01

    An image distortion correction method based on spatial coordinate transformation is proposed according to the acquisition principle of lateral multi-lens video logging images. A geometric distortion model is first established from the mapping between control-point pairs in the distorted and corrected images, the distorted image is then corrected with a polynomial fitting formula, and finally the grey levels of the image are rebuilt by bilinear interpolation. In simulation experiments the distorted logging image was corrected with both direct and indirect spatial coordinate transformation methods. The results show that the image processed with the indirect correction method is more accurate and has smoother transitions between pixels than with the direct method, and that its accuracy satisfies the requirements of logging images.
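
    A minimal sketch of the indirect correction idea under stated assumptions: a second-order polynomial transform is estimated from control-point pairs and the image is resampled with bilinear interpolation. It uses scikit-image rather than the authors' implementation, and the control points and test image are synthetic placeholders.

        import numpy as np
        from skimage import data, transform

        # where features should sit in the corrected image (ref) and where they
        # actually appear in the distorted logging image (obs) -- synthetic values
        ref = np.array([[30, 40], [250, 35], [470, 50], [40, 250],
                        [460, 260], [35, 470], [255, 465], [475, 480]], float)
        obs = np.array([[33, 44], [248, 33], [467, 53], [44, 252],
                        [457, 263], [39, 473], [252, 462], [471, 477]], float)

        # 2nd-order polynomial mapping corrected -> distorted coordinates;
        # warp() resamples using exactly this "inverse" direction
        tform = transform.estimate_transform('polynomial', ref, obs, order=2)

        distorted = data.camera()                               # stand-in for a logging image
        corrected = transform.warp(distorted, tform, order=1)   # order=1: bilinear grey-level rebuild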

  6. Quantitative imaging of epithelial cell scattering identifies specific inhibitors of cell motility and cell-cell dissociation

    NARCIS (Netherlands)

    Loerke, D.; le Duc, Q.; Blonk, I.; Kerstens, A.; Spanjaard, E.; Machacek, M.; Danuser, G.; de Rooij, J.

    2012-01-01

    The scattering of cultured epithelial cells in response to hepatocyte growth factor (HGF) is a model system that recapitulates key features of metastatic cell behavior in vitro, including disruption of cell-cell adhesions and induction of cell migration. We have developed image analysis tools that

  7. Satellite Video Stabilization with Geometric Distortion

    Directory of Open Access Journals (Sweden)

    WANG Xia

    2016-02-01

    Full Text Available Each satellite video frame has a different exterior orientation, so corresponding points fall at different image locations in adjacent frames and exhibit geometric distortion. As a result, the projection model, the affine model and other classical image-stabilization registration models cannot accurately describe the relationship between adjacent frames. This paper proposes a new satellite video stabilization method that accounts for geometric distortion to solve this problem; using simulated satellite video, we verify the feasibility and accuracy of the proposed stabilization method.

  8. Sports Video Segmentation using Spectral Clustering

    Directory of Open Access Journals (Sweden)

    Xiaohong Zhao

    2014-07-01

    Full Text Available With the rapid development of computer and multimedia technology, video processing techniques are being applied in the field of sports to analyze sport video. For sports video analysis, how to segment the sports video image has become an important research topic. Current algorithms for video image segmentation mainly include neural networks, K-means and so on; however, their accuracy and speed for moving-object segmentation are unsatisfactory, and they are easily affected by irregular object motion, illumination and other factors. In view of this, this paper proposes an algorithm for object segmentation in sports video image sequences based on spectral clustering. The algorithm simultaneously considers pixel-level visual features and the edge information of neighboring pixels, so that the similarity computation is more intuitive and is not affected by factors such as image texture. When clustering the image features, the proposed method (1) preprocesses the video image sequence and extracts image features; (2) builds and computes the similarity matrix between pixels using a weight function; (3) extracts the feature vectors; and (4) performs spectral clustering to segment the sports video image. The experimental results indicate that the proposed method has advantages such as lower complexity, high computational effectiveness and a small computational load, and that it obtains better extraction results on video images.
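
    A minimal sketch of spectral clustering on per-pixel features (colour plus position) for a single, heavily down-sampled frame. The paper's similarity measure additionally uses neighbouring-pixel edge information; this sketch falls back on scikit-learn's default RBF affinity, and the sample image and cluster count are assumptions.

        import numpy as np
        from sklearn.cluster import SpectralClustering
        from skimage import data, transform

        frame = transform.resize(data.astronaut(), (40, 40), anti_aliasing=True)
        h, w, _ = frame.shape
        ys, xs = np.mgrid[0:h, 0:w]
        features = np.column_stack([frame.reshape(-1, 3),                 # colour features
                                    xs.ravel() / w, ys.ravel() / h])      # spatial features

        labels = SpectralClustering(n_clusters=4, random_state=0,
                                    assign_labels='kmeans').fit_predict(features)
        segmentation = labels.reshape(h, w)                               # per-pixel segment labels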

  9. Increased micronucleated cell frequency related to exposure to radiation emitted by computer cathode ray tube video display monitors

    Directory of Open Access Journals (Sweden)

    Carbonari Karina

    2005-01-01

    Full Text Available It is well recognized that electromagnetic fields can affect the biological functions of living organisms at both the cellular and molecular level. The potential damaging effects of electromagnetic fields and of very low frequency and extremely low frequency radiation emitted by computer cathode ray tube video display monitors (VDMs) have become a concern within the scientific community. We studied the effects of occupational exposure to VDMs in 10 males and 10 females occupationally exposed to VDMs and 20 unexposed control subjects matched for age and sex. Genetic damage was assessed by examining the frequency of micronuclei in exfoliated buccal cells and the frequency of other nuclear abnormalities such as binucleated and broken egg cells. Although there were no differences regarding binucleated cells between exposed and control individuals, our analysis revealed a significantly higher frequency of micronuclei (p < 0.001) and broken egg cells (p < 0.05) in individuals exposed to VDMs as compared to unexposed controls. We also found that the differences between individuals exposed to VDMs were significantly related to the sex of the individuals, and that there was an increase in skin, central nervous system and ocular disease in the exposed individuals. These preliminary results indicate that microcomputer workers exposed to VDMs are at risk of significant cytogenetic damage and should periodically undergo biological monitoring.

  10. Live cell imaging to understand monocyte, macrophage, and dendritic cell function in atherosclerosis.

    Science.gov (United States)

    McArdle, Sara; Mikulski, Zbigniew; Ley, Klaus

    2016-06-27

    Intravital imaging is an invaluable tool for understanding the function of cells in healthy and diseased tissues. It provides a window into dynamic processes that cannot be studied by other techniques. This review will cover the benefits and limitations of various techniques for labeling and imaging myeloid cells, with a special focus on imaging cells in atherosclerotic arteries. Although intravital imaging is a powerful tool for understanding cell function, it alone does not provide a complete picture of the cell. Other techniques, such as flow cytometry and transcriptomics, must be combined with intravital imaging to fully understand a cell's phenotype, lineage, and function.

  11. Video Retrieval Based on Text and Image

    Directory of Open Access Journals (Sweden)

    Rahmi Hidayati

    2013-01-01

    Abstract Video retrieval is used to search for a video based on a query entered by the user, which may be text or an image. Such a system can improve search capability when browsing videos and is expected to reduce video retrieval time. The purpose of this research was to design and build a software application for video retrieval based on the text and images in the video. Text indexing consists of tokenizing, filtering (stopword removal) and stemming; the stemming results are saved in a text index table. Image indexing builds a color histogram for each image and computes the mean and standard deviation of each primary color, red, green and blue (RGB); the extracted features are stored in an image table. Video retrieval can use a text query, an image query, or both. For a text query, the system looks up the text index table and, if the query is found, displays the video information matching that query. For an image query, the system computes the six extracted features of the query image (the means and standard deviations of red, green and blue) and, if these values match entries in the image index table, displays the corresponding video information. For a combined text-and-image query, the system displays the video information only when the text query and the image query are related, that is, when they refer to the same film title. Keywords: video, index, retrieval, text, image
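
    A minimal sketch of the image-indexing step described above: the per-channel mean and standard deviation of red, green and blue are computed for each key frame, and queries are ranked by distance to these six features. File names are placeholders and the ranking rule is an assumption.

        import cv2
        import numpy as np

        def rgb_mean_std(image_bgr):
            """Six features: mean and standard deviation of R, G and B."""
            b, g, r = cv2.split(image_bgr.astype(np.float64))
            return np.array([r.mean(), g.mean(), b.mean(), r.std(), g.std(), b.std()])

        index = {}
        for name in ["frame_001.png", "frame_002.png"]:         # placeholder key frames
            img = cv2.imread(name)
            if img is not None:
                index[name] = rgb_mean_std(img)

        def query_by_image(image_bgr, index, top_k=5):
            """Rank indexed key frames by distance to the query image's six features."""
            q = rgb_mean_std(image_bgr)
            return sorted(index, key=lambda n: np.linalg.norm(index[n] - q))[:top_k]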

  12. Video Instrumentation And Procedures For Data Analysis

    Science.gov (United States)

    Keller, Patrick N.

    1982-02-01

    Video systems can be configured to measure position, size, attitude, brightness, and color of objects including objects in high speed events. The measurements may be time correlated or images from several sources (perhaps widely separated) may be correlated directly by image splitting techniques. The composition and specifications of the video system will vary considerably depending on the parameters measured and the accuracy desired. The basis of making the above measurements, using video, are presented in a format to guide practitioners in applying video as a measuring tool. Topics include relative vs. absolute measurements, scales and references, data insertion and retrieval, human factors, and video digitization.

  13. DSPACE hardware architecture for on-board real-time image/video processing in European space missions

    Science.gov (United States)

    Saponara, Sergio; Donati, Massimiliano; Fanucci, Luca; Odendahl, Maximilian; Leupers, Reiner; Errico, Walter

    2013-02-01

    The on-board data processing is a vital task for any satellite and spacecraft due to the importance of elaborate the sensing data before sending them to the Earth, in order to exploit effectively the bandwidth to the ground station. In the last years the amount of sensing data collected by scientific and commercial space missions has increased significantly, while the available downlink bandwidth is comparatively stable. The increasing demand of on-board real-time processing capabilities represents one of the critical issues in forthcoming European missions. Faster and faster signal and image processing algorithms are required to accomplish planetary observation, surveillance, Synthetic Aperture Radar imaging and telecommunications. The only available space-qualified Digital Signal Processor (DSP) free of International Traffic in Arms Regulations (ITAR) restrictions faces inadequate performance, thus the development of a next generation European DSP is well known to the space community. The DSPACE space-qualified DSP architecture fills the gap between the computational requirements and the available devices. It leverages a pipelined and massively parallel core based on the Very Long Instruction Word (VLIW) paradigm, with 64 registers and 8 operational units, along with cache memories, memory controllers and SpaceWire interfaces. Both the synthesizable VHDL and the software development tools are generated from the LISA high-level model. A Xilinx-XC7K325T FPGA is chosen to realize a compact PCI demonstrator board. Finally first synthesis results on CMOS standard cell technology (ASIC 180 nm) show an area of around 380 kgates and a peak performance of 1000 MIPS and 750 MFLOPS at 125MHz.

  14. Label-free classification of cultured cells through diffraction imaging.

    Science.gov (United States)

    Dong, Ke; Feng, Yuanming; Jacobs, Kenneth M; Lu, Jun Q; Brock, R Scott; Yang, Li V; Bertrand, Fred E; Farwell, Mary A; Hu, Xin-Hua

    2011-06-01

    Automated classification of biological cells according to their 3D morphology is highly desired in a flow cytometer setting. We have investigated this possibility experimentally and numerically using a diffraction imaging approach. A fast image analysis software based on the gray level co-occurrence matrix (GLCM) algorithm has been developed to extract feature parameters from measured diffraction images. The results of GLCM analysis and subsequent classification demonstrate the potential for rapid classification among six types of cultured cells. Combined with numerical results we show that the method of diffraction imaging flow cytometry has the capacity as a platform for high-throughput and label-free classification of biological cells.
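
    A minimal sketch of GLCM texture-feature extraction of the kind the record's classifier builds on, assuming scikit-image 0.19 or later; the sample image is not a diffraction image and the distance/angle choices are illustrative.

        import numpy as np
        from skimage import data
        from skimage.feature import graycomatrix, graycoprops   # scikit-image >= 0.19

        image = data.camera()                                   # 8-bit grey-level sample image
        glcm = graycomatrix(image, distances=[1, 2], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)

        props = ("contrast", "homogeneity", "energy", "correlation")
        feature_vector = np.concatenate([graycoprops(glcm, p).ravel() for p in props])
        # 'feature_vector' would be the per-image input to the cell-type classifier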

  15. Live-cell imaging of mitosis in Caenorhabditis elegans embryos.

    Science.gov (United States)

    Powers, James A

    2010-06-01

    Caenorhabditis elegans is a wonderful model system for live imaging studies of mitosis. A huge collection of research tools is readily available to facilitate experimentation. For imaging, C. elegans embryos provide large clear cells, an invariant pattern of cell division, only six chromosomes, a very short cell cycle, and remain healthy and happy at room temperature. Mitosis is a complicated process and the types of research questions being asked about the mechanisms involved are continuously expanding. For each experiment, the details of imaging methods need to be tailored to the question. Specific imaging methods will depend on the microscopy hardware and software available to each researcher. This article presents points to consider when choosing a microscope, designing an imaging experiment, or selecting appropriate worm strains for imaging. A method for mounting C. elegans embryos and guidelines for fluorescence and differential interference contrast imaging of mitosis in live embryos are presented.

  16. INFECTED HALLER CELL. RADIOLOGY IMAGE OF THE ISSUE

    Directory of Open Access Journals (Sweden)

    Balasubramanian Thiagarajan

    2012-08-01

    Full Text Available Haller cells are also known as infraorbital ethmoidal cells or maxillary ethmoidal cells. These cells extend into the inferomedial portion of the orbital floor. They are seen in 40% of patients [1]. This article discusses the imaging features of a Haller cell as seen on coronal CT scan.

  17. Intellectual Video Filming

    DEFF Research Database (Denmark)

    Juel, Henrik

    ... in favour of worthy causes. However, it is also very rewarding to draw on the creativity, enthusiasm and rapidly improving technical skills of young students, and to guide them to use video equipment themselves for documentary, for philosophical film essays and intellectual debate. In the digital era it seems vital that students, scholars and intellectuals begin to utilize the enormous potentials of communication and reflection inherent in the production of moving images and sound. At Roskilde University in Denmark we have a remarkable tradition of teaching documentary, video essays and video communication as project oriented group work. We also welcome international students for this unique learning experience combining traditional intellectual virtues with experimental aesthetics and modern media. The paper will present the aims, methods and results of this teaching and discuss lines of future ...

  18. Automatic video mosaic imaging based on an improved SIFT algorithm

    Institute of Scientific and Technical Information of China (English)

    卢斌; 宋夫华

    2013-01-01

    Because existing video mosaicking methods carry high computational cost, this paper proposes a panorama construction method based on adaptive frame sampling and a restricted feature-extraction region. A linear model built from the inter-frame overlap ratio and the frame interval is used to register each frame to its nearest neighbouring key frames. For feature extraction, an improved SIFT operator is used, and an improved random sample consensus (RANSAC) algorithm updates the matched points to reduce image registration error; for image fusion, a linearly weighted gradual-in/gradual-out blending algorithm is applied in the overlap region. Experiments on video sequences show that the method stably extracts key frames for general scenes and produces good mosaicking results.
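
    A minimal sketch of the registration core only: SIFT keypoints, ratio-test matching and RANSAC homography estimation between a frame and a key frame, with a crude overlap blend. The adaptive key-frame selection, the restricted extraction region and the gradual-in/gradual-out blending of the paper are not reproduced, and the image names are placeholders.

        import cv2
        import numpy as np

        key_frame = cv2.imread("keyframe.png", cv2.IMREAD_GRAYSCALE)    # placeholder names
        frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
        if key_frame is None or frame is None:
            raise SystemExit("replace the placeholder image names with real frames")

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(key_frame, None)
        kp2, des2 = sift.detectAndCompute(frame, None)

        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                if m.distance < 0.75 * n.distance]                      # ratio test

        src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, inliers = cv2.findHomography(dst, src, cv2.RANSAC, 5.0)      # RANSAC rejects mismatches

        warped = cv2.warpPerspective(frame, H, key_frame.shape[::-1])
        mosaic = cv2.addWeighted(key_frame, 0.5, warped, 0.5, 0)        # crude overlap blend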

  19. Molecular Imaging in Stem Cell Therapy for Spinal Cord Injury

    Directory of Open Access Journals (Sweden)

    Fahuan Song

    2014-01-01

    Full Text Available Spinal cord injury (SCI) is a serious disease of the central nervous system (CNS). It is a devastating injury, with sudden loss of motor, sensory, and autonomic function distal to the level of trauma, and it produces great personal and societal costs. Currently, there are no remarkably effective therapies for the treatment of SCI. Compared to traditional treatment methods, stem cell transplantation therapy holds potential for repair and functional plasticity after SCI. However, the mechanism of stem cell therapy for SCI remains largely unknown, partly due to the lack of efficient stem cell trafficking methods. Molecular imaging technologies, including positron emission tomography (PET), magnetic resonance imaging (MRI), and optical imaging (e.g., bioluminescence imaging, BLI), offer the hope of completing our knowledge of basic stem cell biology, including survival, migration, differentiation, and integration in real time after transplantation into the damaged spinal cord. In this paper, we mainly review molecular imaging technology in stem cell therapy for SCI.

  20. Light-Emitting Diode-Assisted Narrow Band Imaging Video Endoscopy System in Head and Neck Cancer

    Science.gov (United States)

    Chang, Hsin-Jen; Wang, Wen-Hung; Chang, Yen-Liang; Jeng, Tzuan-Ren; Wu, Chun-Te; Angot, Ludovic; Lee, Chun-Hsing

    2015-01-01

    Background/Aims To validate the effectiveness of a newly developed light-emitting diode (LED)-narrow band imaging (NBI) system for detecting early malignant tumors in the oral cavity. Methods Six men (mean age, 51.5 years) with early oral mucosa lesions were screened using both the conventional white light and LED-NBI systems. Results Small elevated or ulcerative lesions were found under the white light view, and typical scattered brown spots were identified after shifting to the LED-NBI view for all six patients. Histopathological examination confirmed squamous cell carcinoma. The clinical stage was early malignant lesions (T1), and the patients underwent wide excision for primary cancer. This is the pilot study documenting the utility of a new LED-NBI system as an adjunctive technique to detect early oral cancer using the diagnostic criterion of the presence of typical scattered brown spots in six high-risk patients. Conclusions Although large-scale screening programs should be established to further verify the accuracy of this technology, its lower power consumption, lower heat emission, and higher luminous efficiency appear promising for future clinical applications. PMID:25844342

  1. Toward automatic phenotyping of developing embryos from videos

    OpenAIRE

    Ning, F.; Delhomme, D.; Lecun, Y.; Piano, F.; Bottou, L.; Barbano, P.E.

    2005-01-01

    We describe a trainable system for analyzing videos of developing C. elegans embryos. The system automatically detects, segments, and locates cells and nuclei in microscopic images. The system was designed as the central component of a fully automated phenotyping system. The system contains three modules: 1) a convolutional network trained to classify each pixel into five categories: cell wall, cytoplasm, nucleus membrane, nucleus, outside medium; 2) an energy-based model, which cleans up the ...

  2. Non-invasive Imaging of Human Embryonic Stem Cells

    OpenAIRE

    Hong, Hao; Yang, Yunan; Zhang, Yin; Cai, Weibo

    2010-01-01

    Human embryonic stem cells (hESCs) hold tremendous therapeutic potential in a variety of diseases. Over the last decade, non-invasive imaging techniques have proven to be of great value in tracking transplanted hESCs. This review article will briefly summarize the various techniques used for non-invasive imaging of hESCs, which include magnetic resonance imaging (MRI), bioluminescence imaging (BLI), fluorescence, single-photon emission computed tomography (SPECT), positron emission tomography...

  3. Distribution and spatial variation of hydrothermal faunal assemblages at Lucky Strike (Mid-Atlantic Ridge) revealed by high-resolution video image analysis

    Science.gov (United States)

    Cuvelier, Daphne; Sarrazin, Jozée; Colaço, Ana; Copley, Jon; Desbruyères, Daniel; Glover, Adrian G.; Tyler, Paul; Serrão Santos, Ricardo

    2009-11-01

    Whilst the fauna inhabiting hydrothermal vent structures in the Atlantic Ocean is reasonably well known, less is understood about the spatial distributions of the fauna in relation to abiotic and biotic factors. In this study, a major active hydrothermal edifice (Eiffel Tower, at 1690 m depth) on the Lucky Strike vent field (Mid-Atlantic Ridge (MAR)) was investigated. Video transects were carried out by ROV Victor 6000 and complete image coverage was acquired. Four distinct assemblages, ranging from dense larger-sized Bathymodiolus mussel beds to smaller-sized mussel clumps and alvinocaridid shrimps, and two types of substrata were defined based on high definition photographs and video imagery. To evaluate spatial variation, faunal distribution was mapped in three dimensions. A high degree of patchiness characterizes this 11 m high sulfide structure. The differences observed in assemblage and substratum distribution were related to habitat characteristics (fluid exits, depth and structure orientation). Gradients in community structure were observed, which coincided with an increasing distance from the fluid exits. A biological zonation model for the Eiffel Tower edifice was created in which faunal composition and distribution can be visually explained by the presence/absence of fluid exits.

  4. A Novel Morphometry-Based Protocol of Automated Video-Image Analysis for Species Recognition and Activity Rhythms Monitoring in Deep-Sea Fauna

    Directory of Open Access Journals (Sweden)

    Paolo Menesatti

    2009-10-01

    Full Text Available The understanding of ecosystem dynamics in deep-sea areas is to date limited by technical constraints on sampling repetition. We have elaborated a morphometry-based protocol for automated video-image analysis in which animal movement tracking (by frame subtraction) is accompanied by species identification from the animals' outlines using Fourier Descriptors and a standard K-Nearest Neighbours method. One week of footage from a permanent video station located at 1,100 m depth in Sagami Bay (Central Japan) was analysed. Out of 150,000 frames (1 per 4 s), a subset of 10,000 was analyzed by a trained operator to increase the efficiency of the automated procedure. Error estimation for the automated and the trained-operator procedures was computed as a measure of protocol performance. Three displacing species were identified as the most recurrent: Zoarcid fishes (eelpouts), red crabs (Paralomis multispina), and snails (Buccinum soyomaruae). Species identification with KNN thresholding produced better results in automated motion detection. Results are discussed on the assumption that this technological bottleneck is at present deeply conditioning the exploration of the deep sea.
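
    A minimal sketch of two building blocks named above: movement detection by frame subtraction and a translation- and scale-normalised Fourier descriptor of each detected outline, the kind of feature that would feed the K-Nearest Neighbours step. Thresholds, file names and the number of coefficients kept are assumptions.

        import cv2
        import numpy as np

        def fourier_descriptors(contour, n_coeffs=16):
            """Translation/scale-normalised Fourier descriptor of a closed outline."""
            pts = contour.reshape(-1, 2).astype(np.float64)
            z = pts[:, 0] + 1j * pts[:, 1]                  # complex boundary signal
            coeffs = np.fft.fft(z - z.mean())               # mean removal: translation invariance
            coeffs = coeffs / (np.abs(coeffs[1]) + 1e-12)   # scale normalisation
            return np.abs(coeffs[1:n_coeffs + 1])           # magnitudes: rotation invariant

        prev = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)         # placeholder names
        curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)
        if prev is None or curr is None:
            raise SystemExit("replace the placeholder frame names with real frames")

        diff = cv2.absdiff(curr, prev)                                  # frame subtraction
        _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        descriptors = [fourier_descriptors(c) for c in contours if len(c) > 20]
        # 'descriptors' would then be compared against labelled outlines with a KNN classifier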

  5. Stream Station video server system; Video saba sochi `Stream station`

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    A video server system, Stream Station, has been developed for delivering moving images to clients on a VOD (video on demand) basis. In this system, video data compressed using the MPEG (Moving Picture Experts Group) technique are stored in a RAID (redundant array of independent disks) system and output as analog video after expansion in a decoder provided at the output stage. The server easily realizes a VOD system using existing in-house cable TV facilities, for instance for hotel rooms. The data may be in either the MPEG1 or MPEG2 format, video may be transmitted simultaneously over up to 32 channels, and the storage capacity is up to 256 gigabytes. The system incorporates a video server architecture developed by Toshiba Corporation, and the data transfer rate is fully guaranteed from data readout to transmission. (translated by NEDO)

  6. Tracking immune cells in vivo using magnetic resonance imaging.

    Science.gov (United States)

    Ahrens, Eric T; Bulte, Jeff W M

    2013-10-01

    The increasing complexity of in vivo imaging technologies, coupled with the development of cell therapies, has fuelled a revolution in immune cell tracking in vivo. Powerful magnetic resonance imaging (MRI) methods are now being developed that use iron oxide- and ¹⁹F-based probes. These MRI technologies can be used for image-guided immune cell delivery and for the visualization of immune cell homing and engraftment, inflammation, cell physiology and gene expression. MRI-based cell tracking is now also being applied to evaluate therapeutics that modulate endogenous immune cell recruitment and to monitor emerging cellular immunotherapies. These recent uses show that MRI has the potential to be developed in many applications to follow the fate of immune cells in vivo.

  7. Cell-based therapies and imaging in cardiology

    Energy Technology Data Exchange (ETDEWEB)

    Bengel, Frank M. [Technische Universitaet Muenchen, Nuklearmedizinische Klinik und Poliklinik, Munich (Germany); Schachinger, Volker; Dimmeler, Stefanie [University of Frankfurt, Department of Molecular Cardiology, Frankfurt (Germany)

    2005-12-01

    Cell therapy for cardiac repair has emerged as one of the most exciting and promising developments in cardiovascular medicine. Evidence from experimental and clinical studies is increasing that this innovative treatment will influence clinical practice in the future. But open questions and controversies with regard to the basic mechanisms of this therapy continue to exist and emphasise the need for specific techniques to visualise the mechanisms and success of therapy in vivo. Several non-invasive imaging approaches which aim at tracking of transplanted cells in the heart have been introduced. Among these are direct labelling of cells with radionuclides or paramagnetic agents, and the use of reporter genes for imaging of cell transplantation and differentiation. Initial studies have suggested that these molecular imaging techniques have great potential. Integration of cell imaging into studies of cardiac cell therapy holds promise to facilitate further growth of the field towards a broadly clinically useful application. (orig.)

  8. Fast Aerial Video Stitching

    Directory of Open Access Journals (Sweden)

    Jing Li

    2014-10-01

    Full Text Available The highly efficient and robust stitching of aerial video captured by unmanned aerial vehicles (UAVs) is a challenging problem in the field of robot vision. Existing commercial image stitching systems have seen success with offline stitching tasks, but they cannot guarantee high-speed performance when dealing with online aerial video sequences. In this paper, we present a novel system with the unique ability to stitch high-frame-rate aerial video at a speed of 150 frames per second (FPS). In addition, rather than using a high-speed vision platform such as an FPGA or CUDA, our system runs on a normal personal computer. To achieve this, after a careful comparison of existing invariant features, we choose the FAST corner and a binary descriptor for efficient feature extraction and representation, and present a spatially and temporally coherent filter to fuse the UAV motion information into the feature matching. The proposed filter can remove the majority of feature correspondence outliers and increase the speed of robust feature matching by up to 20 times. To achieve a balance between robustness and efficiency, a dynamic key-frame-based stitching framework is used to reduce accumulation errors. Extensive experiments on challenging UAV datasets demonstrate that our approach can break through the speed limitation and generate accurate stitched images for aerial video stitching tasks.
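
    A minimal sketch of the fast feature stage only: FAST corners with a binary (ORB) descriptor matched by Hamming distance. The spatial and temporal coherence filter and the key-frame stitching framework of the paper are not included; the image names are placeholders.

        import cv2

        img1 = cv2.imread("aerial_0001.png", cv2.IMREAD_GRAYSCALE)      # placeholder names
        img2 = cv2.imread("aerial_0002.png", cv2.IMREAD_GRAYSCALE)
        if img1 is None or img2 is None:
            raise SystemExit("replace the placeholder image names with real frames")

        orb = cv2.ORB_create(nfeatures=2000)        # FAST detector + binary descriptor
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)      # Hamming distance
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        # a UAV-motion coherence filter would prune 'matches' before fitting a transform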

  9. Video classification for video quality prediction

    Institute of Scientific and Technical Information of China (English)

    LIU Yu-xin; KURCEREN Ragip; BUDHIA Udit

    2006-01-01

    In this paper we propose a novel method for video quality prediction using video classification. In essence, our approach can serve two goals: (1) To measure the video quality of compressed video sequences without referencing the original uncompressed videos, i.e., to realize No-Reference (NR) video quality evaluation; (2) To predict quality scores for uncompressed video sequences at various bitrates without actually encoding them. The use of our approach can help realize video streaming with ideal Quality of Service (QoS). Our approach is a low complexity solution, which is especially suitable for application to mobile video streaming, where the resources at the handsets are scarce.

  10. Use of focused beam reflectance measurement (FBRM) and process video imaging (PVI) in a modified mixed suspension mixed product removal (MSMPR) cooling crystallizer

    Science.gov (United States)

    Kougoulos, E.; Jones, A. G.; Jennings, K. H.; Wood-Kaczmar, M. W.

    2005-01-01

    The FBRM instrument is a 'powerful' tool developed by Lasentec as an 'in situ' particle monitoring technique for in-line, real-time measurement of particle size. This technique was successfully used to monitor steady-state operation in a modified MSMPR crystallizer and to estimate crystallization kinetics. The FBRM particle size measurements were complemented by an in-line process video imaging (PVI) system developed in-house (Microscopy and Microanalysis, 6 (Suppl. 2) (2000) 996-997) to visualize habit and crystal behaviour within an MSMPR crystallizer. A comparison of the steady-state crystal size distributions measured by low angle light scattering (LALLS) and FBRM was made, showing poor sensitivity of the FBRM technique to particles of less than 1 μm; hence the technique was not suitable for the measurement of crystallization kinetics for this organic system.

  11. EI Videos

    CERN Document Server

    Courtney, Michael; Courtney, Amy

    2012-01-01

    The Quantitative Reasoning Center (QRC) at USAFA has the institution's primary responsibility for offering after hours extra instruction (EI) in core technical disciplines (mathematics, chemistry, physics, and engineering mechanics). Demand has been tremendous, totaling over 3600 evening EI sessions in the Fall of 2010. Meeting this demand with only four (now five) full time faculty has been challenging. EI Videos have been produced to help serve cadets in need of well-modeled solutions to homework-type problems. These videos have been warmly received, being viewed over 14,000 times in Fall 2010 and probably contributing to a significant increase in the first attempt success rate on the Algebra Fundamental Skills Exam in Calculus 1. EI Video production is being extended to better support Calculus 2, Calculus 3, and Physics 1.

  12. Video doorphone

    OpenAIRE

    Horyna, Miroslav

    2015-01-01

    This master's thesis deals with the design of a door video phone on the Raspberry Pi platform. It describes the Raspberry Pi platform, the Raspberry Pi Camera module, operating systems for the Raspberry Pi, and the installation and configuration of the software. It then describes the design and implementation of the programs created for the door video phone and the design of additional modules.

  13. Video Podcasts

    DEFF Research Database (Denmark)

    Nortvig, Anne Mette; Sørensen, Birgitte Holm

    2016-01-01

    This project’s aim was to support and facilitate master’s students’ preparation and collaboration by making video podcasts of short lectures available on YouTube prior to students’ first face-to-face seminar. The empirical material stems from group interviews, from statistical data created through YouTube analytics and from surveys answered by students after the seminar. The project sought to explore how video podcasts support learning and reflection online and how students use and reflect on the integration of online activities in the videos. Findings showed that students engaged actively in podcasts that included designed activities, and moreover, although to a lesser degree, that students engaged actively in podcasts that did not include additional activities, suggesting that learning via podcast does not always mean learning by passive listening.

  14. Activity based video indexing and search

    Science.gov (United States)

    Chen, Yang; Jiang, Qin; Medasani, Swarup; Allen, David; Lu, Tsai-ching

    2010-04-01

    We describe a method for searching videos in large video databases based on the activity contents present in the videos. Being able to search videos based on the contents (such as human activities) has many applications such as security, surveillance, and other commercial applications such as on-line video search. Conventional video content-based retrieval (CBR) systems are either feature based or semantics based, with the former trying to model the dynamics video contents using the statistics of image features, and the latter relying on automated scene understanding of the video contents. Neither approach has been successful. Our approach is inspired by the success of visual vocabulary of "Video Google" by Sivic and Zisserman, and the work of Nister and Stewenius who showed that building a visual vocabulary tree can improve the performance in both scalability and retrieval accuracy for 2-D images. We apply visual vocabulary and vocabulary tree approach to spatio-temporal video descriptors for video indexing, and take advantage of the discrimination power of these descriptors as well as the scalability of vocabulary tree for indexing. Furthermore, this approach does not rely on any model-based activity recognition. In fact, training of the vocabulary tree is done off-line using unlabeled data with unsupervised learning. Therefore the approach is widely applicable. Experimental results using standard human activity recognition videos will be presented that demonstrate the feasibility of this approach.

  15. Non-intrusive telemetry applications in the oilsands: from visible light and x-ray video to acoustic imaging and spectroscopy

    Science.gov (United States)

    Shaw, John M.

    2013-06-01

    While the production, transport and refining of oils from the oilsands of Alberta, and of comparable resources elsewhere, are performed at industrial scales, numerous technical and technological challenges and opportunities persist due to the ill-defined nature of the resource. For example, bitumen and heavy oil comprise multiple bulk phases and self-organizing constituents at the microscale (liquid crystals) and the nanoscale, and there are no quantitative measures available at the molecular level. Non-intrusive telemetry is providing promising paths toward solutions, be they enabling technologies targeting process design, development or optimization, or more prosaic process control or process monitoring applications. Operational examples include automated detection of large objects and poor-quality ore during mining, and monitoring the thickness and location of oil-water interfacial zones within separation vessels; these applications involve real-time video image processing. X-ray transmission video imaging is used to enumerate the organic phases present within a vessel, and to detect individual phase volumes, densities and elemental compositions. This is an enabling technology that provides phase equilibrium and phase composition data for production and refining process development, and for fluid property myth debunking. A high-resolution two-dimensional acoustic mapping technique, now at the proof-of-concept stage, is expected to provide simultaneous fluid flow and fluid composition data within porous inorganic media; again, this is an enabling technology targeting visualization of diverse oil production process fundamentals at the pore scale. Far infrared spectroscopy coupled with detailed quantum mechanical calculations may provide the characteristic molecular motifs and intermolecular association data required for fluid characterization and process modeling. X-ray scattering (SAXS/WAXS/USAXS) provides characteristic supramolecular structure information that impacts fluid rheology and process

  16. Measurement of cell traction forces with ImageJ.

    Science.gov (United States)

    Martiel, Jean-Louis; Leal, Aldo; Kurzawa, Laetitia; Balland, Martial; Wang, Irene; Vignaud, Timothée; Tseng, Qingzong; Théry, Manuel

    2015-01-01

    The quantification of cell traction forces requires three key steps: cell plating on a deformable substrate, measurement of substrate deformation, and the numerical estimation of the corresponding cell traction forces. The computing steps to measure gel deformation and estimate the force field have somehow limited the adoption of this method in cell biology labs. Here we propose a set of ImageJ plug-ins so that every lab equipped with a fluorescent microscope can measure cell traction forces.
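
    A rough sketch, under stated assumptions, of only the first computing step mentioned above (measuring substrate deformation), using block-wise phase correlation between a relaxed and a deformed bead image; the window size is arbitrary and this is not the ImageJ plug-in described in the record, which also performs the traction force reconstruction.

      import numpy as np
      from skimage.registration import phase_cross_correlation

      def displacement_field(relaxed, deformed, win=64):
          # Estimate a (dy, dx) displacement per window by phase correlation
          # between the relaxed-gel bead image and the deformed one.
          ys = list(range(0, relaxed.shape[0] - win + 1, win))
          xs = list(range(0, relaxed.shape[1] - win + 1, win))
          field = np.zeros((len(ys), len(xs), 2))
          for i, y in enumerate(ys):
              for j, x in enumerate(xs):
                  ref = relaxed[y:y + win, x:x + win]
                  mov = deformed[y:y + win, x:x + win]
                  shift, _, _ = phase_cross_correlation(ref, mov, upsample_factor=10)
                  field[i, j] = shift
          return field  # input to a subsequent traction reconstruction step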

  17. VLSI Neural Networks Help To Compress Video Signals

    Science.gov (United States)

    Fang, Wai-Chi; Sheu, Bing J.

    1996-01-01

    Advanced analog/digital electronic system for compression of video signals incorporates artificial neural networks. Performs motion-estimation and image-data-compression processing. Effectively eliminates temporal and spatial redundancies of sequences of video images; processes video image data, retaining only nonredundant parts to be transmitted, then transmits resulting data stream in form of efficient code. Reduces bandwidth and storage requirements for transmission and recording of video signal.

  18. Automatic segmentation of HeLa cell images

    CERN Document Server

    Urban, Jan

    2011-01-01

    In this work, methods for segmenting cells from their background and from each other in digital images were tested, combined and improved. A large set of images containing young, adult and mixed cell populations was used to assess the quality of the described algorithms. Proper segmentation is one of the main tasks of image analysis, and the order of processing steps differs from work to work, depending on the input images. The work addresses a biologically motivated question through a pipeline that includes filtering, detail enhancement, segmentation and sphericity computation. The ordering of the algorithms and the way they were selected are also described. Some questions and ideas for further work are mentioned in the conclusion.
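
    The pipeline of steps named above (filtering, detail enhancement, segmentation, sphericity computation) could look roughly like the following scikit-image sketch; the operator choices, thresholds and the 2-D circularity proxy are assumptions, not the original work's algorithms.

      import numpy as np
      from skimage import filters, measure, morphology

      def segment_and_measure(image, min_area=100):
          smoothed = filters.gaussian(image, sigma=2)            # noise filtration
          binary = smoothed > filters.threshold_otsu(smoothed)   # global segmentation
          binary = morphology.remove_small_objects(binary, min_area)
          labels = measure.label(binary)                         # connected components (touching cells would need e.g. watershed)
          results = []
          for region in measure.regionprops(labels):
              # 2-D circularity as a stand-in for a sphericity measure:
              # 4*pi*area / perimeter^2 equals 1 for a perfect disk.
              circularity = 4 * np.pi * region.area / (region.perimeter ** 2 + 1e-12)
              results.append((region.label, region.area, circularity))
          return labels, results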

  19. Imaging of anticancer drug action in single cells.

    Science.gov (United States)

    Miller, Miles A; Weissleder, Ralph

    2017-06-23

    Imaging is widely used in anticancer drug development, typically for whole-body tracking of labelled drugs to different organs or to assess drug efficacy through volumetric measurements. However, increasing attention has been drawn to pharmacology at the single-cell level. Diverse cell types, including cancer-associated immune cells, physicochemical features of the tumour microenvironment and heterogeneous cell behaviour all affect drug delivery, response and resistance. This Review summarizes developments in the imaging of in vivo anticancer drug action, with a focus on microscopy approaches at the single-cell level and translational lessons for the clinic.

  20. In vivo SPECT reporter gene imaging of regulatory T cells.

    Directory of Open Access Journals (Sweden)

    Ehsan Sharif-Paghaleh

    Full Text Available Regulatory T cells (Tregs) were identified several years ago and are key in controlling autoimmune diseases and limiting immune responses to foreign antigens, including alloantigens. In vivo imaging techniques, including intravital microscopy as well as whole body imaging using bioluminescence probes, have contributed to the understanding of in vivo Treg function, their mechanisms of action and their target cells. Imaging of the human sodium/iodide symporter via Single Photon Emission Computed Tomography (SPECT) has been used to image various cell types in vivo. It has several advantages over the aforementioned imaging techniques, including high sensitivity; it allows non-invasive whole body studies of viable cell migration and localisation of cells over time; and lastly it may offer the possibility of being translated to the clinic. This study addresses whether SPECT/CT imaging can be used to visualise the migratory pattern of Tregs in vivo. Treg lines derived from CD4(+)CD25(+)FoxP3(+) cells were retrovirally transduced with a construct encoding the human Sodium Iodide Symporter (NIS) and the fluorescent protein mCherry and stimulated with autologous DCs. NIS-expressing self-specific Tregs were specifically radiolabelled in vitro with Technetium-99m pertechnetate ((99m)TcO4(-)), and exposure of these cells to radioactivity did not affect cell viability, phenotype or function. In addition, adoptively transferred Treg-NIS cells were imaged in vivo in C57BL/6 (BL/6) mice by SPECT/CT using (99m)TcO4(-). After 24 hours, NIS-expressing Tregs were observed in the spleen and their localisation was further confirmed by organ biodistribution studies and flow cytometry analysis. The data presented here suggest that SPECT/CT imaging can be utilised in preclinical imaging studies of adoptively transferred Tregs without affecting Treg function and viability, thereby allowing longitudinal studies within disease models.

  1. Active vision and image/video understanding systems built upon network-symbolic models for perception-based navigation of mobile robots in real-world environments

    Science.gov (United States)

    Kuvich, Gary

    2004-12-01

    To be completely successful, robots need to have reliable perceptual systems that are similar to human vision. It is hard to use geometric operations for processing of natural images. Instead, the brain builds a relational network-symbolic structure of visual scene, using different clues to set up the relational order of surfaces and objects with respect to the observer and to each other. Feature, symbol, and predicate are equivalent in the biologically inspired Network-Symbolic systems. A linking mechanism binds these features/symbols into coherent structures, and image converts from a "raster" into a "vector" representation. View-based object recognition is a hard problem for traditional algorithms that directly match a primary view of an object to a model. In Network-Symbolic Models, the derived structure, not the primary view, is a subject for recognition. Such recognition is not affected by local changes and appearances of the object as seen from a set of similar views. Once built, the model of visual scene changes slower than local information in the visual buffer. It allows for disambiguating visual information and effective control of actions and navigation via incremental relational changes in visual buffer. Network-Symbolic models can be seamlessly integrated into the NIST 4D/RCS architecture and better interpret images/video for situation awareness, target recognition, navigation and actions.

  2. Integration of image/video understanding engine into 4D/RCS architecture for intelligent perception-based behavior of robots in real-world environments

    Science.gov (United States)

    Kuvich, Gary

    2004-10-01

    To be completely successful, robots need to have reliable perceptual systems that are similar to human vision. It is hard to use geometric operations for processing of natural images. Instead, the brain builds a relational network-symbolic structure of visual scene, using different clues to set up the relational order of surfaces and objects with respect to the observer and to each other. Feature, symbol, and predicate are equivalent in the biologically inspired Network-Symbolic systems. A linking mechanism binds these features/symbols into coherent structures, and image converts from a "raster" into a "vector" representation. View-based object recognition is a hard problem for traditional algorithms that directly match a primary view of an object to a model. In Network-Symbolic Models, the derived structure, not the primary view, is a subject for recognition. Such recognition is not affected by local changes and appearances of the object as seen from a set of similar views. Once built, the model of visual scene changes slower than local information in the visual buffer. It allows for disambiguating visual information and effective control of actions and navigation via incremental relational changes in visual buffer. Network-Symbolic models can be seamlessly integrated into the NIST 4D/RCS architecture and better interpret images/video for situation awareness, target recognition, navigation and actions.

  3. Preparation of Single Cells for Imaging Mass Spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Berman, E S; Fortson, S L; Kulp, K S; Checchi, K D; Wu, L; Felton, J S; Wu, K J

    2007-10-24

    Characterizing chemical changes within single cells is important for determining fundamental mechanisms of biological processes that will lead to new biological insights and improved disease understanding. Imaging biological systems with mass spectrometry (MS) has gained popularity in recent years as a method for creating precise chemical maps of biological samples. In order to obtain high-quality mass spectral images that provide relevant molecular information about individual cells, samples must be prepared so that salts and other cell-culture components are removed from the cell surface and the cell contents are rendered accessible to the desorption beam. We have designed a cellular preparation protocol for imaging MS that preserves the cellular contents for investigation and removes the majority of the interfering species from the extracellular matrix. Using this method, we obtain excellent imaging results and reproducibility in three diverse cell types: MCF7 human breast cancer cells, Madin-Darby canine kidney (MDCK) cells, and NIH/3T3 mouse fibroblasts. This preparation technique allows routine imaging MS analysis of cultured cells, allowing for any number of experiments aimed at furthering scientific understanding of molecular processes within individual cells.

  4. Intelligent network video understanding modern video surveillance systems

    CERN Document Server

    Nilsson, Fredrik

    2008-01-01

    Offering ready access to the security industry's cutting-edge digital future, Intelligent Network Video provides the first complete reference for all those involved with developing, implementing, and maintaining the latest surveillance systems. Pioneering expert Fredrik Nilsson explains how IP-based video surveillance systems provide better image quality, and a more scalable and flexible system at lower cost. A complete and practical reference for all those in the field, this volume:Describes all components relevant to modern IP video surveillance systemsProvides in-depth information about ima

  5. Special Needs: Planning for Adulthood (Videos)

    Medline Plus

    Full Text Available ... the future, watch this video series together to learn about everything from financial and health care benefits ... doctor.

  6. Video animation system

    Energy Technology Data Exchange (ETDEWEB)

    Mareda, J.

    1985-01-01

    A video animation system is being used at Sandia Laboratories in Albuquerque to record computer generated images directly onto 3/4'' videocassettes. The system serves as a quick turn around process for previewing sequences prior to sending them to a Dicomed film recorder. It is also used when videocassette is appropriate for final output. The video animation system in place at Sandia is described. The system consists of a medium resolution graphics display system, a 3/4'' professional quality videocassette recorder, and a controller that allows single frame recording of computer generated images to be performed under program control. Examples of output produced using this system are presented which will include animated sequences of scientific data produced by DISSPLA programs.

  7. Airborne Multiplex Video Image Acquisition and Transmission System

    Institute of Scientific and Technical Information of China (English)

    嵇晓强; 戴明; 孙丽娜; 尹传历; 陈晓露; 王子辰

    2012-01-01

    In order to improve the reliability and stability of an airborne imaging system, a multiplex video image acquisition and transmission system based on a Field Programmable Gate Array (FPGA) was designed in the context of a practical engineering project. The FPGA controls the overall timing sequence and implements the interface control logic for the external devices, input/output buffering, real-time camera control and image down-sampling. The system has been successfully applied in an aerial photoelectric imaging platform. Results from practical tests and simulation indicate that all indicators meet the project requirements and that the system offers high reliability, low risk of data loss, strong resistance to interference, convenient data transmission and processing, and good practicality and extensibility.

  8. In vitro and in vivo imaging of initial B-T-cell interactions in the setting of B-cell based cancer immunotherapy

    Science.gov (United States)

    Gonzalez, Nela Klein; Wennhold, Kerstin; Balkow, Sandra; Kondo, Eisei; Bölck, Birgit; Weber, Tanja; Garcia-Marquez, Maria; Grabbe, Stephan; Bloch, Wilhelm; von Bergwelt-Baildon, Michael; Shimabukuro-Vornhagen, Alexander

    2015-01-01

    There has been a growing interest in the use of B cells for cancer vaccines, since they have yielded promising results in preclinical animal models. Contrary to dendritic cells (DCs), we know little about the migration behavior of B cells in vivo. Therefore, we investigated the interactions between CD40-activated B (CD40B) cells and cytotoxic T cells in vitro and the migration behavior of CD40B cells in vivo. Dynamic interactions of human antigen-presenting cells (APCs) and T cells were observed by time-lapse video microscopy. The migratory and chemoattractant potential of CD40B cells was analyzed in vitro and in vivo using flow cytometry, standard transwell migration assays, and imaging of fluorescently labeled murine CD40B cells. Murine CD40B cells show migratory features similar to human CD40B cells. They express important lymph node homing receptors, which were functional and induced chemotaxis of T cells in vitro. Striking differences were observed with regard to interactions of human APCs with T cells. CD40B cells differ from DCs by displaying a rapid migratory pattern, undergoing highly dynamic, short-lived and sequential interactions with T cells. In vivo, CD40B cells home to the secondary lymphoid organs, where they accumulate in the B cell zone before traveling to the B/T cell boundary. Moreover, intravenous (i.v.) administration of murine CD40B cells induced an antigen-specific cytotoxic T cell response. Taken together, these data show that CD40B cells home to secondary lymphoid organs, where they physically interact with T cells to induce antigen-specific T cell responses, thus underscoring their potential as a cellular adjuvant for cancer immunotherapy. PMID:26405608

  9. Multimedia-Video for Learning

    CERN Document Server

    Chua, Kah Hean; Wee, Loo Kang; Tan, Ching

    2015-01-01

    Multimedia engages an audience through a combination of text, audio, still images, animation, video, or interactivity-based content formats. Along this vein, free platforms have been seen to allow budding enthusiasts to create multimedia content. For example, Google sites (Wee, 2012b) offer creative opportunities in website development that enable text insertion, still image, video and animation embedding, along with audio and hyper-interactive links to simulations (Christian & Esquembre, 2012; Wee, 2013; Wee, Goh, & Chew, 2013; Wee, Goh, & Lim, 2013; Wee, Lee, Chew, Wong, & Tan, 2015). This chapter focuses on the video aspect of multimedia, which can be positioned as a component to any effective self-paced on-line lesson that would be available anytime, anywhere via computer or mobile devices. The multimedia video approach aims to help users overcome barriers in creating engaging, effective and meaningful content (Barron & Darling-Hammond, 2008) for teaching and learning in an online envi...

  10. Design of microdevices for long-term live cell imaging

    Science.gov (United States)

    Chen, Huaying; Rosengarten, Gary; Li, Musen; Nordon, Robert E.

    2012-06-01

    Advances in fluorescent live cell imaging provide high-content information that relates a cell's life events to its ancestors. An important requirement to track clonal growth and development is the retention of motile cells derived from an ancestor within the same microscopic field of view for days to weeks, while recording fluorescence images and controlling the mechanical and biochemical microenvironments that regulate cell growth and differentiation. The aim of this study was to design a microwell device for long-term, time-lapse imaging of motile cells with the specific requirements of (a) inoculating devices with an average of one cell per well and (b) retaining progeny of cells within a single microscopic field of view for extended growth periods. A two-layer PDMS microwell culture device consisting of a parallel-plate flow cell bonded on top of a microwell array was developed for cell capture and clonal culture. Cell deposition statistics were related to microwell geometry (plate separation and well depth) and the Reynolds number. Computational fluid dynamics was used to simulate flow in the microdevices as well as cell-fluid interactions. Analysis of the forces acting upon a cell was used to predict cell docking zones, which were confirmed by experimental observations. Cell-fluid dynamic interactions are important considerations for design of microdevices for long-term, live cell imaging. The analysis of force and torque balance provides a reasonable approximation for cell displacement forces. It is computationally less intensive compared to simulation of cell trajectories, and can be applied to a wide range of microdevice geometries to predict the cell docking behavior.
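
    As a hedged illustration of the force-balance reasoning mentioned above, a Stokes-drag estimate gives the order of magnitude of the fluid force a docked cell must resist; the cell radius, flow speed and viscosity below are illustrative values, not the study's parameters.

      import math

      def stokes_drag(radius_m, velocity_m_s, viscosity_pa_s=1e-3):
          # Creeping-flow drag on a sphere: F = 6 * pi * mu * r * v (newtons).
          return 6 * math.pi * viscosity_pa_s * radius_m * velocity_m_s

      # e.g. a 10-micron-diameter cell in a 100 um/s local flow of a water-like medium
      drag_force = stokes_drag(radius_m=5e-6, velocity_m_s=1e-4)   # roughly 9.4e-12 N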

  11. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  12. Imaging Cells in Flow Cytometer Using Spatial-Temporal Transformation.

    Science.gov (United States)

    Han, Yuanyuan; Lo, Yu-Hwa

    2015-08-18

    Flow cytometers measure fluorescence and light scattering and analyze multiple physical characteristics of a large population of single cells as cells flow in a fluid stream through an excitation light beam. Although flow cytometers have massive statistical power due to their single cell resolution and high throughput, they produce no information about cell morphology or spatial resolution offered by microscopy, which is a much wanted feature missing in almost all flow cytometers. In this paper, we invent a method of spatial-temporal transformation to provide flow cytometers with cell imaging capabilities. The method uses mathematical algorithms and a spatial filter as the only hardware needed to give flow cytometers imaging capabilities. Instead of CCDs or any megapixel cameras found in any imaging systems, we obtain high quality image of fast moving cells in a flow cytometer using PMT detectors, thus obtaining high throughput in manners fully compatible with existing cytometers. To prove the concept, we demonstrate cell imaging for cells travelling at a velocity of 0.2 m/s in a microfluidic channel, corresponding to a throughput of approximately 1,000 cells per second.
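
    A toy sketch of the spatial-temporal idea, under strong simplifying assumptions: if each slit of the spatial filter passes light from one row of the cell as it sweeps past the detector, successive time segments of the single-pixel PMT waveform can be folded back into image rows. The slit geometry, segment lengths and synthetic waveform below are made up, and the actual reconstruction in the record also accounts for the filter response.

      import numpy as np

      def fold_waveform(signal, n_rows, samples_per_row):
          # Reshape the 1-D detector waveform into an n_rows x samples_per_row image,
          # one row per slit window, then normalize for display.
          needed = n_rows * samples_per_row
          img = np.asarray(signal[:needed], dtype=float).reshape(n_rows, samples_per_row)
          return img / (img.max() + 1e-12)

      # usage with a synthetic waveform standing in for a real PMT trace
      waveform = np.random.rand(10 * 200)
      cell_image = fold_waveform(waveform, n_rows=10, samples_per_row=200)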

  13. Akademisk video

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2016-01-01

    This chapter focuses on methodological and systemic issues concerning the researcher's positioning in relation to communicating on digital media, particularly with video. The systemic problems comprise a Janus-faced duality: the researcher may well want to communicate on digital platforms, but is weighed down internally by responsibility, time pressure, reputation, quality demands and the transience of digital platforms. The methodological problems include the fact that video analysis draws on many traditions but is underdeveloped with respect to the digital context. The empirical material consists of examples of online communication in the form of "academic video". The analysis applies narrative, multimodal analysis of video, primarily to two videos on the platform audiovisualthinking.org, in which the researcher appears as narrator or "storyteller". One video was made by the author. The video analysis was validated through collaboration with one of the audiovisualthinking.org founders.

  14. Magnetic Resonance Imaging as a Biomarker for Renal Cell Carcinoma

    Directory of Open Access Journals (Sweden)

    Yan Wu

    2015-01-01

    Full Text Available As the most common neoplasm arising from the kidney, renal cell carcinoma (RCC) continues to have a significant impact on global health. Conventional cross-sectional imaging has always served an important role in the staging of RCC. However, with recent advances in imaging techniques and postprocessing analysis, magnetic resonance imaging (MRI) now has the capability to function as a diagnostic, therapeutic, and prognostic biomarker for RCC. For this narrative literature review, a PubMed search was conducted to collect the most relevant and impactful studies from our perspectives as urologic oncologists, radiologists, and computational imaging specialists. We seek to cover advanced MR imaging and image analysis techniques that may improve the management of patients with small renal mass or metastatic renal cell carcinoma.

  15. Multiresolution Digital Watermarking for Video

    Institute of Scientific and Technical Information of China (English)

    NIU Xiamu; SUN Shenghe

    2001-01-01

    A method of embedding a digital watermark image in video is proposed in this paper. By multiresolution signal decomposition, the decomposed watermark image at each resolution is embedded in the corresponding resolution level of the decomposed video. Experimental results show that the proposed technique is robust against frame dropping, frame averaging and lossy compression attacks.

  16. Rendering-based video-CT registration with physical constraints for image-guided endoscopic sinus surgery

    Science.gov (United States)

    Otake, Y.; Leonard, S.; Reiter, A.; Rajan, P.; Siewerdsen, J. H.; Ishii, M.; Taylor, R. H.; Hager, G. D.

    2015-03-01

    We present a system for registering the coordinate frame of an endoscope to pre- or intra- operatively acquired CT data based on optimizing the similarity metric between an endoscopic image and an image predicted via rendering of CT. Our method is robust and semi-automatic because it takes account of physical constraints, specifically, collisions between the endoscope and the anatomy, to initialize and constrain the search. The proposed optimization method is based on a stochastic optimization algorithm that evaluates a large number of similarity metric functions in parallel on a graphics processing unit. Images from a cadaver and a patient were used for evaluation. The registration error was 0.83 mm and 1.97 mm for cadaver and patient images respectively. The average registration time for 60 trials was 4.4 seconds. The patient study demonstrated robustness of the proposed algorithm against a moderate anatomical deformation.

  17. Method of Video Image De-Noising Based on Mixed Filter

    Institute of Scientific and Technical Information of China (English)

    项力领; 刘智; 齐冀; 杨阳

    2013-01-01

    Video images corrupted simultaneously by Gaussian noise and impulse noise seriously affect image storage, encoding and decoding, transmission, and post-processing such as target identification and tracking. A mixed filtering method based on block-averaged edge detection is proposed. A count-based impulse noise detection step first separates the impulse noise from the mixed noise, and the detected impulse pixels are removed with median filtering. The edges of the resulting image are then extracted with the block-averaged edge detection method, the Gaussian noise in the non-edge regions is filtered with an adaptive mean filter, and the extracted edges are re-embedded into the Gaussian-filtered image. Experimental results show that the method effectively removes both Gaussian and impulse noise while preserving edge information, improving both the de-noising performance and the clarity of the image.
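
    A rough software sketch of the mixed-noise pipeline just described, with illustrative thresholds and a plain Sobel edge mask standing in for the block-averaged edge detection of the original method.

      import numpy as np
      from scipy import ndimage

      def mixed_filter(image, impulse_thresh=60, edge_frac=0.2):
          img = image.astype(float)
          med = ndimage.median_filter(img, size=3)
          impulse = np.abs(img - med) > impulse_thresh        # likely impulse pixels
          cleaned = np.where(impulse, med, img)               # median filter only where needed
          grad = np.hypot(ndimage.sobel(cleaned, axis=0), ndimage.sobel(cleaned, axis=1))
          edges = grad > edge_frac * grad.max()               # crude edge mask
          smoothed = ndimage.uniform_filter(cleaned, size=3)  # mean filter for Gaussian noise
          return np.where(edges, cleaned, smoothed)           # re-embed the unsmoothed edges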

  18. Single Molecule Imaging in Living Cell with Optical Method

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    Significance, difficult, international developing actuality and our completed works for single molecules imaging in living cell with optical method are described respectively. Additionally we give out some suggestions for the technology development further.

  19. Cellular transfer and AFM imaging of cancer cells using Bioimprint

    Directory of Open Access Journals (Sweden)

    Melville DOS

    2006-01-01

    Full Text Available Abstract A technique for permanently capturing a replica impression of biological cells has been developed to facilitate analysis using nanometer resolution imaging tools, namely the atomic force microscope (AFM). The method, termed Bioimprint™, creates a permanent cell 'footprint' in a non-biohazardous poly(dimethylsiloxane) (PDMS) polymer composite. The transfer of nanometer scale biological information is presented as an alternative imaging technique at a resolution beyond that of optical microscopy. By transferring cell topology into a rigid medium more suited for AFM imaging, many of the limitations associated with scanning of biological specimens can be overcome. Potential for this technique is demonstrated by analyzing Bioimprint™ replicas created from human endometrial cancer cells. The high resolution transfer of this process is further detailed by imaging membrane morphological structures consistent with exocytosis. The integration of soft lithography to replicate biological materials presents an enhanced method for the study of biological systems at the nanoscale.

  20. Fluorescence lifetime imaging of oxygen in living cells

    NARCIS (Netherlands)

    Gerritsen, H.C.; Sanders, R.; Draaijer, A.; Ince, C.; Levine, Y.K.

    1997-01-01

    The usefulness of the fluorescent probe ruthenium tris(2,2′-dipyridyl) dichloride hydrate (RTDP) for the quantitative imaging of oxygen in single cells was investigated utilizing fluorescence life-time imaging. The results indicate that the fluorescence behavior of RTDP in the presence of oxygen can

  1. Molecular Imaging and Therapy of Merkel Cell Carcinoma

    Directory of Open Access Journals (Sweden)

    Volkan Beylergil

    2014-04-01

    Full Text Available Several molecular imaging modalities have been evaluated in the management of Merkel cell carcinoma (MCC), a rare and aggressive tumor with a high tendency to metastasize. Continuous progress in the field of molecular imaging might improve management in these patients. The authors review the current modalities and their impact on MCC in this brief review article.

  2. Stem Cells as a Tool for Breast Imaging

    Directory of Open Access Journals (Sweden)

    Maria Elena Padín-Iruegas

    2012-01-01

    Full Text Available Stem cells are a field of intense scientific interest due to their therapeutic potential. There are different groups of stem cells, depending on their differentiation state. Stem cells may be found in isolation, but they generally reside in niches. Stem cells do not survive forever; they are affected by senescence. Cancer stem cells are best defined functionally, as a subpopulation of tumor cells that is enriched for tumorigenic properties and can regenerate the heterogeneity of the original tumor. Circulating tumor cells are cells that have detached from a primary tumor and circulate in the bloodstream. They may constitute seeds for the subsequent growth of additional tumors (metastases) in different tissues. Advances in molecular imaging have allowed a deeper understanding of the in vivo behavior of stem cells and have proven to be indispensable in preclinical and clinical studies. One of the first imaging modalities for monitoring pluripotent stem cells in vivo, magnetic resonance imaging (MRI) offers high spatial and temporal resolution to obtain detailed morphological and functional information. Advantages of radioscintigraphic techniques include their picomolar sensitivity, good tissue penetration, and translation to clinical applications. Radionuclide imaging is the sole direct labeling technique used thus far in human studies, involving both autologous bone marrow derived and peripheral stem cells.

  3. Multimodality Molecular Imaging of Stem Cells Therapy for Stroke

    OpenAIRE

    Fangfang Chao; Yehua Shen; Hong Zhang; Mei Tian

    2013-01-01

    Stem cells have been proposed as a promising therapy for treating stroke. While several studies have demonstrated the therapeutic benefits of stem cells, the exact mechanism remains elusive. Molecular imaging provides the possibility of the visual representation of biological processes at the cellular and molecular level. In order to facilitate research efforts to understand the stem cells therapeutic mechanisms, we need to further develop means of monitoring these cells noninvasively, longit...

  4. On generating cell exemplars for detection of mitotic cells in breast cancer histopathology images.

    Science.gov (United States)

    Aloraidi, Nada A; Sirinukunwattana, Korsuk; Khan, Adnan M; Rajpoot, Nasir M

    2014-01-01

    Mitotic activity is one of the main criteria that pathologists use to decide the grade of the cancer. Computerised mitotic cell detection promises to bring efficiency and accuracy into the grading process. However, detection and classification of mitotic cells in breast cancer histopathology images is a challenging task because of the large intra-class variation in the visual appearance of mitotic cells in various stages of cell division life cycle. In this paper, we test the hypothesis that cells in histopathology images can be effectively represented using cell exemplars derived from sub-images of various kinds of cells in an image for the purposes of mitotic cell classification. We compare three methods for generating exemplar cells. The methods have been evaluated in terms of classification performance on the MITOS dataset. The experimental results demonstrate that eigencells combined with support vector machines produce reasonably high detection accuracy among all the methods.
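
    A minimal sketch of the eigencell-plus-SVM combination reported above to perform best, assuming fixed-size cell patches have already been cropped and labelled; the component count and kernel settings are placeholders rather than the paper's configuration.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      def train_eigencell_classifier(patches, labels, n_components=50):
          # Flatten fixed-size cell patches, project onto "eigencells" (PCA) and
          # classify mitotic vs. non-mitotic with a support vector machine.
          X = np.asarray(patches, dtype=float).reshape(len(patches), -1)
          model = make_pipeline(StandardScaler(),
                                PCA(n_components=n_components),
                                SVC(kernel='rbf', C=1.0))
          return model.fit(X, labels)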

  5. Photoacoustic imaging of single circulating melanoma cells in vivo

    Science.gov (United States)

    Wang, Lidai; Yao, Junjie; Zhang, Ruiying; Xu, Song; Li, Guo; Zou, Jun; Wang, Lihong V.

    2015-03-01

    Melanoma, one of the most common types of skin cancer, has a high mortality rate, mainly due to a high propensity for tumor metastasis. The presence of circulating tumor cells (CTCs) is a potential predictor for metastasis. Label-free imaging of single circulating melanoma cells in vivo provides rich information on tumor progress. Here we present photoacoustic microscopy of single melanoma cells in living animals. We used a fast-scanning optical-resolution photoacoustic microscope to image the microvasculature in mouse ears. The imaging system has sub-cellular spatial resolution and works in reflection mode. A fast-scanning mirror allows the system to acquire fast volumetric images over a large field of view. A 500-kHz pulsed laser was used to image blood and CTCs. Single circulating melanoma cells were imaged in both capillaries and trunk vessels in living animals. These high-resolution images may be used in early detection of CTCs with potentially high sensitivity. In addition, this technique enables in vivo study of tumor cell extravasation from a primary tumor, which addresses an urgent pre-clinical need.

  6. Video Watermark Using Multiresolution Wavelet Decomposition

    Institute of Scientific and Technical Information of China (English)

    WANG Feng-bi; HUANG Jun-cai; WANG Bin; SHE Kun; ZHOU Ming-tian

    2005-01-01

    A novel technique for video watermarking based on the discrete wavelet transform (DWT) is presented. The intra frames of the video are first transformed into three gray images, and the 2nd-level discrete wavelet decomposition of these gray images is computed; the watermark W is embedded into the wavelet coefficients, and the inverse wavelet transform is applied to obtain gray images that carry the secret information. The intra frames of the video are then updated from these three gray images so that they contain the secret information. To extract the secret information, the intra frames are again transformed into three gray images, the 2nd-level discrete wavelet transform is applied, and the watermark W' is extracted from the wavelet coefficients of the three gray images. Test results show the good performance of the technique and its potential for video watermarking.
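
    A hedged sketch of 2nd-level wavelet-domain embedding on a single gray plane, assuming PyWavelets is available and frame dimensions divide evenly; the additive rule and embedding strength are assumptions, and the original scheme additionally decomposes the watermark itself and processes all three gray images of each intra frame.

      import numpy as np
      import pywt

      def embed_watermark(gray_frame, watermark, alpha=0.05):
          # Additively embed a (resized) watermark into the level-2 approximation band.
          coeffs = pywt.wavedec2(gray_frame.astype(float), 'haar', level=2)
          cA2 = coeffs[0]
          wm = np.resize(watermark.astype(float), cA2.shape)
          coeffs = [cA2 + alpha * wm] + list(coeffs[1:])
          return pywt.waverec2(coeffs, 'haar')

      def extract_watermark(marked_frame, original_frame, alpha=0.05):
          # Non-blind extraction: difference of the two approximation bands.
          cA_marked = pywt.wavedec2(marked_frame.astype(float), 'haar', level=2)[0]
          cA_orig = pywt.wavedec2(original_frame.astype(float), 'haar', level=2)[0]
          return (cA_marked - cA_orig) / alpha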

  7. Noninvasive Imaging of Administered Progenitor Cells

    Energy Technology Data Exchange (ETDEWEB)

    Steven R Bergmann, M.D., Ph.D.

    2012-12-03

    The objective of this research grant was to develop an approach for labeling progenitor cells, specifically those that we had identified as being able to replace ischemic heart cells, so that the distribution could be followed non-invasively. In addition, the research was aimed at determining whether administration of progenitor cells resulted in improved myocardial perfusion and function. The efficiency and toxicity of radiolabeling of progenitor cells was to be evaluated. For the proposed clinical protocol, subjects with end-stage ischemic coronary artery disease were to undergo a screening cardiac positron emission tomography (PET) scan using N-13 ammonia to delineate myocardial perfusion and function. If they qualified based on their PET scan, they would undergo an in-hospital protocol whereby CD34+ cells were stimulated by the administration of granulocyte colony-stimulating factor (G-CSF). CD34+ cells would then be isolated by apheresis, and labeled with indium-111 oxine. Cells were to be re-infused and subjects were to undergo single photon emission computed tomography (SPECT) scanning to evaluate uptake and distribution of labeled progenitor cells. Three months after administration of progenitor cells, a cardiac PET scan was to be repeated to evaluate changes in myocardial perfusion and/or function. Indium oxine is a radiopharmaceutical for labeling of autologous lymphocytes. Indium-111 (In-111) decays by electron capture with a half-life of 67.2 hours (2.8 days). Indium forms a saturated complex that is neutral, lipid soluble, and permeates the cell membrane. Within the cell, the indium-oxyquinoline complex labels via indium intracellular chelation. Following leukocyte labeling, ~77% of the In-111 is incorporated in the cell pellet. The presence of red cells and/or plasma reduces the labeling efficacy. Therefore, the product needed to be washed to eliminate plasma proteins. This repeated washing can damage cells. The CD34 selected product was a 90

  8. In vivo imaging of immune cell trafficking in cancer.

    Science.gov (United States)

    Ottobrini, Luisa; Martelli, Cristina; Trabattoni, Daria Lucia; Clerici, Mario; Lucignani, Giovanni

    2011-05-01

    Tumour establishment, progression and regression can be studied in vivo using an array of imaging techniques ranging from MRI to nuclear-based and optical techniques that highlight the intrinsic behaviour of different cell populations in the physiological context. Clinical in vivo imaging techniques and preclinical specific approaches have been used to study, both at the macroscopic and microscopic level, tumour cells, their proliferation, metastasisation, death and interaction with the environment and with the immune system. Fluorescent, radioactive or paramagnetic markers were used in direct protocols to label the specific cell population and reporter genes were used for genetic, indirect labelling protocols to track the fate of a given cell subpopulation in vivo. Different protocols have been proposed to in vivo study the interaction between immune cells and tumours by different imaging techniques (intravital and whole-body imaging). In particular in this review we report several examples dealing with dendritic cells, T lymphocytes and macrophages specifically labelled for different imaging procedures both for the study of their physiological function and in the context of anti-neoplastic immunotherapies in the attempt to exploit imaging-derived information to improve and optimise anti-neoplastic immune-based treatments.

  9. Labeling and imaging cells in the zebrafish hindbrain.

    Science.gov (United States)

    Jayachandran, Pradeepa; Hong, Elim; Brewster, Rachel

    2010-07-25

    Key to understanding the morphogenetic processes that shape the early vertebrate embryo is the ability to image cells at high resolution. In zebrafish embryos, injection of plasmid DNA results in mosaic expression, allowing for the visualization of single cells or small clusters of cells (1) . We describe how injection of plasmid DNA encoding membrane-targeted Green Fluorescent Protein (mGFP) under the control of a ubiquitous promoter can be used for imaging cells undergoing neurulation. Central to this protocol is the methodology for imaging labeled cells at high resolution in sections and also in real time. This protocol entails the injection of mGFP DNA into young zebrafish embryos. Embryos are then processed for vibratome sectioning, antibody labeling and imaging with a confocal microscope. Alternatively, live embryos expressing mGFP can be imaged using time-lapse confocal microscopy. We have previously used this straightforward approach to analyze the cellular behaviors that drive neural tube formation in the hindbrain region of zebrafish embryos (2). The fixed preparations allowed for unprecedented visualization of cell shapes and organization in the neural tube while live imaging complemented this approach enabling a better understanding of the cellular dynamics that take place during neurulation.

  10. Segmentation of object-based video of gaze communication

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Stegmann, Mikkel Bille; Forchhammer, Søren

    2005-01-01

    Aspects of video communication based on gaze interaction are considered. The overall idea is to use gaze interaction to control video, e.g. for video conferencing. Towards this goal, animation of a facial mask is demonstrated. The animation is based on images using Active Appearance Models (AAM). Good quality reproduction of (low-resolution) coded video of an animated facial mask at rates as low as 10-20 kbit/s using MPEG-4 object-based video is demonstrated.

  11. Design of a lossless image compression system for video capsule endoscopy and its performance in in-vivo trials.

    Science.gov (United States)

    Khan, Tareq H; Wahid, Khan A

    2014-11-04

    In this paper, a new low-complexity lossless image compression system for capsule endoscopy (CE) is presented. The compressor consists of a low-cost YEF color space converter and a variable-length predictive coder that combines Golomb-Rice and unary encoding. All components have been heavily optimized for low power and low cost, and the scheme is lossless, so the compression system does not incur any loss of image information. Unlike transform-based algorithms, the compressor can be interfaced with commercial image sensors that send pixel data in raster-scan fashion, which eliminates the need for large buffer memory. The compression algorithm works with white light imaging (WLI) and narrow band imaging (NBI), with average compression ratios of 78% and 84% respectively. Finally, a complete capsule endoscopy system is developed on a single, low-power, 65-nm field programmable gate array (FPGA) chip. The prototype is built on circular PCBs with a diameter of 16 mm. Several in-vivo and ex-vivo trials using pig intestine have been conducted with the prototype to validate the performance of the proposed lossless compression algorithm. The results show that, compared with all other existing works, the proposed algorithm offers a solution to wireless capsule endoscopy with a lossless and yet acceptable level of compression.
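
    A software sketch of the predictive plus Golomb-Rice idea for one image row, with a simple left-neighbour predictor and a fixed Rice parameter as placeholders; the hardware design in the record uses a YEF colour transform and a more elaborate variable-length scheme.

      def rice_encode(value, k):
          # Golomb-Rice code: unary-coded quotient, terminating 0, then k remainder bits.
          q, r = value >> k, value & ((1 << k) - 1)
          remainder = format(r, '0{}b'.format(k)) if k > 0 else ''
          return '1' * q + '0' + remainder

      def encode_row(pixels, k=3):
          bits, prev = [], 0
          for p in pixels:
              residual = p - prev                    # simple left-neighbour prediction
              prev = p
              mapped = 2 * residual if residual >= 0 else -2 * residual - 1   # zig-zag map to non-negative
              bits.append(rice_encode(mapped, k))
          return ''.join(bits)                       # bitstream as a string of '0'/'1'

      # e.g. encode_row([12, 13, 13, 200, 199]) yields a losslessly decodable bitstream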

  12. Eye Tracking System Algorithm Based on Video Images

    Institute of Scientific and Technical Information of China (English)

    王际航; 刘富; 袁雨桐; 刘星

    2016-01-01

    To address the eye tracking problem, a fast real-time eye tracking algorithm based on video images is proposed. The RGB color space is first converted to the YCbCr space, and faces are located using a skin-color model. After cropping, a Sobel edge-detection operator is applied by convolution, and a horizontal gray-level projection is used to find the approximate vertical position of the eyes, giving a coarse eye localization. This region is then divided into left-eye and right-eye areas using gray-level projection, and each eye is localized separately, yielding a precise localization of the eyes. In experiments on a test video sequence of 15 frames, the algorithm accurately tracked the eyes and met real-time requirements.
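
    A loose OpenCV sketch of the first stages described above (YCbCr skin-colour face localization, Sobel edges, horizontal projection for a coarse eye band); the Cb/Cr thresholds are common textbook values and the band height is arbitrary, so this is an illustration rather than the paper's algorithm.

      import cv2
      import numpy as np

      def coarse_eye_band(frame_bgr):
          ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
          _, cr, cb = cv2.split(ycrcb)
          skin = (cr > 133) & (cr < 173) & (cb > 77) & (cb < 127)     # textbook skin-colour box
          ys, xs = np.nonzero(skin)
          if ys.size == 0:
              return None                                             # no skin-coloured pixels found
          face = frame_bgr[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
          gray = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)
          edges = np.abs(cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3))  # Sobel edge response
          profile = edges.sum(axis=1)                                 # horizontal projection
          row = int(np.argmax(profile))                               # strongest edge row ~ eye line
          return face[max(row - 20, 0):row + 20]                      # coarse eye strip for refinement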

  13. PET molecular imaging in stem cell therapy for neurological diseases

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Jiachuan; Zhang, Hong [Second Affiliated Hospital of Zhejiang University School of Medicine, Department of Nuclear Medicine, Hangzhou, Zhejiang (China); Zhejiang University, Medical PET Center, Hangzhou (China); Institute of Nuclear Medicine and Molecular Imaging of Zhejiang University, Hangzhou (China); Key Laboratory of Medical Molecular Imaging of Zhejiang Province, Hangzhou (China); Tian, Mei [University of Texas, M.D. Anderson Cancer Center, Department of Experimental Diagnostic Imaging, Houston, TX (United States)

    2011-10-15

    Human neurological diseases such as Alzheimer's disease, Parkinson's disease, Huntington's disease, spinal cord injury and multiple sclerosis are caused by loss of different types of neurons and glial cells in the brain and spinal cord. At present, there are no effective therapies against these disorders. Discovery of the therapeutic potential of stem cells offers new strategies for the treatment of neurological diseases. Direct assessment of stem cells' survival, interaction with the host and impact on neuronal functions after transplantation requires advanced in vivo imaging techniques. Positron emission tomography (PET) is a potential molecular imaging modality to evaluate the viability and function of transplanted tissue or stem cells in the nervous system. This review focuses on PET molecular imaging in stem cell therapy for neurological diseases. (orig.)

  14. Information management for high content live cell imaging

    Directory of Open Access Journals (Sweden)

    White Michael RH

    2009-07-01

    Full Text Available Abstract Background High content live cell imaging experiments are able to track the cellular localisation of labelled proteins in multiple live cells over a time course. Experiments using high content live cell imaging will generate multiple large datasets that are often stored in an ad-hoc manner. This hinders identification of previously gathered data that may be relevant to current analyses. Whilst solutions exist for managing image data, they are primarily concerned with storage and retrieval of the images themselves and not the data derived from the images. There is therefore a requirement for an information management solution that facilitates the indexing of experimental metadata and results of high content live cell imaging experiments. Results We have designed and implemented a data model and information management solution for the data gathered through high content live cell imaging experiments. Many of the experiments to be stored measure the translocation of fluorescently labelled proteins from cytoplasm to nucleus in individual cells. The functionality of this database has been enhanced by the addition of an algorithm that automatically annotates results of these experiments with the timings of translocations and periods of any oscillatory translocations as they are uploaded to the repository. Testing has shown the algorithm to perform well with a variety of previously unseen data. Conclusion Our repository is a fully functional example of how high throughput imaging data may be effectively indexed and managed to address the requirements of end users. By implementing the automated analysis of experimental results, we have provided a clear impetus for individuals to ensure that their data forms part of that which is stored in the repository. Although focused on imaging, the solution provided is sufficiently generic to be applied to other functional proteomics and genomics experiments. The software is available from: http://code.google.com/p/livecellim/

  15. Small molecule probes for plant cell wall polysaccharide imaging

    Directory of Open Access Journals (Sweden)

    Ian eWallace

    2012-05-01

    Full Text Available Plant cell walls are composed of interlinked polymer networks consisting of cellulose, hemicelluloses, pectins, proteins, and lignin. The ordered deposition of these components is a dynamic process that critically affects the development and differentiation of plant cells. However, our understanding of cell wall synthesis and remodeling, as well as the diverse cell wall architectures that result from these processes, has been limited by a lack of suitable chemical probes that are compatible with live-cell imaging. In this review, we summarize the currently available molecular toolbox of probes for cell wall polysaccharide imaging in plants, with particular emphasis on recent advances in small molecule-based fluorescent probes. We also discuss the potential for further development of small molecule probes for the analysis of cell wall architecture and dynamics.

  16. Quantitative volumetric Raman imaging of three dimensional cell cultures

    Science.gov (United States)

    Kallepitis, Charalambos; Bergholt, Mads S.; Mazo, Manuel M.; Leonardo, Vincent; Skaalure, Stacey C.; Maynard, Stephanie A.; Stevens, Molly M.

    2017-03-01

    The ability to simultaneously image multiple biomolecules in biologically relevant three-dimensional (3D) cell culture environments would contribute greatly to the understanding of complex cellular mechanisms and cell-material interactions. Here, we present a computational framework for label-free quantitative volumetric Raman imaging (qVRI). We apply qVRI to a selection of biological systems: human pluripotent stem cells with their cardiac derivatives, monocytes and monocyte-derived macrophages in conventional cell culture systems and mesenchymal stem cells inside biomimetic hydrogels that supplied a 3D cell culture environment. We demonstrate visualization and quantification of fine details in cell shape, cytoplasm, nucleus, lipid bodies and cytoskeletal structures in 3D with unprecedented biomolecular specificity for vibrational microspectroscopy.

  17. Optical Imaging for Stem Cell Differentiation to Neuronal Lineage

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Do Won; Lee, Dong Soo [Seoul National Univ., Seoul (Korea, Republic of)

    2012-03-15

    In regenerative medicine, the prospect of stem cell therapy holds great promise for the recovery of injured tissues and effective treatment of intractable diseases. Tracking stem cell fate provides critical information to understand and evaluate the success of stem cell therapy. The recent emergence of in vivo noninvasive molecular imaging has enabled assessment of the behavior of grafted stem cells in living subjects. In this review, we provide an overview of current optical imaging strategies based on cell or tissue specific reporter gene expression and of in vivo methods to monitor stem cell differentiation into neuronal lineages. These methods use optical reporters either regulated by neuron-specific promoters or containing neuron-specific microRNA binding sites. Both systems revealed dramatic changes in optical reporter imaging signals in cells differentiating into a neuronal lineage. The detection limit of weak promoters or reporter genes can be greatly enhanced by adopting a yeast GAL4 amplification system or an engineering-enhanced luciferase reporter gene. Furthermore, we propose an advanced imaging system to monitor neuronal differentiation during neurogenesis that uses in vivo multiplexed imaging techniques capable of detecting several targets simultaneously.

  18. Optical imaging for stem cell differentiation to neuronal lineage.

    Science.gov (United States)

    Hwang, Do Won; Lee, Dong Soo

    2012-03-01

    In regenerative medicine, the prospect of stem cell therapy holds great promise for the recovery of injured tissues and effective treatment of intractable diseases. Tracking stem cell fate provides critical information to understand and evaluate the success of stem cell therapy. The recent emergence of in vivo noninvasive molecular imaging has enabled assessment of the behavior of grafted stem cells in living subjects. In this review, we provide an overview of current optical imaging strategies based on cell- or tissue-specific reporter gene expression and of in vivo methods to monitor stem cell differentiation into neuronal lineages. These methods use optical reporters either regulated by neuron-specific promoters or containing neuron-specific microRNA binding sites. Both systems revealed dramatic changes in optical reporter imaging signals in cells differentiating into a neuronal lineage. The detection limit of weak promoters or reporter genes can be greatly enhanced by adopting a yeast GAL4 amplification system or an engineering-enhanced luciferase reporter gene. Furthermore, we propose an advanced imaging system to monitor neuronal differentiation during neurogenesis that uses in vivo multiplexed imaging techniques capable of detecting several targets simultaneously.

  19. Deep Learning Automates the Quantitative Analysis of Individual Cells in Live-Cell Imaging Experiments.

    Science.gov (United States)

    Van Valen, David A; Kudo, Takamasa; Lane, Keara M; Macklin, Derek N; Quach, Nicolas T; DeFelice, Mialy M; Maayan, Inbal; Tanouchi, Yu; Ashley, Euan A; Covert, Markus W

    2016-11-01

    Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major critical challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as phase images of the cytoplasms of individual bacterial and mammalian cells from phase contrast images without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that require less curation time, are generalizable to a multiplicity of cell types, from bacteria to mammalian cells, and expand live-cell imaging capabilities to include multi-cell type systems.
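
    For illustration only, a tiny fully convolutional pixel classifier of the kind the record describes, assuming PyTorch is available; real networks of this type are much deeper, include downsampling and upsampling paths, and are trained on curated annotations.

      import torch
      import torch.nn as nn

      class TinySegNet(nn.Module):
          # Per-pixel classifier, e.g. background / cell boundary / cell interior.
          def __init__(self, n_classes=3):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, n_classes, kernel_size=1),
              )

          def forward(self, x):          # x: (batch, 1, H, W) phase or fluorescence image
              return self.net(x)         # (batch, n_classes, H, W) per-pixel class logits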

  20. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real... Topics include: re-identification, consumer behavior analysis, utilizing pupillary response for task difficulty measurement, logo detection, saliency prediction, classification of facial expressions, face recognition, face verification, age estimation, super-resolution, pose estimation, and pain recognition.

  1. Short Project-Based Learning with MATLAB Applications to Support the Learning of Video-Image Processing

    Science.gov (United States)

    Gil, Pablo

    2017-01-01

    University courses concerning Computer Vision and Image Processing are generally taught using a traditional methodology that is focused on the teacher rather than on the students. This approach is consequently not effective when teachers seek to attain cognitive objectives involving their students' critical thinking. This manuscript covers the…

  2. Local statistics allow quantification of cell-to-cell variability from high-throughput microscope images.

    Science.gov (United States)

    Handfield, Louis-François; Strome, Bob; Chong, Yolanda T; Moses, Alan M

    2015-03-15

    Quantifying variability in protein expression is a major goal of systems biology and cell-to-cell variability in subcellular localization pattern has not been systematically quantified. We define a local measure to quantify cell-to-cell variability in high-throughput microscope images and show that it allows comparable measures of variability for proteins with diverse subcellular localizations. We systematically estimate cell-to-cell variability in the yeast GFP collection and identify examples of proteins that show cell-to-cell variability in their subcellular localization. Automated image analysis methods can be used to quantify cell-to-cell variability in microscope images.

  3. High resolution ultrasound and photoacoustic imaging of single cells.

    Science.gov (United States)

    Strohm, Eric M; Moore, Michael J; Kolios, Michael C

    2016-03-01

    High resolution ultrasound and photoacoustic images of stained neutrophils, lymphocytes and monocytes from a blood smear were acquired using a combined acoustic/photoacoustic microscope. Photoacoustic images were created using a pulsed 532 nm laser that was coupled to a single mode fiber to produce output wavelengths from 532 nm to 620 nm via stimulated Raman scattering. The excitation wavelength was selected using optical filters and focused onto the sample using a 20× objective. A 1000 MHz transducer was co-aligned with the laser spot and used for ultrasound and photoacoustic images, enabling micrometer resolution with both modalities. The different cell types could be easily identified due to variations in contrast within the acoustic and photoacoustic images. This technique provides a new way of probing leukocyte structure with potential applications towards detecting cellular abnormalities and diseased cells at the single cell level.

  4. High resolution imaging of surface patterns of single bacterial cells

    Energy Technology Data Exchange (ETDEWEB)

    Greif, Dominik; Wesner, Daniel [Experimental Biophysics and Applied Nanoscience, Bielefeld University, Universitaetsstrasse 25, 33615 Bielefeld (Germany); Regtmeier, Jan, E-mail: jan.regtmeier@physik.uni-bielefeld.de [Experimental Biophysics and Applied Nanoscience, Bielefeld University, Universitaetsstrasse 25, 33615 Bielefeld (Germany); Anselmetti, Dario [Experimental Biophysics and Applied Nanoscience, Bielefeld University, Universitaetsstrasse 25, 33615 Bielefeld (Germany)

    2010-09-15

    We systematically studied the origin of surface patterns observed on single Sinorhizobium meliloti bacterial cells by comparing the complementary techniques of atomic force microscopy (AFM) and scanning electron microscopy (SEM). Conditions ranged from living bacteria in liquid to fixed bacteria in high vacuum. Stepwise, we applied different sample modifications (fixation, drying, metal coating, etc.) and characterized the observed surface patterns. A detailed analysis revealed that the surface structures with wrinkled protrusions in SEM images were not generated de novo but most likely evolved from similar, naturally present structures on the surface of living bacteria. The influence of osmotic stress on the surface structure of living cells was evaluated, as was the contribution of exopolysaccharide and lipopolysaccharide (LPS), by imaging two mutant strains of the bacterium under native conditions. AFM images of living bacteria in culture medium exhibited surface structures the size of single proteins, emphasizing the usefulness of AFM for high resolution cell imaging.

  5. High resolution ultrasound and photoacoustic imaging of single cells

    Directory of Open Access Journals (Sweden)

    Eric M. Strohm

    2016-03-01

    Full Text Available High resolution ultrasound and photoacoustic images of stained neutrophils, lymphocytes and monocytes from a blood smear were acquired using a combined acoustic/photoacoustic microscope. Photoacoustic images were created using a pulsed 532 nm laser that was coupled to a single mode fiber to produce output wavelengths from 532 nm to 620 nm via stimulated Raman scattering. The excitation wavelength was selected using optical filters and focused onto the sample using a 20× objective. A 1000 MHz transducer was co-aligned with the laser spot and used for ultrasound and photoacoustic images, enabling micrometer resolution with both modalities. The different cell types could be easily identified due to variations in contrast within the acoustic and photoacoustic images. This technique provides a new way of probing leukocyte structure with potential applications towards detecting cellular abnormalities and diseased cells at the single cell level.

  6. Molecular imaging of cell-mediated cancer immunotherapy.

    Science.gov (United States)

    Lucignani, Giovanni; Ottobrini, Luisa; Martelli, Cristina; Rescigno, Maria; Clerici, Mario

    2006-09-01

    New strategies based on the activation of a patient's immune response are being sought to complement present conventional exogenous cancer therapies. Elucidating the trafficking pathways of immune cells in vivo, together with their migratory properties in relation to their differentiation and activation status, is useful for understanding how the immune system interacts with cancer. Methods based on tissue sampling to monitor immune responses are inadequate for repeatedly characterizing the responses of the immune system in different organs. A solution to this problem might come from molecular and cellular imaging - a branch of biomedical sciences that combines biotechnology and imaging methods to characterize, in vivo, the molecular and cellular processes involved in normal and pathologic states. The general concepts of noninvasive imaging of targeted cells as well as the technology and probes applied to cell-mediated cancer immunotherapy imaging are outlined in this review.

  7. Using image processing technology and mathematical algorithm in the automatic selection of vocal cord opening and closing images from the larynx endoscopy video.

    Science.gov (United States)

    Kuo, Chung-Feng Jeffrey; Chu, Yueng-Hsiang; Wang, Po-Chun; Lai, Chun-Yu; Chu, Wen-Lin; Leu, Yi-Shing; Wang, Hsing-Won

    2013-12-01

    The human larynx is an important organ for voice production and respiration. The vocal cords are approximated for voice production and opened for breathing. The videolaryngoscope is widely used for vocal cord examination. At present, physicians usually diagnose vocal cord diseases by manually selecting the image in which the vocal cords open to the largest extent (abduction), thus maximally exposing any vocal cord lesion. On the other hand, the severity of diseases such as vocal cord palsy or atrophic vocal cords depends largely on the image in which the vocal cords close to the smallest extent (adduction). Therefore, disease can be assessed from the maximum-abduction image, while the seriousness of a breathy voice is closely correlated with the residual gap between the vocal cords at maximum adduction. The aim of this study was to design an automatic vocal cord image selection system to improve the conventional manual selection process by physicians and enhance diagnostic efficiency. In addition, because the examination process produces unwanted blurry images caused by human factors, as well as non-vocal-cord images, texture analysis based on image entropy was added in this study to establish a screening and elimination system that effectively enhances the accuracy of selecting the maximum-adduction image.
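
    A minimal sketch of the entropy-based screening idea is given below; it is an assumption of how such a filter could look, not the authors' exact pipeline, and the thresholds are illustrative values that would need tuning on real laryngoscope footage.

```python
import numpy as np

def frame_entropy(gray_frame: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy of the grey-level histogram of one 8-bit video frame."""
    hist, _ = np.histogram(gray_frame, bins=bins, range=(0, 255))
    p = hist.astype(float) / hist.sum()
    p = p[p > 0]                       # ignore empty bins
    return float(-(p * np.log2(p)).sum())

def keep_frame(gray_frame: np.ndarray, lo: float = 4.0, hi: float = 7.5) -> bool:
    """Accept frames whose entropy lies in a plausible range; defocused or
    washed-out frames tend to fall outside it (thresholds are assumptions)."""
    return lo <= frame_entropy(gray_frame) <= hi

# usage: low-entropy (blurry, glare-dominated) frames are discarded before
# the opening/closing extent of the glottis is measured.
```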

  8. Interactive Video, The Next Step

    Science.gov (United States)

    Strong, L. R.; Wold-Brennon, R.; Cooper, S. K.; Brinkhuis, D.

    2012-12-01

    Video has the ingredients to reach us emotionally - with amazing images, enthusiastic interviews, music, and video game-like animations - and it's emotion that motivates us to learn more about our new interest. However, watching video is usually passive. New web-based technology is expanding and enhancing the video experience, creating opportunities to use video with more direct interaction. This talk will look at an Education and Outreach team's experience producing video-centric curriculum using innovative interactive media tools from TED-Ed and FlixMaster. The Consortium for Ocean Leadership's Deep Earth Academy has partnered with the Center for Dark Energy Biosphere Investigations (C-DEBI) to send educators and a video producer aboard three deep sea research expeditions to the Juan de Fuca plate to install and service sub-seafloor observatories. This collaboration between teachers, students, scientists and media producers has proved a productive confluence, providing new ways of understanding both ground-breaking science and the process of science itself - by experimenting with new ways to use multimedia during ocean-going expeditions and developing curriculum and other projects post-cruise.

  9. Deblocking of mobile stereo video

    Science.gov (United States)

    Azzari, Lucio; Gotchev, Atanas; Egiazarian, Karen

    2012-02-01

    Most candidate methods for compression of mobile stereo video apply block-transform-based compression following the H.264 standard, with quantization of transform coefficients driven by a quantization parameter (QP). The compression ratio and the resulting bit rate are directly determined by the QP level, and high compression is achieved at the price of visually noticeable blocking artifacts. Previous studies on the perceived quality of mobile stereo video have revealed that blocking artifacts are the most annoying and most influential in the acceptance or rejection of mobile stereo video and can even completely cancel the 3D effect and the corresponding quality added value. In this work, we address the problem of deblocking mobile stereo video. We modify a powerful non-local transform-domain collaborative filtering method originally developed for denoising of images and video. The method groups similar block patches residing in the spatial and temporal vicinity of a reference block and filters them collaboratively in a suitable transform domain. We study the most suitable way of finding similar patches in both channels of the stereo video and suggest a hybrid four-dimensional transform to process the collected, synchronized (stereo) volumes of grouped blocks. The results benefit from the additional correlation available between the left and right channels of the stereo video. Furthermore, additional sharpening is applied through embedded alpha-rooting in the transform domain, which improves the visual appearance of the deblocked frames.
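
    The alpha-rooting step mentioned at the end can be illustrated in isolation. The sketch below is a standalone assumption using NumPy, with the 2-D DFT standing in for the paper's hybrid collaborative transform and an assumed 8-bit intensity range; it only shows the basic coefficient-magnitude rooting operation.

```python
import numpy as np

def alpha_rooting(image: np.ndarray, alpha: float = 0.9) -> np.ndarray:
    """Sharpen by raising transform-coefficient magnitudes to the power alpha
    (0 < alpha < 1) while keeping their phase."""
    F = np.fft.fft2(image.astype(float))
    mag, phase = np.abs(F), np.angle(F)
    mag_max = mag.max()
    # small coefficients (mostly high frequencies) are boosted relative to
    # large ones, which sharpens edges in the reconstructed frame
    mag_sharp = mag_max * (mag / mag_max) ** alpha
    sharp = np.fft.ifft2(mag_sharp * np.exp(1j * phase)).real
    return np.clip(sharp, 0, 255)   # assumes an 8-bit frame
```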

  10. Research on a Method for Sharpening Dynamic Images in Remote Network Video Education

    Institute of Scientific and Technical Information of China (English)

    巫桂梅

    2012-01-01

    This paper studies the problem of transmitting clear dynamic multi-frame images in remote network video education. In network video education, the two parties are located far apart, and compressing the image information severely reduces image definition, leaving the images with a large amount of random multidimensional noise whose characteristic properties are diverse. When applied to remotely transmitted images, traditional image denoising methods struggle to establish dynamic thresholds for such multidimensional noise, so denoising performs poorly and the transmitted images are distorted. To solve this problem, a dynamic image sharpening method for remote network video education based on the discrete wavelet transform is proposed. The discrete wavelet transform is used to denoise the images and remove interference caused by external factors, and histogram equalization is then applied to the video frames to sharpen the dynamic images. Experimental results show that the algorithm improves the definition of dynamic images in remote network video education.
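
    A minimal sketch of the two processing steps the abstract combines is shown below, assuming PyWavelets and NumPy are available; the wavelet family, decomposition level, and threshold value are illustrative assumptions rather than the paper's parameters.

```python
import numpy as np
import pywt

def dwt_denoise(img: np.ndarray, wavelet: str = "db4", level: int = 2,
                thresh: float = 10.0) -> np.ndarray:
    """Soft-threshold the detail sub-bands of a 2-D DWT; keep the approximation."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    denoised = [coeffs[0]]
    for details in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(d, thresh, mode="soft")
                              for d in details))
    return pywt.waverec2(denoised, wavelet)

def hist_equalize(img: np.ndarray) -> np.ndarray:
    """Histogram equalization of an 8-bit image via the cumulative histogram."""
    img8 = np.clip(img, 0, 255).astype(np.uint8)
    hist, _ = np.histogram(img8, bins=256, range=(0, 255))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    return cdf[img8].astype(np.uint8)

def sharpen_frame(frame: np.ndarray) -> np.ndarray:
    """Denoise a compressed video frame, then stretch its contrast."""
    return hist_equalize(dwt_denoise(frame))
```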

  11. Establishing an appropriate mode of comparison for measuring the performance of marbling score output from video image analysis beef carcass grading systems.

    Science.gov (United States)

    Moore, C B; Bass, P D; Green, M D; Chapman, P L; O'Connor, M E; Yates, L D; Scanga, J A; Tatum, J D; Smith, G C; Belk, K E

    2010-07-01

    A beef carcass instrument grading system that improves accuracy and consistency of marbling score (MS) evaluation would have the potential to advance value-based marketing efforts and reduce disparity in quality grading among USDA graders, shifts, and plants. The objectives of this study were to use output data from the Video Image Analysis-Computer Vision System (VIA-CVS, Research Management Systems Inc., Fort Collins, CO) to develop an appropriate method by which performance of video image analysis MS output could be evaluated for accuracy, precision, and repeatability for purposes of seeking official USDA approval for using an instrument in commerce to augment assessment of quality grade, and to use the developed standards to gain approval for VIA-CVS to assist USDA personnel in assigning official beef carcass MS. An initial MS output algorithm was developed (phase I) for the VIA-CVS before 2 separate preliminary instrument evaluation trials (phases II and III) were conducted. During phases II and III, a 3-member panel of USDA expert graders independently assigned MS to 1,068 and 1,242 stationary carcasses, respectively. Mean expert MS was calculated for each carcass. Additionally, a separate 3-member USDA expert panel developed a consensus MS for each carcass in phase III. In phase II, VIA-CVS stationary triple-placement and triple-trigger instrument repeatability values (n = 262 and 260, respectively), measured as the percentage of total variance explained by carcasses, were 99.9 and 99.8%, respectively. In phases II and III, 95% of carcasses were assigned expert MS for which differences between individual expert MS, and for which the consensus MS in phase III only, was < or = 96 MS units. Two differing approaches to simple regression analysis, as well as a separate method-comparability analysis that accommodates error in both dependent and independent variables, were used to assess accuracy and precision of instrument MS predictions vs. mean expert MS. Method

  12. Quantitative volumetric Raman imaging of three dimensional cell cultures

    KAUST Repository

    Kallepitis, Charalambos

    2017-03-22

    The ability to simultaneously image multiple biomolecules in biologically relevant three-dimensional (3D) cell culture environments would contribute greatly to the understanding of complex cellular mechanisms and cell–material interactions. Here, we present a computational framework for label-free quantitative volumetric Raman imaging (qVRI). We apply qVRI to a selection of biological systems: human pluripotent stem cells with their cardiac derivatives, monocytes and monocyte-derived macrophages in conventional cell culture systems and mesenchymal stem cells inside biomimetic hydrogels that supplied a 3D cell culture environment. We demonstrate visualization and quantification of fine details in cell shape, cytoplasm, nucleus, lipid bodies and cytoskeletal structures in 3D with unprecedented biomolecular specificity for vibrational microspectroscopy.

  13. Multimodal nonlinear imaging of Arabidopsis thaliana root cells

    Science.gov (United States)

    Jang, Bumjoon; Lee, Sung-Ho; Woo, Sooah; Park, Jong-Hyun; Lee, Myeong Min; Park, Seung-Han

    2017-07-01

    Nonlinear optical microscopy has enabled the possibility to explore inside living organisms. It utilizes ultrashort laser pulses with long wavelengths (greater than 800 nm). Ultrashort pulses produce the high peak power needed to induce nonlinear optical phenomena such as two-photon excitation fluorescence (TPEF) and harmonic generation in the medium while maintaining relatively low average energy per area. In plant developmental biology, confocal microscopy has been widely used for plant cell imaging since the development of biological fluorescence labels in the mid-1990s. However, fluorescence labeling itself affects the sample, and the sample deviates from its intact condition, especially when the entire cell is labeled. In this work, we report dynamic images of Arabidopsis thaliana root cells. This demonstrates that multimodal nonlinear optical microscopy is an effective tool for long-term plant cell imaging.

  14. Live-cell imaging of mammalian RNAs with Spinach2

    Science.gov (United States)

    Strack, Rita L.; Jaffrey, Samie R.

    2015-01-01

    The ability to monitor RNAs of interest in living cells is crucial to understanding the function, dynamics, and regulation of this important class of molecules. In recent years, numerous strategies have been developed with the goal of imaging individual RNAs of interest in living cells, each with their own advantages and limitations. This chapter provides an overview of current methods of live-cell RNA imaging, including a detailed discussion of genetically encoded strategies for labeling RNAs in mammalian cells. This chapter then focuses on the development and use of “RNA mimics of GFP” or Spinach technology for tagging mammalian RNAs, and includes a detailed protocol for imaging 5S and CGG60 RNA with the recently described Spinach2 tag. PMID:25605384

  15. Imaging of blood cells based on snapshot Hyper-Spectral Imaging systems

    Science.gov (United States)

    Robison, Christopher J.; Kolanko, Christopher; Bourlai, Thirimachos; Dawson, Jeremy M.

    2015-05-01

    Snapshot hyperspectral imaging systems are capable of capturing several spectral bands simultaneously, offering co-registered images of a target. With appropriate optics, these systems are potentially able to image blood cells in vivo as they flow through a vessel, eliminating the need for a blood draw and sample staining. Our group has evaluated the capability of a commercial snapshot hyperspectral imaging system, the Arrow system from Rebellion Photonics, in differentiating between white and red blood cells on unstained blood smear slides. We evaluated the imaging capabilities of this hyperspectral camera attached to a microscope at varying objective powers and illumination intensities. Hyperspectral data consisting of 25 spectral bands of 443 × 313 pixels with ~3 nm spacing were captured over the range of 419 to 494 nm. Open-source hyperspectral data cube analysis tools, used primarily in Geographic Information Systems (GIS) applications, indicate that white blood cell features are most prominent in the 428-442 nm band for blood samples viewed under 20× and 50× magnification over a varying range of illumination intensities. These images could potentially be used in subsequent automated white blood cell segmentation and counting algorithms for performing in vivo white blood cell counting.
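
    As a small illustrative sketch of working with such a cube, the NumPy snippet below averages the 428-442 nm bands highlighted in the abstract; the array shapes, wavelength vector, and random stand-in data are assumptions for demonstration only.

```python
import numpy as np

def band_slice(cube: np.ndarray, wavelengths: np.ndarray,
               lo_nm: float = 428.0, hi_nm: float = 442.0) -> np.ndarray:
    """cube: (rows, cols, n_bands); wavelengths: (n_bands,) band centres in nm.
    Returns the mean image over the selected spectral window."""
    sel = (wavelengths >= lo_nm) & (wavelengths <= hi_nm)
    return cube[:, :, sel].mean(axis=2)

# 25 bands spanning 419-494 nm (~3 nm spacing), as described in the abstract
wavelengths = np.linspace(419.0, 494.0, 25)
cube = np.random.rand(443, 313, wavelengths.size)   # stand-in data
wbc_band_image = band_slice(cube, wavelengths)       # input to WBC segmentation
```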

  16. In vivo imaging of cancer cells with electroporation of quantum dots and multispectral imaging

    Science.gov (United States)

    Yoo, Jung Sun; Won, Nayoun; Kim, Hong Bae; Bang, Jiwon; Kim, Sungjee; Ahn, Saeyoung; Soh, Kwang-Sup

    2010-06-01

    Our understanding of the dissemination and growth of cancer cells is limited by our inability to follow this process long-term in vivo. Fluorescence molecular imaging has the potential to track cancer cells with high contrast and sensitivity in living animals. For this purpose, intracellular delivery of near-infrared fluorescence quantum dots (QDs) by electroporation offers considerable advantages over organic fluorophores and other cell tagging methods. In this research we developed a multispectral imaging system that could eliminate two major parameters compromising in vivo fluorescence imaging performance, i.e., variations in the tissue optical properties and tissue autofluorescence. We demonstrated that electroporation of QDs and multispectral imaging allowed in vivo assessment of cancer development and progression in the xenograft mouse tumor model for more than 1 month, providing a powerful means to learn more about the biology of cancer and metastasis.

  17. Intellectual Video Filming

    DEFF Research Database (Denmark)

    Juel, Henrik

    Like everyone else, university students of the humanities are quite used to watching Hollywood productions and professional TV. It requires some didactic effort to redirect their eyes and ears away from the conventional mainstream style and on to new and challenging ways of using the film medium...... it seems vital that students, scholars and intellectuals begin to utilize the enormous potentials of communication and reflection inherent in the production of moving images and sound. At Roskilde University in Denmark we have a remarkable tradition of teaching documentary, video essays and video...... communication as project-oriented group work. We also welcome international students for this unique learning experience combining traditional intellectual virtues with experimental aesthetics and modern media. The paper will present the aims, methods and results of this teaching and discuss lines of future...

  18. New imaging probes to track cell fate: reporter genes in stem cell research.

    Science.gov (United States)

    Jurgielewicz, Piotr; Harmsen, Stefan; Wei, Elizabeth; Bachmann, Michael H; Ting, Richard; Aras, Omer

    2017-07-03

    Cell fate is a concept used to describe the differentiation and development of a cell in its organismal context over time. It is important in the field of regenerative medicine, where stem cell therapy holds much promise but is limited by our ability to assess its efficacy, which is mainly due to the inability to monitor what happens to the cells upon engraftment to the damaged tissue. Currently, several imaging modalities can be used to track cells in the clinical setting; however, they do not satisfy many of the criteria necessary to accurately assess several aspects of cell fate. In recent years, reporter genes have become a popular option for tracking transplanted cells, via various imaging modalities in small mammalian animal models. This review article examines the reporter gene strategies used in imaging modalities such as MRI, SPECT/PET, Optoacoustic and Bioluminescence Imaging. Strengths and limitations of the use of reporter genes in each modality are discussed.

  19. A spatiotemporal decomposition strategy for personal home video management

    Science.gov (United States)

    Yi, Haoran; Kozintsev, Igor; Polito, Marzia; Wu, Yi; Bouguet, Jean-Yves; Nefian, Ara; Dulong, Carole

    2007-01-01

    With the advent and proliferation of low-cost, high-performance digital video recorder devices, an increasing number of personal home video clips are recorded and stored by consumers. Compared to image data, video data is larger in size and richer in multimedia content. Efficient access to video content is expected to be more challenging than image mining. Previously, we developed a content-based image retrieval system and a benchmarking framework for personal images. In this paper, we extend our personal image retrieval system to include personal home video clips. A possible initial solution to video mining is to represent video clips by a set of key frames extracted from them, thus converting the problem into an image search one. Here we report that a careful selection of key frames may improve retrieval accuracy. However, because video also has a temporal dimension, its key frame representation is inherently limited. The use of temporal information can give a better representation of video content at the semantic object and concept levels than an image-only representation. In this paper we propose a bottom-up framework that combines interest point tracking, image segmentation, and motion-shape factorization to decompose the video into spatiotemporal regions. We show an example application of activity concept detection using the trajectories extracted from the spatiotemporal regions. The proposed approach shows good potential for concise representation and indexing of objects and their motion in real-life consumer video.
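
    For the key-frame selection the abstract mentions, one common heuristic (an assumption here, not the authors' algorithm) is to keep a frame whenever its colour histogram differs sufficiently from the last kept frame, as in the sketch below.

```python
import numpy as np

def frame_hist(frame: np.ndarray, bins: int = 16) -> np.ndarray:
    """Normalised joint histogram over the three colour channels of an RGB frame."""
    h, _ = np.histogramdd(frame.reshape(-1, 3), bins=(bins, bins, bins),
                          range=((0, 255),) * 3)
    return h.ravel() / h.sum()

def select_key_frames(frames, threshold: float = 0.3):
    """frames: iterable of HxWx3 uint8 arrays. Returns indices of key frames."""
    keys, last_hist = [], None
    for i, frame in enumerate(frames):
        h = frame_hist(frame)
        # total-variation distance to the last kept frame's histogram
        if last_hist is None or 0.5 * np.abs(h - last_hist).sum() > threshold:
            keys.append(i)
            last_hist = h
    return keys
```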

  20. Epistemic Authority, Lies, and Video

    DEFF Research Database (Denmark)

    Andersen, Rune Saugmann

    2013-01-01

    This article analyses how videos of violent protests become politically powerful arguments able to intervene in debates about security. It does so by looking at a series of videos taken by police authorities and protesters during street battles in Copenhagen in August 2009, when protesters oppose... how both police and protesters enact strategies that condition the possibility for images to figure in and impact post-conflict debate, the article explores how both governance and resistance are currently constituted by means of images. It ultimately considers what this means in terms...

  1. MATRIX BASED INDEXING TECHNIQUE FOR VIDEO DATA

    OpenAIRE

    2013-01-01

    Due to the increasing usage of media, video plays a central role as it supports various applications. Video is a particular medium that contains a complex collection of objects such as audio, motion, text, color, and pictures. Due to the rapid growth of this information, video indexing is mandatory for fast and effective retrieval. Many current indexing techniques fail to extract the needed image from the stored data set based on the user's query. Urgent attention in the fi...

  2. Video sensor with range measurement capability

    Science.gov (United States)

    Briscoe, Jeri M. (Inventor); Corder, Eric L. (Inventor); Howard, Richard T. (Inventor); Broderick, David J. (Inventor)

    2008-01-01

    A video sensor device is provided which incorporates a rangefinder function. The device includes a single video camera and a fixed laser spaced a predetermined distance from the camera for, when activated, producing a laser beam. A diffractive optic element divides the beam so that multiple light spots are produced on a target object. A processor calculates the range to the object based on the known spacing and angles determined from the light spots on the video images produced by the camera.
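
    A minimal sketch of the triangulation such a device implies is shown below; the pinhole-model relation R = f·B/d and all numeric values are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def spot_range(pixel_offset: float, baseline_m: float, focal_px: float) -> float:
    """Range (m) from the disparity of one laser spot relative to its image
    position at infinity, using the pinhole relation R = f * B / d."""
    if pixel_offset <= 0:
        return float("inf")
    return focal_px * baseline_m / pixel_offset

# the diffractive optic produces several spots; averaging their estimates
# reduces noise (offsets, baseline and focal length are hypothetical values)
offsets = np.array([14.2, 14.8, 13.9])               # px
ranges = [spot_range(d, baseline_m=0.10, focal_px=1200.0) for d in offsets]
print(f"estimated range: {np.mean(ranges):.2f} m")
```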

  3. Robust frame-dependent video watermarking

    Science.gov (United States)

    Holliman, Matthew J.; Macy, William W.; Yeung, Minerva M.

    2000-05-01

    In this paper, we describe some of the problems associated with watermarking key management, with particular attention to the case of video. We also describe a possible solution to the problem, which is that of image-dependent watermarking, and briefly discuss some of the possible advantages to be gained from adopting such an approach. The paper also presents a simple, efficient means of robustly extracting bits from a video sequence. The algorithm has applications to secure, oblivious video watermark detection.

  4. Tumor-stem cells interactions by fluorescence imaging

    Science.gov (United States)

    Meleshina, Aleksandra V.; Cherkasova, Elena I.; Sergeeva, Ekaterina; Turchin, Ilya V.; Kiseleva, Ekaterina V.; Dashinimaev, Erdem B.; Shirmanova, Marina V.; Zagaynova, Elena V.

    2013-02-01

    Recently, there has been a great deal of interest in investigating the function of stem cells (SC) in tumors. In this study, we examined the «recipient-tumor-fluorescent stem cells» system using in vivo imaging and laser scanning microscopy (LSM). We used human adipose-derived adult stem (ADAS) cells lentivirally transfected with the gene for the fluorescent protein TurboFP635. ADAS cells were administered intravenously or intratumorally into nude mice bearing transplanted HeLa Kyoto (human cervical carcinoma) tumors at different stages of tumor growth (0-8 days). In vivo imaging was performed on an experimental setup for epi-luminescence bioimaging (IAP RAS, Nizhny Novgorod). The imaging results showed localization of fluorophore-tagged stem cells in the spleen on days 5-9 after injection. The sensitivity of the technique may be improved by spectrally separating autofluorescence from the fluorescence of the stem cells. We compared the results of in vivo imaging and confocal laser scanning microscopy (LSM 510 META, Carl Zeiss, Germany). The internal organs of the animals and tumor tissue were investigated. It was shown that with i.v. injection of ADAS, bright fluorescent structures with spectral characteristics corresponding to the TurboFP635 protein accumulate locally in the marrow, lungs, and tumors of the animals. These findings indicate that ADAS cells integrate into the body of an animal with a transplanted tumor and can be identified by fluorescence bioimaging techniques in vivo and ex vivo.

  5. Multimodality Molecular Imaging of Cardiac Cell Transplantation: Part II. In Vivo Imaging of Bone Marrow Stromal Cells in Swine with PET/CT and MR Imaging.

    Science.gov (United States)

    Parashurama, Natesh; Ahn, Byeong-Cheol; Ziv, Keren; Ito, Ken; Paulmurugan, Ramasamy; Willmann, Jürgen K; Chung, Jaehoon; Ikeno, Fumiaki; Swanson, Julia C; Merk, Denis R; Lyons, Jennifer K; Yerushalmi, David; Teramoto, Tomohiko; Kosuge, Hisanori; Dao, Catherine N; Ray, Pritha; Patel, Manishkumar; Chang, Ya-Fang; Mahmoudi, Morteza; Cohen, Jeff Eric; Goldstone, Andrew Brooks; Habte, Frezghi; Bhaumik, Srabani; Yaghoubi, Shahriar; Robbins, Robert C; Dash, Rajesh; Yang, Phillip C; Brinton, Todd J; Yock, Paul G; McConnell, Michael V; Gambhir, Sanjiv S

    2016-09-01

    Purpose: To quantitatively determine the limit of detection of marrow stromal cells (MSC) after cardiac cell therapy (CCT) in swine by using clinical positron emission tomography (PET) reporter gene imaging and magnetic resonance (MR) imaging with cell prelabeling. Materials and Methods: Animal studies were approved by the institutional administrative panel on laboratory animal care. Seven swine received 23 intracardiac cell injections that contained control MSC and cell mixtures of MSC expressing a multimodality triple fusion (TF) reporter gene (MSC-TF) and bearing superparamagnetic iron oxide nanoparticles (NP) (MSC-TF-NP) or NP alone. Clinical MR imaging and PET reporter gene molecular imaging were performed after intravenous injection of the radiotracer fluorine-18-radiolabeled 9-[4-fluoro-3-(hydroxymethyl)butyl]guanine (¹⁸F-FHBG). Linear regression analysis of both MR imaging and PET data and nonlinear regression analysis of PET data were performed, accounting for multiple injections per animal. Results: MR imaging showed a positive correlation between MSC-TF-NP cell number and dephasing (dark) signal (R² = 0.72, P = .0001) and a lower detection limit of at least approximately 1.5 × 10⁷ cells. PET reporter gene imaging demonstrated a significant positive correlation between MSC-TF and target-to-background ratio with the linear model (R² = 0.88, P = .0001, root mean square error = 0.523) and the nonlinear model (R² = 0.99, P = .0001, root mean square error = 0.273) and a lower detection limit of 2.5 × 10⁸ cells. Conclusion: The authors quantitatively determined the limit of detection of MSC after CCT in swine by using clinical PET reporter gene imaging and clinical MR imaging with cell prelabeling. © RSNA, 2016. Online supplemental material is available for this article.

  6. Non-invasive imaging of human embryonic stem cells.

    Science.gov (United States)

    Hong, Hao; Yang, Yunan; Zhang, Yin; Cai, Weibo

    2010-09-01

    Human embryonic stem cells (hESCs) hold tremendous therapeutic potential in a variety of diseases. Over the last decade, non-invasive imaging techniques have proven to be of great value in tracking transplanted hESCs. This review article will briefly summarize the various techniques used for non-invasive imaging of hESCs, which include magnetic resonance imaging (MRI), bioluminescence imaging (BLI), fluorescence, single-photon emission computed tomography (SPECT), positron emission tomography (PET), and multimodality approaches. Although the focus of this review article is primarily on hESCs, the labeling/tracking strategies described here can be readily applied to other (stem) cell types as well. Non-invasive imaging can provide convenient means to monitor hESC survival, proliferation, function, as well as overgrowth (such as teratoma formation), which could not be readily investigated previously. The requirement for hESC tracking techniques depends on the clinical scenario and each imaging technique will have its own niche in preclinical/clinical research. Continued evolution of non-invasive imaging techniques will undoubtedly contribute to significant advances in understanding stem cell biology and mechanisms of action.

  7. Live-cell imaging: new avenues to investigate retinal regeneration.

    Science.gov (United States)

    Lahne, Manuela; Hyde, David R

    2017-08-01

    Sensing and responding to our environment requires functional neurons that act in concert. Neuronal cell loss resulting from degenerative diseases cannot be replaced in humans, causing a functional impairment to integrate and/or respond to sensory cues. In contrast, zebrafish (Danio rerio) possess an endogenous capacity to regenerate lost neurons. Here, we will focus on the processes that lead to neuronal regeneration in the zebrafish retina. Dying retinal neurons release a damage signal, tumor necrosis factor α, which induces the resident radial glia, the Müller glia, to reprogram and re-enter the cell cycle. The Müller glia divide asymmetrically to produce a Müller glia that exits the cell cycle and a neuronal progenitor cell. The arising neuronal progenitor cells undergo several rounds of cell divisions before they migrate to the site of damage to differentiate into the neuronal cell types that were lost. Molecular and immunohistochemical studies have predominantly provided insight into the mechanisms that regulate retinal regeneration. However, many processes during retinal regeneration are dynamic and require live-cell imaging to fully discern the underlying mechanisms. Recently, a multiphoton imaging approach of adult zebrafish retinal cultures was developed. We will discuss the use of live-cell imaging, the currently available tools and those that need to be developed to advance our knowledge on major open questions in the field of retinal regeneration.

  8. Live-cell imaging: new avenues to investigate retinal regeneration

    Directory of Open Access Journals (Sweden)

    Manuela Lahne

    2017-01-01

    Full Text Available Sensing and responding to our environment requires functional neurons that act in concert. Neuronal cell loss resulting from degenerative diseases cannot be replaced in humans, causing a functional impairment to integrate and/or respond to sensory cues. In contrast, zebrafish (Danio rerio) possess an endogenous capacity to regenerate lost neurons. Here, we will focus on the processes that lead to neuronal regeneration in the zebrafish retina. Dying retinal neurons release a damage signal, tumor necrosis factor α, which induces the resident radial glia, the Müller glia, to reprogram and re-enter the cell cycle. The Müller glia divide asymmetrically to produce a Müller glia that exits the cell cycle and a neuronal progenitor cell. The arising neuronal progenitor cells undergo several rounds of cell divisions before they migrate to the site of damage to differentiate into the neuronal cell types that were lost. Molecular and immunohistochemical studies have predominantly provided insight into the mechanisms that regulate retinal regeneration. However, many processes during retinal regeneration are dynamic and require live-cell imaging to fully discern the underlying mechanisms. Recently, a multiphoton imaging approach of adult zebrafish retinal cultures was developed. We will discuss the use of live-cell imaging, the currently available tools and those that need to be developed to advance our knowledge on major open questions in the field of retinal regeneration.

  9. Red blood cell cluster separation from digital images for use in sickle cell disease.

    Science.gov (United States)

    González-Hidalgo, Manuel; Guerrero-Peña, F A; Herold-García, S; Jaume-I-Capó, Antoni; Marrero-Fernández, P D

    2015-07-01

    The study of cell morphology is an important aspect of the diagnosis of some diseases, such as sickle cell disease, because red blood cell deformation is caused by these diseases. Due to the elongated shape of the erythrocyte, ellipse adjustment and concave point detection are widely applied to images of peripheral blood samples, including during the detection of cells that are partially occluded in the clusters generated by the sample preparation process. In the present study, we propose a method for analyzing the shape of erythrocytes in peripheral blood smear samples from sickle cell disease, which uses ellipse adjustments and a new algorithm for detecting notable points. Furthermore, we apply a set of constraints that allow the elimination of significant image preprocessing steps proposed in previous studies. We used three types of images to validate our method: artificial images, which were automatically generated in a random manner using a computer code; real images from peripheral blood smear samples that contained normal and elongated erythrocytes; and synthetic images generated from real isolated cells. Using the proposed method, the efficiency of detecting the two types of objects in the three image types exceeded 99.00%, 98.00%, and 99.35%, respectively. These efficiency levels were superior to the results obtained with previously proposed methods using the same database, which is available at http://erythrocytesidb.uib.es/. This method can be extended to clusters of several cells, and it requires no user input.
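
    As a small illustrative sketch (not the paper's ellipse-adjustment algorithm), elongated erythrocytes can be flagged from a binary cell mask using second-order image moments: the square root of the eigenvalue ratio of the pixel-coordinate covariance approximates the major/minor axis ratio. The threshold below is an assumption to be tuned against labelled smears.

```python
import numpy as np

def elongation(mask: np.ndarray) -> float:
    """mask: 2-D boolean array of a single segmented cell."""
    ys, xs = np.nonzero(mask)
    coords = np.stack([xs, ys]).astype(float)
    cov = np.cov(coords)                       # 2x2 covariance of pixel coords
    evals = np.sort(np.linalg.eigvalsh(cov))   # ascending eigenvalues
    return float(np.sqrt(evals[1] / max(evals[0], 1e-9)))

def looks_sickle(mask: np.ndarray, ratio_threshold: float = 2.0) -> bool:
    """Cells much longer than they are wide are candidates for sickle shape."""
    return elongation(mask) > ratio_threshold
```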

  10. Autofluorescence-based diagnostic UV imaging of tissues and cells

    Science.gov (United States)

    Renkoski, Timothy E.

    Cancer is the second leading cause of death in the United States, and its early diagnosis is critical to improving treatment options and patient outcomes. In autofluorescence (AF) imaging, light of controlled wavelengths is projected onto tissue, absorbed by specific molecules, and re-emitted at longer wavelengths. Images of re-emitted light are used together with spectral information to infer tissue functional information and diagnosis. This dissertation describes AF imaging studies of three different organs using data collected from fresh human surgical specimens. In the ovary study, illumination was at 365 nm, and images were captured at 8 emission wavelengths. Measurements from a multispectral imaging system and fiber optic probe were used to map tissue diagnosis at every image pixel. For the colon and pancreas studies, instrumentation was developed extending AF imaging capability to sub-300 nm excitation. Images excited in the deep UV revealed tryptophan and protein content which are believed to change with disease state. Several excitation wavelength bands from 280 nm to 440 nm were investigated. Microscopic AF images collected in the pancreas study included both cultured and primary cells. Several findings are reported. A method of transforming fiber optic probe spectra for direct comparison with imager spectra was devised. Normalization of AF data by green reflectance data was found useful in correcting hemoglobin absorption. Ratio images, both AF and reflectance, were formulated to highlight growths in the colon. Novel tryptophan AF images were found less useful for colon diagnostics than the new ratio techniques. Microscopic tryptophan AF images produce useful visualization of cellular protein content, but their diagnostic value requires further study.

  11. Challenges in imaging cell surface receptor clusters

    Science.gov (United States)

    Medda, Rebecca; Giske, Arnold; Cavalcanti-Adam, Elisabetta Ada

    2016-01-01

    Super-resolution microscopy offers unique tools for visualizing and resolving cellular structures at the molecular level. STED microscopy is a purely optical method where neither complex sample preparation nor mathematical post-processing is required. Here we present the use of STED microscopy for imaging receptor cluster composition. We use two-color STED to further determine the distribution of two different receptor subunits of the family of receptor serine/threonine kinases in the presence or absence of their ligands. The implications of receptor clustering on the downstream signaling are discussed, and future challenges are also presented.

  12. Copyright Detection System for Videos Using TIRI-DCT Algorithm

    Directory of Open Access Journals (Sweden)

    S. Nirmal

    2012-12-01

    Full Text Available The copyright detection system detects whether a video is copyrighted by extracting the features, or fingerprints, of the video and matching them against the fingerprints of other videos. The system is mainly used for copyright protection of multimedia content. It relies on a fingerprint extraction algorithm, the TIRI-DCT algorithm, followed by an approximate search algorithm, the inverted-file-based similarity search. To determine whether a query video is copyrighted, its feature values are extracted using the fingerprint extraction algorithm, which operates on special images constructed from the video. Each such image represents a segment of the video and contains both temporal and spatial information about that segment; these images are called Temporally Informative Representative Images (TIRIs). The fingerprints of all the videos in the database are extracted and stored in advance. The approximate search algorithm then searches the stored fingerprints for the closest matches to the fingerprint of the query video, and based on the match the query video is determined to be copyrighted or not.
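
    A simplified sketch of TIRI-DCT-style fingerprinting, assumed from the abstract's description rather than the paper's exact parameters, is given below (NumPy plus SciPy's DCT): each segment is collapsed into a representative image by weighted frame averaging, block DCT features are extracted, and a binary fingerprint is formed by thresholding against the median feature value.

```python
import numpy as np
from scipy.fftpack import dct

def make_tiri(frames: np.ndarray, gamma: float = 0.65) -> np.ndarray:
    """frames: (n, H, W) grayscale segment. An exponentially weighted average
    keeps both spatial and some temporal information in one image."""
    n = frames.shape[0]
    w = gamma ** np.arange(n)[::-1]
    w /= w.sum()
    return np.tensordot(w, frames.astype(float), axes=1)

def tiri_fingerprint(tiri: np.ndarray, block: int = 32) -> np.ndarray:
    """Binary fingerprint from one low-frequency DCT coefficient per block."""
    feats = []
    for i in range(0, tiri.shape[0] - block + 1, block):
        for j in range(0, tiri.shape[1] - block + 1, block):
            b = tiri[i:i + block, j:j + block]
            d = dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
            feats.append(d[0, 1])
    feats = np.array(feats)
    return (feats > np.median(feats)).astype(np.uint8)
```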

  13. Maximization of imaging resolution in optical wireless sensor/lab-on-chip/SoC networks with solar cells.

    Science.gov (United States)

    Arnon, Shlomi

    2010-09-01

    The availability of sophisticated and low-cost hardware on a single chip, for example, CMOS cameras, CPUs, DSPs, communication transceivers, optics, microfluidics, and micromechanics, has fostered the development of system-on-chip (SoC) technology, such as lab-on-chip or wireless multimedia sensor networks (WMSNs). WMSNs are networks of wirelessly interconnected devices on a chip that are able to ubiquitously retrieve multimedia content such as video from the environment and transfer it to a central location for additional processing. In this paper, we study WMSNs that include an optical wireless communication transceiver that uses light to transmit the information. One of the primary challenges in SoC design is to attain adequate resources, such as energy harvesting using solar cells, in addition to imaging and communication capabilities, all within stringent spatial limitations while maximizing system performance. There is an inevitable trade-off between enhancing the imaging resolution at the expense of reduced communication capacity and energy harvesting capabilities, on one hand, and increasing the communication or solar cell area to the detriment of the imaging resolution, on the other. We study these trade-offs, derive a mathematical model to maximize the resolution of the imaging system, and present a numerical example that demonstrates maximum imaging resolution. Our results indicate that an eighth-order polynomial with only two constants provides the required area allocation between the different functionalities.

  14. Indexed Captioned Searchable Videos: A Learning Companion for STEM Coursework

    Science.gov (United States)

    Tuna, Tayfun; Subhlok, Jaspal; Barker, Lecia; Shah, Shishir; Johnson, Olin; Hovey, Christopher

    2016-09-01

    Videos of classroom lectures have proven to be a popular and versatile learning resource. A key shortcoming of the lecture video format is accessing the content of interest hidden in a video. This work meets this challenge with an advanced video framework featuring topical indexing, search, and captioning (ICS videos). Standard optical character recognition (OCR) technology was enhanced with image transformations for extraction of text from video frames to support indexing and search. The images and text on video frames are analyzed to divide lecture videos into topical segments. The ICS video player integrates indexing, search, and captioning in video playback, providing instant access to the content of interest. This video framework has been used by more than 70 courses in a variety of STEM disciplines and assessed by more than 4000 students. Results presented from the surveys demonstrate the value of the videos as a learning resource and the role played by videos in a student's learning process. Survey results also establish the value of indexing and search features in a video platform for education. This paper reports on the development and evaluation of the ICS videos framework and over 5 years of usage experience in several STEM courses.
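
    A minimal sketch of the OCR-plus-preprocessing step described here is shown below, assuming pytesseract and Pillow are available; the crude global threshold is an illustrative assumption, not the authors' exact transformation chain.

```python
import numpy as np
from PIL import Image
import pytesseract

def frame_text(frame_rgb: np.ndarray) -> str:
    """Binarise a video frame to boost slide-text contrast, then run OCR."""
    gray = np.asarray(Image.fromarray(frame_rgb).convert("L"))
    binary = (gray > gray.mean()).astype(np.uint8) * 255   # global threshold
    return pytesseract.image_to_string(Image.fromarray(binary))

# the extracted strings would feed the index/search side of an ICS-style player
```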

  15. Indexed Captioned Searchable Videos: A Learning Companion for STEM Coursework

    Science.gov (United States)

    Tuna, Tayfun; Subhlok, Jaspal; Barker, Lecia; Shah, Shishir; Johnson, Olin; Hovey, Christopher

    2017-02-01

    Videos of classroom lectures have proven to be a popular and versatile learning resource. A key shortcoming of the lecture video format is accessing the content of interest hidden in a video. This work meets this challenge with an advanced video framework featuring topical indexing, search, and captioning (ICS videos). Standard optical character recognition (OCR) technology was enhanced with image transformations for extraction of text from video frames to support indexing and search. The images and text on video frames are analyzed to divide lecture videos into topical segments. The ICS video player integrates indexing, search, and captioning in video playback, providing instant access to the content of interest. This video framework has been used by more than 70 courses in a variety of STEM disciplines and assessed by more than 4000 students. Results presented from the surveys demonstrate the value of the videos as a learning resource and the role played by videos in a student's learning process. Survey results also establish the value of indexing and search features in a video platform for education. This paper reports on the development and evaluation of the ICS videos framework and over 5 years of usage experience in several STEM courses.

  16. A new morphometric implemented video-image analysis protocol for the study of social modulation in activity rhythms of marine organisms.

    Science.gov (United States)

    Menesatti, Paolo; Aguzzi, Jacopo; Costa, Corrado; García, José Antonio; Sardà, Francesc

    2009-10-30

    Video-image analysis can be an efficient tool for microcosm experiments portraying the modulation of individual behaviour based on sociality. The Norway lobster, Nephrops norvegicus, is a burrowing decapod whose commercial capture by trawling occurs only when animals are engaged in seabed excursions. Emergence behaviour is modulated by the day-night cycle, but a further, still poorly understood modulation occurs upon social interaction. Here, we present a novel automated protocol for tracking the movement of several animals at once based on a multivariate morphometric approach. Four black and white tags were customized according to a precise geometric design. Shape Matching and Complex Fourier Descriptor analyses were used to track tag displacement through consecutive frames in a 7-day experiment under monochromatic blue light (480 nm)-darkness conditions. Shape Matching errors were evaluated in relation to tag geometry. Time series of centroid coordinates in pixels were transformed into centimetres. The FD analysis was slightly less efficient than Shape Matching, although more rapid (i.e. up to 20 times faster). Nocturnal rhythms were reported for all animals. Waveform analysis indicated marked differences in the amplitude of activity phases as proof of interindividual interaction. Total diel activity presented a decrease in the rate of out-of-burrow locomotion as the testing progressed. N. norvegicus is a nocturnal species, and the present observations support the efficiency and fidelity of our automated tracking system.
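
    A minimal sketch of the Complex Fourier Descriptor step used for tag matching is shown below: the tag outline is treated as a complex signal x + iy, and its low-order Fourier coefficients, normalised for translation, scale, and rotation, serve as a compact shape signature. The normalisation conventions are common choices assumed here, not taken from the paper.

```python
import numpy as np

def fourier_descriptors(contour_xy: np.ndarray, n_coeffs: int = 10) -> np.ndarray:
    """contour_xy: (N, 2) ordered boundary points of a detected tag."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
    F = np.fft.fft(z)
    F[0] = 0.0                          # drop DC -> translation invariance
    F = F / np.abs(F[1])                # normalise -> scale invariance
    return np.abs(F[1:n_coeffs + 1])    # magnitudes only -> rotation invariance

def descriptor_distance(d1: np.ndarray, d2: np.ndarray) -> float:
    """Smaller distance = better match between a candidate blob and a tag model."""
    return float(np.linalg.norm(d1 - d2))
```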

  17. Quantitative phase imaging for cell culture quality control.

    Science.gov (United States)

    Kastl, Lena; Isbach, Michael; Dirksen, Dieter; Schnekenburger, Jürgen; Kemper, Björn

    2017-05-01

    The potential of quantitative phase imaging (QPI) with digital holographic microscopy (DHM) for quantification of cell culture quality was explored. Label-free QPI of detached single cells in suspension was performed by Michelson interferometer-based self-interference DHM. Two pancreatic tumor cell lines were chosen as cellular models and analyzed for refractive index, volume, and dry mass under varying culture conditions. First, adequate cell numbers for reliable statistics were identified. Then, to characterize the performance and reproducibility of the method, we compared results from independently repeated measurements and quantified the cellular response to osmolality changes of the cell culture medium. Finally, it was demonstrated that the evaluation of QPI images allows the extraction of absolute cell parameters which are related to cell layer confluence states. In summary, the results show that QPI enables label-free imaging cytometry, which provides novel complementary integral biophysical data sets for sophisticated quantification of cell culture quality with minimized sample preparation. © 2017 International Society for Advancement of Cytometry.
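
    As a short illustration of how dry mass is commonly obtained from a phase image, the snippet below applies the standard QPI relation m = λ/(2πα) · Σ Δφ · pixel_area, with α ≈ 0.19 µm³/pg as the specific refraction increment of protein; this is textbook QPI practice, not a calculation quoted from this paper, and all numeric defaults are assumptions.

```python
import numpy as np

def dry_mass_pg(phase_rad: np.ndarray, wavelength_um: float = 0.532,
                pixel_area_um2: float = 0.01,
                alpha_um3_per_pg: float = 0.19) -> float:
    """Cellular dry mass (pg) from a background-corrected phase map (radians)."""
    return float(wavelength_um / (2 * np.pi * alpha_um3_per_pg)
                 * phase_rad.sum() * pixel_area_um2)
```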

  18. Electrical impedance tomographic imaging of a single cell electroporation.

    Science.gov (United States)

    Meir, Arie; Rubinsky, Boris

    2014-06-01

    A living cell placed in a high strength electric field, can undergo a process known as electroporation. It is believed that during electroporation nano-scale defects (pores) occur in the membrane of the cell, causing dramatic changes to the permeability of its membrane. Electroporation is an important technique in biotechnology and medicine and numerous methods are being developed to improve the understanding and use of the technology. We propose to extend the toolbox available for studying electroporation by generating impedance distribution images of the cell as it undergoes electroporation using Electrical Impedance Tomography (EIT). To investigate the feasibility of this concept, we develop a mathematical model of the process of electroporation in a single cell and of EIT of the process and show simulation results of a computer-based finite element model (FEM). Our work is an attempt to develop a new imaging tool for visualizing electroporation in a single cell, offering a different temporal and spatial resolution compared to the state of the art, which includes bulk measurements of electrical properties during single cell electroporation, patch clamp and voltage clamp measurement in single cells and optical imaging with colorimetric dyes during single cell electroporation. This paper is a preliminary theoretic feasibility study.

  19. Redefining circulating tumor cells by image processing

    NARCIS (Netherlands)

    Ligthart, Sjoerd

    2012-01-01

    Circulating tumor cells (CTC) in the blood of patients with metastatic carcinomas are associated with poor survival and can be used to guide therapy. However, CTC are very heterogeneous in size and shape, and are present at very low frequencies. Missing or misjudging a few events may have great

  20. Redefining circulating tumor cells by image processing

    NARCIS (Netherlands)

    Ligthart, S.T.

    2012-01-01

    Circulating tumor cells (CTC) in the blood of patients with metastatic carcinomas are associated with poor survival and can be used to guide therapy. However, CTC are very heterogeneous in size and shape, and are present at very low frequencies. Missing or misjudging a few events may have great cons